Getting started

Quick start

Connect any AI workflow to Tracira in minutes — you need a token and one HTTP call.

1

Create a free account

Sign up at tracira.com/login. Create your first workspace when prompted.
2

Copy your webhook token

Go to Integrations and copy your token — it starts with wh-.
3

Define a rule

Go to Rules and write a plain-English rule — e.g. “Flag if the response mentions a discount over 20%”.
4

Send your first output

POST the AI output to the webhook. You get a log ID back immediately; evaluation runs in the background.
Pass sync: true to receive the verdict inline — useful when you need to gate or continue a workflow based on the result.
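As a starting point, here is a minimal sketch of that first call using only the Python standard library. The endpoint, headers, and the `project`/`output`/`sync` fields come from the API reference below; the token and project name are placeholders.

```python
import json
import urllib.request

TOKEN = "wh-YOUR_TOKEN"  # webhook token from Integrations

# Minimal payload: project and output are required.
# sync: True holds the connection open and returns the verdict inline.
payload = {
    "project": "support-bot",  # placeholder project name (auto-created on first use)
    "output": "You qualify for a 25% discount on your next order.",
    "sync": True,
}

def send_log(payload: dict) -> dict:
    """POST one AI output to the Tracira webhook and return the parsed JSON."""
    req = urllib.request.Request(
        "https://tracira.com/api/logs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = send_log(payload)
# With sync enabled, result["status"] is "pass" or "flagged".
```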

Getting started

Rate limits

The webhook endpoint enforces a sliding-window rate limit per workspace token to protect service quality for all users.

| Endpoint | Limit | Window | Scope |
| --- | --- | --- | --- |
| POST /api/logs | 60 requests | 1 minute | Per workspace token |

When the limit is exceeded the API returns 429 Too Many Requests with a Retry-After: 60 header indicating how many seconds to wait before retrying.

The limit is enforced globally across all server instances using a sliding-window algorithm — it is not per-server.

For high-throughput pipelines, omit sync: true so requests return immediately with status: "pending". Sync evaluation holds the connection open and counts against the same limit.
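A client that respects the `Retry-After` header might look like the sketch below. The `send` callable stands in for whatever HTTP call your pipeline makes; only the 429 status code and the `Retry-After: 60` behavior come from the docs above.

```python
import time

def post_with_retry(send, payload, max_attempts=3):
    """Call send(payload) -> (status_code, headers, body) and back off
    on 429 using the Retry-After header (seconds). Sketch only; `send`
    is whatever HTTP function your pipeline already uses."""
    for attempt in range(max_attempts):
        status, headers, body = send(payload)
        if status != 429:
            return body
        # The webhook returns Retry-After: 60 when the window is exhausted.
        wait = int(headers.get("Retry-After", "60"))
        if attempt < max_attempts - 1:
            time.sleep(wait)
    raise RuntimeError(f"rate limited after {max_attempts} attempts")
```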

Integrations

Make

Tracira has a native Make app — add monitoring to any scenario with a single module, no HTTP setup required.

1

Install the Tracira app for Make

Open the Tracira app invitation link and click Install. This only needs to be done once per Make organization.
2

Add the Log Execution module to your scenario

In your scenario, click + after the module that generates the AI output. Search for Tracira and select Log Execution.
3

Connect your workspace

Click Add to create a connection. When prompted for an API key, paste your webhook token from Integrations in your Tracira workspace (the one starting with wh-).
4

Map the fields

At minimum, map project and output from the previous module. Optional fields:
  • task — groups logs inside the project
  • input — the prompt sent to the AI
  • model, latencyMs, costUsd — for analytics
  • sync — set to true to get the verdict inline (gate mode)
5

Branch on the verdict (gate mode)

When sync is enabled, add a Router after the Tracira module:
  • Route 1 — filter: {{tracira.status}} = pass → continue
  • Route 2 — filter: {{tracira.status}} = flagged → alert or stop
Leave sync blank for fire-and-forget logging (Make continues instantly). Set it to true only when you need to gate the scenario on the evaluation result.

Integrations

n8n

Use n8n's built-in HTTP Request node to send AI outputs to Tracira. Route your workflow based on the verdict using an IF node.

1

Add an HTTP Request node after your AI node

In your workflow, click + after the node that produces the AI output and search for HTTP Request.
2

Configure the node

Set these fields:
  • Method: POST
  • URL: https://tracira.com/api/logs
  • Authentication: Header Auth — Name: Authorization, Value: Bearer YOUR_TOKEN
  • Body Content Type: JSON
3

Set the body

Under Body Parameters, paste the JSON from the code panel. Replace {{ $json.aiOutput }} with an expression pointing to the AI output field from the previous node.
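If you need a starting point, a body like the following works (field names are from the Logs reference below; the `project` and `task` values are placeholders to replace with your own):

```json
{
  "project": "support-bot",
  "task": "reply-draft",
  "output": "{{ $json.aiOutput }}",
  "model": "gpt-4o",
  "sync": true
}
```

Leave out `sync` for fire-and-forget logging; include it only when you branch on the verdict in the next step.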
4

Branch on the result

Add an IF node after the HTTP Request node:
  • Condition: {{ $json.status }} equals pass
  • True branch → continue; False branch → notify or stop
In n8n, enable Always Output Data on the HTTP Request node so the IF node receives the response even on a 4xx (e.g., credits exhausted). This prevents the workflow from halting silently.

Integrations

Any HTTP client

Tracira works with any tool that can make an HTTP POST request — Zapier, Pipedream, custom scripts, or your own server. Here is the complete request format.

1

Get your webhook token

Go to Integrations in your workspace and copy the token that starts with wh-.
2

Send the request

POST to https://tracira.com/api/logs with:
  • Header: Authorization: Bearer YOUR_TOKEN
  • Body: JSON with at minimum project and output
3

Use the verdict

Pass "sync": true to receive the evaluation result immediately in the response. Check `status`: `pass` means all rules passed, `flagged` means at least one rule triggered.
For Zapier or Pipedream: use their built-in HTTP/Webhook action, set Method to POST, add the Authorization header, and paste the JSON body. Then add a conditional step after it to branch on status.

Logs

Submit an AI output for evaluation

Default mode is async and returns immediately with `status: pending`. Pass `sync: true` to wait for the verdict inline. Rate limit: 60 requests per minute per workspace token. Exceeding the limit returns `429` with a `Retry-After: 60` header.
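For async submissions, a polling loop like this sketch can wait for the verdict without holding the original request open. The `fetch` callable stands in for your HTTP GET against the "Poll async log status" endpoint below (its exact path is in your API reference); the `pending` status value comes from the docs.

```python
import time

def wait_for_verdict(fetch, log_id, interval=1.0, timeout=30.0):
    """Poll until the log leaves "pending". fetch(log_id) is your HTTP GET
    against the poll endpoint, returning the log JSON. Sketch only."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        log = fetch(log_id)
        if log.get("status") != "pending":
            return log  # "pass", "flagged", or "error"
        time.sleep(interval)
    raise TimeoutError(f"log {log_id} still pending after {timeout}s")
```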

Body

project (string, required)

Project name. Auto-created on first use.

task (string, optional)

Segments logs within a project. Used for per-task rule targeting and filtering. Auto-created on first use.

output (string, required)

The AI-generated text or content to evaluate. Up to 50,000 characters.

model (string, optional)

The model identifier that produced this output (e.g. `gpt-4o`, `claude-3-5-sonnet`). Stored for filtering and display.

latencyMs (number, optional)

End-to-end latency of the AI call in milliseconds. Stored for tracking and display.

costUsd (number, optional)

Cost of the AI call in US dollars. Stored for cost tracking and display.

input (string | object[], optional)

The prompt or conversation that produced this output. Accepts plain text or an array of multi-modal content parts (text, image_url, audio_url, file_url).

inputText (string, optional)

Plain-text prompt shown alongside the output in the log viewer. Use when `input` is multi-modal.

attachments (object[], optional)

Files attached to the input. Each file is fetched and durably stored during ingestion. 10 MB per file. Accepts `url` or `upload` (base64) sources.

callbackUrl (string, optional)

URL to POST the evaluation result to when it completes.

callbackEvents (string, optional)

Which events trigger the callback. `all` fires on every completion; `flagged_error` only on non-pass outcomes; `decisions` on human approve/reject.

One of: `all`, `flagged_error`, `decisions`, `flagged_error_decisions`, `pass`

metadata (object, optional)

Arbitrary key-value pairs attached to the log. Useful for customer IDs, run metadata, or environment tags.

id (string, optional)

Optional caller-supplied log ID for idempotent writes.

sync (boolean, optional)

When true, waits for evaluation and returns the verdict inline.

confidence (number, optional)

The model's self-reported confidence score (0–1). Stored on the log and usable in `confidence_threshold` rules.

sessionId (string, optional)

Optional run or conversation grouping ID. Equivalent context aliases such as `conversationId`, `threadId`, and `chatId` are also accepted.

subjectId (string, optional)

Optional impacted entity ID. Equivalent aliases such as `customerId`, `recordId`, `ticketId`, and `accountId` are also accepted.

actorId (string, optional)

Optional actor ID. Equivalent aliases such as `userId`, `endUserId`, and `agentId` are also accepted.

timestamp (string, optional)

Optional ISO 8601 timestamp to use as the log date instead of the current server time. Useful for replaying or reprocessing past executions (e.g. `2024-11-15T10:30:00Z`).
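Putting the common fields together, a fuller request body might look like this (all values are illustrative placeholders; field names are from the reference above):

```json
{
  "project": "support-bot",
  "task": "reply-draft",
  "output": "Thanks for reaching out! Your refund has been processed.",
  "input": "Customer asked about a refund for order #1234.",
  "model": "gpt-4o",
  "latencyMs": 1840,
  "costUsd": 0.0042,
  "sessionId": "conv_8731",
  "subjectId": "cust_42",
  "metadata": { "environment": "production" },
  "callbackUrl": "https://example.com/tracira-callback",
  "callbackEvents": "flagged_error",
  "timestamp": "2024-11-15T10:30:00Z"
}
```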

Logs

List logs

Supports webhook-token auth for integrations and session auth for the dashboard. The response key is `executions` for backwards compatibility, but the records are logs.

Query parameters

pending (boolean, query, optional)

When true, only returns undecided flagged or error logs.

status (string, query, optional)

One of: `pending`, `pass`, `flagged`, `error`

project (string, query, optional)
task (string, query, optional)
sessionId (string, query, optional)
subjectId (string, query, optional)
actorId (string, query, optional)
q (string, query, optional)

Searches project, task, model, session, subject, and actor fields.

tags (string, query, optional)

Comma-separated tag list.

from (string, query, optional)
to (string, query, optional)
sortBy (string, query, optional)

One of: `timestamp`, `projectName`, `model`, `status`, `latencyMs`, `costUsd`

sortDir (string, query, optional)

One of: `asc`, `desc`

page (number, query, optional)
limit (number, query, optional)

Logs

Get a single log

Path parameters

id (string, path, required)

Logs

Delete a single log

Path parameters

id (string, path, required)

Logs

Poll async log status

Path parameters

id (string, path, required)

Logs

Approve or reject a log

Path parameters

id (string, path, required)

Body

decision (string, required)

One of: `approved`, `rejected`

Logs

Bulk-approve, bulk-reject, or bulk-delete logs

Body

ids (string[], required)
action (string, required)

One of: `approve`, `reject`, `delete`

Logs

List distinct task names

Returns the union of registered project tasks and task names inferred from historical logs.

Query parameters

project (string, query, optional)

Optional project name filter.

Projects

List projects

No parameters required.

Projects

Create a project

Body

name (string, required)

Projects

Rename, merge, or change icon for a project

Path parameters

id (string, path, required)

Body

name (string, optional)
icon (string, optional)
merge (boolean, optional)

Projects

Delete a project and its logs, rules, and tasks

Path parameters

id (string, path, required)

Projects

List tasks for a project

Path parameters

id (string, path, required)

Projects

Create a task inside a project

Path parameters

id (string, path, required)

Body

name (string, required)

Projects

Delete a task from a project

Deletes the task row and any logs using the same project/task combination.

Path parameters

id (string, path, required)

Query parameters

taskName (string, query, required)

Rules

List rules

No parameters required.

Rules

Create a rule

Body

name (string, required)
type (string, required)

One of: `keyword_required`, `keyword_forbidden`, `regex_match`, `length_limit`, `confidence_threshold`, `llm_judge`, `json_field`, `json_field_keyword_required`, `json_field_keyword_forbidden`

description (string, optional)
value (string, required)
isActive (boolean, optional)
projectId (string, optional)
projectName (string, optional)
taskName (string, optional)
llmJudgeProvider (string, optional)

One of: `anthropic`, `openai`, `openrouter`

llmJudgeModel (string, optional)
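As an illustration, a request body for an LLM-judge rule might look like this. The field names come from the reference above; how `value` is interpreted depends on the rule type, and we assume here that for `llm_judge` it holds the plain-English instruction (the provider, model, and project name are placeholders):

```json
{
  "name": "No discounts over 20%",
  "type": "llm_judge",
  "value": "Flag if the response mentions a discount over 20%",
  "projectName": "support-bot",
  "isActive": true,
  "llmJudgeProvider": "openai",
  "llmJudgeModel": "gpt-4o"
}
```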

Rules

Get a single rule

Path parameters

id (string, path, required)

Rules

Update a rule

Path parameters

id (string, path, required)

Body

name (string, optional)
type (string, optional)

One of: `keyword_required`, `keyword_forbidden`, `regex_match`, `length_limit`, `confidence_threshold`, `llm_judge`, `json_field`, `json_field_keyword_required`, `json_field_keyword_forbidden`

description (string, optional)
value (string, optional)
isActive (boolean, optional)
projectId (string, optional)
projectName (string, optional)
taskName (string, optional)
llmJudgeProvider (string, optional)

One of: `anthropic`, `openai`, `openrouter`

llmJudgeModel (string, optional)

Rules

Delete a rule

Path parameters

id (string, path, required)

Rules

Evaluate one rule against an output without storing a log

Body

output (string, required)
confidence (number, optional)
rule (object, required)

Settings

Get workspace settings and usage

No parameters required.

Settings

Update workspace settings

Body

onboardingDismissed (boolean, optional)
workspaceName (string, optional)
slackWebhookUrl (string, optional)
notifyOnFlag (boolean, optional)
notifyOnFail (boolean, optional)
slackChannelId (string, optional)
notificationEmail (string, optional)
emailOnFlag (boolean, optional)
emailOnFail (boolean, optional)
slackNotificationsEnabled (boolean, optional)
emailNotificationsEnabled (boolean, optional)
webhookEnabled (boolean, optional)
outboundWebhookUrl (string, optional)
outboundWebhookEvents (string[], optional)
estimatedIncidentCostUsd (number, optional)
openaiApiKey (string, optional)
anthropicApiKey (string, optional)
openrouterApiKey (string, optional)
openaiJudgeModel (string, optional)
anthropicJudgeModel (string, optional)
openrouterJudgeModel (string, optional)
defaultJudgeProvider (string, optional)

One of: `anthropic`, `openai`, `openrouter`

color (string, optional)
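For example, a partial update that turns on flag notifications over Slack and email might look like this (field names are from the list above; the webhook URL and address are placeholders):

```json
{
  "slackNotificationsEnabled": true,
  "slackWebhookUrl": "https://hooks.slack.com/services/T000/B000/XXXX",
  "notifyOnFlag": true,
  "emailNotificationsEnabled": true,
  "notificationEmail": "ops@example.com",
  "emailOnFlag": true
}
```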

Settings

Rotate the workspace webhook token

Workspace-admin only. Rate limited to 5 rotations per hour.

No parameters required.

Media

Redirect to a signed media URL

Accepts either the browser session cookie or `x-api-key`. The `key` path parameter is the full media key path, for example `workspaceId/logId/0-file.pdf`.

Path parameters

key (string, path, required)

Query parameters

download (string, query, optional)

Use `1` to request `attachment` disposition instead of inline preview.

One of: `1`

Utilities

Verify a webhook token

No parameters required.

Utilities

List preset model names

Public helper endpoint used by the Make custom app.

No parameters required.

Ready to connect your workflow?

Under 5 minutes. No SDK required — one HTTP call is all it takes.