Test and publish a capability

Run a capability against a real endpoint with sample inputs, inspect the response, and flip it from Draft to Published when everything looks right.

A Draft capability isn’t callable by any specialist. Before you publish it, run a test execution to confirm it actually works against the real endpoint with realistic inputs.

Before you start

  • A capability you’ve already created and saved as a Draft (see Create a capability)
  • Sample input values that represent a real call — a real order number, a real customer email, etc.
  • Access to the external API the capability calls (so you can verify the side effects, if any)

Steps

  1. Open Settings → Capabilities and click your Draft capability.
  2. Click Run Test in the editor toolbar.
  3. Fill in the test input values. The fields match the inputs you declared on the capability.
  4. Click Run Test in the dialog. Atender executes the capability end to end against the real API endpoint.
  5. Inspect the result. The dialog shows:
    • The final response the capability would return to the AI specialist
    • The HTTP request that was sent (expand View Request Details)
    • For canvas-built capabilities, the path the flow took through your nodes
  6. If something’s wrong — auth fails, the response shape is unexpected, the wrong branch was taken — close the dialog, edit, save, and run the test again.
  7. Once you’re happy, change the status to Published in the editor and save. The capability is now live.

Verify it worked

After publishing, open one of your Agent Stacks and add the capability to a specialist (see Add a capability to a specialist). Then test the stack in the built-in test sandbox by sending a message that should trigger the capability — the specialist should call it and return real data in its reply.

Troubleshooting

  • Symptom: Run Test returns a 401 or 403. Fix: The authentication isn’t right. Open the API definition and verify the credential, the auth type, and (if OAuth2) that the token hasn’t expired. For OAuth2 + External OAuth, you’ll need to authorize as a test customer first.

  • Symptom: The test returns the right HTTP response but the capability output is empty. Fix: The output mapping doesn’t match the response shape. Open the response section and confirm you’re plucking the right JSON path. Use a tool like jq or the API’s docs to confirm the shape.

  • Symptom: A canvas capability takes the wrong branch. Fix: Open the Condition node and verify the comparison operator and the value it’s comparing against. Edge cases like null vs. empty string vs. missing key are common gotchas.

  • Symptom: The capability runs in test but specialists don’t call it in real conversations. Fix: Confirm the capability is Published, the specialist has been granted access to it (Settings → Agent Stacks → specialist → Capabilities), and the description clearly explains when to use it. The AI picks capabilities based on the description, so vague descriptions get skipped.
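Two of the fixes above hinge on the exact shape of the JSON response. As a rough illustration (the response body and field names below are hypothetical, not Atender output), here is how a wrong JSON path and the null vs. empty string vs. missing key distinction play out:

```python
import json

# Hypothetical API response -- your real shape will differ.
response = json.loads('{"data": {"order": {"status": "shipped", "note": ""}}}')

# Wrong path: plucking a top-level "status" yields nothing, so the
# capability output would be empty even though the HTTP call succeeded.
wrong = response.get("status")               # None -- the key lives deeper
right = response["data"]["order"]["status"]  # "shipped"

# null vs. empty string vs. missing key -- three different values a
# Condition node comparison can trip over:
order = response["data"]["order"]
order["coupon"] = None                 # simulate an explicit null field
print(order.get("note") == "")         # True  -- present, empty string
print(order.get("coupon") is None)     # True  -- present, explicit null
print("discount" in order)             # False -- key missing entirely
```

If a Condition node checks only for "equals empty string", the explicit-null and missing-key cases will both take the other branch, which is exactly the wrong-branch symptom described above.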

A note on test data

Test mode runs against your real configured endpoint. If the capability writes data — cancels an order, charges a card — those writes really happen. Use a test endpoint or a sandbox API key whenever the API offers one, especially for capabilities at security level 2 or 3.
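One defensive pattern for API definitions that support it (the URLs and environment variable below are hypothetical, not an Atender feature) is to route test runs at a sandbox base URL so side-effecting calls never touch production:

```python
import os

# Hypothetical endpoint selection: prefer the sandbox for test runs
# whenever a sandbox URL is configured, so test writes never hit production.
PROD_URL = "https://api.example.com/v1/orders"
SANDBOX_URL = os.environ.get("ORDERS_SANDBOX_URL")

def endpoint_for(test_mode: bool) -> str:
    """Return the sandbox endpoint for test runs when one is configured."""
    if test_mode and SANDBOX_URL:
        return SANDBOX_URL
    return PROD_URL

print(endpoint_for(test_mode=False))  # always the production URL
```

Keeping the switch in one place makes it hard to accidentally point a destructive test at live data.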

Tags

AI Features · How To · Troubleshooting