Claude
Use Claude models in workflows
Actions
- Run Claude Agent: Runs a Claude Managed Agent in Anthropic’s managed environment and waits until the session is idle or terminated.
- Text Prompt: Generates a response using Anthropic’s Claude models via the Messages API.
Instructions
To get a new Claude API key, go to platform.claude.com.
Run Claude Agent
The Run Claude Agent component uses Claude Managed Agents to start a session with a configured agent and environment, sends your task as a user message, and polls until the session reaches a terminal state (idle or terminated). Log streaming is not used.
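The polling behavior can be sketched generically. This is a minimal illustration, not the component’s actual implementation; `fetch_status` is a hypothetical stand-in for whatever API call returns the current session status:

```python
import time

TERMINAL_STATES = {"idle", "terminated"}

def wait_for_session(fetch_status, interval_s=5.0, timeout_s=600.0):
    """Poll a session until it reaches a terminal state or the timeout elapses.

    fetch_status: zero-argument callable returning the current session status
    string (hypothetical stand-in for the real session-status API call).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval_s)
    return "timeout"

# Example with a stubbed status source that becomes idle on the third poll.
statuses = iter(["in_progress", "in_progress", "idle"])
result = wait_for_session(lambda: next(statuses), interval_s=0.0)
```

Because only the terminal states end the loop, intermediate statuses (whatever names the API uses) simply cause another poll after the interval.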
Prerequisites
- A Claude API key configured on the integration.
- An agent and environment already created in the Anthropic API (or Console). This step references them by ID.
Configuration
- Agent ID and Version (optional): The Managed Agent to run; the latest version is used unless Version pins a specific one.
- Environment ID: The environment the session runs in.
- Prompt: The user message (task) sent to the agent.
- Vault IDs (optional): For MCP tools that need vault-backed credentials.
Output
Emits a finished payload with the session status, session ID, and the final agent message (when available), so downstream steps can branch on or consume the result. On failure, the status is still emitted when the session is terminated or the step times out.
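A downstream step can branch on the emitted status. A minimal sketch, using the finished-event field names from this section (the routing logic itself is illustrative):

```python
def route_result(event: dict) -> str:
    """Decide how to handle a claude.runAgent.finished event."""
    data = event.get("data", {})
    status = data.get("status")
    if status == "idle":
        # Session completed normally; consume the final agent message.
        return data.get("lastMessage", "")
    # Terminated or timed out: surface the status for error handling.
    return f"session ended with status: {status}"

event = {
    "data": {
        "lastMessage": "Finished the requested task.",
        "sessionId": "sess_01ExampleManagedSession",
        "status": "idle",
    },
    "timestamp": "2026-04-26T12:00:00Z",
    "type": "claude.runAgent.finished",
}
result = route_result(event)
```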
Example Output
Section titled “Example Output”{ "data": { "lastMessage": "Finished the requested task.", "sessionId": "sess_01ExampleManagedSession", "status": "idle" }, "timestamp": "2026-04-26T12:00:00Z", "type": "claude.runAgent.finished"}Text Prompt
The Text Prompt component uses Anthropic’s Claude models to generate text responses.
Use Cases
- Summarization: Generate summaries of incidents or deployments.
- Code Analysis: Review code or generate PR comments.
- Content Generation: Create documentation or draft communications.
Configuration
- Model: The Claude model to use (e.g., claude-3-5-sonnet-latest).
- Prompt: The main user message/instruction.
- System Message: (Optional) Context to define the assistant’s behavior or persona.
- Max Tokens: (Optional) Limit the length of the generated response.
- Temperature: (Optional) Control randomness (0.0 to 1.0).
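These settings map directly onto Anthropic’s Messages API parameters. A minimal sketch of that mapping; the commented-out lines show how the resulting parameters would be passed to the official `anthropic` Python SDK (the call requires an API key, so it is not executed here):

```python
def build_request(prompt, model="claude-3-5-sonnet-latest",
                  system=None, max_tokens=1024, temperature=None):
    """Map the component's settings onto Messages API parameters."""
    params = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Optional settings are omitted entirely when unset, so the API
    # falls back to its own defaults.
    if system is not None:
        params["system"] = system
    if temperature is not None:
        params["temperature"] = temperature
    return params

params = build_request(
    "Summarize the attached deployment logs.",
    system="You are a concise SRE assistant.",
    temperature=0.2,
)

# With the official SDK installed (pip install anthropic):
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# message = client.messages.create(**params)
# print(message.content[0].text)
```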
Output
Returns a payload containing:
- text: The content generated by Claude.
- usage: Input and output token counts.
- stopReason: Why the generation ended (e.g., “end_turn”, “max_tokens”).
- model: The specific model version used.
- Requires a valid Claude API key configured in the integration
- Response quality and speed depend on the selected model
- Token usage is tracked and may incur costs based on your Claude plan
Example Output
Section titled “Example Output”{ "data": { "id": "msg_01X9JGt5...123456", "model": "claude-3-5-sonnet-latest", "response": { "content": [ { "text": "Here is the summary of the deployment logs you requested...", "type": "text" } ], "id": "msg_01X9JGt5...123456", "model": "claude-3-5-sonnet-latest", "role": "assistant", "stop_reason": "end_turn", "type": "message", "usage": { "input_tokens": 45, "output_tokens": 120 } }, "stopReason": "end_turn", "text": "Here is the summary of the deployment logs you requested...", "usage": { "input_tokens": 45, "output_tokens": 120 } }, "timestamp": "2026-02-06T12:00:00Z", "type": "claude.message"}