
Agent Block

Build AI agents with multi-provider LLM support, tool calling, and structured outputs

The Agent block is the most powerful block in Zelaxy. It connects to any major LLM provider, supports tool calling, structured JSON outputs, conversation memory, and intelligent fallback systems. Use it whenever you need AI reasoning, text generation, classification, summarization, or any language task.

Overview

Property  Value
Type      agent
Category  Core Block
Color     #6366f1 (Indigo)

When to Use

  • Generate text, summaries, or analyses from input data
  • Classify, categorize, or extract information from content
  • Call external tools (search, APIs, databases) as part of reasoning
  • Build chatbots with persistent memory
  • Get structured JSON responses for downstream processing

Configuration

System Prompt

Define the agent's behavior, personality, and reasoning framework. The built-in AI Wand can generate sophisticated system prompts for you — just describe what you want the agent to do.

User Message / Context

The input the agent processes. Typically references another block's output: {{starter.input}} or {{previous_block.content}}.
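
Under the hood, these references are simple path lookups into earlier block outputs. A minimal sketch of how such {{block.field}} interpolation could work (the resolve_refs helper and the outputs dict are illustrative assumptions, not Zelaxy's actual implementation):

```python
import re

def resolve_refs(template: str, outputs: dict) -> str:
    """Replace {{block.field}} references with values from prior block outputs."""
    def lookup(match):
        block, field = match.group(1), match.group(2)
        # Unknown references are left untouched rather than silently dropped
        return str(outputs.get(block, {}).get(field, match.group(0)))
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", lookup, template)

outputs = {"starter": {"input": "Summarize Q3 sales"}}
resolve_refs("Task: {{starter.input}}", outputs)  # → "Task: Summarize Q3 sales"
```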

Model Selection

Searchable dropdown with all supported providers. Each model shows its provider icon and performance characteristics.

Provider      Models                                  Auth
OpenAI        GPT-4o, GPT-4o-mini, o1, o3-mini        API Key
Anthropic     Claude 3.5 Sonnet, Claude 3 Opus/Haiku  API Key
Google        Gemini 2.0 Flash, Gemini Pro            API Key
xAI           Grok-2, Grok-2-mini                     API Key
DeepSeek      DeepSeek Chat, DeepSeek Reasoner        API Key
Groq          Llama, Mixtral (fast inference)         API Key
Cerebras      Ultra-fast inference models             API Key
Azure OpenAI  GPT-4o (Azure-hosted)                   API Key + Endpoint
OpenRouter    Access to 100+ models                   API Key
Ollama        Any local model                         None (local)

Advanced Settings

Setting               Type      Range                       Description
Temperature           Slider    0–2                         Controls randomness: 0 = deterministic, 1 = balanced, 2 = creative
Top-P                 Slider    0.1–1                       Nucleus sampling; limits the token selection pool
Top-K                 Slider    1–100                       Restricts vocabulary to the top K tokens
Max Output Tokens     Slider    100–8192                    Caps response length
Presence Penalty      Slider    -2 to 2                     Encourages topic diversity
Frequency Penalty     Slider    -2 to 2                     Reduces word repetition
Fallback Model        Dropdown  n/a                         Backup model if the primary fails
Max Retries           Slider    0–5                         Retry attempts on failure
Timeout               Slider    10–300s                     Request timeout
Context Window        Slider    1K–200K                     Maximum context tokens
Context Priority      Dropdown  Recent/Relevant/Balanced    How to prioritize context
Safety Level          Dropdown  Strict/Moderate/Permissive  Content filtering
Confidence Threshold  Slider    0.1–1                       Minimum confidence score
Enable Streaming      Toggle    n/a                         Real-time token streaming
Enable Caching        Toggle    n/a                         Cache responses for performance
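
The fallback, retry, and timeout settings combine into a try-primary-then-backup pattern. A hedged sketch of that pattern (call_with_fallback and its exponential backoff are illustrative assumptions, not Zelaxy internals):

```python
import time

def call_with_fallback(call_model, primary, fallback=None, max_retries=3, backoff=1.0):
    """Try the primary model with retries; switch to the fallback once retries run out."""
    models = [primary] + ([fallback] if fallback else [])
    last_error = None
    for model in models:
        for attempt in range(max_retries + 1):
            try:
                return call_model(model)
            except Exception as exc:
                last_error = exc
                if attempt < max_retries:
                    # Exponential backoff between retries of the same model
                    time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"All models failed: {last_error}")
```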

Response Format (Structured Output)

Define a JSON schema to get typed, predictable responses. The AI Wand can generate schemas from natural language descriptions.
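
Conceptually, a structured-output check walks the schema and compares types at each level. A minimal sketch covering only the object/array/string/number subset used on this page (matches_schema is hypothetical, not the platform's validator):

```python
def matches_schema(value, schema) -> bool:
    """Minimal structural check for a small subset of JSON Schema."""
    t = schema.get("type")
    if t == "object":
        # Every declared property must be present and match its sub-schema
        return isinstance(value, dict) and all(
            k in value and matches_schema(value[k], s)
            for k, s in schema.get("properties", {}).items()
        )
    if t == "array":
        return isinstance(value, list) and all(
            matches_schema(v, schema["items"]) for v in value
        )
    if t == "string":
        return isinstance(value, str)
    if t == "number":
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    return True  # unrecognized types pass through in this sketch
```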

Tool Integration

Connect tool blocks (Slack, Gmail, Search, etc.) to the Agent. The agent automatically discovers connected tools and calls them during reasoning.

Outputs

Field      Type    Description
content    string  The agent's generated text response
model      string  Model identifier used (e.g., gpt-4o)
tokens     json    Token usage: {prompt, completion, total}
toolCalls  json    List of tool calls made, with arguments and results
context    json    Conversation context and session data

Example: Research Assistant

Goal: Build an agent that researches a topic and returns a structured summary.

Workflow:

[Starter] → [Agent] → [Response]

Configuration:

  • System Prompt:
    You are a research assistant. Given a topic, provide a comprehensive summary
    with key facts, recent developments, and sources. Always be factual and cite
    your reasoning.
  • User Message: {{starter.input}}
  • Model: gpt-4o
  • Temperature: 0.3 (factual, low creativity)
  • Response Format:
    {
      "type": "object",
      "properties": {
        "summary": { "type": "string" },
        "keyFacts": { "type": "array", "items": { "type": "string" } },
        "confidence": { "type": "number" }
      }
    }

Result: The agent returns a structured JSON object that downstream blocks can parse reliably.
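
Downstream, that structured content can be handled with an ordinary JSON parser instead of regex scraping. A sketch, assuming {{agent.content}} arrives as a JSON string matching the schema above (the raw value here is made up for illustration):

```python
import json

# Hypothetical value of {{agent.content}} once a response format is set
raw = '{"summary": "Short overview of the topic.", "keyFacts": ["fact one", "fact two"], "confidence": 0.82}'

report = json.loads(raw)            # parses directly; fields match the schema
if report["confidence"] >= 0.7:     # gate downstream steps on reported confidence
    headline = report["summary"]
```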

Example: Agent with Tools

Goal: Agent that answers questions using web search and then sends results via Slack.

Workflow:

[Starter] → [Agent] → [Slack]
                ↑
        [Google Search]  (connected as tool)

How it works:

  1. Connect a Google Search block to the Agent (draw a line to its tools input)
  2. The Agent automatically discovers the search tool
  3. When the user asks a question, the Agent decides whether to search the web
  4. Search results feed back into the Agent's reasoning
  5. Final answer goes to {{agent.content}} → Slack message
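
Steps 2–4 amount to a standard tool-calling loop. A simplified sketch, assuming the model returns either a tool request or a final answer each turn (run_agent and the reply shapes are illustrative, not Zelaxy's actual protocol):

```python
def run_agent(llm, tools, question, max_steps=5):
    """Tool-calling loop: the model either requests a tool or returns an answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        # llm returns {"tool": name, "args": {...}} or {"answer": text}
        reply = llm(messages)
        if "answer" in reply:
            return reply["answer"]
        # Run the requested tool and feed its result back into the conversation
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("No final answer within step budget")
```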

Tips

  • Use structured output for pipelines — downstream blocks can reliably parse JSON fields
  • Set low temperature (0.1–0.3) for factual/classification tasks
  • Set high temperature (0.7–1.0) for creative writing
  • Enable fallback model for production workflows — prevents failures if primary model is down
  • Connect memory blocks for multi-turn chatbot experiences