
Knowledge Block

Query your knowledge base using RAG for context-aware AI responses

The Knowledge block performs Retrieval-Augmented Generation (RAG) — it searches your uploaded documents, PDFs, websites, and data using vector similarity and feeds the most relevant chunks to an AI agent. This grounds AI responses in your actual data, reducing hallucinations.
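Vector-similarity retrieval like this can be sketched in a few lines. The following is a minimal illustration of the idea, not the block's actual implementation; the tiny 2-D vectors stand in for real embedding-model output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by
    # the product of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, chunks, top_k=3):
    # Score every chunk against the query and keep the best matches.
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

# Toy 2-D "embeddings"; a real knowledge base stores model-generated vectors.
chunks = [
    ("PTO policy: 20 days per year", [0.9, 0.1]),
    ("Office address and parking",   [0.1, 0.9]),
]
best = retrieve([1.0, 0.0], chunks, top_k=1)
# best[0] pairs the highest similarity score with its chunk text
```

In production this search runs against a vector index rather than a Python list, but the ranking principle is the same.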

Overview

| Property | Value |
| --- | --- |
| Type | knowledge |
| Category | Core Block |
| Color | #06B6D4 (Cyan) |

When to Use

  • Build Q&A systems over your own documents
  • Ground AI responses in company-specific data
  • Search uploaded PDFs, websites, or databases
  • Create domain-specific chatbots with accurate answers

Configuration

| Setting | Type | Description |
| --- | --- | --- |
| Knowledge Base | Dropdown | Select from your uploaded knowledge bases |
| Search Query | Long text | What to search for (e.g. {{starter.input}}) |
| Top K Results | Slider | Number of chunks to retrieve (1–20) |
| Similarity Threshold | Slider | Minimum relevance score (0–1) |
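Top K and Similarity Threshold interact: the threshold filters out weak matches first, then Top K caps how many of the surviving chunks are returned. A rough sketch of that selection logic (the field names here are illustrative, not the block's internals):

```python
def select_chunks(scored_chunks, top_k=5, threshold=0.7):
    # Drop chunks below the similarity threshold, then keep the top_k best.
    passing = [c for c in scored_chunks if c["score"] >= threshold]
    passing.sort(key=lambda c: c["score"], reverse=True)
    return passing[:top_k]

hits = [
    {"text": "PTO accrual rules", "score": 0.91},
    {"text": "Parking map",       "score": 0.42},
    {"text": "Holiday calendar",  "score": 0.78},
]
selected = select_chunks(hits)
# the 0.42 chunk falls below the 0.7 threshold and is filtered out
```

This is why a high threshold can return fewer than Top K chunks: nothing forces weak matches into the result set.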

Outputs

| Field | Type | Description |
| --- | --- | --- |
| content | string | Retrieved text chunks combined |
| results | json | Array of matched chunks with metadata and scores |
| sources | json | Source document references |
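To illustrate how the three outputs relate, here is a hypothetical results array and one way content and sources could be derived from it. The exact per-chunk fields shown (metadata, document, page) are assumptions for the example, not a documented schema:

```python
results = [
    {"text": "Employees accrue 20 PTO days per year.", "score": 0.91,
     "metadata": {"document": "handbook.pdf", "page": 12}},
    {"text": "Unused PTO rolls over, up to 5 days.", "score": 0.83,
     "metadata": {"document": "handbook.pdf", "page": 13}},
]

# content combines the chunk texts into a single string for the Agent;
# sources lists the distinct documents the chunks came from.
content = "\n\n".join(chunk["text"] for chunk in results)
sources = sorted({chunk["metadata"]["document"] for chunk in results})
```

Downstream blocks usually only need content; results and sources are there when you want scores or citations.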

Example: Company FAQ Bot

Goal: Answer questions using your company's documentation.

Workflow:

[Starter] → [Knowledge] → [Agent] → [Response]

Configuration:

  • Knowledge Base: "Company Handbook" (uploaded PDF)
  • Search Query: {{starter.input}}
  • Top K: 5
  • Similarity Threshold: 0.7

Agent System Prompt:

Answer the user's question using ONLY the provided context. If the context
doesn't contain the answer, say "I don't have that information."

Context:
{{knowledge.content}}
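At run time, {{knowledge.content}} is replaced by the retrieved text before the prompt reaches the model. A sketch of that substitution using plain Python string formatting, as a stand-in for the workflow's own templating:

```python
PROMPT_TEMPLATE = """Answer the user's question using ONLY the provided context. If the context
doesn't contain the answer, say "I don't have that information."

Context:
{context}"""

def build_system_prompt(retrieved_content):
    # retrieved_content plays the role of {{knowledge.content}}.
    return PROMPT_TEMPLATE.format(context=retrieved_content)

prompt = build_system_prompt("PTO: employees accrue 20 days per year.")
```

The "ONLY the provided context" instruction is what keeps the Agent grounded: with no matching chunks, the context is empty and the model is told to admit it doesn't know.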

How it works:

  1. User asks "What is the PTO policy?"
  2. Knowledge block searches the handbook using vector similarity
  3. Top 5 relevant chunks are retrieved
  4. Agent reads the chunks and generates an accurate answer
  5. Sources are cited for transparency
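The five steps above can be condensed into one function. Here retrieve and generate are hypothetical stand-ins for the Knowledge and Agent blocks, included only to show the data flow:

```python
def faq_bot(question, retrieve, generate):
    # retrieve() and generate() are hypothetical stand-ins for the
    # Knowledge block and the Agent block, respectively.
    chunks = retrieve(question)                       # steps 1-3: vector search
    context = "\n\n".join(c["text"] for c in chunks)  # combine retrieved chunks
    answer = generate(question, context)              # step 4: agent answers
    sources = sorted({c["source"] for c in chunks})   # step 5: cite sources
    return {"answer": answer, "sources": sources}
```

In the actual workflow no code is needed; the configured blocks fill these roles and pass their outputs along automatically.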

Tips

  • Upload diverse sources — PDFs, websites, text files all get embedded and indexed
  • Set threshold to 0.7+ for high-quality matches; lower for broader recall
  • Top K = 3–5 is usually enough; more chunks = more context tokens = higher cost
  • Always pair with an Agent — the Knowledge block retrieves, the Agent reasons