Introduction
The AI Workflow Builder provides a chat-driven interface for creating and modifying workflows using natural language. A typical workflow will:
- Gather input from users or systems
- Enrich that input with contextual data
- Execute one or more AI reasoning steps
- Process intermediate and final results
- Deliver structured output to downstream consumers
Key Features
- Build with AI Button: Access the AI builder directly from the workflow canvas
- Context-Aware Generation: Select existing workflow components to provide context for modifications
- Session History: View and resume previous workflow building sessions
- Suggested Questions: Get started with pre-defined workflow templates and examples
- Component Visualization: See which components were added, modified, or deleted in each generation
- Missing Connector Detection: Automatically detects and prompts for required connector setup
- Inline Connector Creation: Create missing data or AI/ML source connectors without leaving the builder
How It Works
- Click “Build with AI” in the workflow canvas
- Describe your workflow or select suggested questions
- Optionally add existing components as context for modifications
- Review the generated workflow with visual change indicators
- Continue refining through conversational prompts
Key Concepts
| Concept | Description |
|---|---|
| Chat Input | Accepts user questions to initiate the workflow |
| Prompt | Templates used to structure input for the LLM |
| LLM | Connects to OpenAI, Anthropic, etc., to generate completions |
| Database / Vector Search | Queries your structured or embedded data |
| Chat Output | Displays the final result in the interface |
| Workflow Canvas | Visual UI to arrange and connect components |
| Human Approval | Pauses workflow execution for manual approval or rejection |
Core Components of an AI Workflow
Prompt to Workflow
AI Squared now supports creating workflows directly from natural language descriptions.
- Rapid Prototyping: Describe your workflow in plain English and have it automatically generated
- Workflow Sessions: Each workflow execution creates a session that tracks conversation history and maintains context
- Session Management: Sessions automatically expire after a configurable duration (default: 10 minutes)
- Usage Tracking: Workflow sessions are tracked for billing and analytics purposes
Templates
Templates provide pre-built starting points for common workflow patterns.
- Direct LLM Chat: A pipeline connecting Chat Input to a Prompt, LLM, and Chat Output.
- Contextual Chat with Vector Store (RAG): Connects Chat Input and a Vector Store to the Prompt Component, enabling the model to respond based on the documents shared.
- Query Database with LLM: Helps users ask questions about structured data in everyday language. The workflow uses an LLM to generate a SQL query from the user's question and database schema, executes the query, and returns a summarized result.
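The "Query Database with LLM" template can be sketched as the three steps it describes: generate SQL, execute it, summarize the rows. The LLM call is stubbed here with a fixed query, and SQLite stands in for the connected database; everything else is illustrative.

```python
import sqlite3

def generate_sql(question: str, schema: str) -> str:
    # Stub standing in for an LLM completion over (question, schema).
    return "SELECT COUNT(*) FROM orders WHERE status = 'open'"

def run_workflow(question: str, conn: sqlite3.Connection) -> str:
    schema = "orders(id INTEGER, status TEXT)"
    sql = generate_sql(question, schema)         # step 1: question -> SQL
    rows = conn.execute(sql).fetchall()          # step 2: execute the query
    return f"Answer to {question!r}: {rows[0][0]} open orders"  # step 3: summarize

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, "open")])
print(run_workflow("How many orders are open?", conn))
```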
Input Components
Input components serve as the entry point into a workflow.
Responsibilities:
- Accept user or system input
- Normalize input into a consumable format for downstream components
Supported input types:
- Chat input (natural language queries)
- Programmatic input (API-triggered payloads)
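A minimal sketch of the normalization responsibility: both chat messages and API-triggered payloads are mapped to one shape that downstream components consume. Field names here are illustrative assumptions, not the platform's schema.

```python
def normalize_chat(message: str) -> dict:
    """Normalize a natural-language chat message."""
    return {"source": "chat", "text": message.strip(), "metadata": {}}

def normalize_api(payload: dict) -> dict:
    """Normalize an API-triggered payload into the same shape."""
    return {
        "source": "api",
        "text": str(payload.get("query", "")),
        "metadata": {k: v for k, v in payload.items() if k != "query"},
    }

print(normalize_chat("  What changed last week?  "))
print(normalize_api({"query": "status report", "user_id": 42}))
```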

Prompt Components
Prompt components define instruction templates that guide AI behavior.
Responsibilities:
- Translate raw input into structured instructions
- Define task boundaries and constraints
- Introduce dynamic variables for runtime substitution
Prompt components:
- Support dynamic placeholders (e.g., user_input, query, results)
- Are reusable across workflows
- Do not execute AI models directly
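A prompt template with runtime substitution can be sketched with the standard library's `string.Template`. The placeholder names mirror the examples above (`user_input`, `results`); the template text itself is hypothetical.

```python
from string import Template

# Reusable template with dynamic placeholders substituted at run time.
ANSWER_PROMPT = Template(
    "You are a helpful assistant.\n"
    "Question: $user_input\n"
    "Context: $results\n"
    "Answer concisely."
)

rendered = ANSWER_PROMPT.substitute(
    user_input="What is our refund policy?",
    results="Refunds are accepted within 30 days.",
)
print(rendered)
```

Note the template executes no model itself; it only structures the input an LLM component receives.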
AI Components
AI components execute prompts and logic using connected AI capabilities.
- Large Language Models (LLMs): LLMs are used for reasoning, transformation, summarization, and other logic-based activities.
- Agents: Agents manage multi-step reasoning, coordinate across multiple tools, and maintain execution context across steps. Agents always operate within the guardrails and boundaries set by users.
- Tools: Tools allow AI systems to interact with external systems or perform specific activities such as query execution, data transformation, or enrichment.
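A tool can be sketched as a plain function registered with a name and description that an agent can look up and invoke. The registry shape below is an assumption for illustration, not the platform's API.

```python
TOOLS: dict = {}

def tool(name: str, description: str):
    """Decorator that registers a function as an invokable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("to_upper", "Transform text to upper case")
def to_upper(text: str) -> str:
    return text.upper()

result = TOOLS["to_upper"]["fn"]("enrich me")
print(result)  # ENRICH ME
```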
Data Components
Data components enable workflows to interact with structured and unstructured data sources.
- Database: These components allow workflows to execute structured queries and retrieve results in tabular format.
- Vector: Vector search components support semantic retrieval and contextual augmentation using embeddings.
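Semantic retrieval can be sketched as embedding documents and the query, then ranking by cosine similarity. The character-count "embedding" below is a deliberately toy stand-in; a real vector component uses learned embeddings and an index.

```python
import math

def embed(text: str) -> list[float]:
    # Toy embedding: 26-dim bag of letters (illustration only).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["refund policy details", "shipping times by region"]
query = "how do refunds work"
best = max(docs, key=lambda d: cosine(embed(d), embed(query)))
print(best)
```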
Other Components
Workflow Canvas
The workflow canvas is the layer where all components are assembled. It gives users a visualization of workflows and ensures that workflows execute in a predictable manner.
Guardrails
Guardrails define the boundaries of operation for your AI instance. They can include prompt-level limitations, data access limitations, or checks on output format to ensure security. Guardrails keep workflows safe and auditable and help them meet enterprise requirements.
Conditional Component
Conditional components allow workflows to make decisions during execution. Based on the output of a previous component, conditions are evaluated to determine which branch of the workflow to proceed with. These decisions are made dynamically at run time.
Workflow Versioning
Workflow versioning lets you track, compare, and manage different versions of your workflow over time. Instead of requiring manual change tracking, Workflow Versioning maintains a complete version history for teams to reference.
Python Executor
The Python Executor allows custom Python code to run directly within a workflow. Execution is secure, and results are returned to the workflow in real time. The Python Executor supports libraries that extend workflow capabilities, enabling more dynamic and customized processing steps.
Smart Router
Smart Router selects the most appropriate LLM for each request. It uses context-aware routing to match requests to models based on task requirements and configured optimization goals. Smart Router works as follows:
- Analyzes the incoming query
- Analyzes model metadata (strengths, capabilities, limits, etc.)
- Determines the best fit (using LLMs)
- Routes the request automatically.
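The steps above can be sketched with a simple heuristic router. The model names, metadata, and keyword-based classifier are illustrative assumptions; the platform itself uses an LLM to determine the best fit.

```python
MODELS = {
    "fast-small": {"strengths": {"chat", "summarize"}, "max_tokens": 4_000},
    "reasoning-large": {"strengths": {"reasoning", "code"}, "max_tokens": 128_000},
}

def classify(query: str) -> str:
    # Stand-in for query analysis; real routing is LLM-driven.
    keywords = ("why", "prove", "debug")
    return "reasoning" if any(w in query.lower() for w in keywords) else "chat"

def route(query: str) -> str:
    task = classify(query)
    for name, meta in MODELS.items():
        if task in meta["strengths"]:
            return name
    return "fast-small"  # fallback when no model matches

print(route("Why does this query time out?"))  # reasoning-large
print(route("Summarize today's tickets"))      # fast-small
```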
Workflow Playground
The Workflow Playground is where you can validate AI workflows before they are published or deployed. It allows workflows to be executed in an isolated testing environment where inputs, intermediate steps, and outputs can be inspected without exposing the workflow to end users. Using the Playground, teams can:
- Submit test inputs to trigger workflow execution
- Inspect intermediate and final outputs produced by AI and data components
- Identify configuration issues, data access problems, or execution errors
- Validate workflow behavior before deployment
The Workflow Playground supports controlled testing and debugging during workflow development.
Human Approval Component
The Human Approval component enables human-in-the-loop workflows by pausing execution and requiring manual approval before proceeding. This is useful for:
- Quality Control: Review AI-generated content or classifications before taking action
- Compliance: Ensure sensitive operations are manually verified
- Decision Gates: Add human judgment to automated processes
Features:
- Customizable approval prompts and messages
- Three status states: approved, rejected, and expired
- Configurable session timeouts
- Visual feedback for approval status
Workflow Analytics
AI Squared tracks comprehensive analytics for workflow executions.
Token Usage and Cost Tracking
Workflows now track LLM token consumption and associated costs:
- Token Metrics: Monitor input tokens, output tokens, and total tokens used per workflow run
- Cost Calculation: Automatic cost calculation based on token usage and model pricing
- Usage Logs: Detailed logs of LLM usage including token counts and costs for each model invocation
- Billing Integration: Token usage and costs are tracked in your organization’s subscription for billing purposes
This enables you to:
- Understand the cost of running specific workflows
- Optimize prompts and model selection to reduce costs
- Track usage trends over time
- Set budgets and alerts based on token consumption
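The cost calculation reduces to token counts times per-token rates. The prices below are hypothetical per-1K-token rates for illustration, not real model pricing.

```python
# Hypothetical USD rates per 1,000 tokens (not actual model pricing).
PRICING = {"example-model": {"input": 0.0005, "output": 0.0015}}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one invocation: (tokens / 1000) * per-1K rate, per direction."""
    rates = PRICING[model]
    return (input_tokens / 1000) * rates["input"] \
         + (output_tokens / 1000) * rates["output"]

cost = run_cost("example-model", input_tokens=1200, output_tokens=400)
print(f"total tokens: {1200 + 400}, cost: ${cost:.4f}")  # cost: $0.0012
```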