Agent Step
The Agent Step block lets a GPT-powered assistant run inside your quiz flow. Insert it between the Questions step and completion so the agent can summarize answers, set structured variables, emit buttons/cards/carousels, and then send respondents to the next destination.
When to use the Agent Step
- You need GPT to turn open-text answers or tags into structured variables that other steps, automations, or completion copy can consume.
- You want the respondent to interact with AI-generated buttons, cards, or carousels without writing every option manually.
- You need smarter routing that evaluates the respondent's story and, depending on the outcome, sends them to another question, the lead capture block, or a tailored completion.
Where to add it
Open the Questions step, click the block palette, and choose the Agent Step tile (helper text: "Run an AI agent to set variables and guide routing"). The block behaves like other steps: you can reorder it, collapse it, and keep editing instructions without republishing.
Configuring the agent
Instructions & conversation
Use the textarea at the top of the block to explain what the assistant should do, what tone it should keep, and which variables to fill. The agent receives the quiz ID, step ID, submission ID, previous answers, variables, tags, and metadata, so you can refer to those data points directly.
Every run includes the answers you collected so far, any derived variables, and metadata such as the respondent's locale and session count. Mention those fields in your instructions if you need the agent to reason about them.
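To make that context concrete, here is a sketch of what a single run's payload might contain. The field names below are illustrative assumptions, not the product's exact schema:

```python
# Illustrative sketch of the context an Agent Step run might receive.
# Field names are assumptions for illustration, not the exact schema.
agent_context = {
    "quiz_id": "quiz_123",
    "step_id": "step_agent_1",
    "submission_id": "sub_456",
    "answers": {"q1": "I want to automate reporting"},    # previous answers
    "variables": {"plan_interest": "pro"},                # derived variables
    "tags": ["b2b"],
    "metadata": {"locale": "en-US", "session_count": 2},  # respondent metadata
}

# Instructions can reference any of these fields by name, e.g.:
prompt_hint = f"The respondent's locale is {agent_context['metadata']['locale']}."
```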
The sheet also exposes a Conversation limit input (defaults to six). The limit counts both respondent and agent messages, and the step forces completion or routing once the cap is hit. Keep the limit tight enough to avoid runaway conversations but high enough for the assistant to finish its work.
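The limit's behavior can be sketched roughly like this (the function and transcript shape are hypothetical, but the counting rule matches the description above):

```python
# Hypothetical sketch of how the conversation limit is enforced.
CONVERSATION_LIMIT = 6  # default; counts agent AND respondent messages

def should_force_completion(transcript: list[dict], limit: int = CONVERSATION_LIMIT) -> bool:
    """Return True once the combined message count reaches the cap."""
    total = sum(1 for msg in transcript if msg["role"] in ("agent", "respondent"))
    return total >= limit

transcript = [
    {"role": "agent", "text": "What's your goal?"},
    {"role": "respondent", "text": "Save time."},
]
should_force_completion(transcript)  # False: only 2 of 6 messages used
```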
Variable outputs
Outputs map the agent's structured response to quiz variables. Click Add output, pick a variable from the dropdown (or create a new one via + Create new variable...), and choose the type (string, number, boolean, or enum). The builder shows hints when a variable depends on lead capture data, so you know whether a lead block is required.
- Toggle Required when the flow should wait until the agent returns that value.
- For enums, provide Allowed values to keep the agent on the same set of choices.
- Output keys appear in the variable picker just like tags, so you can reference {{agent_variable}} in completion copy, Jump Logic, or exports.
See the Response Tags guide for how placeholders behave when you insert these outputs elsewhere.
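As a sketch, an enum output declaration and the check it implies might look like the following. The structure and variable name are illustrative, not the builder's internal format:

```python
# Illustrative sketch of an output declaration with allowed values.
output_def = {
    "variable": "customer_intent",   # hypothetical variable name
    "type": "enum",
    "required": True,
    "allowed_values": ["buy", "research", "support"],
}

def validate_output(definition: dict, value) -> bool:
    """Accept the agent's value only if it satisfies the declaration."""
    if definition["required"] and value is None:
        return False  # Required outputs make the flow wait for a value
    if definition["type"] == "enum":
        return value in definition["allowed_values"]
    return True

validate_output(output_def, "buy")     # True
validate_output(output_def, "browse")  # False: not in allowed values
```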
Response UI
Turn on Buttons, Cards, or Carousels to let the agent surface quick replies or rich media. Enabling a control exposes a guidance textarea so you can spell out the label style, ordering, or CTA copy you expect the agent to render. Disable the controls that your interface does not support to keep the assistant focused on text alone.
Exit conditions & routing
Click Add exit to turn an output into a branching rule. Give each exit a name, pick a condition (equals, not_equals, exists, not_exists, greater, or less), bind it to one of the outputs or variables, and then choose the next destination (any later question, the lead capture block, or completion).
Rules evaluate from top to bottom, and the first match wins. If nothing matches, the step follows its default path (either the next sequential block or completion), so always include a fallback exit or leave the built-in track intact.
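The evaluation order can be sketched like this; the rule shape and destinations are hypothetical, but the first-match-wins and fallback behavior follow the description above:

```python
# Hypothetical sketch of top-to-bottom exit evaluation: first match wins.
def pick_destination(exits: list[dict], variables: dict, default: str) -> str:
    for rule in exits:
        value = variables.get(rule["variable"])
        cond, target = rule["condition"], rule.get("value")
        matched = (
            (cond == "equals" and value == target)
            or (cond == "not_equals" and value != target)
            or (cond == "exists" and value is not None)
            or (cond == "not_exists" and value is None)
            or (cond == "greater" and value is not None and value > target)
            or (cond == "less" and value is not None and value < target)
        )
        if matched:
            return rule["destination"]
    return default  # fallback when no rule matches

exits = [
    {"variable": "score", "condition": "greater", "value": 80, "destination": "completion_vip"},
    {"variable": "email", "condition": "not_exists", "destination": "lead_capture"},
]
pick_destination(exits, {"score": 91}, default="completion_default")  # "completion_vip"
```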
Prompt settings
The OpenAI badge in the block header opens the Prompt settings sheet. There you can choose which GPT model powers the step (GPT-4.1 is the default, but GPT-4o, GPT-4o Mini, GPT-4.1 Mini, and GPT-4.1 Nano are also available), adjust the temperature slider, and cap the response length with max tokens. Models expose different token ranges, so the sliders update to reflect the selected capability, and a Reset overrides link brings everything back to the quiz default.
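Conceptually, these settings map onto familiar model request parameters. A minimal sketch, assuming illustrative values and a hypothetical clamping helper for the model-dependent token range:

```python
# Sketch of how Prompt settings might map to request parameters.
# Values are illustrative; the real defaults live in the quiz settings.
prompt_settings = {
    "model": "gpt-4.1",   # default; GPT-4o, GPT-4o Mini, GPT-4.1 Mini/Nano also offered
    "temperature": 0.7,   # higher values produce more varied phrasing
    "max_tokens": 512,    # caps response length; valid range depends on the model
}

def clamp_max_tokens(settings: dict, model_token_limit: int) -> dict:
    """Mirror the UI: the max-tokens slider range follows the model's capability."""
    clamped = dict(settings)
    clamped["max_tokens"] = min(settings["max_tokens"], model_token_limit)
    return clamped
```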
Step-by-step
- Add the Agent Step block from the palette inside the Questions step.
- Write precise instructions that describe the assistant's role, the context it can read, and the variables it must set.
- Declare outputs for every value you want to reuse, matching the type and, if needed, allowed values or the required flag.
- Use the Response UI toggles and guidance fields so the assistant knows how to label buttons, cards, or carousels, and set the conversation limit to bound the exchange.
- Build exit conditions in priority order, select the next destination for each, and leave a fallback path so the flow keeps moving even when no rule matches.
- Click the OpenAI badge to open Prompt Settings, pick a model, tweak temperature or tokens, then preview the agent step before publishing.
Tips / Gotchas
- Mention the variable names in your instructions (e.g., customer.intent) so the agent knows what to populate and what the rest of the flow expects.
- Use Required or Allowed values to keep enum outputs predictable; downstream rules and copy rely on the exact keys you register.
- Keep exit rules short and ordered by priority. Place any catch-all exit at the bottom or rely on the default destination to avoid dead ends.
- The conversation limit counts every message from the agent and the respondent. Raise it slightly if your agent needs extra clarifying questions, and lower it when you only need a single response.
- Enabling Buttons, Cards, or Carousels exposes guidance fields; use them to describe the layout, CTA language, or ordering you want.
- Agent outputs behave just like other variables, so you can reference them in completion copy, Jump Logic, or exports with the standard {{variable}} placeholder syntax.
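Placeholder substitution amounts to replacing each {{name}} token with the matching variable's value. A minimal sketch, assuming a simple regex-based renderer rather than the product's actual template engine:

```python
import re

# Minimal sketch of {{variable}} placeholder substitution.
# The product's real template engine may handle more cases (formatting, fallbacks).
def render(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),  # leave unknown tokens intact
        template,
    )

render("Thanks, your intent is {{customer_intent}}!", {"customer_intent": "buy"})
# "Thanks, your intent is buy!"
```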

