Every SaaS product has an onboarding flow, and most of them have historically looked the same: a multi-step form wizard with progress bars, radio buttons, and input fields. You pick a goal from a predefined list, enter some numbers, click Next five times, and land on a dashboard. It works, but it's the digital equivalent of filling out paperwork at a doctor's office before anyone has talked to you.
I recently replaced the entire onboarding wizard in FreedomTrack — a financial independence tracking app — with a conversational AI agent. No more steps. No more radio buttons. Just a chat where you describe your financial situation in plain English, and the agent creates your records, explains FI concepts along the way, and gets your dashboard populated in a couple minutes.
Here's how I did it, what I learned, and why I think this pattern is going to become standard for onboarding flows.
What Was Wrong with the Wizard
FreedomTrack's original onboarding was a 5-step form:
- Welcome — Pick your financial goal from a list, select how you found us
- Personalize — Enter your age, target FI age
- Expenses — Enter a total monthly expense number
- Income — Add income sources one by one
- Review — See a summary, optionally add an asset
It wasn't broken, but it had clear problems:
It assumed users already understood FI concepts. When someone sees a field labeled "yield rate" during onboarding, they either guess, skip it, or leave. The form had no room to explain why a yield rate matters or what a reasonable value would be for their 401k.
It forced a rigid structure. Some users want to talk about their investments first. Others want to start with debt. A form wizard forces everyone through the same sequence regardless of what's on their mind.
It captured the minimum. The wizard got a total expense number and maybe one or two income sources. A conversation naturally surfaces more detail — "I spend about 2k on rent, 400 on groceries, 200 on subscriptions" gives you three expense records instead of one lump sum.
It felt like a chore. Five steps with a progress bar signals "this will take a while." A chat with an opening question signals "just tell me about yourself."
The Architecture
The replacement has three pieces: an edge function that talks to Claude, a React hook that manages conversation state, and a set of chat UI components.
The Edge Function
The edge function is a Supabase Edge Function that acts as a proxy between the frontend and the Claude API. It authenticates the user, builds the Claude request with tools, and executes tool calls against the database.
The key design decision is the tool loop. When Claude decides to create a record, the edge function executes the database insert and feeds the result back to Claude as a tool result. Claude might then create another record, ask a follow-up question, or summarize what it did. This loop continues until Claude responds with just text (no more tool calls), at which point the function returns.
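The loop above can be sketched as follows. This is a minimal illustration, not FreedomTrack's actual code: helper names like `callModel` and `executeTool` are assumptions (in practice they wrap the Anthropic Messages API and Supabase inserts), though the content-block shapes mirror the Messages API.

```typescript
// Shapes modeled on the Anthropic Messages API content blocks.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown }
  | { type: "tool_result"; tool_use_id: string; content: string };

type Message = { role: "user" | "assistant"; content: ContentBlock[] };

// Keep calling the model, executing any tools it requests, and feeding the
// results back, until a reply contains no tool calls.
async function runToolLoop(
  messages: Message[],
  callModel: (msgs: Message[]) => Promise<ContentBlock[]>,
  executeTool: (name: string, input: unknown) => Promise<string>,
): Promise<Message[]> {
  while (true) {
    const content = await callModel(messages);
    messages.push({ role: "assistant", content });

    const toolUses = content.filter(
      (b): b is Extract<ContentBlock, { type: "tool_use" }> =>
        b.type === "tool_use",
    );
    // Text-only reply: the loop is done; return the full message chain.
    if (toolUses.length === 0) return messages;

    // Execute each requested tool (a database insert in practice) and feed
    // the results back to the model as tool_result blocks in a user turn.
    const results: ContentBlock[] = [];
    for (const tu of toolUses) {
      results.push({
        type: "tool_result",
        tool_use_id: tu.id,
        content: await executeTool(tu.name, tu.input),
      });
    }
    messages.push({ role: "user", content: results });
  }
}
```

Returning the full `messages` array here is what later makes the conversation-history fix possible: the chain already contains every tool call and result.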
The tools available to Claude mirror the app's data model:
- create_asset — investments, retirement accounts, property
- create_liability — mortgages, loans, credit card debt
- create_income — salary, freelance, side income
- create_expense — rent, groceries, subscriptions
- update_profile — goals, age info
- complete_onboarding — marks onboarding done, writes a summary
Each tool inserts directly into the database with the authenticated user's ID. By the time the conversation is done, the user's dashboard is fully populated with real data.
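A tool here is two pieces: the JSON schema Claude sees and the handler the edge function runs. The sketch below is illustrative (the table and column names are assumptions, not FreedomTrack's schema); the one detail worth copying is that the user ID comes from the authenticated session, never from the model.

```typescript
// Tool definition, in the shape the Anthropic API expects.
const createExpenseTool = {
  name: "create_expense",
  description: "Create a monthly expense record for the user.",
  input_schema: {
    type: "object",
    properties: {
      name: { type: "string", description: "e.g. 'Rent'" },
      monthly_amount: { type: "number", description: "Dollars per month" },
    },
    required: ["name", "monthly_amount"],
  },
};

// Handler: inserts with the authenticated user's ID, so the model can never
// write into another user's data. (Table/column names are hypothetical.)
async function handleCreateExpense(
  supabase: { from: (table: string) => any },
  userId: string,
  input: { name: string; monthly_amount: number },
) {
  const { data, error } = await supabase
    .from("expenses")
    .insert({ user_id: userId, name: input.name, monthly_amount: input.monthly_amount })
    .select()
    .single();
  if (error) throw new Error(`create_expense failed: ${error.message}`);
  return data;
}
```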
The System Prompt
The system prompt is where the domain knowledge lives. It teaches Claude the FI concepts that matter for the app:
- The 25x rule (FI number = 25x annual expenses)
- What yielding assets are and why they matter
- How savings rate is calculated
- Reasonable yield rates and growth rates for common asset types
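The arithmetic behind the first two bullets is small enough to show directly. These are the standard FI formulas the prompt teaches, not FreedomTrack's actual implementation:

```typescript
// The 25x rule: your FI number is 25x your annual expenses
// (equivalently, a 4% safe withdrawal rate).
const fiNumber = (annualExpenses: number): number => 25 * annualExpenses;

// Savings rate: the fraction of income you keep.
const savingsRate = (monthlyIncome: number, monthlyExpenses: number): number =>
  (monthlyIncome - monthlyExpenses) / monthlyIncome;

// $4,200/month in expenses → a $1,260,000 FI number.
fiNumber(4200 * 12);
// $10,000/month income, $4,200/month expenses → 58% savings rate.
savingsRate(10_000, 4200);
```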
It also defines conversation flow guidance — start with goals, then expenses, income, assets, debts — but explicitly tells Claude to be flexible and adapt if the user volunteers information in a different order.
One important behavioral rule: confirm before creating records. The agent should say "I'll create a $4,200/month expense record for your total monthly expenses" before actually calling the tool. This gives the user a chance to correct mistakes before data hits the database.
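In the system prompt, that rule might read something like this (a paraphrase, not the verbatim prompt):

```text
Before calling any create_* tool, state in one sentence what you are about to
create and its value (e.g. "I'll create a $4,200/month expense record for your
total monthly expenses") and give the user a chance to correct it. Never
create a record the user has not seen described first.
```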
Conversation History
The first version sent only the text content of previous messages to Claude on each request. This meant Claude had no memory of its own tool calls — it didn't know what records it had already created. The result: duplicate records.
The fix was to return the full message chain from the edge function, including the raw tool_use and tool_result message blocks. The frontend stores these in a ref and sends them back as conversation history on each subsequent request. Claude sees its own prior tool calls and their results, so it knows exactly what's been created.
The display messages (what the user sees in the chat) remain simple — just text and created record cards. The API conversation history is a separate, richer data structure that preserves the full tool interaction chain.
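The split can be expressed as a projection from the rich history to the display state. The shapes below are illustrative, not FreedomTrack's exact types; the point is that display messages are derivable from the API history, never the other way around:

```typescript
// The rich history: exactly what goes back to the API on each request.
type ApiBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: { name?: string } }
  | { type: "tool_result"; tool_use_id: string; content: string };
type ApiMessage = { role: "user" | "assistant"; content: ApiBlock[] };

// The display state: just text plus a compact card per created record.
type DisplayMessage = {
  role: "user" | "assistant";
  text: string;
  records: { tool: string; label: string }[];
};

function toDisplay(history: ApiMessage[]): DisplayMessage[] {
  return history
    // Turns that carry only tool_result blocks are plumbing; don't render them.
    .filter((m) => m.content.some((b) => b.type === "text"))
    .map((m) => ({
      role: m.role,
      text: m.content
        .filter((b): b is Extract<ApiBlock, { type: "text" }> => b.type === "text")
        .map((b) => b.text)
        .join("\n"),
      records: m.content
        .filter((b): b is Extract<ApiBlock, { type: "tool_use" }> => b.type === "tool_use")
        .map((b) => ({ tool: b.name, label: b.input.name ?? "" })),
    }));
}
```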
The Frontend
The chat UI is straightforward: a message list with auto-scroll, a text input, and a header with a "Skip to Dashboard" button. Messages are styled as bubbles — user messages right-aligned, assistant messages left-aligned. When the agent creates a record, a compact card appears below the message showing the record type, name, and value.
The hook manages two parallel data structures:
- messages — the display state (simple text + record cards)
- apiHistoryRef — the full API conversation history (tool calls included)
On completion, the agent calls the complete_onboarding tool, the response includes an onboardingComplete flag, and the UI swaps the input for a "Go to Dashboard" button.
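The response handling reduces to a small pure function. The onboardingComplete flag is from the post; the other names here are illustrative:

```typescript
// What the edge function returns on each turn.
type AgentResponse = {
  text: string;
  apiHistory: unknown[]; // full tool-call chain, stored in apiHistoryRef
  onboardingComplete?: boolean;
};

type HookState = {
  messages: { role: "assistant"; text: string }[];
  showDashboardButton: boolean;
};

function applyResponse(state: HookState, res: AgentResponse): HookState {
  return {
    // Display state gets only the text; the rich history lives in a ref.
    messages: [...state.messages, { role: "assistant", text: res.text }],
    // complete_onboarding flips the input to a "Go to Dashboard" button.
    showDashboardButton: res.onboardingComplete === true,
  };
}
```

Keeping this pure makes it trivially testable, which matters given how hard the full conversation path is to test end to end.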
What Makes This Better
Natural language parsing removes friction. "I make about 120k" becomes a $10,000/month income record. "We spend maybe 4k a month total" becomes an expense record. Users don't need to think about which field to put a number in or what format to use.
Concepts get explained in context. When the agent asks about investments, it can naturally explain yield rates: "For a broad index fund, a yield rate around 2% and growth rate around 7-10% would be typical." This happens in conversation, not as a tooltip the user has to hover over.
The flow adapts to the user. If someone opens with "I have $200k in student loans and I want to get out of debt," the agent starts with liabilities instead of forcing them through a goal-selection step first. The conversation goes where the user takes it.
More data gets captured. In the wizard, users entered a single total expense number. In conversation, they naturally break things down — rent, car payment, groceries, subscriptions — which gives the dashboard much richer data to work with.
Skip still works. The "Skip to Dashboard" button is always available. Any records created during the conversation persist even if the user skips partway through. There's no all-or-nothing commitment.
What I'd Watch Out For
Latency matters more in a chat. A form submission can take a second and feel fine. In a conversation, if the agent takes 5+ seconds to respond, it feels broken. The tool loop can make this worse — if Claude needs to create three records before responding, that's three database inserts plus multiple API calls. Choosing a fast model (Sonnet over Opus) was important here.
You need to handle the conversation history carefully. The duplicate records bug was a direct result of not preserving tool call history. Any conversational agent that calls tools needs to send the full message chain back on subsequent requests.
Testing is harder. A form wizard has predictable paths you can test with Cypress or Playwright. A conversation can go in any direction. You can test the edge function tools directly, and you can test the UI components, but the full end-to-end conversation path is inherently variable.
Cost is real but manageable. Each onboarding conversation is maybe 5-10 API calls to Claude. At Sonnet pricing, that's a few cents per user. Compared to the lifetime value of a user who actually completes onboarding with good data, it's an easy trade.
The Pattern Going Forward
I think every multi-step form wizard is a candidate for this pattern. The ingredients are:
- A domain where users might not know the right values or categories
- A backend with a clear data model that maps to tool definitions
- A flow where the order doesn't actually matter (most don't)
The conversational approach isn't just a UI change — it fundamentally changes what data you collect. When you remove the constraints of predefined form fields and let users describe their situation in their own words, you get richer, more accurate data. And when the agent can teach concepts while collecting that data, users start their experience with understanding rather than confusion.
The form wizard had its day. For onboarding flows where users need guidance, not just input fields, conversation is the better interface.
Want to see this in action? Try the conversational onboarding yourself at FreedomTrack.io — it's free to sign up and you'll have your financial independence dashboard populated in a couple of minutes.
If you're thinking about adding conversational AI or agentic workflows to your own product, I'd love to help. Hit the "Get in Touch" button below or reach out on LinkedIn.
