AI Intake

How the AI-powered conversation works

At the heart of Flowback is an AI-powered feedback conversation. Instead of static forms with text fields, Flowback uses a conversational AI assistant that engages with submitters in real time, asking targeted follow-up questions to gather the most useful information possible.

How it works

When a submitter opens your feedback form and enters their details, the AI assistant begins a conversation. Here's what happens behind the scenes:

  1. Session creation — A temporary intake session is created to track the conversation state. Sessions automatically expire after 30 minutes of inactivity.
  2. Contextual greeting — The AI generates a greeting tailored to the selected category. For a bug report, it might ask about the issue; for a feature request, it asks about the problem being solved.
  3. Follow-up questions — Based on the submitter's responses, the AI asks relevant follow-up questions. For bugs, it asks about steps to reproduce and expected behavior. For features, it explores use cases and priority.
  4. Completion — When the AI determines it has enough information, it signals completion and the submitter can review and submit their feedback.

Conversation context

The AI assistant is not a generic chatbot. It receives specific context about:

  • Feedback category — Bug reports get different questions than feature requests
  • Channel configuration — The AI knows what types of feedback the channel collects
  • Previous messages — The full conversation history is maintained throughout the session
  • Codebase context — If GitHub is connected, the AI can reference relevant files and recent changes

This context-awareness means the AI asks questions that are actually useful — not generic prompts that frustrate users.
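One way to picture the context listed above is as a structured payload assembled per turn. The field names and prompt format below are assumptions made for illustration; Flowback's actual schema may differ.

```typescript
// Illustrative shape of the per-turn context; field names are assumptions.
interface ConversationContext {
  category: "bug_report" | "feature_request" | "ux_improvement" | "performance";
  channel: { name: string; acceptedCategories: string[] };
  history: { role: "assistant" | "submitter"; content: string }[];
  // Present only when GitHub is connected.
  codebase?: { relevantFiles: string[]; recentChanges: string[] };
}

// Flatten the context into a text prompt for the model (hypothetical format).
function buildPrompt(ctx: ConversationContext): string {
  const lines = [
    `Category: ${ctx.category}`,
    `Channel accepts: ${ctx.channel.acceptedCategories.join(", ")}`,
    ...ctx.history.map((m) => `${m.role}: ${m.content}`),
  ];
  if (ctx.codebase) {
    lines.push(`Relevant files: ${ctx.codebase.relevantFiles.join(", ")}`);
  }
  return lines.join("\n");
}
```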

Streaming responses

All AI responses are streamed word by word to the feedback form. This creates a natural, real-time chat experience where submitters can see the assistant's response forming as they watch. Streaming reduces perceived latency and keeps users engaged.

Note
The streaming is powered by Server-Sent Events (SSE) for real-time delivery. The AI generates responses using Claude, Anthropic's language model, which is optimized for helpful, contextual conversations.
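As a rough sketch of what an SSE consumer does, the snippet below parses `data:` lines out of a raw chunk of an event stream. The wire format assumed here (one token per `data:` line, a `[DONE]` sentinel at the end) is a common convention, not Flowback's documented protocol.

```typescript
// Parse the `data:` payloads out of a raw SSE chunk.
// Assumes one token per event and a "[DONE]" end sentinel (an assumed convention).
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((token) => token !== "[DONE]");
}

// In a browser form this is usually handled by the built-in EventSource API:
//   const source = new EventSource(`/intake/${sessionId}/stream`); // hypothetical endpoint
//   source.onmessage = (event) => appendToTranscript(event.data);
```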

Category-specific behavior

The AI adapts its conversation style based on the selected category:

  • Bug reports — Focuses on reproducing the issue: what happened, what was expected, steps to reproduce, environment details, and severity.
  • Feature requests — Explores the problem being solved, desired outcome, current workarounds, and how many users are affected.
  • UX improvements — Asks about the specific interaction, what feels wrong, what the ideal experience would be, and frequency of use.
  • Performance issues — Gathers details about slowness, affected areas, consistency of the problem, and any patterns noticed.
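The category-to-focus mapping above could be encoded as simple data; the structure below is illustrative (the focus areas are taken from the list, the keys and fallback are assumptions).

```typescript
// Focus areas per category, taken from the list above; keys are illustrative.
const categoryFocus: Record<string, string[]> = {
  bug_report: ["what happened", "expected behavior", "steps to reproduce", "environment details", "severity"],
  feature_request: ["problem being solved", "desired outcome", "current workarounds", "affected users"],
  ux_improvement: ["specific interaction", "what feels wrong", "ideal experience", "frequency of use"],
  performance: ["where it is slow", "affected areas", "consistency of the problem", "patterns noticed"],
};

// Fall back to a generic prompt when no category was selected (an assumption).
function focusFor(category: string): string[] {
  return categoryFocus[category] ?? ["general description"];
}
```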

Submitter experience

From the submitter's perspective, the experience is simple:

  1. Enter their name and email
  2. Optionally select a feedback category
  3. Chat with the AI assistant about their feedback
  4. Optionally upload screenshots or files
  5. Submit when the conversation is complete

The AI handles all the structure — the submitter just talks naturally about their experience. No forms to fill out, no required fields to guess at, no templates to follow.

Session lifecycle

Intake sessions have a defined lifecycle:

  • Active — The conversation is in progress
  • Ready — The AI has gathered enough information and the session is ready to submit
  • Submitting — The submission is being processed (PRD generation, issue creation)
  • Submitted — The submission is complete
  • Expired — The session timed out after 30 minutes of inactivity

Warning
Sessions expire after 30 minutes of inactivity. If a submitter returns after this period, they'll need to start a new conversation.
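The lifecycle above can be read as a small state machine. The transition table below is inferred from the documented states and is an assumption; the actual allowed transitions may differ.

```typescript
// Session state machine inferred from the documented lifecycle (an assumption).
type SessionStatus = "active" | "ready" | "submitting" | "submitted" | "expired";

const transitions: Record<SessionStatus, SessionStatus[]> = {
  active: ["ready", "expired"],     // conversation in progress; may complete or time out
  ready: ["submitting", "expired"], // enough info gathered; may submit or time out
  submitting: ["submitted"],        // PRD generation / issue creation in progress
  submitted: [],                    // terminal
  expired: [],                      // terminal; a new conversation must be started
};

function canTransition(from: SessionStatus, to: SessionStatus): boolean {
  return transitions[from].includes(to);
}
```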