Five tools collapsed into one AI agent platform for PepTalk's speaker booking workflow
Four chained AI agents replacing five separate tools — inbound parsing, semantic speaker matching, proposal generation, CRM follow-up. Built by a team of three.

The challenge
PepTalk receives hundreds of speaker requests every month. Each one arrives as unstructured text — a polished RFP, a two-line email, a phone call transcribed into notes. Each one requires extracting the parameters (event type, date, topic, budget, audience size), finding suitable speakers, assembling a proposal, and managing the deal through to close.
That process worked at lower volumes. At hundreds of requests per month, it became a bottleneck. The sales team was spending more time on data extraction and cross-referencing than on the consultative work that closes bookings.
The fragmentation made it worse. At least five systems touched each request: an email inbox, a speaker database, a document editor, a CRM, a calendar. A single booking required a team member to copy data between all five, hoping nothing fell through the cracks. Proposals sometimes went out with outdated speaker availability. Follow-ups slipped. The institutional knowledge about which speakers worked well for which kinds of events lived entirely in individual team members' heads, never in any searchable system.
For a speaker agency, speed is a competitive differentiator. A client who sends the same brief to three agencies typically goes with whoever responds first with a relevant, well-assembled shortlist. Every hour of delay was a potential lost booking. This wasn't an operational problem. It was a revenue problem.
What we learned
| Lesson | Why it matters |
| --- | --- |
| Five systems, one workflow | When a single request touches five tools, transfer overhead grows faster than the work itself. |
| Matching logic was tribal | Speaker-to-event fit lived in team members' heads — capacity and quality vanished with any departure. |
| First response wins the deal | In booking markets, the agency that answers first books the speaker — slow triage costs revenue directly. |
The solution
Twistag framed the project as an agent problem. We could have approached it as integration — connect PepTalk's existing tools with middleware, add some automation, reduce the manual steps. We chose not to. Most of the manual work in PepTalk's workflow was structured reasoning, not creative judgment: extracting parameters from text, matching criteria against a database, assembling documents from templates, scheduling follow-up based on deal stage. Exactly the tasks where agentic AI outperforms traditional automation. A middleware layer would only have wired the existing friction together.
The platform is built on four chained agents. The inbound parser reads whatever format a request arrives in and outputs a structured JSON object — event type, date, topic, budget, audience profile, language requirements, specific constraints. It runs through AWS Bedrock and flags ambiguity explicitly rather than guessing. That eliminates the first 10-15 minutes of manual handling on every request and standardises the data before downstream agents touch it.
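As a rough sketch, the parsing step could look like the following, using boto3's Bedrock Converse API. The model ID, region, field set, and the "ambiguities" convention here are illustrative assumptions, not PepTalk's production schema:

```python
# Sketch of the inbound parser: free text in, structured JSON out.
# Model ID, region, and field names are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

PARSE_PROMPT = """Extract the following fields from the speaker request below
and answer with JSON only: event_type, date, topic, budget, audience_profile,
language_requirements, constraints. If a field cannot be determined from the
text, set it to null and add a note to an "ambiguities" list instead of guessing.

Request:
{request_text}"""

def parse_request(request_text: str) -> dict:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder
        messages=[{
            "role": "user",
            "content": [{"text": PARSE_PROMPT.format(request_text=request_text)}],
        }],
        inferenceConfig={"temperature": 0},  # extraction, not creativity
    )
    parsed = json.loads(response["output"]["message"]["content"][0]["text"])
    # The key behaviour: ambiguity is surfaced for a human, never guessed at.
    if parsed.get("ambiguities"):
        parsed["needs_review"] = True
    return parsed
```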
The speaker matching engine is the analytical core. PepTalk's 3,000+ speaker catalogue is stored as vector embeddings in Supabase using pgvector, capturing topic expertise, speaking style, audience fit, fee range, geographic availability, language capabilities, and past performance ratings. When a parsed request arrives, the matching agent converts the event parameters into a query vector and runs a multi-dimensional similarity search across the catalogue. A request for "someone who can talk about resilience in a way that resonates with frontline healthcare workers" fails in keyword search unless a profile contains those exact terms. In a semantic search system, speakers covering mental health in high-pressure environments, burnout prevention, or post-crisis recovery all surface as relevant. The agent returns a ranked shortlist with relevance scores and match reasoning.
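Because Supabase exposes a standard Postgres interface, the core of that search reduces to a pgvector distance query. A minimal sketch, assuming a `speakers` table with an `embedding` column and a `fee_min` field (all names are ours, for illustration):

```python
# Minimal sketch of the shortlist query. Table and column names are
# illustrative; Supabase is plain Postgres underneath, so psycopg works.
import psycopg

def match_speakers(conn: psycopg.Connection, query_embedding: list[float],
                   budget_max: int, limit: int = 10) -> list[tuple]:
    # pgvector accepts vectors as '[x1,x2,...]' text cast to ::vector.
    embedding = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT name,
                   1 - (embedding <=> %s::vector) AS relevance  -- cosine similarity
            FROM speakers
            WHERE fee_min <= %s                 -- hard filter before ranking
            ORDER BY embedding <=> %s::vector   -- <=> is pgvector cosine distance
            LIMIT %s
            """,
            (embedding, budget_max, embedding, limit),
        )
        return cur.fetchall()
```

In the real system, more of the hard constraints (language, geography, dates) would filter the pool the same way before the similarity ranking runs.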
The proposal generator takes the matched speakers and produces a customised document for each request — speaker bios tailored to the event context, fee schedules, availability confirmation, references from similar past bookings. AWS Bedrock's prompt versioning handles consistency: the team iterates on templates, tests new structures, rolls back changes if a new version produces worse output, all without redeploying. Bedrock guardrails check every generated proposal for factual consistency, pricing accuracy, and tone compliance before it can be sent.
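In boto3 terms, attaching a guardrail to a Converse call is a single configuration block. A sketch with placeholder identifiers (the actual guardrail policies live in Bedrock, not in code):

```python
# Sketch of proposal generation with a Bedrock guardrail attached.
# Guardrail and model identifiers are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

def generate_proposal(prompt_text: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder
        messages=[{"role": "user", "content": [{"text": prompt_text}]}],
        guardrailConfig={
            "guardrailIdentifier": "GUARDRAIL_ID",  # placeholder
            "guardrailVersion": "1",
        },
    )
    # A guardrail intervention blocks the proposal instead of sending it.
    if response.get("stopReason") == "guardrail_intervened":
        raise ValueError("Proposal blocked by guardrail; route to human review")
    return response["output"]["message"]["content"][0]["text"]
```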
The fourth agent manages the deal after the proposal goes out — monitoring engagement signals, triggering follow-up actions based on context rather than rigid schedules, logging every interaction back into the CRM with structured data. PepTalk's CRM now reflects reality. That sounds like a small thing. In practice it means the sales team can make decisions based on accurate pipeline data rather than a database that's perpetually two weeks out of date.
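The "context rather than rigid schedules" point is easiest to see in code. A deliberately simplified sketch; the stages, signals, and thresholds below are ours, for illustration only:

```python
# Illustrative sketch of context-driven follow-up, not PepTalk's actual
# rules: the action depends on deal stage and engagement signals, not a
# fixed day count.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DealState:
    stage: str                 # e.g. "proposal_sent", "negotiating"
    proposal_opened: bool      # engagement signal from the proposal doc
    last_contact: datetime     # timezone-aware

def next_followup(deal: DealState) -> str:
    idle = datetime.now(timezone.utc) - deal.last_contact
    if deal.stage == "proposal_sent" and deal.proposal_opened and idle > timedelta(days=2):
        return "send_availability_nudge"    # engaged but quiet: gentle nudge
    if deal.stage == "proposal_sent" and not deal.proposal_opened and idle > timedelta(days=4):
        return "resend_with_summary"        # never opened: shorter re-send
    if deal.stage == "negotiating" and idle > timedelta(days=1):
        return "escalate_to_account_owner"  # active deals should never idle
    return "wait"                           # no action; keep monitoring
```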
Argilla runs across all four agents as a continuous evaluation layer. Every AI output — parsed requests, speaker matches, proposals, follow-ups — is reviewable in a structured interface. Sales team corrections feed directly into prompt optimisation. The system improves continuously without retraining.
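A sketch of what that review loop looks like against Argilla's 2.x Python SDK; the dataset name, fields, and questions are illustrative, not the production setup:

```python
# Sketch of the evaluation layer using Argilla's 2.x Python SDK.
import argilla as rg

client = rg.Argilla(api_url="https://argilla.example.com", api_key="...")

settings = rg.Settings(
    fields=[
        rg.TextField(name="request"),       # the parsed inbound request
        rg.TextField(name="agent_output"),  # shortlist, proposal, follow-up
    ],
    questions=[
        rg.LabelQuestion(name="verdict", labels=["accept", "fix", "reject"]),
        rg.TextQuestion(name="correction"),  # free-text fix, fed into prompts
    ],
)

dataset = rg.Dataset(name="agent-output-review", settings=settings)
dataset.create()

# Every agent output is logged as a record for the sales team to review.
dataset.records.log([
    {"request": "Keynote, June, resilience, 500 HCPs", "agent_output": "..."},
])
```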
What this shaped
| Principle | In practice |
| --- | --- |
| Patternable judgment is agentic | Work that looks like judgment but follows patterns is exactly the agent zone. |
| Agents behind clean interfaces | Wrap each agent in an abstraction so prompts improve and providers swap without rewriting the workflow (see the sketch after this table). |
| Reviews feed the loop | Every team correction becomes training signal — agents improve continuously without an explicit retraining cycle. |
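What "behind clean interfaces" means in practice, as a minimal sketch; the names are ours, not PepTalk's code:

```python
# Minimal interface sketch: the workflow depends only on this contract,
# so prompts and providers can change behind it without rewrites.
from typing import Protocol

class Agent(Protocol):
    """The only contract the chain depends on."""
    def run(self, payload: dict) -> dict: ...

class BedrockParser:
    """Concrete agent. Swapping model providers means replacing this
    class; the chain below never changes."""
    def run(self, payload: dict) -> dict:
        payload["parsed"] = True  # stand-in for the real model call
        return payload

def run_chain(agents: list[Agent], payload: dict) -> dict:
    # Prompts, providers, and guardrails live inside each agent; the
    # workflow only ever sees dict in, dict out.
    for agent in agents:
        payload = agent.run(payload)
    return payload
```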
The impact
PepTalk's sales team now spends time on relationship building and consultative selling rather than on data entry across disconnected systems. The same team handles substantially higher request volume with faster turnaround, and the agency competes on the dimension that matters most in this market: who responds first with a relevant shortlist.
The architectural lesson — and the reason Twistag now defaults to this framing on operations work — is that most workflow inefficiency in service businesses looks like creative judgment from the outside but is actually structured reasoning underneath. Once you map the real decisions, most of them are patternable. Agents handle the patterns. Humans handle the genuinely ambiguous edges.
What this proved
| Outcome | Why it matters |
| --- | --- |
| Humans handle the ambiguous 20% | Once patternable work moves to agents, team time shifts to decisions that genuinely require human judgment. |
| Architecture absorbs vendor change | When agents sit behind abstractions, model market shifts become procurement events, not engineering ones. |
| Speed becomes the product | When triage compresses from hours to minutes, the whole sales motion changes — faster cycles, more bookings. |
Technologies used
- AWS Bedrock
- Supabase
- pgvector
- Argilla

