How We Built a 7-Agent Sales Team in 2 Weeks
92+ prospects in CRM. Zero manual data entry. 7 specialized agents managing pipeline, competitive intel, and outreach sequences.
92+ prospects in a Notion CRM. 100 records with stage, owner, tier, ICP category, and next action. Four active outreach sequences for 22 prospects with LLM-personalized emails. Structured competitive intelligence across 4 battle cards. Call preparation with automated briefs for every discovery call.
All managed by 7 AI agents. Zero human coordinator. Zero manual data entry.
We’re running this system in production for a DeFi API sales operation selling to financial institutions in LATAM. The human team is 3 people. Without the agents, we’d need 2 to 4 additional hires to cover prospect research, pipeline maintenance, competitive intelligence, call preparation, follow-up sequences, and ongoing qualification.
Here’s the architecture, the numbers, and what we learned.
The 7 agents
Each agent has a defined role, an assigned model, and a strict scope. There’s no generalist agent that “does a bit of everything.” The separation of responsibilities mirrors what you’d apply to a human sales team: nobody researches prospects, writes outreach sequences, prepares calls, AND manages pipeline simultaneously.
1. Orchestrator (pan-orchestrator). Coordinates all other agents. Decides what gets researched, what gets prioritized, when to advance a deal’s stage. Maintains canonical documents: project brief, decision log, open questions. Runs on Claude Opus. It’s the only agent that delegates tasks to the other 6.
2. Prospect researcher (prospect-researcher). Takes a company name or URL as input. Produces a complete profile: what they do, who they sell to, size, funding, tech stack, hiring signals, competitive pressure. Scores the prospect across 5 dimensions (pain intensity, budget signal, urgency, technical fit, decision-maker access) on a 25-point scale. Writes the result directly to Notion.
3. Outreach engine (outreach-engine). Writes personalized email sequences for each prospect. No generic templates with mail merge. Reads the full prospect profile from the CRM, identifies the most relevant angle of approach, and produces an email under 100 words with a specific hook. Hard rule: never use “AI” in the subject line (spam filters) and never promise functionality that doesn’t exist in production.
4. Call intelligence (call-intel). Before each discovery call, produces a brief with: company snapshot, intel on attendees (title, background, what motivates them), pain hypothesis, and 5 prioritized questions. After the call, structures notes into actionable insights: confirmed pains, objections, budget signals, next steps. Doesn’t manage pipeline or write outreach. Its only job is making every conversation count.
5. Pipeline and qualification (customer-discovery). The CRM brain. Tracks each prospect through the funnel: Lead, Qualified, Discovery, Technical Review, Proposal, LOI, Pilot. Flags stalled deals (no activity in 7+ days). Generates pipeline reports on demand. Scores prospects by cross-referencing the 5 ICP scoring dimensions. When a deal needs to advance or get dropped, this agent flags it.
6. Market research (pan-sales-research). Produces structured competitive intelligence. We’re running 4 updated battle cards against direct and indirect competitors, each with differentiators, weaknesses, and displacement angles. Analyzes market segments (neobanks, exchanges, crypto wallets, payment orchestrators, cross-border) and prioritizes by fit with the current offering.
7. Deal desk (deal-desk). Handles proposals, design partner terms, pricing, and unit economics. Has the numbers: blended cost per transaction ($0.089), gross margin by volume, break-even point (~10K transactions/month), LTV/CAC (13.5x). When a deal reaches proposal stage, this agent produces the specific terms.
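The 25-point ICP score used by the prospect researcher and the pipeline agent can be sketched as a small data structure. This is an illustrative sketch, not the production implementation: the source names the five dimensions but not their individual ranges, so the 0-5 split per dimension is an assumption.

```python
from dataclasses import dataclass, fields

@dataclass
class IcpScore:
    # Each dimension scored 0-5 (an assumed split; the source only
    # states five dimensions summing to a 25-point scale).
    pain_intensity: int
    budget_signal: int
    urgency: int
    technical_fit: int
    decision_maker_access: int

    def total(self) -> int:
        # Sum every dimension into the overall 25-point score.
        return sum(getattr(self, f.name) for f in fields(self))

score = IcpScore(pain_intensity=4, budget_signal=3, urgency=2,
                 technical_fit=5, decision_maker_access=3)
print(score.total())  # 17 out of a possible 25
```

Keeping the score as structured fields rather than a single number lets the pipeline agent explain *why* a prospect scored low, not just that it did.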
Data architecture: Notion as CRM
The CRM is a Notion database with 100 records. Each record has: company name, ICP category (Fintech, Neobank, Payment, Exchange, etc.), tier (1-3 by ACV potential), lead quality, owner, contact, email, URL, and stage.
The full funnel defines seven stages (listed under agent 5); records currently occupy four active stages plus one terminal stage:
| Stage | Current records | Entry criteria |
|---|---|---|
| Lead (Pending Call) | 64 | Company identified as ICP fit |
| Qualified (Reached Out) | 15 | First outreach sent |
| Discovery (Scheduled) | 3 | Call scheduled with decision-maker |
| Technical Review (1st Call Done) | 4 | Discovery completed, interest confirmed |
| Dropped | 14 | Clear no-fit signal |
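The stalled-deal rule the pipeline agent applies (no activity in 7+ days, per agent 5 above) reduces to a simple filter. A minimal sketch, assuming each CRM record carries a stage and a last-activity timestamp:

```python
from datetime import datetime, timedelta

STALL_THRESHOLD = timedelta(days=7)   # "no activity in 7+ days"
TERMINAL_STAGES = {"Dropped"}         # terminal records are never flagged

def stalled(deals, now=None):
    """Return deals with no activity for 7+ days, skipping terminal stages."""
    now = now or datetime.now()
    return [d for d in deals
            if d["stage"] not in TERMINAL_STAGES
            and now - d["last_activity"] >= STALL_THRESHOLD]

deals = [
    {"company": "Acme", "stage": "Qualified",
     "last_activity": datetime(2025, 1, 1)},
    {"company": "Beta", "stage": "Dropped",
     "last_activity": datetime(2025, 1, 1)},
]
print([d["company"] for d in stalled(deals, now=datetime(2025, 1, 10))])
# ['Acme']
```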
Three human owners (Walls, Losi, Joseph) manage the deals. Agents feed data, qualify, research, and draft. Humans make the advance, reject, and close decisions.
Sync works in both directions. When an agent researches a prospect, it creates or updates the record in Notion via API. When a human moves a deal stage manually, agents read the updated state on their next cycle.
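The agent-to-Notion write path can be sketched against Notion's public REST API (the `POST /v1/pages` endpoint with a `database_id` parent). The property names below (`Company`, `Stage`, `Tier`) are assumptions for illustration; they must match the column names of the actual database.

```python
import os
import requests

NOTION_VERSION = "2022-06-28"  # Notion-Version header required by the API

def prospect_payload(database_id: str, company: str,
                     stage: str, tier: str) -> dict:
    # Property names here are assumed; adjust to the real CRM schema.
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Company": {"title": [{"text": {"content": company}}]},
            "Stage": {"select": {"name": stage}},
            "Tier": {"select": {"name": tier}},
        },
    }

def create_prospect(database_id: str, company: str,
                    stage: str, tier: str) -> str:
    resp = requests.post(
        "https://api.notion.com/v1/pages",
        headers={
            "Authorization": f"Bearer {os.environ.get('NOTION_TOKEN', '')}",
            "Notion-Version": NOTION_VERSION,
        },
        json=prospect_payload(database_id, company, stage, tier),
    )
    resp.raise_for_status()
    return resp.json()["id"]  # page id, reused for later stage updates
```

Reads go the other way: on each cycle an agent queries the database endpoint and picks up any stage a human changed by hand.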
Automated outreach: 4 touches in 14 days
We’re running cold outreach sequences for 22 prospects with resolved email addresses. The system works like this:
Fixed cadence: Touch 1 on day 0, Touch 2 on day 3, Touch 3 on day 7, Touch 4 on day 14. Maximum 20 emails per day (domain reputation protection). Each email runs through an LLM flow that personalizes the template with real prospect data: decision-maker name, industry, specific pain, public signal (talks, publications, product announcements).
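The cadence and daily cap above can be sketched as a scheduling function. The record shape (`sequence_start`, `last_touch_sent`) is an illustrative assumption:

```python
from datetime import date, timedelta

TOUCH_DAYS = [0, 3, 7, 14]  # fixed cadence: days 0, 3, 7, 14
DAILY_CAP = 20              # domain-reputation protection

def due_touches(prospects, today, already_sent_today=0):
    """Return (prospect, touch_number) pairs due today, honoring the cap."""
    due = []
    for p in prospects:
        for touch, offset in enumerate(TOUCH_DAYS, start=1):
            if (p["sequence_start"] + timedelta(days=offset) == today
                    and touch > p["last_touch_sent"]):
                due.append((p["company"], touch))
    budget = max(0, DAILY_CAP - already_sent_today)
    return due[:budget]

prospects = [
    {"company": "Acme", "sequence_start": date(2025, 1, 1),
     "last_touch_sent": 1},
]
print(due_touches(prospects, today=date(2025, 1, 4)))
# [('Acme', 2)]
```

Tracking `last_touch_sent` per prospect means a paused sequence (say, after a bounce) simply stops producing due touches.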
Sending uses Resend. Every sent email is logged in SQLite with a resend_id for downstream tracking. The system processes Resend events automatically: deliveries, opens, clicks, and bounces. When an email bounces, the sequence pauses for that prospect and the Notion record updates.
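The send-and-log step can be sketched against Resend's REST endpoint (`POST https://api.resend.com/emails`, whose response carries the message id) plus a local SQLite table. The sender address and table schema are assumptions for illustration:

```python
import os
import sqlite3
import requests

conn = sqlite3.connect("outreach.db")
conn.execute("""CREATE TABLE IF NOT EXISTS sent_emails (
    resend_id TEXT PRIMARY KEY,
    prospect  TEXT,
    touch     INTEGER,
    sent_at   TEXT DEFAULT CURRENT_TIMESTAMP
)""")

def send_and_log(prospect: str, touch: int, to_addr: str,
                 subject: str, html: str) -> str:
    resp = requests.post(
        "https://api.resend.com/emails",
        headers={"Authorization":
                 f"Bearer {os.environ.get('RESEND_API_KEY', '')}"},
        json={"from": "sales@example.com",  # assumed sender identity
              "to": [to_addr], "subject": subject, "html": html},
    )
    resp.raise_for_status()
    resend_id = resp.json()["id"]
    # Persist the resend_id so delivery/open/click/bounce webhooks
    # can be joined back to the prospect and touch number.
    conn.execute(
        "INSERT INTO sent_emails (resend_id, prospect, touch) VALUES (?, ?, ?)",
        (resend_id, prospect, touch),
    )
    conn.commit()
    return resend_id
```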
Personalization rules are strict: never use invented metrics, never promise future functionality, always reference something specific about the prospect. The proof point we use in current sequences is real: “legal research engine with 10 agents that completed 33 tasks autonomously” and “intelligence agent managing a pipeline of 92+ prospects via Slack and Notion.”
We chose the 14-day cadence with 4 touches based on typical response rates in LATAM B2B sales. The first touch has the highest open rate. The fourth touch is the breakup: 2 sentences, no pressure, open door. We’re measuring open, click, and reply rates per touch to optimize the cadence in the coming weeks.
Structured competitive intelligence
The market research agent maintains 4 active battle cards. Each follows the same structure: what the competitor does, where we’re stronger, where they’re stronger, and the displacement angle for sales conversations.
The 4 comparison axes our salespeople use most:
| Comparison | Key differentiator |
|---|---|
| vs. direct integration (Aave/Morpho) | 100+ lines of code vs. fewer than 10 |
| vs. building in-house | $255-330K first year, 2-3 engineers, 3 months vs. deployment in weeks |
| vs. Enso (route-based) | Intent-based (what, not how), gas included, embedded wallets |
| vs. Halliday (workflow engine) | No embedded wallets, no gas included, different abstraction |
These battle cards aren’t static documents. The agent updates them when it detects changes: new competitor features, pricing changes, fundraising announcements. The sales team has fresh competitive intelligence before every call without having to search for it.
What the human does, what the agent does
The division is clear and we’re strict about it.
The agent handles: prospect research, ICP qualification, outreach drafting, pre-call brief preparation, post-call note structuring, pipeline maintenance, competitive intelligence, email event tracking, report generation.
The human handles: deciding whether to advance a deal, reject it, or pause it. Approving emails before they’re sent. Running discovery calls. Negotiating terms. Closing deals. Building relationships.
The boundary isn’t arbitrary. Everything repetitive, data-driven, and executable with clear rules goes to the agent. Everything requiring judgment about human context, relationships, or financial commitments goes to the person.
A concrete example: when a prospect replies to an outreach email, the agent detects the response and updates the status in Notion. But the human decides how to respond to that reply. The agent can prepare a suggested draft with prospect context, but it never sends without explicit approval.
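The approval gate can be sketched as a queue where agents submit drafts and only a human action can mark one sendable. All names here are illustrative, not the production code:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    prospect: str
    body: str
    approved: bool = False  # only a human action ever flips this flag

class ApprovalQueue:
    """Agents enqueue drafts; nothing leaves without human sign-off."""
    def __init__(self):
        self.pending = []

    def submit(self, draft: Draft) -> None:
        self.pending.append(draft)

    def approve(self, index: int) -> Draft:
        # Called from the human-facing side (e.g. a Slack action).
        draft = self.pending.pop(index)
        draft.approved = True
        return draft

def send(draft: Draft) -> None:
    if not draft.approved:
        raise PermissionError("Draft not human-approved; refusing to send.")
    print(f"sending to {draft.prospect}")

queue = ApprovalQueue()
queue.submit(Draft("Acme", "Hi — saw your CTO's talk last week..."))
approved = queue.approve(0)
send(approved)
```

Making the send function raise on unapproved drafts, rather than trusting callers to check, is what turns the policy into a hard authority limit.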
The numbers so far
| Metric | Value |
|---|---|
| Prospects in CRM | 100 (92+ active) |
| ICP categories covered | 17 (Fintech, Neobank, Payment, Exchange, etc.) |
| Prospects with active sequence | 22 |
| Touches per sequence | 4 (14-day cadence) |
| Competitive battle cards | 4 |
| Segments researched | 6 |
| Agents in production | 7 |
| Human owners | 3 |
| Manually entered data | 0 records |
Three deals are in Technical Review with confirmed interest signals. Salespeople use the pre-call briefs for every discovery. ICP scoring runs automatically each time a new prospect is researched.
What we learned building this
Specialization matters more than tool count. Early versions had fewer agents with more responsibilities. An agent that researched prospects AND wrote outreach AND managed pipeline produced mediocre work across all three areas. Splitting into 7 specialized agents raised output quality on every front. The reason is the same one that applies to human teams: a prompt with 3 distinct responsibilities produces more generic results than 3 focused prompts.
The CRM as single source of truth simplifies everything. All agents read from and write to the same Notion CRM. No local files diverging, no parallel spreadsheets, no information trapped in one agent’s context that the others can’t see. When the prospect researcher updates a profile, the outreach engine has that information immediately available.
Authority limits prevent disasters. The orchestrator can delegate tasks to any agent. But no agent can send an email, create a binding proposal, or move a deal to a critical stage without human intervention. This restriction adds a few minutes of latency but eliminates an entire category of errors.
Personalized outreach with LLMs works if you have enough data. A generic email with {name} and {company} convinces nobody. An email that references the talk the CTO gave last week, mentions the prospect’s specific pain by name, and includes a real metric (not invented) from a comparable case gets a measurable response rate. The LLM isn’t what makes the difference. The prospect data feeding the LLM is what makes the difference.
Build time and cost
The complete system was built in 14 days. The first 5 days covered agent definition, data structure, and Notion integration. Days 6-10 were the outreach engine, email sequences, and Resend integration. The last 4 days were competitive intelligence, call preparation, and ICP scoring calibration.
Monthly operating cost is the LLM APIs (Claude Sonnet for the 6 specialist agents, Claude Opus for the orchestrator) and the Notion tier we already pay for. No dedicated VPS, no external database, no additional infrastructure. The CRM is Notion. Tracking is SQLite. Execution runs on demand.
For a client looking to replicate this sales architecture, deployment takes 10 to 15 days. What changes: CRM structure, ICP categories, outreach templates, battle cards, and prospect scoring. What stays the same: the 7-agent architecture, human approval pattern, outreach cadence, and event tracking system.
Synaptic turns businesses into AI-native organizations. We start where the demo ends. synaptic.so