How I Built a Full SaaS Product in 30 Days Using AI Automation
There’s a version of this story where I tell you AI is magic and you can build anything by talking to a chatbot for an afternoon.
This isn’t that story.
This is what actually happened when I used AI-assisted development to build a production SaaS product — with real authentication, real payments, a Chrome extension, and four distinct AI pipelines — in 30 calendar days. What worked. What broke. What I’d do differently. And what it means for anyone building software right now.
The Product
An outreach sequencer. Not a mass-email blaster — a tool that helps people manage relationship-based prospecting. The kind of outreach where you actually read someone’s profile, write something relevant, and follow up like a human.
The core features:
- Campaign builder — multi-step sequences (profile view → comment → warm intro → direct message → follow-up)
- Daily queue — each morning, you see exactly who to contact and what step they’re on
- Chrome extension — visit a LinkedIn profile, click a button, AI extracts their data and adds them to a campaign
- AI message generation — profile intelligence feeds into personalized message drafts
- Stripe billing — trial period, subscription checkout, webhook-driven status management
- Analytics — reply rates, campaign performance, step-by-step funnel analysis
This isn’t a toy. It’s a multi-table Postgres schema with 28 migrations, Clerk authentication, Stripe integration, a Chrome Manifest V3 extension, and AI features calling GPT-4.1 for extraction and generation.
But the real story isn’t how fast I built it. It’s the four AI pillars running inside the product — and how Claude Code helped me design, build, and optimize each one through a structured PR workflow.
The Four AI Pillars
Most SaaS products bolt AI onto an existing workflow — a “generate” button here, a chatbot there. This product was designed around AI from the ground up. Every core workflow depends on one of four AI pipelines: transformation, enrichment, automation, and optimization.
Understanding these four patterns matters if you’re building any product with AI at its core — not just an outreach tool.
Pillar 1: Transformation — Raw Data to Structured Intelligence
The first AI pipeline turns unstructured LinkedIn profile text into structured, queryable data.
A user visits a LinkedIn profile and clicks the Chrome extension button. The extension scrapes the visible page text — an unstructured mess of headlines, job titles, education, posts, endorsements, and company descriptions — and sends it to the backend. GPT-4.1 receives up to 8,000 characters and returns structured JSON with temperature: 0 for deterministic output.
The extracted structure isn’t just name and title. The AI produces a full profileIntel object: industry, years of experience, company size, notable clients, certifications, pain points, services offered, social proof, recent activity, content themes, and contact options. It also generates a warmIntro — a 75–100 word conversation opener that references something concrete from the profile and ends with a natural question.
This is the transformation that makes everything downstream possible. Without it, you have a name and a LinkedIn URL. With it, you have a complete intelligence profile that feeds into message generation, lead scoring, and campaign personalization.
The technical challenge: Getting the AI to consistently return valid JSON with 15+ nested fields from messy, inconsistent input. The response_format: { type: "json_object" } flag in the OpenAI API was critical — it forces structured output without markdown fencing. The prompt went through three iterations before the extraction was reliable enough for production.
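The shape of that call can be sketched in TypeScript. This is an illustrative reconstruction based only on the details above (the 8,000-character cap, temperature: 0, the response_format flag) — the prompt wording and function names are invented, not the product's actual code:

```typescript
// Hypothetical sketch of the extraction request builder.
type ExtractionRequest = {
  model: string;
  temperature: number;
  response_format: { type: "json_object" };
  messages: { role: "system" | "user"; content: string }[];
};

const MAX_PROFILE_CHARS = 8000;

function buildExtractionRequest(rawProfileText: string): ExtractionRequest {
  // Truncate the scraped page text so the prompt stays within budget.
  const profileText = rawProfileText.slice(0, MAX_PROFILE_CHARS);

  return {
    model: "gpt-4.1",
    temperature: 0, // deterministic extraction
    // Forces plain JSON output -- no markdown fencing around the object.
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract a structured profileIntel JSON object and a 75-100 word " +
          "warmIntro from the LinkedIn profile text. Respond with JSON only.",
      },
      { role: "user", content: profileText },
    ],
  };
}
```

The payload then goes to the OpenAI chat completions endpoint, and the response body is parsed straight into the profileIntel structure.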
Pillar 2: Enrichment — Keeping Intelligence Current
Extraction happens once. Enrichment happens repeatedly.
People change jobs, launch projects, publish content, earn certifications. A profile you extracted six months ago is stale. The enrichment pipeline lets users re-extract a contact’s profile data with a single click, pulling fresh intelligence from LinkedIn while preserving the full history.
Here’s how it works: before overwriting any data, the system snapshots the current description, warm_intro, and profile_intel into a profile_intel_history JSONB array on the contact row. Each snapshot carries a timestamp. So you can see how a contact’s profile evolved over time — when they changed roles, when their company grew, when new pain points emerged.
The enrichment pipeline uses the same AI extraction as the initial save, but it’s smarter about what it updates. Core identity fields (name, title, company) aren’t overwritten — they’re already verified. Only the intelligence fields refresh: description, warm intro, and the full profileIntel object. The industry field only backfills if it was previously empty, never overwrites.
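The merge rules described here can be sketched as a pure function. The types and field names below are assumptions drawn from the article's description, not the product's real schema:

```typescript
// Illustrative sketch of snapshot-before-overwrite enrichment.
type ProfileSnapshot = {
  snapshotAt: string;
  description: string | null;
  warmIntro: string | null;
  profileIntel: Record<string, unknown> | null;
};

type Contact = {
  name: string;
  title: string;
  company: string;
  industry: string | null;
  description: string | null;
  warmIntro: string | null;
  profileIntel: Record<string, unknown> | null;
  profileIntelHistory: ProfileSnapshot[];
};

type Extraction = {
  industry: string | null;
  description: string;
  warmIntro: string;
  profileIntel: Record<string, unknown>;
};

function applyEnrichment(contact: Contact, fresh: Extraction, now: Date): Contact {
  // 1. Snapshot the current intelligence into history before touching anything.
  const snapshot: ProfileSnapshot = {
    snapshotAt: now.toISOString(),
    description: contact.description,
    warmIntro: contact.warmIntro,
    profileIntel: contact.profileIntel,
  };

  return {
    ...contact, // identity fields (name, title, company) are never overwritten
    profileIntelHistory: [...contact.profileIntelHistory, snapshot],
    // 2. Refresh only the intelligence fields.
    description: fresh.description,
    warmIntro: fresh.warmIntro,
    profileIntel: fresh.profileIntel,
    // 3. Industry backfills only when previously empty.
    industry: contact.industry ?? fresh.industry,
  };
}
```

Returning a new object rather than mutating in place also makes the snapshot logic trivially testable before it ever touches the JSONB column.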
Rate limiting prevents abuse: maximum 5 re-enrichments per contact per 90-day window, enforced at both the API layer and the database layer with FOR UPDATE row locking to prevent race conditions.
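The database-layer check might look roughly like the following. The table and column names are my guesses — the article confirms only the FOR UPDATE locking and the 5-per-90-days limit:

```sql
-- Hypothetical shape of the rate-limit check (schema names assumed).
BEGIN;

-- Lock the contact row so concurrent re-enrichment requests serialize here.
SELECT enrichment_count, enrichment_window_start
FROM contacts
WHERE id = $1 AND user_id = $2
FOR UPDATE;

-- Application code then decides: if enrichment_window_start is older than
-- 90 days, reset the counter; if enrichment_count >= 5, reject the request.
UPDATE contacts
SET enrichment_count = enrichment_count + 1
WHERE id = $1;

COMMIT;
```

Taking the lock before reading the counter is what closes the race: two simultaneous clicks can't both read a count of 4 and both proceed.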
Why this matters beyond outreach: Any product that depends on external data needs an enrichment strategy. The pattern — snapshot before overwrite, rate-limit re-extraction, preserve history — applies to CRM enrichment, market intelligence, competitor monitoring, or any system where data freshness drives decision quality.
Pillar 3: Automation — The Message Recipe System
The third pipeline is where AI moves from data processing to content creation.
The “Message Recipe” system combines three ingredients to generate personalized outreach messages:
- A prompt — system-level instructions that define the AI’s persona, tone, and approach (e.g., “Write as a curious peer, not a salesperson”)
- A template — the message structure with variable placeholders (e.g., a cold outreach template vs. a warm follow-up template)
- Profile intelligence — the contact’s extracted data, assembled into a context block
The context assembly is where the real value lives. The system builds a plain-text block that includes name, title, company, industry, headline, experience, company size, pain points, services, social proof, recent activity, content themes, notable clients, certifications, warm intro notes — and if available, the recent LinkedIn conversation history. All of this feeds into the LLM as the user message alongside the template, while the prompt provides the system instructions.
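A simplified version of such a context assembler might look like this. The field names here are illustrative and cover only a fraction of the fields listed above; the real function handles the full profileIntel object:

```typescript
// Sketch of a context-block builder for the LLM user message.
type ProfileContext = {
  name: string;
  title?: string;
  company?: string;
  industry?: string;
  painPoints?: string[];
  recentActivity?: string[];
  conversationHistory?: string[];
};

function buildContextBlock(p: ProfileContext): string {
  const lines: string[] = [`Name: ${p.name}`];

  // Only include fields that actually have data -- a sparse profile
  // should not pad the prompt with empty labels.
  if (p.title) lines.push(`Title: ${p.title}`);
  if (p.company) lines.push(`Company: ${p.company}`);
  if (p.industry) lines.push(`Industry: ${p.industry}`);
  if (p.painPoints?.length) lines.push(`Pain points: ${p.painPoints.join("; ")}`);
  if (p.recentActivity?.length) lines.push(`Recent activity: ${p.recentActivity.join("; ")}`);
  if (p.conversationHistory?.length) {
    lines.push("Conversation so far:", ...p.conversationHistory.map((m) => `- ${m}`));
  }
  return lines.join("\n");
}
```

The output is deliberately plain text with labeled lines — LLMs handle that format reliably, and it keeps the prompt readable when debugging.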
The result: a personalized message draft that references the contact’s actual work, addresses their likely pain points, and follows the structural pattern defined by the template. The user reviews, edits if needed, copies it, and sends it manually from LinkedIn.
Fifteen starter templates ship with the product across five categories: cold outreach, warm follow-up, meeting request, objection handling, and re-engagement. Each template is linked to campaign steps, so the right template automatically appears at the right stage.
The conversation context loop: When a user pastes a LinkedIn conversation thread into the system, it’s stored and then fed back into future message generation. The AI sees what’s already been said and tailors the next message accordingly. This closes the loop — extraction feeds generation, generation produces messages, conversation history feeds back into the next generation cycle.
Pillar 4: Optimization — Smart Queue and Warm Lead Prioritization
The fourth pipeline doesn’t use an LLM — it uses the data the other three produce to optimize the user’s daily workflow.
The daily queue is a single SQL query that joins contacts, enrollments, campaign steps, touch logs, and message templates. It calculates the next action date for each contact based on when they were last touched and what interval the current step specifies. But the critical optimization is warm lead prioritization.
The query runs a correlated subquery against the touch log: any contact who has ever replied, booked an appointment, had a video call, phone call, or in-person meeting gets flagged as is_warm. Warm contacts sort to the top of the queue — before cold contacts, before overdue contacts, before everything else.
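The core of that query might look something like the sketch below, with invented table and column names — the article confirms only the correlated subquery against the touch log and the warm-first sort order:

```sql
-- Hedged sketch of the warm-lead flag and queue ordering (schema assumed).
SELECT c.id,
       c.name,
       EXISTS (
         SELECT 1
         FROM touch_log t
         WHERE t.contact_id = c.id          -- correlated on the outer row
           AND t.touch_type IN ('reply', 'appointment', 'video_call',
                                'phone_call', 'in_person')
       ) AS is_warm,
       e.next_action_date
FROM contacts c
JOIN enrollments e ON e.contact_id = c.id
WHERE e.next_action_date <= CURRENT_DATE
ORDER BY is_warm DESC,           -- warm leads first
         e.next_action_date ASC; -- then most overdue
```

The EXISTS form short-circuits on the first matching touch, so the warm flag stays cheap even for contacts with long interaction histories.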
This means the user’s first actions every morning are with the people most likely to convert. The warmest leads get attention first, while they’re still warm. Cold outreach fills the remaining time.
The queue also pre-loads each item with its linked prompt and template, so the “Generate Message” button appears instantly — no extra API calls, no loading states. The user opens the queue, sees a warm lead at the top, clicks “Generate Message,” gets a personalized draft informed by the contact’s full intelligence profile and conversation history, and sends it. The whole cycle — from opening the app to sending a message — can be under 60 seconds.
Why four pillars instead of one: Each pipeline could exist independently, but the compounding effect is what makes the product work. Transformation creates the data. Enrichment keeps it fresh. Automation turns it into action. Optimization ensures the highest-value actions happen first. Remove any one pillar and the others lose most of their value.
The Development Workflow: Claude Code and the PR Pipeline
The product wasn’t just built with AI — the development process itself was an AI-optimized pipeline. Every significant feature went through a structured Claude Code workflow using pull requests.
How Claude Code drove the PR workflow
Each feature started as a conversation with Claude Code: describe the feature, discuss the approach, agree on the implementation plan. Claude Code then generated the code across multiple files — database migration, API route, TypeScript types, query functions, UI components — and I reviewed the full diff before merging.
The PR history tells the story of how AI-assisted development actually works in practice:
PR #1 — Conversation context for AI message generation. Claude Code implemented the full pipeline: new database columns, updated queries, a formatProfileData function that assembles the context block for the LLM, and API changes to accept conversation data from the extension. One PR, six files changed, shipped the same day.
PR #8 — Conversation history for LinkedIn messaging analysis. A new conversations table, three API endpoints (GET/POST/PATCH), a collapsible UI section with expandable entries and outcome dropdowns, plus Chrome extension integration. Claude Code generated the migration, the queries, the API routes, and the React components. My job was reviewing the data model and verifying the outcome tracking logic made business sense.
PR #9 — Dark mode with WCAG AA contrast. Claude Code installed next-themes, wired up the theme provider, generated dark-mode variants for every color badge (conversation outcomes, step types, campaign states), and verified contrast ratios. The PR included specific numbers: 19.5:1 for primary text (WCAG AAA), 9.8:1 for muted text. Accessibility compliance that would normally be an afterthought was built in from the start because AI made it cheap to do right.
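Those ratios come from the WCAG 2.x relative-luminance formula. For reference, here is a standalone implementation of that standard formula — this is the spec's definition, not code from the product:

```typescript
// WCAG 2.x contrast ratio between two sRGB colors.
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r: number, g: number, b: number): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05); // 1:1 (identical) up to 21:1
}
```

WCAG AA requires 4.5:1 for normal text; AAA requires 7:1 — so 19.5:1 clears AAA with plenty of margin.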
PR #10 — Warm lead tab navigation. The queue optimization described in Pillar 4 needed a UI: tab bar with count badges, filtered views, tab-aware empty states. Claude Code built the client-side tab component, wired the filtering logic, and generated contextual empty state messages. The whole feature — from “warm leads should be easier to find” to merged PR — was a single session.
The pattern that emerged
Every Claude Code PR followed the same structure:
- Describe the feature — what it does, why it matters, what the data model should look like
- Claude Code generates the implementation — migrations, queries, API routes, types, UI
- Review the diff — focus on data model correctness, business logic, and security
- Test against real data — not unit tests (the codebase was moving too fast), but real data in the database
- Merge and ship — with a Vercel preview deploy for larger changes
The AI handled the breadth of each change (touching six files instead of one). I handled the depth (is this the right data model? does this business logic make sense? is this secure?).
The Build Timeline
Week 1: Foundation and Database
Next.js scaffolded with Clerk auth and Neon Postgres. Core schema: users, contacts, campaigns, campaign steps, enrollments, touch log. CRUD API routes. Basic dashboard layout.
The biggest win wasn’t code generation — it was schema design iteration. I described the domain to Claude Code, and instead of sketching ER diagrams for two hours, I had a working schema in minutes. When the enrollment model needed to change, AI refactored the schema, updated all queries, and adjusted the API routes in a single session.
Traditional timeline: 2–3 weeks for a solo developer.
Week 2: Chrome Extension and AI Transformation
Chrome extension (Manifest V3) with popup, content script, and background service worker. The AI extraction pipeline (Pillar 1). Duplicate detection. Contact detail pages with full profile intelligence display.
Chrome extension development is a minefield of obscure gotchas. AI handled the Manifest V3 boilerplate and message-passing architecture. When I needed to inject a floating action button into LinkedIn’s DOM without breaking their SPA navigation, AI wrote the MutationObserver + pushState intercept pattern on the first try.
What broke: Double-encoded JSON. The AI-extracted profileIntel was being JSON.stringify()’d before insertion into a JSONB column — Postgres stored a string instead of an object. The fix was one line, but finding it required understanding the data flow end-to-end. AI can write the fix in seconds; you still need to diagnose the problem.
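The symptom is easy to reproduce in isolation. A minimal sketch (variable names are mine, and the exact point of double-encoding in the real data flow may differ):

```typescript
// Minimal reproduction of the double-encoding symptom.
const profileIntel = { industry: "SaaS", yearsExperience: 12 };

// Bug: the value gets serialized twice somewhere in the data flow, so the
// JSONB column ends up holding a JSON *string*, not a JSON object.
const doubleEncoded = JSON.stringify(JSON.stringify(profileIntel));

// Parsing what was stored yields a string -- queries like
// profile_intel->>'industry' then silently return NULL.
const stored = JSON.parse(doubleEncoded);

// Fix: serialize exactly once (or pass the raw object and let the
// driver handle serialization for the JSONB parameter).
const encodedOnce = JSON.stringify(profileIntel);
```

The lesson generalizes: serialization bugs rarely throw errors, they just quietly change the stored type, so you find them by inspecting real rows, not stack traces.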
Traditional timeline: 3–4 weeks.
Week 3: Message Recipes and Queue Optimization
Campaign builder with seven step types. The daily queue system (Pillar 4). Message Recipes (Pillar 3). A/B experiments. 15 starter templates. The enrichment pipeline (Pillar 2).
The queue query — a single SQL statement joining five tables with date arithmetic, warm-lead detection via correlated subquery, and pre-loaded templates — is the heart of the product. AI wrote the first version in minutes. Each edge case fix was a conversation: describe the problem, AI adjusts the query, verify against test data.
Traditional timeline: 2–3 weeks.
Week 4: Payments, Polish, and Production
Stripe integration (checkout, webhooks, subscription lifecycle). 30-day trial system enforced across all API routes. Schema hardening (CHECK constraints, UNIQUE constraints, performance indexes). Admin system. Dark mode (WCAG AA). Analytics page.
Claude Code handled the Stripe boilerplate end-to-end. I focused on business logic: trial expiry edge cases, Clerk-to-Stripe race conditions, subscription state management.
Traditional timeline: 3–4 weeks.
How Much Time the App Saves Users
The four AI pillars aren’t just interesting engineering — they eliminate hours of manual work from every outreach workflow. Here’s what each one replaces:
Contact Research: 5–8 minutes → 10 seconds
Without the app, researching a new prospect means opening their LinkedIn profile, scanning their experience, noting their industry and role, and typing a few key details into a spreadsheet or CRM. Even a quick pass takes 5–8 minutes per contact. Most people cut corners here — they skim for 30 seconds and wing it — because thorough research is tedious at scale.
With AI extraction, you click a button on their LinkedIn profile. Ten seconds later, you have structured data: industry, years of experience, company size, notable clients, certifications, pain points, services, recent activity, content themes — plus a ready-to-use warm intro. The app captures more detail in 10 seconds than most people bother to note down manually.
At 10 new contacts per week, that’s roughly an hour saved weekly on research alone.
Keeping Contact Data Fresh: 5–10 minutes → one click
Profiles go stale. People change jobs, launch projects, shift focus. Manually re-researching a contact means going back to LinkedIn, re-reading their updated profile, figuring out what changed, and updating your records. Even a quick check takes 5–10 minutes.
One-click re-enrichment does this in seconds — and preserves the full history so you can see how a contact evolved over time. Did they just get promoted? Change companies? Start posting about a new pain point? The enrichment pipeline catches it without you re-reading the entire profile.
Across a pipeline of 50+ active contacts, this saves 1–2 hours per month that would otherwise go to manual data maintenance — or more likely, the data would just go stale and you’d be working with outdated information.
Writing Personalized Messages: 5–8 minutes → 2 minutes
Writing a personalized outreach message — one that references something specific about the person and doesn’t sound like a template — takes 5–8 minutes when you’re doing it properly. You re-read the key parts of their profile, think about the angle, draft the message, and edit it.
The Message Recipe system generates a personalized draft in seconds, informed by the contact’s full intelligence profile, conversation history, and the specific campaign step. You review it, tweak a line or two, and send. The total time drops to about 2 minutes — and the message is often more thoroughly personalized because the AI references more structured data than you’d typically pull up when writing by hand.
At 5 messages per day, that’s 15–30 minutes saved daily. Over a month, that’s 8–10 hours back.
Figuring Out Who to Contact: 10–15 minutes → 0 minutes
Without a system, the start of every outreach session involves overhead: open your spreadsheet, check who you last contacted, figure out who’s due for a follow-up, try to remember who responded, and decide where to start. Even with a well-organized spreadsheet, this planning phase takes 10–15 minutes.
The daily queue eliminates this entirely. You open the app and your prioritized list is waiting — warm leads at the top, overdue contacts flagged, each item pre-loaded with the right message template for the current step. There’s no planning phase. You start working immediately.
That’s 10–15 minutes saved every session — 5+ hours per month of planning overhead eliminated.
The Combined Effect
| Task | Manual approach | With the app | Time saved |
|---|---|---|---|
| Research a new contact | 5–8 min | 10 sec | ~97% |
| Re-research a stale contact | 5–10 min | 10 sec | ~97% |
| Write a personalized message | 5–8 min | 2 min | ~65% |
| Plan daily outreach session | 10–15 min | 0 min | 100% |
| Daily total (10 contacts) | 1.5–2.5 hours | 25–35 min | ~70% |
| Daily total (20 contacts) | 3–5 hours | 45–65 min | ~75% |
The gap widens as volume increases. At 10 contacts per day, the app saves roughly an hour. At 20 contacts per day — a realistic number for someone doing outreach as a core part of their role — manual work scales linearly (every contact adds another 8–15 minutes) while the app stays roughly flat per contact (the queue is already loaded, the AI generates in seconds, you just move down the list).
At 20 contacts per day, you’re looking at 3–5 hours of manual work compressed into under an hour.
Over a five-day work week at 20 contacts per day, the math gets hard to ignore:
| Period | Manual | With the app | Time saved |
|---|---|---|---|
| Weekly (100 contacts) | 15–25 hours | 4–5 hours | 11–20 hours |
| Monthly (400+ contacts) | 60–100 hours | 15–22 hours | 45–78 hours |
| Yearly | 720–1,200 hours | 180–264 hours | 540–936 hours |
That’s the difference between outreach consuming your entire morning and outreach being something you finish before your second cup of coffee — with higher quality personalization on every message.
What AI Is Actually Good At
Transformation pipelines. Taking unstructured input and producing structured output is where LLMs excel. The extraction prompt that turns messy LinkedIn text into a 15-field JSON object would have required months of hand-coded parsing rules. With an LLM, it’s a well-crafted prompt and a response_format flag.
Boilerplate and glue code. API routes, CRUD operations, form handling, database queries, webhook integrations. AI handles this at 5–10x speed.
Iterating on known patterns. “Add a new column to this table, update the query, update the API route, update the TypeScript types, update the UI component.” AI does this in one pass across six files.
Multi-file PRs. Claude Code’s biggest advantage over a code completion tool: it understands the full context of your codebase and can generate changes across migration, query, API, type, and UI files in a single coherent PR.
What AI Is Bad At
Architecture decisions. AI will happily build whatever you describe, even if it’s the wrong approach. It won’t tell you that your enrollment model should be event-sourced instead of state-based. You need to know what to build — AI accelerates how fast you build it.
Enrichment strategy. Knowing when to re-extract data, what rate limits to enforce, how to handle history — these are domain decisions. AI implemented the enrichment pipeline, but the decision to snapshot before overwrite, enforce a 90-day cooldown, and use FOR UPDATE locking was human judgment about data integrity.
Optimization logic. The warm-lead prioritization query works because someone understood that reply signals are more valuable than recency. AI wrote the SQL, but the insight — “warm contacts should sort before overdue contacts” — was a product decision.
Security edge cases. AI writes code that works, but it doesn’t always write code that’s secure. Rate limiting on the AI extraction endpoint, row-level locking for re-enrichment, webhook signature verification — each of these was a deliberate decision, not an AI suggestion.
What This Means for Building Software
If you’re a founder or business owner
The cost of building AI-powered SaaS products just dropped by 60–70%. What used to require a team of three for three months can now be done by one experienced developer in a month. But the key word is “experienced.” AI is a multiplier — a developer who understands architecture, security, and data modeling will use AI to ship faster. Someone without that experience will use AI to build a mess faster.
The real unlock isn’t “AI writes code.” It’s that AI-native features — transformation, enrichment, automation, optimization — are now accessible to small teams. Building an AI extraction pipeline used to require an NLP team. Now it’s a well-crafted prompt and a structured output flag.
If you’re building AI features into your product
Think in pipelines, not features. A “generate message” button is a feature. A system where extraction feeds enrichment, enrichment feeds generation, generation feeds conversation tracking, and conversation tracking feeds the next generation cycle — that’s a product moat.
The four-pillar framework (transform, enrich, automate, optimize) applies to most AI-powered products:
- Transform: Turn unstructured input into structured data your system can use
- Enrich: Keep that data current with periodic re-processing and history preservation
- Automate: Use the structured data to generate outputs (messages, recommendations, reports)
- Optimize: Use the accumulated data to improve prioritization and workflow efficiency
If you’re a CTO or engineering leader
Your team should be using AI-assisted development for all new feature work. The productivity gains are real and immediate. But you need guardrails:
- Code review is more important, not less — AI-generated code needs human verification
- Architecture decisions still need senior engineers — AI accelerates implementation, not design
- The PR workflow matters — Claude Code generating a full-stack PR is powerful, but someone needs to review the data model, the business logic, and the security implications before merging
The Honest Summary
AI didn’t build this product. I built it, and AI made me dramatically faster — both in writing the code and in building the AI features inside the product itself.
The architecture decisions, the four-pillar AI design, the domain knowledge about LinkedIn outreach ethics, the business logic around enrichment cooldowns and warm-lead scoring — all of that was human judgment. AI was the implementation engine that turned those decisions into working code at a pace that would have been impossible alone.
30 days. One developer. A production SaaS with four AI pipelines, 28 database migrations, a Chrome extension, and real users paying real money.
Two years ago, this would have required a small team and a quarter. The tools have changed. The fundamentals haven’t.
Building Something Similar?
If you’re building a product with AI transformation, enrichment, or automation at its core — or you need to ship faster with AI-assisted development — we use these exact tools and workflows every day. Whether you need an AI pipeline designed from scratch or an experienced developer to accelerate your team, we can help.