I Learned Vibe Coding in 48 Hours — My Exact Process

I set myself a challenge: start from absolute zero and build a real, working, deployable product using vibe coding in 48 hours. Not a toy. Not a tutorial project. Something a person could actually use.

I want to walk you through exactly what happened, hour by hour, because the internet is full of "I built X in a weekend" stories that skip the ugly middle — the failed attempts, the wrong turns, the moments where you stare at an error message and wonder if you were foolish to try. The ugly middle is where the actual learning happens. So I am keeping it in.

For context: when I started this challenge, I had zero experience building software. I knew what a website was. I knew that code existed. I did not know what a terminal was, what Node.js meant, or what an API did. I had used no-code tools like Webflow and Zapier, but I had never touched actual code.

The product I decided to build: a simple web app that takes a URL, analyzes the page for SEO issues, and gives a plain-language report. Not because the world needs another SEO tool — because SEO analysis was a problem I personally had, and building something you actually need is the fastest way to stay motivated when the learning curve gets steep.

Hours 0-4: Setting Up and Choosing Tools

Hour 0: The research spiral.

I opened Google and searched "best tools for vibe coding." This was a mistake. I found seventeen articles, each recommending different tools, each with affiliate links, each contradicting the others. I spent an hour reading comparisons and getting more confused.

Here is what I wish someone had told me: it does not matter which tool you choose. They all work. Pick one and start. The tool is not the bottleneck. You are.

Hour 1: Installing things.

I chose Claude because a friend used it. I signed up for Claude Pro. I downloaded VS Code because every tutorial mentioned it. I opened the terminal for the first time in my life.

The terminal looked like a movie hacking scene. A black screen with a blinking cursor. I did not know what to type. I felt immediately stupid, which is a feeling I have learned to push through rather than obey.

I typed my first prompt to Claude: "I have never used a terminal before. I am on a Mac. I want to build a web application. What do I need to install and how do I do it?"

Claude gave me step-by-step instructions. Install Node.js (npm, its package manager, comes bundled with it). Create a folder. Navigate to it with cd. Initialize a project with npm init. Each step was explained like I was five, which was exactly what I needed.

Hour 2: The first code.

I asked Claude: "Write me a simple web page that has a text input where I can paste a URL and a button that says 'Analyze.' When I click the button, it should show a message that says 'Analyzing...' for now."

Claude wrote the HTML, CSS, and JavaScript. I copied it into a file called index.html. I opened the file in Chrome. There was a text box and a button. I clicked the button. It said "Analyzing..."
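The post doesn't reproduce the generated file, but a minimal version of that first page looks something like this (a sketch, not the exact code Claude produced):

```html
<!-- index.html — a minimal sketch of the first version -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>SEO Checker</title>
  </head>
  <body>
    <input id="url" type="text" placeholder="Paste a URL" />
    <button id="analyze">Analyze</button>
    <p id="status"></p>
    <script>
      // For now, clicking the button only shows a placeholder message.
      document.getElementById('analyze').addEventListener('click', () => {
        document.getElementById('status').textContent = 'Analyzing...';
      });
    </script>
  </body>
</html>
```

Open the file in a browser and the whole "app" is there: a text box, a button, and one event listener.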

I stared at it for a full minute. I had just built something. It was the simplest possible thing, but I had built it. The feeling was specific — not excitement exactly, more like disbelief. A quiet "oh."

Hours 3-4: Understanding the basics.

I spent two hours asking Claude to explain what each part of the code did. Not because I needed to understand it to continue, but because I was curious. What is a div? What does addEventListener do? Why is there a script tag?

This was the right instinct but the wrong timing. I did not need to understand the code to build the product. I needed to understand the code to debug the product — and debugging was coming. But for now, understanding was slowing me down.

Lesson learned: build first, understand later. The understanding comes naturally through the debugging process. Front-loading it just delays the building.

Hours 4-12: The First Real Attempt (And Its Spectacular Failure)

Hour 4: Ambition arrives.

I asked Claude to make the "Analyze" button actually do something. "When the user pastes a URL and clicks Analyze, I want the app to fetch that URL, look at the HTML, and check for: missing title tag, missing meta description, missing H1 tag, images without alt text, and slow load time."

Claude wrote the code. It was about 80 lines of JavaScript. I pasted it in, ran it, and — nothing. The button did nothing. The console (which Claude had taught me to open with Cmd+Option+J) showed a red error: "CORS policy: No 'Access-Control-Allow-Origin' header."

I had no idea what CORS was. I asked Claude. It explained that, for security, browsers block a web page from fetching content from other sites unless those sites explicitly allow it via an Access-Control-Allow-Origin header. My approach — fetching a URL directly from the browser — was fundamentally wrong.

This was my first real failure and it taught me the most important lesson of vibe coding: the AI will write code that works technically but fails practically if you do not describe the constraints. I had not told Claude this was a browser-based app that needed to fetch external URLs. The AI assumed a context that did not match my reality.

Hours 5-7: The wrong solution.

Claude suggested using a proxy server. I did not know what a proxy server was. Claude explained. I asked it to set one up. It wrote a Node.js server using Express. I had to learn how to run a Node.js file (node server.js). I had to learn what localhost:3000 meant. I had to learn how to keep two things running at once — the server and the front end.

It worked. Kind of. The proxy server fetched external URLs and passed the HTML back to my front end. I could now analyze a page.

But the analysis was wrong. The code was checking for HTML elements by looking for literal strings like <title> in the raw HTML — which broke on every page that used uppercase tags, self-closing tags, or attributes inside the tag. My analyzer said every page was missing a title tag, because it was looking for <title> and the pages had <title lang="en"> or similar variations.
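The original code isn't in the post, but the failure mode is easy to reproduce. A literal-string check breaks the moment a tag carries attributes or different casing:

```javascript
// Naive check (what my first version effectively did):
const naiveHasTitle = (html) => html.includes('<title>');

naiveHasTitle('<title>My Page</title>');            // true
naiveHasTitle('<title lang="en">My Page</title>');  // false — but the title is there!
naiveHasTitle('<TITLE>My Page</TITLE>');            // false — valid HTML, wrong case

// A more tolerant check (still no substitute for a real parser):
const hasTitle = (html) => /<title\b[^>]*>/i.test(html);

hasTitle('<title lang="en">My Page</title>');       // true
hasTitle('<TITLE>My Page</TITLE>');                 // true
```

Even the regex version misses plenty of real-world cases, which is why the right answer was a proper HTML parser.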

Hours 8-10: Fixing the analysis.

Back to Claude. "The HTML parsing is too simple. It breaks on real-world HTML. Can you use a proper HTML parser?"

Claude rewrote the analysis using a library called cheerio. This required me to learn npm install — adding a package to the project. The new version was dramatically better. It correctly identified missing titles, descriptions, H1s, and alt text.

But it was slow. Analyzing a single page took 8-12 seconds. Some pages timed out entirely.

Hours 10-12: The moment I almost quit.

At hour 10, I tried to add load time analysis. The idea was simple: measure how long it takes to load the page. The implementation was a nightmare. The proxy server was adding its own latency. The measurement was inaccurate. I spent two hours trying to fix it and the numbers were still wrong.

At midnight — hour 12 — I sat on my couch, laptop open, staring at code I did not understand, trying to measure something I could not accurately measure with the architecture I had. I was tired, frustrated, and seriously considering quitting.

I did not quit, but I did go to sleep. And sleeping was the second most important thing I did during the 48 hours, because by morning, I had an idea.

Hours 12-24: The Pivot

Hour 12 (morning): The clarity.

I woke up and realized the problem: I was trying to build a full SEO tool. That was too ambitious for a first project. The tool did not need to measure load time, check backlinks, or analyze keyword density. It needed to do one thing well: check a page for basic SEO issues and explain them in plain language.

I described this to Claude: "Let us simplify. The tool takes a URL, fetches the page, and checks only these things: does it have a title tag (and is it the right length), does it have a meta description (and is it the right length), does it have an H1 tag, are there images without alt text, are there broken internal links. No performance metrics. No keyword analysis. Just these five checks. Present the results as a simple report card — pass, fail, or warning for each item, with a plain-language explanation of why it matters."

This prompt — clear, specific, constrained — produced the best code of the entire project. Claude generated a clean, focused analysis that worked on the first try for simple pages and needed only minor tweaks for complex ones.
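The generated code isn't shown in the post, but the report-card logic that prompt describes might look something like this sketch (function names, field names, length thresholds, and the grading curve are all my own, not the actual output):

```javascript
// Sketch of the five-check report card. Input: facts already extracted
// from the page (e.g. by an HTML parser). All names are illustrative.
function buildReport(page) {
  const checks = [];
  const add = (name, status, why) => checks.push({ name, status, why });

  // 1. Title tag present and a sensible length (~30-60 chars is a common guideline)
  if (!page.title) {
    add('Title tag', 'fail', 'Search engines use the title as the clickable headline in results.');
  } else if (page.title.length < 30 || page.title.length > 60) {
    add('Title tag', 'warn', 'The title exists but may be truncated or underused in results.');
  } else {
    add('Title tag', 'pass', 'The title is present and a sensible length.');
  }

  // 2. Meta description present and a sensible length (~70-160 chars)
  if (!page.metaDescription) {
    add('Meta description', 'fail', 'Without one, search engines invent a snippet for you.');
  } else if (page.metaDescription.length < 70 || page.metaDescription.length > 160) {
    add('Meta description', 'warn', 'Present, but may be truncated or too thin in results.');
  } else {
    add('Meta description', 'pass', 'Present and a sensible length.');
  }

  // 3. Exactly one H1
  if (page.h1Count === 1) add('H1 tag', 'pass', 'One clear main heading.');
  else if (page.h1Count === 0) add('H1 tag', 'fail', 'The page has no main heading.');
  else add('H1 tag', 'warn', 'Multiple H1s can dilute the main topic.');

  // 4. Images without alt text
  if (page.imagesMissingAlt === 0) add('Image alt text', 'pass', 'All images are described.');
  else add('Image alt text', 'fail', `${page.imagesMissingAlt} image(s) have no alt text.`);

  // 5. Broken internal links
  if (page.brokenInternalLinks === 0) add('Internal links', 'pass', 'No broken internal links found.');
  else add('Internal links', 'fail', `${page.brokenInternalLinks} internal link(s) appear broken.`);

  // Arbitrary but simple curve: each fail costs two points, each warning one.
  const fails = checks.filter((c) => c.status === 'fail').length;
  const warns = checks.filter((c) => c.status === 'warn').length;
  const score = fails * 2 + warns;
  const grade = score === 0 ? 'A' : score === 1 ? 'B' : score <= 3 ? 'C' : score <= 5 ? 'D' : 'F';
  return { checks, grade };
}
```

The key design property is that every branch produces a status plus a plain-language "why it matters" sentence, so the report reads like an explanation rather than a lint log.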

Hours 13-16: Making it real.

I spent these hours iterating on the output. The analysis worked but the presentation was ugly — raw JSON dumped on the page. I described what I wanted the report to look like: "A clean card layout. Green checkmark for pass, red X for fail, yellow warning triangle for warnings. Below each result, a one-sentence explanation in plain language. At the bottom, an overall score from A to F."

Claude built a beautiful report page. I adjusted colors, spacing, and fonts through conversation — "make the score larger," "add more space between the cards," "the fail color is too aggressive, make it a softer red." Each adjustment took about thirty seconds.

Hours 17-20: The feature that changed everything.

I had an idea: what if the tool did not just show the problems but suggested fixes? Not vague suggestions like "add a meta description." Actual, copy-paste-ready fixes.

"For each failing check, I want the tool to generate a suggested fix. If the meta description is missing, write a meta description based on the page content. If the title is too long, suggest a shorter version. If an image lacks alt text, suggest alt text based on the image context."

This was where things got weird and interesting: I was using an AI to write code that itself called an AI. Claude wrote code that sent the page content to an AI model and generated context-aware fix suggestions. The meta descriptions it produced were genuinely good — not generic boilerplate but actual descriptions tailored to the page content.

Hours 20-24: Debugging the edge cases.

This was the grind. I tested the tool on fifty different websites. It broke on: pages behind login walls, pages with JavaScript-rendered content, pages with non-English characters, pages with malformed HTML, pages that returned 403 errors.


For each failure, the process was the same: paste the error into Claude, describe what happened, ask for a fix, test again. Some fixes were simple (add a try-catch block). Some required rethinking the approach (JavaScript-rendered pages could not be analyzed with simple HTTP fetching — I had to acknowledge this limitation and show a clear error message).

By hour 24, the tool worked reliably on about 85% of public web pages. The remaining 15% got a graceful error message instead of a crash.
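A sketch of what "graceful error message instead of a crash" looked like in practice: one function that maps raw failures onto the plain-language messages the report shows. The categories and wording here are mine, not the actual code:

```javascript
// Sketch: translate raw fetch failures into plain-language messages
// instead of crashing. Error shapes and categories are illustrative.
function friendlyError(err) {
  if (err.statusCode === 403) {
    return 'This site blocks automated tools, so it cannot be analyzed.';
  }
  if (err.statusCode === 401 || err.loginRequired) {
    return 'This page is behind a login, so it cannot be analyzed.';
  }
  if (err.code === 'ENOTFOUND') {
    return 'That address could not be found. Check the URL for typos.';
  }
  if (err.code === 'ETIMEDOUT' || err.timedOut) {
    return 'The page took too long to respond. Try again in a minute.';
  }
  if (err.jsRendered) {
    return 'This page builds its content with JavaScript, which this tool cannot see yet.';
  }
  return 'Something went wrong analyzing this page. Try a different URL.';
}
```

The fallback branch at the bottom is what turns the unhandled 15% into a message instead of a stack trace.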

Hours 24-48: Shipping Something Real

Hours 24-30: Making it deployable.

I asked Claude: "I want to put this on the internet so anyone can use it. What is the easiest way?"

The answer was Vercel. Claude walked me through: create a GitHub account, push the project to a repository, connect Vercel, deploy. The words in that sentence were mostly foreign to me 24 hours earlier. Now I was executing them from muscle memory.

The first deployment failed because my server-side code was not structured the way Vercel expected. Claude restructured it into Vercel serverless functions. Second deployment worked.
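The restructuring amounted to this: instead of one long-running Express server, Vercel's Node runtime expects each endpoint to be a file under /api exporting a request handler. A sketch of the shape (the analysis logic is elided, and the file name is my own):

```javascript
// api/analyze.js — the shape Vercel expects for a Node serverless
// function: a file under /api exporting a (req, res) handler.
async function handler(req, res) {
  const url = req.query && req.query.url;
  if (!url) {
    // status() and json() are helpers the platform adds to Node's res.
    return res.status(400).json({ error: 'Missing ?url= parameter' });
  }
  // ...fetch the page and run the five SEO checks here...
  return res.status(200).json({ url, report: 'elided in this sketch' });
}

module.exports = handler;
```

Each request spins up the function, runs it, and tears it down — which is why the long-running-server structure had to go.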

I had a live URL. A real thing on the real internet that a real person could use. It had been 26 hours since I opened a terminal for the first time.

Hours 30-38: Polish.

I am a designer by instinct if not by training, and the default look was not good enough. I spent eight hours on polish — adding a landing page, improving the loading states, adding animation to the report cards, making it responsive for mobile, writing clear copy for the instructions.

This was the most fun part. The creative decisions — "this should feel clean and confident, not sterile and corporate" — were mine. The implementation was Claude's. I was vibing. This was the actual vibe in vibe coding — the feeling of being in creative flow while someone else handles the tedious implementation.

Hours 38-44: Testing with real people.

I sent the URL to five friends and asked them to try it. Three of them found bugs I had missed. One pointed out that the report was confusing for someone who did not know what "meta description" meant. One said the mobile layout was broken on her Android phone.

Each piece of feedback became a prompt to Claude. Fix the bug. Add tooltips explaining each term. Fix the mobile layout for Android Chrome specifically. Test, fix, test, fix.

Hours 44-48: The final push.

I added one last feature: the ability to export the report as a PDF. This took four hours and was harder than expected because generating PDFs from HTML is apparently one of the most annoying things in web development. Claude and I went through three different PDF libraries before finding one that produced clean output.

At hour 47, I deployed the final version. At hour 48, I closed my laptop.

The tool was live. It was real. It worked. And I had built it — a person who, 48 hours earlier, did not know what a terminal was.

What I Actually Learned

The 48 hours taught me things that no tutorial could:

The terminal is not scary. It looks scary. It is actually just a text-based way to talk to your computer. Once you learn five commands (cd, ls, npm install, node, git), you can do almost everything.

Errors are the curriculum. Every error message taught me something. CORS taught me about browser security. npm errors taught me about dependencies. Deployment errors taught me about server architecture. You do not need to study these concepts in advance. You learn them when they break your thing.

Specificity is the skill. The single most important skill in vibe coding is describing what you want precisely. Vague prompts produce vague code. Specific prompts produce specific solutions. "Make it better" is useless. "Make the score text 48px, bold, centered, and use green for A-B grades, yellow for C, red for D-F" is useful.

You do not need to understand everything. My PDF export works. I do not know how. I know what it does, I know when it breaks, and I know how to describe the problem to Claude if it breaks. That is enough.

Small scope wins. My best decision was pivoting from "full SEO tool" to "five basic checks with fix suggestions." The constrained version was better, more useful, and actually shippable. Scope is the enemy of shipping.

The Prompts That Worked Best

For anyone starting their own 48-hour vibe coding challenge, here are the prompts that produced the best results:

The setup prompt: "I have never built a web application before. I want to build [specific thing]. Walk me through the setup step by step, assuming I know nothing."

The feature prompt: "Add this specific feature: [describe exactly what it should do, what input it takes, what output it produces, and what it should look like]."

The debug prompt: "I am getting this error: [paste the full error]. Here is what I was trying to do: [describe the action]. Here is the relevant code: [paste the code]. What is wrong and how do I fix it?"

The pivot prompt: "This approach is too complex. I want to simplify. The tool should only do [specific list]. Remove everything else and rebuild the core to be clean and focused."

The polish prompt: "The feature works but looks bad. I want it to look like [describe the visual style, reference specific things — colors, spacing, fonts, animations]."

Your 48-Hour Challenge

Here is what I want you to do.

Pick a weekend. Pick a problem — something small, something you personally have, something that would genuinely make your life better if a tool solved it. Not a billion-dollar startup idea. A personal itch.

Sign up for Claude or Cursor. Open a terminal. Type your first prompt.

You will feel stupid for the first four hours. That is normal. You will want to quit around hour ten. That is also normal. You will pivot your approach at least once. That is not failure — that is learning.

By hour 48, you will have something real. Maybe not polished. Maybe not perfect. But real — a thing on the internet that does a thing you told it to do.

And you will know, in your body and not just in your mind, that you can build things. That the wall between "people who build software" and "people who use software" has a door in it, and the door is unlocked.

Walk through it.


Want the exact prompt library I use for vibe coding? It is inside the tools section — free templates for every stage of the build process.

More from the journal

  • Vibe Coding: The Complete Guide for Solopreneurs (2026)
  • What Is Vibe Coding? A Non-Developer's Guide to Building With AI
  • Vibe Coding Changed My Solopreneur Business — Here's How