May 16, 2026
How to Use Claude Code: A Real-World Tutorial
Claude Code is Anthropic's agentic coding tool that runs in your terminal. This hands-on tutorial covers setup, real workflows, and when it beats Cursor or GitHub Copilot.
Claude Code is the AI coding tool I reach for when a problem is genuinely complex — not autocomplete-complex, but "this requires understanding the whole codebase and making coordinated changes across twelve files" complex.
This is a hands-on tutorial based on actual use. No fluff.
What Claude Code actually is
Claude Code is a command-line tool built by Anthropic that runs in your terminal. Unlike GitHub Copilot or Cursor — which live inside your editor and complete code inline — Claude Code operates at the level of your entire project. It can read files, write files, run commands, and coordinate multi-step changes across your codebase.
The key distinction: it's agentic. Instead of suggesting what to type next, you describe a goal and Claude Code figures out how to accomplish it — reading relevant files, understanding how they connect, making the changes, and verifying the result.
Install it in one line:
npm install -g @anthropic-ai/claude-code
Then navigate to your project and launch it:
cd your-project
claude
That's it. You're now in an interactive session with a model that has access to your entire codebase.
First run: what to expect
When you launch Claude Code for the first time in a project, it automatically reads your directory structure and key files to build context. You'll see it thinking through your project before you've said anything.
The interface is a simple REPL (read-eval-print loop) — you type a request, it responds with a plan, asks for confirmation if it's about to make significant changes, and then executes.
A typical first interaction:
> What's the overall architecture of this project?
Claude Code will give you a genuine summary — not a generic response, but a specific description of your project based on what it read. This is the moment most people realize this is different from other AI coding tools. It actually knows what you're working on.
Real workflow 1: Adding a feature end-to-end
This is where Claude Code earns its reputation. Here's an example from a Next.js project — adding a new API endpoint with database integration, error handling, and tests.
The naive approach with most AI tools:
- Open Copilot, write the route handler manually with suggestions
- Switch files, write the database query manually
- Switch files again, write the types manually
- Write tests manually
- Debug the inevitable type mismatch between steps 2 and 3
With Claude Code:
> Add a POST /api/waitlist endpoint that accepts an email, validates it,
saves it to the waitlist table in Neon, and returns appropriate error
responses. Follow the same patterns as the existing /api/contact endpoint.
Claude Code will:
- Read your existing /api/contact endpoint to understand your patterns
- Read your database schema to understand the waitlist table structure
- Read your validation utilities to reuse existing validation logic
- Write the new route handler
- Write the necessary TypeScript types
- If you have a test file for /api/contact, write a parallel test file for the new endpoint
- Show you everything it changed before applying
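To make the outcome concrete, here is a rough sketch of the kind of handler logic a request like this might produce. This is a hypothetical illustration, not real output: the names (isValidEmail, waitlist, handleWaitlistPost) are made up, and an in-memory array stands in for the Neon table so the snippet is self-contained.

```typescript
// Hypothetical sketch of the handler logic Claude Code might generate.
// All names are illustrative; an array stands in for the Neon database.

// Stand-in for an existing validation utility in the codebase.
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// In-memory stand-in for the `waitlist` table.
const waitlist: string[] = [];

type WaitlistResult =
  | { status: 201; body: { email: string } }
  | { status: 400 | 409; body: { error: string } };

// Core logic of a POST /api/waitlist handler, framework plumbing omitted.
function handleWaitlistPost(email: unknown): WaitlistResult {
  if (typeof email !== "string" || !isValidEmail(email)) {
    return { status: 400, body: { error: "Invalid email address" } };
  }
  if (waitlist.includes(email)) {
    return { status: 409, body: { error: "Email already on waitlist" } };
  }
  waitlist.push(email);
  return { status: 201, body: { email } };
}
```

The point of the prompt's "follow the same patterns" clause is that the real generated code would reuse your project's actual validation helper and error shape rather than inventing new ones like the ones above.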
The "follow the same patterns as" instruction is key. Claude Code is excellent at consistency — it will match your naming conventions, error handling style, and code structure because it's actually read your codebase. This is something that inline autocomplete tools fundamentally cannot do.
Real workflow 2: Debugging production issues
This is the scenario where Claude Code genuinely surprised me. Paste in an error, give context, and watch it trace through your code.
> I'm getting this error in production:
TypeError: Cannot read properties of undefined (reading 'map')
at ResultsList (components/ResultsList.tsx:47)
The component works in development. Happens only when the search
returns zero results from the API.
Claude Code will read ResultsList.tsx, trace the data flow back to wherever the API response is handled, find the missing null check, and fix it. It will also often find related issues — similar patterns elsewhere in the codebase that have the same problem but haven't surfaced yet.
The thing that makes this powerful isn't the fix itself — it's that it reads the actual code. It's not generating a plausible fix based on patterns from its training data. It's reading your component and fixing your specific problem.
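The fix for this class of bug is usually small. Here is a hedged sketch of what it might look like, assuming the API omits the results field when a search comes back empty (the SearchResponse shape and renderResults name are illustrative, not from a real codebase):

```typescript
// Hypothetical reconstruction of the ResultsList bug: the API returns
// `{ results: [...] }` on hits but omits `results` entirely on zero hits.

type SearchResponse = { results?: string[] };

// Buggy version (crashes when `results` is undefined):
//   return res.results.map((r) => `<li>${r}</li>`);

// Fixed version: default to an empty array before mapping.
function renderResults(res: SearchResponse): string[] {
  return (res.results ?? []).map((r) => `<li>${r}</li>`);
}
```

The "related issues" Claude Code tends to surface are other call sites that map over the same optional field without the ?? [] guard.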
Real workflow 3: Large refactors
This is where Claude Code has no real competition. Refactoring a large codebase — changing a data model, migrating from one library to another, updating an API contract across multiple consumers — is exactly the kind of coordinated multi-file work that inline tools handle poorly.
Example:
> I want to rename the 'provider' field to 'daycareProvider' across the
entire codebase. Update the database schema migration, the TypeScript
types, all API routes that reference it, the frontend components,
and the test fixtures.
Claude Code will find every reference, understand the context of each one, and make coordinated changes that preserve the logic. It will ask you to confirm before touching anything, and it will show you a diff. A rename like this might take a careful engineer 2–3 hours; Claude Code does it in about 10 minutes.
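What makes this harder than a find-and-replace is that the type definition and every consumer have to change together. A minimal illustration, with hypothetical names (Listing, listingLabel) standing in for real project code:

```typescript
// Hypothetical slice of the rename: the field changes in the shared type,
// so every consumer must change with it or the build breaks.

// Before: interface Listing { provider: string }
interface Listing {
  daycareProvider: string; // renamed from `provider`
}

// A consumer updated as part of the same coordinated change.
function listingLabel(l: Listing): string {
  return `Provided by ${l.daycareProvider}`;
}
```

In a TypeScript project the compiler catches consumers a text search misses; the value Claude Code adds is updating the places the compiler can't check, like database migrations and test fixtures.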
Claude Code vs. Cursor vs. GitHub Copilot: honest comparison
These tools solve different problems. Here's how I think about them:
GitHub Copilot: Best for autocomplete. You're writing code and it's suggesting the next line. Great for boilerplate, repetitive patterns, and cases where you know what you want to write but don't want to type it. Weak on cross-file context.
Cursor: The best IDE experience for AI-assisted coding. The Tab autocomplete is excellent, the Cmd+K inline edit is useful, and the chat sidebar has good codebase awareness. The sweet spot is "I mostly know what I'm doing but want an AI collaborator at my side." The composer mode for multi-file edits is solid but not as capable as Claude Code on complex tasks.
Claude Code: Best when the task requires genuine reasoning about your entire codebase. Not autocomplete — agency. You give it a goal, it figures out how to accomplish it. The weak spot is that it's terminal-only, so you lose the visual feedback of seeing changes in your editor as they happen. You're also context-window constrained on very large codebases.
For most of my work, I use Cursor for day-to-day coding and Claude Code when I need to accomplish something that requires coordinating changes across many files, understanding a complex bug, or doing a significant refactor.
Practical tips from actual use
Be specific about constraints. "Add authentication" is vague. "Add JWT authentication that follows the same pattern as the existing session-based auth, without breaking the existing tests" is specific. The more context you give about how you want something done, the better the output.
Use /clear between unrelated tasks. Context accumulates. If you spent 20 minutes debugging one component, Claude Code's context is now heavily weighted toward that part of the codebase. Clear it before starting a completely different task.
Let it read before it writes. If you're about to ask for something complex, start with: "Read [file] and [file] and explain the relationship between them before making any changes." This catches misunderstandings before they become bad commits.
Check the diff, always. Claude Code shows you what it's about to change. Read it. Not because it's often wrong, but because reading it keeps you in the loop on your own codebase and catches the occasional misunderstanding before it's applied.
It's not magic on terrible codebases. If your codebase has no patterns, inconsistent naming, and no documentation, Claude Code will struggle for the same reasons a new human engineer would. It excels in codebases that follow conventions.
The honest limitations
Claude Code is not a junior developer you can fully delegate to. It makes mistakes — usually at the boundaries of its context window on very large projects, or when the task requires external knowledge it doesn't have (the specific behavior of a third-party API, your organization's undocumented conventions, etc.).
It also has no memory between sessions. Every claude launch starts fresh. If you've been building context over a long session and something goes wrong, /clear will lose that context permanently.
The token cost is real. Complex, multi-file tasks with long back-and-forth conversations can consume meaningful API credits. For individual developers, this is usually fine. For teams using it at scale, it's worth monitoring.
Getting started today
If you've read this far and want to try it:
- Install it: npm install -g @anthropic-ai/claude-code
- Navigate to a project you know well
- Start with something low-stakes: claude → "Explain the architecture of this project"
- Once you're comfortable with how it reads your code, try a small feature addition
- Work up to a refactor
The learning curve is not steep. The main shift is moving from "AI that helps me write code" to "AI that I direct to accomplish goals." That mental model change is more important than any specific command or workflow.
We use Claude Code daily in our consulting work at AQM Hub. If you're building AI-powered software and want a second opinion on your stack or approach, reach out.
Need help implementing this?
If this is a problem you're dealing with, I'm happy to talk through it. Book a free 30-minute call and we can figure out if I can help.