
7.5 Framework-Specific Instrumentation

Debuggers are universal and powerful, but they require you to know where to set breakpoints. When exploring an unfamiliar codebase, you often don't know where to start. Framework-specific tools solve this by automatically instrumenting common patterns—HTTP requests, database queries, template rendering—giving you a high-level execution map before you dive into debugger details.

These tools answer the critical first question: "What happens when I perform this action?" They show you the execution landscape before you navigate the terrain step-by-step with a debugger.

7.5.1 Django Debug Toolbar: The X-Ray Vision Tool

The Django Debug Toolbar is perhaps the most valuable tool for understanding Django codebases. It provides a real-time, visual overlay showing exactly what happened during a request: which SQL queries ran, which templates rendered, which middleware executed, and how long everything took. It's like having X-ray vision into Django's execution model.

Installation and configuration (the 5-minute setup)

Let's get this running immediately. The entire setup takes about five minutes and requires zero code changes to your existing views or models.

First, install it:

pip install django-debug-toolbar

Add to your settings.py:

# In INSTALLED_APPS, add:

INSTALLED_APPS = [

    # ... your existing apps ...

    'django.contrib.staticfiles',  # Required, usually already present

    'debug_toolbar',

]



# In MIDDLEWARE, add at the top (order matters):

MIDDLEWARE = [

    'debug_toolbar.middleware.DebugToolbarMiddleware',

    # ... your existing middleware ...

]



# Add this configuration:

INTERNAL_IPS = [

    '127.0.0.1',

]



# Optional but recommended - configure which panels to show:

DEBUG_TOOLBAR_PANELS = [

    'debug_toolbar.panels.history.HistoryPanel',

    'debug_toolbar.panels.versions.VersionsPanel',

    'debug_toolbar.panels.timer.TimerPanel',

    'debug_toolbar.panels.settings.SettingsPanel',

    'debug_toolbar.panels.headers.HeadersPanel',

    'debug_toolbar.panels.request.RequestPanel',

    'debug_toolbar.panels.sql.SQLPanel',

    'debug_toolbar.panels.staticfiles.StaticFilesPanel',

    'debug_toolbar.panels.templates.TemplatesPanel',

    'debug_toolbar.panels.cache.CachePanel',

    'debug_toolbar.panels.signals.SignalsPanel',

    'debug_toolbar.panels.logging.LoggingPanel',

    'debug_toolbar.panels.redirects.RedirectsPanel',

    'debug_toolbar.panels.profiling.ProfilingPanel',

]

Add to your main urls.py:

from django.urls import path, include

from django.conf import settings



urlpatterns = [

    # ... your existing URLs ...

]



if settings.DEBUG:

    import debug_toolbar

    urlpatterns = [

        path('__debug__/', include(debug_toolbar.urls)),

    ] + urlpatterns

Run your development server:

python manage.py runserver

Navigate to any page in your application. You'll see a sidebar on the right side of your browser window with collapsible panels. This is the Debug Toolbar. You've just gained comprehensive instrumentation of your Django application without modifying a single view.

Important note about INTERNAL_IPS: The toolbar only appears when your request comes from an IP in INTERNAL_IPS. If you're running Django in Docker, 127.0.0.1 won't work because Docker sees a different internal IP. Add this to make it work in Docker:

import socket

hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())

INTERNAL_IPS = [ip[: ip.rfind(".")] + ".1" for ip in ips] + ["127.0.0.1"]
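If IP-based detection still fights you (VPNs, unusual container networking), the toolbar also lets you replace the check entirely with SHOW_TOOLBAR_CALLBACK. A minimal sketch; the dotted path assumes the function lives in a module importable as myproject.settings, which is an assumption for illustration:

# settings.py
def show_toolbar(request):
    from django.conf import settings
    return settings.DEBUG  # show the toolbar whenever DEBUG is on, regardless of client IP

DEBUG_TOOLBAR_CONFIG = {
    "SHOW_TOOLBAR_CALLBACK": "myproject.settings.show_toolbar",  # dotted path is illustrative
}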

Reading the SQL panel: Query optimization insights

Click the "SQL" panel in the toolbar. You'll see something like:

23 queries in 142ms



SELECT "auth_user"."id", "auth_user"."username", "auth_user"."email"

FROM "auth_user"

WHERE "auth_user"."id" = 42

(1 query, 3ms)



SELECT "blog_post"."id", "blog_post"."title", "blog_post"."author_id"

FROM "blog_post"

WHERE "blog_post"."author_id" = 42

(1 query, 2ms)



... 21 more queries ...

This immediately reveals critical information:

  1. Total query count: 23 queries is high for a single page. This might indicate N+1 problems.

  2. Total time: 142ms in database time—good to know for performance analysis.

  3. Individual queries: Each query shows the SQL, execution time, and where in your code it originated.

Click "Explain" next to any query to see the database's query execution plan—invaluable for optimization. Click "Select" to see the full query with formatting. But the most powerful feature is "Stack trace"—click it to see exactly which line of code triggered this query.

Here's a real example. You see these queries:

-- Query 1

SELECT * FROM "blog_post" WHERE "author_id" = 1

-- Query 2

SELECT * FROM "auth_user" WHERE "id" = 1

-- Query 3

SELECT * FROM "blog_post" WHERE "author_id" = 2

-- Query 4

SELECT * FROM "auth_user" WHERE "id" = 2

-- Query 5

SELECT * FROM "blog_post" WHERE "author_id" = 3

-- Query 6

SELECT * FROM "auth_user" WHERE "id" = 3

This is the classic N+1 problem: fetching posts, then for each post fetching its author. Click the stack trace for Query 2 and you see:

File: /app/blog/views.py, line 45

    post.author.username

Your template (or view code) is accessing post.author, which triggers a query for each post. The fix is to use select_related:

# Before (causes N+1)

posts = Post.objects.all()



# After (single query with JOIN)

posts = Post.objects.select_related('author').all()

The Debug Toolbar revealed this N+1 problem in seconds. Without it, you might notice the page is slow, spend time setting up query logging, parsing logs to find duplicates, then tracing back to the source. The toolbar gives you the diagnosis and the exact code location immediately.
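Once you've applied a fix like this, it's worth pinning the query count in a test so the N+1 can't quietly return. A minimal sketch using Django's assertNumQueries; the URL name 'post_list' and the expected count are assumptions you'd adjust to match what the SQL panel shows:

# blog/tests/test_query_counts.py (hypothetical test module)
from django.test import TestCase
from django.urls import reverse


class PostListQueryCountTest(TestCase):
    def test_post_list_query_count_stays_flat(self):
        # With select_related('author'), the page should not issue one
        # extra author query per post, so the count is constant as data grows.
        with self.assertNumQueries(3):
            self.client.get(reverse('post_list'))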

Highlighting duplicate and similar queries: The SQL panel flags similar queries (the same SQL shape run with different parameters) and duplicate queries (the exact same query run more than once). When you see those labels stacking up, you're probably looking at an N+1 problem or another inefficient data-access pattern.
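Note that select_related only covers forward foreign keys and one-to-one relations. When the repeated queries come from a reverse relation or a many-to-many—comments per post, tags per post—the analogous fix is prefetch_related. A sketch, with comments and tags assumed as typical blog relations:

# One query for posts (JOINed to author), plus one batched IN (...) query
# per prefetched relation, instead of one query per post.
posts = (
    Post.objects
    .select_related('author')               # JOIN for the forward FK
    .prefetch_related('comments', 'tags')   # batched lookups for reverse FK / M2M
)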

Tracing view execution and middleware

Click the "Timer" panel. It shows a detailed breakdown of execution time:

Total time: 284ms

  Request: 284ms

    Middleware (process_request): 12ms

      SecurityMiddleware: 1ms

      SessionMiddleware: 5ms

      AuthenticationMiddleware: 4ms

      CustomLoggingMiddleware: 2ms

    View: 156ms

      blog.views.post_list: 156ms

    Middleware (process_response): 8ms

      MessageMiddleware: 3ms

      CustomLoggingMiddleware: 5ms

    Template: 108ms

      blog/post_list.html: 108ms

This is your execution flow map. You can see:

  1. Middleware execution order: Request flows through SecurityMiddleware first, then SessionMiddleware, etc. This confirms the MIDDLEWARE list in settings.py is actually being followed.

  2. Time spent in each phase: The view took 156ms, template rendering took 108ms. This tells you where to focus optimization efforts.

  3. Which view actually ran: Sometimes URL routing is complex and you're not sure which view handled the request. The Timer panel tells you explicitly: blog.views.post_list.

Now click the "Request" panel to see the request object details:

View: blog.views.post_list

URL: /blog/posts/

Method: GET

User: john@example.com (authenticated)

GET parameters: page=2, filter=published

POST parameters: (none)

Session: {'_auth_user_id': '42', 'last_visit': '2025-10-11T10:30:00'}

Cookies: sessionid, csrftoken

This is crucial for tracing: you see the complete request context without adding any logging or debug prints. You know which user made the request, what parameters they sent, what's in their session—all the context needed to reproduce the execution path.

Template rendering visualization

The "Templates" panel shows which templates were rendered and their context variables:

Templates rendered: 3



base.html

  Context:

    user: <User: john@example.com>

    request: <WSGIRequest>



blog/post_list.html (extends base.html)

  Context:

    posts: <QuerySet [<Post: First>, <Post: Second>, ...]>

    page_obj: <Page 1 of 5>

    is_paginated: True



includes/sidebar.html

  Context:

    recent_posts: <QuerySet [<Post: Recent 1>, ...]>

This answers several tracing questions immediately:

  1. Template inheritance chain: You see that post_list.html extends base.html, and that sidebar.html is included.

  2. Available context: Each template shows its context variables. If a variable is missing or has an unexpected value, you see it here before even setting a breakpoint.

  3. Template location: Click a template name to open it in your editor (if configured).

This is especially valuable in large codebases with complex template inheritance. You might be looking at the rendered page wondering "where does this navbar come from?" The Templates panel shows you: it's in base.html, which is extended by post_list.html, which also includes sidebar.html. You've mapped the template architecture in seconds.

Cache hit/miss analysis

The "Cache" panel shows cache operations during the request:

Cache calls: 12

  Hits: 8 (67%)

  Misses: 4 (33%)



Details:

  GET 'blog:post_list:page_2' → HIT (retrieved in 1ms)

  GET 'blog:post_detail:42' → MISS (key not found)

  SET 'blog:post_detail:42' → OK (stored in 2ms)

  GET 'sidebar:recent_posts' → HIT (retrieved in 1ms)

This is invaluable for understanding caching behavior. You see:

  1. Which cache keys are being accessed: 'blog:post_list:page_2' tells you about your cache key naming scheme.

  2. Hit rate: 67% means your caching is somewhat effective, but there's room for improvement.

  3. What's being cached and when: The SET operation shows when cached data is created.

If you see a cache miss for something that should be cached, you've found a bug. If you see cache hits for data that changes frequently, you might have a stale cache problem.
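For reference, entries like those above come from ordinary cache-aside code. A sketch of the pattern with Django's low-level cache API; the key scheme mirrors the panel output, and serialize_post is a hypothetical helper:

from django.core.cache import cache

from blog.models import Post


def get_post_detail(post_id):
    key = f'blog:post_detail:{post_id}'   # the key you'll see in the Cache panel
    data = cache.get(key)                 # logged as GET ... HIT or MISS
    if data is None:
        data = serialize_post(Post.objects.get(pk=post_id))  # serialize_post is hypothetical
        cache.set(key, data, timeout=300)  # logged as SET ...
    return data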

Custom panels: When and how to extend

The Debug Toolbar is extensible. You can create custom panels to instrument application-specific behavior. This is justified when:

  1. Your application has a unique subsystem that's hard to trace (e.g., a custom job queue, an internal API client, a rules engine)

  2. You need team-wide visibility into specific behaviors

  3. Standard panels don't capture important metrics

Here's a minimal custom panel that tracks custom event emissions:

# myapp/debug_toolbar_panels.py

from debug_toolbar.panels import Panel

from django.utils import timezone



class EventPanel(Panel):

    title = "Events"

    template = "debug_toolbar/events_panel.html"



    def __init__(self, *args, **kwargs):

        super().__init__(*args, **kwargs)

        self.events = []



    def record_event(self, event_type, data):

        self.events.append({

            'type': event_type,

            'data': data,

            'timestamp': timezone.now()

        })



    def generate_stats(self, request, response):

        self.record_stats({

            'events': self.events,

            'total': len(self.events)

        })

Create the template:

<!-- templates/debug_toolbar/events_panel.html -->

<table>
  <thead>
    <tr>
      <th>Time</th>

      <th>Event Type</th>

      <th>Data</th>
    </tr>
  </thead>

  <tbody>
    {% for event in events %}

    <tr>
      <td>{{ event.timestamp }}</td>

      <td>{{ event.type }}</td>

      <td>{{ event.data }}</td>
    </tr>

    {% endfor %}
  </tbody>
</table>

Register the panel in settings.py:

DEBUG_TOOLBAR_PANELS = [

    # ... standard panels ...

    'myapp.debug_toolbar_panels.EventPanel',

]

Now integrate it into your application code. One caveat: django-debug-toolbar doesn't expose a public helper for reaching the active toolbar from arbitrary code, so the sketch below assumes a small get_current_request() helper of your own (typically a thread-local set by custom middleware), and the request.toolbar attribute is illustrative—check how your toolbar version stores its panels:

def emit_event(event_type, data):

    # Your normal event emission logic

    publish_to_event_bus(event_type, data)

    # Debug Toolbar integration (illustrative; see the caveat above)

    request = get_current_request()  # your own thread-local helper, not a debug_toolbar API

    toolbar = getattr(request, 'toolbar', None) if request else None

    if toolbar is not None:

        panel = toolbar.get_panel_by_id('EventPanel')

        if panel:

            panel.record_event(event_type, data)

Now every event emission during a request appears in your custom panel. This is powerful for tracing application-specific workflows that standard instrumentation doesn't capture.

When NOT to create custom panels: If you're instrumenting standard Django behavior (queries, template rendering, caching), use the built-in panels or existing third-party panels. Creating custom panels is justified for domain-specific instrumentation, not for reimplementing what already exists.

7.5.2 Flask Debugging Tools

Flask takes a more minimal approach than Django, but it has equally powerful debugging tools. The philosophy is similar: give developers immediate visibility into request execution without requiring code changes.

Flask-DebugToolbar configuration

Flask-DebugToolbar is inspired by Django Debug Toolbar but adapted for Flask's lightweight architecture. Install it:

pip install flask-debugtoolbar

Add to your Flask app:

from flask import Flask

from flask_debugtoolbar import DebugToolbarExtension



app = Flask(__name__)



# Required: Set a secret key

app.config['SECRET_KEY'] = 'dev-secret-key-change-in-production'



# Enable the toolbar

app.config['DEBUG_TB_ENABLED'] = True



# Optionally configure which panels to show

app.config['DEBUG_TB_PANELS'] = [

    'flask_debugtoolbar.panels.versions.VersionDebugPanel',

    'flask_debugtoolbar.panels.timer.TimerDebugPanel',

    'flask_debugtoolbar.panels.headers.HeaderDebugPanel',

    'flask_debugtoolbar.panels.request_vars.RequestVarsDebugPanel',

    'flask_debugtoolbar.panels.config_vars.ConfigVarsDebugPanel',

    'flask_debugtoolbar.panels.template.TemplateDebugPanel',

    'flask_debugtoolbar.panels.sqlalchemy.SQLAlchemyDebugPanel',

    'flask_debugtoolbar.panels.logger.LoggingPanel',

    'flask_debugtoolbar.panels.route_list.RouteListDebugPanel',

    'flask_debugtoolbar.panels.profiler.ProfilerDebugPanel',

]



# Initialize the toolbar

toolbar = DebugToolbarExtension(app)



@app.route('/users')

def list_users():

    users = User.query.all()

    return render_template('users.html', users=users)



if __name__ == '__main__':

    app.run(debug=True)

Run your Flask app and visit any route. The Debug Toolbar appears as a sidebar, just like Django's version.
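One gotcha worth knowing up front: by default the toolbar intercepts redirects so you can inspect the request that produced them, which makes a form POST appear to stall on an interstitial page. If you'd rather have redirects followed normally, turn that off:

app.config['DEBUG_TB_INTERCEPT_REDIRECTS'] = False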

Key differences from Django Debug Toolbar:

  1. SQLAlchemy panel: If you're using SQLAlchemy, this panel shows all queries with the same detail as Django's SQL panel—execution time, stack traces, and query parameters.

  2. Route list panel: Shows all registered routes in your application. This is incredibly useful for understanding Flask's URL routing, especially in large applications with blueprints.

  3. Profiler panel: Built-in code profiling. Enable it in config:

app.config['DEBUG_TB_PROFILER_ENABLED'] = True

Now each request shows a profile with function call counts and execution times. This is more intrusive than other panels (it adds overhead) but invaluable for performance tracing.

Werkzeug debugger: Interactive stack traces

Flask runs on Werkzeug, which includes an incredibly powerful feature: the interactive debugger. When an exception occurs in development mode, instead of just showing a stack trace, Werkzeug gives you an interactive Python console at each frame in the stack.

Enable it (it's on by default in debug mode):

app.run(debug=True)

Now cause an exception in your code:

@app.route('/users/<int:user_id>')

def show_user(user_id):

    user = User.query.get(user_id)

    return render_template('user.html', username=user.name)  # Will fail if user is None

Visit /users/9999 (a user ID that doesn't exist). Instead of a generic 500 error, you see the Werkzeug debugger:

AttributeError: 'NoneType' object has no attribute 'name'



Traceback (most recent call last):

  File "/app/views.py", line 45, in show_user

    return render_template('user.html', username=user.name)

           ^^^^^^^^^

Next to each frame in the traceback, there's a small console icon. Click it and a Python console opens at that frame. You can inspect variables, evaluate expressions, even modify state:

>>> user

None

>>> user_id

9999

>>> User.query.filter_by(id=user_id).first()

None

>>> User.query.all()

[<User 1: Alice>, <User 2: Bob>]

This is debugging nirvana. You're exploring the failure state interactively without restarting the application or setting breakpoints. You can test hypotheses immediately: "What if I query differently?" "What's in the session?" "What does this helper function return?"

Critical security warning: NEVER enable this in production. The interactive console allows arbitrary code execution. An attacker who can trigger an exception can execute Python code on your server. Always set debug=False in production and use proper error logging instead.
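What you do want in production is debug=False plus server-side logging of the full traceback. A minimal sketch of that alternative (the JSON error body is just an example shape):

import logging

from flask import Flask, jsonify

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)


@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Log the full traceback server-side instead of exposing an interactive console.
    app.logger.exception("Unhandled exception")
    return jsonify({"error": "internal server error"}), 500


if __name__ == '__main__':
    app.run(debug=False)  # never True in production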

Flask-Profiler for request timing

Flask-Profiler gives you a different view: it stores data about requests over time, letting you identify slow endpoints and performance trends.

Install and configure:

pip install flask_profiler

Then configure it in your app:

from flask import Flask

import flask_profiler



app = Flask(__name__)



app.config["flask_profiler"] = {

    "enabled": True,

    "storage": {

        "engine": "sqlite",

        "FILE": "flask_profiler.sql"

    },

    "basicAuth": {

        "enabled": False  # Set True and add credentials for protection

    },

    "ignore": [

        "^/static/.*"  # Don't profile static files

    ]

}



@app.route('/users')

def list_users():

    users = User.query.all()

    return render_template('users.html', users=users)



flask_profiler.init_app(app)



if __name__ == '__main__':

    app.run(debug=True)

Now Flask-Profiler instruments every request automatically. Visit http://localhost:5000/flask-profiler/ to see the dashboard. It shows:

  1. All endpoints: Sorted by average response time or total time spent

  2. Request count: How many times each endpoint was called

  3. Percentiles: 50th, 95th, 99th percentile response times

  4. Individual requests: Drill down to see specific slow requests

This is different from Debug Toolbar because it tracks requests over time, not just the current request. You can identify patterns: "The /api/users endpoint is slow only when the role parameter is set" or "Response times degrade after the 100th request (memory leak?)."

Combining tools: Use Debug Toolbar for understanding a single request's execution flow, use Werkzeug debugger for exploring failures interactively, and use Flask-Profiler for identifying performance trends across many requests. Each tool answers different tracing questions.

7.5.3 FastAPI Debugging Patterns

FastAPI is built on ASGI (the Asynchronous Server Gateway Interface) and uses async Python heavily. This creates unique debugging challenges: traditional debuggers sometimes struggle with async code, and execution flow is less linear due to event loop concurrency.

Using Uvicorn's --reload for development

Uvicorn is the recommended ASGI server for FastAPI. Its --reload flag watches for file changes and restarts the server automatically—essential for rapid development. But it also affects debugging:

# Development mode with auto-reload

uvicorn main:app --reload --log-level debug



# Production mode (no reload, optimized)

uvicorn main:app --host 0.0.0.0 --port 8000

The --reload flag uses a file watcher that spawns a subprocess for the actual server. This breaks some debuggers. When debugging FastAPI, run without --reload:

uvicorn main:app --log-level debug

Then restart manually when you change code. Alternatively, use VS Code's debugger, which can handle the reloader:

// .vscode/launch.json

{
  "version": "0.2.0",

  "configurations": [
    {
      "name": "FastAPI",

      "type": "debugpy",

      "request": "launch",

      "module": "uvicorn",

      "args": ["main:app", "--reload"],

      "jinja": true,

      "justMyCode": false
    }
  ]
}

With this configuration, you can set breakpoints in FastAPI route handlers and they'll work even with auto-reload enabled.
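Another option that sidesteps the reloader entirely is to start Uvicorn programmatically and run that file under whatever debugger you prefer. A sketch, assuming the main:app layout used above:

# debug_main.py -- run the server in-process so any Python debugger can drive it
import uvicorn

from main import app  # the "main:app" from the examples above

if __name__ == "__main__":
    # Reload is deliberately off: no watcher subprocess to confuse the debugger.
    uvicorn.run(app, host="127.0.0.1", port=8000, log_level="debug")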

Debugging async functions: The async/await call stack

Async functions complicate call stacks. When you await something, execution pauses and the event loop runs other tasks. When the await completes, execution resumes—but the call stack might look different than you expect.

Here's a FastAPI route:

import asyncio

from fastapi import FastAPI

app = FastAPI()



@app.get("/users/{user_id}")

async def get_user(user_id: int):

    user_data = await fetch_user_from_db(user_id)

    profile_data = await fetch_user_profile(user_id)

    return {"user": user_data, "profile": profile_data}



async def fetch_user_from_db(user_id: int):

    # Simulated async database call

    await asyncio.sleep(0.1)

    return {"id": user_id, "name": "Alice"}



async def fetch_user_profile(user_id: int):

    await asyncio.sleep(0.1)

    return {"bio": "Software developer"}

Set a breakpoint in fetch_user_from_db on the return statement. When it hits, examine the call stack:

→ fetch_user_from_db (main.py:12)

  get_user (main.py:6)

  [... FastAPI internal frames ...]

  uvicorn.protocols.http.HttpProtocolH11._run_asgi

  asyncio.Task.__step

Now step out (or step over the return) to get back to get_user; you're at the line after the first await. Step into (F11) the call on the next line to land inside fetch_user_profile. Look at the call stack again:

→ fetch_user_profile (main.py:16)

  get_user (main.py:7)

  [... different FastAPI internal frames ...]

  uvicorn.protocols.http.HttpProtocolH11._run_asgi

  asyncio.Task.__step

Notice the internal frames might be different. That's because each await yields control to the event loop, which might have scheduled other tasks in between. The call stack shows you the state at this moment, not the entire async execution history.

Key insight for tracing async code: The call stack in async code shows the current execution context, not the complete causal chain. To understand async flow, you need to think in terms of task dependencies: "This task waits for that task, which waits for another task." Debuggers show you one task's stack at a time.

Practical async debugging strategy:

  1. Set breakpoints at await statements to see when async operations start

  2. Set breakpoints at the first line of async functions to see when they actually execute (might be later than you expect)

  3. Use the debugger's async stack view if available (VS Code shows "Running" and "Awaiting" tasks separately)

  4. Log task IDs or request IDs to correlate async operations across the event loop (see the sketch below)
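Point 4 is straightforward to wire up with contextvars, which travel with a task across awaits. A sketch assuming FastAPI's HTTP middleware hook; the ID format and logger name are arbitrary:

import logging
import uuid
from contextvars import ContextVar

from fastapi import FastAPI, Request

request_id_var: ContextVar[str] = ContextVar("request_id", default="-")
logger = logging.getLogger("app")

app = FastAPI()


@app.middleware("http")
async def add_request_id(request: Request, call_next):
    # Stamp each incoming request; the value is visible anywhere in this task's call chain.
    request_id_var.set(uuid.uuid4().hex[:8])
    return await call_next(request)


@app.get("/users/{user_id}")
async def get_user(user_id: int):
    # Interleaved async log lines can now be grouped by request ID.
    logger.info("fetching user %s [req=%s]", user_id, request_id_var.get())
    return {"id": user_id}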

Profiling async performance with py-spy

Traditional profilers struggle with async code because they sample the call stack, and async code spends most of its time waiting rather than running on the CPU. py-spy copes well here: it samples the whole process at a high rate from outside the interpreter, and its --idle flag includes threads that are waiting, so blocking calls hiding inside nominally async code show up clearly.

Install py-spy:

pip install py-spy

Run your FastAPI app under py-spy:

py-spy record -o profile.svg --rate 100 --subprocesses -- python -m uvicorn main:app

This samples the process 100 times per second and follows any subprocesses. Generate load on your application, then stop py-spy (Ctrl+C). It writes a flame graph (profile.svg) showing where time is spent.

For async FastAPI applications, the flame graph typically splits between event-loop machinery (waiting) and your actual handler and database code. Here's a critical discovery py-spy often reveals: you have an await db.execute() call, but the flame graph shows significant time in psycopg2 (a synchronous Postgres driver). You thought you were using async I/O, but you're actually using a sync driver wrapped in async syntax. The fix is to switch to asyncpg or the databases package for true async I/O.
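You can reproduce what the flame graph is telling you with a toy script: a blocking call inside an async def (time.sleep standing in for a sync database driver) serializes concurrent requests, while a real await lets them overlap:

import asyncio
import time


async def hidden_sync_call():
    time.sleep(0.1)           # blocks the whole event loop, like a sync DB driver


async def true_async_call():
    await asyncio.sleep(0.1)  # yields to the event loop while waiting


async def main():
    start = time.perf_counter()
    await asyncio.gather(*(hidden_sync_call() for _ in range(10)))
    print(f"sync-in-async: {time.perf_counter() - start:.2f}s")  # ~1.0s, serialized

    start = time.perf_counter()
    await asyncio.gather(*(true_async_call() for _ in range(10)))
    print(f"true async:    {time.perf_counter() - start:.2f}s")  # ~0.1s, concurrent


asyncio.run(main())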

Alternative for live production profiling:

# Attach to a running FastAPI process

py-spy top --pid <process_id>

This shows a live, top-like view of where the application is spending time. It's generally safe for production: py-spy samples from outside the process with minimal overhead (and offers a --nonblocking mode if even brief pauses are unacceptable).

7.5.4 JavaScript Framework DevTools

Modern JavaScript frameworks—React, Vue, Angular—have execution models that don't map cleanly to function calls. Components, reactivity, change detection—these are framework-specific concepts that require framework-specific debugging tools.

React DevTools: Component hierarchy and props flow

Install the React DevTools browser extension. It adds "Components" and "Profiler" tabs to your browser's developer tools.

Scenario: You're debugging a React app where a form's submit button is disabled, but you can't figure out why. The component tree is deeply nested, and props are passed through multiple levels.

Open React DevTools, click the "Components" tab, and use the picker tool (top-left icon) to select the disabled button in your page. React DevTools highlights the component in the tree:

App

  └─ UserDashboard

      └─ ProfileSection

          └─ ProfileForm

              └─ SubmitButton (← selected)

In the right panel, you see the SubmitButton component's props:

Props:

  disabled: true

  onClick: function handleSubmit() {...}

  children: "Save Changes"

So it's receiving disabled: true. But why? Click the parent component (ProfileForm) and examine its state and props:

State:

  formData: {name: "Alice", email: "alice@example.com"}

  isValid: false

  errors: {email: "Invalid email format"}



Props:

  user: {id: 42, name: "Alice", ...}

  onSave: function() {...}

There it is: isValid: false. The form determines validity based on field validation. Check errors and you see the email field has a validation error. You've traced the data flow from UI element → component prop → parent state → validation logic in seconds.

Profiler: This shows why components render. Record a profile, perform an action (like typing in the form), then stop. You get a flame graph showing which components rendered and why.

You immediately see that ProfileForm re-renders on every keystroke (expected), but ProfileSection also re-renders unnecessarily. Click ProfileSection to see which props changed—turns out none did. It's re-rendering because the parent renders and doesn't use React.memo(). You've identified an optimization opportunity through tracing execution, not speculation.

Vue DevTools: Reactivity tracing

Vue's reactivity system is famously "magical"—you modify data, and the UI updates automatically. But understanding why something updates (or doesn't) requires tracing the reactivity chain.

Install Vue DevTools extension, then open the "Components" tab. Select a component and you see:

Component: UserProfile



Props:

  userId: 42



Data:

  user: {name: "Alice", email: "alice@example.com"}

  loading: false



Computed:

  displayName: "Alice (alice@example.com)"

  initials: "A"



Watchers:

  userId → function() { this.fetchUser() }

This is Vue's execution model laid bare. You see:

  1. Data: Reactive properties that trigger updates when changed

  2. Computed: Derived values that update when dependencies change

  3. Watchers: Side effects that run when specific properties change

Now use the "Timeline" feature (clock icon in Vue DevTools). Click "Record," interact with your app, then stop. You see a timeline of events:

0ms:   Component mounted: UserProfile

50ms:  Data mutation: userId = 42 → 43

51ms:  Watcher triggered: userId watcher

55ms:  API call started: fetchUser(43)

200ms: API call completed

201ms: Data mutation: user = {...}

202ms: Computed recalculated: displayName

203ms: Component re-rendered

This traces the complete reactivity chain from user interaction to DOM update. You see that changing userId triggers a watcher, which makes an API call, which updates user, which recalculates computed properties, which triggers a render. Without this timeline, you'd need to set breakpoints in multiple places and mentally reconstruct the flow.

Vue DevTools Inspector: Right-click any element in your page and select "Inspect Vue component." DevTools jumps directly to that component in the component tree. This is invaluable when you're looking at rendered output and thinking "Which component is this?"

Angular DevTools: Change detection visualization

Angular's change detection is notoriously complex. Unlike React and Vue, Angular runs change detection in zones—when any async operation completes (timers, HTTP requests, user events), Angular checks all components for changes.

Install Angular DevTools, open the "Profiler" tab, and click "Record." Interact with your app, then stop. You see a timeline of colored bars, one per change detection cycle. Click any bar to see details:

Change Detection Cycle #47

Duration: 12.3ms

Components checked: 156

  AppComponent: 0.5ms

  HeaderComponent: 0.3ms

  UserListComponent: 8.2ms ← Slow!

    UserItemComponent (x50): 7.8ms

  FooterComponent: 0.2ms

You've discovered that UserListComponent dominates change detection time. Why? Click it to see its code location and template. You find:

@Component({
  selector: "app-user-list",

  template: `
    <div *ngFor="let user of users">
      <app-user-item [user]="user" [formatDate]="formatDate"></app-user-item>
    </div>
  `,
})
export class UserListComponent {
  users: User[];

  formatDate(date: Date): string {
    return date.toLocaleDateString(); // Function called on every change detection!
  }
}

The problem: [formatDate]="formatDate" hands the formatting function down so each UserItemComponent calls it from its own template, and template function calls are re-evaluated on every change detection cycle—for all 50 children, every time anything in the zone fires. The fix is to pass the formatted date down instead of the formatting function (or move the formatting into a pure pipe), or use ChangeDetectionStrategy.OnPush so unchanged rows are skipped.

Angular DevTools revealed this performance issue through execution tracing—you saw which components check during change detection, how long they take, and could drill down to the exact cause. Without this visibility, you'd be guessing based on symptoms ("the page feels sluggish") rather than data.

Component explorer and dependency injection inspector: Angular DevTools also shows the component tree with injected dependencies. Select any component to see:

UserListComponent



Inputs:

  users: Array(50)

  filter: "active"



Outputs:

  userSelected: EventEmitter



Providers:

  UserService (singleton from root)

  DateFormatService (component-level)



State:

  selectedUserId: 42

  isLoading: false

This is crucial for understanding Angular's dependency injection—you can see exactly which services are injected into each component and at what level (root, module, or component). When debugging service state issues, this tells you whether components share a service instance or have separate instances.

Redux DevTools: State change archaeology

Redux (and similar state management libraries like Vuex, NgRx, Pinia) centralizes state management, making it highly traceable. Redux DevTools is the most mature state debugging tool in the JavaScript ecosystem.

Install the Redux DevTools browser extension. It works automatically with any Redux-enabled application. Open the "Redux" tab in DevTools and you see:

Actions History:

  1. @@INIT

  2. SET_USER {userId: 42}

  3. FETCH_POSTS_REQUEST {}

  4. FETCH_POSTS_SUCCESS {posts: [...]}

  5. UPDATE_POST {postId: 5, changes: {title: "New Title"}}

  6. DELETE_POST {postId: 3}

This is a complete log of every state change in your application. Click any action to see:

  1. Action details: The exact action object dispatched

  2. State before: Complete application state before this action

  3. State after: Complete application state after this action

  4. Diff: What changed (highlighted)

Let's trace a bug: a post deletion doesn't update the UI. You see action #6: DELETE_POST {postId: 3}. Click it and examine the diff:

State diff for DELETE_POST:

  posts:

    - byId:

        3: {id: 3, title: "Post 3"} ← Still present!

  + deletingPosts:

      3: true ← Added

The post wasn't actually deleted from posts.byId—instead, it was added to deletingPosts. The reducer is broken. Open the "Trace" tab (available when the DevTools enhancer is configured with trace: true) to see where this action was dispatched:

// Stack trace

  deletePost (actions/posts.js:45)

  handleDelete (components/PostItem.jsx:23)

  onClick (PostItem.jsx:18)

You've traced from symptom (UI doesn't update) → action (DELETE_POST) → state change (wrong mutation) → code location (reducer bug) in under a minute.

Time-travel debugging: Redux DevTools lets you replay state changes. Click any action in the history, and your application's UI jumps to that point in time. You can step forward and backward through actions, watching the UI update accordingly. This is invaluable for understanding complex state transitions:

  1. Reproduce the bug

  2. Use Redux DevTools to step backward through actions

  3. Identify the first action where state became incorrect

  4. Examine that reducer's logic

You're debugging state changes by literally traveling through time, not by setting breakpoints and re-triggering the entire flow.

Action filtering and dispatch: You can filter actions (e.g., show only FETCH_* actions) and manually dispatch actions from DevTools. This lets you test edge cases: "What happens if I dispatch UPDATE_POST with invalid data?" You can experiment with state changes without writing test code.