Measuring AI Adoption & Tool Usage — What to Track Before You Code




Published on 8 January 2026 by Zoia Baletska


AI tools are increasingly embedded into developers’ daily workflows — from IDE autocomplete and chat-based assistants to AI-powered pull-request reviews and test generation.

But before you try to measure whether AI improves productivity, quality, or Developer Experience, there’s a more basic question you must answer: are developers actually using AI — and how?

Many organisations skip this step. They jump straight to output metrics (PRs, cycle time, bug rates) without understanding adoption patterns underneath. The result is confusing data, false conclusions, and “AI impact” debates driven by anecdotes instead of evidence.

This article focuses on Layer 1 of AI impact measurement: adoption & tool usage metrics — what to track, why it matters, and how to collect the data responsibly before you start measuring code output.

Why AI adoption metrics come first

AI impact cannot be measured in a vacuum. If only 20% of developers actively use AI tools — or if usage is shallow and inconsistent — then any downstream productivity metric will be noisy at best and misleading at worst.

Adoption metrics help you answer foundational questions:

  • Who is using AI tools — and who isn’t?

  • How frequently are they used?

  • Are developers experimenting, or relying on AI as a daily productivity aid?

  • Which tools are becoming core — and which are abandoned after trials?

Without these answers, “AI helped us” or “AI slowed us down” are both weak claims.

Core adoption metrics you should track

1. DAU / WAU / MAU — Active AI Users

DAU, WAU, and MAU stand for Daily, Weekly, and Monthly Active Users — metrics traditionally used in product analytics to understand how often people use a product. Applied to AI developer tools, they answer a crucial first question: are engineers actually using the tool consistently, or just trying it once and forgetting it? DAU shows how many developers rely on AI as part of their daily workflow, WAU captures more occasional but recurring use, and MAU reflects broad adoption across the organisation.

Looking at these metrics together helps distinguish real behavioural change from novelty-driven experimentation. A tool with high MAU but low DAU may be “enabled” but not trusted for day-to-day work, while strong DAU/WAU ratios usually indicate AI has become embedded in the development process — a prerequisite before any productivity or quality gains can realistically occur.

What it measures:
The percentage of developers who actively use AI tools daily, weekly, or monthly.

Why it matters:
Adoption is rarely uniform. You’ll often see:

  • Early adopters using AI daily

  • Occasional users experimenting

  • A silent majority not using AI at all

If only a small subset of the team uses AI, improvements (or regressions) in delivery metrics may reflect team composition, not AI effectiveness.

What to watch for:

  • Adoption plateauing after initial rollout

  • Declining MAU (sign of novelty wearing off)

  • Spikes in DAU that don’t translate into sustained WAU (short-lived bursts, not habit formation)
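
To make this concrete, here is a minimal sketch of how DAU, WAU, MAU, and a DAU/WAU “stickiness” ratio could be computed from exported usage events. The event records, developer ids, and team size below are hypothetical; adapt them to whatever your AI tool’s telemetry actually exports.

```python
from datetime import date, timedelta

# Hypothetical telemetry export: one record per AI invocation,
# keyed by an anonymised developer id and the event date.
events = [
    {"dev": "dev-a1", "day": date(2026, 1, 5)},
    {"dev": "dev-a1", "day": date(2026, 1, 6)},
    {"dev": "dev-b2", "day": date(2026, 1, 6)},
    {"dev": "dev-c3", "day": date(2025, 12, 18)},
]

def active_users(events, as_of: date, window_days: int) -> set:
    """Developers with at least one AI event in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return {e["dev"] for e in events if cutoff < e["day"] <= as_of}

as_of = date(2026, 1, 6)
dau = active_users(events, as_of, 1)
wau = active_users(events, as_of, 7)
mau = active_users(events, as_of, 30)

team_size = 10  # assumed: total developers with access to the tool
print(f"DAU {len(dau)/team_size:.0%}, WAU {len(wau)/team_size:.0%}, MAU {len(mau)/team_size:.0%}")

# A DAU/WAU ratio well below ~0.5 suggests bursts of use rather than a daily habit.
print(f"DAU/WAU stickiness: {len(dau)/max(len(wau), 1):.2f}")
```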

2. Session Frequency & Session Depth

What it measures:
How often developers invoke AI tools — and how intensively they use them per session.

Not all “AI usage” is equal:

  • Single-line autocomplete ≠ multi-turn refactor sessions

  • One-off prompts ≠ iterative problem-solving conversations

Useful signals:

  • Prompts per session

  • Duration of AI-assisted sessions

  • Ratio of shallow vs deep usage

Why it matters:
Deep, repeated sessions are more likely to correlate with real productivity gains. Shallow usage often reflects curiosity, experimentation, or low trust.
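
One way to derive these signals is to sessionise prompt-level timestamps with an idle-gap rule and then classify each session as shallow or deep. The 15-minute gap and three-prompt depth threshold below are assumptions, not standards; tune them to your own data.

```python
from datetime import datetime, timedelta

# Hypothetical prompt timestamps for one developer, ordered by time.
prompts = [
    datetime(2026, 1, 6, 9, 0),
    datetime(2026, 1, 6, 9, 3),
    datetime(2026, 1, 6, 9, 7),
    datetime(2026, 1, 6, 14, 30),   # long gap -> new session
]

GAP = timedelta(minutes=15)   # assumed idle gap that splits sessions
DEEP_SESSION_MIN_PROMPTS = 3  # assumed threshold for "deep" usage

sessions, current = [], [prompts[0]]
for ts in prompts[1:]:
    if ts - current[-1] > GAP:
        sessions.append(current)
        current = [ts]
    else:
        current.append(ts)
sessions.append(current)

for s in sessions:
    depth = "deep" if len(s) >= DEEP_SESSION_MIN_PROMPTS else "shallow"
    duration = (s[-1] - s[0]).total_seconds() / 60
    print(f"{len(s)} prompts over {duration:.0f} min -> {depth}")

deep_ratio = sum(len(s) >= DEEP_SESSION_MIN_PROMPTS for s in sessions) / len(sessions)
print(f"deep-session ratio: {deep_ratio:.0%}")
```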

3. Prompt Complexity & Intent

What it measures:
The nature of requests developers make to AI tools.

Examples:

  • Simple syntax or boilerplate generation

  • Refactoring or architectural suggestions

  • Debugging and root-cause analysis

  • Test generation or documentation writing

Why it matters:
Prompt complexity reveals how developers perceive AI:

  • As a faster autocomplete

  • As a pair programmer

  • Or as a design assistant

This helps explain why some teams benefit more than others — and why experienced developers sometimes slow down when AI suggestions conflict with deep domain knowledge.
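
If your telemetry does not label intent, a crude client-side heuristic can bucket prompts before aggregation, so that only the bucket label (never the raw prompt text) leaves the developer’s machine. The categories and keywords below are illustrative, not a definitive taxonomy.

```python
# Crude keyword heuristic for bucketing prompt intent before aggregation.
# Only the resulting label should be exported, never the prompt itself.
INTENT_KEYWORDS = {
    "boilerplate": ("boilerplate", "snippet", "generate a function", "syntax"),
    "refactoring": ("refactor", "restructure", "extract", "rename"),
    "debugging":   ("bug", "error", "stack trace", "why does this fail"),
    "testing":     ("unit test", "test case", "coverage"),
    "docs":        ("docstring", "readme", "document"),
}

def classify_intent(prompt: str) -> str:
    text = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "other"

print(classify_intent("Refactor this class to remove the circular dependency"))
# -> "refactoring"
```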

4. Tool Diversity Index

What it measures:
The number of distinct AI tools actively used across the team.

Examples:

  • IDE copilots

  • PR review bots

  • Test generation tools

  • Documentation assistants

  • Security or compliance AI scanners

Why it matters:
High tool diversity often signals mature adoption, where AI is embedded across the software lifecycle — not just coding. Low diversity may indicate narrow use cases or unresolved trust issues.
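
A simple way to express this as a single number is a normalised diversity score over invocation counts per tool category. The categories and counts below are hypothetical, and normalised Shannon entropy is just one reasonable choice of index.

```python
from math import log

# Hypothetical weekly invocation counts per AI tool category for one team.
tool_usage = {
    "ide_copilot": 480,
    "pr_review_bot": 35,
    "test_generation": 12,
    "docs_assistant": 0,
    "security_scanner": 0,
}

active = {t: n for t, n in tool_usage.items() if n > 0}
total = sum(active.values())

# Normalised Shannon entropy: 0 = all usage in one tool, 1 = evenly spread.
shares = [n / total for n in active.values()]
entropy = -sum(p * log(p) for p in shares)
diversity = entropy / log(len(tool_usage)) if len(tool_usage) > 1 else 0.0

print(f"active tools: {len(active)}/{len(tool_usage)}, diversity index: {diversity:.2f}")
```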

5. Retention & Drop-off Rates

What it measures:
The share of developers who stop using AI tools after initial exposure.

Why it matters:
Drop-off is one of the strongest early warning signals that:

  • AI suggestions aren’t useful

  • Context quality is poor

  • Overhead outweighs benefits

  • Generated code creates more rework

High churn often precedes negative productivity outcomes.
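
A straightforward way to operationalise this is cohort-style churn: treat a developer as dropped off if they have had no AI events for some window after first exposure. The 28-day window and the usage records below are assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical per-developer usage: first AI event and most recent AI event.
usage = {
    "dev-a1": {"first": date(2025, 10, 1), "last": date(2026, 1, 5)},
    "dev-b2": {"first": date(2025, 10, 3), "last": date(2025, 10, 20)},
    "dev-c3": {"first": date(2025, 10, 7), "last": date(2025, 11, 2)},
}

CHURN_WINDOW = timedelta(days=28)  # assumed: no events for 4 weeks = dropped off
as_of = date(2026, 1, 6)

churned = [d for d, u in usage.items() if as_of - u["last"] > CHURN_WINDOW]
retained = [d for d in usage if d not in churned]

print(f"retained {len(retained)}/{len(usage)}, drop-off rate {len(churned)/len(usage):.0%}")
```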

How to collect AI adoption data (without violating trust)

IDE plugins & tool telemetry

Most AI tools already emit usage signals:

  • Invocation counts — How many times a developer calls an AI feature (e.g., code completion, test generation) within a session. Helps estimate adoption frequency.

  • Session duration — The length of time a developer interacts with the AI tool in one session. Longer sessions may indicate deeper engagement.

  • Feature usage — Which specific AI capabilities are being used (e.g., refactoring, pull-request suggestions, documentation generation). Highlights which capabilities add real value.

  • Error or rejection signals — Track when AI suggestions are ignored, rejected, or cause errors. Useful for identifying friction points and improving the tool or workflow.

Best practices:

  • Aggregate at the team level, not the individual level

  • Avoid storing raw prompt content

  • Focus on patterns, not surveillance
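
A sketch of what “aggregate, don’t surveil” can look like in practice: raw plugin events are collapsed into team-level counts, developer ids are dropped, and prompt text is never stored. The event fields below are hypothetical and will differ per tool.

```python
from collections import Counter

# Hypothetical raw plugin events. Prompt text never appears here:
# only coarse labels and outcomes are kept for reporting.
raw_events = [
    {"team": "payments", "dev": "dev-a1", "feature": "completion", "accepted": True},
    {"team": "payments", "dev": "dev-b2", "feature": "completion", "accepted": False},
    {"team": "payments", "dev": "dev-a1", "feature": "test_generation", "accepted": True},
]

def aggregate_team_level(events):
    """Collapse individual events into team-level counts, dropping developer ids."""
    summary = Counter()
    for e in events:
        outcome = "accepted" if e["accepted"] else "rejected"
        summary[(e["team"], e["feature"], outcome)] += 1
    return summary

for (team, feature, outcome), count in aggregate_team_level(raw_events).items():
    print(team, feature, outcome, count)
```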

Logs & platform-level analytics

For tools integrated into CI/CD or PR workflows:

  • PR review invocation frequency

  • AI-generated comments accepted vs ignored

  • Test generation success rates

These signals complement IDE data and show whether AI extends beyond local development.
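
For example, an export of AI review-bot comments on merged PRs can be reduced to two numbers: how many PRs the bot touched and how often its comments led to a change. The fields below are hypothetical; real exports depend on your PR platform and bot.

```python
# Hypothetical export of AI review-bot comments on merged PRs.
ai_comments = [
    {"pr": 1412, "resolved_by_change": True},
    {"pr": 1412, "resolved_by_change": False},
    {"pr": 1420, "resolved_by_change": True},
]

prs_with_ai_review = len({c["pr"] for c in ai_comments})
accepted = sum(c["resolved_by_change"] for c in ai_comments)
print(f"PRs with AI review: {prs_with_ai_review}, "
      f"comment acceptance rate: {accepted / len(ai_comments):.0%}")
```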

Lightweight developer surveys

Quantitative data alone won’t explain why usage looks the way it does.

Short, recurring surveys can reveal:

  • Trust levels

  • Perceived usefulness

  • Friction points

  • Cognitive overhead

Used quarterly, these surveys add crucial context without becoming noisy.
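
Keeping the survey numeric makes it easy to trend over time. A minimal sketch, assuming 1-to-5 Likert answers across the four dimensions above:

```python
from statistics import mean

# Hypothetical quarterly pulse survey, 1-5 Likert answers per developer.
responses = [
    {"trust": 4, "usefulness": 5, "friction": 2, "cognitive_load": 2},
    {"trust": 2, "usefulness": 3, "friction": 4, "cognitive_load": 4},
    {"trust": 3, "usefulness": 4, "friction": 3, "cognitive_load": 3},
]

for dimension in ("trust", "usefulness", "friction", "cognitive_load"):
    print(f"{dimension}: {mean(r[dimension] for r in responses):.1f} / 5")
```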

Common mistakes in adoption measurement

  • Counting “installed” tools instead of active users

  • Equating usage frequency with effectiveness

  • Ignoring drop-off and churn

  • Tracking prompts but not outcomes

  • Treating all AI usage as equal

Most failed AI measurement efforts collapse here — long before output metrics enter the picture.

Why this layer sets up everything else

Adoption metrics are not the goal — they’re the foundation.

Once you understand:

  • Who uses AI

  • How deeply they use it

  • Where it fits in the workflow

…you can responsibly move to Layer 2: output and quality metrics, and later to Layer 3: Developer Experience and long-term health.

Without this layer, downstream measurements are guesswork.

Where Agile Analytics fits in

At Agile Analytics, we treat AI adoption like any other engineering capability — something that must be observed, contextualised, and validated over time.

By combining AI usage telemetry, delivery and reliability metrics, developer feedback, and DevEx signals, we help teams understand not just whether AI is used, but whether it creates real value — without compromising privacy or trust.

What’s next in the series

In the next article, we’ll move to Layer 2: Output Metrics — and show how to accurately measure code throughput, quality, and rework without falling into misleading “AI productivity” traps.
