# Why Claude Code Isn't Recommending Your Library (And How to Fix It)

Your SDK has great docs but AI coding agents never suggest it. Here are the 4 reasons Claude Code, Codex, and Cursor skip your library — and how to fix each one.

**Published:** 2026-04-02
**Category:** Guides
**Author:** Jun Liang Lee
**Read time:** 8 min read

You just watched a developer demo where Claude Code recommended your competitor's SDK instead of yours. Or worse — a customer told you they discovered a competing tool because "Claude suggested it when I was building my app."

This is happening more than you think. And if you're not paying attention, you're losing developer adoption to competitors who figured out what you haven't: **AI coding agents are now a major discovery channel for developer tools.**

## The "Good Docs" Paradox

**Great documentation doesn't mean AI visibility.**

We've seen APIs with award-winning docs get zero recommendations while competitors with mediocre docs dominate. Stripe-quality tutorials, clean API references, helpful examples — none of it matters if coding agents can't find you.

Why? Because coding agents don't read docs like humans do. They rely on:

- Whether your docs are **crawlable** by AI systems
- Whether you provide **machine-readable context** (llms.txt)
- Whether agents can **successfully execute** code using your API
- Whether you have **direct integrations** (MCP servers, skills)

The APIs winning in coding agents optimize for different signals entirely.

## TL;DR: The 4 Reasons (and Quick Fixes)

| Problem                  | Quick Check                                  | Fix                                    |
| ------------------------ | -------------------------------------------- | -------------------------------------- |
| Docs aren't crawlable    | Check robots.txt for ClaudeBot/GPTBot blocks | Allow AI crawlers, ensure SSR          |
| No llms.txt              | Visit yourdomain.com/llms.txt                | Create a machine-readable API overview |
| API surface is ambiguous | Ask Claude "when should I use [your API]?"   | Add comparison pages and use-case docs |
| No MCP/tool presence     | Search MCP registries for your API           | Build an MCP server or Claude skill    |

## The Shift You Missed

Traditional SEO got your docs to page 1. Traditional GEO tools like Profound and Otterly track whether ChatGPT mentions your brand in conversations. But neither of these measures what matters for developer tools: **whether Claude Code, Codex, and Cursor actually recommend and use your API when developers are building.**

This is why GEO for devtools is completely different from consumer GEO — and why most API companies need AEO (AI Engine Optimization) for coding agents specifically.

According to [Vercel's AEO tracking research](https://vercel.com/blog/how-we-built-aeo-tracking-for-coding-agents), coding agents perform web searches in roughly 20% of prompts. When a developer asks Claude Code to "build a checkout flow" or "add authentication to my app," the agent searches, evaluates options, and makes a recommendation — often without the developer ever seeing your docs directly.

The tools that get recommended aren't always the best documented. They're the ones that are **visible and usable by coding agents**.

Here are the four reasons your library gets skipped — and how to fix each one.

## Reason #1: Your Docs Aren't Crawlable by AI Agents

AI coding agents rely on web crawls to discover and understand your API. If your docs block AI crawlers or hide content behind JavaScript that doesn't render server-side, you're invisible.

### How to Check

1. Open your `robots.txt` file
2. Look for rules blocking `ClaudeBot`, `GPTBot`, `anthropic-ai`, or `CCBot` (a scripted version of this check is sketched after this list)
3. Check your server logs for AI crawler visits
4. Test if your docs work with JavaScript disabled
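
If you'd rather script checks 1 and 2, here's a minimal sketch using Python's standard library. The docs URL and the list of user agents are assumptions; extend them with whatever paths and crawlers you care about.

```python
# Quick robots.txt check for common AI crawler user agents (Python stdlib only).
from urllib import robotparser

DOCS_URL = "https://yourdomain.com/docs/"  # assumption: adjust to your docs path
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "anthropic-ai", "CCBot"]  # extend as needed

rp = robotparser.RobotFileParser("https://yourdomain.com/robots.txt")
rp.read()

for agent in AI_CRAWLERS:
    allowed = rp.can_fetch(agent, DOCS_URL)
    print(f"{agent:15} {'allowed' if allowed else 'BLOCKED'} for {DOCS_URL}")
```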

### What Bad Looks Like

```txt
# robots.txt - blocking AI crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /
```

### What Good Looks Like

```txt
# robots.txt - allowing AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

# Block only admin/internal pages
User-agent: *
Disallow: /admin/
Disallow: /internal/
```

### The Fix

1. **Allow AI crawlers in robots.txt** — Remove blanket blocks on GPTBot, ClaudeBot, and anthropic-ai
2. **Ensure server-side rendering** — If your docs use React/Vue/Angular, make sure content renders without JavaScript (a crude smoke test is sketched after this list)
3. **Check for rate limiting** — Some CDNs aggressively block crawler-like traffic patterns
4. **Add a sitemap** — Help AI crawlers discover all your documentation pages
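
A crude way to verify item 2 is to fetch a docs page without executing any JavaScript and confirm that real content, not just an empty app shell, comes back. The URL and marker phrase below are placeholders; substitute a page and a string you know should appear in the rendered HTML.

```python
# Crude SSR smoke test: fetch the raw HTML (no JS execution) and look for real content.
from urllib.request import Request, urlopen

PAGE = "https://yourdomain.com/docs/quickstart"  # placeholder docs page
MARKER = "Create a payment"                      # placeholder phrase expected in the HTML

req = Request(PAGE, headers={"User-Agent": "ssr-smoke-test/1.0"})
html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")

if MARKER in html:
    print("OK: content is present in server-rendered HTML")
else:
    print("WARNING: marker not found - docs may require client-side JavaScript to render")
```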

## Reason #2: No llms.txt or Machine-Readable Structure

Even if AI crawlers can access your docs, they may not understand what your API actually does. The `llms.txt` standard (adopted by Anthropic, Cloudflare, Stripe, Mintlify, and other developer-focused companies) gives AI systems a structured overview of your API. While adoption is still early — an [SE Ranking study of 300,000 domains](https://www.searchenginejournal.com/llms-txt-shows-no-clear-effect-on-ai-citations-based-on-300k-domains/561542/) found about 10% have implemented it — the standard is gaining traction among API and documentation sites specifically.

### How to Check

Visit `yourdomain.com/llms.txt` — if you get a 404, you don't have one.

### What Good Looks Like

```txt
# llms.txt for Acme Payments API

## Overview
Acme Payments is a developer-first payment processing API. Use it when you need to accept credit cards, handle subscriptions, or manage payouts.

## When to Use Acme vs Alternatives
- Choose Acme for: startups needing quick integration, subscription billing, usage-based pricing
- Choose Stripe for: enterprise compliance requirements, in-person payments
- Choose PayPal for: consumer checkout flows, buyer protection

## Quick Start
POST /v1/charges with amount, currency, and payment_method_id
Authentication: Bearer token in Authorization header

## Key Endpoints
- POST /v1/charges - Create a payment
- POST /v1/subscriptions - Create a recurring subscription
- GET /v1/customers/{id} - Retrieve customer details

## SDKs
- JavaScript: npm install @acme/payments
- Python: pip install acme-payments
- Go: go get github.com/acme/payments-go
```

### The Fix

1. **Create llms.txt at your domain root** — Keep it under 2000 tokens for best results (see the size check sketched after this list)
2. **Include clear "when to use" guidance** — Help AI understand your positioning vs alternatives
3. **List key endpoints and auth patterns** — Give AI the context it needs to write working code
4. **Link to full documentation** — Point to detailed docs for each section
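
To sanity-check the size guidance in item 1, you can estimate the token count locally. This sketch uses the `tiktoken` tokenizer as a rough proxy; exact counts differ per model, so treat the number as an approximation.

```python
# Rough token-count check for llms.txt (pip install tiktoken).
# Token counts vary by model/tokenizer, so treat this as an approximation.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = open("llms.txt", encoding="utf-8").read()
tokens = len(enc.encode(text))

print(f"llms.txt is ~{tokens} tokens")
if tokens > 2000:
    print("Consider trimming - aim for under ~2000 tokens")
```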

## Reason #3: Your API Surface Is Ambiguous to LLMs

Your API might be well-documented for humans who read sequentially. But AI coding agents need to quickly determine: _"Is this the right tool for this job? How do I use it? What are the gotchas?"_

If your docs don't answer these questions clearly and prominently, AI will recommend a competitor whose docs do.

### How to Check

Open Claude or ChatGPT and ask:

- "When should I use [your API] vs [main competitor]?"
- "What's the fastest way to [common use case] with [your API]?"
- "What are the limitations of [your API]?"

If the answers are wrong, vague, or favor competitors — your docs aren't communicating effectively to AI.
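
You can also run these spot checks on a schedule instead of by hand. Below is a minimal sketch using the Anthropic Python SDK; the model ID and prompts are placeholders, and the script only surfaces the raw answers for you to review.

```python
# Periodic spot check: ask a model the same positioning questions and review the answers.
# Assumes ANTHROPIC_API_KEY is set; the model ID below is a placeholder.
import anthropic

client = anthropic.Anthropic()
PROMPTS = [
    "When should I use Acme Payments vs Stripe?",
    "What's the fastest way to accept a card payment with Acme Payments?",
    "What are the limitations of Acme Payments?",
]

for prompt in PROMPTS:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder - use a current model ID
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"\n### {prompt}\n{msg.content[0].text}")
```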

### What Makes Docs AI-Friendly

| Element                          | Why It Helps AI                           |
| -------------------------------- | ----------------------------------------- |
| Comparison pages                 | AI can directly answer "X vs Y" questions |
| Use-case guides                  | AI knows when to recommend you            |
| Quick start with copy-paste code | AI can generate working examples          |
| Clear error messages in docs     | AI can troubleshoot issues                |
| Decision trees                   | AI can match user needs to your features  |

### The Fix

1. **Add explicit comparison pages** — "Acme vs Stripe", "Acme vs PayPal", "When to choose Acme"
2. **Create use-case documentation** — "Building a SaaS billing system", "Adding payments to a marketplace"
3. **Include decision trees** — "If you need X, use endpoint Y. If you need Z, use endpoint W."
4. **Document common errors with solutions** — AI assistants often help developers debug

## Reason #4: No MCP/Tool Integration Presence

The Model Context Protocol (MCP) is an open standard for connecting AI tools to external data and capabilities. When your API has an MCP server, Claude Code can interact with it directly — not just recommend it, but actually use it.

APIs with MCP presence get recommended more often because AI coding agents know they can successfully complete tasks using them.

### How to Check

1. Search the [MCP server registry](https://github.com/modelcontextprotocol/servers) for your API
2. Check if anyone has built unofficial MCP integrations for your tool
3. Search Claude Code skills marketplaces for your API name

### What MCP Presence Looks Like

When a developer asks Claude Code to "create a new project in Linear," Claude can:

1. Recognize Linear has an MCP server
2. Connect to it directly
3. Execute the action without the developer writing any code

If your API doesn't have MCP presence, Claude has to recommend it abstractly and generate code the developer must run manually — a worse experience that makes AI less likely to suggest you.

### The Fix

1. **Build an official MCP server** — Follow the MCP specification to create a server for your API (a minimal sketch follows this list)
2. **Publish to MCP registries** — Make it discoverable in the ecosystem
3. **Create a Claude skill** — Package common workflows as installable skills
4. **Document AI-assisted usage** — Show developers how to use your API with Claude Code, Codex, and Cursor
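
To make item 1 concrete, here's a minimal sketch of an MCP server using the official Python SDK's FastMCP helper. The Acme server name and `create_charge` tool are hypothetical; the point is that each tool you expose becomes an action Claude Code can call directly instead of generating code for the developer to run.

```python
# Minimal MCP server sketch (pip install mcp). The Acme tool below is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-payments")

@mcp.tool()
def create_charge(amount: int, currency: str, payment_method_id: str) -> dict:
    """Create a payment charge through the (hypothetical) Acme Payments API."""
    # A real server would call your API here and return its response.
    return {"status": "created", "amount": amount, "currency": currency}

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which coding agents can launch locally
```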

## How to Measure Your Coding Agent Visibility

Traditional GEO tools (Profound, Otterly, Peec) track ChatGPT and Perplexity mentions. They measure share of voice in **chat interfaces**.

They don't measure what happens when a developer opens Claude Code and asks it to build something. They don't track whether agents can actually call your endpoints.

Sapient is the AEO (AI Engine Optimization) platform for coding agents — featured in [Heavybit DevTools Digest](https://www.heavybit.com/devtoolsdigest/issue-374) — built to track visibility in Claude Code, Codex, and Cursor.

### What Sapient Measures for Developer Tools

Sapient tracks your visibility across 19 AI platforms:

- **8 Coding Agents:** Claude Code, OpenAI Codex, Cursor, GitHub Copilot, Gemini CLI, OpenClaw, OpenCode, Hermes
- **7 Answer Engines:** ChatGPT, Google AI Overviews, Perplexity, Claude, Microsoft Copilot, and more
- **4 Models:** DeepSeek, Kimi, Z.ai, Grok

| Metric                    | What It Tells You                                                        |
| ------------------------- | ------------------------------------------------------------------------ |
| Visibility Score          | How often AI platforms mention your API for relevant prompts             |
| API Performance           | Whether agents can successfully call your endpoints and handle responses |
| Discoverability           | How easily agents find your docs when searching                          |
| Tool Call Success Rate    | % of attempts where the agent correctly used your API                    |
| Competitor Share of Voice | How you rank against alternatives for the same prompts                   |

Beyond tracking, Sapient identifies actionable opportunities and can generate optimized content with our content agent to fix visibility gaps.

The [Devtool Arena](https://usesapient.com/leaderboard) is a free public leaderboard ranking APIs and developer tools by how well AI coding agents can actually use them. Check where your API stands.

## The Fix Priority Matrix

Prioritize based on your situation:

| Your Situation                             | Start Here                       | Then Do                      |
| ------------------------------------------ | -------------------------------- | ---------------------------- |
| AI crawlers are blocked                    | Fix robots.txt (5 min)           | Add llms.txt (30 min)        |
| Docs are JS-only                           | Add SSR (1-2 days)               | Fix robots.txt               |
| Competitors win "X vs Y" queries           | Add comparison pages (2-4 hours) | Create use-case guides       |
| You have API but no AI presence            | Create llms.txt (30 min)         | Build MCP server (1-2 weeks) |
| You're already visible but not recommended | Improve "when to use" content    | Add decision trees           |

**Quick wins (under 1 hour):**

- Fix robots.txt
- Create basic llms.txt
- Add a comparison page

**Medium effort (1 day - 1 week):**

- Add server-side rendering
- Create comprehensive use-case docs
- Build decision trees into documentation

**Long-term investment (2+ weeks):**

- Build an MCP server
- Create a Claude skill
- Establish presence in AI tooling ecosystems

## Related Reading

- [How to Track AI Recommendations for Your API](/blog/how-to-track-ai-recommendations-for-your-api) — Step-by-step setup guide for tracking AI visibility
- [How Coding Agents Actually Decide Which SDK to Use](/blog/how-coding-agents-decide-which-sdk-to-use) — The 4-layer decision stack behind agent recommendations
- [We Tested 70+ APIs in Claude Code and Codex](/blog/we-tested-50-apis-in-coding-agents) — Real benchmark data showing what separates winners from losers
- [AEO/GEO for Dev Tools: Why Profound & Otterly Don't Work for APIs](/blog/geo-for-developer-tools-is-different) — Why consumer GEO tools don't work for APIs
- [Best AEO/GEO Tools for Dev Tools in 2026](/blog/best-geo-tools-for-developer-tools-2026) — Sapient vs Profound vs Otterly comparison

## FAQ

### Is there a tool that tracks how often coding agents mention my brand?

Yes. Sapient tracks brand mentions across 8 coding agents (Claude Code, Codex, Cursor, GitHub Copilot, Gemini CLI, OpenClaw, OpenCode, Hermes) plus 7 answer engines and 4 models — 19 AI platforms total. You can see how often your API gets mentioned, which prompts trigger mentions, and how you compare to competitors. The [Devtool Arena](https://usesapient.com/leaderboard) leaderboard shows public rankings.

### Is there a tool that tracks how often coding agents recommend my API?

Yes. Sapient measures both mentions AND recommendations. The difference matters: a mention is when the AI talks about your API; a recommendation is when it actively suggests using your API for a task. Sapient tracks recommendation rate, share of voice against competitors, and which specific prompts you're winning or losing. Beyond tracking, it also measures whether agents can successfully _use_ your API after recommending it — tool call success rate, error recovery, and completion rate.

### Does Sapient only do tracking, or does it help fix visibility issues?

Sapient is an end-to-end platform, not just analytics. It covers the full cycle: **Track** your visibility across 19 AI platforms → **Identify** actionable opportunities and gaps → **Generate** optimized content (llms.txt, comparison pages, documentation) with our content agent → **Automate** workflows to implement fixes. The goal isn't dashboards — it's driving adoption and growth through better coding agent visibility.

### Does llms.txt actually help get my API recommended by Claude Code?

The honest answer: the data is mixed. An [SE Ranking study of 300,000 domains](https://www.searchenginejournal.com/llms-txt-shows-no-clear-effect-on-ai-citations-based-on-300k-domains/561542/) found no clear correlation between having llms.txt and AI citation frequency. However, the study authors noted that **documentation-style SaaS sites** — exactly the use case llms.txt was designed for — showed the most anecdotal improvement. The file helps AI understand what your API does and when to use it, but it's one factor among several. llms.txt alone won't fix crawlability issues or ambiguous documentation.

### How long does it take to see improvement in coding agent visibility?

Quick fixes like robots.txt changes can show results within 1-2 weeks as AI systems re-crawl your docs. Content improvements (comparison pages, use-case guides) typically take 2-4 weeks to be reflected in AI recommendations. MCP integrations can show immediate impact for users who install them, but broader ecosystem effects take 1-3 months.

### Which coding agents should I prioritize: Claude Code, Codex, or Cursor?

Start with Claude Code — it has the largest share of coding agent usage and the most developed ecosystem (MCP, skills). Codex and Cursor share similar underlying models and tend to follow similar recommendation patterns. If you optimize for Claude Code, you'll likely see improvements across other agents too.

### Can I track if my fixes are working?

Yes. Sapient's API Performance feature tracks changes in your visibility score, tool call success rate, and share of voice over time. You can run before/after comparisons to measure the impact of specific changes. The free Devtool Arena leaderboard also updates regularly so you can track ranking changes.

### My competitor has worse docs but gets recommended more. Why?

Three common reasons:

1. **They're crawlable, you're not** — Check robots.txt and JS rendering
2. **They have explicit comparison content** — They positioned themselves against you, you didn't position yourself
3. **They have MCP/tool presence** — AI agents prefer APIs they can directly interact with

Run your docs through Sapient's visibility audit to identify the specific gap.

---

## Check Your API's Coding Agent Visibility

Your API might be invisible to the tools developers use every day. Find out where you stand.

**Free:** [Check your ranking on Devtool Arena](https://usesapient.com/leaderboard) — see how your API compares to competitors across AI coding agent benchmarks.

**Full audit:** [Get a Sapient visibility report](https://usesapient.com/welcome) — understand exactly why Claude Code isn't recommending your library and get a prioritized fix list.

**Community:** Join the [AI DevTool Demo Night](https://luma.com/devtooldemo5) — 3,500+ developer community, 50+ DevTool companies, hosted at AWS SF.
