# Claude Code Can Build Anything - But Does It Know What Your Customers Actually Need?

> Claude Code hit $1B revenue in 6 months and writes 4% of all GitHub commits. But agentic coding tools build from ticket descriptions, not customer insights. MCP is the bridge that changes everything.

Feb 24, 2026 · 13 min read · Jiri Kobelka

## Key Takeaways

- Claude Code writes 4% of all GitHub commits today, projected to hit 20% by end of 2026 - but it builds from ticket descriptions, not customer context
- 64% of software features are rarely or never used after shipping, because customer signal is lost in the telephone chain between feedback and code
- MCP (Model Context Protocol) can connect coding agents to product intelligence - giving them the customer context that changes what gets built, not just how
- The teams that compound the most value from agentic coding won't have the best prompts - they'll have the best customer signal flowing into their agents

Right now, **4% of every public commit on GitHub was written by Claude Code.** Not assisted. Not suggested. Written. By the end of 2026, that number is projected to exceed 20%. This is the fastest-growing developer tool in the history of enterprise software.
**Claude Code hit $1 billion in annualized revenue within six months of launch** - a milestone that took Salesforce a decade to reach, and Zoom four years even at the height of a global pandemic. As of early 2026, it's tracking toward $2.5 billion. Anthropic, which built it, now carries a $380 billion valuation and $14 billion in annualized revenue.

The capability is real. Claude Code doesn't just write snippets. It reads your entire codebase, plans multi-file changes, runs tests, corrects its own errors, executes git operations, and works with other agents concurrently. Teams at **Netflix, Spotify, Salesforce, KPMG, and L'Oreal** have deployed it as part of serious engineering workflows.

Anthropic's acquisition of Bun - the JavaScript runtime with 7 million monthly downloads and 82,000 GitHub stars - signals they're doubling down on making Claude Code faster and more deeply integrated into how developers actually work. This is not hype. The capability is here. **But there is a problem nobody is talking about.**

## The Input Problem

When you give Claude Code a task, you give it a description. _"Implement SSO login."_ _"Add multi-currency support."_ _"Build an audit log for enterprise customers."_

Claude Code goes to work. It reads your authentication module, your user model, your API layer. It writes clean, idiomatic code. It handles edge cases. It runs your test suite and fixes what breaks. It opens a pull request with a coherent description. **It does everything right, from a technical standpoint.**

But it had no idea that **47 different customers** asked for SSO over the past eight months. It did not know that **80% of those requests specifically mentioned Okta**, not SAML in general. It had no visibility into the **three enterprise deals, worth a combined $6 million**, that the sales team marked as blocked pending SSO support. It could not see that the two largest accounts threatening churn this quarter both cited SSO in their support tickets.
**Claude Code built what the ticket said. The ticket did not say any of that.**

This is not a failure of agentic AI. This is a structural problem in how software gets built - and it predates AI by decades. Agentic coding does not fix it. **In many ways, it accelerates it.**

## The Chain That Loses Everything

Think about what happens between a customer expressing a need and a developer - or an AI agent - receiving a task.

**The Signal Degradation Chain** - customer context lost at every handoff:

| Stage | Signal retained | What it carries |
| --- | --- | --- |
| Customer | 100% | "We need Okta SSO or we can't migrate" |
| Account Manager | 70% | "Deal blocked, needs SSO" |
| Sales Leadership | 40% | "SSO is enterprise blocker" |
| VP Product | 20% | "SSO, enterprise blocker" |
| Jira Ticket | 8% | "Implement SSO login" |
| Claude Code | 3% | "Implement SSO login" |

A customer tells their account manager they cannot move to your platform without SSO. The account manager logs a note in the CRM, flags the deal as blocked, and asks sales leadership to escalate. Sales leadership emails the VP of Product. The VP of Product opens a planning doc and writes a bullet: "SSO - enterprise blocker." In the quarterly planning meeting, that becomes a ticket: "Implement SSO login." The ticket lands in the backlog. Three months later, it reaches the top of the sprint. The developer - or Claude Code - picks it up.

**By this point, the original signal has passed through five or more layers of translation.** The customer's specific words are gone. The Okta requirement is gone. The deal context is gone. The urgency is gone. The pattern across 47 customers is gone. What remains is three words: _"Implement SSO login."_

> 64% of software features are rarely or never used after shipping - because context is lost between customer and code

This is not a new problem. The Standish Group has studied it for thirty years.
Their research consistently finds that **64% of software features are rarely or never used after they ship.** Nearly two-thirds of the code written, tested, deployed, and maintained by engineering teams delivers negligible value to the people it was built for.

The reason is almost always the same: **the team built what they were told to build, not what customers actually needed.** Because by the time a customer need reaches engineering, the context that would make it useful is gone.

Agentic AI inherits this problem completely. It executes with extraordinary precision on the inputs it receives. **The inputs are still broken.**

## The Confidence Gap

There is a related dynamic that makes this worse.

> 80% of companies believe they deliver a superior customer experience - only 8% of their customers agree

This gap is not cynicism or corporate delusion. **It's a measurement problem.** The teams building products genuinely believe they are close to their customers because they have processes for collecting feedback, running NPS surveys, and holding quarterly business reviews. Those processes feel rigorous.

But they are **lagging, sampled, and filtered.** By the time a pattern of customer frustration has become visible through formal feedback channels, months have passed and the signal has been compressed into averages and categories that strip away the specific, urgent, contextual information that would actually change what gets built.

This is why Anthropic's own research found that **developers can only "fully delegate" between 0% and 20% of tasks to AI agents** today. The technical capability to execute is not the limiting factor. The limiting factor is that agents cannot judge the value of what they are building, because that judgment requires information they do not have access to.
A developer with twenty years of domain experience and a real relationship with three enterprise customers **brings implicit context to every task they pick up.** They know which tickets matter more than the priority field suggests. They know when a seemingly small request represents a pattern they have heard five times in the last month.

An AI agent starting cold from a ticket has none of that. It has exceptional execution capability and **an information vacuum.**

## The Protocol That Was Built for This

In late 2024, Anthropic published the **Model Context Protocol - MCP.** It is an open standard that defines how AI agents connect to external data sources and tools. The adoption has been extraordinary.

> 17,000+ MCP servers published, with 97M+ monthly SDK downloads and adoption by OpenAI, Google, and 28% of the Fortune 500

The reason for this adoption is straightforward: **MCP solves a real problem.** AI agents are powerful, but they are isolated. They can only act on information that exists in their context window. MCP is the mechanism for getting the right information into that context window at the right time.

The most common early MCP implementations connected agents to databases, documentation systems, and code repositories - giving them better access to technical context. This is useful, but it is still technical context. **It improves how agents build. It does not address what they build.**

The more important application of MCP, and the one that is still underdeployed, is **connecting agents to product intelligence** - the aggregated, structured record of what customers have actually asked for, what is blocking deals, what is driving churn, and what patterns have emerged across thousands of customer conversations.

## What Changes When Product Intelligence Is in the Loop

Return to the SSO example. Claude Code receives the task: "Implement SSO login."
**Without product intelligence**, Claude Code:

- Reads the codebase and ticket description only
- Defaults to OAuth, the most common pattern in open-source examples
- Builds a technically correct, generic SSO implementation
- Ships something that works, but not what 80% of customers needed

**With product intelligence via MCP**, Claude Code:

- Queries customer signal before writing the first line of code
- Learns that 38 of 47 requests specifically mention Okta and SAML 2.0
- Builds Okta SAML 2.0 first, the integration 80% of customers need
- Opens a PR that references the $6M blocked pipeline and the churn-risk accounts

This does not require Claude Code to make a business decision. **It requires Claude Code to have the information that any senior developer would want before starting a significant feature.** With that information, it builds Okta SAML 2.0 first. It prioritizes the integration path that 80% of customers actually need. It writes the implementation notes to reference the deal context. It surfaces, in the pull request, the customer signal that motivated the work.

**This is not a marginal improvement.** It is the difference between shipping something technically correct and shipping something that moves the business.

## The Architecture That Makes This Possible

The 57% of organizations that have already deployed multi-step agent workflows are discovering that **the bottleneck is not agent capability - it is agent context.**

The technical architecture for solving this is available today. An MCP server sits between the product intelligence layer and the coding agent. It exposes structured queries: what are customers asking for in this feature area, what deals are blocked by this capability, what support tickets relate to this domain, what churn signals are associated with this part of the product. When Claude Code starts a task, it queries the MCP server.
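To make the shape of these queries concrete, here is a minimal Python sketch of the kind of aggregation such a server performs behind one tool call. Everything in it - the `Signal` record, the sample data, the `query_feature_area` function - is illustrative, not ClosedLoop's actual schema or the MCP wire protocol.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Signal:
    feature_area: str        # e.g. "sso"
    source: str              # "support", "crm", "call"
    detail: str              # the customer's own words
    blocked_revenue: int = 0 # USD attached to a blocked deal, if any

# Tiny illustrative dataset standing in for an aggregated signal store.
SIGNALS = [
    Signal("sso", "crm", "needs Okta SAML 2.0 before migration", 2_000_000),
    Signal("sso", "support", "Okta SSO required for security review"),
    Signal("sso", "call", "generic SAML is fine for us"),
]

def query_feature_area(area: str) -> dict:
    """Aggregate customer signal for one feature area - the structured
    answer a product-intelligence tool could return to a coding agent."""
    matches = [s for s in SIGNALS if s.feature_area == area]
    return {
        "request_count": len(matches),
        "blocked_revenue": sum(s.blocked_revenue for s in matches),
        "top_themes": Counter(
            "okta" if "okta" in s.detail.lower() else "other"
            for s in matches
        ).most_common(),
    }

print(query_feature_area("sso"))
# -> {'request_count': 3, 'blocked_revenue': 2000000,
#     'top_themes': [('okta', 2), ('other', 1)]}
```

In a real MCP server, a function like this would be registered as a tool so the agent can call it before planning an implementation; the interesting part is that the answer arrives already aggregated, in the customer's own language, rather than as raw CRM rows.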
The response shapes how it approaches the implementation - which edge cases to prioritize, which integrations to build first, which documentation to write, which internal stakeholders to surface in the PR description.

**The agent team structure that Claude Code supports** - where one agent researches, another plans, another implements, another reviews - can incorporate this product context at every stage. The researcher agent queries customer signal before planning begins. The reviewer agent checks whether the implementation addresses the highest-priority customer patterns before the PR is approved.

None of this requires rebuilding the development process. **It requires connecting the context that already exists** in customer conversation data, CRM notes, support tickets, and sales call recordings to the agents that are already doing the work.

## What Engineers Actually Lose When Context Is Stripped

There is a version of this problem that gets framed as a product management failure: PMs should write better tickets, include more customer context, be more specific about requirements. **This framing puts the burden in the wrong place.**

Product managers are not losing context because they are careless. They are losing context because the systems they use to collect, analyze, and communicate customer signal are not designed to flow that signal into engineering workflows. Survey tools output spreadsheets. CRM notes live in sales systems. Support ticket data sits in helpdesk platforms. Call recordings are transcribed and filed.
**Each system holds a piece of the picture, and none of them natively connect to the place where building decisions are made.** This is the silo problem that enterprise software has struggled with for decades - except now the cost of those silos is amplified by agents that build at machine speed from incomplete information.

The result is that the average senior engineer - someone with years of domain experience who genuinely wants to build the right things - is working from **a remarkably thin slice of available customer information.** They see the ticket. If they're lucky, there's a Slack thread with some additional context. If they are exceptionally proactive, they might read a few recent support tickets in the same area before starting.

They are not seeing the 47-customer request pattern. They are not seeing the revenue attached to open deals. They are not seeing the churn risk signals from the accounts in their feature area. **And Claude Code sees even less.** It sees the ticket and the codebase.

> A product intelligence MCP server does not replace product management or engineering judgment. It restores the information that both deserve to be working with.

## The Compounding Effect on Roadmap ROI

Consider what happens across a twelve-month roadmap when the context problem is solved versus when it is not.

In the status quo, **each feature that ships carries the drift introduced at every stage of the telephone chain.** Some features land accurately - often because a single, vocal champion inside the company maintained the context through sheer persistence. Many others land slightly or significantly off from what customers actually needed. The team celebrates a ship, and then watches usage metrics tell a different story. Adoption is lower than projected. Enterprise renewals come with requests to prioritize different things. The features that were supposed to reduce churn do not move the number.
In a world where coding agents have access to product intelligence at the moment of implementation, **the drift compounds differently.** Each feature is built with specificity that the ticket description alone could never provide. The Okta-first SSO implementation lands in the accounts that needed it. The audit log is built to the ISO 27001 standard that three enterprise prospects had specifically asked about. The multi-currency support handles the Euro edge cases that showed up in seventeen support tickets from EMEA customers.

**The total throughput of the engineering organization has not changed.** The agents are building at the same rate. But the proportion of that throughput that lands on things customers actually needed - and can therefore be recognized as value - shifts materially.

This is the lever that agentic coding makes newly urgent. **When agents are shipping 20% of your commits, the ROI multiplier of getting each commit right is enormous.** Equally, the cost of building 64% of features for minimal usage - at agent velocity - is enormous.

> Speed amplifies everything. The question is whether you are amplifying the right signal.

## The Inversion That Is Coming

Today's conversation about agentic coding is almost entirely about capability. What can the agent do? How many files can it change at once? How reliably does it write tests? How well does it handle complex refactors? These are real questions, and the progress on all of them has been dramatic. **But they are the wrong frame for understanding where value is actually created or destroyed.**

Value is created when the right things get built. Value is destroyed when 64% of shipped features go unused because they were built from ticket descriptions that had lost their customer context by the time they reached engineering.

**The teams that will compound the most value from agentic coding** are not necessarily the teams with the best prompt engineering or the most sophisticated CI/CD pipelines.
They are the teams that solve the context problem - that find a way to get real customer signal into the agent's decision-making at the moment the agent is working. MCP is the mechanism. The question is what data sits on the other side of that connection.

An agent that can read your codebase is powerful. **An agent that can read your codebase and understand what your customers have been asking for**, what your sales team is blocked on, and what patterns of frustration are driving churn - that agent builds differently. It prioritizes differently. The pull requests it opens move different needles.

The $2.5 billion trajectory of Claude Code reflects genuine capability. The 20% of GitHub commits projected by end of 2026 will happen. **The question is whether those commits reflect what customers actually need**, or whether they reflect increasingly well-executed implementations of the same broken telephone chain that has always lost context between customer and code.

## Connect Your Product Intelligence to Claude Code

The architecture described above is not theoretical. ClosedLoop AI's MCP server connects Claude Code to your customer signal today - feedback patterns, deal blockers, churn risk, and feature requests from 40+ integrations.

1. Get your API key at [app.closedloop.sh/api-keys](https://app.closedloop.sh/api-keys)
2. Run the installer: `curl -fsSL https://closedloop.sh/install | bash`
3. Ask Claude about your customers

[Get API Key](https://app.closedloop.sh/auth?mode=signup) · [Read Docs](https://closedloop.sh/docs/mcp-server)

The setup takes under two minutes. Once installed, Claude Code automatically pulls customer evidence when you ask it to implement a feature, write a spec, or scope a v1. No extra commands - the context is just there.
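For teams that prefer checked-in configuration over an installer script, Claude Code can also load MCP servers from a project-level `.mcp.json` file. The sketch below follows Claude Code's documented `mcpServers` format, but the server name, command, and environment variable shown are illustrative placeholders, not ClosedLoop's actual installer output:

```json
{
  "mcpServers": {
    "product-intelligence": {
      "command": "npx",
      "args": ["-y", "example-product-intelligence-mcp"],
      "env": {
        "PRODUCT_INTEL_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Committing a file like this to the repository means every developer - and every Claude Code session - in the project gets the same customer-signal tools without per-machine setup.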
## Closing the Loop

The missing layer is **product intelligence** - the structured, synthesized signal from customer conversations, support interactions, sales calls, and deal data that tells an agent not just what to build but why it matters and for whom.

This is exactly what ClosedLoop AI's MCP server provides: **a direct connection from the customer context layer to the coding agent layer**, so that the features Claude Code builds are informed by the signals your customers have actually been sending.

**The agents are ready. The protocol exists. The customer signal is already there. The question is whether you connect them.**