# Cursor Makes Your Team 10x Faster - But Are They Building the Right Thing?

> AI coding tools like Cursor have revolutionized development speed. But when 64% of features go unused, building faster without product intelligence just means wasting resources at unprecedented velocity.

Oct 19, 2025 · 11 min read · ClosedLoop AI Team

Your team just shipped a feature in two days that would have taken two weeks. The code is clean, the tests pass, the deployment was smooth. The engineering org is buzzing with momentum. Cursor has genuinely changed what's possible.

There's only one problem: the feature nobody asked for is still a feature nobody asked for. You just built it 10 times faster.

Speed is not the same as direction. And in software development, the gap between those two things is where most product value disappears.

## The Speed Revolution Is Real

Let's be clear: Cursor's rise is not hype. It is one of the most extraordinary growth stories in the history of enterprise software. The company hit $1 billion in ARR in approximately 24 months - a pace with no meaningful precedent in SaaS. That growth reflects genuine, measurable value delivered to engineering teams. For context, most enterprise SaaS companies take seven to ten years to reach that milestone. Cursor did it in two.

The productivity numbers back this up. Developers complete tasks 55% faster when using AI coding tools. By 2026, AI is estimated to write 41% of all code. These are not marginal improvements - they represent a fundamental shift in how software gets made.

Cursor specifically excels at the mechanical work of software engineering: understanding your codebase at depth, autocompleting with context, generating boilerplate that actually fits your conventions, refactoring across files, writing tests. It takes the cognitive overhead of translation - turning intent into working code - and compresses it dramatically.

For teams that have adopted it seriously, the before/after feels almost disorienting. Developers describe it the same way people describe the first time they used Google Maps instead of printed directions: you cannot quite imagine going back.

And adoption is accelerating. The productivity gains are too real, and the friction reduction too significant, for developers who use AI coding tools to give them up. AI-assisted development is not a trend. It is the new baseline. Engineering organizations that are not already evaluating these tools are already behind.

So what's the problem?

## The Paradox Hidden in the Speed Numbers

Here is a research finding that tends to get buried in the wave of enthusiasm about AI coding tools: a rigorous study by METR found that experienced developers working on real-world tasks were actually 19% _slower_ when using AI assistance on tasks that were unfamiliar to them.

Read that again. Nineteen percent slower.

The explanation is not that AI tools are bad.
It is that they require the developer to already understand what they are building. AI dramatically accelerates execution - but it does not generate clarity about what to execute. When the goal is fuzzy, AI-generated suggestions still need to be evaluated, directed, and corrected. The cognitive overhead shifts rather than disappears.

This distinction matters enormously. Cursor is extraordinarily good at helping you build a thing faster once you know what thing you need to build. It is less help in answering the prior question: what should we be building at all? That question - the one that determines whether all the velocity means anything - lives almost entirely outside the IDE.

## Speed Amplifies Every Decision, Including the Bad Ones

There is a compounding dynamic at work that most engineering leaders have not fully internalized. Speed is a force multiplier. It amplifies good decisions and bad decisions at equal rates. A team that is building the right things and ships faster wins more. A team that is building the wrong things and ships faster loses more, faster.

Consider the baseline: the Standish Group's CHAOS Report has documented for decades that 64% of software features are rarely or never used by the people they were built for. Forty-five percent are never used at all. This is not a startup problem or a legacy-company problem. It is endemic to software development as an industry. That means, on average, the majority of what gets built does not meaningfully serve customers.

Now apply a 55% productivity multiplier to that baseline. The teams that were producing 100 units of wasted work per quarter are now producing 155 units of wasted work per quarter, with the same headcount. The waste does not decrease because you can build faster. It increases, because you build more.

The tools that make engineers faster also make feature waste faster. You cannot separate those two things without first solving the input problem: making sure the thing being built is actually the thing customers need.

## The Context Gap Between Customer and Code

Here is the actual information flow in most software organizations, stated plainly. A customer experiences a problem or has a need. They express it to a sales rep, in a support ticket, in a call with their account manager, in a Slack message to their CSM, or in a quarterly business review. That expression gets interpreted and partially transcribed. The transcription gets summarized in a ticket. The ticket gets prioritized in a sprint. The sprint item gets picked up by a developer. The developer opens Cursor.

That is five or more layers of translation, summarization, and information loss between the customer's actual need and the moment code gets written. At each layer, nuance disappears. Business context falls off. Urgency signals get normalized into priority scores. The specific customer who mentioned this - their industry, their use case, what was at stake for them - all of that is gone by the time the ticket lands in the IDE.

Cursor reads your codebase with remarkable depth. It understands your patterns, your architecture, your naming conventions, your test structure. What it cannot read is your customers. The ticket description is the entire input. And the ticket description, by the time it reaches the developer, is a thin shadow of the original customer signal.

This is the context gap. It is not a technology problem that better AI coding tools will solve. It is a structural problem in how product decisions get made and communicated.
## What Gets Lost in Translation

To make this concrete, think about the categories of information that exist in customer-facing conversations and almost never make it into engineering tickets.

**Deal context.** A customer mentioned in a sales call that this feature was a blocker to signing. That information rarely flows to the developer who is actually building it. The developer does not know that what they are building affects a $200,000 ARR expansion - or that it was promised in a specific timeframe.

**Pattern signals.** Twelve different customers have expressed versions of the same frustration across support, sales, and success conversations over six months. No single ticket captures that pattern. The developer working on any individual ticket sees one request, not twelve. The actual priority - which should be high - is invisible.

**Churn indicators.** A customer said something in a conversation that an experienced CSM would immediately recognize as a churn signal. It was logged somewhere, but the connection between that signal and the product capability that would address it never got made explicitly enough to drive engineering priority.

**Usage reality.** A feature was built based on what customers said they wanted. What they actually do in the product is different. That behavioral data exists in analytics, but it has not been connected back to the roadmap in any systematic way.

**Competitive intelligence.** A customer mentioned in passing that a competitor recently shipped something that changes their evaluation criteria. That comment was noted in a CRM field somewhere. It has not influenced sprint planning because nobody drew the connection.

None of this is discoverable from inside the IDE. None of it flows automatically from the customer conversation to the engineering context. The developer working in Cursor is working with incomplete information, and the more powerful their tools, the faster they build on top of that incomplete information.

## The 80/8 Problem in Product Development

Bain & Company published a finding that has become something of a landmark in product and customer experience circles: 80% of companies believe they deliver a superior customer experience; 8% of customers agree.

That is not a small gap. It is a systematic delusion at scale. Companies are confidently, consistently wrong about whether they understand what their customers need - and they are wrong at rates that suggest the problem is structural, not individual.

The same dynamic plays out in feature development. Product teams believe they understand their customers' priorities. They build accordingly. The features go unused or underused because the belief was based on incomplete signals, filtered feedback, and organizational distance from actual customer experience.

This is where the confidence that AI tools can generate becomes dangerous. AI coding tools make it easier to feel productive. Velocity feels like progress. Shipping feels like winning. But 64% of the features being shipped at high velocity are rarely or never used, which means a lot of that productivity is very efficient waste.

There is a related finding worth sitting with: only 3% of developers report "high trust" in AI-generated outputs. And yet 88% of AI-generated code stays in the final version. Developers are reviewing but largely not changing what the AI produces. The bottleneck has moved - it is no longer writing the code, it is knowing what code to write. That distinction is the entire product intelligence problem.
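
To see why "efficient waste" is the right phrase, here is a rough back-of-envelope sketch. It simply combines the figures cited above - a 55% velocity gain and a 64% unused-feature rate - under the simplifying assumption (ours, not the studies') that the unused rate holds constant as output rises. The numbers are illustrative, not a forecast.

```python
# Illustrative only: combines the 55% speedup and 64% unused-feature figures
# cited in this article, assuming the unused rate holds constant as velocity rises.

baseline_features = 100      # features shipped per quarter before AI tooling
velocity_gain = 0.55         # ~55% faster task completion with AI coding tools
unused_rate = 0.64           # share of features rarely or never used (CHAOS Report)

accelerated_features = baseline_features * (1 + velocity_gain)  # 155 features

for label, shipped in [("before", baseline_features), ("after", accelerated_features)]:
    used = shipped * (1 - unused_rate)
    wasted = shipped * unused_rate
    print(f"{label}: shipped={shipped:.0f}, used={used:.1f}, wasted={wasted:.1f}")

# before: shipped=100, used=36.0, wasted=64.0
# after: shipped=155, used=55.8, wasted=99.2
```

Useful output does rise - but the absolute volume of work nobody uses rises at exactly the same rate.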
## The Input That Determines Everything

There is a version of the AI coding productivity story that is incomplete in a way that matters. The story goes: AI makes developers faster, so teams ship more. More shipping means more value delivered. More value means better products and happier customers.

Each step in that chain depends on the one before it being true. And the first step - AI makes developers faster - is clearly true. But the second step - more shipping means more value delivered - is only true if what is being shipped is what customers actually need. When 64% of features go unused, more shipping does not mean more value. It means more well-engineered, professionally deployed, high-velocity waste.

The input that determines whether the chain holds is product intelligence: a clear, continuous, accurate signal about what customers need, what is frustrating them, what is blocking them, what would change their business if it existed. That signal has to flow from customer conversations into prioritization decisions before it can flow into the IDE.

Right now, for most organizations, that flow is broken. The customer feedback exists - in Gong calls, in Zendesk tickets, in Slack conversations, in HubSpot notes, in support transcripts. It is not reaching engineering in a form that is actionable, contextualized, or connected to the actual words customers used.

Consider how this plays out in a typical sprint planning session. A product manager presents a prioritized list of tickets. Developers ask clarifying questions. The PM answers as best they can, drawing on their mental model of what customers need - a mental model formed from a combination of formal research, stakeholder conversations, intuition, and organizational politics. Nobody in that room has direct access to the customer conversations that should be driving those decisions. Everyone is working from a representation of a representation.

Then the developers open Cursor, and they build at extraordinary speed based on that attenuated signal.

Cursor can make your developers extraordinarily fast once they know what to build. The hard problem - the one that determines whether all that speed translates into products customers love - is making sure they know what to build in the first place. And that hard problem has gotten more expensive to ignore with every percentage point of productivity improvement AI coding tools have delivered.

## Speed Is Not the Constraint

The constraint on shipping great software has never been how fast developers can write code. Even before AI coding tools, most organizations could ship more than their product intelligence could justify. The bottleneck was always upstream: knowing what to build, being confident enough in that knowledge to commit resources to it, and having the customer signal to validate or correct course quickly.

AI coding tools have radically lowered the cost and raised the speed of execution. They have not touched the cost or speed of understanding customers. That asymmetry matters more than it used to.

If execution was already cheaper than product intelligence before Cursor, it is dramatically cheaper now. A mid-sized engineering team that has adopted AI coding tools can ship in one quarter what previously took a year. That is not an exaggeration - it is what teams are reporting. But the product decision quality feeding that execution engine has not improved at the same rate.
Most teams are still running quarterly customer interviews, annual satisfaction surveys, and informal feedback collection that has not changed meaningfully in a decade. The result is a widening gap: faster and faster execution, running on product intelligence infrastructure that was designed for a world where shipping was slow. The mismatch is becoming visible in utilization rates, churn patterns, and customer satisfaction scores that do not move despite genuine engineering velocity.

The teams that will win with AI coding tools are not just the ones that adopt them fastest - they are the ones that pair that execution speed with equally fast, equally reliable insight into what customers actually need. They are the ones who understand that Cursor and tools like it have not eliminated the need for product intelligence. They have made it more urgent.

Eighty-four percent of product teams already report worrying that they are building the wrong thing. That anxiety is well-founded. And the solution is not to slow down development - it is to make sure the signal flowing into development is strong enough to deserve the velocity.

Building faster matters. Building the right thing matters more. And the faster you build, the more it matters that you know which one you're doing.

## Closing the Loop

The gap between customer insight and engineering context is not a new problem. What has changed is its cost. When you ship features at 10x the previous speed, you need product intelligence flowing into your development workflow at the same rate.

That is what ClosedLoop AI is built to do: connect the signals from customer conversations - sales calls, support tickets, success interactions - directly to the product decisions that drive your roadmap, so the things your team builds at unprecedented speed are the things customers were actually asking for.

_If your team has adopted AI coding tools and wants to make sure the velocity is pointed in the right direction, [see how ClosedLoop AI connects customer feedback to product decisions](https://closedloop.sh/product-teams)._