# GitHub Issues Are the Most Technically Precise Product Feedback You're Ignoring

Nov 9, 2025 · 12 min read · ClosedLoop AI Team

> 150 million developers use GitHub. Their issues, discussions, and pull requests contain the most detailed, actionable product feedback any team could ask for -- complete with reproduction steps and code examples.

When a developer files a GitHub issue, something unusual happens. They do not say "the app is broken." They say: "Expected behavior: the function returns a sorted array. Actual behavior: returns unsorted when the array contains duplicate values. Reproduced on Node 20.11.0, not present on 18.x. Minimal reproduction: [linked repository]."

That is not a complaint. That is a complete specification of a defect, written by someone who understands the system deeply enough to isolate the failure condition, identify the version boundary, and strip the problem down to its irreducible minimum.

No survey in the world produces feedback this actionable. No NPS response, no customer interview, no in-app feedback widget comes close to the diagnostic precision embedded in that single issue report.

This level of technical specificity is not exceptional on GitHub. It is the norm. And it represents one of the most underutilized sources of product intelligence available to software teams.
## The Scale of the Corpus

GitHub's growth over the past two years reframes what it means to have access to developer feedback at scale. As of 2025, GitHub hosts more than 150 million registered developers -- a figure that grew from 100 million in 2023, meaning the platform added 50 million new users in roughly two years. Thirty-six million of those new developers joined in 2025 alone.

These users work across more than 1 billion repositories. In 2024, they made 5 billion contributions across public and private projects. In 2025, they pushed nearly 1 billion commits -- a 25% increase year over year.

The business context reinforces the significance of these numbers. GitHub generates approximately $2 billion in annual revenue. Ninety percent of Fortune 100 companies run code on GitHub. The platform has ceased to be a niche tool for open-source enthusiasts and become the de facto infrastructure of the global software industry.

Layered on top of this infrastructure are issues, discussions, and pull requests -- the structured feedback mechanisms where developers document what is broken, what is missing, and how they think software should behave differently. Historical data from 2019 counted more than 20 million issues closed in a single year, a figure that has almost certainly grown substantially as the platform's user base has expanded since then. The curl project, a foundational open-source library used in billions of devices, has accumulated 20,000 issues over its lifetime and historically closed half of all new issues within six hours. The Flutter team triages between 75 and 150 new issues every week, maintaining a living backlog that functions as one of the most granular public records of a developer tool's evolving requirements.

The volume is staggering. But the more important story is what this volume actually contains.

## Developer Feedback Is a Different Category of Signal

Every feedback channel produces signals of some kind. Support tickets describe problems.
Sales call transcripts surface objections. NPS surveys indicate sentiment. Customer interviews reveal workflows. Developer feedback on GitHub is different in kind, not just in quantity.

When a developer files an issue, they are typically solving a problem for themselves. They are not filling out a form because a customer success manager asked them to. They are not responding to a survey prompt. They are blocked, or they have encountered something unexpected, or they have identified a gap between what a tool does and what they need it to do -- and filing an issue is the most efficient path to a resolution.

This self-selection effect means that GitHub issues represent feedback from users who are engaged deeply enough with a product to encounter its edge cases, motivated enough to document those edge cases precisely, and technically sophisticated enough to do so in a way that is immediately useful to engineers.

The contrast with conventional feedback channels is stark. NPS surveys average a 12.4% response rate. In-app feedback widgets perform better, reaching around 36% in optimized implementations. But response rate is only part of the problem with these channels. The deeper issue is what the responses contain. An NPS detractor who scores a product a four out of ten conveys sentiment without information. A developer who files an issue the same day conveys a precise failure mode, the conditions under which it manifests, the versions affected, and often a hypothesis about root cause.

These are not different quantities of the same kind of feedback. They are qualitatively different kinds of information, with fundamentally different utility for product and engineering decisions.
Research published at ICSE 2022 analyzing GitHub Discussions -- a more conversational layer added on top of issues -- found that technical discussions on GitHub exhibited more positive sentiment than equivalent conversations on Stack Overflow, and that the community norms around documentation and reproduction meant that even critical feedback tended to be constructive rather than merely negative. Developers filing issues are not just venting. They are trying to get something fixed.

## What Lives Inside GitHub Feedback

The feedback captured across GitHub's issue tracker, discussions, and pull request conversations spans five distinct signal categories, each carrying different product intelligence value.

**Feature requests with specificity that surveys cannot generate.** When developers request features through GitHub issues, they do so with technical precision. A feature request in a survey might say "better error messages." A feature request in a GitHub issue says: "When the API returns a 429, the current error message doesn't include the Retry-After header value, making exponential backoff implementations guess at the cooldown period. The raw header is present in the response object. Surfacing it in the error would let us write retry logic without inspecting raw headers." That issue contains not just a request but a complete specification, a rationale grounded in real implementation experience, and an indication of the upstream complexity the feature would eliminate.

GitHub's reaction system -- the thumbs-up, heart, and rocket emoji responses on issues -- provides a crude but meaningful demand signal on top of this specificity. An issue with 847 reactions is not definitive proof of prioritization-worthy demand, but it is a stronger signal than a feature request that arrived as a single survey response.
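The retry logic that issue asks to simplify fits in a few lines. Here is an illustrative Python sketch that prefers a server-supplied `Retry-After` value and falls back to exponential backoff with jitter when the header is absent; the function and parameter names are hypothetical, not any particular library's API:

```python
import random


def cooldown_seconds(headers, attempt, base=1.0, cap=60.0):
    """Return how long to wait before retrying a rate-limited request.

    Prefers the server's Retry-After header (delay-seconds form); when it
    is missing or unparseable, falls back to capped exponential backoff
    with full jitter. `headers` is a plain dict of response headers.
    """
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            return float(retry_after)
        except ValueError:
            # Retry-After may also be an HTTP date; this sketch ignores
            # that form and falls through to backoff.
            pass
    # Exponential backoff: base * 2^attempt, capped, with full jitter.
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

The point the issue makes is visible in the code: if the error object hid the header, the first branch could never run, and every client would be forced into blind backoff.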
**Bug reports as structured defect documentation.** The issue template system on GitHub, particularly the YAML-based forms introduced to encourage structured reporting, has dramatically raised the floor on bug report quality. Templates commonly prompt for expected behavior, actual behavior, reproduction steps, environment details, and relevant logs. Even without templates, community norms on established projects mean that developers filing issues tend to anticipate what information maintainers will ask for and include it proactively. The result is a corpus of bug reports that often reads like first-draft bug specifications.

**Architectural feedback embedded in pull requests.** When a developer submits a pull request that changes a significant architectural pattern, the review conversation that follows frequently contains the most substantive design debate the team has. PR comments surface disagreements about API design, performance tradeoffs, backward compatibility concerns, and long-term maintainability considerations. This is product and architecture feedback of the highest quality, generated by engineers who have engaged deeply enough with the codebase to propose concrete changes. Most product intelligence pipelines never see it.

**Integration demand through repository relationships.** The pattern of which projects open issues against which other projects -- requests for official integrations, compatibility questions, feature requests driven by a specific use case in a dependent tool -- reveals integration demand that developers rarely articulate in any other channel. A cluster of issues requesting Kubernetes operator support, appearing across multiple projects over a six-month window, signals an ecosystem demand pattern that would not surface in any CRM.
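The YAML-based issue forms mentioned above are small files checked into a repository under `.github/ISSUE_TEMPLATE/`. A minimal sketch using GitHub's issue-forms schema follows; the field names and labels are illustrative, not taken from any real project:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative)
name: Bug report
description: File a structured bug report
labels: ["bug", "needs-triage"]
body:
  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
    validations:
      required: true
  - type: textarea
    id: actual
    attributes:
      label: Actual behavior
    validations:
      required: true
  - type: textarea
    id: repro
    attributes:
      label: Reproduction steps
      description: A minimal, self-contained reproduction
    validations:
      required: true
  - type: input
    id: environment
    attributes:
      label: Environment
      placeholder: "OS, runtime version, package version"
  - type: textarea
    id: logs
    attributes:
      label: Relevant logs
      render: shell
```

Because required fields block submission, a form like this guarantees that every report arrives with the expected/actual/reproduction triad intact.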
**Performance and scale concerns from practitioners at limits.** Developers encountering performance ceilings tend to document them in detail: the data volume at which the problem appeared, the query that triggered it, the CPU and memory profile, the comparison to an earlier version or a competing tool. These reports are irreplaceable inputs for roadmap decisions about infrastructure investment, because they come from users who are actually at scale, with evidence.

## The VS Code Case Study: Community-Driven Feature Development at Scale

The trajectory of Visual Studio Code as a product offers the clearest documented case of what happens when a product team treats GitHub issues as a primary feedback channel rather than a secondary one. VS Code moved to GitHub Issues from UserVoice early in its development, consolidating feature requests, bug reports, and product discussion in a single public repository. The outcome was not just operational simplification. It changed how the product developed.

Three features that became definitional to VS Code's identity -- the tabbed editor interface, the integrated terminal, and the extension ecosystem -- emerged directly from community pressure on the GitHub repository. These were not features that Microsoft's product managers identified through internal analysis. They were features that developers demanded, explained, debated, and in some cases prototyped through proposals and pull requests on GitHub.

The integrated terminal, in particular, was driven by persistent community requests arguing that switching between editor and terminal broke developer flow in ways that hurt productivity. The issue thread accumulated enough evidence, enough specific use cases, and enough proposed implementations that the business case built itself.
The VS Code repository has since become one of the most actively maintained on GitHub, with contributors filing issues, submitting PRs, and participating in discussions at a scale that most product teams have no framework to process. But the principle it demonstrated applies to any team building developer tools: GitHub issues are not a support burden. They are a product intelligence feed.

## The Extraction Problem

If GitHub Issues contain the highest-quality product feedback available to software teams, why are most teams not systematically using them? The answer is structural. GitHub was built for code management and collaboration. It was not built for product intelligence extraction, cross-repository analysis, or feedback synthesis.

**The repository fragmentation problem.** A company with a mature product ecosystem might maintain dozens of repositories: the core product, SDKs for multiple languages, infrastructure tooling, documentation, plugins, and integrations. Feedback appears across all of them, inconsistently labeled, without a unified view of cross-repository patterns. A performance concern raised in the Python SDK repository may be directly related to an architectural discussion in the core API repository, but no tooling surfaces that connection. Each repository is its own silo.

**The labeling inconsistency problem.** GitHub's label system is flexible and entirely discretionary. Different repositories use different label taxonomies, applied with varying consistency by different maintainers and contributors. An issue labeled "enhancement" in one repository might be labeled "feature-request," "p-feature," or simply left unlabeled in another. Systematic analysis across these inconsistencies requires normalization that the platform itself does not provide.

**The staleness problem.** GitHub repositories accumulate issues over time, and the ratio of stale issues to currently relevant ones grows steadily.
An issue filed in 2021 requesting a feature that was subsequently built but never closed, or requesting something that no longer applies because the product architecture changed, looks identical to a current priority from the outside. Without active triage, older repositories become partially unreliable records where signal and noise are interleaved chronologically.

**The reaction-as-voting limitation.** The thumbs-up reaction on a GitHub issue is a blunt instrument for measuring demand. It aggregates across all users regardless of account tier, usage depth, or revenue significance. An issue with a thousand reactions from free-tier users on a self-hosted deployment is a different kind of signal than an issue with forty reactions from enterprise customers on managed accounts, but the platform treats them identically. Without enrichment from business context -- ARR, customer tier, account health -- reaction counts cannot support revenue-weighted prioritization.

**The discussion dispersion problem.** GitHub now surfaces feedback across three distinct mechanisms: Issues, Discussions, and pull request comments. Each serves a different conversational function. Issues are for tracked items. Discussions are for open-ended conversations and proposals. PR comments are for in-context review. The same underlying product concern might manifest differently in each channel, and synthesis across all three requires processing that spans fundamentally different data structures and community norms.

**The cross-project signal problem.** Some of the most important feedback about a developer tool does not appear in that tool's own repository. It appears in the repositories of projects that depend on it. When a framework's users file issues against downstream tools attributing their problem to an upstream limitation, that upstream limitation may have no corresponding issue in the upstream repository. The signal is real. It is just located somewhere the maintainers are not looking.
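The label normalization the platform does not provide is straightforward to sketch once a team commits to a shared taxonomy. A minimal Python illustration, operating on issues already fetched as plain dicts -- the mapping, repository names, and issue data below are invented for the example:

```python
# Map each repository's local label vocabulary onto one shared taxonomy.
# This mapping is illustrative; a real one is built per organization.
CANONICAL_LABELS = {
    "enhancement": "feature-request",
    "feature-request": "feature-request",
    "p-feature": "feature-request",
    "bug": "bug",
    "defect": "bug",
    "perf": "performance",
    "performance": "performance",
}


def normalize(issue):
    """Return a copy of the issue with labels mapped to the shared taxonomy.

    Unmapped labels are kept (lowercased) so nothing is silently dropped.
    """
    labels = {CANONICAL_LABELS.get(l.lower(), l.lower()) for l in issue["labels"]}
    return {**issue, "labels": sorted(labels)}


# Hypothetical issues from two repositories with divergent label vocabularies.
issues = [
    {"repo": "core-api", "title": "Surface Retry-After", "labels": ["enhancement"]},
    {"repo": "python-sdk", "title": "Retry-After missing", "labels": ["p-feature", "Perf"]},
]
normalized = [normalize(i) for i in issues]
```

After this pass, "enhancement" in one repository and "p-feature" in another count toward the same `feature-request` bucket, which is the precondition for any cross-repository tally.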
## The Scale That Makes Manual Analysis Unworkable

The VS Code repository received thousands of issues per month at peak engagement. The Kubernetes project maintains tens of thousands of open issues across its ecosystem. Rust's issue tracker has accumulated tens of thousands of filed items across its existence.

For an enterprise software team, the GitHub footprint is smaller but the extraction challenge is proportionally similar. A developer-facing product with five repositories, an active community, and two or three years of history might have five thousand to ten thousand issues, spread across multiple label taxonomies, with varying degrees of staleness, and pull request conversations scattered across hundreds of merged and unmerged branches.

Reading all of it manually is not a product management strategy. It is a full-time job for multiple people, and even then, the analysis would not be systematic enough to surface cross-repository patterns, track the evolution of themes over time, or connect individual issues to the business context needed for prioritization.

The feedback is there. The technical precision is there. The signal quality is unmatched by any other feedback channel available to product teams building for developers. What is missing is the infrastructure to extract it at the scale at which it exists, normalize it across the structural inconsistencies of real-world repository ecosystems, and deliver it in a form that product managers can actually use.

## What Systematic Extraction Looks Like

The gap between the feedback that exists on GitHub and the feedback that actually reaches product decisions is not a people problem. Development teams are not failing to read issues because they do not care. They are failing to read them systematically because the volume, fragmentation, and structural inconsistency of the data exceed what any manual process can handle at scale.
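One reason raw issue counts mislead at this volume is duplication: the same concern filed many times in different words. A deliberately naive Python sketch of similarity-based grouping makes the idea concrete -- it uses word-overlap (Jaccard) similarity on titles only, where a production system would compare embeddings of full issue bodies; the threshold and sample titles are invented:

```python
def jaccard(a, b):
    """Word-overlap similarity between two issue titles, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def group_duplicates(titles, threshold=0.5):
    """Greedily cluster titles: each title joins the first group whose
    representative (first member) is similar enough, else starts a new group."""
    groups = []
    for title in titles:
        for group in groups:
            if jaccard(group[0], title) >= threshold:
                group.append(title)
                break
        else:
            groups.append([title])
    return groups


# Hypothetical issue titles: two near-duplicates and one unrelated item.
titles = [
    "429 responses should include Retry-After",
    "429 responses should expose the Retry-After header",
    "Dark mode for the dashboard",
]
groups = group_duplicates(titles)
```

The two rate-limiting titles collapse into one group of measured size 2, while the unrelated request stands alone -- turning sixteen filings of one concern into one signal with a demand count is exactly this operation at scale.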
Closing this gap requires treating GitHub's issue tracker, discussion threads, and pull request comments as a structured data source rather than a conversational interface.

That means entity extraction: identifying the feature requests, bug reports, API design proposals, and performance concerns embedded in natural language and linking related items across repositories and label taxonomies. It means deduplication: recognizing that the same underlying concern filed sixteen times by different users in different words is a single signal with a measured demand level, not sixteen separate items. It means enrichment: connecting individual issues to the business context -- customer account, tier, revenue -- needed to weight them appropriately in prioritization decisions. And it means synthesis: surfacing trends that are building across repositories and over time before they reach the magnitude where they become impossible to ignore.

Developer feedback on GitHub is the highest-quality product feedback most software teams have access to. It is filed by sophisticated users, at the moment of contact with the product, with technical specificity that no survey instrument can replicate. It is already there, already written, already validated to some degree by community reactions. The question is not whether it contains product intelligence. The question is whether product teams have the infrastructure to extract it.

ClosedLoop AI connects to the platforms where customers actually articulate what they need -- including GitHub repositories -- and turns that raw technical feedback into structured product intelligence that reaches the people making roadmap decisions.

Jiri Kobelka, Founder