The Excellence Gap: Why Software Engineering Fundamentals Are More Critical Than Ever
The industry is producing more software engineers than ever while quietly hollowing out the understanding that makes software engineering worth doing. Here is what is causing it, what it costs, and what actually separates the engineers who compound in value from those who don't.
Abstraction Is a Multiplier, Not a Foundation
There is a seductive story told about modern software development: that layers of abstraction have democratized the craft, that managed cloud services and high-level frameworks have made deep technical knowledge optional. That story is half-true and fully dangerous.
Abstraction is one of the most powerful tools in engineering. It lets you reason and build at a higher level without reconstructing everything beneath you. TCP/IP is a good abstraction because the underlying behavior is rigorous, predictable, and well-understood. You can build on it without thinking about it, and it will not surprise you. A managed Kubernetes cluster operated by a team that has never debugged a pod scheduling failure is not the same kind of abstraction — it is a false floor.
The value of abstraction is that it lets you reason at a higher level without losing fidelity. It only does that if what's underneath it is sound.
The critical distinction is not between using abstractions and avoiding them. It is between engineers who can move fluidly up and down the stack — who reach for an abstraction confidently and can step below it when it fails — and engineers who have only ever been trained to navigate interfaces. The former group compounds in value. The latter hits walls they cannot see.
The Growing Excellence Gap
There is a divergence opening in the software engineering workforce — what I will call the excellence gap, a pattern this article argues is emerging from converging evidence, not a single named phenomenon in the research literature. On one side: engineers who have invested in understanding systems — data structures, concurrency, distributed systems primitives, the actual behavior of their tools under load. On the other: engineers who have learned to operate abstractions and produce output that looks correct until it isn't.
This gap is not new. But it is widening, and three compounding forces are accelerating it.
Force 1: Education Has Fallen Behind
Academic computer science programs are structurally unable to keep pace with industry tooling. A 2022 study published in ACM Transactions on Computing Education surveying 628 practitioners across 14 countries found a consistent pattern: practitioners struggle to become productive because of a misalignment between skills learned during university and what the industry actually needs.[2]
The gap is not primarily about frameworks or languages. It is about analytical thinking, systems reasoning, and the kind of depth that only develops through deliberate practice with hard problems. As Sean Gruber noted in a 2023 Virginia Tech study of software engineering education, designing courses takes time — and changing them to chase newer industry standards risks actively degrading foundational instruction in areas like testing practices and software design.[8]
Force 2: Velocity Culture Penalizes Depth
Deep domain understanding is structurally invisible in most sprint-driven organizations. The engineer who spends two weeks truly understanding how a caching layer degrades under certain key distribution patterns produces nothing visible during that time. The engineer who ships a feature that will fail under those same patterns in six months looks more productive by every metric the organization is measuring right now.
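The caching example can be made concrete. The simulation below is an illustrative sketch, not any particular production system: the same small LRU cache thrives under a skewed, hot-key access pattern and collapses to a zero hit rate under a repeated sequential scan of a keyspace larger than the cache.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Cache-aside LRU: a miss inserts the key, evicting the least recent."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)         # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

random.seed(0)

# Skewed workload: 90% of accesses go to a small hot set. LRU keeps the
# hot keys resident, so the hit rate stays high.
skewed = LRUCache(capacity=100)
for _ in range(10_000):
    key = random.randint(0, 50) if random.random() < 0.9 else random.randint(0, 9_999)
    skewed.get(key)

# Sequential scan over 1,000 keys with room for only 100: each access
# evicts the key that would have been reused next, so nothing ever hits.
scan = LRUCache(capacity=100)
for _ in range(10):
    for key in range(1_000):
        scan.get(key)

print(f"skewed hit rate: {skewed.hit_rate():.2f}")  # high
print(f"scan hit rate:   {scan.hit_rate():.2f}")    # 0.00
```

The second workload is the failure mode the paragraph describes: nothing about the cache changed, only the key distribution did, and the abstraction went from near-free to worse than useless.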
Depth does not disappear because organizations stopped valuing it. It disappears because the incentives never rewarded it to begin with. Over enough performance cycles, teams optimize toward what gets recognized.
Force 3: The AI Amplification Problem
Generative AI coding tools have created a new and significant pressure on fundamentals. The tools are genuinely useful — a controlled experiment by researchers from GitHub and MIT found developers using GitHub Copilot completed a programming task 55.8% faster than those without it.[12] But speed and correctness are not the same thing.
A 2024 analysis by GitClear of 153 million changed lines of code found that code churn — lines reverted or updated within two weeks of being authored — was projected to double in 2024 relative to its pre-AI 2021 baseline. The same analysis found a concerning decrease in refactoring activity and a sharp rise in copy-pasted code, patterns consistent with a workforce accepting AI-generated first drafts rather than reasoning about them.[10] A separate study by Uplevel Data Labs found that developers with Copilot access saw a 41% higher bug rate with no corresponding improvement in throughput — suggesting the tools may be redistributing, not eliminating, the costs of shallow thinking.[14]
A 2025 study of students performing brownfield programming tasks — working in unfamiliar codebases written by others — found that while Copilot access improved operational efficiency, excessive dependence on it diminished creative problem-solving capacity and led to superficial comprehension of programming principles.[11] The study's scope is students, not professional developers; the pattern it surfaces, however, is consistent with what the GitClear and Uplevel data suggest is happening more broadly in production codebases.
The Real Cost: A $2.4 Trillion Problem
The excellence gap is not an abstract concern about craft. It has a measurable financial footprint.
The Consortium for Information and Software Quality (CISQ) estimated the cost of poor software quality in the United States at $2.41 trillion in 2022 — driven by cybercrime exploiting software vulnerabilities, supply chain failures, and technical debt that organizations cannot afford to resolve.[15] Technical debt alone had risen to $1.52 trillion by the same year, making it the single largest obstacle to making changes in existing codebases.[17]
A 2023 McKinsey study, cited here via Tonic.ai, estimates that technical debt accounts for roughly 40% of IT balance sheets at the average large technology organization: not a one-time cleanup cost, but an ongoing drag on every new initiative.[18] A separate McKinsey survey of 50 CIOs found that 60% reported their organization's technical debt had grown over the prior three years, suggesting the problem is compounding rather than being managed down.[18]
These numbers do not exist in a vacuum. They are downstream of the same deficit: systems built by people who understood the interfaces but not what the interfaces were built on.
How Hiring Has Quietly Selected Against Fundamentals
The talent pipeline issue is not simply that universities produce under-prepared graduates. It is that the hiring process actively filters for a different profile than it claims to.
Most technical hiring at scale has been cargo-culted from a handful of large technology companies whose interview processes were designed for a specific kind of problem: systems at massive scale, operated by people who would never see the full system. These interviews test algorithmic puzzle performance, not domain reasoning. The signal has long since decayed, and candidates have adapted to optimize for the assessment rather than the underlying skill.
The result is structural: you can be hired as a senior engineer at a payments company without understanding double-entry accounting. At a healthcare platform without knowing what an HL7 message is. At an ad targeting system without understanding auction theory. The domain knowledge that would let you make sound architectural decisions — the kind that prevents the $2.4 trillion problem — is nowhere in the signal.
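To make the payments example concrete: the core of double-entry accounting is that every transaction is a set of entries whose debits and credits sum to zero, so money moves between accounts but is never created or destroyed. A minimal sketch of that invariant follows; the `Entry` and `post` names are illustrative, not any real ledger API.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    account: str
    amount_cents: int  # positive = debit, negative = credit

def post(ledger: dict, entries: list) -> None:
    """Apply a transaction only if its entries balance to zero."""
    if sum(e.amount_cents for e in entries) != 0:
        raise ValueError("unbalanced transaction")
    for e in entries:
        ledger[e.account] = ledger.get(e.account, 0) + e.amount_cents

ledger = {}
# A customer pays $10.00: cash is debited, revenue is credited.
post(ledger, [Entry("cash", 1000), Entry("revenue", -1000)])

# The books always balance, no matter how many transactions are posted.
assert sum(ledger.values()) == 0

# A transaction that creates money out of nowhere is rejected outright.
try:
    post(ledger, [Entry("cash", 500)])
except ValueError:
    pass  # correctly refused
```

An engineer who has internalized this invariant designs the schema, the API, and the failure handling around it; one who has not will eventually adjust a balance in one place without the offsetting entry, and no algorithm interview would have caught the difference.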
The interview itself creates a perverse incentive around intellectual honesty. The winning move when you don't know something is almost always to appear to reason toward an answer rather than say "I don't know." Genuine intellectual honesty — accurately acknowledging the limits of your knowledge — reads as weakness in a format designed to assess confidence and output. The disposition learned there does not disappear once someone is hired.
What Good Fundamentals Actually Buys You
This is where the conversation often goes wrong. Strong fundamentals are not about doing everything from scratch, refusing to use high-level tools, or cultivating a reflexive distrust of abstraction. They are about maintaining the capacity to reason downward when you need to.
An engineer with strong fundamentals uses abstractions faster than one without — because they are not confused by them. They understand what trade-off the abstraction is making, what assumptions it encodes, and what conditions will make it misbehave. When the abstraction leaks, they can step below it, form a coherent mental model of what is actually happening, and step back up with a solution. That capacity does not show up in a sprint velocity metric. It shows up in the absence of certain categories of catastrophic failures.
It also determines the quality of decisions made at design time. The engineer who understands eventual consistency reasons differently about where to place invariants than one who knows only that "distributed databases can have sync issues." The engineer who understands memory allocation reasons differently about data structures than one who knows that "arrays are faster than lists sometimes." These differences compound across thousands of decisions over a career.
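The eventual-consistency point fits in a few lines. The toy model below is a deliberately simplified stand-in for a replicated store (no real database works quite this crudely): two replicas each check a "balance never goes negative" invariant locally, both checks pass, and the converged state still violates it. That is why the engineer who understands the model places such invariants where a total order of writes exists, not at each replica.

```python
class Replica:
    """A toy replica that accepts writes locally and syncs deltas later."""
    def __init__(self, balance):
        self.balance = balance
        self.log = []  # deltas not yet shipped to peers

    def withdraw(self, amount):
        if self.balance - amount < 0:  # invariant checked on local state only
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.log.append(-amount)

    def sync_from(self, peer):
        for delta in peer.log:         # apply the peer's deltas on convergence
            self.balance += delta

a, b = Replica(100), Replica(100)
a.withdraw(70)   # passes: a still sees 100
b.withdraw(70)   # passes: b still sees 100, unaware of a's write
a.sync_from(b)
b.sync_from(a)
print(a.balance, b.balance)  # -40 -40: both local checks passed, invariant broken
```

Nothing in either replica's code is wrong in isolation; the failure lives in the gap between the local view each check ran against and the global state the invariant was meant to protect.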
Investing in fundamentals is what makes abstraction safe to use. It is not in tension with moving fast — it is what allows you to move fast sustainably.
The Analogy Holds Everywhere
The pattern is not specific to software. It is a structural truth about the relationship between abstraction and the foundations beneath it. A contractor who does not understand load-bearing principles can follow a blueprint competently until the situation falls outside the blueprint's assumptions. A trader who does not understand the instruments they are pricing can use a model correctly until market conditions invalidate its assumptions. A doctor who pattern-matches symptoms to treatments without understanding pathophysiology is fine in common cases and dangerous in edge ones.
The abstraction works until the situation falls outside the conditions the abstraction was designed for. The person who only knows the abstraction has no tools left when that happens — and, critically, may not even recognize that it has happened.
Closing the Gap: What Actually Works
The skills gap is real and growing. A 2024 survey by Pluralsight of 1,400 executives and IT professionals across multiple markets found near-unanimous agreement that foundational skills in software development represent the most critical and persistent gap in the industry, and that 78% of organizations had abandoned projects for lack of sufficiently skilled engineers.[1]
That gap is not closed by adding frameworks to a curriculum or rotating through cloud certifications. Research consistently finds that the highest-value interventions involve analytical thinking, systems reasoning, and deliberate exposure to the failure modes of real systems — the parts of engineering education that are hardest to automate and most expensive to acquire later in a career.
At the individual level, the implication is direct. The engineers who will compound in value over the next decade are those who treat their chosen tools as transparent — who ask not just "does this work?" but "why does this work, and under what conditions will it stop?" That habit, applied consistently across a career, is what separates engineers whose judgment is trusted from those whose output requires supervision.
At the organizational level, it means structurally protecting space for depth. That means hiring processes that include some assessment of domain understanding, not just algorithmic performance. It means performance frameworks that recognize the invisible work of quality — the problem not caused, the architecture not accreted, the conversation that steered a design away from a failure mode nobody else saw coming. And it means recognizing that a team capable of reasoning about its own systems is worth more than a team that can merely operate them.
A useful heuristic for evaluating your own depth: when something in your stack behaves unexpectedly, can you form a coherent hypothesis about why before reaching for a search engine or an AI tool? The ability to reason toward a plausible explanation — even a wrong one that you can test — is the hallmark of genuine fundamentals. The absence of it is the hallmark of interface navigation.
The excellence gap will not close because tooling improves. If anything, tooling improvements that reduce the visible cost of shallow understanding will widen it — by allowing more code to be written by people with less understanding of what it should do. The engineers on the right side of that gap are the ones who made a different bet: that understanding the system, not just operating it, was worth the investment.
That bet has always paid off. It is just becoming more valuable as fewer people are making it.
References
1. Pluralsight. 2024 Technical Skills Report. Conducted by Wakefield Research, February 2024. pluralsight.com
2. Akdur, D. "Analysis of Software Engineering Skills Gap in the Industry." ACM Transactions on Computing Education, 2022. dl.acm.org
3. Anewalt, K. & Polack, J. "Alignment between Academia and Industry in Software Engineering Programs." Cited in ACM TOCE, 2023.
4. Skillsoft. 2023 IT Skills and Salary Report. 2023. skillsoft.com
5. ICSE SEET 2024. "Bridging the Theory–Practice Gap in a Maintenance Programming Course." International Conference on Software Engineering Education and Training, 2024. conf.researchr.org
6. Nascimento, N. et al. "Understanding the Gaps in Software Engineering Education from the Perspective of IT Leaders: A Field Study." CSEDU 2023. scitepress.org
7. ICSE SEET 2024. "Introducing Computer Science Undergraduates to DevOps from Software Engineering Fundamentals." ICSE SEET 2024. conf.researchr.org
8. Gruber, S. M. Gaps in Software Engineering Education. Virginia Tech, 2023. vtechworks.lib.vt.edu
9. ResearchGate. "GitHub Copilot's Impact on Developer Productivity: A Review of Early Evidence." 2023. researchgate.net
10. GitClear. Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality. 2024. gitclear.com
11. Takerngsaksiri, W. et al. "Effects of GitHub Copilot on Computing Students' Programming Effectiveness." arXiv, 2025. arxiv.org
12. Peng, S. et al. The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv:2302.06590, 2023. arxiv.org
13. GitClear. AI Copilot Code Quality: 2025 Data. 2025. gitclear.com
14. Uplevel Data Labs. "Can Generative AI Improve Developer Productivity?" 2024. Reported by IT Pro. itpro.com
15. Black Duck / Synopsys. The Cost of Poor Software Quality in the U.S.: A 2022 Report. CISQ, 2022. blackduck.com
16. Security Magazine. "Poor Software Costs the US $2.4 Trillion." 2022. securitymagazine.com
17. Reyes, I. "The Annual Cost of Technical Debt: $1.52 Trillion." Medium, 2023. medium.com
18. Tonic.ai. "The Hidden Value of Test Data: A Case Study on Tech Debt." Secondary citation of McKinsey 2023 research and McKinsey CIO survey; the McKinsey figures have not been independently verified against the primary report. tonic.ai