Career & AI

AI and Your Future Job

AI is not a genie, not a replacement, and not magic. It is an aggregator and amplifier — but only when there is real expertise to amplify. Understanding that condition may be the most important career decision you make this decade.

Robert Fuller  ·  March 2026  ·  ~14 min read

Every wave of technological disruption arrives wearing the same costume: this time everything changes. That is not entirely wrong — disruptions do change things. But they tend to change them more selectively than the initial panic suggests. The spreadsheet did not eliminate accountants. But it did eliminate a lot of bookkeepers — the ones who wouldn't learn to use it, and whose value lived entirely in skills the spreadsheet could now handle. The ones who survived were not "accountants as a category." They were the ones who understood enough about their field to see what spreadsheets could and couldn't do, and who used that understanding to move toward work the tool couldn't perform.

That pattern — disruption transforms a field, survivors are those who adapt and grow, casualties are those whose value was in what the technology displaced — is the relevant lesson. And it applies directly to the AI era, with higher stakes and faster timelines.

The World Economic Forum's Future of Jobs Report 2025 projects that by 2030 roughly 92 million jobs will be displaced by automation while 170 million new roles emerge — a net gain of 78 million positions, with the gains concentrated among workers who can meet new demands. [1] The same report found that 39% of existing skill sets will be significantly transformed or rendered obsolete over that same five-year window. These are projections with real uncertainty, not actuarial forecasts — but the directional pressure is clear and consistent across nearly every credible labor market analysis.

This article is about what the "surviving bookkeeper" attributes look like in 2026 — across every kind of work, physical and cognitive.

1. The Amplifier — and Its Limits

Before discussing what to do about AI, it is worth dispelling the two myths that dominate the conversation: the genie myth and the terminator myth.

The genie myth holds that AI can grant any wish you phrase correctly — that it "knows" things the way an expert knows them, that it can reason about novel problems as a domain master would, and that its outputs are trustworthy by default. The terminator myth is the inverse: that AI is an unstoppable force that will mechanically displace every human worker it encounters.

Both fail to characterize what AI actually is.

What AI Actually Is — and Where the Analogy Has Limits

A large language model is a sophisticated pattern-matching system trained on a snapshot of human knowledge. It aggregates and recombines what has been written, said, and published up to its training cutoff. It does not reason from first principles, hold tacit expertise, or update dynamically as your field evolves. In the hands of someone with deep knowledge, it amplifies: it accelerates research, extends reach, eliminates friction. In the hands of someone without that foundation, it generates plausible-sounding outputs they cannot evaluate — and confident-sounding errors they cannot catch. This distinction is the whole argument.

A 2025 study by MIT Sloan economists Roberto Rigobon and Isabella Loaiza examined task data across all U.S. occupations and found that, between 2016 and 2024, the volume of "human-intensive" tasks — tasks requiring what they call EPOCH capabilities (Empathy, Presence, Opinion/judgment, Creativity, and Hope/vision/leadership) — actually grew. Their analysis argues this is consistent with AI being more likely to complement human workers than replace them at the task level, and that augmentation effects are significant alongside automation ones. [2]

There is, however, a credible skeptical view worth naming. MIT economist Daron Acemoglu — whose research on automation and labor markets spans three decades — argues in his 2024 paper "The Simple Macroeconomics of AI" that AI's economic impact will likely be considerably smaller than boosters claim, affecting a narrower set of tasks and generating more modest productivity gains over the next decade than transformational narratives suggest. [10] His work with Simon Johnson further argues that augmentation versus displacement is not a natural law — it is a design choice shaped by incentives, and corporate incentives often favor labor cost reduction over worker empowerment. This is important context: the argument that AI tends toward augmentation describes an observed pattern, not an iron guarantee.

What this means for individual workers: augmentation is the dominant effect in environments where employers are investing in it. The personal strategy described in this article — build expertise, use AI to amplify it — is most powerful where that investment is happening. Where it is not, the same strategy still protects you, but through a different mechanism: it makes you the kind of worker that better employers want to hire.

2. Two Questions That Actually Discriminate

Predictions about which specific jobs will be automated are notoriously unreliable. A more useful frame: instead of asking "will AI take my job?" ask two sharper questions about the nature of your work and your relationship to it.

The Two-Factor Safety Check

1. Does the core value of your role live in your capacity to generate new knowledge and judgment — or in the accumulated body of knowledge you currently hold?

2. Do you actively pursue continuous improvement in your field — not as a chore, but as a genuine orientation toward your work?

The first question identifies exposure. Work whose value derives from a relatively fixed rulebook — know the procedure, execute it consistently, repeat — is the category most susceptible to automation. A well-trained model can encode that rulebook. What it cannot encode is the ongoing generation of new judgment: diagnosing problems that don't fit the standard pattern, adapting to conditions the procedure didn't anticipate, making calls that require weighing competing considerations none of which are fully specified. The more your role requires the latter, the less replaceable its core is.

The second question identifies whether you are positioned to benefit from what AI offers. Even in roles where continuous learning is required to stay competitive, the benefit only flows to the worker who is actually doing it. This is not a judgment about character — but it is an honest observation about risk.

Two important caveats deserve naming here. First, these questions presuppose that the worker has meaningful agency over whether they invest in learning. For many people — caregivers, workers in economic precarity, those with health limitations, those in the final years of a long career — that agency is constrained in ways that are not a matter of choice. The argument in this article is most actionable for workers with the stability to act on it. Second, the safety check is not a guarantee: a worker who genuinely satisfies both criteria can still face structural displacement in their specific role or industry. What the check identifies is relative positioning, not immunity.

3. The Learning Edge — and Why It Compounds

There are two distinct mechanisms by which being a genuine, continuous learner protects you in an AI-enabled economy. They are worth understanding separately.

3a. You Can Stay Ahead of the Training Data

Every AI model has a knowledge cutoff — the date beyond which its training data does not extend. More importantly, even within that cutoff, AI is trained on what has been published about a field, not on the tacit, evolving, context-specific knowledge that practitioners develop through years of doing. Bleeding-edge practices, novel frameworks, domain-specific nuances, and newly discovered failure modes are often not yet codified in writing — and even when they are, they take time to become meaningfully represented in training data.

A practitioner who stays at the front edge of their field — not just reading about best practices but developing new ones — operates in territory where AI lags by design. This gap is recurring, not permanent: the frontier advances, AI catches up over training cycles, and the practitioner has already moved forward. For a continuous learner, this is not a sprint to win once; it is the nature of the work.

3b. You Can Extend Your Domain Using AI Itself

The second mechanism is more active. If you invest in continuous learning, AI becomes a powerful accelerant for that investment. The model does not replace your expertise — it dramatically lowers the time cost of extending it.

Augmentation in Practice

Consider a seasoned civil engineer who deeply understands structural load principles but has never worked with a specific new composite material. AI can help her rapidly survey the published literature, generate draft analysis approaches, and flag known failure modes in similar materials — compressing months of preliminary research into days. Her engineering judgment still determines whether the output is trustworthy. AI gave her velocity; she provided direction and discernment.

The difference between this and the failure mode is the difference between a surgeon who uses a robotic system to extend precision and a patient who asks an AI for a diagnosis. Both are "using AI." Only one is bringing expertise to bear on the output. Augmentation means using AI to optimize what you know and expand the pool of what you know — not to skip the knowing.

A GitHub research study found that developers using Copilot completed a controlled task 55% faster than those working unaided. [3] It is worth being clear about what that study was: a controlled experiment in which participants implemented an HTTP server in JavaScript — a self-contained, well-documented, greenfield task of the kind AI handles best. In a separate survey of regular Copilot users, experienced developers reported that the primary benefit was flow state preservation: AI handled the routine, freeing mental energy for the complex problems that actually require expertise. The speed finding and the qualitative finding come from different parts of the research, and the aggregate effect in production settings is more complicated. But the pattern is real: skilled engineers using AI tools deliberately report meaningful gains in both throughput and the quality of attention they can bring to hard problems.

4. It Is Not Just Knowledge Work — The Same Rules Apply Everywhere

A common version of the AI displacement narrative focuses narrowly on white-collar, knowledge-based roles. This creates a false sense of security in physical and trade-based careers — and a misleading sense of doom in cognitive ones. The evidence suggests both assumptions are too simple.

  • +45%: projected growth in advanced practice nursing roles (NPs, CRNAs, CNMs combined), 2023–2033 [4]
  • 85%: share of employers planning to upskill workers with AI training through 2030, per WEF [1]
  • ~2%: additional annual employment growth at AI-investing firms over a decade, per Brookings [5]

Physical trades — electricians, plumbers, HVAC technicians, construction managers, skilled machinists — have consistently ranked among the most automation-resilient roles because they require in-person problem-solving, physical dexterity, environmental judgment, and improvisation in unpredictable conditions. Robotics and AI have made real inroads in controlled manufacturing settings, but the unstructured complexity of field work — an HVAC system in a hundred-year-old building, a plumbing problem behind finished walls — remains genuinely hard to automate.

But the same two-factor check applies. The HVAC technician who is continuously learning about heat-pump efficiency improvements, advanced refrigerants, and smart building integration will be far more valuable than one running the same installation playbook they learned fifteen years ago. AI tools are already entering these fields: predictive diagnostics, AI-assisted building plans, robotic site survey tools. The learner uses these; the static practitioner eventually competes against them.

Logistics and supply chain offers another instructive case. A warehouse worker performing repetitive pick-and-pack operations is in the highest-automation-risk category in physical work. A logistics coordinator who understands routing optimization, exception handling, supplier relationships, and system failure modes — and who uses AI-assisted planning tools to manage more complexity than previously possible — is in a much stronger position. Same physical environment; different relationship to the work.

Healthcare offers the clearest example of the pattern holding across a complex field. Nursing, therapy, and direct patient care are projected to grow strongly through the AI era — not because AI cannot assist with documentation or triage (it can, and does), but because the core value of these roles is irreducibly relational and contextual. Within healthcare, however, the workers who are thriving are those who actively learn to use AI-assisted diagnostic tools, data-driven care protocols, and electronic health integrations. The static practitioner and the augmented practitioner carry the same job title but deliver meaningfully different outcomes.

"The most resilient career prospects belong to professionals who pair technical expertise and soft skills with AI literacy."

— JFF (Jobs for the Future), The AI-Ready Workforce, 2024 [6]

The Brookings study finding is worth sitting with. AI-investing firms increased total employment — but tilted their hiring sharply toward more educated, technically skilled workers, while reducing roles that did not require advanced credentials. [5] The same dynamic that creates opportunity for some workers concentrates risk for others. This is not a comfortable finding, and it deserves more than being dismissed as "a consequence of the amplification dynamic." It is a real polarization effect, and it shapes which individual strategies are likely to work.

5. Software Engineering: The Clearest Case Study

Among all knowledge-based professions, software engineering sits at the most visible intersection of AI capability and human craft. It is the field where the tension between augmentation and replacement is playing out most publicly, most rapidly, and most instructively. The concept at the heart of this tension has a name: vibe coding.

What Is Vibe Coding?

The term was coined in February 2025 by Andrej Karpathy, a founding team member at OpenAI and former AI director at Tesla. Karpathy described an approach in which a developer "fully give[s] in to the vibes" — describing a desired outcome in natural language, accepting the AI's generated code, and moving forward without deeply understanding what was produced. [7] Karpathy framed this in the context of personal projects and prototyping, where correctness guarantees and long-term maintainability are genuinely less critical.

The problem is not the concept but its migration into production software development — contexts where it was never designed to apply. Vibe coding at scale means building systems without meaningfully engaging with how they work, why they work, or whether they will continue to work under conditions the model did not anticipate. It is not the same as AI-assisted programming. The latter is a legitimate, powerful practice. Vibe coding is specifically the mode where developer judgment is suspended — where the model is treated as the architect and the human as a rubber stamp.

The Quality Evidence

GitClear, a software analytics firm, conducted a longitudinal analysis of 211 million lines of code changes from 2020 to 2024. Their findings are striking. The percentage of code changes associated with refactoring — a proxy for deliberate quality improvement — fell from 25% in 2021 to under 10% by 2024. In the same period, copy-pasted (duplicated) code quadrupled in volume. [8] Code churn nearly doubled. The researchers attribute these trends directly to the growth of AI code generation tools used without disciplined engineering review.

The Vibe Coding Hangover

By late 2025, developer forums and industry coverage were describing a recurring pattern: senior engineers inheriting AI-generated codebases with minimal test coverage, excessive abstraction, and cascading technical debt that made further development exponentially harder. The initial speed was real. The cost was deferred, not eliminated — and it compounded with every subsequent AI-assisted feature added to an already-fragile foundation.

This is exactly the outcome the data predicts. AI-generated code is plausible code — it looks right, compiles, and passes basic tests. But it is produced by a system with no understanding of the product's business logic, no memory of prior architectural decisions, no awareness of performance requirements at scale, and no judgment about long-term maintainability. An engineer who vibe-codes is outsourcing the work that makes them valuable.

Vibe Coding Approach

  • "Make this endpoint faster" → accept generated output
  • It works in dev; ship it
  • No review of the DB query plan, no load-testing
  • An N+1 query is silently introduced
  • Production incident at scale

Augmented Engineering Approach

  • Engineer diagnoses the problem: a DB query inside a loop
  • Uses AI to draft an optimized bulk query
  • Reviews the generated SQL, verifies the query plan
  • Writes a test asserting correct behavior
  • Load-tests against production volume
  • Ships with confidence and context
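The N+1 pattern in the contrast above is worth seeing concretely. A minimal, hypothetical sketch using Python's built-in sqlite3 (the schema, table names, and functions are invented for illustration, not taken from any cited study): the loop version issues one extra query per user, while the reviewed fix does the same work in a single aggregated JOIN, and a test asserts the two agree before anything ships.

```python
import sqlite3

# Hypothetical schema for illustration; amounts are integer cents
# so the sums below are exact.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, cents INTEGER);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 999), (2, 1, 500), (3, 2, 1250);
""")

def totals_n_plus_one():
    """The silent N+1: one query for users, then one more query per user."""
    result = {}
    for uid, name in conn.execute("SELECT id, name FROM users"):
        row = conn.execute(
            "SELECT COALESCE(SUM(cents), 0) FROM orders WHERE user_id = ?",
            (uid,),
        ).fetchone()
        result[name] = row[0]
    return result

def totals_bulk():
    """The reviewed fix: one aggregated JOIN, a single round-trip."""
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.cents), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id, u.name
    """).fetchall()
    return dict(rows)

# The test an augmented engineer writes before shipping: both paths agree.
assert totals_n_plus_one() == totals_bulk() == {"Ada": 1499, "Grace": 1250}
```

On a toy in-memory database the two are indistinguishable in speed; against a production table with a network hop per query, the loop version degrades linearly with user count, which is exactly what the unreviewed path never discovers.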

What Augmentation Looks Like for Engineers

The right frame is not "use AI" or "do not use AI." It is: own the output. An engineer using AI as an augmentation tool uses it to:

Augmentation

  • Generate boilerplate, then review and understand every line
  • Draft test cases, then extend to cover edge cases from domain knowledge
  • Explore unfamiliar APIs or frameworks rapidly, then validate understanding
  • Explain or document existing code, then verify accuracy
  • Prototype quickly, then refactor with architectural intent

Vibe Coding

  • Accept generated code without understanding its logic or tradeoffs
  • Skip tests because "it works" in the demo
  • Defer to AI on architectural decisions without evaluating alternatives
  • Copy-paste AI output across a codebase, introducing silent duplication
  • Repeat prompts until it "seems to work" rather than diagnosing root causes

The MIT Technology Review noted in late 2025 that the software industry was evolving from a "vibe coding" phase toward what practitioners were calling context engineering — the deliberate, disciplined practice of managing what AI systems know about a codebase, a problem, and a set of constraints, in order to produce trustworthy output. [9] Context engineering is bringing your expertise into the AI interaction rather than outsourcing the expertise to the AI. It is augmentation, practiced with rigor.

The engineers who will be most valued in the next decade are not those who generate code the fastest. They are those who understand systems deeply enough to direct AI effectively, evaluate its output critically, and build things that are correct, maintainable, and worth building in the first place. Those capabilities come from years of deliberate learning — not from prompt engineering alone.

6. Thinking Above the Day-to-Day

There is a second dimension to this argument beyond technical skill accumulation, and it applies across every field: the habit of thinking about your work at a level above its daily execution.

Most workers spend most of their time in the work. A subset also regularly spend time on the work — reflecting on how it could be done better, studying how others approach similar problems, asking why the current method is the current method, and considering whether the role itself should evolve. This is what separates the practitioner from the craftsperson.

The Meta-Level Habit: Two Examples

A roofer who only lays shingles is executing a procedure. A roofer who studies water management, structural failure modes, new materials, and building code changes — and thinks about how these should change their approach — is building expertise. AI can automate the former; it amplifies the latter.

An employment lawyer who masters current case law is holding valuable knowledge. An employment lawyer who also studies how automation is reshaping the underlying employment relationships their clients navigate — what new disputes will arise, what existing law doesn't yet cover — can see around corners that AI-powered legal research tools cannot yet reach.

This is not a call to turn every worker into a theorist. It is a call to develop the habit of examination: periodically stepping back to ask why your current methods work, how they could work better, and what the next evolution of your role looks like. Workers who do this find AI most useful, because they can see clearly where it fits into their process and where it would introduce risk.

The WEF report identified curiosity and lifelong learning as among the top rising skills employers will seek through 2030 — alongside resilience and adaptability. [1] These are not technical skills. They are orientations toward work — orientations that describe a worker engaged enough with their craft to keep examining it. That engagement is what AI cannot replicate, and what it rewards in the humans who possess it.

7. An Honest Look at the Risk

Optimistic framing can become its own kind of complacency. The genuine risks deserve a direct account.

If your work primarily involves applying a fixed body of knowledge — knowledge well-represented in publicly available training data — to routine, predictable problems, automation risk is real and growing. This applies to portions of many jobs: routine legal drafting, standard financial analysis, templated communications, code following well-established patterns, entry-level diagnostics. In each of these areas, AI is already competitive and improving.

The risk is compounded where continuous growth has felt unnecessary for years. Comfortable plateaus are easy to reach in fields where accumulated expertise has historically been enough. But comfort and safety are different things in an era of accelerating automation.

The aggregate picture — 170 million new roles, net gain of 78 million jobs — is real but deserves scrutiny. History shows that even net-positive technological transitions produce transition periods of years to decades where displaced workers do not benefit from aggregate growth. New jobs tend to emerge in different sectors, require different skills, and often appear in different geographies than the ones disrupted. For a worker whose role is automating now, "new jobs will emerge" is a cold statistical comfort if those jobs require substantially different skills and emerge a decade too late. The WEF's optimistic headline conceals real transition hardship that individual learning strategies cannot fully address.

The Brookings finding deserves honesty too: AI-investing firms increased total employment but reduced hiring of workers without advanced credentials. This is consistent with economic polarization — not with universal upward mobility. Some workers shift into higher-value roles; others find the lower rungs of their field have been automated away, and the path upward was never clear. The article's prescriptions are most accessible to workers who already have some foothold in judgment-intensive work, and who have the stability and access to keep developing it. Acknowledging that does not undermine the argument — it clarifies who it is most immediately actionable for.

The encouraging truth — and it is genuinely encouraging — is that within most fields, including fields under significant automation pressure, there exists non-routine, judgment-intensive, human-relational work that AI is currently expanding rather than contracting. The question for any individual worker is whether they are positioned and inclined to move toward that work.


Conclusion: The Amplified Expert

The workers who will thrive in the AI era are not those who use the most AI. They are those who bring the most to their AI interactions — and who keep expanding what they bring.

If you are already a learner, a grower, a craftsperson — in any field, physical or cognitive — the tools available to you right now are the most powerful in the history of your profession. The question is whether you understand your craft deeply enough to use them well. That understanding is the thing no model can supply, and the thing every model rewards when it is present.

References

  1. World Economic Forum. Future of Jobs Report 2025. WEF, January 2025. Summary: Coursera Blog — WEF Future of Jobs 2025 / Full report: weforum.org
  2. Rigobon, R. & Loaiza-Saa, I. "The EPOCH of AI: Human-Machine Complementarities at Work." MIT Sloan, March 17, 2025. mitsloan.mit.edu
  3. Kalliamvakou, E. et al. "Research: Quantifying GitHub Copilot's Impact on Developer Productivity and Happiness." GitHub Blog, September 2022 (updated May 2024). github.blog
  4. U.S. Bureau of Labor Statistics. Occupational Outlook Handbook: Nurse Anesthetists, Nurse Midwives, and Nurse Practitioners. BLS, 2024. bls.gov
  5. Seamans, R. et al. "The Effects of AI on Firms and Workers." Brookings Institution, July 1, 2025. brookings.edu
  6. Jobs for the Future. The AI-Ready Workforce. JFF, 2024. info.jff.org
  7. Wikipedia contributors. "Vibe coding." Wikipedia, The Free Encyclopedia. Accessed March 2026. en.wikipedia.org/wiki/Vibe_coding
  8. GitClear Research. "AI Copilot Code Quality: 2025 Data Suggests 4× Growth in Code Clones." GitClear, 2025. gitclear.com
  9. MIT Technology Review. "From Vibe Coding to Context Engineering: 2025 in Software Development." November 5, 2025. technologyreview.com
  10. Acemoglu, D. "The Simple Macroeconomics of AI." NBER Working Paper 32487, April 2024. nber.org/papers/w32487 / See also: Acemoglu, D. & Johnson, S. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs, 2023.