Career Decision Analysis

Google Ads vs. Microsoft Core AI

Evaluated under two AI trajectory scenarios

Context
Current Role

Google Ads — Customer Support AI

166M sessions/yr · 730+ actions · $11M savings (Moltron) · 65 skills

Built the agent runtime powering Google's Ads customer support platform. Led migration from fine-tuned models to pure foundation model + RAG architecture. Proven track record with established relationships and deep organizational knowledge.

Potential Role

Microsoft Core AI — VP, Platform & Tools

Jay Parikh's team · ~10K employees · Reports to Nadella · VP title

Building the end-to-end Copilot & AI stack: GitHub Copilot, Azure AI Foundry, VS Code. Newly formed mission-aligned team. Infrastructure layer powering AI for all developers. $337M favorable COGS impact already delivered.

Two AI Trajectory Scenarios
Slow Takeoff (~20% probability, 7–15 year horizon)

AI advances incrementally. No explosive self-improvement loop. Transformative like cloud computing — powerful but gradual, with plenty of time to adapt and reposition.

  • Search and ads model persists for a decade or more
  • Traditional engineering skills remain valuable
  • Plenty of time to reposition between roles
  • Career stability and compensation optimization matter more

Fast Takeoff (~80% probability by 2030, 1–3 year horizon)

Recursive self-improvement loop. AI codes autonomously. "Country of geniuses in a datacenter." One hundred years of scientific progress compressed into a single decade.

  • Traditional UIs replaced by autonomous AI agents
  • Coding automated within 1–2 years
  • Infrastructure becomes the critical bottleneck
  • Being at the frontier determines career trajectory
Scoring Matrix

Slow Takeoff Scenario

Google Ads (overall 7.2)
  • Financial safety (5yr): 9
  • Career growth ceiling: 6
  • Day-to-day satisfaction: 7
  • Resume / optionality: 6
  • Work-life balance: 8
  • Risk level (low=good): 9

Microsoft Core AI (overall 7.2)
  • Financial safety (5yr): 6
  • Career growth ceiling: 9
  • Day-to-day satisfaction: 8
  • Resume / optionality: 9
  • Work-life balance: 6
  • Risk level (low=good): 5

Fast Takeoff Scenario

Google Ads (overall 5.0)
  • Product survival: 4
  • Proximity to frontier: 5
  • Skills survive automation: 5
  • Decision-making position: 4
  • Career optionality: 5
  • Financial upside: 7

Microsoft Core AI (overall 8.8)
  • Product survival: 8
  • Proximity to frontier: 10
  • Skills survive automation: 9
  • Decision-making position: 9
  • Career optionality: 10
  • Financial upside: 7
Key Arguments

Google Ads

For:
  • $300B+ revenue machine with 20 years of intent data
  • Proven track record: 166M sessions/yr, Moltron $11M savings, 730+ actions
  • Known quantity with established relationships and credibility
  • GOOGL stock doubled since Apr 2025 low; Alphabet hit $4T market cap (Jan 2026)
  • Gemini 3 Pro topped all major benchmarks — beat OpenAI & Anthropic across math, code, and reasoning
  • Apple chose Gemini to power Siri (~$1B deal) — massive distribution win
  • Fast Company #1 Most Innovative Company 2026
  • Better benefits: 24-week parental leave, free meals, full campus
  • Gemini eating Copilot's lunch — 70% of users preferred Copilot initially, only 8% stayed after trying alternatives
  • Portfolio hedge: GOOG ownership captures AI upside regardless of career choice

Against:
  • Application layer — consumes AI, doesn't build it
  • Customer support AI is downstream, not frontier
  • Existential risk if search UI is replaced by autonomous agents
  • Career ceiling limited — Ads org, not DeepMind or Gemini
  • "Pipeline gets jammed" risk — you're the pipeline, not the source

Microsoft Core AI

For:
  • VP title at platform company — step-function career leap
  • Building tools for all developers: GitHub Copilot, Azure AI Foundry, VS Code
  • Direct alignment with Dario's "Big Blob of Compute Hypothesis"
  • Newly formed, mission-aligned team with direct Nadella access
  • Owns GitHub (largest code corpus) — coding automation is their product
  • Resume opens doors to Anthropic, OpenAI, any frontier lab
  • Crisis is at app layer (Copilot UX), not infra — validates Core AI's mission
  • Nadella's reshuffle protects infra while restructuring products

Against:
  • Starting from zero — rebuilding credibility at a new company
  • 10K-person team in flux — Suleyman already sidelined, reorgs confirmed
  • OpenAI's $50B AWS deal: "stateful" loophole bypasses Azure exclusivity
  • Copilot adoption collapse: 3.3% of 450M seats converted, 70-to-8 retention drop
  • $37.5B/quarter CapEx with revenue mismatch — stock under pressure
  • Aliso Viejo satellite office — no real campus, no free meals
  • Parental leave worse (12–20 weeks vs 24)
Expected Value Calculation
Scenario                 | Probability | Google Ads | Microsoft Core AI
Fast takeoff (1–3 yr)    | 35%         | 5.0        | 8.8
Medium takeoff (3–7 yr)  | 45%         | 6.2        | 8.0
Slow takeoff (7–15 yr)   | 15%         | 7.2        | 7.2
No takeoff / AI winter   | 5%          | 8.0        | 6.5
Google Ads — Expected Value: 6.02 out of 10
0.35 × 5.0 + 0.45 × 6.2 + 0.15 × 7.2 + 0.05 × 8.0 = 6.02

Microsoft Core AI — Expected Value: 8.09 out of 10
0.35 × 8.8 + 0.45 × 8.0 + 0.15 × 7.2 + 0.05 × 6.5 ≈ 8.09
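The expected value arithmetic above can be reproduced directly; a minimal sketch in Python, using exactly the probabilities and per-scenario scores from the table (note the Microsoft figure is 8.085 before rounding to the reported 8.09):

```python
# Probability-weighted expected value for each role, using the scenario
# probabilities and per-scenario scores from the table above.
scenarios = [
    # (probability, Google Ads score, Microsoft Core AI score)
    (0.35, 5.0, 8.8),  # fast takeoff (1-3 yr)
    (0.45, 6.2, 8.0),  # medium takeoff (3-7 yr)
    (0.15, 7.2, 7.2),  # slow takeoff (7-15 yr)
    (0.05, 8.0, 6.5),  # no takeoff / AI winter
]
assert abs(sum(p for p, _, _ in scenarios) - 1.0) < 1e-9  # sanity: probabilities sum to 1

ev_google = sum(p * g for p, g, _ in scenarios)  # 6.02
ev_msft = sum(p * m for p, _, m in scenarios)    # 8.085 (rounds to 8.09)
print(round(ev_google, 2), round(ev_msft, 3))
```

The gap is driven almost entirely by the fast and medium scenarios, which together carry 80% of the probability mass.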
Karpathy's Perspective: The December Shift
Source: No Priors Podcast, March 2026

Andrej Karpathy (ex-Tesla Autopilot, ex-OpenAI) describes a phase transition in December 2025 where he went from writing 80% of code himself to delegating 80%+ to AI agents. "I don't think I've typed a line of code since December." This isn't incremental improvement—it's a fundamental shift in how software gets built.

The New Bottleneck: Token Throughput

"You would feel nervous when your GPUs are not running. But now it's not about flops, it's about tokens. What is your token throughput? If you're not maximizing your subscription, you're the bottleneck in the system." — Karpathy

Key Insights

AI Psychosis

Perpetual state of trying to keep up with what's possible. Everything feels like "skill issue," not capability limitation. The frontier moves faster than individual adaptation.

Auto-Research

"Remove yourself as the bottleneck." Let agents optimize hyperparameters overnight. Frontier labs are already doing this at scale on tens of thousands of GPUs.

Digital First, Atoms Later

Massive "unhobbling" in digital space (bits flip instantly). Physical world lags by years—atoms are "a million times harder." Infrastructure matters more than apps in the short term.

The Jaggedness Problem

Models excel in verifiable domains (code, math) but struggle with soft skills. "You're either on rails in the super-intelligence circuits or you're meandering."

Implications for Microsoft Core AI

✓ Strongly Supports Infrastructure Position

  • + Proximity to recursive self-improvement: Core AI builds the tools (GitHub Copilot, Azure AI Foundry) that enable auto-research at scale. You're building what Karpathy describes as essential infrastructure.
  • + Judgment drift protection: "If you're outside the frontier lab, your judgment will start to drift because you're not part of what's coming down the line." Core AI keeps you at the frontier.
  • + Infrastructure > Applications in fast takeoff: When UIs evaporate and agents replace apps, the "big blob of compute" survives. Physical constraints (GPUs, power, data centers) matter more than software UX.
  • + GitHub code corpus advantage: Core AI owns the largest code corpus (GitHub). Karpathy's vision of ephemeral software and agent-first tools directly maps to Copilot's mission.

⚠ Challenges Google Ads Position

  • Application layer at risk: Karpathy predicts "overproduction of bespoke apps that shouldn't exist because agents crumble them up." Customer support AI is exactly the kind of app that gets replaced by agent-to-agent communication.
  • Not at the frontier: Google Ads consumes AI from Gemini/DeepMind. You're downstream, not upstream. Karpathy's stance: be at the forefront, or feel extremely nervous.
  • Search UI displacement: When agents handle tasks end-to-end, the ads revenue model evaporates. Intent data is valuable, but if users never see a search results page, where do ads go?

Updated Fast Takeoff Scoring Rationale

Karpathy's December shift validates that the fast takeoff scenario is already happening for software engineering. The 80/20 flip in coding workflow isn't theoretical—it's lived reality for frontier practitioners. This strengthens Microsoft Core AI's positioning:

  • Proximity to frontier: 10/10 (unchanged) — Core AI is building the infrastructure that enables the shift Karpathy describes
  • Career optionality: 10/10 (unchanged) — Frontier lab experience is the new premium credential
  • Product survival: 8/10 (could justify 9/10) — Compute infrastructure outlives ephemeral software apps
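The "could justify 9/10" note is easy to quantify. A minimal sketch, assuming the per-scenario overall score is the unweighted mean of the six category scores (8 + 10 + 9 + 9 + 10 + 7 over six gives the 8.8 above) and holding all other EV inputs fixed:

```python
# Microsoft Core AI fast-takeoff category scores from the scoring matrix above.
fast_scores = {
    "product_survival": 8,          # the rationale above says 9 could be justified
    "proximity_to_frontier": 10,
    "skills_survive_automation": 9,
    "decision_making_position": 9,
    "career_optionality": 10,
    "financial_upside": 7,
}

def overall(scores):
    # Assumption: the overall scenario score is the unweighted mean.
    return sum(scores.values()) / len(scores)

base = overall(fast_scores)                               # 8.83..., shown as 8.8
bumped = overall({**fast_scores, "product_survival": 9})  # 9.0

# Expected value with the bumped fast-takeoff score; the other three
# scenario scores and all probabilities are unchanged from the EV table.
ev_bumped = 0.35 * bumped + 0.45 * 8.0 + 0.15 * 7.2 + 0.05 * 6.5
print(round(base, 2), bumped, round(ev_bumped, 3))  # 8.83 9.0 8.155
```

Bumping product survival from 8 to 9 lifts the fast-takeoff score to 9.0 and the overall EV from about 8.09 to about 8.16, so the recommendation is insensitive to that judgment call.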

"I want there to be ensembles of people thinking about all the hardest problems. I don't want it to be closed doors with two or three people. By default, I'm very suspicious of centralization." — Karpathy on why he values being at multiple frontier labs over his career

Recommendation
Final Assessment

Move to Microsoft Core AI

Updated March 2026 to reflect OpenAI's $50B AWS deal, Copilot's 70-to-8 adoption collapse, Suleyman's sidelining, and Google's surge — Gemini 3 Pro topped all major benchmarks (VentureBeat), Apple chose Gemini to power Siri in a ~$1B deal (CNBC), Alphabet hit $4T market cap, and Fast Company named Google #1 Most Innovative Company and #1 in AI. The Gemini app now has 750M monthly active users (TechCrunch), AI Overviews serve 2 billion monthly interactions, and Gemini processes 10 billion tokens/minute via API. Fast Company's Harry McCracken: "That universal assistant Pichai wrote about in his 2016 shareholder letter? Google is on the cusp of creating it."

These are model and consumer layer wins — owned by DeepMind and the Gemini team, not Google Ads. The role on the table is Ads Customer Support AI. A developer joining Google Ads benefits from Gemini's 750M users about as much as a developer joining Google Maps would. The competitive headwinds at MSFT hit the application layer (Copilot UX), not the infrastructure layer (Core AI). The recommendation holds.

Google Ads EV: 6.02 · Microsoft Core AI EV: 8.09

The expected value gap remains decisive: 8.09 vs 6.02. MSFT risk scores dropped (financial safety, reorg risk), but career ceiling, optionality, and frontier proximity still dominate. Slow takeoff is now a tie (7.2 vs 7.2) — Google only wins outright in an AI winter.

The portfolio hedge gets stronger: Google's model-layer wins are already priced into GOOG equity — which you already hold. Fast Company #1, the Apple/Siri deal, 750M Gemini users — all of that is GOOG upside you're already long. Taking the MSFT role adds MSFT infrastructure exposure on top. You end up long the model winner and the infrastructure winner simultaneously.

"The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential."

— Dario Amodei, CEO Anthropic (Feb 2026)

Build the infrastructure. The applications will follow.

CoreAI — Culture, Mission & Jay Parikh
Mission

Build the End-to-End AI Stack

CoreAI's official mission: "Build the end-to-end Copilot & AI stack for both first-party and third-party customers to build and run AI apps and agents."

The developer-facing version: "Empower every developer to shape the future with AI."

Problem 1: The app stack is broken for AI. Traditional architecture doesn't support agents with memory, entitlements, and action spaces. CoreAI is rebuilding it from scratch.

Problem 2: Tools and platform teams were siloed. By owning GitHub Copilot AND the underlying infrastructure, CoreAI creates direct feedback loops between the leading AI dev product and the platform it runs on.

Problem 3: Enterprise AI needs observability, security, and compliance. Parikh: "Nothing in AI is going to work in the enterprise without observability."

The "Agent Factory" Concept

Parikh's own term for how CoreAI operates — a manufacturing metaphor for systematic AI delivery:

"There's these things that show up on the loading dock — models, technology from MSR — then we put those together into a production line. At the end it produces some AI capability, which today likely is some type of agent."

Financial proof: Azure AI Foundry has delivered $337M favorable COGS impact YTD, with $606M annualized savings projected. CoreAI is not a research org — it's a business unit with measurable outcomes.

Culture

A Deliberate Override of Microsoft Culture

CoreAI represents a deliberate culture experiment inside Microsoft. Jay Parikh was brought in specifically to inject Meta-style "move fast" engineering culture into what has traditionally been a risk-averse, process-driven organization.

Parikh's Five Cultural Principles for CoreAI

01 — Speed & Iteration

Remove obstacles, accelerate progress. "Speed is a catalyst for innovation."

02 — Agility & Adaptability

Stay on your toes, prepare for multiple outcomes. Teams that stay agile will have the most success.

03 — Simplicity Over Complexity

"Complexity is the enemy of scale. Simplicity is a core part of how we're going to drive the operations of this team."

04 — Cross-Functional Collaboration

Break down silos, build "connective tissue" across disciplines. Own the full stack vertically.

05 — Measurement & Outcomes

As CTO Kevin Scott added: "When you're going fast, it's really important to remember: build, ship, and measure."

Engineering Thrive Initiative

A concrete culture program Parikh brought from Meta, now scaling company-wide. It literally measures the time engineers are "stuck" — tracking unfocused time vs. focused/creative time — and monitors how that ratio changes as cultural shifts, tooling improvements, and skill investments are made. This is metric-driven engineering health at Microsoft scale.
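As a toy illustration of that metric, here is a minimal sketch; the time blocks, tags, and hours below are invented for the example (the Thrive program's actual data model is not described here):

```python
# Hypothetical week of an engineer's time, tagged "focused" (deep, creative
# work) or "stuck" (waiting on builds, reviews, flaky tooling). The Thrive
# idea: track how the stuck ratio moves as tooling and culture improve.
week_hours = [
    ("focused", 22.0),
    ("stuck", 6.5),
    ("focused", 8.0),
    ("stuck", 3.5),
]
stuck = sum(h for tag, h in week_hours if tag == "stuck")
total = sum(h for _, h in week_hours)
stuck_ratio = stuck / total
print(f"stuck ratio: {stuck_ratio:.1%}")  # stuck ratio: 25.0%
```

Tracking this ratio week over week is what makes "engineering health" a measurable outcome rather than a vibe.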

The Internal Friction (Real Talk)

  • Long-tenured Microsoft engineers can feel displaced by Meta-culture "outsiders"
  • Some internal jealousy: CoreAI compensation is reportedly higher than peer orgs
  • Burnout risk at the breakneck pace in a 10,000-person org
  • "Satya is determined to support new recruits against Microsoft's own culture" — this is by design, not accident
Jay Parikh — EVP, CoreAI

The Leader Who Built Facebook at Scale

Virginia Tech → Akamai → Meta (11 yrs) → Lacework CEO → Microsoft SLT Oct 2024

At Meta, Parikh scaled infrastructure from 300M to 3.5B+ users, and engineering from 300 to 30,000+ people. He oversaw all infrastructure powering Facebook, Instagram, WhatsApp, and Messenger — including subsea cable systems and the Aquila drone internet project. He is bringing the exact same playbook to CoreAI.

His Philosophy, In His Own Words

"People, tools — and way way down the list, process."

— Velocity Conference 2012, on Facebook's engineering philosophy

"If you are in the 'amazed' group — 'wow, that's amazing!' — then you need to raise your level of ambition."

— On the two groups of AI tool users: the amazed vs. the frustrated (the frustrated are right)

"I want to shift dramatically — squeeze that with AI… give back time for the creativity part."

— On developer productivity: less focus on lines of code generated, more on creative reclaimed time

"Who talks about having zero backlog? That is an outcome that I don't think we've ever been able to realize."

— On AI completing feature/bug backlogs (Singapore Airlines went from an 11-week project backlog to zero in 2 months)

"Humans like the lived experience… you put it in the hands of customers and they vomit… my evals are great… this sucks."

— Candidly on the eval problem: benchmark performance ≠ real-world user experience

The Meta Playbook He's Bringing to CoreAI

Focus on Impact

Decentralized ownership. At Meta, new engineers shipped code affecting hundreds of thousands of users by day two (the "bootcamp" model).

Move Fast

10,000 performance tests per commit weekly. 500M+ feature flag checks per second. Speed is measurable and engineered — not a vibe.

Be Bold

Don't replicate — redesign. Meta rebuilt data centers from scratch to get 40% throughput improvements rather than iterating on existing architecture.

Fail Without Punishment

The engineer who accidentally released all secret features to users? They stayed employed and became one of Meta's best. Learning from public failure is a feature, not a bug.

Long-arc thinker: Parikh recommends Carlota Perez's Technological Revolutions and Financial Capital — a book about how technology adoption cycles actually unfold over decades. He's not optimizing for the next quarter; he's thinking about the next computing paradigm shift.

Context

CoreAI vs. The Rest of Microsoft

Dimension                     | Traditional Microsoft                 | CoreAI
Culture model                 | Risk-averse, process-driven           | Meta-style "move fast," high agency
Engineering leader background | Microsoft lifers, enterprise DNA      | Ex-Meta infrastructure + startup executives
Pace                          | Methodical, enterprise release cycles | Startup pressure cooker, iterate fast
Measurement                   | Traditional product metrics           | Engineering "stuck time," learning velocity
Team composition              | Siloed product teams                  | Vertically integrated, cross-functional
Strategic role                | Incremental improvement               | Architectural transformation bet

The talent composition signals the intent: alongside Parikh, Microsoft recruited Jason Taylor (Meta's infrastructure VP), Asha Sharma (ex-Meta/Instacart), and Michael Kirkland from Meta — blended with Microsoft veterans Eric Boyd and Julia Liuson. The explicit goal is "consumer scale DNA" injected into enterprise infrastructure.

The Window That Must Not Be Missed

Three drivers create the urgency:

  1. OpenAI dependency risk: CoreAI supports Microsoft's pivot toward reducing reliance on OpenAI — GitHub Copilot now offers Gemini, Grok, Anthropic, and Microsoft's own models alongside GPT.
  2. AI revenue explosion: Microsoft's AI business exceeded $13B in annual revenue with 175% YoY growth. The platform underneath that must scale.
  3. The agent transition window: 2025–2027 is the window where the AI app stack gets defined. Whoever defines the agent development platform controls the next computing paradigm. This is Microsoft's Windows moment for AI.
The Perez Framework — Why Infrastructure Always Wins
Reading Recommended by Jay Parikh

Technological Revolutions and Financial Capital

Carlota Perez — Edward Elgar Publishing, 2002  |  Recommended by Jay Parikh in Madrona interview, 2025

Every major technological revolution follows a predictable two-phase pattern. Perez documents five of them — Industrial Revolution, Steam & Railways, Steel & Electricity, Oil & Automobiles, Information Technology — and each follows the same arc. Understanding which phase we're in determines the right strategic bet. Parikh recommends this book because it reframes the current AI moment from "exciting software trend" to "once-in-a-generation paradigm shift with predictable dynamics."

Phase 1: Installation Period (Irruption → Frenzy → Crash)

  • Financial capital pours in and speculates wildly on the new technology
  • Infrastructure gets built cheaply and rapidly — often unprofitably
  • Bubble forms and crashes (dot-com bust, railway mania collapse)
  • The infrastructure built during the frenzy does not disappear — it's the foundation

1840s Railway Mania: Investors lost fortunes. But 6,000 miles of railroad were laid across Britain, powering the Victorian economy for a century.

Phase 2: Deployment Period (Synergy → Golden Age → Maturity)

  • Production capital takes over from financial capital
  • Technology diffuses broadly — every industry is transformed
  • Long "Golden Age" of widespread productivity growth
  • Infrastructure owners from Phase 1 become the platform everyone builds on

Post-war 1950s: IBM, Bell, and the interstate highway system became the irreplaceable infrastructure of the American Golden Age.

Where We Are Now

We Are Deep in the Frenzy Phase

The signals are unmistakable. Trillion-dollar valuations for companies with marginal profits. GPU spending that defies conventional ROI analysis. The $500B Stargate announcement. OpenAI's $157B valuation on $3.4B revenue. This is exactly what Perez describes as the Frenzy phase — financial capital pouring in to build infrastructure before the business model is clear. This always happens. And the infrastructure always survives the inevitable correction.

Railway Mania (1840s)

Banks poured £240M into unprofitable railways. Most investors lost fortunes. But Britain ended up with 6,000 miles of track that powered the Victorian economy for 100 years.

Dot-Com Bubble (1995–2000)

$5T in market cap destroyed. But fiber optic cable was laid at pennies on the dollar. Amazon, Google, and broadband were all built on that cheap infrastructure.

AI Frenzy (2022–Now)

$500B+ in AI infrastructure investment. GPU clusters deployed at unprecedented scale. The infrastructure being built today is the platform every application will run on for decades.

The bubble may burst. AI winters are possible. But Perez's central insight: the infrastructure built during the frenzy survives the crash. The railroads didn't disappear when investors lost money — they became the arteries of commerce. The fiber optic cables didn't disappear when Webvan failed — they became the backbone of the modern internet.

Career Implication

CoreAI Is the Toll Road, Not the Stagecoach

In every previous technological revolution, there were two types of players: those building the applications on top of the new infrastructure, and those building the infrastructure itself. Applications come and go as technology matures. Infrastructure compounds.

Application Layer — Gets Disrupted
  • Stagecoach companies when railroads arrived
  • Newspapers when the internet arrived
  • Search ads when agents handle tasks end-to-end
  • Customer support UI when agents handle support directly
Infrastructure Layer — Compounds
  • + Railroads: revenue grew regardless of what ran on them
  • + AWS: dominates more as more apps move to cloud
  • + Azure AI Foundry: every agent gets built and run here
  • + GitHub: every developer's dependency, largest code corpus

"Every company that isn't maybe founded today is going to have to actually create a similar system."

— Jay Parikh, on why CoreAI's "Agent Factory" model becomes universal

The Perez framework makes this legible: CoreAI isn't building one product — it's building the production-line infrastructure that every organization in the world will need as AI moves from Installation to Deployment Phase. The platform that wins this window becomes the AWS of the AI era.

Sources
  • Perez, C. (2002). Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. Edward Elgar Publishing.
  • Madrona Ventures (2025) — Jay Parikh recommends the Perez book specifically for framing the AI transition.
  • Nadella, S. (Jan 2025) — "Thirty years of change is being compressed into three years." CoreAI formation memo.
Jensen Huang — The Vendor Who Sees the Whole Stack
Source: Lex Fridman Podcast #494, March 2026  /  The Verge coverage

Jensen Huang is the only person in the world who sells infrastructure to every major AI lab simultaneously — OpenAI, Google, Microsoft, Amazon, Meta, and China. His view of the AI trajectory is the highest-signal external datapoint available, because his revenue forecasts are bets placed with actual capital allocation decisions, not opinion. What he's saying now strongly validates the infrastructure bet.

His starting point

"This is too hard" — that thought is gone. "This is going to take a long time" — that thought is gone too.

The AGI Declaration

"I Think We've Achieved AGI"

When Lex Fridman asked how far we are from an AI system that can start, grow, and run a billion-dollar technology company, Huang didn't hedge: "I think it's now." But his reasoning is the interesting part — it's not about superintelligence, it's about the definition threshold being lower than people assume.

"You said a billion and you didn't say forever. It is not out of the question that [an AI] was able to create a web service, some interesting little app that all of a sudden a few billion people used. And then it went out of business again shortly after."

— Jensen Huang to Lex Fridman, March 2026

His argument: most viral internet-era companies weren't more sophisticated than what current AI could generate. Achieve virality, monetize it — that's a billion-dollar outcome an agent could stumble into. He points to China, where people are already deploying AI agents to find jobs, do work, and make money.

"Now the odds of 100,000 of those agents building Nvidia is zero percent."

— The crucial distinction: AGI ≠ deep sustained innovation

What This Means for CoreAI

If AGI is already here by Jensen's definition, the bottleneck isn't intelligence — it's infrastructure to deploy, orchestrate, and scale agents. The value shifts from "who builds AGI first" to "who provides the platform where millions of agents run." That's Azure AI Foundry. That's CoreAI's entire thesis. Jensen's AGI declaration is the strongest possible endorsement of the infrastructure layer you'd be joining.

The Inference Explosion

A Million X Increase in Compute Demand

Huang previously predicted 1,000x growth in inference compute. He updated his own forecast upward — and his explanation is structural, not speculative:

  • Generative AI (ChatGPT era) — baseline compute demand (1×)
  • Reasoning models (O1/O3, thinking mode) — thinking at inference time costs dramatically more than retrieval (100×)
  • Agentic AI — agents don't just answer, they act, check, retry, and iterate (10,000×)
  • Multi-agent systems with consumption loops — fleets of agents coordinating, checking each other's work (1,000,000×)
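The ladder compounds multiplicatively; a toy sketch where the cumulative 100× / 10,000× / 1,000,000× tiers come from the list above, while the per-layer factor of 100 and the 1,000-token baseline are illustrative assumptions:

```python
# Each layer multiplies token consumption over the one below it.
baseline_tokens = 1_000  # illustrative: one generative chat answer
layer_factor = 100       # implied per-layer multiplier (100 -> 10K -> 1M cumulative)

tiers = {
    "generative": baseline_tokens,                     # 1x baseline
    "reasoning": baseline_tokens * layer_factor,       # 100x: thinking at inference time
    "agentic": baseline_tokens * layer_factor**2,      # 10,000x: act, check, retry, iterate
    "multi-agent": baseline_tokens * layer_factor**3,  # 1,000,000x: fleets cross-checking
}
for name, tokens in tiers.items():
    print(f"{name}: {tokens // baseline_tokens:,}x baseline")
```

The structural point survives any choice of baseline: each new layer multiplies demand rather than adding to it, which is why inference doesn't get cheap as models get smarter.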

"Inference is thinking, and thinking is hard. Thinking is way harder than reading."

— The reason inference doesn't get cheap as AI gets smarter: more capable models do more thinking per query

Agents = The New Personal Computer

Huang defines an AI agent by four elements — and each maps directly to what CoreAI is building:

  1. Memory systems → Azure AI memory/context persistence
  2. Skills & capabilities → GitHub Copilot + plugin ecosystem
  3. Resource management → Azure AI Foundry orchestration layer
  4. I/O & API subsystems → Azure OpenAI Service integrations

His workforce projection: 100 AI agents per engineer within a decade. Every company will need the infrastructure to build and run agent fleets. That's CoreAI's market.

"AI Factory" — Same Concept, Different Speakers

Huang's framing of Nvidia has evolved from "chip company" to "AI factory company." The language he uses is almost identical to Jay Parikh's "Agent Factory" concept for CoreAI:

Huang (Nvidia): AI factories take in data and energy, produce intelligence as output. The product isn't chips — it's the entire factory (hardware + CUDA + software stack).

Parikh (CoreAI): "Models arrive on the loading dock, get assembled through a production line, and production-ready agents come out the other end."

Two of the most connected infrastructure leaders in AI are independently converging on the same metaphor. That's not coincidence.

Why This Validates the Perez Framework

The Deployment Period Is Beginning

Perez predicts that after the Installation Period frenzy, technology enters the Deployment Period — diffusing into every sector of the economy. Huang is describing exactly this happening in real time with physical AI:

"The technology industry's first opportunity to address a $50 trillion industry that has largely been void of technology until now."

— On physical AI (robotics, autonomous vehicles, manufacturing, agriculture)

Every physical AI system — every robot, every autonomous vehicle, every AI-enabled factory — needs to be trained, simulated, and deployed. That requires the exact stack CoreAI is building: Foundry for model deployment, Copilot for developer tooling, the agent runtime for orchestration.

The Structural Argument

Huang sells to every AI player equally. His revenue grows as long as compute demand grows — regardless of which model wins or which application layer dominates. CoreAI's infrastructure position has the same structural quality: whether the winning agent platform is Copilot, a competitor's product, or something not yet built — if it runs on Azure AI Foundry and uses GitHub tooling, Microsoft wins. The infrastructure bet doesn't require predicting the winning application.

Jobs in the AI Era — Is Bill Gates Right?

In early 2025, Bill Gates identified three professions he believes will withstand AI displacement: Energy, Biology, and Programming/IT. His argument: these domains require complex judgment, creative hypothesis formation, and human accountability in ways current AI cannot fully replicate.

Field 01: Energy
Grid design, dispatch, safety, and crisis response carry physical consequences. Ethical trade-offs require human accountability that can't be delegated to a model.

Field 02: Biology
Original hypothesis formation and careful experimentation can't rely on pattern matching alone. AlphaFold solved folding — but it didn't ask the question.

Field 03: Programming / IT
Humans must design, supervise, and secure AI infrastructure. Ironic for a field that AI is actively transforming — but the oversight layer is human by necessity.
Supporting Data

A Morgan Stanley study found an 8% workforce reduction in UK companies tied to AI adoption. Microsoft researcher Kiran Tomlinson: "AI chatbots are best used to boost productivity, not erase payrolls." The pattern: transformation over elimination.

Is Gates Right?
Where He's Right
  • AI amplifies but doesn't yet replace judgment under novel uncertainty
  • Infrastructure oversight is inherently human — someone has to be accountable
  • Physical-world consequences (energy grids, drug trials) demand human sign-off
  • Gates' framing matches the Perez thesis: tools builders shape every era
Where It's Incomplete
  • Programming is already heavily AI-assisted — Copilot writes production code today
  • He omits roles built on human relationships: therapy, teaching, pastoral care
  • He omits legal, governance, and political accountability roles
  • "Three jobs" undersells adaptability — most roles restructure, not disappear

The sharper framing: Gates is less describing professions that survive and more describing capabilities that survive — judgment, accountability, creative direction, physical-world ownership. Those capabilities exist across many fields. The workers who bring them to AI-augmented roles will be the ones who don't get displaced.

Relevance to This Decision

CoreAI sits at the intersection of Gates' top two fields: Programming/IT infrastructure and the systems that will govern how AI is deployed in Biology, Energy, and beyond. Building the AI stack — GitHub Copilot, Azure AI Foundry, VS Code — is not just a role that survives the AI transition. It is the role that defines it. If Gates is right about the oversight layer, this CoreAI role is exactly the one he's describing.