Evaluated under two AI trajectory scenarios
Built the agent runtime powering Google's Ads customer support platform. Led migration from fine-tuned models to pure foundation model + RAG architecture. Proven track record with established relationships and deep organizational knowledge.
Building the end-to-end Copilot & AI stack: GitHub Copilot, Azure AI Foundry, VS Code. Newly formed mission-aligned team. Infrastructure layer powering AI for all developers. $337M favorable COGS impact already delivered.
AI advances incrementally. No explosive self-improvement loop. Transformative like cloud computing — powerful but gradual, with plenty of time to adapt and reposition.
Recursive self-improvement loop. AI codes autonomously. "Country of geniuses in a datacenter." One hundred years of scientific progress compressed into a single decade.
| Scenario | Probability | Google Ads | Microsoft Core AI |
|---|---|---|---|
| Fast takeoff (1–3 yr) | 35% | 5.0 | 8.8 |
| Medium takeoff (3–7 yr) | 45% | 6.2 | 8.0 |
| Slow takeoff (7–15 yr) | 15% | 7.2 | 7.2 |
| No takeoff / AI winter | 5% | 8.0 | 6.5 |
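The expected values these scores imply can be checked directly. A minimal sketch, with the weights and scores taken from the table above (the dictionary keys are just labels):

```python
# Scenario weights and role scores, copied from the table above.
# Tuple layout: (probability, Google Ads score, Microsoft Core AI score)
scenarios = {
    "fast takeoff (1-3 yr)":   (0.35, 5.0, 8.8),
    "medium takeoff (3-7 yr)": (0.45, 6.2, 8.0),
    "slow takeoff (7-15 yr)":  (0.15, 7.2, 7.2),
    "no takeoff / AI winter":  (0.05, 8.0, 6.5),
}

# Probability-weighted expected value for each role.
ev_google = sum(p * g for p, g, _ in scenarios.values())
ev_msft = sum(p * m for p, _, m in scenarios.values())

print(f"Google Ads EV:        {ev_google:.2f}")
print(f"Microsoft Core AI EV: {ev_msft:.2f}")
```

The weighted sums come out to 6.02 for Google Ads and 8.085 (≈8.09) for Microsoft Core AI, the gap cited in the March 2026 update below.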
Andrej Karpathy (ex-Tesla Autopilot, ex-OpenAI) describes a phase transition in December 2025, when he went from writing 80% of his code himself to delegating 80%+ to AI agents. "I don't think I've typed a line of code since December." This isn't incremental improvement; it's a fundamental shift in how software gets built.
"You would feel nervous when your GPUs are not running. But now it's not about flops, it's about tokens. What is your token throughput? If you're not maximizing your subscription, you're the bottleneck in the system." — Karpathy
Perpetual state of trying to keep up with what's possible. Everything feels like "skill issue," not capability limitation. The frontier moves faster than individual adaptation.
"Remove yourself as the bottleneck." Let agents optimize hyperparameters overnight. Frontier labs are already doing this at scale on tens of thousands of GPUs.
Massive "unhobbling" in digital space (bits flip instantly). Physical world lags by years—atoms are "a million times harder." Infrastructure matters more than apps in the short term.
Models excel in verifiable domains (code, math) but struggle with soft skills. "You're either on rails in the super-intelligence circuits or you're meandering."
Karpathy's December shift validates that the fast-takeoff scenario is already playing out for software engineering. The 80/20 flip in coding workflow isn't theoretical; it's lived reality for frontier practitioners. This strengthens Microsoft Core AI's positioning:
"I want there to be ensembles of people thinking about all the hardest problems. I don't want it to be closed doors with two or three people. By default, I'm very suspicious of centralization." — Karpathy on why he values being at multiple frontier labs over his career
Updated March 2026 to reflect OpenAI's $50B AWS deal, Copilot's 70-to-8 adoption collapse, Suleyman's sidelining, and Google's surge — Gemini 3 Pro topped all major benchmarks (VentureBeat), Apple chose Gemini to power Siri in a ~$1B deal (CNBC), Alphabet hit $4T market cap, and Fast Company named Google #1 Most Innovative Company and #1 in AI. The Gemini app now has 750M monthly active users (TechCrunch), AI Overviews serve 2 billion monthly interactions, and Gemini processes 10 billion tokens/minute via API. Fast Company's Harry McCracken: "That universal assistant Pichai wrote about in his 2016 shareholder letter? Google is on the cusp of creating it."
These are model and consumer layer wins — owned by DeepMind and the Gemini team, not Google Ads. The role on the table is Ads Customer Support AI. A developer joining Google Ads benefits from Gemini's 750M users about as much as a developer joining Google Maps would. The competitive headwinds at MSFT hit the application layer (Copilot UX), not the infrastructure layer (Core AI). The recommendation holds.
The expected value gap remains decisive: 8.09 vs 6.02. MSFT risk scores dropped (financial safety, reorg risk), but career ceiling, optionality, and frontier proximity still dominate. Slow takeoff is now a tie (7.2 vs 7.2) — Google only wins outright in an AI winter.
The portfolio hedge gets stronger: Google's model-layer wins are already priced into GOOG equity — which you already hold. Fast Company #1, the Apple/Siri deal, 750M Gemini users — all of that is GOOG upside you're already long. Taking the MSFT role adds MSFT infrastructure exposure on top. You end up long the model winner and the infrastructure winner simultaneously.
"The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential."
— Dario Amodei, CEO Anthropic (Feb 2026)
Build the infrastructure. The applications will follow.
CoreAI's official mission: "Build the end-to-end Copilot & AI stack for both first-party and third-party customers to build and run AI apps and agents."
The developer-facing version: "Empower every developer to shape the future with AI."
The app stack is broken for AI. Traditional architecture doesn't support agents with memory, entitlements, and action spaces. CoreAI is rebuilding it from scratch.
Tools and platform teams were siloed. By owning GitHub Copilot AND the underlying infrastructure, CoreAI creates direct feedback loops between the leading AI dev product and the platform it runs on.
Enterprise AI needs observability, security, and compliance. Parikh: "Nothing in AI is going to work in the enterprise without observability."
Parikh's own term for how CoreAI operates — a manufacturing metaphor for systematic AI delivery:
"There's these things that show up on the loading dock — models, technology from MSR — then we put those together into a production line. At the end it produces some AI capability, which today likely is some type of agent."
Financial proof: Azure AI Foundry has delivered $337M favorable COGS impact YTD, with $606M annualized savings projected. CoreAI is not a research org — it's a business unit with measurable outcomes.
CoreAI represents a deliberate culture experiment inside Microsoft. Jay Parikh was brought in specifically to inject Meta-style "move fast" engineering culture into what has traditionally been a risk-averse, process-driven organization.
Remove obstacles, accelerate progress. "Speed is a catalyst for innovation."
Stay on your toes, prepare for multiple outcomes. Teams that stay agile will have the most success.
"Complexity is the enemy of scale. Simplicity is a core part of how we're going to drive the operations of this team."
Break down silos, build "connective tissue" across disciplines. Own the full stack vertically.
As CTO Kevin Scott added: "When you're going fast, it's really important to remember: build, ship, and measure."
A concrete culture program Parikh brought from Meta, now scaling company-wide. It measures the time engineers are "stuck", tracking unfocused time against focused, creative time, and monitors how that ratio changes as cultural shifts, tooling improvements, and skill investments take effect. This is metric-driven engineering health at Microsoft scale.
Virginia Tech → Akamai → Meta (11 yrs) → Lacework CEO → Microsoft SLT Oct 2024
At Meta, Parikh scaled infrastructure from 300M to 3.5B+ users, and engineering from 300 to 30,000+ people. He oversaw all infrastructure powering Facebook, Instagram, WhatsApp, and Messenger — including subsea cable systems and the Aquila drone internet project. He is bringing the exact same playbook to CoreAI.
"People, tools — and way way down the list, process."
— Velocity Conference 2012, on Facebook's engineering philosophy
"If you are in the 'amazed' group — 'wow, that's amazing!' — then you need to raise your level of ambition."
— On the two groups of AI tool users: the amazed vs. the frustrated (the frustrated are right)
"I want to shift dramatically — squeeze that with AI… give back time for the creativity part."
— On developer productivity: less focus on lines of code generated, more on creative reclaimed time
"Who talks about having zero backlog? That is an outcome that I don't think we've ever been able to realize."
— On AI completing feature/bug backlogs (Singapore Airlines went from 11-week project to zero backlog in 2 months)
"Humans like the lived experience… you put it in the hands of customers and they vomit… my evals are great… this sucks."
— Candidly on the eval problem: benchmark performance ≠ real-world user experience
Decentralized ownership. At Meta, new engineers shipped code affecting hundreds of thousands of users by day two (the "bootcamp" model).
10,000 performance tests per commit weekly. 500M+ feature flag checks per second. Speed is measurable and engineered — not a vibe.
Don't replicate — redesign. Meta rebuilt data centers from scratch to get 40% throughput improvements rather than iterating on existing architecture.
The engineer who accidentally released all secret features to users? They stayed employed and became one of Meta's best. Learning from public failure is a feature, not a bug.
Long-arc thinker: Parikh recommends Carlota Perez's Technological Revolutions and Financial Capital — a book about how technology adoption cycles actually unfold over decades. He's not optimizing for the next quarter; he's thinking about the next computing paradigm shift.
| Dimension | Traditional Microsoft | CoreAI |
|---|---|---|
| Culture model | Risk-averse, process-driven | Meta-style "move fast," high agency |
| Engineering leader background | Microsoft lifers, enterprise DNA | Ex-Meta infrastructure + startup executives |
| Pace | Methodical, enterprise release cycles | Startup pressure cooker, iterate fast |
| Measurement | Traditional product metrics | Engineering "stuck time," learning velocity |
| Team composition | Siloed product teams | Vertically integrated, cross-functional |
| Strategic role | Incremental improvement | Architectural transformation bet |
The talent composition signals the intent: alongside Parikh, Microsoft recruited Jason Taylor (Meta's infrastructure VP), Asha Sharma (ex-Meta/Instacart), and Michael Kirkland from Meta — blended with Microsoft veterans Eric Boyd and Julia Liuson. The explicit goal is "consumer scale DNA" injected into enterprise infrastructure.
Three drivers create the urgency:
Carlota Perez — Edward Elgar Publishing, 2002 | Recommended by Jay Parikh in Madrona interview, 2025
Every major technological revolution follows a predictable two-phase pattern. Perez documents five of them — Industrial Revolution, Steam & Railways, Steel & Electricity, Oil & Automobiles, Information Technology — and each follows the same arc. Understanding which phase we're in determines the right strategic bet. Parikh recommends this book because it reframes the current AI moment from "exciting software trend" to "once-in-a-generation paradigm shift with predictable dynamics."
The signals are unmistakable. Trillion-dollar valuations for companies with marginal profits. GPU spending that defies conventional ROI analysis. The $500B Stargate announcement. OpenAI's $157B valuation on $3.4B revenue. This is exactly what Perez describes as the Frenzy phase — financial capital pouring in to build infrastructure before the business model is clear. This always happens. And the infrastructure always survives the inevitable correction.
Banks poured £240M into unprofitable railways. Most investors lost fortunes. But Britain ended up with 6,000 miles of track that powered the Victorian economy for 100 years.
$5T in market cap destroyed. But fiber optic cable was laid at pennies on the dollar. Amazon, Google, and broadband were all built on that cheap infrastructure.
$500B+ in AI infrastructure investment. GPU clusters deployed at unprecedented scale. The infrastructure being built today is the platform every application will run on for decades.
The bubble may burst. AI winters are possible. But Perez's central insight: the infrastructure built during the frenzy survives the crash. The railroads didn't disappear when investors lost money — they became the arteries of commerce. The fiber optic cables didn't disappear when Webvan failed — they became the backbone of the modern internet.
In every previous technological revolution, there were two types of players: those building the applications on top of the new infrastructure, and those building the infrastructure itself. Applications come and go as technology matures. Infrastructure compounds.
"Every company that isn't maybe founded today is going to have to actually create a similar system."
— Jay Parikh, on why CoreAI's "Agent Factory" model becomes universal
The Perez framework makes this legible: CoreAI isn't building one product — it's building the production-line infrastructure that every organization in the world will need as AI moves from Installation to Deployment Phase. The platform that wins this window becomes the AWS of the AI era.
Jensen Huang is the only person in the world who sells infrastructure to every major AI lab simultaneously — OpenAI, Google, Microsoft, Amazon, Meta, and China. His view of the AI trajectory is the highest-signal external datapoint available, because his revenue forecasts are bets placed with actual capital allocation decisions, not opinion. What he's saying now strongly validates the infrastructure bet.
"This is too hard" — that thought is gone. "This is going to take a long time" — that thought is gone too.
When Lex Fridman asked how far we are from an AI system that can start, grow, and run a billion-dollar technology company, Huang didn't hedge: "I think it's now." But his reasoning is the interesting part — it's not about superintelligence, it's about the definition threshold being lower than people assume.
"You said a billion and you didn't say forever. It is not out of the question that [an AI] was able to create a web service, some interesting little app that all of a sudden a few billion people used. And then it went out of business again shortly after."
— Jensen Huang to Lex Fridman, March 2026
His argument: most viral internet-era companies weren't more sophisticated than what current AI could generate. Achieve virality, monetize it — that's a billion-dollar outcome an agent could stumble into. He points to China, where people are already deploying AI agents to find jobs, do work, and make money.
"Now the odds of 100,000 of those agents building Nvidia is zero percent."
— The crucial distinction: AGI ≠ deep sustained innovation
If AGI is already here by Jensen's definition, the bottleneck isn't intelligence — it's infrastructure to deploy, orchestrate, and scale agents. The value shifts from "who builds AGI first" to "who provides the platform where millions of agents run." That's Azure AI Foundry. That's CoreAI's entire thesis. Jensen's AGI declaration is the strongest possible endorsement of the infrastructure layer you'd be joining.
Huang previously predicted 1,000x growth in inference compute. He updated his own forecast upward — and his explanation is structural, not speculative:
"Inference is thinking, and thinking is hard. Thinking is way harder than reading."
— The reason inference doesn't get cheap as AI gets smarter: more capable models do more thinking per query
Huang defines an AI agent by four elements — and each maps directly to what CoreAI is building:
His workforce projection: 100 AI agents per engineer within a decade. Every company will need the infrastructure to build and run agent fleets. That's CoreAI's market.
Huang's framing of Nvidia has evolved from "chip company" to "AI factory company." The language he uses is almost identical to Jay Parikh's "Agent Factory" concept for CoreAI:
Huang (Nvidia): AI factories take in data and energy, produce intelligence as output. The product isn't chips — it's the entire factory (hardware + CUDA + software stack).
Parikh (CoreAI): "Models arrive on the loading dock, get assembled through a production line, and production-ready agents come out the other end."
Two of the most connected infrastructure leaders in AI are independently converging on the same metaphor. That's not coincidence.
Perez predicts that after the Installation Period frenzy, technology enters the Deployment Period — diffusing into every sector of the economy. Huang is describing exactly this happening in real time with physical AI:
"The technology industry's first opportunity to address a $50 trillion industry that has largely been void of technology until now."
— On physical AI (robotics, autonomous vehicles, manufacturing, agriculture)
Every physical AI system — every robot, every autonomous vehicle, every AI-enabled factory — needs to be trained, simulated, and deployed. That requires the exact stack CoreAI is building: Foundry for model deployment, Copilot for developer tooling, the agent runtime for orchestration.
Huang sells to every AI player equally. His revenue grows as long as compute demand grows — regardless of which model wins or which application layer dominates. CoreAI's infrastructure position has the same structural quality: whether the winning agent platform is Copilot, a competitor's product, or something not yet built — if it runs on Azure AI Foundry and uses GitHub tooling, Microsoft wins. The infrastructure bet doesn't require predicting the winning application.
In early 2025, Bill Gates identified three professions he believes will withstand AI displacement: Energy, Biology, and Programming/IT. His argument: these domains require complex judgment, creative hypothesis formation, and human accountability in ways current AI cannot fully replicate.
A Morgan Stanley study found an 8% workforce reduction in UK companies tied to AI adoption. Microsoft researcher Kiran Tomlinson: "AI chatbots are best used to boost productivity, not erase payrolls." The pattern: transformation over elimination.
The sharper framing: Gates is describing not professions that survive but capabilities that survive — judgment, accountability, creative direction, physical-world ownership. Those capabilities exist across many fields. The workers who bring them to AI-augmented roles will be the ones who don't get displaced.
CoreAI sits at the intersection of Gates' top two fields: Programming/IT infrastructure and the systems that will govern how AI is deployed in Biology, Energy, and beyond. Building the AI stack — GitHub Copilot, Azure Foundry, VS Code — is not just a role that survives the AI transition. It is the role that defines it. If Gates is right about the oversight layer, the CoreAI PM is exactly the person he's describing.