
Did Jensen Huang Just Usher in a New Economy? How Tokens, Agents, and AI Factories Could Change Everything 


AI will become the industrial infrastructure of the global economy. This is what Jensen Huang told the assembled audience today at the NVIDIA GTC 2026 keynote. It’s a bold claim. Getting there will require much more than powerful GPUs and massive data centers. It will take expert integrators. But more on that later.

Huang’s message was clear – and consistent with one our own CTO Rob Kim has been championing throughout 2026: we are beyond experimentation and into executing a new agentic frontier that ends the era of passive software.

For organizations of all sizes, this obviously changes everything. 


AI Is Becoming Infrastructure 

For years, AI conversations centered on models: bigger models, better models, and more training data. But Huang’s keynote made it clear that the next chapter of AI is about operationalizing intelligence at scale. 

Simply put, AI is not an application or a feature. It’s becoming core infrastructure, following the same path as cloud and networking before it – only much faster. 

Enterprises will build AI factories designed to continuously churn out intelligence, insights, and automation. Infrastructure is important, but it doesn’t deliver outcomes alone. This is where the real enterprise challenge begins. 


Inside the AI Factory: A New Era for Engineers 

One of the most interesting implications of Huang’s keynote is how the role of engineers is changing inside what NVIDIA calls the AI factory. 

In traditional software environments, engineers build applications that run on infrastructure. In AI factories, engineers are orchestrating systems designed to continuously generate intelligence. 

As Rob Kim recently wrote: 

In the last few decades, we’ve used technology in roughly the same way. We build applications. We log into them. We organize things inside them. We remember to check them. We have to maintain them – and optimize them. We’ve made apps faster. Prettier. Cloud-based. Mobile. Integrated. But at a fundamental level, the relationship hasn’t changed. Software still assumes humans should do the hardest parts of the work. 

Huang’s keynote flipped the software paradigm on its head. How? The unit of production is no longer just code. Large-scale AI systems operate by generating tokens – text, images, actions, predictions – at massive scale. The entire stack is optimized to produce them efficiently: GPUs, networking, data pipelines, inference engines, and agent frameworks. 

That shift fundamentally changes how engineering teams think about their work. 

Engineers are becoming designers of intelligence pipelines: 

  • Architecting systems that transform raw data into usable context 
  • Building agents and workflows that reason, plan, and act 
  • Optimizing inference performance and token throughput 
  • Managing the cost, latency, and governance of AI outputs 
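To make the pipeline idea concrete, here is a minimal sketch of that loop: raw data distilled into context, an agent that plans and acts on it. Every name here (`Agent`, `build_context`, `plan`, `act`) is an illustrative assumption, not part of any NVIDIA framework.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy intelligence pipeline: context -> plan -> action."""
    history: list = field(default_factory=list)

    def build_context(self, raw_records):
        # Transform raw data into usable context (here: drop empty entries).
        return [r.strip() for r in raw_records if r.strip()]

    def plan(self, context):
        # "Reason and plan": choose an action based on the context gathered.
        return "summarize" if len(context) > 2 else "ingest_more"

    def act(self, action, context):
        # Execute the chosen action and record it for governance/auditing.
        self.history.append(action)
        if action == "summarize":
            return f"summary of {len(context)} records"
        return "requesting more data"

agent = Agent()
ctx = agent.build_context(["  sensor A: ok ", "", "sensor B: warn", "sensor C: ok"])
action = agent.plan(ctx)
result = agent.act(action, ctx)
print(action, "->", result)  # summarize -> summary of 3 records
```

In a real AI factory the `plan` step would be a model call and `act` would invoke tools or services, but the shape of the loop — context in, reasoned action out, every step logged — is the same.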

Just as cloud computing introduced the idea of infrastructure budgets and FinOps, AI factories are introducing token budgets. These budgets cover how much intelligence an organization can generate and deploy across its applications. 
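A token budget can be enforced the same way a FinOps cost guardrail is. The sketch below is a hypothetical illustration — the class name, limits, and policy are assumptions, not any vendor’s API.

```python
class TokenBudget:
    """Hypothetical guardrail: track token spend against a monthly allowance."""

    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def charge(self, tokens: int) -> bool:
        """Record a request's token spend; reject it if it would exceed budget."""
        if self.used + tokens > self.monthly_limit:
            return False
        self.used += tokens
        return True

    @property
    def remaining(self) -> int:
        return self.monthly_limit - self.used

budget = TokenBudget(monthly_limit=1_000_000)
budget.charge(300_000)            # inference batch fits
budget.charge(650_000)            # still within the monthly allowance
budget.charge(100_000)            # would exceed the limit: rejected
print(budget.remaining)           # 50000 tokens left this month
```

In practice a team’s budget would sit behind an API gateway or inference router, but the accounting logic — spend, check, refuse or degrade gracefully — is this simple at its core.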

That shift is even starting to show up in hiring. 

As Huang described, forward-looking companies are already experimenting with offering engineers dedicated token budgets as a part of their overall compensation. To sweeten the deal beyond six-figure salaries, they are giving teams the computational runway to experiment with models, agents, and workflows without constantly worrying about usage limits. 

In other words, the perks of working in AI may soon include not just great tools and GPUs, but the freedom to generate intelligence at scale.

The engineers who thrive in this environment will look different from traditional software developers. They’ll combine skills in: 

  • Software engineering 
  • Data engineering 
  • AI model integration 
  • Distributed systems 
  • Product thinking 

Engineers inside the AI factory will be designing the systems that produce intelligence itself.  


Why Do AI Factories Need Systems Integration?

If Huang’s keynote could be distilled to one message, it’s that the AI era is now about systems, not just isolated tools.

AI factories require an ecosystem of technologies working together: accelerated computing, cloud platforms, modern data architectures, application engineering, and the operational frameworks needed to deploy AI safely at scale. 

Enterprises now need to integrate GPU-accelerated infrastructure, AI-ready data platforms, modern application architectures, agent frameworks, orchestration layers, and governance and security models for AI systems.

Oh yeah, and they need to do all this quickly enough to keep pace with innovation and customer needs. 

At Presidio, we see organizations moving from AI experimentation toward AI production systems, or what NVIDIA describes as AI factories. Making that shift requires rethinking how infrastructure, data, and applications come together to deliver continuous intelligence. 

It’s why we focus on helping organizations build AI-native platforms through Presidio’s Human AI (HAI) methodology. We combine cloud, modern data, and AI-enabled application engineering to accelerate the path from innovation to outcomes. All with the modern engineer in the cockpit. 

At Presidio, we believe the winners of the AI era will operationalize intelligence across their entire business. Huang’s message today was a ringing endorsement of that vision. 
