Broadcom’s premise is that we’re moving from a world that is CPU-centric to one that is connectivity-centric: the emergence of alternative processors beyond the CPU, such as the GPU, NPU and LPU (the XPUs, if you will), requires high-speed connections between them.
Hyperscale cloud providers (the Amazons, Microsofts and Googles of the world) are certainly convinced: they are doubling down on massive capital expenditures to build out AI supercomputing infrastructure.
In contrast, traditional enterprises are taking a more cautious stance. While cloud giants’ spending on AI infrastructure is exploding, most enterprises lag in adopting these capabilities, constrained by practical realities of budgets, skills and legacy systems. In short, there’s a widening gap between the hype of what AI agents promise and the reality of what enterprises can feasibly implement today.
We’ve seen this pattern before. From the dot-com era to cloud computing and big-data analytics, new technologies often spark feverish excitement and bold predictions.
In the case of cloud, mobile and social, the transformative impact did come – but somewhat later than initially expected, after the technology matured and organizations did the unglamorous work to make it enterprise-ready. Big data, however, never really lived up to expectations.
Agentic AI is following a similar script: enormous long-term potential, but with an adoption curve that will be more gradual than the current frenzy suggests.
In 2025, for instance, we will not see AI agents broadly running the enterprise on autopilot. Rather, it’s the year when forward-thinking companies lay down bricks on the yellow brick road toward that eventual destination. The question is: Will agentic AI end up living up to the hype? We think it will for those companies that do the hard work.
In practical terms, this means the cloud titans are retooling their data centers to be AI-first. They’re investing billions in:
- GPUs
- Custom AI chips
- High-speed networks
- Liquid cooling
- Expanded facilities to handle the volume, value and velocity of AI workloads.
This hyperscaler-fueled buildout is driving overall tech CapEx to new heights.
Most enterprises are far from ready to reap the benefits of fully autonomous agents.
Much work needs to be done to realize the full promise of agentic systems, including:
- Organizational alignment to create a true data culture
- Cleaning up data silos
- Harmonizing that data
- Assigning data ownership
- Data product thinking
- Getting governance and security right
- Choosing technology partners
- Rationalizing software-as-a-service and on-premises application portfolios
- Change management
Why is enterprise adoption of agentic AI on a slower trajectory than the hype suggests?
- Hyperscaler CapEx explodes – enterprises lag: Cloud giants are pouring unprecedented capital into AI infrastructure, but most enterprises are far behind in building comparable capabilities.
- 2025 won’t be “The Year of the Agent”: Despite the buzz around autonomous AI agents, enterprises aren’t ready for wide deployment of “agentic AI” at scale in 2025 – fundamental groundwork is still missing.
- We’ve seen this movie: The hype around AI agents echoes past tech waves (for example, big data) where enthusiasm outpaced reality. Eventually value emerged, but only for a narrow set of firms that had the talent to pull it off. And even then, for most organizations, the promise wasn’t fulfilled, even after years of maturation and hard lessons on data and integration.
- The yellow brick road matters: Reaching true agentic AI is a journey with critical stepping stones. Companies must follow a structured path (a “yellow brick road” of data foundations, integration, and governance) rather than expecting instant magic from AI.
- Winners bridge hype and reality: The organizations that will win in the agentic AI era are those that embrace the vision and invest in practical steps. These winners methodically close the gap between AI hype and enterprise reality, while others fall victim to unrealistic expectations.
The reality on the ground in enterprises is far more sober. Yes, interest in generative AI and agents is sky-high, and many organizations are experimenting. But when it comes to real production use of agentic AI, most enterprises are just not there yet.
The basic ingredients needed to deploy reliable AI agents at scale – high-quality unified data, deeply integrated systems, strong governance – are missing in many cases. As a result, 2025 will be a year of learning and laying groundwork, not a year where AI agents run rampant across the enterprise.
One clear indicator of the maturity gap is how companies are using AI today. A recent Enterprise Technology Research survey of IT decision-makers reveals that most organizations are currently engaging with large language models in a very limited way – primarily via consumption of third-party tools and application programming interfaces, rather than building their own advanced AI solutions.
Enterprise adoption of AI today skews toward light consumption rather than deep production integration.
- In an ETR survey (N=106), 80% of organizations said they “pay for subscriptions to tools like ChatGPT or Microsoft Copilot” to let their teams experiment or perform work.
- Some 63% are tapping cloud AI APIs (for example, OpenAI and Anthropic) to integrate LLM capabilities into some workflows.
- Only 39% are using open-source models on their own infrastructure.
- A mere 27% are training proprietary LLMs in-house.
In other words, the vast majority are consuming AI (often via cloud services), whereas far fewer are creating or operating their own AI at scale.
This data paints a clear picture: Enterprises remain in the early, exploratory phase with generative AI. Most are content to leverage SaaS offerings or API services for AI capabilities – things like using ChatGPT for copywriting, or plugging an OpenAI API into a customer support app. These are relatively easy on-ramps to get value from AI quickly, but they’re a far cry from having fully autonomous AI agents deeply woven into business processes.
Only a minority of companies are building their own models or significantly customizing AI systems on their own infrastructure. Training a proprietary LLM, for example, requires enormous data readiness and machine learning operations maturity that few firms have right now. The upshot: For most enterprises, AI adoption in 2025 means consuming prebuilt AI services, not standing up their own agentic AI platforms.
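To make this consumption pattern concrete, here is a minimal sketch of the kind of on-ramp described above – wrapping a hosted LLM API behind a small helper inside a customer support workflow. It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY set in the environment; the function name and model choice are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: consuming a hosted LLM via API (assumes the OpenAI Python SDK v1+
# and an OPENAI_API_KEY environment variable; names here are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_support_reply(ticket_text: str) -> str:
    """Ask a hosted model to draft a reply to a customer ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any hosted chat model; this name is an assumption
        messages=[
            {"role": "system", "content": "You are a concise, polite support assistant."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_support_reply("My March invoice shows a duplicate charge."))
```

Note how little enterprise context this touches: no internal data, no system integration, no governance. That is precisely why it is an easy on-ramp – and why it falls well short of agentic AI.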
Why aren’t enterprises further along? There are three big gaps between the hype of agentic AI and the reality in enterprises today:
- Data quality and lineage: AI agents are only as good as the data feeding them. Most organizations’ data is not in great shape – it’s scattered, siloed, unclean and lacking consistent lineage. Without a clean, unified data corpus (with known provenance), autonomous AI outputs can’t be trusted. Many firms are discovering that their data foundation needs serious work before they can unleash AI broadly.
- Integration and orchestration: For an AI agent to be truly useful, it must plug into a multitude of enterprise systems, APIs and workflows – often in real time. Achieving this integration and orchestration is incredibly challenging. Enterprises have complex, heterogeneous IT environments (ERP, CRM, custom apps and the like). Today’s AI pilots often operate in isolation (a chatbot here, an RPA automation there). The heavy lifting to connect AI agents seamlessly into end-to-end business processes (so they can act across different tools and datasets) is still ahead of us; a minimal sketch of that plumbing follows this list.
- Governance and trust: Enterprises have strict requirements around security, compliance and oversight. An AI agent that unpredictably produces incorrect or biased results can do real damage. Right now, organizations lack robust governance frameworks for AI. Questions of accountability (who is responsible if the AI makes a bad decision?), transparency (can we explain why it acted a certain way?), and ethical use are unsettled. Until governance and trust mechanisms catch up, enterprises will rightly limit where autonomous agents are allowed to roam.
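To illustrate why that integration work is the heavy lifting, below is a minimal, framework-free sketch of the plumbing an agent needs before it can act across systems: a registry of tools, each wrapping an enterprise API, and a dispatcher that routes model-chosen actions. All names and endpoints are hypothetical placeholders; a production version would add authentication, schemas, retries, audit logging and approval gates.

```python
# Minimal sketch of agent-to-enterprise plumbing. All systems and endpoints here
# are hypothetical placeholders, not real APIs.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


def lookup_customer(query: str) -> str:
    # Placeholder for an authenticated CRM API call.
    return f"[CRM] record for '{query}'"


def open_ticket(summary: str) -> str:
    # Placeholder for an ITSM/ticketing API call.
    return f"[Ticketing] created ticket: {summary}"


# The registry is the integration surface: every enterprise system the agent can
# touch has to be wrapped, documented and governed here.
TOOLS: Dict[str, Tool] = {
    t.name: t
    for t in (
        Tool("lookup_customer", "Fetch a customer record from the CRM", lookup_customer),
        Tool("open_ticket", "Open a ticket in the service desk", open_ticket),
    )
}


def dispatch(tool_name: str, argument: str) -> str:
    """Route a model-chosen action to the matching enterprise system."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        # Governance hook: never execute actions that aren't explicitly registered.
        return f"Rejected unknown tool: {tool_name}"
    return tool.run(argument)


if __name__ == "__main__":
    # In a real agent loop an LLM would choose the tool and argument;
    # one step is hard-coded here to show the plumbing only.
    print(dispatch("lookup_customer", "ACME Corp"))
```

Even this toy version makes the gap visible: every tool in the registry represents a real system with its own data model, access controls and failure modes – the unglamorous work that has to happen before agents can roam across end-to-end processes.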
While the ROI of AI is clear for consumer internet companies such as Google and Meta, the business impact of enterprise AI has not been as evident. Until it shows up in the quarterly numbers, we expect CFOs to remain conservative with their budget allocations.

| Company | Core Focus | Revenue Model | Key Customers | Differentiator |
|---|---|---|---|---|
| Ansys | Multiphysics simulation (CAE) | Perpetual + subscription licenses + maintenance (20% fee) | Aerospace, automotive, electronics | Broadest physics-based simulation |
| Cadence | EDA / chip design | Term licenses + IP cores + maintenance | TSMC, Intel, NVIDIA | Dominates digital & analog design |
| Synopsys | EDA + semiconductor IP | Licenses + IP royalties + maintenance | Apple, Qualcomm, Samsung | #1 in verification & AI-driven design |
| Dassault Systèmes | PLM + 3D simulation (SIMULIA) | SaaS (3DEXPERIENCE) + perpetual licenses | Boeing, Tesla, pharma | Leader in PLM + integrated CAD/CAE |
| Siemens EDA | PCB / IC design (Mentor) | Subscription + perpetual licenses | Automotive, industrial IoT | Strong in system-level EDA |
| Siemens Simcenter | CAE / multiphysics | Licensing + cloud (Xcelerator) | GM, Airbus, Siemens Energy | Tight CAD-CAE integration |
| Altair | Simulation + HPC + AI | Subscription-based licensing | Ford, LG, Samsung | Lightweight optimization & AI-driven CAE |
| Keysight (EDA) | RF/wireless design (ADS) | Licenses + test hardware | Qualcomm, Huawei, NASA | Combines EDA + measurement tools |
| Arm | Semiconductor IP (CPU/GPU) | Royalty-based IP licensing | Apple, MediaTek, Amazon | Dominates mobile/embedded CPUs |
| MathWorks | Algorithmic modeling (MATLAB) | Annual licenses + toolboxes | Universities, automotive, defense | Industry standard for control systems |
| Company | Focus | Licensing Model | Key Industries |
|---|---|---|---|
| Ansys | Simulation | Subscription | Aerospace, automotive |
| Cadence | EDA | Time-based | Semiconductors |
| Synopsys | EDA + IP | Time-based + SaaS | Semis, automotive |
| PTC | CAD/PLM | Subscription | Industrial, IoT |
| Dassault Systèmes | CAD/CAE | Subscription | Aerospace, automotive |
| Autodesk | CAD (AEC/MFG) | Subscription | AEC, manufacturing |
| Altair | Simulation + AI | Token-based | Automotive, aerospace |
| COMSOL | Multiphysics | Subscription + perpetual | Academic, engineering |
| Mentor Graphics (Siemens) | EDA | Time-based | Semiconductors |
| Keysight Technologies | RF test/simulation | Mixed | Wireless, defense |
| Altium | PCB design | Subscription | SMB hardware firms |
| MathWorks | Numerical computing | Subscription + perpetual | Academia, engineering |
| Palantir | Data platform | SaaS | Government, commercial |
| AspenTech | Process simulation | Subscription | Oil & gas, chemicals |
| ESI Group (Keysight) | Virtual prototyping | Project-based | Manufacturing, automotive |
What Do These Companies Do?
These companies develop specialized software tools used by engineers and scientists to design, test, and optimize products. Examples:
- Chips (Semiconductors) – Cadence, Synopsys
- Cars, Airplanes, Electronics – Ansys, Siemens
- Software Algorithms & AI – MathWorks (MATLAB)
- Smartphone/Computer Processors – ARM
Think of them as the “Photoshop for engineers”—but instead of editing photos, they help design everything from iPhone chips to electric cars.
How Do They Make Money?
All these companies follow a high-margin software business model:
- Sell Licenses: Companies pay to use their software (like buying Windows or Photoshop).
- Charge Annual Fees: Customers pay extra (~20%) for updates & support.
- Subscription Plans: Some now offer cloud-based access (like Netflix for engineering tools).
- Sell IP Cores (Synopsys/ARM): Like selling “blueprints” for chip designs.
This model is profitable because:
- Software is expensive to build but cheap to distribute.
- Engineers rely on these tools for years (hard to switch).
When we say “Azure, Google Cloud, AWS, and Oracle want to undergird the next generation of AI-powered startups,” it means that these cloud providers aim to:
🏗️ Be the foundational infrastructure layer for AI startups.
They want AI companies to build, train, deploy, and scale their models and applications on their platforms — making the cloud provider:
- The computational backbone (via GPUs, TPUs, XPUs, CPUs)
- The data pipeline host (storage, ETL, data lakes)
- The ML ops platform (for versioning, orchestration, monitoring)
- The API and deployment layer (for SaaS and inference delivery)
🔑 What “undergird” really implies here:
| Intent | Explanation |
|---|---|
| Capture long-term usage | AI startups that grow into giants (like OpenAI, Anthropic) are very sticky customers. |
| Provide early-stage incentives | Credits, partnerships and VC-like support to onboard early. |
| Control the AI infrastructure stack | Own everything from compute to AI chips (TPUs, Inferentia) to tooling. |
| Build ecosystems around AI | Encourage use of their own AI APIs, data tools and model hosting. |
| Edge out smaller or slower players | Compete with niche cloud or open-source alternatives. |
📌 Real-world examples:
- Microsoft Azure powers OpenAI and is deeply integrated into its business model.
- Google Cloud offers Vertex AI, TPUs, and LLM tools to attract foundation model builders.
- AWS supports startups like Anthropic, Hugging Face, and Stability with compute + infrastructure.
- Oracle Cloud partners with NVIDIA for AI superclusters, offering lower-cost AI compute.
In short, these cloud giants are not just renting out servers—they want to become the platform-of-choice for the AI boom, betting that the next billion-dollar AI companies will be built on their rails.
The Real Contenders (Ranked by Threat Level)
- Microsoft (+OpenAI)
  - Why: Already monetizing AI via Azure OpenAI Service (30%+ gross margins).
  - Killer Move: Windows Copilot could embed AI deeper than Meta’s chatbots.
  - Weakness: Enterprise sales cycles are slow.
- NVIDIA (The Arms Dealer)
  - Why: Controls the GPU supply chain – Meta’s roughly 600,000 H100 equivalents depend on it.
  - Killer Move: DGX Cloud lets startups bypass Meta’s infrastructure.
  - Weakness: No direct consumer reach.
- Apple (The Dark Horse)
  - Why: On-device AI (Apple Intelligence) could make Meta’s cloud-reliant models look outdated.
  - Killer Move: 2B+ active devices = instant AI distribution.
  - Weakness: Late to generative AI.
- Google (The Sleeping Giant)
  - Why: Search + Gemini integration could crush Meta’s ad business.
  - Killer Move: YouTube’s AI video tools > Meta’s static posts.
  - Weakness: Execution chaos.
- Elon Musk’s xAI (Wildcard)
  - Why: If Grok achieves AGI first, it’s game over.
  - Killer Move: Tesla’s real-world data for robot training.
  - Weakness: Cash burn vs. Meta’s $40B+ yearly profit.
Why Meta Could Lose
- No moat: AI research (Llama) isn’t monetizing the way Azure and Google Cloud do.
- Platform Risk: If Apple bans Meta’s AI from iOS, growth stalls.
- GPU Glut: Wasting $10B+ on H100s without a clear ROI path.
Who Actually Wins?
Short-term: Microsoft (enterprise adoption).
Long-term: Apple (consumer hardware + AI symbiosis).
Meta has strong assets but also serious challenges—whether it “wins” in the AI race depends on the lens you use: