
Report: Enterprise investment in generative AI shockingly low, while traditional AI is thriving

Image Credit: Created with Midjourney



Generative AI is all anyone can talk about. It is a breakthrough technology with transformational promises across numerous domains — even human life itself.  

And while 2023 was undoubtedly the year that gen AI had its breakout, that has largely been hype, according to a Menlo Ventures report shared exclusively with VentureBeat. 

Gen AI still accounts for a “relatively paltry” amount of enterprise cloud spend — less than 1%. Traditional AI spend, on the other hand, comprises 18% of the $400 billion cloud market. 

“A lot of people thought generative AI would rapidly take over the world,” Derek Xiao, investor with Menlo, told VentureBeat. “AI is a fundamental step forward. But the reality is that this takes time, especially in the enterprise.”

Spend in traditional AI increasing

Some projections put the gen AI market at $76.8 billion by 2030, representing a compound annual growth rate (CAGR) of 31.5% from 2023. Others say the technology will create at least $450 billion in the enterprise market across 12 verticals over the next seven years. 

While ChatGPT has dominated boardroom discussions — not to mention around the water cooler and dining room tables — since its debut in November 2022, half of the enterprises polled in Menlo’s State of AI in the Enterprise report had implemented some form of AI before 2023.

In fact, the number of enterprises using AI grew by 7% — from 48% to 55% — and AI spend grew roughly 8% on average. Of all departments, product engineering teams spend the most on AI.

Still, Menlo’s research indicates that enterprises have strong trepidation around gen AI.

“We thought generative AI was going to be this overnight success story,” Naomi Ionita, Menlo partner, told VentureBeat. But 2023 was “a year of experimentation and tire-kicking.”

Looking ahead, “2024 will be the hard work of implementing generative AI,” said Xiao. 

Concerns around generative AI adoption 

Leaders at large-scale enterprises should find a sense of comfort in these findings and recognize that moving slowly is OK, Menlo partner Tim Tully told VentureBeat. 

“The smart folks are taking their time,” he said, noting that the rapidly evolving nature of gen AI is making enterprises tentative about adopting it. Also, in many cases “the dollars aren’t there.”

“These are expensive decisions to make,” he said. 

As has been the case with other transformative technologies — such as the cloud — adoption will continue to be measured, Menlo predicts. 

Barriers continue to revolve around unproven ROI and the “last mile problem,” said Ionita. Other concerns include data privacy, shortage of AI talent, lack of organizational bandwidth, compatibility with existing infrastructure and limited explainability and customizability.

Menlo reports that enterprise solutions “have yet to deliver on their promise of meaningful transformation.” They have failed to create new workflows and behaviors, and productivity gains feel limited. Buyers will continue to remain skeptical until they can see true value. 

Also, in this market, “it’s harder than ever to get past the CFO,” said Ionita. “There are real barriers to overcome, the promise is there, but when we get down to brass tacks, how do we get it into production?”

However, early adopters of gen AI are seeing significant gains when it comes to using their data and cutting “mundane, painful workflows.”

“It’s meeting the user in ways we were not able to do before,” said Ionita. 

Tully noted that users can create “really remarkable tools” in just 20 minutes (or less). 

“It’s changing workflows,” he said. “It will replace teams, make people’s jobs easier, make people more successful. There is real value and revenue being created.”

Opportunities both horizontal and vertical

As the gen AI market continues to grow, Menlo sees great opportunities for startups in both vertical (industry-specific) and horizontal (more generalized) applications. 

Ionita pointed out that the AI world will be hybrid: Many enterprises are already using more than one foundation platform and smaller models will be used for different, specialized use cases. 

“When generative AI is introduced, industry-specific tools gain superpowers,” the report states. 

For example, marketers have embraced the video content creation tool Synthesia while the legal world is increasingly leveraging Harvey to perform contract analysis and ensure regulatory compliance. Other specialized startups include Greenlite for finance, Abridge for healthcare and Higharc for architecture. 

Meanwhile, horizontal AI tools help to automate manual tasks and workflows. Menlo also anticipates a rise of AI agents that can “think and act independently.” These sophisticated tools will be able to, for example, handle emails, calendars and note taking, and integrate into department and domain-specific workflows. 

“Giving people their time back is an obvious value,” said Ionita, noting that the average employee is working across a “patchwork quilt” of tools. 

Going forward, “AI will lose its novelty and become an unsurprising, if not expected, collaborator throughout the workday,” the report states. 

Standardizing the modern AI stack

Menlo, which has invested in Anthropic and Pinecone, found that enterprises invested $1.1 billion in the modern AI stack this year, making it the largest new market in the gen AI domain.

Buyers report that 35% of their infrastructure dollars go to foundation models from providers such as OpenAI and Anthropic. These closed-source models continue to dominate, comprising upwards of 85% of models in production. 

Furthermore, most models are off-the-shelf; only 10% of enterprises pre-train their models. 

Most enterprises adopt multiple models for higher controllability and lower costs, and 96% of spend is on inference. Prompt engineering is the most popular customization method, while human review is the most popular evaluation method. 

Also, retrieval-augmented generation (RAG) is becoming standard. This framework augments large language models (LLMs) with information from external knowledge bases to overcome the limitations of fixed datasets and generate up-to-date, contextually relevant responses. 

Of the enterprises surveyed by Menlo, 31% were using this approach, while 19% used fine-tuning methods, 18% were implementing adapters and 13% were incorporating reinforcement learning from human feedback (RLHF). 
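For readers who want to see the mechanics, below is a minimal, illustrative sketch of the RAG pattern described above, written in plain Python: retrieve the passages most relevant to a query from an external knowledge base, then fold them into the prompt sent to the model. The toy knowledge base, the word-overlap `score` function and the `generate` placeholder are assumptions made for illustration only; a production system would use an embedding model, a vector store and an actual LLM API.

```python
# Minimal RAG sketch. The embedding and LLM calls are stand-ins, not any
# specific vendor's API: a real system would use an embedding model, a
# vector database and a hosted or open-source LLM.
from collections import Counter

KNOWLEDGE_BASE = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Enterprise plans include single sign-on and audit logging.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count (stand-in for vector similarity)."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base passages most relevant to the query."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a hosted API or a local model)."""
    return f"[model response to prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    # Augment the prompt with retrieved context so the model grounds its answer
    # in up-to-date external knowledge rather than only its fixed training data.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("What is the refund policy?"))
```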

While the first half of the year was “sort of the wild west, under constant construction and revision,” as Xiao described it, the industry is beginning to converge around core components and standard practices. 

Still, the modern AI stack is by no means standardized. According to Menlo, this offers opportunities for startups: services that provide remote environments to run and deploy models; extract, transform and load (ETL) tools that handle data pipeline creation; and data loss prevention, content governance and threat detection and response (to name a few). 

Ultimately, startups should not be looking to compete head-on with incumbents, said Xiao; they should focus on tools offering new workflows, next-generation reasoning, chain-of-thought and proprietary data analysis. 

It’s not enough to just be a “ChatGPT wrapper,” he said. “It’s really about the ability to create new markets where incumbents are not. This is a warning to startups that differentiation really matters.”