OpenAI chief says age of giant AI models is ending

Sam Altman, CEO of OpenAI, says the era of ever-larger artificial intelligence models is coming to an end, as soaring costs and diminishing returns rein in the relentless scaling that has defined progress in the field.

Speaking at an MIT event last week, Altman suggested that “giant, giant models” will not drive further advancement. “I think we’re at the end of the era where it’s going to be these, like, gigantic, giant models. We’ll improve them in different ways,” he reportedly said.

Although Altman did not say so directly, a key factor in the retreat from “scale is all you need” is the steep cost of buying and running the powerful graphics processors that large language models (LLMs) require. Training ChatGPT reportedly took more than 10,000 GPUs, and serving the model consumes even more resources.

Nvidia dominates the GPU market with an 88% share, according to Jon Peddie Research. Its latest H100 GPUs, designed for AI and high-performance computing (HPC), list for as much as $30,603 per unit, and often far more on eBay.

Ronen Dar, co-founder and chief technology officer of Run:ai, a compute orchestration platform that speeds up data science initiatives by pooling GPUs, said training a state-of-the-art LLM can require hundreds of millions of dollars’ worth of compute.
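The numbers above line up as a rough back-of-envelope estimate (assuming list prices rather than negotiated bulk rates): 10,000 GPUs at roughly $30,000 apiece comes to about $300 million in hardware alone, before power, networking, and engineering costs are counted.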

The economics of scale have turned against ever-larger models: costs have surged while gains have plateaued. Progress is instead expected to come from better model architectures, greater data efficiency, and new algorithmic techniques rather than brute-force scaling. In short, the era of seemingly limitless data, compute, and model size that transformed AI over the past decade is drawing to a close.

‘Everyone and their dog is buying GPUs’

Elon Musk said in a recent Twitter Spaces interview that his companies Tesla and Twitter were buying thousands of GPUs to develop a new AI venture, now formally known as X.ai.

“At this point, it seems like everyone and their dog is buying GPUs,” Musk said. “Tesla and Twitter are definitely buying GPUs.”

Dar cautioned, however, that GPUs may not be readily available. Even through hyperscale cloud providers such as Microsoft, Google, and Amazon, businesses can wait months for capacity, so many are effectively reserving GPU access in advance. “Elon Musk will have to wait to receive his 10,000 GPUs,” he remarked.

Not limited to GPUs

Not everyone agrees that Altman’s remarks are driven by a GPU crunch. Aidan Gomez, co-founder and CEO of Cohere, which competes with OpenAI in the LLM market, believes they instead reflect a technical realization from the past year: the models we have built may be larger than they need to be.

Size, according to Altman, is a “false measurement of model quality.”

Parameter count has received far too much attention, Altman said, and may well keep trending upward. But he likened the fixation to the gigahertz race in semiconductors in the 1990s and 2000s, when everyone was trying to point to a big number.

Still, Musk’s recent purchase of 10,000 data center-grade GPUs shows that, for now, access to GPUs remains crucial. For all but the most well-funded AI-focused enterprises, the lack of affordable, readily available hardware amounts to a crisis. Even OpenAI’s financial resources are finite, and it too may eventually have to change its focus.
