ChatGPT is Costing OpenAI a Whopping $700,000 a Day to Run
The cost of maintaining state-of-the-art AI technology is becoming clearer as OpenAI, the company behind the wildly popular ChatGPT, is now reported to be pouring $700,000 per day into its infrastructure to keep it up and running. But what’s driving this enormous expenditure, what might the alternatives be, and how does it impact the future of AI development?
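For scale, a quick back-of-the-envelope calculation shows what that reported daily figure implies over a full year (a simple annualization, not an official OpenAI number):

```python
# Annualize the reported $700,000/day operating cost.
DAILY_COST_USD = 700_000  # reported daily infrastructure cost

annual_cost = DAILY_COST_USD * 365
print(f"~${annual_cost / 1e6:.1f}M per year")  # ~$255.5M per year
```

In other words, the reported daily burn works out to roughly a quarter of a billion dollars per year, which helps explain why chip costs dominate the discussion below.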
The Price of Progress: AI Cost and Chip Consumption
The lion’s share of OpenAI’s costs stems from the pricey servers that ChatGPT requires.
According to Dylan Patel, chief analyst at research firm SemiAnalysis, these servers are not only expensive in the first place, but they also depend on specialized chips that consume a staggering amount of power.
OpenAI’s costs could be on the rise even more, as Patel explains that his estimates were based on GPT-3, the predecessor of ChatGPT. With the introduction of OpenAI’s newest model, GPT-4, it’s safe to assume that AI costs will continue to climb.
A recent report published by OpenAI reveals that the costs of training large AI models are anticipated to soar from $100 million to $500 million by 2030.
The AI Divide: Wealth, Access, and Cost
With the cost of model training on the rise, the landscape of AI research is likely to change dramatically. Tech giants such as Google, Facebook, and Microsoft already dominate the AI field. As AI costs continue to rise, only the wealthiest companies and individuals will have the financial means to develop and utilize AI technologies.
This cost barrier could lead to a concentration of AI development within a select few large corporations, creating a divide between those who can afford to harness AI technologies and those who can’t.
The potential ramifications of this AI cost disparity could significantly impact the future development of AI, potentially hindering innovation and accessibility.
The exponential growth in AI cost is further complicated by Reddit’s recent decision to introduce a premium access point for third parties requiring more extensive capabilities, higher usage limits, and broader usage rights.
Motivated by a desire to capitalize on the value of its data, Reddit’s move could drive AI costs even higher, as AI companies rely on the platform’s rich conversation data to train large language models.
Microsoft’s Athena: A Step Towards Tackling AI Cost
Microsoft, one of the biggest investors in OpenAI, is acutely aware of the growing problem. As a result, the tech giant is developing its proprietary AI chip, internally known as “Athena.” Reportedly in the works since 2019, Athena aims to reduce AI costs by offering an alternative to chips from suppliers like Nvidia.
“Athena, if competitive, could reduce the cost per chip by a third when compared with Nvidia’s offerings,” said Patel in an interview with The Information.
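To make the "a third cheaper" claim concrete, here is a hypothetical illustration; the $10,000 unit price and 1,000-chip fleet size are assumed placeholders for the sake of arithmetic, not real figures from Patel or Microsoft:

```python
# Hypothetical illustration of a chip that is one third cheaper per unit.
NVIDIA_UNIT_COST = 10_000              # assumed price per GPU (placeholder)
ATHENA_UNIT_COST = NVIDIA_UNIT_COST * (1 - 1/3)  # "a third" cheaper

fleet_size = 1_000                     # assumed number of chips (placeholder)
savings = (NVIDIA_UNIT_COST - ATHENA_UNIT_COST) * fleet_size
print(f"Savings on {fleet_size} chips: ${savings:,.0f}")
```

At data-center scale, where deployments run to tens of thousands of accelerators, even a one-third per-unit reduction compounds into substantial savings.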
While this would be a significant first step into the AI hardware realm for Microsoft, the company is unlikely to replace Nvidia’s AI chips entirely. In fact, both parties have recently agreed to a years-long AI collaboration, signaling their ongoing partnership.
The deployment of Athena is expected to have a substantial impact on AI operating costs for Microsoft itself, however. By replacing the current Nvidia graphics processing units with a more efficient and less expensive option, the company stands to save a considerable amount of money.
Unfortunately, it remains unclear whether Athena will be made available to Azure customers or reserved for internal use by Microsoft and OpenAI.
Giant AI Models: Reaching the Limits of Scalability and Cost
Meanwhile, OpenAI CEO Sam Altman recently commented that “we’re at the end of the era of giant AI models,” alluding to the fact that large language models like ChatGPT may be approaching a point of diminishing returns due to their immense size.
OpenAI’s GPT-4 model, reportedly boasting over one trillion parameters, might already be nearing the limit of practical scalability, as indicated by OpenAI’s own analysis.
The correlation between an AI model’s size, power, and capabilities has generally held true, but the cost of maintaining and scaling these models is becoming increasingly prohibitive. If Patel’s analysis is accurate, the ballooning AI cost associated with larger AI models could hamper further advancements in the field.
In the meantime, despite the daunting cost of operating ChatGPT, OpenAI appears to be in a stable financial position thanks to the application’s success.
While tech giants like Microsoft explore alternative solutions to curb rising expenses, we’ll just have to wait and see whether these efforts will be enough to bridge the growing divide between those who can afford AI technologies and those who can’t.
Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.