Reuters reports that OpenAI, the developer of ChatGPT, is exploring the possibility of building its own AI chip, including the option of acquiring a company that specializes in chip design. OpenAI has not yet decided whether to proceed. The idea has reportedly been under discussion since last year, prompted by the shortage and high cost of AI chips. Beyond in-house development, OpenAI is also weighing alternatives such as partnering with other chip manufacturers or negotiating more closely with NVIDIA, the largest maker of AI processors.
Currently, OpenAI’s AI models run on Microsoft’s servers, which reportedly use over 10,000 GPUs, at a significant cost: roughly 4 cents per query for ChatGPT, according to analyst estimates. By the same estimates, if ChatGPT were to handle even a tenth of Google’s search volume, GPU expenses alone would run to around $16 billion per year. Against that backdrop, developing a chip in-house looks attractive, but it would require a substantial upfront investment.
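To see how estimates like these scale, here is a minimal back-of-envelope sketch. Only the ~4-cents-per-query figure comes from the analyst estimate above; the Google search volume and the one-tenth scale factor are assumptions for illustration, and the published $16 billion figure likely reflects additional factors (hardware depreciation, capacity headroom) that this simple model omits.

```python
# Back-of-envelope GPU cost model (illustrative only; inputs below are
# assumptions, not figures confirmed by OpenAI or the analysts cited).

COST_PER_QUERY_USD = 0.04        # reported analyst estimate: ~4 cents per query
GOOGLE_SEARCHES_PER_DAY = 8.5e9  # commonly cited rough figure, an assumption here
SCALE_FACTOR = 0.10              # "a tenth of Google's search volume"

queries_per_day = GOOGLE_SEARCHES_PER_DAY * SCALE_FACTOR
daily_cost = queries_per_day * COST_PER_QUERY_USD
annual_cost = daily_cost * 365

print(f"Queries per day: {queries_per_day:,.0f}")
print(f"Annual GPU spend: ${annual_cost / 1e9:.1f}B")
```

Even this stripped-down model lands in the tens of billions of dollars per year, which is the order of magnitude driving the interest in custom silicon.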
Many other technology companies are pursuing their own AI chips: Meta has developed its MTIA v1 accelerator, and Microsoft is reportedly testing AI chips as well.
OpenAI has declined to comment on this report.
TLDR: OpenAI is studying the possibility of creating its own AI chip, considering options such as acquiring a chip company or collaborating with other manufacturers. ChatGPT currently runs on Microsoft’s servers, incurring high GPU costs. Developing an in-house chip is an attractive option but would require significant upfront investment. Other technology companies are also developing AI chips. OpenAI declined to comment.