[Rumor] Microsoft is Currently Training its Own Large-scale Model MAI-1 to Reduce Dependence on OpenAI

Rumor has it that Microsoft is training its own large-scale language model, called MAI-1, with roughly 500 billion parameters, reportedly on par with models from OpenAI, Google, and Anthropic. Leading the project is Mustafa Suleyman, co-founder of DeepMind, who recently moved from Inflection AI to become CEO of Microsoft AI.

The news about MAI-1 is significant because it marks Microsoft’s first foray into developing its own large language model after relying on OpenAI’s GPT models for the past two years. The internal turmoil at OpenAI may have served as a cautionary tale, reminding Microsoft not to depend too heavily on an external company.

While Microsoft has developed its own smaller language models, such as Phi (most recently Phi-3), it has never introduced a large-scale model like MAI-1 before. Speculation is rife that Microsoft may officially unveil MAI-1 at Build 2024 at the end of this month.

Source: The Information, Ars Technica

TLDR: Microsoft is reportedly training its own large-scale language model called MAI-1, potentially unveiling it at Build 2024. This move signifies a shift away from reliance on external companies like OpenAI.
