Rumor has it that Microsoft is training its own large-scale language model, called MAI-1, boasting roughly 500 billion parameters and intended to compete with models from OpenAI, Google, and Anthropic. Leading the project is Mustafa Suleyman, co-founder of DeepMind, who recently moved from Inflection AI to become CEO of Microsoft AI.
The MAI-1 news is significant because it marks Microsoft's first foray into developing a frontier-scale LLM of its own after relying on OpenAI's GPT models for the past two years. The internal turmoil at OpenAI may have served as a cautionary tale for Microsoft, a reminder not to depend too heavily on an external company.
While Microsoft has developed smaller language models such as Phi (most recently Phi-3), it has never before built a large-scale model like MAI-1. Speculation is rife that Microsoft may officially unveil MAI-1 at Build 2024 at the end of this month.
Source: The Information, Ars Technica
TLDR: Microsoft is reportedly training its own large-scale language model called MAI-1, potentially unveiling it at Build 2024. This move signifies a shift away from reliance on external companies like OpenAI.