01.AI, a company founded by Kai-Fu Lee, the former head of Google China, has unveiled its new large language model (LLM), called Yi. The Yi family ships in two sizes, with 6 billion and 34 billion parameters, and has been trained on high-quality English and Chinese language datasets. One of the highlights of Yi is its impressive benchmark performance, consistently outscoring the LLaMA 2 model across test sets.
While Yi is available for free for research purposes, commercial usage still requires prior approval from 01.AI. This approach differs from Meta's, which allows commercial use of LLaMA 2 without advance permission, on the condition that the user base does not exceed 700 million monthly active users.
Yi has been trained on a dataset of 3 trillion tokens. It supports a context window of 4,000 tokens by default, which can be extended to 32,000 tokens, and 01.AI has stated plans to further expand the model's capacity to 200,000 tokens.
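To make these context sizes concrete, here is a minimal sketch of a token-budget check. It assumes a rough heuristic of about 4 characters per English token; both the heuristic and the `fits_in_context` helper are illustrative, not part of Yi's tooling.

```python
# Rough token-budget check for the context windows mentioned above.
# Assumes ~4 characters per English token -- a common rule of thumb,
# not a figure from the article or from 01.AI.

CHARS_PER_TOKEN = 4  # rough heuristic for English text

def fits_in_context(text: str, context_window: int,
                    reserved_for_output: int = 500) -> bool:
    """Estimate whether `text` plus a reserved output budget fits the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= context_window

page = "x" * 12_000  # ~3,000 tokens: roughly a few pages of text

print(fits_in_context(page, 4_000))        # fits the default 4K window
print(fits_in_context(page * 10, 4_000))   # ~30,000 tokens: too big for 4K
print(fits_in_context(page * 10, 32_000))  # fits the extended 32K window
```

The same arithmetic shows why a 200,000-token window matters: it would hold roughly 800,000 characters of English text, on the order of several full-length books, under the same rough heuristic.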
TLDR: 01.AI has introduced Yi, an LLM available in 6-billion- and 34-billion-parameter versions. It outperforms LLaMA 2 on benchmark evaluations. Researchers can use Yi for free, while commercial usage requires prior approval. The model has been trained on a massive dataset and can handle large context sizes, and future updates are planned to increase its token capacity.