
Llama 3.1 Cloud Service Unveiled with Meta’s Blessing for Self-Training Models

Following last night's unveiling of Meta's Llama 3.1 artificial intelligence model, cloud providers large and small simultaneously announced Llama 3.1 offerings. The most comprehensive service comes from Microsoft Azure, which has made all three sizes available with published pricing. Google Cloud has made the 405B model available but has yet to disclose pricing, while AWS has announced pricing for the 70B and 8B versions; the 405B version requires submitting a usage request.

Specialized cloud providers have also announced pricing and availability. Together.AI offers the lowest prices for its Turbo version, which is already quantized, though the company asserts that its quantization delivers quality comparable to the full models. Groq has made the 70B and 8B models available without publicly disclosing pricing; compared with earlier Llama 3 versions, however, pricing appears notably lower.

Tools for running LLMs on personal machines, such as Ollama, now support Llama 3.1 smoothly, though the 405B version is available only as a quantized 4-bit model, not in FP16. NVIDIA also announced Llama 3.1 support on NVIDIA NIM, together with the NVIDIA NeMo Retriever service, enabling organizations to build retrieval-augmented generation (RAG) applications.
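As a quick illustration, pulling and running a Llama 3.1 model locally with Ollama looks roughly like this (the exact model tags depend on Ollama's model library and may change):

```shell
# Download the 8B model, the smallest size (tag assumed from Ollama's library)
ollama pull llama3.1:8b

# Start an interactive chat session with the downloaded model
ollama run llama3.1:8b
```

The larger sizes follow the same pattern with different tags, subject to the 4-bit quantization limits noted above for 405B.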

A standout feature of Llama 3.1 is its license, which permits using the model's outputs to train other models. This differs from providers such as OpenAI, whose terms restrict using outputs to train competing models. The flexibility lets organizations streamline additional workflows, such as generating datasets with Llama 3.1 405B and using them to train smaller LLMs.
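The dataset-generation workflow described above can be sketched as follows. This is an illustrative example, not a prescribed format: it assumes you have already collected (prompt, response) pairs from a teacher model such as Llama 3.1 405B, and the JSONL chat schema shown is a common fine-tuning convention that you would adapt to your training framework.

```python
import json

def write_distillation_dataset(pairs, path):
    """Write (prompt, response) pairs generated by a teacher model
    as JSONL chat records for fine-tuning a smaller model.
    The record schema here is illustrative only."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, response in pairs:
            record = {
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": response},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: two placeholder teacher-generated pairs
pairs = [
    ("What is RAG?", "Retrieval-augmented generation combines search with an LLM."),
    ("Define quantization.", "Reducing the numeric precision of model weights to save memory."),
]
write_distillation_dataset(pairs, "distill.jsonl")
```

Each line of the resulting file is one self-contained training example, which most fine-tuning pipelines can ingest directly.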

**TLDR:**
Meta introduced the Llama 3.1 AI model recently, with cloud providers like Microsoft Azure, AWS, and Google Cloud offering various versions. Specialized providers like Together.AI and Groq have also entered the market, presenting competitive pricing and unique features to cater to different needs.
