Lamini, a company that sells platforms for training and running artificial intelligence models, has teamed up with AMD to unveil the LLM Superstation, a server designed specifically for running AI models, particularly large language models (LLMs). The machine is ready to run Llama 2-70B immediately upon purchase.
The standout feature of the LLM Superstation is its use of the AMD Instinct MI250 card rather than the industry-preferred NVIDIA cards. The MI250 offers up to 128GB of memory, which makes it easier to run large-scale models than the A100 (NVIDIA only increased memory capacity in its later-generation cards). Another significant advantage is faster order fulfillment compared to servers built on NVIDIA hardware.
Lamini notes that its own services run on both MI210 and MI250 cards, and that AMD uses the Lamini platform internally to support its developers working with LLMs.
While Lamini has opened a form for interested buyers to order the LLM Superstation, it has not yet disclosed pricing or full specifications.
TLDR: Lamini, in partnership with AMD, introduces the LLM Superstation, built on the AMD Instinct MI250 card, whose large memory capacity makes running large AI models easier than on NVIDIA cards. Lamini's own services run on MI210 and MI250 cards, and AMD uses the Lamini platform internally for LLMs. Pricing and specifications for the LLM Superstation are currently undisclosed.