AWS has announced that the Mistral 7B and Mixtral 8x7B models will soon be available on AWS Bedrock, highlighting the speed and cost-effectiveness of both models. Some organizations also require models that can be audited, which makes access to open-source models a crucial option.
The Mixtral 8x7B model performs on par with GPT-3.5 but excels in speed, with providers such as Groq demonstrating throughput of 500 tokens/s.
The continuous addition of models from various providers gives AWS Bedrock a wide range of options, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon itself.
The models are not yet operational, but AWS states that they will be available soon.
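Once the models go live, invoking them should follow Bedrock's standard `invoke_model` pattern via boto3. The sketch below is a minimal example under stated assumptions: the exact Mistral model ID and the response JSON layout are guesses based on Bedrock's usual conventions and should be verified against the Bedrock console and documentation when the models launch.

```python
import json

# ASSUMPTION: model ID guessed from Bedrock's "<provider>.<model>" naming
# scheme; confirm the real ID in the Bedrock console once Mistral is live.
MISTRAL_7B_MODEL_ID = "mistral.mistral-7b-instruct-v0:2"


def build_mistral_request(prompt: str, max_tokens: int = 256,
                          temperature: float = 0.5) -> str:
    """Build a JSON request body with an [INST]-wrapped prompt, the
    instruction format Mistral's instruct models are trained on."""
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    })


def invoke_mistral(prompt: str, region: str = "us-east-1") -> str:
    """Call the model through the Bedrock runtime API.
    Requires AWS credentials with Bedrock access configured."""
    import boto3  # imported here so the module loads without boto3 installed

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId=MISTRAL_7B_MODEL_ID,
        contentType="application/json",
        accept="application/json",
        body=build_mistral_request(prompt),
    )
    # ASSUMPTION: response body layout; check the model's actual schema.
    result = json.loads(response["body"].read())
    return result["outputs"][0]["text"]
```

Usage would be a single call such as `invoke_mistral("Summarize what AWS Bedrock is.")`, assuming the account has been granted access to the model in the chosen region.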
TL;DR: AWS introduces the Mistral 7B and Mixtral 8x7B models for use on AWS Bedrock, highlighting their speed and cost efficiency. A wide range of providers already offer models on the platform, and the Mistral models are set to be available soon.