
Mistral AI Mixtral 8x7B mixture of experts AI model: impressive benchmarks revealed


Mistral AI has recently unveiled an innovative mixture of experts model that is making waves in the field of artificial intelligence. The new model, now available through Perplexity AI at no cost, has been fine-tuned with the help of the open-source community, positioning it as a strong contender against well-established models such as GPT-3.5. Its standout feature is the ability to deliver high performance while potentially requiring as little as 4 GB of VRAM, thanks to advanced compression techniques that preserve its effectiveness. This suggests that even users with limited hardware resources could soon have access to state-of-the-art AI capabilities. Mistral AI explains more about the new Mixtral 8x7B:

“Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.”

The release of Mixtral 8x7B marks a significant advancement in the development of sparse mixture of experts models (SMoEs). Mixtral 8x7B is a high-quality SMoE with open weights, licensed under Apache 2.0. It outperforms Llama 2 70B on most benchmarks while offering 6x faster inference, making it the leading open-weight model with a permissive license and highly efficient in terms of cost/performance trade-offs, matching or surpassing GPT-3.5 on standard benchmarks.
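
To make the earlier point about compression and modest hardware more concrete, here is a minimal sketch of loading Mixtral with 4-bit quantization via Hugging Face transformers and bitsandbytes. The model ID and memory expectations are assumptions: even in 4-bit form, the full Mixtral weights occupy far more than 4 GB unless layers are offloaded or more aggressive compression schemes are used.

```python
# A minimal sketch, assuming the Hugging Face model ID "mistralai/Mixtral-8x7B-v0.1"
# and a machine with enough combined GPU/CPU memory. 4-bit quantization shrinks the
# weights dramatically, but full Mixtral still needs far more than 4 GB of VRAM
# unless layers are offloaded; treat the figures here as illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-v0.1"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4 format
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16 for quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPU and CPU memory
)

prompt = "Mixtral is a sparse mixture-of-experts model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```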


Mixtral 8x7B exhibits several impressive capabilities. It handles a context of 32k tokens and supports multiple languages, including English, French, Italian, German, and Spanish. Its performance in code generation is strong, and it can be fine-tuned into an instruction-following model, achieving a score of 8.3 on MT-Bench.

Mistral AI mixture of experts (MoE) model

The benchmark achievements of Mistral AI’s model are not just impressive statistics; they represent a significant stride forward that could surpass the performance of existing models such as GPT-3.5. The potential impact of having such a powerful tool freely available is immense, and it is an exciting prospect for anyone interested in leveraging AI for various applications. The model’s performance on challenging datasets, such as HellaSwag and MMLU, is particularly noteworthy. These benchmarks are essential for gauging the model’s strengths and identifying areas for further enhancement.


The architecture of Mixtral is particularly noteworthy. It’s a decoder-only sparse mixture-of-experts network, using a feedforward block that selects from 8 distinct groups of parameters. A router network at each layer chooses two groups to process each token, combining their outputs additively. Although Mixtral has 46.7B total parameters, it only uses 12.9B parameters per token, maintaining the speed and cost efficiency of a smaller model. This model is pre-trained on data from the open web, training both experts and routers simultaneously.
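
To illustrate the routing idea described above, here is a minimal PyTorch sketch of a top-2 mixture-of-experts layer. The dimensions, module names, and expert design are illustrative assumptions, not Mixtral’s actual implementation.

```python
# Minimal top-2 mixture-of-experts routing sketch in PyTorch. Sizes and module
# names are illustrative assumptions, not Mixtral's real architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)  # per-token gate
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = torch.topk(logits, k=2, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for slot in range(2):                  # combine both experts' outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = Top2MoE()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

Because only the two selected experts run for each token, the active parameter count per token (about 12.9B in Mixtral) stays much smaller than the total parameter count (46.7B), which is where the speed and cost advantages come from.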

In comparison to other models such as the Llama 2 family and GPT-3.5, Mixtral matches or outperforms these models on most benchmarks. It also exhibits more truthfulness and less bias, as evidenced by its performance on the TruthfulQA and BBQ benchmarks, where it gives a higher percentage of truthful responses and presents less bias than Llama 2.

Moreover, Mistral AI also released Mixtral 8x7B Instruct alongside the original model. This version has been optimized through supervised fine-tuning and direct preference optimization (DPO) for precise instruction following, reaching a score of 8.30 on MT-Bench. This makes it one of the best open-source models, comparable to GPT-3.5 in performance. The model can be prompted to exclude certain outputs for applications requiring high moderation levels, demonstrating its flexibility and adaptability.
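
As a rough illustration of how such prompting might look, the sketch below prepends a guardrail instruction to the user turn using the transformers chat template. The model ID and the guardrail wording are assumptions, not Mistral’s official safe prompt.

```python
# Sketch of prompting the Instruct model with a guardrail instruction. The model ID
# and the guardrail wording are assumptions, not Mistral's official safe prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

guardrail = ("Always assist with care and respect. "
             "Refuse requests for harmful, unethical, or illegal content.")
messages = [
    {"role": "user",
     "content": f"{guardrail}\n\nExplain what a sparse mixture of experts model is."},
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```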


To support the deployment and usage of Mixtral, changes have been submitted to the vLLM project, incorporating Megablocks CUDA kernels for efficient inference. Furthermore, SkyPilot enables the deployment of vLLM endpoints on cloud instances, enhancing the accessibility and usability of Mixtral in various applications.
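
For readers who want to try this route, here is a minimal sketch of offline batched inference with vLLM. The model ID, GPU count, and prompt format are assumptions; Mixtral at full precision needs multiple high-memory GPUs.

```python
# Minimal vLLM inference sketch. The model ID, GPU count, and prompt format are
# assumptions; adjust them to your hardware and the checkpoint you actually use.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    tensor_parallel_size=2,   # shard the model across two GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
prompts = ["[INST] What is a sparse mixture of experts model? [/INST]"]
outputs = llm.generate(prompts, params)
print(outputs[0].outputs[0].text)
```

vLLM can also expose an OpenAI-compatible HTTP endpoint, which is the kind of deployment that SkyPilot automates on cloud instances.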

AI fine-tuning and training

The training and fine-tuning process of the model, which includes instruct datasets, plays a critical role in its success. These datasets are designed to improve the model’s ability to understand and follow instructions, making it more user-friendly and efficient. The ongoing contributions from the open-source community are vital to the model’s continued advancement. Their commitment to the project ensures that the model remains up-to-date and continues to improve, embodying the spirit of collective progress and the sharing of knowledge.
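
To give a sense of what such instruct data typically looks like, here is an illustrative record in a common instruction-tuning layout. The field names and content are assumptions; Mistral has not published its fine-tuning data.

```python
# Illustrative shape of an instruction-tuning record, emitted as one JSON line.
# Field names and content are assumptions; Mistral has not published its data.
import json

record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Mixtral 8x7B is a sparse mixture-of-experts model released with open weights...",
    "output": "Mixtral 8x7B is an open-weight sparse mixture-of-experts language model.",
}
print(json.dumps(record))
```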

As anticipation builds for more refined versions and updates from Mistral AI, the mixture of experts model has already established itself as a significant development. With continued support and development, it has the potential to redefine the benchmarks for AI performance.

Mistral AI’s mixture of experts model is a notable step forward for the field, particularly in the development of efficient and powerful SMoEs. With its strong benchmark scores, availability at no cost through Perplexity AI, and the support of a dedicated open-source community, the model is well-positioned to make a lasting impact. The possibility of it running on as little as 4 GB of VRAM opens up exciting opportunities for broader access to advanced AI, and its versatility and improvements in truthfulness and bias round out an impressive release.


Image Credit: Mistral AI

Filed Under: Technology News, Top News






