
What is Stable Beluga AI fine tuned large language model?


In the fast, ever-evolving world of artificial intelligence, Stability AI and the CarperAI lab have made a significant stride with the launch of Stable Beluga 1 and Stable Beluga 2. These two new open-access Large Language Models (LLMs) were unveiled in July 2023 and have since been making waves in the AI community.

Stable Beluga 1, the first of the duo, is built upon the robust LLaMA 65B foundation model. It utilizes a synthetically-generated dataset for fine-tuning, a novel approach that sets it apart from its peers. On the other hand, Stable Beluga 2, the second model, is based on the LLaMA 2 70B foundation model and boasts industry-leading performance.

Stable Beluga training

The training approach behind the Stable Beluga models did not emerge in a vacuum; it builds directly on methodology proposed by Microsoft. The company's paper, "Orca: Progressive Learning from Complex Explanation Traces of GPT-4," served as the blueprint that eventually led to the creation of the Stable Beluga models.

The data generation process closely resembles Microsoft's approach, with a few nuanced differences. One such divergence lies in Stability AI's choice of data sources, which were carefully selected to meet rigorous quality standards and to cater to the unique requirements of the development process.

The actual training dataset consists of 600,000 data points, roughly 10% of the dataset size used in the original Orca research. Each of these data points was synthetically generated with meticulous attention to quality.


These high-quality instructions originate from a set of datasets created by Enrico Shippole. The datasets are well regarded for their quality, robustness, and reliability, making them strong building blocks for the expansive training set.

It is this fusion of inspiration, dataset, and methodology that equipped the Stable Beluga models to excel, a testament to the importance of collaboration and shared learning in advancing technology.

Despite the smaller sample size used for training, the Stable Beluga models have shown exceptional performance across various benchmarks. They were evaluated using EleutherAI’s lm-eval-harness, with AGIEval added, and have demonstrated proficiency in intricate reasoning, understanding linguistic subtleties, and answering complex questions.

The results of these evaluations were not only confirmed by Stability AI researchers but also independently reproduced by Hugging Face. As of July 27th, 2023, Stable Beluga 2 ranked #1 and Stable Beluga 1 ranked #4 on their leaderboard, a testament to their superior performance.


Name change from FreeWilly

The Stable Beluga models are expected to significantly advance research, enhance natural language understanding, and enable complex tasks. Initially codenamed FreeWilly, the models were renamed to Stable Beluga to better reflect their optimized “harmlessness”.

“Why did we change the names? These models were renamed from their internal code-name FreeWilly (a homage to the movies that some of us remember fondly), referring to the Orca paper. There were multiple reasons for the name change, the most notable being that belugas are gentler animals, unlike the fierce Orca (commonly known as killer whales). Stable Beluga models are optimized for “harmlessness”; therefore, the new names fit better with the models.”

The weights for Stable Beluga 2 are released as-is, while those for Stable Beluga 1 are released as deltas over the original model. Both models are released under the Stable Beluga Research License, further emphasizing their role in advancing AI research.
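Because Stable Beluga 1 ships as deltas rather than full weights, usable weights must be reconstructed by adding each delta tensor to the corresponding tensor of the original LLaMA 65B base model. The sketch below illustrates the idea on toy values; the parameter name and the plain-list "tensors" are illustrative assumptions, not the actual release format.

```python
# Illustrative sketch: recovering full weights from a delta release.
# The parameter name and list-based "tensors" are hypothetical; a real
# merge would operate element-wise on the models' actual state dicts.
def apply_deltas(base_state, delta_state):
    """Return a new state dict where each parameter is base + delta."""
    merged = {}
    for name, base_tensor in base_state.items():
        delta_tensor = delta_state[name]
        merged[name] = [b + d for b, d in zip(base_tensor, delta_tensor)]
    return merged

# Toy example with a single hypothetical parameter.
base = {"layers.0.weight": [2, -4, 6]}
delta = {"layers.0.weight": [1, 1, -2]}
merged = apply_deltas(base, delta)
```

Releasing deltas rather than full weights is a common way to distribute a fine-tune while respecting the license terms of the base model, since the deltas alone are unusable without separately obtaining the original weights.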


The launch of Stable Beluga 1 and Stable Beluga 2 marks a significant milestone in the field of AI, promising to advance natural language understanding and enable complex tasks. For more information, visit the official Stability AI website.
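For readers who want to experiment with the models, the Stable Beluga releases on Hugging Face describe a three-part prompt template with system, user, and assistant sections. The helper below is a minimal sketch of that template; the exact wording should be confirmed against the official model card before use.

```python
# Minimal sketch of the Stable Beluga prompt template (system / user /
# assistant sections). Check the official Hugging Face model card for
# the authoritative format before relying on this.
def build_prompt(system_message: str, user_message: str) -> str:
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt(
    "You are Stable Beluga, a helpful assistant.",
    "Summarize the Orca training approach.",
)
```

The trailing "### Assistant:" section is left empty on purpose: the model continues generation from that point, producing the assistant's reply.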
