Sarvam AI Unveils the Flagship Open-Source LLM with 24 Billion Parameters

Sarvam AI logo displayed as the company announces its 24B parameter open-source language model.


Sarvam AI has introduced Sarvam-M, a 24-billion-parameter hybrid language model with strong math, programming, and Indian-language skills.


The Indian AI firm Sarvam AI has introduced Sarvam-M, its flagship Large Language Model (LLM). Built on top of Mistral Small, the LLM is a 24-billion-parameter open-weights hybrid language model. According to the company, Sarvam-M sets new records in programming, mathematics, and even Indian-language comprehension, and has been developed for a wide variety of uses.

 

Among the noteworthy applications of Sarvam-M are machine translation, conversational AI, and educational resources. The open-weights model is also designed to handle mathematical and programming reasoning tasks.

 

According to the official blog post, the model was improved through a three-step approach: Supervised Fine-Tuning (SFT), Reinforcement Learning with Verifiable Rewards (RLVR), and inference optimisations.

 


 

For SFT, the Sarvam team curated a broad range of prompts, prioritising quality and complexity. They generated completions with permissible models, filtered them with custom scoring, and adjusted outputs to reduce bias and improve cultural relevance. Through SFT, Sarvam-M was trained to operate in two modes: “think” for complex reasoning and “non-think” for casual conversation.
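The dual-mode behaviour can be pictured as a toggle in the chat template. The sketch below is purely illustrative: the `build_prompt` helper and the `<mode:...>` tag names are assumptions for explanation, not Sarvam-M’s actual template.

```python
def build_prompt(messages, think=True):
    """Assemble a chat prompt with an explicit reasoning-mode tag.

    Hypothetical sketch: the <mode:...> marker and role tags are
    illustrative, not Sarvam-M's real chat-template syntax.
    """
    mode = "think" if think else "no_think"
    parts = [f"<mode:{mode}>"]
    for msg in messages:
        parts.append(f"<{msg['role']}>{msg['content']}</{msg['role']}>")
    parts.append("<assistant>")  # the model continues generating from here
    return "\n".join(parts)

# "think" mode for a reasoning-heavy query, "non-think" for casual chat
prompt = build_prompt([{"role": "user", "content": "What is 17 * 23?"}], think=True)
```

During SFT, training pairs for both modes teach the model to produce long chains of thought only when the reasoning tag is present.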

 

Sarvam-M was then further trained with RLVR, using a curriculum spanning math, programming, and instruction-following datasets. To improve the model’s performance across tasks, the researchers employed strategies such as prompt sampling and custom reward engineering.
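The “verifiable” part of RLVR means the reward comes from a programmatic check rather than a learned reward model. A minimal sketch for math problems, assuming the final number in a completion is the answer (the `math_reward` helper is hypothetical, not Sarvam’s actual reward code):

```python
import re

def math_reward(completion: str, gold_answer: str) -> float:
    """Return 1.0 if the last number in the completion equals the gold
    answer, else 0.0 -- a deterministic, verifiable reward signal."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == gold_answer else 0.0
```

Rewards like this drive the RL update: correct completions are reinforced and incorrect ones are not, with no human labelling in the loop.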

 

To optimise inference with little accuracy loss, the model underwent post-training quantization to FP8 precision. Techniques such as lookahead decoding were used to increase throughput, although enabling higher concurrency posed challenges.
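Post-training FP8 quantization rescales weights into the FP8 dynamic range (±448 for the common E4M3 format) before storing them in 8 bits. A minimal pure-Python sketch of the per-tensor scaling idea; it models only the scale-round-rescale round trip, not true FP8 mantissa rounding:

```python
FP8_E4M3_MAX = 448.0  # largest finite value in the FP8 E4M3 format

def fp8_roundtrip(weights):
    """Per-tensor scale, round, and rescale -- a simplified stand-in for
    FP8 post-training quantization. Real kernels also round the mantissa
    to 3 bits and keep the weights stored as 8-bit values."""
    scale = max(abs(w) for w in weights) / FP8_E4M3_MAX
    return [round(w / scale) * scale for w in weights]

original = [0.1, -0.5, 2.0]
restored = fp8_roundtrip(original)
# The round trip loses a little precision -- the "little accuracy loss"
# that quantization trades for smaller, faster inference.
```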

 


 

Notably, the model demonstrated an impressive +86% improvement on tasks combining math and Indian languages, such as the romanised Indian-language GSM-8K benchmark. Sarvam-M outperformed Llama-4 Scout on the majority of benchmarks and is on par with larger models such as Llama-3.3 70B and Gemma 3 27B. Nonetheless, it shows a minor decline (~1%) on English-proficiency benchmarks such as MMLU.

 

The Sarvam-M model is now available through Sarvam’s API for testing and integration, and can also be downloaded from Hugging Face.

About The Author:

Yogesh Naager is a content marketer who specializes in the cybersecurity and B2B space. Besides writing for the News4Hackers blogs, he also writes for brands including Craw Security, Bytecode Security, and NASSCOM.

