VinaLLaMA: LLaMA-based Vietnamese Foundation Model
Dec 18th, 2023
pixel art of a cute llama in Vietnamese style - Created by DALL-E 3
Read our paper: arxiv
Try out VinaLLaMA: Demo
HuggingFace: [2.7B] | [2.7B-chat] | [7B] | [7B-chat] | [7B-chat-GGUF]
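For a quick start, the chat models can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch; the repository id "vilm/vinallama-7b-chat" is an assumption based on the links above, so substitute the exact id from the model card if it differs, and check the card for the recommended chat template.

```python
# Minimal sketch: loading a VinaLLaMA chat model with Hugging Face transformers.
# The repo id below is an assumption; use the exact id from the HuggingFace page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/vinallama-7b-chat"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fall back to float16 if bfloat16 is unsupported
    device_map="auto",
)

prompt = "Xin chào! Bạn có thể giới thiệu về Việt Nam không?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```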
We are proud to unveil VinaLLaMA, a groundbreaking Foundation Language Model tailored specifically for the Vietnamese language. This innovative platform marks a significant leap forward in the realm of Vietnamese AI, positioning VinaLLaMA as an indispensable tool for both research and practical applications in natural language processing.
Engineered on top of Meta AI's LLaMA-2 and enriched with over 800 billion additional training tokens, VinaLLaMA is not only fluent in the Vietnamese language and culture; it also excels in a broad spectrum of tasks, including Reasoning, Mathematics, Agent Prompting, and Coding. This exceptional performance positions VinaLLaMA at the forefront of language AI, offering unparalleled accuracy, depth, and versatility.
VinaLLaMA is more than just a language model; it's a testament to our commitment to advancing AI technology and our dedication to enhancing the capabilities of Vietnamese language processing. Join us in exploring the vast potential of VinaLLaMA, a true marvel in the world of artificial intelligence.
State-of-the-art performance
Our rigorous benchmarking spanned a diverse array of tasks, encompassing natural language processing, coding, and mathematical reasoning.
We started with the VLSP benchmark, a crucial standard for evaluating Vietnamese Large Language Models. In this domain, our VinaLLaMA models not only set a new standard but also outshone the once-celebrated PhoGPT.
Impressively, VinaLLaMA-7B did not just surpass PhoGPT in scoring; it did so by a substantial 54% margin. Our streamlined VinaLLaMA-2.7B also demonstrated remarkable efficiency, scoring 32% higher while requiring only a third of the computational resources.
In a specialized test, VMLU, designed to assess the performance of language models within the specific cultural and linguistic context of Vietnam, VinaLLaMA demonstrated exceptional proficiency. This benchmark is crucial for evaluating how well a Vietnamese Large Language Model understands the nuances of its native users, distinguishing it from predominantly English-focused LLMs like ChatGPT or LLaMA.
Remarkably, VinaLLaMA-2.7B, despite having a lower parameter count, surpassed PhoGPT's performance by 8%. This achievement underscores the model's efficiency and advanced capabilities. The larger variant, VinaLLaMA-7B, took this performance to a new level, showcasing an astounding 74% improvement. These results not only exhibit the technological prowess of VinaLLaMA but also mark a significant stride in developing AI that resonates deeply with the Vietnamese language and its cultural context.
Say goodbye to ChatGPT!
The Vietnamese Vicuna Benchmark, developed by VinAI Research, provided a unique testing ground for language models, focusing on instruction-following abilities. In this rigorous assessment, GPT-4 served as the adjudicator, scoring answers on a scale from 0 to 4. VinaLLaMA-7B distinguished itself as the premier Vietnamese open-weight model, demonstrating comparable performance to ChatGPT-3.5-Turbo across various tasks. This benchmark underscores VinaLLaMA's adeptness in processing and responding to the Vietnamese language and cultural context.
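To make the judging setup concrete, the sketch below shows one way such a GPT-4-as-judge loop could be implemented with the OpenAI Python SDK. The rubric prompt and helper function are illustrative assumptions paraphrasing the 0-to-4 scale described above, not the actual benchmark script.

```python
# Illustrative sketch of an LLM-as-judge loop, assuming the OpenAI Python SDK (>= 1.0).
# The rubric wording is a hypothetical paraphrase of the 0-4 scale, not the benchmark's own prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, answer: str) -> int:
    """Ask GPT-4 to grade a model answer on a 0-4 scale and return the integer score."""
    rubric = (
        "You are grading an answer to a Vietnamese instruction-following question. "
        "Reply with a single integer from 0 (useless) to 4 (excellent).\n\n"
        f"Question: {question}\nAnswer: {answer}\nScore:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": rubric}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())
```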
These results signify more than just technological progress; they represent a pivotal moment for the Vietnamese community. With VinaLLaMA's demonstrated capabilities, it's clear that the future of AI in Vietnam is promising, offering effective and culturally resonant tools for both global and local digital interactions.
Viet-lish? No Problem!
As a developing country, Vietnam has a growing local need for English conversation, so VinaLLaMA was designed from the beginning as a Vietnamese-English bilingual LLM. Not only does VinaLLaMA speak both languages fluently, it also outperforms all tested 7B models in English, including Meta AI's LLaMA-2 Chat (RLHF) version. Furthermore, VinaLLaMA shows impressive mathematical reasoning in English, beating all other tested 7B models on GSM8K, a mathematical reasoning benchmark. You can find out more in our paper!
Is this the end? HELL NAH!
No worries, our RLAIF versions, along with a whole new series of really smol models, are coming. Stay tuned!