![Google's Gemma](https://allaboutaitech.com/wp-content/uploads/2024/02/Googles-Gemma-1024x536.png)
Google just launched Gemma, a family of lightweight open models that developers can use to build their own AI applications. Gemma was built from the same research and technology used to create Google's Gemini models.
Google's CEO Sundar Pichai also announced Gemma in a post on X: "Introducing Gemma – a family of lightweight, state-of-the-art open models for their class built from the same research & tech used to create the Gemini models."
Unlike Google's Gemini models, Gemma cannot accept visual or audio input. It currently accepts only text prompts and is available only in English, whereas the Gemini models support multiple languages.
Gemma is now available worldwide and comes in two sizes: Gemma 2B and Gemma 7B. Both sizes can be used and distributed commercially by organizations of any size. However, Google's terms require responsible use: misuse such as spreading hate or aiding weapons development is strictly prohibited.
Gemma beats other Open Models
According to Google, Gemma 7B outperforms Llama 2 7B and Llama 2 13B on all benchmarks, including general language understanding, reasoning, math, and coding, and beats Mistral 7B on math/science and coding tasks. You can also read Gemma's full technical report to learn more about the technical details.
Safety Considerations with Google’s Gemma
Because open models carry a greater risk of misuse, Google says it performed safety testing and extensive red teaming on Gemma to identify risks. It also filtered personal information and other sensitive data out of the training sets and applied robust guardrails to the models. Alongside the models, Google is launching a Responsible Generative AI Toolkit with tools and guidance for building safe AI applications with Gemma.
With the Responsible Generative AI Toolkit, developers can define their own guidelines and safety nets when using Gemma in their applications.
How to access Google’s Gemma
Gemma is now available and can run locally on laptops and PCs; it also targets IoT devices, mobile devices, and the cloud. The models can be accessed via Kaggle, Colab, Hugging Face, Nvidia's NeMo, and Google's Vertex AI, and are optimized for Nvidia GPUs and Google Cloud TPUs.
Users can access Gemma for free via Kaggle and Colab's free tier. First-time Google Cloud users get $300 in credits to use the models, and researchers can apply for Google Cloud credits of up to $500,000.
Conclusion
After releasing its Gemini models, Google has now entered the open model space with Gemma. Its biggest competitor, OpenAI, has yet to enter this space, which could work in Google's favor. With Gemma, Google is well positioned to challenge Mistral and Meta's Llama 2.
Read More
Elon Musk on X’s Potential Partnership With Midjourney and Neuralink’s First Patient
OPPO and OnePlus Smartphones to get AI Features with the New ColorOS Update in China