Investing.com -- Google has introduced Gemma 3 270M, a compact AI model built for task-specific fine-tuning, with instruction-following capabilities out of the box.
The new 270-million-parameter model joins Google’s expanding Gemma family, which recently surpassed 200 million downloads. Gemma 3 270M comprises 170 million embedding parameters and 100 million transformer block parameters, and its large 256,000-token vocabulary lets it handle specific and rare tokens.
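The reported breakdown can be sanity-checked with a little arithmetic. The sketch below uses only the figures quoted above; the implied embedding width is an approximation, since the published counts are rounded and the model's actual hidden dimension is not stated here.

```python
# Sanity check of the parameter breakdown reported for Gemma 3 270M.
embedding_params = 170_000_000    # embedding parameters (from the announcement)
transformer_params = 100_000_000  # transformer block parameters
total = embedding_params + transformer_params
print(f"{total / 1e6:.0f}M parameters")  # → 270M

# With a 256,000-token vocabulary, the implied embedding width is roughly
# embedding_params / vocab_size (approximate: the counts above are rounded).
vocab_size = 256_000
approx_hidden_dim = embedding_params / vocab_size
print(f"~{approx_hidden_dim:.0f}-dimensional embeddings")
```

The large vocabulary means most of the model's weight budget sits in the embedding table rather than the transformer layers, which is unusual for a language model and reflects its focus on token coverage over depth.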
A standout feature of the new model is its energy efficiency. Internal tests on a Pixel 9 Pro SoC showed the INT4-quantized model consumed just 0.75% of the battery across 25 conversations, making it Google’s most power-efficient Gemma model to date.
Google is positioning Gemma 3 270M as ideal for high-volume, well-defined tasks such as sentiment analysis, entity extraction, and creative writing. The company suggests the model is particularly suitable when developers need to optimize for cost and speed, require quick iteration and deployment, need to ensure user privacy with on-device processing, or want to create multiple specialized task models.
The model is available in both pretrained and instruction-tuned versions, with Quantization-Aware Trained checkpoints that enable INT4-precision operation with minimal performance degradation. Developers can download Gemma 3 270M from platforms including Hugging Face, Ollama, Kaggle, LM Studio, and Docker, and can try it on Vertex AI or with inference tools such as llama.cpp and Keras.
Google cited a real-world example of the specialized model approach with SK Telecom, where a fine-tuned Gemma 3 4B model exceeded the performance of larger proprietary models for multilingual content moderation.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.