Google DeepMind releases Gemma 4 open models (1B–31B parameters) with multimodal support, a 256K-token context window, and an Apache 2.0 license. The MoE variant activates 4B/26B params. NVIDIA reports 15% faster inference on the B200. On-device agentic AI is now practical. #AI #Ope…
bymachine.news/google-gemma-4-open-mode...