Google DeepMind launched two new AI models aimed at healthcare research and development today.
The new additions are MedGemma 27B Multimodal and MedSigLIP. MedGemma 27B Multimodal adds complex multimodal and longitudinal electronic health record interpretation to the existing 4B Multimodal and 27B text-only versions. MedSigLIP is a lightweight image and text encoder designed for classification, search, and related tasks.
These models build on DeepMind’s existing Health AI Developer Foundations (HAI-DEF) collection, which provides open models for healthcare developers with a focus on privacy and control.
MedGemma excels at tasks that require free-text generation, such as medical report writing and visual question answering. MedSigLIP targets imaging tasks with structured output, such as classification and retrieval. Both are efficient enough to run on a single GPU, with variants that can be adapted for mobile hardware. A minimal usage sketch follows below.
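As a rough illustration of the structured-output use case, the sketch below shows zero-shot image classification with a SigLIP-style image and text encoder loaded through Hugging Face transformers. The checkpoint name `google/medsiglip-448`, the image file, and the label texts are assumptions for illustration only; consult the HAI-DEF model pages for the actual identifiers, access terms, and recommended prompts.

```python
# Minimal sketch: zero-shot classification with a SigLIP-style encoder,
# assuming a MedSigLIP checkpoint is available via Hugging Face transformers.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

model_id = "google/medsiglip-448"  # assumed checkpoint name
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chest_xray.png").convert("RGB")  # placeholder input image
labels = ["a chest X-ray with pleural effusion", "a normal chest X-ray"]

inputs = processor(
    text=labels, images=image, padding="max_length", return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

# SigLIP scores image-text pairs with a sigmoid rather than a softmax.
probs = torch.sigmoid(outputs.logits_per_image)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

The same encoder outputs can also be used as embeddings for retrieval or search, which is the other structured-output task the announcement highlights.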
Full development and evaluation details for these models are available in the MedGemma technical report on arXiv.
The launch follows the initial release of MedGemma in May, built on the larger Gemma 3 models and aimed at jumpstarting AI applications in healthcare and the life sciences.
DeepMind’s focus remains on providing open, lightweight AI models that keep developer control and data privacy front and center.