Samsung unveils Gauss, on-device GenAI models for text, images and code


Today, South Korean electronics giant Samsung made its first major move in the generative AI space by announcing Gauss, a foundation model designed to run locally on smartphones and produce text, code and images.

At its ongoing AI Forum in Suwon, the company detailed its work on the new model, noting that the technology is currently being tested internally by its employees. The model is named after Carl Friedrich Gauss, the German mathematician and physicist who established normal distribution theory, which underpins much of modern machine learning and AI.

Eventually, the company plans to evolve Gauss and use it for “a variety of product applications” to deliver new user experiences. The move comes as technology companies, including Apple and Google, explore the potential of on-device AI for different use cases.

What to expect from Samsung Gauss?

While Gauss has just been announced, Samsung Research has confirmed that it will have three versions: Gauss Language, Gauss Code and Gauss Image.

The language model will work similarly to Google Workspace’s generative AI smarts and help with tasks such as composing emails, summarizing documents and translating content. It may also enable smarter device control, Samsung indicated without sharing specific details.

Meanwhile, Gauss Image will handle photo work on-device, from generating and editing images to enhancing them with additions and upscaling their resolution. That would be akin to having a feature like generative fill built right into a smartphone's photo editor.

While those two capabilities are aimed at people using Samsung devices, Gauss Code will serve as a software development assistant, helping teams write code more quickly. It will support functions such as code description and test case generation through an interactive interface, the company added.

No word on availability

The addition of generative AI to the Samsung ecosystem could mean a major upgrade for the company's customers. However, there's no word yet on when Samsung plans to roll out the integration.

For now, the company says only that it is using the model to boost employee productivity and will expand it to various product applications in the near future to deliver new user experiences.

If anything, Samsung may add the model, and multiple capabilities driven by it, to its next flagship, planned for 2024. That would also align with the launch timeline of Qualcomm’s next-generation chip, whose AI engine supports multimodal generative AI models, large language models, language vision models and transformer-based automatic speech recognition at over 10 billion parameters. Qualcomm is Samsung’s vendor for mobile SoCs.

The move intensifies the race for on-device AI, which is also being explored by the likes of Google and Apple. The former recently launched the Pixel 8 Pro with distilled versions of its text- and image-generating models to power applications like image editing, while the latter has been hiring extensively for generative AI roles and has debuted an AI-driven voice cloning accessibility feature.

With dedicated hardware and AI models running on devices, users can expect better results than those delivered by cloud-based general-purpose models. In an interview with CNET, Qualcomm’s senior VP of product management Ziad Asghar said models’ access to device-specific data – like driving patterns, restaurant searches, photos and more – will result in more personalized outcomes than currently possible.

Samsung, for its part, continues to move in this direction. The company has also set up an AI Red Team to detect and eliminate security and privacy issues that may arise throughout the process of bringing its AI vision to life.

It’s expected to share more in the months to come.