Mozilla.ai’s Post


Over the past few months at Mozilla.ai, we spoke with 35 organizations across sectors including finance, government, startups, and large enterprises to learn how they are using language models in practice. Our interviewees ranged from ML engineers to CTOs, capturing a diverse range of perspectives.

Our interview summary notes for the 35 conversations amounted to 18,481 words (approximately 24,600 tokens), almost the length of a novella. To avoid confirmation bias and subjective interpretation, we turned to language models for a more objective analysis of the data: by providing the models with the complete set of notes, we aimed to uncover patterns and trends free of our pre-existing notions and biases. For this, we used Llama-3-8B-Instruct-Gradient-1048k by Meta and Gradient; Phi-3-medium-128k-instruct by Microsoft; and Qwen1.5-7B-Chat by Alibaba Cloud.

To read about the GenAI trends across these 35 organizations, check out our latest learnings, written up by Stefan French! #machinelearning #LLM #GenAI
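For readers who want to try a similar long-context analysis on their own machine, here is a minimal sketch using Hugging Face transformers. The notes file name, the prompt wording, and the generation settings are illustrative assumptions; only the model (Llama-3-8B-Instruct-Gradient-1048k, assumed to be available as gradientai/Llama-3-8B-Instruct-Gradient-1048k on the Hugging Face Hub) comes from the post.

    # Sketch: feed a full set of interview notes to a long-context local model
    # and ask it to surface recurring trends. File name and prompt are assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    # ~18,500 words (~24,600 tokens) of interview summary notes in one file.
    with open("interview_notes.txt") as f:
        notes = f.read()

    messages = [
        {"role": "system", "content": "You are an analyst summarizing interview notes."},
        {"role": "user", "content": (
            "Identify the recurring GenAI adoption trends across these notes:\n\n" + notes
        )},
    ]

    # Build the chat prompt and generate a deterministic summary of the trends.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

The same prompt can be rerun against the other models mentioned in the post (Phi-3-medium-128k-instruct, Qwen1.5-7B-Chat) by swapping the model ID, which is one way to compare how different long-context models read the same notes.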

Uncovering GenAI Trends: Using Local Language Models to Explore 35 Organizations

blog.mozilla.ai
