A Squad Of Open-Source LLMs Can Now Beat OpenAI’s Closed-Source GPT-4o

A deep dive into how the Mixture-of-Agents (MoA) model combines the collective strengths of multiple open-source LLMs to outperform OpenAI's GPT-4o

Dr. Ashish Bamania · Published in Level Up Coding · 7 min read · Jun 30, 2024


Image generated with DALL·E 3

There has been a constant battle between open-source and proprietary AI.

The war has been so fierce that Sam Altman, during a visit to India, once remarked that developers could try to build AI like ChatGPT, but they would never succeed in the pursuit.

But Sam has been proven wrong.

A team of researchers recently published a preprint on arXiv showing how multiple open-source LLMs can be combined to achieve state-of-the-art performance on several LLM evaluation benchmarks, surpassing GPT-4 Omni (GPT-4o), OpenAI's flagship model.

They called this model Mixture-of-Agents (MoA).
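Under the hood, MoA is a layered architecture: several "proposer" LLMs each answer the user's prompt, their responses are fed back as auxiliary context to the agents in the next layer, and a final "aggregator" LLM synthesizes the last layer's outputs into a single response. Here is a minimal sketch of that loop in Python. It assumes an OpenAI-compatible API (such as Together's) and illustrative model names; the query_model helper and the prompt wording are simplifications for illustration, not the paper's reference implementation.

```python
from openai import OpenAI

# Assumption: an OpenAI-compatible endpoint serving open-source models
# (e.g. Together AI). The model identifiers below are illustrative.
client = OpenAI(base_url="https://api.together.xyz/v1", api_key="YOUR_API_KEY")

PROPOSERS = [
    "Qwen/Qwen1.5-110B-Chat",
    "microsoft/WizardLM-2-8x22B",
    "meta-llama/Llama-3-70b-chat-hf",
]
AGGREGATOR = "Qwen/Qwen1.5-110B-Chat"


def query_model(model: str, prompt: str) -> str:
    """Send a single prompt to one LLM and return its text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def mixture_of_agents(user_prompt: str, num_layers: int = 3) -> str:
    """Layered MoA loop: each layer's agents see the previous layer's answers."""
    previous_answers: list[str] = []
    for _ in range(num_layers):
        if previous_answers:
            # Feed the previous layer's responses back as auxiliary context.
            context = "\n\n".join(
                f"Response {i + 1}:\n{answer}"
                for i, answer in enumerate(previous_answers)
            )
            augmented_prompt = (
                f"{user_prompt}\n\n"
                f"Here are responses from other models to the same question:\n"
                f"{context}\n\n"
                f"Use them as additional context for your own answer."
            )
        else:
            augmented_prompt = user_prompt
        # Every proposer answers independently within a layer.
        previous_answers = [query_model(m, augmented_prompt) for m in PROPOSERS]

    # A final aggregator synthesizes the last layer's answers into one reply.
    final_prompt = (
        f"Question: {user_prompt}\n\n"
        "Synthesize the following candidate responses into a single, "
        "high-quality answer:\n\n" + "\n\n".join(previous_answers)
    )
    return query_model(AGGREGATOR, final_prompt)


print(mixture_of_agents("Explain why the sky is blue in two sentences."))
```

The key design choice is that aggregation happens in natural language: no fine-tuning or weight sharing is required, so any mix of off-the-shelf open-source models can be swapped into the proposer and aggregator roles.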

They showed that an MoA built entirely from open-source LLMs scored 65.1% on AlpacaEval 2.0, compared to 57.5% for GPT-4 Omni.

This is highly impressive!

This means that the future of AI is no longer in the hands of big tech building software behind closed doors but is more…

