Get started with Mistral models in a few clicks via our developer platform, hosted on Mistral’s infrastructure, and build your own applications and services. Our servers are hosted in the EU.
We release the world’s most capable open models, enabling frontier AI innovation.
Our portable developer platform serves our open and optimized models for building fast and intelligent applications. We offer flexible access options!
We’re committed to empowering the AI community with open technology. Our open models set the bar for efficiency and are available for free under Apache 2.0, a fully permissive license that allows the models to be used anywhere, without restriction.
Our very first. A 7B transformer model, fast to deploy and easily customisable. Small, yet very powerful for a variety of use cases.
A 7B sparse Mixture-of-Experts (SMoE). Uses 12.9B active parameters out of 45B total.
Mixtral 8x22B is currently the most performant open model. A 22B sparse Mixture-of-Experts (SMoE). Uses only 39B active parameters out of 141B.
Our optimized commercial models are designed for performance and are available via our flexible deployment options.
Cost-efficient reasoning for low-latency workloads.
Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family.
A state-of-the-art semantic model for extracting representations of text extracts.
Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced AI applications for software developers.
State-of-the-art Mistral model trained specifically for code tasks.
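For example, here is a minimal sketch of a fill-in-the-middle completion with Codestral via the legacy `mistralai` Python client used elsewhere on this page; the `completion` method and the `codestral-latest` alias are assumptions to verify against the client version you have installed:

```python
import os

from mistralai.client import MistralClient

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

prompt = "def fibonacci(n: int):"
suffix = "return results"

# Codestral fills in the code between `prompt` and `suffix`
response = client.completion(
    model="codestral-latest",  # assumed alias for the latest Codestral
    prompt=prompt,
    suffix=suffix,
)
print(response.choices[0].message.content)
```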
We’re constantly innovating to provide the most capable and efficient models.
We build models that offer unparalleled cost efficiency for their respective sizes, delivering the best performance-to-cost ratio available on the market. Mixtral 8x22B is the most powerful open-source model, with significantly fewer parameters than its competition.
Our open models are truly open source, licensed under Apache 2.0, a fully permissive license that allows for unrestricted use in any context.
Access our latest products via our developer platform, hosted in Europe
```python
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-tiny"

client = MistralClient(api_key=api_key)
messages = [
    ChatMessage(role="user",
                content="Who is the most renowned French painter?")
]

# Send the conversation to the chat endpoint and print the reply
chat_response = client.chat(model=model, messages=messages)
print(chat_response.choices[0].message.content)
```
La Plateforme is developers’ preferred way to access all Mistral AI’s models, hosted and served on Mistral AI infrastructure in Europe.
We let you fine-tune our models in an easy, effective, and cost-efficient way, so you can use smaller models that are better suited to your specific use cases. Fine-tuning can be done with our open-source fine-tuning code as well as on La Plateforme with our efficient Fine-tuning API.
Use Mistral’s fine-tuning code to fine-tune our open-source models on your own infrastructure.
Leverage Mistral’s unique expertise in training models by using our highly efficient fine-tuning service to specialize both our open-source and commercial models.
**Pricing in USD**

| Model | Description | Input | Output |
|---|---|---|---|
| open-mistral-7b | A 7B transformer model, fast to deploy and easily customisable. | $0.25 /1M tokens | $0.25 /1M tokens |
| open-mixtral-8x7b | A 7B sparse Mixture-of-Experts (SMoE). Uses 12.9B active parameters out of 45B total. | $0.7 /1M tokens | $0.7 /1M tokens |
| open-mixtral-8x22b | Currently the most performant open model. A 22B sparse Mixture-of-Experts (SMoE). Uses only 39B active parameters out of 141B. | $2 /1M tokens | $6 /1M tokens |

**Pricing in EUR**

| Model | Description | Input | Output |
|---|---|---|---|
| open-mistral-7b | A 7B transformer model, fast to deploy and easily customisable. | 0.2€ /1M tokens | 0.2€ /1M tokens |
| open-mixtral-8x7b | A 7B sparse Mixture-of-Experts (SMoE). Uses 12.9B active parameters out of 45B total. | 0.65€ /1M tokens | 0.65€ /1M tokens |
| open-mixtral-8x22b | Currently the most performant open model. A 22B sparse Mixture-of-Experts (SMoE). Uses only 39B active parameters out of 141B. | 1.9€ /1M tokens | 5.6€ /1M tokens |
**Pricing in USD**

| Model | Description | Input | Output |
|---|---|---|---|
| mistral-small-2402 | Cost-efficient reasoning for low-latency workloads. | $1 /1M tokens | $3 /1M tokens |
| codestral-2405 | State-of-the-art Mistral model trained specifically for code tasks. | $1 /1M tokens | $3 /1M tokens |
| mistral-medium-2312 | *Will soon be deprecated* | $2.7 /1M tokens | $8.1 /1M tokens |
| mistral-large-2402 | Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family. | $4 /1M tokens | $12 /1M tokens |

**Pricing in EUR**

| Model | Description | Input | Output |
|---|---|---|---|
| mistral-small-2402 | Cost-efficient reasoning for low-latency workloads. | 0.9€ /1M tokens | 2.8€ /1M tokens |
| codestral-2405 | State-of-the-art Mistral model trained specifically for code tasks. | 0.9€ /1M tokens | 2.8€ /1M tokens |
| mistral-medium-2312 | *Will soon be deprecated* | 2.5€ /1M tokens | 7.5€ /1M tokens |
| mistral-large-2402 | Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family. | 3.8€ /1M tokens | 11.3€ /1M tokens |
**Pricing in USD**

| Model | Description | Input | Output |
|---|---|---|---|
| mistral-embed | A state-of-the-art semantic model for extracting representations of text extracts. | $0.1 /1M tokens | $0.1 /1M tokens |

**Pricing in EUR**

| Model | Description | Input | Output |
|---|---|---|---|
| mistral-embed | A state-of-the-art semantic model for extracting representations of text extracts. | 0.1€ /1M tokens | 0.1€ /1M tokens |
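As a usage reference, here is a minimal sketch of calling `mistral-embed` with the same legacy `mistralai` Python client shown above; the batch size and input strings are illustrative:

```python
import os

from mistralai.client import MistralClient

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Embed a batch of text extracts in a single call
response = client.embeddings(
    model="mistral-embed",
    input=["Embed this sentence.", "As well as this one."],
)

# One embedding vector per input string
for item in response.data:
    print(len(item.embedding))
```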
**Pricing in USD**

| Model | One-off training | Storage | Input | Output |
|---|---|---|---|---|
| Mistral 7B | $2 /1M tokens | $2 per month per model | $0.75 /1M tokens | $0.75 /1M tokens |
| Mistral Small | $4 /1M tokens | $2 per month per model | $2.5 /1M tokens | $7.5 /1M tokens |

**Pricing in EUR**

| Model | One-off training | Storage | Input | Output |
|---|---|---|---|---|
| Mistral 7B | 1.9€ /1M tokens | 1.9€ per month per model | 0.7€ /1M tokens | 0.7€ /1M tokens |
| Mistral Small | 3.8€ /1M tokens | 1.9€ per month per model | 2.3€ /1M tokens | 7€ /1M tokens |
Mistral AI provides a fine-tuning API through La Plateforme, making it easy to fine-tune our open-source and commercial models. There are three costs related to fine-tuning: a one-off training cost per 1M tokens of training data, a monthly storage cost per fine-tuned model, and the standard inference cost on input and output tokens (see the tables above).
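As an illustration, here is a minimal sketch of that flow with the legacy `mistralai` Python client used elsewhere on this page; the exact method names, the `TrainingParameters` import path, and the hyperparameter values are assumptions to check against the client version you have installed:

```python
import os

from mistralai.client import MistralClient
from mistralai.models.jobs import TrainingParameters  # assumed import path

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# 1. Upload a JSONL file of chat-formatted training examples.
with open("training_data.jsonl", "rb") as f:
    training_file = client.files.create(file=("training_data.jsonl", f))

# 2. Launch the fine-tuning job: this is where the one-off training
#    cost (billed per 1M tokens of training data) applies.
job = client.jobs.create(
    model="open-mistral-7b",
    training_files=[training_file.id],
    hyperparameters=TrainingParameters(
        training_steps=10,     # illustrative values
        learning_rate=0.0001,
    ),
)

# 3. Poll the job; once finished, the fine-tuned model is billed for
#    monthly storage and for inference on input/output tokens.
retrieved = client.jobs.retrieve(job.id)
print(retrieved.status, retrieved.fine_tuned_model)
```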