![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/07/vgtalk.png?fit=200%2C114&ssl=1)
Podcast: Creating Custom LLMs
- Vincent Granville
- July 17, 2024
Despite GPT, Claude, Gemini, Llama, and the host of other LLMs we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a ‘custom’ OpenAI GPT, but what does it take to create a truly […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/06/vendors.png?fit=200%2C111&ssl=1)
Synthesizing Multi-Table Databases: Model Evaluation & Vendor Comparison
- Vincent Granville
- June 15, 2024
Synthesizing multi-table tabular data presents its own challenges compared to the single-table case. When the database contains date columns such as transaction or admission dates, a frequent occurrence in real-world datasets, generating high-quality synthetizations and evaluating models become even more complicated. In this article, we focus on this type of problem, comparing generated observations produced by […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/06/ppt-llm2.png?fit=200%2C123&ssl=1)
New Trends in LLM: Overview with Focus on xLLM
- Vincent Granville
- June 3, 2024
If you ever wondered how xLLM differs from other LLM and RAG architectures, what foundational changes make it appealing to Fortune 100 companies, and which of its innovations are being copied by competitors, read on. In this article, I share the latest trends and provide a high-level summary of xLLM, describing the […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/05/xLLM-diagram.png?fit=200%2C169&ssl=1)
New Book: State of the Art in GenAI & LLMs — Creative Projects, with Solutions
- Vincent Granville
- May 20, 2024
With 23 top projects, 96 subprojects, and 6,000 lines of Python code, this vendor-neutral coursebook is a goldmine for any analytics professional or AI/ML engineer interested in developing superior GenAI or LLM enterprise apps using ground-breaking technology. This is not another book discussing the same topics you learn in bootcamps, college classes, Coursera, or […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/05/mixture2.png?fit=200%2C149&ssl=1)
GenAI Evaluation Metrics: Your Best Loss Functions to Boost Quality
- Vincent Granville
- May 17, 2024
Whether dealing with LLMs, computer vision, clustering, predictive analytics, synthetization, or any other AI problem, the goal is to deliver high-quality results in as little time as possible. Typically, you assess the output quality after producing the results, using model evaluation metrics. These metrics are also used to compare various models, or to measure […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/05/dendo.png?fit=200%2C125&ssl=1)
Breakthrough: Zero-Weight LLM for Accurate Predictions and High-Performance Clustering
- Vincent Granville
- May 4, 2024
While most AI companies keep building LLMs with more weights and tokens (one trillion is now a standard number), I went in the opposite direction. Of course, zero weights means that there is no neural network behind the scenes. More specifically, it means there is no lengthy black-box process to find the “best” weights […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/04/catwolf.png?fit=200%2C142&ssl=1)
Build and Evaluate High Performance Taxonomy-Based LLMs From Scratch
- Vincent Granville
- April 21, 2024
One obvious way to dramatically improve the quality of LLM and RAG systems is to use high-quality input sources, as opposed to just raw text from crawled or parsed content. Combine this with specialization: one LLM per top domain, allowing the user to customize parameters and specify the domain in addition to standard concise […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/04/xLLM-diagram.png?fit=200%2C169&ssl=1)
Hallucination-Free, Self-Tuned, Fast Hierarchical LLMs with Multi-Token Embeddings
- Vincent Granville
- April 12, 2024
The new generation of RAG / LLM architecture is moving away from the original monolithic, generic OpenAI model, towards a collection of decentralized, specialized LLMs jointly organized and governed via multi-agent systems. The benefits are obvious: low latency, smaller tables (one per LLM), faster training and fine-tuning, energy efficiency, and better results, with much lower […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/03/clt-xllm.png?fit=200%2C175&ssl=1)
Extreme LLM: Case Study, Documentation, Best Practices, and Python Sources
- Vincent Granville
- March 2, 2024
Extreme LLM, abbreviated as xLLM, relies on multiple specialized large language models, one per top category, to deliver highly relevant answers to specific questions, covering all of human knowledge or targeted content such as corporate repositories. The user, in addition to entering the classic prompt, is invited to select or guess top categories. Behind the scenes, […]
![](https://cdn.statically.io/img/i0.wp.com/mltechniques.com/wp-content/uploads/2024/02/ann-max.png?fit=200%2C128&ssl=1)
Probabilistic Nearest Neighbor Search: The Swiss Army Knife of GenAI
- Vincent Granville
- February 11, 2024
ANN — Approximate Nearest Neighbors — is at the core of fast vector search, itself central to GenAI, especially GPT and LLMs. My new methodology, abbreviated as PANN, has many other applications: clustering, classification, measuring the similarity between two datasets (images, soundtracks, time series, and so on), tabular data synthetization (improving poor synthetizations), model evaluation, […]