The document discusses optimizing question answering systems built as RAG (Retrieval-Augmented Generation) stacks. It outlines the shortcomings of naive RAG approaches and proposes solutions such as improved data representations, advanced retrieval techniques, and fine-tuning of large language models. Table-stakes optimizations include tuning chunk sizes, prompt engineering, and customizing LLMs; more advanced techniques involve small-to-big retrieval, multi-document agents, embedding fine-tuning, and LLM fine-tuning.
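The small-to-big idea mentioned above can be sketched in a few lines: index small chunks for precise matching against the query, but hand the LLM the larger parent window each small chunk came from. Everything here (the chunk sizes, the naive word-overlap scorer) is illustrative and not from any particular library:

```python
# Illustrative sketch of small-to-big retrieval: small child chunks are
# matched against the query, but the larger parent chunk is returned.

def chunk(text, size):
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(document, small=12, big=24):
    """Map each small child chunk to the big parent chunk containing it."""
    return [(child, parent)
            for parent in chunk(document, big)
            for child in chunk(parent, small)]

def retrieve(index, query):
    """Score children by naive word overlap; return the matching parent."""
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    child, parent = max(index, key=lambda pair: overlap(pair[0], query))
    return parent

doc = ("the sky is blue. " * 2) + ("neural nets learn fast. " * 2)
index = build_index(doc)
context = retrieve(index, "neural nets")  # a 24-char parent window
```

A production system would score the child chunks with embeddings rather than word overlap; the child-to-parent bookkeeping is the part that carries over.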
This document provides a technical introduction to large language models (LLMs). It explains that LLMs are based on simple probabilities derived from their massive training corpora, containing trillions of examples. The document then discusses several key aspects of how LLMs work, including that they function as a form of "lossy text compression" by encoding patterns and relationships in their training data. It also outlines some of the key elements in the architecture and training of the most advanced LLMs, such as GPT-4, focusing on their huge scale, transformer architecture, and use of reinforcement learning from human feedback.
The document discusses advances in large language models from GPT-1 to the potential capabilities of GPT-4, including its ability to simulate human behavior, demonstrate sparks of artificial general intelligence, and generate virtual identities. It also provides tips on how to effectively prompt ChatGPT through techniques like prompt engineering, giving context and examples, and different response formats.
It is not often, even in the ICT world, that one witnesses a revolution. The rise of the personal computer, the rise of mobile telephony and, of course, the rise of the Internet are some of those revolutions. So what is ChatGPT really? Is ChatGPT also such a revolution? And, like any revolution, does ChatGPT have its winners and losers? And who are they? How do we ensure that ChatGPT contributes a positive impulse to "Smart Humanity"? During keynotes on April 3 and 13, 2023, Piek Vossen explained the impact of large language models like ChatGPT. Prof. Piek Th.J.M. Vossen is Full Professor of Computational Lexicology at the Faculty of Humanities, Department of Language, Literature and Communication (LCC) at VU Amsterdam. What is ChatGPT? What technology and thought processes underlie it? What are its consequences? What choices are being made? In the presentation, Piek elaborates on the basic principles behind large language models and how they are used as a basis for deep learning, in which they are fine-tuned for specific tasks. He also discusses GPT, the specific variant that underlies ChatGPT. The talk covers what ChatGPT can and cannot do, what it is good for, and what the risks are.
This document provides a 50-hour roadmap for building large language model (LLM) applications. It introduces key concepts like text-based and image-based generative AI models, encoder-decoder models, attention mechanisms, and transformers. It then covers image generation, generative AI applications, embeddings, attention mechanisms, transformers, vector databases, semantic search, prompt engineering, fine-tuning foundation models, orchestration frameworks, autonomous agents, bias and fairness, and recommended LLM application projects. The document suggests several hands-on exercises and lists upcoming bootcamp dates and locations for learning to build LLM applications.
The document describes the RAG (Retrieval-Augmented Generation) model for knowledge-intensive NLP tasks. RAG combines a pre-trained language generator (BART) with a dense passage retriever (DPR) to retrieve and incorporate relevant knowledge from Wikipedia. RAG achieves state-of-the-art results on open-domain question answering, abstractive question answering, and fact verification by leveraging both parametric knowledge from the generator and non-parametric knowledge retrieved from Wikipedia. The retrieved knowledge can also be updated without retraining the model.
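The core mechanism described above, combining parametric and non-parametric knowledge, comes down to marginalizing the generator's output over the retrieved passages. The toy numbers below are made up; this is an illustration of the probability computation, not the actual RAG implementation:

```python
# Toy illustration of how RAG-sequence scores an answer y for a query x:
#   p(y | x) = sum over retrieved passages z of  p(z | x) * p(y | x, z)
# where p(z | x) comes from the retriever (DPR) and p(y | x, z) from the
# generator (BART). The probabilities below are invented for illustration.

def rag_sequence_probability(retrieved):
    """retrieved: list of (retrieval_prob, generator_prob_of_answer) pairs."""
    return sum(p_z * p_y for p_z, p_y in retrieved)

# Two retrieved passages: one supports the answer strongly, one weakly.
p = rag_sequence_probability([(0.7, 0.9), (0.3, 0.2)])  # 0.7*0.9 + 0.3*0.2 = 0.69
```

Because the Wikipedia index sits outside the model's weights, swapping in a fresh index updates the non-parametric knowledge without retraining, as the summary notes.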
This document provides information about a bootcamp to build applications using Large Language Models (LLMs). The bootcamp consists of 11 modules covering topics such as introduction to generative AI, text analytics techniques, neural network models for natural language processing, transformer models, embedding retrieval, semantic search, prompt engineering, fine-tuning LLMs, orchestration frameworks, the LangChain application platform, and a final project to build a custom LLM application. The bootcamp will be held in various locations and dates between September 2023 and January 2024.
Prompt engineering is a fundamental concept within the field of artificial intelligence, with particular relevance to natural language processing. It involves the strategic embedding of task descriptions within the input data of an AI system, often in the form of a question or query, as opposed to explicitly providing the task description separately. This approach optimizes the efficiency and effectiveness of AI models by encapsulating the desired outcome within the input context, thereby enabling more streamlined and context-aware responses.
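A minimal sketch of the idea above: rather than passing a task specification through a separate channel, the task description, any examples, and the actual query are embedded together in a single input. The helper name and format are illustrative assumptions, not a standard API:

```python
# Minimal prompt-construction sketch: the task description travels inside
# the input itself, optionally with few-shot examples for context.

def build_prompt(task, query, examples=()):
    """Compose one prompt string carrying task, examples, and the query."""
    parts = [task]
    for question, answer in examples:      # optional few-shot examples
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")        # trailing "A:" cues the answer
    return "\n\n".join(parts)

prompt = build_prompt(
    "Answer with a single word.",
    "What is the capital of France?",
    examples=[("What is the capital of Italy?", "Rome")],
)
```

The model then sees the desired outcome as part of its context, which is exactly the "streamlined, context-aware" behavior the paragraph describes.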
The document provides an overview of transformers, large language models (LLMs), and artificial general intelligence (AGI). It discusses the architecture and applications of transformers in natural language processing. It describes how LLMs have evolved from earlier statistical models and now perform state-of-the-art results on NLP tasks through pre-training and fine-tuning. The document outlines the capabilities of GPT-3, the largest LLM to date, as well as its limitations and ethical concerns. It introduces AGI and the potential for such systems to revolutionize AI, while also noting the technical, ethical and societal challenges to developing AGI.
Langchain Framework is an innovative approach to linguistic data processing, combining the principles of language sciences, blockchain technology, and artificial intelligence. This deck introduces the groundbreaking elements of the framework, detailing how it enhances security, transparency, and decentralization in language data management. It discusses its applications in various fields, including machine learning, translation services, content creation, and more. The deck also highlights its key features, such as immutability, peer-to-peer networks, and linguistic asset ownership, that could revolutionize how we handle linguistic data in the digital age.
Retrieval-Augmented Generation (RAG) combines the concepts of semantic search and LLM-based text generation. When a person makes a query in natural language, the query is compared to the entries in the knowledge base, and the most relevant results are returned to the LLM, which uses this extra information to generate a more accurate and reliable response. RAG can therefore limit hallucination and ground responses in reliable sources. In this talk, we present the concept of RAG and the underlying concept of semantic search, and survey available libraries and vector databases.
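The semantic-search step described above can be sketched as: embed the query and the knowledge-base entries, rank entries by cosine similarity, and pass the top hits to the LLM. The toy "embedding" here is a bag-of-words vector; a real system would use a sentence-embedding model and a vector database instead:

```python
# Sketch of semantic search: rank knowledge-base entries by cosine
# similarity to the query and return the top-k as LLM context.
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; stands in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_k(query, knowledge_base, k=2):
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda doc: cosine(q, embed(doc)),
                    reverse=True)
    return ranked[:k]

kb = [
    "the eiffel tower is in paris",
    "bread is made from flour",
    "paris is the capital of france",
]
hits = top_k("where is paris", kb)
```

The retrieved `hits` would then be concatenated into the LLM prompt as grounding context.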
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI. Link to GPT-3 paper: https://arxiv.org/abs/2005.14165 Link to YouTube recording of Steve's talk: https://youtu.be/0ZVOmBp29E0
What are the "use case patterns" for deploying LLMs into production? Understanding these will allow you to spot "LLM-shaped" problems in your own industry.
GPT discusses various ways that language models can acquire external information as context to improve responses, including:
1) Querying search engines via APIs to incorporate search results into responses
2) Recognizing tasks from prompts and accessing databases or APIs to incorporate relevant information
3) Summarizing, calculating, and verifying information from external sources to provide more accurate answers
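Point (2) above, recognizing a task from the prompt and routing it to an external tool, can be sketched as below. The keyword-based router and restricted calculator are illustrative assumptions; production systems typically let the LLM itself choose the tool:

```python
# Hedged sketch of task recognition and tool routing: arithmetic prompts go
# to a calculator tool, everything else falls through to plain generation.

def calculator(expression):
    """Evaluate an arithmetic expression, restricted to safe characters."""
    allowed = set("0123456789+-*/(). ")
    if set(expression) <= allowed:
        return str(eval(expression))
    return "unsupported expression"

def route(prompt):
    """Pick a tool based on the prompt; return its result as extra context."""
    if any(op in prompt for op in "+-*/"):
        return calculator(prompt)
    return None  # no tool matched: answer from the model alone

result = route("12 * (3 + 4)")  # → "84"
```

The tool output would be injected back into the model's context so the final answer is grounded in the verified computation.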
Mihai is the Principal Architect for Platform Engineering and Technology Solutions at IBM, responsible for Cloud Native and AI Solutions. He is a Red Hat Certified Architect, CKA/CKS, a leader in the IBM Open Innovation community, and an advocate for open source development. Mihai is driving the development of Retrieval-Augmented Generation platforms and solutions for Generative AI at IBM that leverage WatsonX, vector databases, LangChain, HuggingFace and open source AI models. Mihai will share lessons learned building Retrieval-Augmented Generation, or "Chat with Documents", platforms and APIs that scale and deploy on Kubernetes. His talk will cover use cases for Generative AI, limitations of Large Language Models, and the use of RAG, vector databases and fine-tuning to overcome model limitations and build solutions that connect to your data, provide content grounding, limit hallucinations and form the basis of explainable AI. In terms of technology, he will cover LLAMA2, HuggingFace TGIS, SentenceTransformers embedding models using Python, LangChain, and the Weaviate and ChromaDB vector databases. He'll also share tips on writing code using LLMs, including building an agent for Ansible and containers.
Scaling factors for Large Language Model architectures:
• Vector database: consider sharding and high availability
• Fine-tuning: collecting data to be used for fine-tuning
• Governance and model benchmarking: how are you testing your model performance over time, with different prompts, one-shot, and various parameters?
• Chain of reasoning and agents
• Caching embeddings and responses
• Personalization and conversational memory database
• Streaming responses and optimizing performance. A fine-tuned 13B model may perform better than a poor 70B one!
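The caching of embeddings mentioned among the scaling factors can be illustrated with a minimal memoization sketch; `expensive_embed` is a stand-in for a real embedding-model call, not an actual API:

```python
# Illustrative sketch of embedding caching: memoize the embedding call so
# repeated texts never hit the (expensive) model twice.
from functools import lru_cache

CALLS = {"count": 0}

def expensive_embed(text):
    """Stand-in for a real embedding-model call; counts invocations."""
    CALLS["count"] += 1
    return tuple(ord(c) for c in text)  # toy deterministic "vector"

@lru_cache(maxsize=10_000)
def embed_cached(text):
    return expensive_embed(text)

embed_cached("hello")
embed_cached("hello")   # served from cache: the model is called only once
```

At platform scale the same idea is usually backed by a shared store such as Redis rather than per-process memory, so that all replicas benefit from each other's cache hits.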
• Calling third-party functions or APIs for reasoning or other types of data (e.g., LLMs are terrible at reasoning and prediction; consider calling other models)
• Fallback techniques: fall back to a different model, or to default answers
• API scaling techniques, rate limiting, etc.
• Async, streaming and parallelization; multiprocessing; GPU acceleration (including embeddings); generating your API using OpenAPI; etc.
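The fallback bullet above can be sketched as follows: try models in priority order and fall back to a default answer if all of them fail. The model callables here are hypothetical stand-ins for real API clients:

```python
# Sketch of a model-fallback chain: first model that answers wins;
# if every model errors out or returns nothing, use a default answer.

DEFAULT_ANSWER = "Sorry, I can't answer that right now."

def generate_with_fallback(prompt, models, default=DEFAULT_ANSWER):
    """Call each model in priority order; return the first usable answer."""
    for model in models:
        try:
            answer = model(prompt)
            if answer:          # treat empty output as a failure too
                return answer
        except Exception:
            continue            # timeout, rate limit, etc. -> try next model
    return default

def flaky(prompt):              # stand-in for an unavailable primary model
    raise TimeoutError("model unavailable")

def small_model(prompt):        # stand-in for a cheaper backup model
    return f"echo: {prompt}"

answer = generate_with_fallback("hi", [flaky, small_model])  # → "echo: hi"
```

Real deployments combine this with rate limiting and retries with backoff, per the API-scaling bullet.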
In this episode we'll discuss the different flavors of prompt engineering in the LLM/GPT space. Depending on your skill level, you should be able to pick up at any of the following:
Leveling up with GPT
1: Use ChatGPT / GPT-powered apps
2: Become a prompt engineer on ChatGPT/GPT
3: Use the GPT API with no-code automation and app builders
4: Create workflows to automate tasks with no code
5: Use the GPT API with code; make your own APIs
6: Create workflows to automate tasks with code
7: Use the GPT API with your data / a framework
8: Use the GPT API with your data / a framework to make your own APIs
9: Create workflows to automate tasks with your data / a framework
10: Use another LLM API other than GPT (Cohere, HuggingFace)
11: Use open source LLM models on your computer
12: Fine-tune / build your own models
Series: Using AI / ChatGPT at Work - GPT Automation
Are you a small business owner or web developer interested in leveraging the power of GPT (Generative Pretrained Transformer) technology to enhance your business processes? If so, join us for a series of events focused on using GPT in business. Whether you're a small business owner or a web developer, you'll learn how to leverage GPT to improve your workflow and provide better services to your customers.
- Jon McKinney, Director of Research, H2O.ai - Arno Candel, Chief Technology Officer, H2O.ai H2O Open Source GenAI World SF 2023
The presentation "ITDays_2023_GeorgeBara" discusses challenges in adopting AI large language models (LLMs) in enterprise settings. The presentation covers:
1. **Challenges in AI LLM adoption**: Highlights the noise in the current AI landscape and questions the practical use of AI in real businesses.
2. **The DNA of an Enterprise**: Defines enterprise sizes and discusses the new-solutions adoption process, emphasizing effective integration and minimizing disruption.
3. **Enterprise-Grade**: Lists qualities like robustness, reliability, scalability, performance, security, and support that are essential for enterprise-grade solutions.
4. **What are LLMs?**: Describes the pre-ChatGPT era with BERT, a model used for language understanding, and details its enterprise applications.
5. **LLM use cases before ChatGPT**: Focuses on data triage, process automation, knowledge management, and the augmentation of business operations.
6. **EU Digital Decade Report**: Points out that AI adoption in Europe is slow and might not meet the 2030 targets.
7. **Adoption Challenges**: Addresses top challenges such as data security, predictability, performance, control, regulatory compliance, ethics, sustainability, and ROI.
8. **Conclusion**: Reflects on the slow adoption of AI in enterprises, suggesting that a surge might occur once the technology matures and is ready for enterprise use.
The presenter concludes by stating that despite the hype around technologies like ChatGPT, enterprises are cautious and will adopt new technologies at their own pace. He anticipates a gradual, then sudden, adoption pattern once LLMs are proven to be enterprise-ready.
This document discusses Red Hat's efforts to empower customers to self-solve issues through improved search capabilities on their customer portal. It outlines what self-solve is, why it is important for both customers and businesses, and how Red Hat is enhancing search and findability to help customers resolve issues on their own. Key initiatives discussed include improving search relevance, integrating product metadata, handling complex error messages, customizing search for different products, and measuring success through decreased support cases and faster resolutions.
This document summarizes the evolution of AppsFlyer's raw-data product from a simple Spark script to a premium data service over three months. It began as a prototype to address large file sizes and file counts for BI clients. Challenges included scaling, monitoring, security, and schema management. Improvements such as the Parquet format and stateful S3 reduced costs and improved performance. The service was abstracted into microservices with automated tasks, search, and notifications. Monitoring, cost optimization, and job prioritization further refined the product, which ended as a premium, self-serve offering with onboarding and defined schemas.
Halvar Flake and Sebastian Porst present BinCrowd, a tool for analyzing disassembled binaries. It allows uploading analysis results to a central database for later retrieval and comparison to other binaries. This helps identify code reuse across different programs. The presentation covers techniques for function matching and scoring file similarity. It also discusses how BinCrowd can be accessed using IDA Pro and managing access levels for team collaboration.
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.