LightRAG - The Lightning Library for LLM Applications
LightRAG helps developers both build and optimize Retriever-Agent-Generator pipelines. It is light, modular, and robust, with a 100% readable codebase.
Key Highlights
✅ LightRAG shares a similar design pattern with PyTorch for deep learning modeling.
✅ Only two fundamental but powerful base classes: Component for the pipeline and DataClass for data interaction with LLMs.
✅ A highly readable codebase with less than two levels of class inheritance.
✅ Maximizes the library's tooling and prompting capabilities to minimize reliance on LLM API features such as tools and JSON format.
✅ The result is a library with bare-minimum abstraction, giving developers maximum customizability.
Github - https://lnkd.in/gEkD6ukX
Follow Antar AI for more
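A minimal sketch of the PyTorch-style pattern the post describes: a Component base class that composes into pipelines, plus a DataClass-style container for structured LLM output. All names here are illustrative, not LightRAG's actual API.

```python
from dataclasses import dataclass

class Component:
    """Base class: subclasses implement call(), like nn.Module.forward."""
    def __call__(self, *args, **kwargs):
        return self.call(*args, **kwargs)

    def call(self, *args, **kwargs):
        raise NotImplementedError

@dataclass
class QAOutput:
    """DataClass-style schema for structured model output."""
    answer: str

class Retriever(Component):
    def __init__(self, docs):
        self.docs = docs

    def call(self, query: str):
        # Toy retrieval: return docs sharing a word with the query.
        terms = set(query.lower().split())
        return [d for d in self.docs if terms & set(d.lower().split())]

class Pipeline(Component):
    def __init__(self, retriever):
        self.retriever = retriever

    def call(self, query: str) -> QAOutput:
        context = self.retriever(query)
        # A real Generator component would call an LLM with this context.
        return QAOutput(answer=context[0] if context else "not found")

pipe = Pipeline(Retriever(["Paris is the capital of France"]))
print(pipe("capital of France").answer)  # -> Paris is the capital of France
```

The point of the pattern is that every pipeline stage shares one tiny base class, so the whole pipeline stays inspectable and swappable.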
Antar AI
Technology, Information and Internet
Bangalore, Karnataka 2,489 followers
Research and Consulting Company
About us
Antar AI is an independent research and consulting company that specializes in developing sustainable AI/ML applications, helping clients seamlessly integrate automation into their operations. Our skilled team of Artificial Intelligence and Machine Learning experts creates customized customer experiences, streamlines internal processes, and introduces solutions that transform how businesses operate. Harness the power of AI to boost business growth and enhance productivity. For inquiries, contact us at antarailabs@gmail.com
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- Bangalore, Karnataka
- Type
- Nonprofit
- Founded
- 2023
- Specialties
- AI, Machine Learning, Deep Learning, Data Science, Generative AI, Predictive Analysis, POC development, and Product Development
Locations
Primary
Bangalore, Karnataka, IN
Updates
-
Antar AI reposted this
✨Open-sourcing comprehensive LLM Glossary✨ Explore, Learn, and Add terms about #LLMs and #GenAI. Let's make AI easy for everyone. 🚨Adding new terms regularly Don't forget to give star⭐ https://lnkd.in/g4tgBedq #OpenSource #AI #MachineLearning #Glossary #AICommunity
GitHub - freetoolsarebest/llm-glossary: Basic to Advanced LLM Glossary terms
github.com
-
𝐭𝐱𝐭𝐚𝐢 𝐁𝐌25 𝐯𝐬 𝐋𝐚𝐧𝐠𝐂𝐡𝐚𝐢𝐧 𝐁𝐌25
LangChain and LlamaIndex both use Rank-BM25; txtai has a custom BM25 implementation. Benchmarked on 1.6M arXiv abstracts, txtai was 2x slower to index but 13x faster to search, and used 3.8 GB of RAM vs 25 GB for LangChain.
Code - https://lnkd.in/dxWDeey
Stay updated with Antar AI
NeuML Augmented A.I. Roc4Tech YOLOvX Yolo Group Ritesh Kanjee Muhammad Rizwan Munawar
#bm25 #langchain #informationretrieval #nlproc #deeplearning #genai
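For context on what these libraries are implementing, here is the BM25 scoring formula itself in plain Python (an Okapi variant with a common +1 IDF smoothing); this is a reference sketch, not the code of txtai or Rank-BM25.

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    scores = []
    for doc in tokenized:
        score = 0.0
        for term in query.lower().split():
            n = sum(term in d for d in tokenized)        # docs containing term
            idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
            f = doc.count(term)                           # term frequency in doc
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = ["bm25 ranking function", "neural retrieval model", "bm25 bm25 everywhere"]
scores = bm25_scores("bm25", docs)
print(scores.index(max(scores)))  # -> 2 (highest bm25 term frequency)
```

k1 controls term-frequency saturation and b controls length normalization; implementations differ mainly in how they index and batch this computation, which is where the speed and memory gaps come from.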
-
Such brilliant work by Kyutai Labs on Moshi. Moshi, the open-source GPT-4o competitor, is a real-time multimodal model that can listen and speak with full emotion. They will open-source everything: code, model, and paper. Stay tuned by following Antar AI #gpt4o #openai #opensource #multimodal #vectordb #rag #llm #ai #agi
-
Antar AI reposted this
Our VectorDB-recipes has a new look! We made it easy for all levels to find resources, demos, samples, and colabs, and to follow along with our blogs. Go give it a try! https://lnkd.in/gCj-FsWb LanceDB Ayush Chaurasia thanks for the team work! 😊
-
Real-time video generation is here! 🚀
Introducing Pyramid Attention Broadcast (PAB), the first approach to achieve real-time DiT-based (Diffusion Transformer) video generation. We're talking up to 21.6 FPS with a 10.6x acceleration! (check the video below)
- Without any quality sacrifice
- Works across popular models like Open-Sora, Open-Sora-Plan, and Latte
- Training-free, so it can empower any future DiT-based video gen models
How did they do it? There are two key observations about attention in video diffusion transformers:
1️⃣ Attention differences across time steps follow a U-shape
2️⃣ In the middle stable segment, different attention types vary
So they built PAB to cut down on unnecessary computation. It's simple but effective:
- Broadcast one step's attention outputs to several subsequent steps
- Set varied broadcast ranges for different attention types
The result? Up to 35% speedup with minimal quality loss. But there's more: they also improved sequence parallelism by broadcasting temporal attention, which cut communication overhead by over 50%. The numbers speak for themselves:
• 1.26x to 1.32x speedup on a single GPU
• Up to 10.6x speedup on multiple GPUs
A detailed blog post: https://lnkd.in/g4UWBxwb
Code for OpenDiT: https://t.co/3QTJbmOM8X
Stay tuned with Antar AI Generative AI YOLOvX Runway
#genai #videogeneration #attention #transformer #diffusion
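A toy sketch of the broadcast idea (an assumed simplification of PAB, not its actual code): compute attention once, then reuse the cached output for the next few diffusion steps instead of recomputing it every step.

```python
import math

def attention(x):
    """Plain softmax self-attention over a list of vectors."""
    d = len(x[0])
    out = []
    for q in x:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in x]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([sum(w[i] * x[i][j] for i in range(len(x))) / z
                    for j in range(d)])
    return out

def denoise(x, num_steps=10, broadcast_range=3):
    """Run toy denoising steps, recomputing attention only every
    `broadcast_range` steps and broadcasting the cached output otherwise."""
    cached, computed = None, 0
    for step in range(num_steps):
        if step % broadcast_range == 0:   # recompute only on these steps
            cached = attention(x)
            computed += 1
        # Reuse (broadcast) the cached attention output in between.
        x = [[xi + 0.1 * ci for xi, ci in zip(xr, cr)]
             for xr, cr in zip(x, cached)]
    return x, computed

_, computed = denoise([[1.0, 0.0], [0.0, 1.0]])
print(computed)  # attention computed 4 times instead of 10
```

PAB's extra refinement is that different attention types (spatial, temporal, cross) get different broadcast ranges, since they vary at different rates across steps.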
-
❄️Matching Anything By Segmenting Anything
MASA leverages the Segment Anything Model (SAM) for object segmentation and enables universal tracking with zero-shot capabilities in complex scenes. The MASA method overcomes the limitations of domain-specific datasets.
🔗 Research Paper: https://lnkd.in/ecXdP6Vr
🔗 Project Page: https://lnkd.in/edh6Qcrw
🔗 Github: https://lnkd.in/eQPQzHkb
Key Highlights:
✅ Domain Agnostic: Achieves cross-domain generalization without relying on labeled datasets.
✅ SAM Integration: Utilizes SAM outputs as dense object region proposals for learning instance-level correspondence.
✅ Universal Adapter: Designed to work with foundational segmentation or detection models for tracking any detected objects.
✅ Zero-Shot Tracking: Demonstrates strong performance on challenging MOT benchmarks using only unlabeled static images.
✅ State-of-the-Art Performance: Surpasses current methods trained with fully annotated video sequences.
Stay updated with Antar AI Augmented A.I. Roc4Tech YOLOvX Yolo Group Ritesh Kanjee Muhammad Rizwan Munawar
#MachineLearning #ComputerVision #ObjectTracking #MOT #AIResearch #DeepLearning #DataScience #AI #Segmentation
-
🌀 RAGFlow: Open-Source RAG Engine
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from complex formatted data.
𝐊𝐞𝐲 𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬
🍱 Template-based chunking
🌱 Grounded citations with reduced hallucinations
🍔 Compatibility with heterogeneous data sources
🛀 Automated and effortless RAG workflow
RAGFlow Github - https://lnkd.in/gak6YJDe
#rag #ragflow #nlproc #llms #generativeai #deeplearning #transformers
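To illustrate the idea behind template-based chunking (sketched here as an assumed simplification, not RAGFlow's implementation): split a document along a structural template, such as headings, rather than a fixed character window, so chunks follow the document's own layout.

```python
def chunk_by_heading(text):
    """Split markdown-like text into chunks, one per '## ' section."""
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith("## ") and current:
            # A new section starts: flush the previous chunk.
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

doc = "## Intro\nWhat RAG is.\n## Setup\nInstall steps.\n## Usage\nQuery the index."
print(len(chunk_by_heading(doc)))  # -> 3
```

Structure-aware chunks keep each retrieved passage self-contained, which is what makes the grounded citations in a RAG answer point at coherent sections instead of arbitrary text windows.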
-
📷 Camera-based Attendance System Yash Baravaliya, Ritika Lal, Fenil Patel, and Vivek Patel made a vision-based attendance system using basic technologies: dlib, OpenCV, feature extraction and mapping, and similarity matching. It's minimal but looks good. Stay tuned for interesting content and follow Antar AI Muhammad Rizwan Munawar Roc4Tech Augmented A.I. Ritesh Kanjee AlphaSignal #opencv #ai #computervision #dlib #attendancesystem #vision
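A minimal sketch of the similarity-matching step such a system relies on (names and threshold here are illustrative, not from the project): compare a face embedding against enrolled embeddings by cosine similarity and accept the best match above a threshold.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(embedding, enrolled, threshold=0.8):
    """Return the enrolled name most similar to `embedding`, or None."""
    best_name, best_sim = None, threshold
    for name, ref in enrolled.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

enrolled = {"person_a": [1.0, 0.0, 0.0], "person_b": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], enrolled))  # -> person_a
```

In practice the embeddings would come from a face descriptor model (e.g. dlib's), and the threshold would be tuned to trade off false accepts against missed matches.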
-
🌟 Gemma-2 is Here with 9B and 27B Models!
What sets Gemma-2 apart?
1️⃣ Peak Performance: The 27B model outshines competitors, even those twice its size, while the 9B version leads its class.
2️⃣ Cost Efficiency: Operates on single devices like Google Cloud TPU and NVIDIA GPUs, slashing costs and enhancing accessibility.
3️⃣ Speed Across Devices: Optimized for everything from gaming laptops to cloud setups.
🔙 This brings back memories of building Navarasa, a fine-tuned model for Indic languages showcased at Google I/O.
🏆 In the latest Indic human evaluations from Microsoft Research India (PARIKSHA), Navarasa consistently performs among the top 5 Indic LLMs.
🙌 Thanks to the Google team for releasing an updated video version. Check it out here: https://lnkd.in/gq8-Mh_y
Stay in the loop with the updates by following Antar AI.
Developing for Indic languages | Gemma and Navarasa (Extended Edition)
https://www.youtube.com/