New tutorial out now: use Guardrails with LCEL, the LangChain Expression Language! We updated Guardrails so that Guards natively export to LangChain runnables and can be composed together in a chain. Read more details in the tutorial below 👀 https://lnkd.in/gy3Zdiqk
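For a sense of what the composition pattern looks like, here is a framework-free Python sketch. Plain callables stand in for LangChain runnables and for the Guard stage; `fake_llm`, `length_guard`, and `chain` are illustrative names, not actual Guardrails or LangChain APIs — the real export and chaining calls are in the tutorial.

```python
# Framework-free sketch of a Guard as one stage in a chain.
# Names are illustrative; real code composes actual LCEL Runnables.

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: returns a canned answer."""
    return "The capital of France is Paris."

def length_guard(text: str) -> str:
    """Stand-in for a Guard stage: validate, then pass output through."""
    if len(text) > 200:
        raise ValueError("output too long")
    return text

def chain(*stages):
    """Minimal pipe combinator, mimicking LCEL's `|` composition."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

pipeline = chain(fake_llm, length_guard)
result = pipeline("What is the capital of France?")
print(result)  # The capital of France is Paris.
```

The point of exporting a Guard as a runnable is exactly this: validation becomes just another stage you pipe the model's output through.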
Guardrails AI
Software Development
Menlo Park, California 2,730 followers
Our mission is to empower humanity to harness the unprecedented capabilities of foundation AI models.
About us
Our mission is to empower humanity to harness the unprecedented capabilities of foundation AI models. We are committed to eliminating the uncertainties inherent in AI interactions, providing goal-oriented, contractually bound solutions. We aim to unlock an unparalleled scale of potential, ensuring the reliable, safe, and beneficial application of AI technology to improve human life.
- Website: http://www.guardrailsai.com
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: Menlo Park, California
- Type: Privately Held
- Founded: 2023
Locations
Primary: 801 El Camino Real, Menlo Park, California 94025, US
Employees at Guardrails AI
Updates
If you’ve struggled with the failure rate of OpenAI function calling, you should check out the Guardrails AI 0.5.0 preview. You can now use Guardrails to get structured data out of Hugging Face models. Blog post linked in the thread.
👀 MAJOR new feature released in the 0.5.0 preview — you can now use Guardrails to get structured data from any open source LLM! 🚀 Check out the blog post below to learn how to get JSON from a Hugging Face model. Shoutout to the amazing Joseph Catrambone for adding this to 0.5.0!! https://lnkd.in/gAknWtmU
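As a rough, framework-free sketch of what "structured data from an open-source LLM" means in practice: take the model's raw text output, parse it as JSON, and validate it against an expected schema. Real Guardrails drives this with a Guard plus a schema (see the blog post); the schema and function below are illustrative assumptions, not the library's API.

```python
# Sketch: parse a model's raw text as JSON and validate it against a schema.
# EXPECTED_FIELDS and extract_structured are illustrative, not Guardrails APIs.
import json

EXPECTED_FIELDS = {"name": str, "age": int}  # hypothetical target schema

def extract_structured(raw_llm_output: str) -> dict:
    """Parse JSON emitted by a model; raise on schema mismatch."""
    data = json.loads(raw_llm_output)
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return data

# A well-behaved model response validates cleanly:
record = extract_structured('{"name": "Ada", "age": 36}')
print(record["name"])  # Ada
```

In the real library, a failed validation can additionally trigger reasking or fixing rather than just raising.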
New feature announcement! Guardrails 🤝 OTEL provides detailed observability monitoring for your AI applications. When we were choosing how to instrument observability in Guardrails, it was clear that adopting an open standard gives LLM applications observability without needing to set up a separate observability tool. What OTEL allows you to do:
✅ Monitor Guard and validator latency
✅ Monitor LLM latency
✅ Track the pass/fail validation outcomes of individual validators
More details in the link below: https://lnkd.in/ggYkFX7H
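To make the checklist concrete, here is a tiny framework-free stand-in for span-based instrumentation: each stage records a named span with its duration and attributes, which is the shape of the data Guardrails exports via OTEL. The `span` helper and `no_profanity` validator are illustrative, not the actual Guardrails or OpenTelemetry API.

```python
# Sketch of span-based latency/outcome recording, mimicking OTEL traces.
# Names are illustrative; real code emits genuine OpenTelemetry spans.
import time
from contextlib import contextmanager

spans = []  # collected (name, duration_seconds, attributes) records

@contextmanager
def span(name, **attributes):
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start, attributes))

def no_profanity(text):  # hypothetical rule-based validator
    return "darn" not in text

output = "hello world"
with span("guard.validate"):
    outcome = "pass" if no_profanity(output) else "fail"
    with span("validator.no_profanity", outcome=outcome):
        pass

for name, duration, attrs in spans:
    print(name, attrs)
```

Because spans close innermost-first, the validator span is recorded before the enclosing guard span, and the guard span's duration always covers the validator's.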
Excited to share that Guardrails AI has integrated with LiteLLM (YC W23). Customers can now build reliable AI applications with 100+ LLMs! https://lnkd.in/gnSQjwkS
🎉 We're excited to welcome Wyatt Lansford to his first week at Guardrails AI! Wyatt has a wealth of knowledge in deep learning, reinforcement learning, and MLOps, and we’re proud to have him as our first senior Machine Learning Engineer. He’s a joy to work with and smoked our interviews. Wyatt is going to own a number of exciting new ML projects for Guardrails. A lot of Wyatt's projects will be in the open-source AI reliability domain, so please give him a follow to stay updated as he hammers away at some of the most interesting problems in the LLM toolchain! Fun fact about Wyatt: he's really into hiking with his dog, and has done over 10 hikes with his dog in his backpack! Check out a cool photo of Wyatt and his backpack-pup below! #guardrails #ai #llm #ml #letsgooooooo
We're #hiring a new Senior Machine Learning Engineer in San Francisco Bay Area. Apply today or share this post with your network.
If you're going to be at NVIDIA GTC, let us know. We'd love to meet!
Excited to attend NVIDIA GTC this week! Let me know if you'll be around and want to meet up. #nvidia #llm #guardrails
GTC 2024: #1 AI Conference
nvidia.com
Guardrails AI reposted this
How does Guardrails AI work? We asked the source, Shreya Rajpal. TL;DW
→ Guardrails AI is an open-source, two-stage framework designed to safeguard interactions with LLMs.
→ The first stage involves a verification suite that implements and executes multiple independent checks to ensure the output meets specific correctness criteria.
→ Under the hood, each check can be ML-based, LLM-based, rule-based, heuristic-based, or hooked up to an external system.
→ Users can apply pre-defined guardrails from the open-source library for common use cases, or create custom verification checks to meet specific needs.
→ The system is designed to be configurable, so users can trust LLM outputs while complying with regulations and standards.
h/t to Generative AI World and Shreya Rajpal from Guardrails AI for the insight.
(link to the full interview in the comments) #ML #MLOps #LLM
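The verification-suite idea described above can be sketched in a few lines of framework-free Python: a suite of independent checks runs over an LLM output, and the results are aggregated into an overall pass/fail. The checks here (`is_valid_json`, `under_100_chars`) are illustrative stand-ins, not Guardrails' actual validators.

```python
# Minimal sketch of a verification suite of independent checks.
# Check names and logic are illustrative, not Guardrails' real validators.
import json

def is_valid_json(output: str) -> bool:      # rule-based check
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

def under_100_chars(output: str) -> bool:    # heuristic check
    return len(output) < 100

VERIFICATION_SUITE = [is_valid_json, under_100_chars]

def verify(output: str) -> dict:
    """Run every independent check and report per-check and overall results."""
    results = {check.__name__: check(output) for check in VERIFICATION_SUITE}
    results["all_passed"] = all(results.values())
    return results

print(verify('{"answer": 42}'))
```

Because the checks are independent callables, swapping in an ML-based or LLM-based check (or one backed by an external system) just means adding another function to the suite.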