This month's AI Security Interview Series features Daniel Kang, Professor at UIUC, on the capabilities and potential misuse of LLMs and AI agents. In this excerpt, Daniel explains how AI agents can now autonomously hack websites, highlighting the sorts of vulnerabilities agents can exploit automatically. Check out the full video here: https://lnkd.in/eJAq6NgY Follow Daniel to get the latest updates on his research: Twitter https://lnkd.in/e-nbe_R2 Medium https://lnkd.in/eYG3NgwK #AIsecurity #LLMsecurity #AIAgents #Redteaming
Robust Intelligence
Software Development
San Francisco, California 12,724 followers
Achieve AI security and safety to unblock the enterprise AI mission.
About us
Robust Intelligence enables enterprises to secure their AI transformation with an automated solution to protect against security and safety threats. Our platform includes an engine for detecting and assessing model vulnerabilities, as well as recommending and enforcing the necessary guardrails to mitigate threats to AI applications in production. This enables companies to meet AI safety and security standards with a single integration, automatically working in the background to protect applications from development to production. Robust Intelligence is backed by Sequoia Capital and Tiger Global, and trusted by leading companies including ADP, JPMorgan Chase, Expedia, Deloitte, Cisco, and the U.S. Department of Defense to unblock the enterprise AI mission.
- Website: https://www.robustintelligence.com
- Industry: Software Development
- Company size: 51-200 employees
- Headquarters: San Francisco, California
- Type: Privately Held
- Founded: 2019
- Specialties: Artificial Intelligence, Cybersecurity, AI Security, AI Safety, AI Governance, AI Risk Management, LLM Security, LLM guardrails, AI Firewall, and AI Validation
Products
Robust Intelligence
Data Science & Machine Learning Platforms
The Robust Intelligence platform automates testing for security and safety vulnerabilities of AI models in development and their protection in production. The platform includes an engine for detecting and assessing model vulnerabilities as well as the necessary guardrails to deploy safely in production. This consists of two complementary components, which can be used independently but are best when paired together:

- AI Validation detects and assesses model vulnerabilities to various attack techniques and safety concerns through automated testing, and provides the recommended guardrails required to deploy safely in production.
- AI Protection secures applications against attacks and undesired responses in real time with guardrails that are tailored to the specific vulnerabilities identified during model assessment.

It’s simple to get started with our API-based service. Just point at a model endpoint to initiate the assessment and generate specific guardrails custom-fit to your model.
Locations
Primary
555 19th St
San Francisco, California 94107, US
Updates
-
📣 We’re thrilled to announce our partnership with IBM watsonx, making it easy to protect all #GenAI models and applications on watsonx.ai from safety and security threats in real time! The integration of our AI Firewall provides a model-agnostic guardrail that validates inputs for threats (such as off-topic queries and prompt injection attacks) and outputs for undesired responses (such as toxic responses and factual inconsistency). Our detections span hundreds of security and safety categories, powered by our proprietary technology and pioneering research. In the video below, you can see how easy and effective it is to route all calls to an LLM on watsonx.ai through our AI Firewall in a few lines of code. Check out the Robust Intelligence listing on the IBM watsonx partner page and reach out if you’re ready to get started: https://lnkd.in/g77Zk82G #AIrisk #AIsafety #AIsecurity #LLMsecurity #generativeAI #LLMs #guardrails
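The routing pattern described above can be sketched in plain Python. This is a toy illustration only, assuming hypothetical helper names (`firewall_check`, `call_llm`, `guarded_call`); it is not the Robust Intelligence or watsonx.ai API, and a real firewall uses far richer detections than substring matching.

```python
# Toy sketch of a model-agnostic guardrail wrapper: validate inputs
# before they reach the model, and outputs before they reach the user.
# All names here are illustrative placeholders, not real API calls.

BLOCKED_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]

def firewall_check(text: str) -> bool:
    """Return True if the text passes a (toy) prompt-injection screen."""
    lowered = text.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def call_llm(prompt: str) -> str:
    """Stand-in for a real model endpoint (e.g. a watsonx.ai deployment)."""
    return f"model response to: {prompt}"

def guarded_call(prompt: str) -> str:
    # Screen the input before forwarding it to the model...
    if not firewall_check(prompt):
        return "[blocked: input failed guardrail check]"
    response = call_llm(prompt)
    # ...and screen the output before returning it to the caller.
    if not firewall_check(response):
        return "[blocked: output failed guardrail check]"
    return response
```

Because the wrapper sits between the application and the model endpoint, the same guardrail logic works regardless of which underlying model is deployed.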
-
⚠️ As part of our ongoing AI threat research efforts here at Robust Intelligence, we’re sharing roundups of the latest vulnerabilities and exploits identified each month. Our June edition features:
1️⃣ Special Characters Attack (SCA), which uses a combination of symbols, punctuation marks, and other characters to extract model training data
2️⃣ Chain of Attack (CoA), a multi-turn jailbreak that continuously iterates on its subtle, malicious prompting using a secondary LLM
3️⃣ Sleepy Pickle, a vulnerability that executes custom functions to compromise models after deserialization
In these regular updates, you’ll find highlights and important developments from our continuous research efforts, along with concise and informative analysis from our team. Check out the latest blog by AI Security Researcher Adam Swanda. https://lnkd.in/gN8j5D5e #AIsecurity #LLMsecurity #cybersecurity #threatintel #LLMjailbreak
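The Sleepy Pickle class of vulnerability works because Python's pickle format can embed callables that run during deserialization. The minimal sketch below (not the actual exploit, and deliberately harmless — the payload just calls `str.upper`) shows why loading an untrusted pickled model file is equivalent to running untrusted code:

```python
import pickle

class MaliciousStub:
    """Toy payload: on unpickling, pickle invokes the callable returned
    by __reduce__. Here it's just str.upper, but an attacker could
    substitute os.system or code that silently patches model behavior."""
    def __reduce__(self):
        return (str.upper, ("code ran during load",))

# Serializing the object embeds the callable in the byte stream...
blob = pickle.dumps(MaliciousStub())

# ...and deserializing executes it, with no opt-in from the loader.
result = pickle.loads(blob)
```

This is why safer serialization formats (or strict provenance checks on model artifacts) are recommended for distributing ML models.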
-
👋 We're excited to announce that Dr. Amin Karbasi has joined Robust Intelligence as our Chief Science Officer, further extending our deep roster of AI security experts. Amin joins Robust Intelligence from Yale University, where he is an Associate Professor of Electrical & Computer Engineering & Computer Science (currently on leave). He has also spent the past five years as a Staff Researcher at Google AI, where he made significant contributions to scalable optimization for foundation models, data curation, and AI privacy. Over his impressive career, Amin has published more than 150 papers at top-tier ML conferences. We had the opportunity to work with Amin last year on the co-development of a new, highly effective algorithmic AI jailbreak technique called Tree of Attacks with Pruning (TAP). As our Chief Science Officer, Amin will spearhead our research initiatives and define our research agenda in the ever-evolving field of machine learning and AI safety/security. Welcome to the team, Amin! We look forward to all we’ll accomplish together on our mission to enable every organization on the planet to adopt AI securely. #AIrisk #AIsafety #AIsecurity #LLMsecurity #machinelearning
-
The new edition of our AI Security Insider is out!
Check out this month’s edition of the #AISecurityInsider newsletter where we cover the latest news, research, threat intelligence, events & more from the world of AI security. June brings two new foundational resources which will help teams better understand and contend with the complex challenges of AI security: our AI Security and Safety Taxonomy and our AI Security Reference Architectures. Plus, check out our monthly AI threat roundup and a conversation with the CTO of IBM Security. #AIsecurity #AIrisk #LLMsecurity #AI
AI Security Insider — June 2024
Robust Intelligence on LinkedIn
-
Reflecting on last week's Team8 CISO Summit and the prioritization of AI application security! #AIsecurity #LLMsecurity #AIsafety #genAI #generativeAI #cybersecurity
Thank you, Team8, for having me present again this year at the CISO Summit last week! It was great to see that securing the AI transformation has become a highly relevant topic for all security leaders contending with the safe and secure development of GenAI applications. AI security was a significant area of focus throughout the event, including insightful talks by Shafi Goldwasser and Jason Clinton. There was already a great deal of awareness about AI risk this year, and securing AI systems is now a top priority for almost all CISOs and security leaders. I feel privileged to be a part of this amazing community of visionary security leaders and industry experts. Thank you, Team8, for another great CISO Summit!
-
🏆 We're honored to receive the "Best AI Startup" award for the second consecutive year from AI Breakthrough Awards! Chosen from over 5,000 global nominations, this award highlights our commitment to innovation in AI application security, which helps companies develop and deploy AI applications safely and securely. Thank you to AI Breakthrough for recognizing our contributions to AI security, and congratulations to the Robust Intelligence team! #AIBreakthrough #AIsafety #AIsecurity #LLMsecurity #GenAI #LLMs
-
🤩 We're thrilled to announce our partnership with Pinecone, making it easier for developers to adopt a shift-left approach to testing and build safer, more secure #RAG applications! Our integration with #Canopy, Pinecone's open-source framework which simplifies the development of RAG applications, enables companies to automatically validate vector database components to prevent indirect prompt injections, data poisoning, and other AI risks. In the video below, you can see how easy it is to keep harmful data out of your vector database and stop it from compromising your RAG applications. Check out our blog for more information on the partnership, including how to get started: https://lnkd.in/gyEum4j4 #AIsecurity #LLMsecurity #AIsafety #RAGapplications #promptinjection #vectordatabase
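The shift-left idea described above — screening documents before they ever enter the vector database — can be sketched in a few lines. This is a hedged illustration with hypothetical names (`is_safe_document`, `validated_upsert`) and a dict standing in for a real vector store; it is not the Pinecone or Canopy API, and real detections go well beyond keyword matching.

```python
# Illustrative sketch: validate documents at ingestion time so a
# poisoned document can't later steer a RAG application via
# indirect prompt injection. Names and logic are toy placeholders.

SUSPICIOUS_MARKERS = ["ignore all previous instructions", "system prompt:"]

def is_safe_document(text: str) -> bool:
    """Toy screen for embedded instructions hidden inside a document."""
    lowered = text.lower()
    return not any(marker in lowered for marker in SUSPICIOUS_MARKERS)

def validated_upsert(store: dict, doc_id: str, text: str) -> bool:
    """Insert only documents that pass validation.

    `store` is a plain dict standing in for a vector database upsert;
    returns True if the document was accepted."""
    if not is_safe_document(text):
        return False
    store[doc_id] = text
    return True
```

Rejecting bad data at the ingestion boundary means every downstream retrieval sees only validated content, rather than relying on each query path to filter it.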
-
📣 We're pleased to introduce the first taxonomy covering both AI security and AI safety threats. We hope this resource is helpful to the AI and cybersecurity communities as they develop and deploy GenAI applications. It's important to consider both security and safety threats. AI security is concerned with protecting sensitive data and computing resources from unauthorized access or attack, whereas AI safety is concerned with preventing harms that arise as unintended consequences of an AI application's design. Both present business risks that can result in financial, reputational, and legal ramifications. Check out the full taxonomy to understand the generative AI threat landscape with definitions, mitigations, and standards classifications: https://lnkd.in/gR7bBE_u We’re continuously updating this resource. Please reach out to us with any questions or comments. #AIsecurity #AIsafety #LLMsecurity #AIrisk #generativeAI #cybersecurity
The Comprehensive AI Security and Safety Taxonomy — Robust Intelligence
robustintelligence.com