🚀 Transformative Security for AI Industry Announcement: HiddenLayer Collaborates with Microsoft Azure AI to Enhance Model Security We are thrilled to announce that HiddenLayer and Microsoft have partnered to improve the security of the #AI models available in the Azure AI Studio. With HiddenLayer's safe verification through our Model Scanner, organizations can assess the security of open-source and third-party models within the model catalog. “We see a need for proactive security solutions that allow developers to deploy AI models safely–and feel confident fine-tuning these models with their own proprietary data,” said Sarah Bird, Chief Product Officer of Responsible AI at Microsoft. “Integrating HiddenLayer into our model onboarding process is the validation that our customers need as they drive competitive differentiation with AI.” Key capabilities enabled by HiddenLayer in the Azure AI model catalog include: 🔎 Malware Analysis ✅ Vulnerability Assessment 🚪 Backdoor Detection 🔄 Model Integrity Read our press release 📄 https://hubs.ly/Q02xZZVs0 Learn more about our exciting partnership 👉 https://lnkd.in/gREB6jgF #Security4AI #securityforai #hiddenlayer #aidr #genai #LLM #cybersecurity #protectyouradvantage #azure #microsoft #AzureAI #AzureML #SecurityInnovation #TechInnovation #TechNews #InfoSec
About us
HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprise’s AI from inference, bypass, extraction attacks, and model theft. The company is backed by a group of strategic investors, including M12, Microsoft’s Venture Fund, Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
- Website
-
https://hiddenlayer.com/
External link for HiddenLayer
- Industry
- Computer and Network Security
- Company size
- 51-200 employees
- Headquarters
- Austin, TX
- Type
- Privately Held
- Founded
- 2022
- Specialties
- Security for AI, Cyber Security, Gen AI Security, Adversarial ML Training, AI Detection & Response, Prompt Injection Security, PII Leakage Protection, Model Tampering Protection, Data Poisoning Security, AI Model Scanning, AI Threat Research, and AI Red Teaming
Locations
-
Primary
Austin, TX, US
Employees at HiddenLayer
Updates
-
A Roadmap for AI in the US Senate released by Bipartisan Senate AI Working Group Early in the 118th Congress, AI's transformative potential in sectors like science, medicine, and agriculture was recognized, along with risks such as workforce disruptions, legal ambiguities, and national security challenges. The AI Working Group was formed to tackle these issues, engaging with experts through briefings and forums to develop an AI policy roadmap, including strategies to safeguard AI systems. 🔍 Safeguarding AI Recommendations: - Conduct AI Red Teaming - Develop an analytical framework - Consider a capabilities-based AI risk regime - Create legislation aimed at advancing AI systems - Develop commercial AI auditing The insights gained have shaped a comprehensive AI policy roadmap to harness AI's benefits while mitigating risks, encouraging bipartisan legislative action to keep the U.S. at the forefront of innovation. This roadmap emphasizes balancing progress with responsibility to ensure ethical advancements. You can learn more here 👉 https://lnkd.in/gerXTKGb This post is part of our Between the Layer series. Tune in weekly as we share industry insight and thought leadership topics on #Security4AI. #AI #Innovation #Policy #AIWorkingGroup #AIPolicy #GenAI #LLM
-
-
🚀 See how a Global Financial Services company uses HiddenLayer���s Red Teaming Assessment to secure their AI A financial services company partnered with HiddenLayer to conduct a red team evaluation of their machine learning models used for fraud detection. The goal was to uncover weaknesses that could lead to significant financial losses if exploited. Objectives of AI Red Teaming: Identify Vulnerable Features: Find features in the models that attackers could manipulate. Create Adversarial Examples: Develop examples that change fraudulent classifications to legitimate ones. Improve Model Classification: Enhance the accuracy of fraud detection. With a list of identified exploits, our client can now focus on mitigating these vulnerabilities, leading to a stronger security posture without affecting customer experience. Read more 👉 https://lnkd.in/g-uijm54 Interested in learning more about AI Red Teaming? Join our webinar, A Guide to AI Red Teaming, on July 17th 👇 https://lnkd.in/g8EZWYjX #AI #GenAI #LLM #SecurityforAI #cybersecurity #AISecurity #RiskManagement #AIRedTeaming
-
-
🎉 The HiddenLayer family just hit 100 employees! 🎉 We're excited to have these newest members join us as we work diligently every day to #protectyouradvantage. Please give a warm welcome to Alex Avendano, Connor McCune, Jeff Music, Sidney Stefani
-
We’re thrilled that HiddenLayer has been featured in the latest Gartner report, “Emerging Tech: Secure Generative Communication for LLMs and AI Agents.” In the report, Gartner dissects how the Google search volume of “AI agents” doubled between December 2023 and April 2024, indicating growing enterprise interest. Adopting AI agents introduces new security risks, especially during external collaborations, which can lead to workflow disruptions and potential data loss or leakage. Gartner advises partnering with emerging AI Trust, Risk, and Security Management (AI TRiSM) providers to enhance the security features in your product portfolio. We are proud to be recognized as a sample vendor alongside other leading vendors in the AI TRiSM category. Book a demo to see how our AIsec Platform provides scalable and unobtrusive security for your AI 👉 https://lnkd.in/g5f7-8pG #SecurityforAI #cybersecurity #AISecurity #AI #GenAI #LLM #GenerativeAI #AITRiSM #Gartner #EmergingTech
-
-
HiddenLayer will be heading back to Vegas for Black Hat this year! Want to learn how to balance the competing priorities of AI adoption and security? Meet with our team to discuss the competing priorities of AI adoption and security and demo our AI Detection & Response for Gen AI solution. Request a meeting today 👉https://lnkd.in/gSgqj4AX Want to start the week off with a bang? Join us for our Welcome Happy Hour to discuss the emerging field of security for AI with other cyber professionals. Register today 👉 https://lnkd.in/g8_ErttZ Stay tuned as we continue to roll out our schedule and whereabouts at this year's Black Hat conference. #SecurityforAI #cybersecurity #AISecurity #AI #GenAI #BlackHat24 #aidr #protectyouradvantage #hiddenlayer
-
-
Interested in learning more about AI Red Teaming and how your organization's AI tools can benefit from it? Join us for our “A Guide To AI Red Teaming” Webinar on July 17th. Topics we will cover include: - An Introduction to AI Red Teaming - Techniques and Frameworks for AI Red Teaming - The Regulatory Landscape - Best Practices We’re pleased to be joined by leading experts to discuss this important topic: - Christina Liaghati, PhD, Trustworthy & Secure Al Department MITRE Atlas Lead - Travis Smith, VP - ML Threat Operations, HiddenLayer - John Dwyer, Director of Security Research, BinaryDefense - Chloé Messdaghi, Head of Threat Intelligence, HiddenLayer As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important. Register today to stay ahead of threats and learn how collaborative efforts between cybersecurity and data science experts through red-teaming can build stronger, more secure AI systems 👇 https://lnkd.in/g8EZWYjX #SecurityforAI #cybersecurity #AISecurity #AI #GenAI #LLM #GenerativeAI #RedTeaming #protectyouradvantage #AIwebinar
-
-
READ: New research introducing Knowledge Return Oriented Prompting (KROP), a novel method for bypassing conventional LLM safety measures, and how to minimize its impact. In AI, many LLMs and LLM-powered applications rely on prompt filters and alignment techniques to safeguard their integrity. However, these measures are not foolproof. KROP is a prompt injection technique capable of obfuscating prompt injection attacks, making them virtually undetectable to most existing security measures. Dive into our latest research to explore how KROP works and its implications for Security for AI. Read the full blog here 👇 https://lnkd.in/g8GcVw48 #AI #AIAttacks #AIIntegrity #Security #TechInnovation #KROP #PromptInjection #LLM #AISecurity #SecurityforAI
-
-
HiddenLayer reposted this
Thank you Walt Maciborski for showcasing HiddenLayer and how we accelerate AI adoption globally and across enterprise and public sector customers right here in the ATX.
PROTECTING AI MODELS. AI security is essential as we tap into the promise and the hope of AI. Hacked and stolen AI models could lead to compromised military secrets, banking disruptions and energy grid failures. But HiddenLayer, an Austin tech company, is pioneering AI security and just signed a big deal with Microsoft to protect Azure AI models. Tech This Out..... #ai #ml #security #hiddenlayer #microsoft Capital Factory
-
CBS Tech This Out: HiddenLayer Edition! Thank you to CBS KEYE-TV and Walt Maciborski for covering HiddenLayer’s story. It’s a privilege to be part of Austin’s growing tech industry.
PROTECTING AI MODELS. AI security is essential as we tap into the promise and the hope of AI. Hacked and stolen AI models could lead to compromised military secrets, banking disruptions and energy grid failures. But HiddenLayer, an Austin tech company, is pioneering AI security and just signed a big deal with Microsoft to protect Azure AI models. Tech This Out..... #ai #ml #security #hiddenlayer #microsoft Capital Factory