🚀 Transformative Security for AI Industry Announcement: HiddenLayer Collaborates with Microsoft Azure AI to Enhance Model Security

We are thrilled to announce that HiddenLayer and Microsoft have partnered to improve the security of the #AI models available in Azure AI Studio. With security verification from HiddenLayer's Model Scanner, organizations can assess the security of open-source and third-party models within the model catalog.

“We see a need for proactive security solutions that allow developers to deploy AI models safely, and feel confident fine-tuning these models with their own proprietary data,” said Sarah Bird, Chief Product Officer of Responsible AI at Microsoft. “Integrating HiddenLayer into our model onboarding process is the validation that our customers need as they drive competitive differentiation with AI.”

Key capabilities enabled by HiddenLayer in the Azure AI model catalog include:
🔎 Malware Analysis
✅ Vulnerability Assessment
🚪 Backdoor Detection
🔄 Model Integrity

Read our press release 📄 https://hubs.ly/Q02xZZVs0
Learn more about our exciting partnership 👉 https://lnkd.in/gREB6jgF

#Security4AI #securityforai #hiddenlayer #aidr #genai #LLM #cybersecurity #protectyouradvantage #azure #microsoft #AzureAI #AzureML #SecurityInnovation #TechInnovation #TechNews #InfoSec
About us
HiddenLayer is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises’ AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12 (Microsoft’s Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
- Website: https://hiddenlayer.com/
- Industry: Computer and Network Security
- Company size: 51-200 employees
- Headquarters: Austin, TX
- Type: Privately Held
- Founded: 2022
- Specialties: Security for AI, Cyber Security, Gen AI Security, Adversarial ML Training, AI Detection & Response, Prompt Injection Security, PII Leakage Protection, Model Tampering Protection, Data Poisoning Security, AI Model Scanning, AI Threat Research, and AI Red Teaming
Locations
- Primary: Austin, TX, US
Updates
🇺🇸 HiddenLayer was back at the White House this week alongside partners and leaders in the Security for AI sector, where we engaged in pivotal discussions on the future of security for AI. Our agenda was filled with essential topics, including:
- Data & Model Integrity
- AI Red Teaming
- Upcoming Reports
- AI Legislation & Innovation
- Education on AI Security

We are continually inspired by policymakers' dedication to securing and promoting responsible AI adoption and look forward to continuing our contribution to this vital conversation.

Thank you to OpenPolicy for bringing together this group: Amit Elazari, Dr. J.S.D, Chloé Messdaghi, Ellyn Kirtley, M.A., Emily Elaine Coyle, Josh Harguess, Ph.D., Tim Freestone

#AI #GenAI #AIpolicy #WhiteHouse #Gov #cybersecurity #AIsecurity #securityforAI
HiddenLayer reposted this
Supply chain attacks suck - and that's a fact. But what do they look like in the age of AI? The talk Marta Janus and I gave at BSidesSF this year just went live. We covered the AI supply chain and how to threat model a new subset of attacks (including some that might look quite familiar!). Check it out below 👇 https://lnkd.in/dM47qGeX
BSidesSF 2024 - Insane in the Supply Chain: Threat modeling for... (Eoin Wickens, Marta Janus)
HiddenLayer reposted this
We are just a week out from our “A Guide To AI Red Teaming” webinar on July 17th. Topics we will cover include:
- An Introduction to AI Red Teaming
- Techniques and Frameworks for AI Red Teaming
- The Regulatory Landscape
- Best Practices

We’re pleased to be joined by leading experts to discuss this important topic:
- Christina Liaghati, PhD - Trustworthy & Secure AI Department, MITRE ATLAS
- Travis Smith - VP, ML Threat Operations, HiddenLayer
- John Dwyer - Director of Security Research, Binary Defense
- Chloé Messdaghi - Head of Threat Intelligence, HiddenLayer

As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important.

Secure your spot today 👇 https://lnkd.in/g8EZWYjX

#SecurityforAI #cybersecurity #AISecurity #AI #GenAI #LLM #GenerativeAI #RedTeaming #protectyouradvantage #AIredteaming
Join our webinar on July 17th to hear from cybersecurity and data science experts about building stronger, more secure AI
📊 CB Insights has highlighted the growing machine learning security (MLSec) market, and we have exciting news: HiddenLayer has been recognized as the market leader thanks to our flexibility, execution, and non-invasive technology. We’re proud to be at the forefront of MLSec, providing innovative solutions that ensure comprehensive security without compromising performance.

Read more here 👇 https://lnkd.in/gqfqTSJp

#AI #MachineLearning #CyberSecurity #MLSec #HiddenLayer #Innovation #TechLeadership #GenAI #LLM
How well do you know your AI environment? In the first part of a new series, Securing Your AI: A Step-by-Step Guide for CISOs, we highlight the importance of securing AI amidst organizational pressure to accelerate AI adoption and cover the steps your organization should take to understand its AI ecosystem, including:
- Step 1: Establishing a Security Foundation
- Step 2: Discovery and Asset Management
- Step 3: Risk Assessment and Threat Modeling

Through these initial steps, leaders can set the stage for a more secure, ethical, and compliant AI environment, fostering trust and enabling the safe integration of AI into critical business operations.

Read the full blog here 👉 https://hubs.ly/Q02FNDK80

Stay tuned for next week's installment of Securing Your AI: A Step-by-Step Guide for CISOs.

#SecurityforAI #cybersecurity #AISecurity #AI #GenAI #LLM #AIsystems #AIguide
Headed to Black Hat this year? Join us for our Welcome Happy Hour to discuss the emerging field of security for AI with other cyber professionals. Register today 👉 https://lnkd.in/gsRFjeud

Want to learn how to balance the competing priorities of AI adoption and security? Meet with our team and see a demo of our AI Detection & Response for Gen AI solution. Request a meeting today 👉 https://lnkd.in/g8mXDdSD

Stay tuned as we continue to share our schedule and whereabouts at this year’s Black Hat conference.

#SecurityforAI #cybersecurity #AISecurity #AI #GenAI #BlackHat2024 #aidr #protectyouradvantage #hiddenlayer
A Roadmap for AI in the US Senate, released by the Bipartisan Senate AI Working Group

Early in the 118th Congress, AI's transformative potential in sectors like science, medicine, and agriculture was recognized, along with risks such as workforce disruptions, legal ambiguities, and national security challenges. The AI Working Group was formed to tackle these issues, engaging with experts through briefings and forums to develop an AI policy roadmap, including strategies to safeguard AI systems.

🔍 Safeguarding AI Recommendations:
- Conduct AI Red Teaming
- Develop an analytical framework
- Consider a capabilities-based AI risk regime
- Create legislation aimed at advancing AI systems
- Develop commercial AI auditing

The insights gained have shaped a comprehensive AI policy roadmap to harness AI's benefits while mitigating risks, encouraging bipartisan legislative action to keep the U.S. at the forefront of innovation. This roadmap emphasizes balancing progress with responsibility to ensure ethical advancements.

You can learn more here 👉 https://lnkd.in/gerXTKGb

This post is part of our Between the Layer series. Tune in weekly as we share industry insight and thought leadership topics on #Security4AI.

#AI #Innovation #Policy #AIWorkingGroup #AIPolicy #GenAI #LLM
🚀 See how a Global Financial Services company uses HiddenLayer’s Red Teaming Assessment to secure their AI

A financial services company partnered with HiddenLayer to conduct a red team evaluation of the machine learning models they use for fraud detection. The goal was to uncover weaknesses that could lead to significant financial losses if exploited.

Objectives of AI Red Teaming:
- Identify Vulnerable Features: Find features in the models that attackers could manipulate.
- Create Adversarial Examples: Develop examples that change fraudulent classifications to legitimate ones.
- Improve Model Classification: Enhance the accuracy of fraud detection.

With a list of identified exploits, our client can now focus on mitigating these vulnerabilities, leading to a stronger security posture without affecting customer experience.

Read more 👉 https://lnkd.in/g-uijm54

Interested in learning more about AI Red Teaming? Join our webinar, A Guide to AI Red Teaming, on July 17th 👇 https://lnkd.in/g8EZWYjX

#AI #GenAI #LLM #SecurityforAI #cybersecurity #AISecurity #RiskManagement #AIRedTeaming
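For readers curious what the "adversarial examples" objective looks like in practice, here is a minimal toy sketch. It is not HiddenLayer's tooling and not the client's model; it assumes an invented two-feature fraud classifier (logistic regression on synthetic data) and shows the general idea: nudge a flagged transaction's features in the direction that most reduces the fraud score until the classifier flips its label.

```python
# Toy evasion sketch (illustrative only; synthetic data, invented model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions with two features (e.g. amount, velocity).
# Class 0 = legitimate, class 1 = fraud.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X, y)

# Start from a point the model confidently flags as fraud.
x = np.array([3.5, 3.5])
assert model.predict(x.reshape(1, -1))[0] == 1

# Greedy evasion: for a linear model, the weight vector points toward
# higher fraud probability, so step against it until the label flips.
w = model.coef_[0]
step = 0.1 * w / np.linalg.norm(w)
for _ in range(300):
    if model.predict(x.reshape(1, -1))[0] == 0:
        break
    x = x - step

print("evasive input:", x, "predicted class:", model.predict(x.reshape(1, -1))[0])
```

A red-team exercise like the one described above works in the opposite direction: by finding how small such perturbations can be, the assessors identify which features need hardening (e.g. bounds checking, ensembling, adversarial training) before a real attacker finds them.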