Microsoft, in partnership with US and Poland Government, has taken steps to disrupt and mitigate a widespread campaign by the Russian nation-state threat actor Midnight Blizzard targeting TeamCity servers using the publicly available exploit for CVE-2023-42793. Microsoft's October analysis on North Korean actor exploit of TeamCity CVE-2023-42793 linked below. Additional related details from US Government and Poland Government also below. Following exploitation, Midnight Blizzard uses scheduled tasks to keep a variant of VaporRage malware persistent. The VaporRage variant, which is similar to malware deployed by the threat actor in recent phishing campaigns, abuses Microsoft OneDrive and Dropbox for C2. Post-compromise activity includes credential theft using Mimikatz, Active Directory enumeration using DSinternals, deployment of tunneling tool rsockstun, and turning off antivirus and EDR capabilities. In addition to disrupting the abuse of Microsoft OneDrive for command and control, Microsoft Defender Antivirus and Microsoft Defender for Endpoint protect customers against this and other Midnight Blizzard malware. Although many of the compromises appear to be opportunistic, affecting unpatched Internet-facing TeamCity servers, Microsoft continues to work with the international cybersecurity community to mitigate the potential risk to software supply chain ecosystems. We are especially grateful to our partners in the international cybersecurity community for their collaboration on this investigation. [edited/updated links] https://lnkd.in/eejCCtcH https://lnkd.in/eB5ptTsY https://lnkd.in/eG3s2n42 Midnight Blizzard is the latest nation-state threat actor observed exploiting the TeamCity CVE-2023-42793 vulnerability. In October, North Korean threat actors Diamond Sleet and Onyx Sleet exploited the same vulnerability in separate attacks: https://lnkd.in/gUv4SU24
Jeremy Dallman’s Post
More Relevant Posts
-
I'm a day late, but we just put out a second amazing blog on AI jailbreaks. Not only is this blog post very detailed and informative, but it's also a really fun read with great visuals! Congrats to the team for breaking down Skeleton Key so effectively. Here's a few teasers to make you want to read the whole post... Skeleton Key jailbreak technique works by using a multi-turn (or multiple step) strategy to cause a model to ignore its guardrails. Once guardrails are ignored, a model will not be able to determine malicious or unsanctioned requests from any other. It relies on the attacker already having legitimate access to the AI model. At the attack layer, Skeleton Key works by asking a model to augment, rather than change, its behavior guidelines so that it responds to any request for information or content, providing a warning (rather than refusing) if its output might be considered offensive, harmful, or illegal if followed. When the Skeleton Key jailbreak is successful, a model acknowledges that it has updated its guidelines and will subsequently comply with instructions to produce any content, no matter how much it violates its original responsible AI guidelines. Mitigations: Input filtering, System messages, Output filtering, Abuse monitoring.
Microsoft recently discovered a new type of generative AI jailbreak method, which we call Skeleton Key for its ability to potentially subvert responsible AI (RAI) guardrails built into the model, which could enable the model to violate its operators’ polices, make decisions unduly influenced by a user, or run malicious instructions. The Skeleton Key method works by using a multi-step strategy to cause a model to ignore its guardrails by asking it to augment, rather than change, its behavior guidelines. This enables a model to then respond to any request for information or content, including producing ordinarily forbidden behaviors and content. To protect against Skeleton Key attacks, Microsoft has implemented several approaches to our AI system design, provided tools for customers developing their own applications on Azure, and provided mitigation guidance for defenders to discovered and protect against such attacks. Learn about Skeleton Key, what Microsoft is doing to defend systems against this threat, and more in the latest Microsoft Threat Intelligence blog from the Chief Technology Officer of Microsoft Azure Mark Russinovich: https://msft.it/6043Y7Xrd Learn more about Mark Russinovich and his exploration into AI and AI jailbreaking techniques like Crescendo and Skeleton Key, as discussed on that latest Microsoft Threat Intelligence podcast episode hosted by Sherrod DeGrippo: https://msft.it/6044Y7Xre
To view or add a comment, sign in
-
I saw more of myself in this article than I like, so was happy to see the author take the time to give practical ways to combat cynicism when I feel it surfacing. Now to apply it more! Well worth a read and pause for introspection. Excerpt: “Resisting cynicism’s pull starts with being open-minded. Examine the data of your life like a scientist would, he says, instead of jumping to conclusions, positive or negative. Think everyone at your job is out for themselves? Ask 10 colleagues for a favor, and see if anyone agrees to help. Convinced every conversation with a co-worker will be painful? Spend a day rating your interactions with them on a scale from 1 to 10.” https://lnkd.in/g3yi4k3p
Quit Being a Cynic at Work. It’s Holding You Back.
wsj.com
To view or add a comment, sign in
-
There has been a lot of chatter in the security and privacy ecosystems about the Recall preview feature for Windows Copilot+ PCs. The Windows team put out a blog today that clearly spells out how this feature will heavily support user privacy by default along with other security and privacy controls. Read the blog for yourself, but here are a couple of my key takeaways - most importantly, "If you don’t proactively choose to turn it on, it will be off by default."... - Even before making Recall available to customers, we have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards. - We are updating the set-up experience of Copilot+ PCs to give people a clearer choice to opt-in to saving snapshots using Recall. If you don’t proactively choose to turn it on, it will be off by default. - These images are encrypted, stored and analyzed locally, using on-device AI capabilities to understand their context. - Windows Hello enrollment is required to enable Recall. - we are adding additional layers of data protection including “just in time” decryption protected by Windows Hello Enhanced Sign-in Security (ESS) so Recall snapshots will only be decrypted and accessible when the user authenticates. In addition, we encrypted the search index database.
Update on the Recall preview feature for Copilot+ PCs
blogs.windows.com
To view or add a comment, sign in
-
Proud to see the team showing up with strong analysis on Grandoreiro and Luna Tempest in the latest podcast. Take a listen!
Recent notable cybercrime developments such as the global expansion of the Grandoreiro banking trojan and emergence of Western-based threat actors like Luna Tempest highlight the adaptability of threat actors in response to global disruption efforts, as well as focus on seeking high payouts in attacks. Microsoft observed an uptick in activity related to the Grandoreiro banking trojan in March 2024, not long after a related disruption operation by law enforcement in January 2024. Known to mostly target Latin America or Spanish-speaking countries, Grandoreiro was recently observed expanding its scope to target users in the United States, the United Kingdom, South Africa, and Australia. The threat actor Microsoft tracks as Luna Tempest is a group based in the US and the UK that focuses on extorting startups and small companies in the insurance, FinTech, and biotech sectors. Luna Tempest is observed to resort to aggressive tactics, targeting company executives and threatening their family members to increase the chance of getting paid. Learn more about these developments and the insights of the Microsoft Threat Intelligence experts as they track these activities in the full podcast episode, hosted by Sherrod DeGrippo: https://msft.it/6049Yo3wz
Threat Landscape Update on Grandoreiro and Luna Tempest
To view or add a comment, sign in
-
In threat intelligence, a deep knowledge of tactics and techniques (TTPs) is critical to help us implement the most effective mitigations and protections. In an AI-enabled world, those TTPs are evolving in some fascinating ways. Our latest blog takes a deep dive into AI jailbreak techniques. This is critical "the more you know" reading for all of us in Security. Make the time to read it. Bonus: It's buried toward the end, but don't miss the details on the release of PyRIT (Python Risk Identification Toolkit for Generative AI) on Github! What is an AI Jailbreak + a bit of a teaser from the blog... An AI jailbreak is a technique that can cause the failure of guardrails (mitigations). The resulting harm comes from whatever guardrail was circumvented: for example, causing the system to violate its operators’ policies, make decisions unduly influenced by one user, or execute malicious instructions. This technique may be associated with additional attack techniques such as prompt injection, evasion, and model manipulation. You can consider the attributes of an AI language model to be similar to an eager but inexperienced employee trying to help your other employees with their productivity: Over-confident, gullible, wants to impress, and lacks real-world application. Resulting in AI models and system having the following characteristics: Imaginative but sometimes unreliable, Suggestible and literal-minded, without appropriate guidance, Persuadable and potentially exploitable, Knowledgeable yet impractical for some scenarios. When an AI jailbreak occurs, the severity of the impact is determined by the guardrail that it circumvented. Your response to the issue will depend on the specific situation and if the jailbreak can lead to unauthorized access to content or trigger automated actions. To mitigate the potential of AI jailbreaks, Microsoft takes defense in depth approach when protecting our AI systems, from models hosted on Azure AI to each Copilot solution we offer. The blog has a long list of high-quality referenceable resources! To empower security professionals and machine learning engineers to proactively find risks in their own generative AI systems, Microsoft has released an open automation framework, Python Risk Identification Toolkit for generative AI (PyRIT). The blog has more about the release of PyRIT for generative AI Red teaming and access the PyRIT toolkit on GitHub.
As part of a responsible AI approach, AI models are protected by layers of defense mechanisms to prevent the production of harmful content or being used to carry out instructions that go against the intended purpose of the AI integrated application. Threat actors attempt to bypass these defenses with the intent to achieve unauthorized actions, resulting in an AI jailbreak. What are AI jailbreaks, and why is generative AI susceptible to them? Read our latest blog to learn about AI jailbreaks and how you can mitigate associated risks and harms: https://msft.it/6047YWrpd
AI jailbreaks: What they are and how they can be mitigated | Microsoft Security Blog
https://www.microsoft.com/en-us/security/blog
To view or add a comment, sign in