151

We just posted an update on the security incident from back in May 2019, with technical details of what happened, how it happened, and the remediations we applied to prevent an incident like it from happening again. Here are a couple of excerpts from the post - first from the introduction:

On May 12th, 2019, at around 00:00 UTC, we were alerted to an unexpected privilege escalation for a new user account by multiple members of the community. A user that nobody recognised had gained moderator and developer level access across all of the sites in the Stack Exchange Network. Our immediate response was to revoke privileges and to suspend this account and then set in motion a process to identify and audit the actions that led to the event.

After initial discovery, we found that the escalation of privilege was just the tip of the iceberg and the attack had actually resulted in the exfiltration of our source code and the inadvertent exposure of the PII (email, real name, IP addresses) of 184 users of the Stack Exchange Network (all of whom were notified). Thankfully, none of the databases—neither public (read: Stack Exchange content) nor private (Teams, Talent, or Enterprise)—were exfiltrated. Additionally, there has been no evidence of any direct access to our internal network infrastructure, and at no time did the attacker ever have access to data in Teams, Talent, or Enterprise products.

Meme: One does not simply break into Stack Overflow without constantly looking up how to do so on Stack Overflow

And from the final paragraph:

This incident reminded us about some fundamental security practices that everyone should follow:

  1. Log all your inbound traffic. We keep logs on all in-bound connections. This enabled all of our investigations. You can’t investigate what you don’t log.
  2. Use 2FA. That remaining system that still uses legacy authentication can be your biggest vulnerability.
  3. Guard secrets better. TeamCity has a way to protect secrets but we found we weren't using it consistently. Educate engineers that "secrets aren't just passwords". Protect SSH keys and database connection strings too. When in doubt, protect it. If you must store secrets in a Git repo, protect them with git-crypt or Blackbox.
  4. Validate customer requests. The more unusual a request from a customer, the more important it is to verify whether or not the request is legitimate.
  5. Take security reports seriously. We're grateful that our community reported suspicious activity so quickly. Thank you!

There's plenty more in the blog post - please feel free to ask questions or leave comments relating to the post below and we'll do our best to answer them. We are not able to comment on any other details related to the attack beyond what is included in the blog post, due to ongoing investigations.
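As a small illustration of point 3 in the excerpt above ("secrets aren't just passwords"), here is a minimal, hypothetical Python sketch of keeping connection strings and key paths out of the repository and pulling them from the environment (or a proper secret store) at runtime. The variable and secret names are invented for illustration and are not taken from the post:

    import os

    class MissingSecretError(RuntimeError):
        """Raised when a required secret has not been provided to the process."""

    def require_secret(name: str) -> str:
        # Secrets live outside source control (environment, vault, protected CI
        # parameter), so nothing sensitive is committed alongside build scripts.
        value = os.environ.get(name)
        if not value:
            raise MissingSecretError(f"secret {name!r} is not set")
        return value

    if __name__ == "__main__":
        try:
            # "Secrets aren't just passwords": connection strings and key material count too.
            db_connection_string = require_secret("APP_DB_CONNECTION_STRING")
            ssh_key_path = require_secret("APP_DEPLOY_SSH_KEY_PATH")
            print("secrets loaded; never log their values")
        except MissingSecretError as err:
            print(f"refusing to start: {err}")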

15
  • 34
    Dean has taken on most of the follow-up with this, from answering questions from any entities to writing this up and making sure we got the update we promised out (as soon as we could, I promise). Thanks for all you've done on this. Commented Jan 25, 2021 at 15:39
  • 15
    Great work @Dean. :) Commented Jan 25, 2021 at 16:03
  • 4
    A wild @ChanceHeath appears!
    – Dean Ward
    Commented Jan 25, 2021 at 16:04
  • 8
    I wonder why nobody was notified of the privilege escalation. Google notifies me THREE TIMES if I just use my Google account to tell a website my name, mail address and profile picture, but nobody got notified that there was a new "employee" with administrative privileges? All the other security issues you had were somewhat understandable. Commented Jan 25, 2021 at 21:34
  • 2
    @FabianRöling I agree, no login notification for a product (TeamCity) that can apparently give you admin access to the environment and no notification for privilege escalation from normal account to staff account (of any level) are both things that alarmed me, so I'm glad they've been addressed.
    – TylerH
    Commented Jan 25, 2021 at 21:55
  • @NickCraver is this "the incident" as referred to on a recent webcast?
    – Braiam
    Commented Jan 25, 2021 at 21:59
  • 4
    @FabianRöling as noted in the remediations, this is some of the alerting we’ve put in place. I can count on a couple of fingers the number of places I’ve worked that have that level of alerting in place and some of my workplaces have been financial institutions.
    – Dean Ward
    Commented Jan 25, 2021 at 22:02
  • 2
    @TylerH it was mentioned in the blog post that TeamCity was inadvertently misconfigured at some point and it granted administrative privileges upon login. Build systems are highly privileged - it should come as no surprise that getting access to one allows escalation elsewhere. Obviously that’s no longer the case, not sure what else there is to say there
    – Dean Ward
    Commented Jan 25, 2021 at 22:04
  • @Braiam which webcast are you referring to?
    – Dean Ward
    Commented Jan 25, 2021 at 22:06
  • @DeanWard Yes, I read the part about misconfiguration (along with the rest of the blog post). My comment to Fabian was that a login notification/alert for it, since it wasn't supposed to be in use, is something I would expect to be in place. Especially considering it's powerful enough that someone could do what this attacker did, regardless of whether the settings are configured properly or not.
    – TylerH
    Commented Jan 25, 2021 at 22:10
  • 2
    @TylerH something that isn’t in use probably shouldn’t even be there! Effectively this was a dead set of credentials that should have been removed from our settings and the underlying account deactivated or removed in AD. I’m not sure what alerting would gain you there - root cause is that the credentials still existed and were able to be used in the first place. Pair that with the TC misconfiguration and a big mess ensues :(
    – Dean Ward
    Commented Jan 25, 2021 at 22:20
  • 3
    Congratulations on your first gold badge here! Commented Jan 26, 2021 at 11:20
  • 1
    @DeanWard youtube.com/watch?v=K6NECAZhJG4
    – Braiam
    Commented Jan 26, 2021 at 15:05
  • @Braiam yep, same thing
    – Dean Ward
    Commented Jan 26, 2021 at 16:43
  • 1
    @DeanWard alerting would tell you about the attack internally on May 6th instead of the Community noticing the escalation of the account on May 12th :-) That's a whole week of activity where they wouldn't have gained access to source code or found a GitHub SSH key, for example. I'm not saying you guys were lazy (in fact I think the team has clearly done a great job and does a great job in general), just that ideally the situation would've unfolded differently, through my rose-colored glasses :-)
    – TylerH
    Commented Jan 26, 2021 at 21:01

12 Answers

31

Can you make any comment about the attacker's intentions?

Does it appear they were after a certain goal / certain (user) data?

Or was it perhaps more of a "curious teenager" poking with sticks seeing how far they could get?


P.S. thanks for the openness regarding this matter, it's really appreciated!

6
  • 19
    We don't know for sure; their traffic patterns are indicative of curiosity rather than going all out at a specific goal such as harvesting user data. Looking at the timeline you can see that things kind of moved slowly while they probed infrastructure, and even once they had code there weren't a huge number of subsequent attempts at further infiltration. That's as much as I can say I'm afraid
    – Dean Ward
    Commented Jan 25, 2021 at 19:15
  • 3
    Yeah that was also the vibe I was getting. If they were after a certain goal they would've made their move as soon as they had sufficient access.
    – Luuklag
    Commented Jan 25, 2021 at 19:18
  • 1
    If hackers go by a rule book, this one wasn't following the White Hat section ...
    – rene
    Commented Jan 25, 2021 at 19:20
  • 6
    @rene I agree, hence my curious teenager remark. Probably not thinking in terms of "good" and "bad", but with no real ill intentions. Just curious what they could achieve, without any thought of what to do next.
    – Luuklag
    Commented Jan 25, 2021 at 19:22
  • 6
    They did some sheepish work; a few reviews and flags, seemingly purposeless.
    – Rob
    Commented Jan 25, 2021 at 23:00
  • @Rob Probably to analyze the HTTP response for reviews.
    – Anonymous
    Commented Jul 14, 2021 at 11:59
30

This line:

This act of looking up things (visiting questions) across the Stack Exchange Network becomes a frequent occurrence and allows us to anticipate and understand the attacker’s methodology over the coming days. (emphasis mine)

makes it sound like you could pinpoint, in real time as the attack was happening, what the attacker would do next based on what they visited on Stack Overflow, rather than reconstructing what they did forensically by looking at what they viewed after the attack. Which one did you mean?

1
  • 37
    A bit of both: some was retrospective, but you'll note that the timeline overlapped with our initial investigations - the attack was continuing as we investigated. Those visits gave us some hints as to what and where we should be digging next.
    – Dean Ward
    Commented Jan 25, 2021 at 15:47
23

Several questions related mainly to the attacker:

  1. What happened to the attacker?
  2. Did you suspend their account?
  3. Did SE contact the attacker at any point?
  4. Why don't you expose the attacker's identity?
  5. Has anyone else tried to use this same attack method later?
6
  • 21
    Their account was immediately suspended and we've no indications that the same attack was attempted again. We don't (and won't) expose the attacker's identity - there's nothing to gain from doing so and I can't comment further on anything related to such things because of ongoing investigations.
    – Dean Ward
    Commented Jan 25, 2021 at 16:02
  • 10
    @DeanWard ongoing investigations so much time after the incident?? Is this in order to find possible support the attacker gained "from the inside"? Or do I watch too many movies? :-) Commented Jan 25, 2021 at 16:11
  • 8
    At this moment in time and given all secrecy around it we can speculate about connections to one or more secret services of foreign regimes. This is going to be big!
    – rene
    Commented Jan 25, 2021 at 16:12
  • 27
    Haha, y'all watching way too many movies! Like I said, I can't comment any further on the attacker or any ongoing investigation. Sorry!
    – Dean Ward
    Commented Jan 25, 2021 at 16:13
  • 14
    Foreign Regimes? Maybe it was... the HYPHEN SITE!
    – Aibobot
    Commented Jan 25, 2021 at 18:00
  • 6
    I happened to be there at the time that the security incident happened and was reported in the Tavern. This was the attacker's account (archived onebox with diamond). They were suspended network-wide for one year, which expired in May 2020. Commented Jan 25, 2021 at 20:43
21

Was there a detectable sleep cycle at the other end of events?

Edit to clarify:

After becoming aware of the attacker, and since you followed some of their actions as they unfolded, did you notice anything resembling a biological cycle, both day-to-day and retrospectively? E.g.: eating (1-2 hour breaks), sleeping (8-hour inactivity pattern), "power naps" (90 minutes), etc.?

5
  • 7
    Are you asking whether the attacker went idle after they discovered we were on to them? If so: after escalation was discovered on SO the attacker continued pulling source code, but once we started to take build and source servers off the Internet they quickly backed off and their traffic was limited to a (very) few site visits across the network
    – Dean Ward
    Commented Jan 25, 2021 at 17:09
  • 26
    Oh, I gotcha - yes, there were definitely indications of the attacker actually sleeping etc., which correlated with the time zone associated with their traffic.
    – Dean Ward
    Commented Jan 25, 2021 at 17:17
  • 4
    Okay, not Jon Skeet gone bad then :p
    – hat
    Commented Jan 26, 2021 at 4:43
  • 4
    @hat if that's the case the attacker's fresh account would've gained 10k reps during the intrusion from answering questions.
    – Martheen
    Commented Jan 27, 2021 at 3:48
  • 1
    I was secretly hoping for a "plot twist: it was actually a trained AI model poking at the security holes gone bad" Commented Jan 27, 2021 at 5:15
19

This is not really part of the incident, but a more general concern about security measures around employee accounts. There were a lot of steps in this incident, but the final one was escalating privileges of an SE account. I can imagine a lot more straightforward ways to attempt this than gaining admin access to the CI server via the dev instance to execute SQL in production, and I'm interested in what mitigations and security practices SE has implemented to defend against simpler attempts to gain access to an employee account.

You can't put the main SE sites behind the firewall obviously, so they will always be exposed. And the SE internal login method does not provide any 2FA methods, which I find somewhat concerning.

  • are employee accounts 2FA protected via other means (or other login providers)?
  • are there any measures to ensure that no private email addresses or login providers are attached to employee accounts that could be less secure and still be used to receive recovery mails to gain access to the account?
  • is there monitoring of login attempts from new sources for employee accounts?
  • are there additional protections for dangerous employee tools in case someone gains access to a running session of an employee account (e.g. require password and/or 2FA token again when accessing security-critical tools)?

Something like spear phishing is probably still one of the more likely ways someone could try to gain access to an employee account.
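On the last bullet above, one common pattern is "sudo mode": a security-critical route refuses to respond unless the session has re-authenticated (password and/or 2FA) within the last few minutes. The Flask sketch below only illustrates that general idea under assumed route names; it is not a description of how Stack Exchange's tooling actually works:

    import time
    from functools import wraps

    from flask import Flask, redirect, request, session, url_for

    app = Flask(__name__)
    app.secret_key = "replace-me"          # illustrative only
    REAUTH_WINDOW_SECONDS = 5 * 60         # require a fresh check every 5 minutes

    def require_recent_reauth(view):
        """Allow the view only if this session re-authenticated recently."""
        @wraps(view)
        def wrapper(*args, **kwargs):
            last = session.get("last_reauth_at", 0)
            if time.time() - last > REAUTH_WINDOW_SECONDS:
                # Send the user back through password + 2FA before the sensitive tool.
                return redirect(url_for("reauth", next=request.path))
            return view(*args, **kwargs)
        return wrapper

    @app.route("/reauth")
    def reauth():
        # A real implementation verifies the password and a 2FA token, and must
        # validate the "next" target to avoid open redirects; this stub just
        # stamps the session to keep the sketch short.
        session["last_reauth_at"] = time.time()
        return redirect(request.args.get("next", "/"))

    @app.route("/admin/pii-lookup")
    @require_recent_reauth
    def pii_lookup():
        return "security-critical tooling"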

8
  • 6
    Great questions! This is one of the areas that we need to focus on. When an employee joins Stack they usually join our internal Team which requires an SSO credential - this is backed by AD and mandates MFA. However, some employees (particularly longtime users of the network) simply have this credential added to their existing account. That means that many of us can log in to our accounts using personal credentials that may not have 2FA enabled. That is definitely a concern! (cont’d...)
    – Dean Ward
    Commented Jan 26, 2021 at 11:00
  • 5
    However, we’ve moved some privileged tooling behind the internal Team so it is now inaccessible without prior authorisation via SSO + MFA. That eliminates some of the concerns but we’re planning to move some of that behind the firewall on to a different domain (to eliminate XSS concerns). For tooling that remains on the publicly routable domains we do have functionality to require reauthorization... but it didn’t get switched on yet. We really need to do this at some point - it’s just been punted a few times. (cont’d...)
    – Dean Ward
    Commented Jan 26, 2021 at 11:03
  • 5
    Regarding login attempts to employee accounts from unknown sources - we don’t do this yet and I agree it would be valuable. All of these things have been brought up (in some cases recently); we just need to dedicate some time to actually building some things here. It’s worth noting that employee-level access doesn’t really grant much above a regular user - moderators, CMs and developers are the high-value accounts, so I would expect any protections we end up putting in place for employees would be available to moderators as well
    – Dean Ward
    Commented Jan 26, 2021 at 11:08
  • 3
    @DeanWard Thank you for your detailed responses. You've probably seen this, but most of my questions are based on this meta post about similar concerns regarding Teams. I used employee accounts as a somewhat inaccurate shorthand, but moderators are certainly part of this as well, and I would like to see something like robust 2FA for mods as well (as they have PII access). Commented Jan 26, 2021 at 11:11
  • 4
    I was about to follow up with a comment on Teams! Even with dev access on SO I cannot do anything useful with Teams. Tooling for that is behind SSO+MFA (which is also tied to a specific browser session). Dev routes cannot touch anything in a specific Team and cannot grant access or escalate privilege in a Team. We’ve been very careful to ensure that segmentation exists. Inside the firewall the Teams networks are separated, accessible via bastion hosts and heavily restricted in terms of who can access them.
    – Dean Ward
    Commented Jan 26, 2021 at 11:15
  • 3
    Note that the tooling for Teams is what’s moving behind the firewall in the next month or two to eliminate the possibility of an XSS attack vector
    – Dean Ward
    Commented Jan 26, 2021 at 11:16
  • 1
    @DeanWard My Teams concerns are almost entirely about mixing private and professional accounts, which is something the current design seems to encourage. This does create some small security and privacy issues, such as making it impossible to mandate 2FA and allowing taking over the private account from the professional one and vice versa. It also adds complexity in a place where you really don't want to have it, I'm quite sure I only understand a tiny fraction of how logins actually work on SE. Commented Jan 26, 2021 at 11:21
  • 4
    I hear ya, we have forthcoming plans to address some of those concerns with our Business tier, so watch this space. In many ways our current implementation is not dissimilar to a hosted GitHub behind SSO - i.e. the “professional” credential is required for authorisation into the Team but also grants access to the public account - so it’s not without precedent, but we understand that this makes some people uncomfortable. That said, it’s always possible to create a separate account, it can just be a headache to maintain that separation
    – Dean Ward
    Commented Jan 26, 2021 at 11:30
18

Around the time this security incident happened - a few days later, in fact - some users began noticing that Twitter oneboxing in chat wasn't working anymore. An employee subsequently confirmed in February of the following year that it had indeed been disabled intentionally due to having to "close some gaps" as a result of this security incident.

Can we get a full explanation as to why Twitter oneboxing in chat had to be disabled as a result of this security incident? The blog post published at the time stated that "other potential vectors" had been closed then, and the February 2020 staff message I linked above stated that the Twitter oneboxing feature "made use of one of the gaps we closed". What was that thing, and what security risk did it create?

Finally, is there any way that this functionality can be implemented again, in a secure manner? In August 2020, a few months after the staff message above, the bug report filed at the time was marked by another employee. Would a feature request to change the design back (in a secure manner) be considered, or is it impossible to do so without opening up an attack vector?

6
  • 15
    Yup, 100% related. Once we realised that source code had been leaked (Chat was one of the repos involved) we rotated every secret exposed. Unfortunately Chat doesn’t have anywhere safe and secure to store secrets, and integrating something more secure is a fair amount of work. Even when we do have appropriate services provisioned in the data centre, it’d be work that I couldn’t comfortably say would be undertaken with any priority. So the answer is: maybe it’ll come back some day, but it depends on a bunch of things that make it hard to give any semblance of a decent timeline
    – Dean Ward
    Commented Jan 25, 2021 at 20:45
  • 9
    @DeanWard So you mean, the Twitter API key was in the source code which was leaked, and it was deactivated, but there was no place where the new key could be stored securely so it was decided to just disable it? Commented Jan 25, 2021 at 20:48
  • 15
    Exactly, yes. We decided the functionality wasn’t critical enough to justify the effort involved
    – Dean Ward
    Commented Jan 25, 2021 at 20:49
  • So go post a FR and get a status-deferred Sonic ;)
    – Luuklag
    Commented Jan 25, 2021 at 20:53
  • @DeanWard My answer to the bug report (the question in the first link) also mentions four other site oneboxing features that were disabled. Were those four all disabled at the same time as the security incident, or were some of those disabled at different times? Commented Jan 25, 2021 at 21:57
  • 3
    @SonictheCuriouserHedgehog Yearp, same root cause for those and so the same remedy that could take a while to bring them back. Sorry to be the bearer of bad news
    – Dean Ward
    Commented Jan 25, 2021 at 22:23
12

Why was the magic link in dev viewable to CMs (presumably just in dev) a real magic link?

2
  • 10
    It literally rendered the email template with an actual model to show a CM what the user would see, at the time it was written I imagine nobody thought it’d be used for nefarious purposes. After all CMs and devs have very high level privileges, what would an account takeover achieve? I’m unsure if this was dev only - Shog details some of the history over on Hackernews and that seems to indicate it was used for support purposes by CMs, so I suspect, yes, this was on prod at some point.
    – Dean Ward
    Commented Jan 25, 2021 at 23:28
  • 9
    Somewhat understandable. Lesson learned to code as if everyone is the thief - even those who you have appointed a great deal of privilege.
    – Makoto
    Commented Jan 26, 2021 at 2:22
11

I would flag that "password" parameter types in TeamCity aren't considered all that secure:

The password value is stored in the configuration files under TeamCity Data Directory. Depending on the server Encryption Settings, the value is either scrambled or encrypted with a custom key.

The build log value is hidden with a simple search-and-replace algorithm, so if you have a trivial password of "123", all occurrences of "123" will be replaced, potentially exposing the password. Setting the parameter to the password type does not guarantee that the raw value cannot be retrieved. Any project administrator can retrieve it, and any developer who can change the build script could potentially write malicious code to get the password.
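As a toy illustration of why that kind of masking is weak (this is just the general idea, not TeamCity's actual code): a short secret also matches unrelated substrings, and the masked positions themselves tell an observer where and how often the value appears, making a trivial password easy to confirm by guessing:

    def mask(log_text: str, secret: str, replacement: str = "******") -> str:
        # Naive masking: replace every literal occurrence of the secret,
        # including places where the same characters appear by coincidence.
        return log_text.replace(secret, replacement)

    log_line = "Build 1234 finished in 123 ms; connecting with password 123"
    print(mask(log_line, "123"))
    # -> Build ******4 finished in ****** ms; connecting with password ******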

2
  • 6
    100% agreed... but they're more secure than the plain text that was there before them - you need to be an admin to unobfuscate them, or able to modify build scripts to output them. Either of those is "elevated" access in the first place. This is also true for any system where a secret is consumed by a script (like GH Actions) - it can be output anywhere the script author has access to. Ideally secrets are stored in a place that is write-only to any entity with the exception of the application(s) that consume the secret, where it should be read-only.
    – Dean Ward
    Commented Jan 25, 2021 at 17:44
  • 7
    We're not there yet in our data centre environments, but we're actively using KeyVault in our Azure environments and fully intend to use an equivalent in the DC longer term. I blogged about how we're managing some aspects of our configuration the other day
    – Dean Ward
    Commented Jan 25, 2021 at 17:45
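For the Key Vault approach Dean mentions above, the read side might look something like the sketch below, using the azure-identity and azure-keyvault-secrets Python packages. The vault URL and secret name are placeholders, and this is an assumption about the general shape rather than Stack's actual setup:

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # The application authenticates as itself (e.g. a managed identity) and only
    # reads the secrets it consumes; people and CI pipelines write them elsewhere.
    credential = DefaultAzureCredential()
    client = SecretClient(
        vault_url="https://example-vault.vault.azure.net",  # placeholder vault
        credential=credential,
    )

    db_connection_string = client.get_secret("db-connection-string").value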
10

This is really an awesome incident report! One of the best ones I've read.

Thank you Stack for making it public and Dean for a great write-up!

I am just curious to know a few things:

  • What is the size of the incident response team?
  • Were there any specific protocols followed during the investigation?
  • What key factors were involved in engaging the external security vendor? What points were considered in choosing that particular vendor?
  • What lessons were learned from the external security vendor? Was their audit process different (effective/ineffective) from the one used already by the team?

The article gives a good glimpse of the entire architecture of Stack and the development processes. A more detailed read, or a link if there is already an article about it, would be great.

4
  • 3
    Initial incident response was just a couple of engineers that responded to the report from the community and performed initial remediations, but that expanded to include a much larger group of people once the working week began. I'm not sure what you mean by "specific protocols" - initial remediation was to remove the escalated privileges and investigate how they occurred - remediating anything along the way that could cause further problems (cont'd)
    – Dean Ward
    Commented Jan 27, 2021 at 12:24
  • 3
    Once a secondary team started work the focus was more on forensics, although that also resulted in more immediate remediations as we discovered the attacker still had access to pull source code. (cont'd)
    – Dean Ward
    Commented Jan 27, 2021 at 12:25
  • 4
    We engaged an external security vendor purely to validate and audit our findings and to provide guidance on other things we should be checking. I wasn't involved in the vendor selection process so I don't know what the criteria were there.
    – Dean Ward
    Commented Jan 27, 2021 at 12:26
  • 6
    The external vendor did give us some different directions to go in but, frankly, we had already covered the majority of the ground needed to gather relevant forensics and remediate the on-going threat so, although helpful, they didn't greatly influence the overall direction we were already headed in.
    – Dean Ward
    Commented Jan 27, 2021 at 12:29
8

Under "Advice to Others":

Log all your inbound traffic. We keep logs on all in-bound connections. This enabled all of our investigations. You can’t investigate what you don’t log.

How can a network as busy as Stack Exchange's log all of its inbound traffic? Are these logs web server entries, or IP flows, or full TCP sessions?

I could record most entries and connection attempts on my tiny network, but I have no idea how such a large network does it.

8
  • 3
    It just requires lots of free disk space (for the text file and/or database used for the logs). These days it's pretty cheap; even 1000 TB (which should suffice for all the logs of all SE for all the years) won't cost more than a few thousand dollars. Small price to pay for the benefits of having it. :) Commented Feb 4, 2021 at 7:02
  • @ShadowWizardisVaccinating Maintaining an entire infrastructure dedicated to logging, which also needs to be searchable and cross-correlated, seems like an entire new company on its own. Storage is hard, and even though it's "cheap", it's still quite expensive and slow. I'm curious as to how it's done effectively, and whether these logs are metadata-only or include the body of requests (as long as this is information that can be publicly provided). Metadata logs are hard but doable. Full request logs (like full tcpdumps after stripping TLS) seem infeasible given my own experience.
    – xiaomiklos
    Commented Feb 4, 2021 at 16:03
    A full 10 Gbps link carries a total of 108 TB in a single day. I expect Stack Exchange's traffic to be higher than that. If they retain 30 days' worth of full logs, that would amass over 3.24 PB. I'm not expecting anyone to keep that kind of log volume.
    – xiaomiklos
    Commented Feb 4, 2021 at 16:09
  • 1
    I found this article regarding Stack Overflow's logging and monitoring, which hints that they are not storing full traffic but application-specific logging, far more reasonable: nickcraver.com/blog/2018/11/29/… So, the question becomes whether this is still up to date :-)
    – xiaomiklos
    Commented Feb 4, 2021 at 16:15
    Well, this I can't know, and can only hope, just as you do, that a dev will show up and give an actual answer. :) Commented Feb 4, 2021 at 16:25
  • 1
    Our HAProxy servers are configured to log their traffic using HAProxy’s built-in syslog support. Any request that hits a public facing load balancer ends up being sent via UDP to a service that batches and flushes to a SQL Server instance with monthly tables configured with clustered columnstore indexes (CCI). HAProxy doesn’t log the request body - we only log things like the verb, URI, timings, some request headers and some response headers returned by the applications, so we can associate traffic with specific user accounts and get timing breakdowns from the application. (contd)
    – Dean Ward
    Commented Feb 4, 2021 at 23:18
  • 1
    I’m not near a keyboard with access right now to get exact numbers but, yes, it’s a reasonably large amount of data. Taryn (our awesome DBA) blogged about some of the challenges of moving that data around not so long ago - at that time we had about 40TB of traffic data going back a few years.
    – Dean Ward
    Commented Feb 4, 2021 at 23:21
  • As you linked, Nick's blog has a good section with way more detail - that's pretty much how we continue to do things
    – Dean Ward
    Commented Feb 4, 2021 at 23:22
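As a rough sketch of the shape Dean describes in the comments above - HAProxy emitting syslog over UDP to a service that batches and flushes rows to a database - here is a minimal Python collector. The batching thresholds and the insert_batch callback are invented for illustration; the real pipeline (SQL Server, monthly clustered-columnstore tables) is described in the linked posts:

    import socket
    import time

    BATCH_SIZE = 500                 # flush once this many log lines are buffered
    FLUSH_INTERVAL_SECONDS = 2.0     # ...or after this much time has passed

    def run_collector(insert_batch, host="0.0.0.0", port=5140):
        """Receive syslog lines over UDP and hand them off in batches.

        insert_batch is whatever persists the rows - for example a function
        doing a bulk INSERT into a monthly table; it is a placeholder here.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        sock.settimeout(FLUSH_INTERVAL_SECONDS)

        batch, last_flush = [], time.monotonic()
        while True:
            try:
                datagram, _addr = sock.recvfrom(65535)
                # One datagram per HAProxy log line: verb, URI, timings, headers...
                batch.append(datagram.decode("utf-8", errors="replace"))
            except socket.timeout:
                pass
            due = time.monotonic() - last_flush >= FLUSH_INTERVAL_SECONDS
            if batch and (len(batch) >= BATCH_SIZE or due):
                insert_batch(batch)  # bulk write into the traffic-log table
                batch, last_flush = [], time.monotonic()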
2

Can you explain more clearly what "publicly accessible properties" means in the below quote?

we have a database containing a log of all traffic to our publicly accessible properties

1
  • 3
    Any website or service that is routable from the public internet
    – Dean Ward
    Commented Jan 28, 2021 at 6:12
-6

What is so special about Stack Overflow for Teams Enterprise? Does it grant developer-level access within the team?

The attacker seemed pretty interested in it:

Additionally a person claiming to be one of our Enterprise customers submits a support request to obtain a copy of source code for auditing purposes. This request is rejected because we don’t give out source code and, additionally, the email cannot be verified as coming from one of our customers. It is flagged for further investigation by our support team.

And Stack Overflow restricted access to the help articles for Enterprise, even though the other Teams help articles remain accessible without payment.

Access to support documentation for our Enterprise product was limited to authorised users of that product.

17
  • 5
    You misread the post. What makes you believe the attacker was interested in Stack Overflow for Teams Enterprise? It was only mentioned as an example of a private database/server that was not compromised, or put at risk, at all. This is written explicitly to calm down those who use it, so they know their private data was never in any kind of danger, and hopefully never will be. Commented Jul 14, 2021 at 12:34
  • 1
    @ShadowWizardWearingMaskV2 "Additionally a person claiming to be one of our Enterprise customers submits a support request to obtain a copy of source code for auditing purposes. This request is rejected because we don’t give out source code and, additionally, the email cannot be verified as coming from one of our customers. It is flagged for further investigation by our support team."
    – Anonymous
    Commented Jul 14, 2021 at 12:59
  • 4
    Well you better add this to the answer, but still, this does not mean the attacker is interested in Stack Overflow for Teams Enterprise in particular. It's given as another example of their "probing our infrastructure, in particular parts of our build/source control systems and web servers hosting some of our development environments" activity. Commented Jul 14, 2021 at 13:16
  • 1
    @ShadowWizardWearingMaskV2 Not necessarily. Learning the enhanced features that Enterprise teams get isn't exactly helpful for a hacker, and using a free team should have been sufficient. Additionally, "Access to support documentation for our Enterprise product was limited to authorised users of that product.". They wouldn't do that if it weren't particularly useful for a hacker.
    – Anonymous
    Commented Jul 14, 2021 at 13:22
  • 2
    Well, we can't know. I think the attacker just spread attempts randomly in any possible direction. Also, please use the term "attacker", not "hacker", to make it consistent with the blog and the facts we do know. Commented Jul 14, 2021 at 13:24
  • 1
    @ShadowWizardWearingMaskV2 But why would Stack Overflow restrict documentation of Enterprise teams?
    – Anonymous
    Commented Jul 14, 2021 at 13:25
  • 1
    @ShadowWizardWearingMaskV2 "Access to support documentation for our Enterprise product was limited to authorised users of that product.", under the Remediations section.
    – Anonymous
    Commented Jul 14, 2021 at 13:34
  • 2
    Why do you think anyone but the attacker would be able to answer your first question about why they were interested in this, and how big do you think your chances are for an honest answer to your second question about gaining developer access that way?
    – Tinkeringbell Mod
    Commented Jul 14, 2021 at 13:38
  • 2
    @Tinkeringbell The first question was an indication, not asking what the hacker's motives were. My point is that it seems that there's something strange about Enterprise, and it should be patched if it weren't already.
    – Anonymous
    Commented Jul 14, 2021 at 13:41
  • 3
    Well - they could have figured a paid customer would have a better chance of getting the code, and Enterprise instances, as I understand it, are fairly full instances of the Q&A engine, running independently of the main public instances and Teams. Commented Jul 14, 2021 at 13:42
  • 7
    @Anonymous " why was the attacker so interested in learning as much as they could about it?" ... that's the question that's there, black on white with a neat little question mark behind it. And that's a question no one but the attacker can answer. The other one/two, about what's special and if it grants developer access, are ones you'll hopefully never even get an answer to, as that would pose a huge security risk.
    – Tinkeringbell Mod
    Commented Jul 14, 2021 at 13:47
  • 1
    @Tinkeringbell Okay, I've reworded the question. And I'm simply bringing this to their attention, to try to stop another hack before it happens while getting no information in return.
    – Anonymous
    Commented Jul 14, 2021 at 15:43
  • 2
    @ShadowWizardWearingMaskV2 I've changed "hacker" to "attacker", I missed the comment earlier.
    – Anonymous
    Commented Jul 14, 2021 at 15:46
  • 6
    Our Enterprise product is deployed separately from anything deployed in our data centers - its membership is separate and those with elevated privileges in an Enterprise instance have no elevated access to the public sites. However, the deployed code is practically identical to that used to deploy the public sites - there are some features only available in that environment, but it's otherwise identical. As for the attacker's motivation - who knows? They may have thought it was a quick and easy route to get source code or maybe they just wanted their own Enterprise instance without paying...
    – Dean Ward
    Commented Jul 14, 2021 at 17:23
  • 2
    @Dean perfect answer, I'm truly impressed. Thanks! Commented Jul 15, 2021 at 5:52
