-242

After a thorough investigation and inquiry, we were able to get an update on this. The ads in question were not intending to start audio but were rather checking to see if there was an audio player present as part of a bot/fraud detection system. For those who might still be wondering about the ad code, please read below.

Some of our advertising clients use a third-party ad fraud service to verify their ads are running on the correct sites and to a real audience. This is a normal and standard practice for digital advertising.

In this case, one company, Integral Ad Science (IAS), is responsible for the console errors seen in dev tools.

We’ve been assured by IAS that:

  • The console errors people have seen are normal and expected. This was confirmed by their product and engineering teams.

  • The sca.js pixel as mentioned collects data required for ad verification. There is no individual user tracking happening as the browser signals they collect via their js does not collect any PII.

  • Their signal collection complies with the standards of Media Rating Council (MRC), GDPR, and other regulations.

IAS’s privacy policy details the information they collect and their data retention policies.

This was very helpful for us to know and we are satisfied that there is not anything nefarious going on here. We're therefore going to continue to allow our advertisers to use IAS and other third-party ad fraud services.

Please note that as technologies evolve, we’ll continue to work closely with our clients and third-party vendors to make sure they follow industry standards and respect our policies.

41
  • 110
    The amount of data ads are trying to collect nowadays gives me the heebie jeebies. I'm no privacy nut, but I'm extremely uncomfortable with what data ad networks believe they have the right to. Assurances aside, I think I'll just keep my adblocker on everywhere, thank you.
    – fbueckert
    Commented Aug 14, 2019 at 19:30
  • 65
    “There is no individual user tracking happening as the browser signals they collect via their js does not collect any PII.” How is the first part of the sentence related to the second part? User tracking does not need to involve PII, and can be correlated with PII by amalgamating data collected through other channels. Commented Aug 14, 2019 at 19:55
  • 90
    I gotta say the top voted answer that went through the source code doesn’t match with the words you’re saying in this post. The code appears to be able to uniquely identify a user; even if that’s not the intent. I recognize you’re not attempting to “bait and switch”, but PII collection isn’t the core of the issue: tracking the user across the internet is. Commented Aug 14, 2019 at 20:03
  • 14
    somehow I think I would feel better if it turned out the other way 'round, that is, if SO really had just tried to play audio (instead of attempting to collect data that allows identifying me by my browser settings)
    – gnat
    Commented Aug 14, 2019 at 20:40
  • 27
    I've upvoted this since I know the company has decided not to pursue it any further than the assurances by IAS, and you've given a very good report as to what's been investigated and why they stopped where they did. However, I don't want that to be misconstrued as me accepting the IAS's assurances at face value nor that I believe doing these checks is an appropriate way of detecting bots or fraud. This post has given me the information I needed to be sure I will not put Stack Exchange as an exception in my ad blocker, so for that reason it was useful, even if I'm not happy with the content.
    – Davy M
    Commented Aug 14, 2019 at 21:10
  • 62
    Thanks for providing an update on this issue Juan. However, I am disappointed with the results. Just because something is 'normal and standard practice for digital advertising' and 'complies with the standards of Media Rating Council (MRC), GDPR, and other regulations' doesn't automatically make it ok or ethical. It just means it's not illegal. While I understand that ad revenue is important to SE, I would hope that decisions about tracking user data would be made on ethics rather than legalities.
    – Dhaust
    Commented Aug 15, 2019 at 5:36
  • 11
    This blatantly goes against what another staff said here, which said, in bold, that this is a problem. Commented Aug 16, 2019 at 5:17
  • 38
    "checking to see if there was an audio player present as part of a bot/fraud detection system" No, that is not what it is for. AudioContext fingerprinting is used for cross-domain tracking. In essence, it bypasses tracking-cookie restrictions by using unique hardware information obtained by exploiting the audio subsystem's behavior. Whoever told you that it was part of fraud detection was lying to you. Commented Aug 16, 2019 at 5:21
  • 45
    To be more precise, AudioContext fingerprinting is not used to detect if an audio player is present, but to uniquely identify individual computers through quirks in the audio hardware. There are other ways to test for the presence of an audio player. Any ad that is using AudioContext fingerprinting is doing so to track users without needing to use cookies (since cookies can be deleted and hardware fingerprints cannot). Commented Aug 16, 2019 at 5:56
  • 25
    Shouldn't this post be featured, seeing as it's the official response to a highly debated post?
    – Luuklag
    Commented Aug 16, 2019 at 8:54
  • 41
    According to Olivia who did a very thorough analysis of the very policies that you are discussing in regards to user tracking and fraud detection, what you claim is blatantly and provably false. Commented Aug 18, 2019 at 9:40
  • 21
    @Mari-LouA there's also a third option: ads without tracking. Those do exist in the wild (for instance, Read the Docs supports that). No one said ads had to come at the expense of privacy. Combine that with optional memberships (which let people support SE directly), and you've got a decent system. SE already has income from Jobs, Teams, and Enterprise as well; it's not like they're running a system whose sole income is ads. Commented Aug 21, 2019 at 12:27
  • 31
    Sorry, Juan, but where you've written "Stack Overflow is not trying to start audio", given all the information on painfully obvious display here and on the previous thread, it just reads "Stack Overflow Is Lying". Y'all have a ton of work in front of you to regain trust here.
    – E.P.
    Commented Aug 31, 2019 at 15:55
  • 8
    Second: when I load the page without the console open, it errors, but when I load the page with the console open, it doesn't, and the requests are never sent. This SCREAMS that something sneaky is going on. Commented Sep 3, 2019 at 13:07
  • 7
    Maybe this post should go in the "Featured on Meta" box? Commented Sep 3, 2019 at 22:11

8 Answers

230
+50

In a post here, another staff member said, in bold, that they are aware of the issue and not OK with it. But now you're trying to excuse it as acceptable. Is SE really going to go in this direction? Is it going back on the statement that it will prevent this from happening in the future?

From the linked post (emphasis in original):

Thanks for letting us know about this.

We are aware of it. We are not okay with it.

We're trying to track down what is doing it and get that mess out of here. We've also reached out to Google to enlist their support. I'll be honest: it's late in the day and we're unlikely to get this resolved today. But we've reached out and hope to get it fixed ASAP.

I don't like seeing Stack Exchange excuse malicious ads that fingerprint users' browsers.


You say that you were told by the ad company that this is part of fraud prevention. That is not correct. The AudioContext fingerprinting (along with other forms of fingerprinting that are harder to detect) they are doing is not used to detect whether an audio player is present, but to uniquely identify computers through quirks in how audio hardware operates. It's designed as a cross-browser, even cross-operating-system, tracking technique that, unlike cookies, cannot be deleted by the user.
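For the curious, the general AudioContext fingerprinting technique looks roughly like this. This is a minimal illustrative sketch of the publicly documented approach, not IAS's or any particular vendor's actual code: render a fixed signal through an `OfflineAudioContext` and reduce the rendered samples to a single value that varies with the audio stack.

```javascript
// Pure reduction step: sum the absolute sample values into one number.
// Tiny differences in DSP implementations across hardware/OS/browser
// combinations change this sum, yielding a stable per-device value.
function reduceSamples(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += Math.abs(samples[i]);
  return sum.toFixed(6); // stringify to a comparable fingerprint value
}

// Browser-only part: render an oscillator through a dynamics compressor
// with no audible output, then reduce the rendered buffer.
function audioFingerprint() {
  const ctx = new OfflineAudioContext(1, 44100, 44100);
  const osc = ctx.createOscillator();
  osc.type = 'triangle';
  osc.frequency.value = 10000;
  const comp = ctx.createDynamicsCompressor();
  osc.connect(comp);
  comp.connect(ctx.destination);
  osc.start(0);
  return ctx.startRendering().then(buf => reduceSamples(buf.getChannelData(0)));
}
```

Because the compressor's output differs subtly between devices, the reduced value is stable for one machine yet differs between machines, and unlike a cookie it survives deletion.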

8
  • 74
    TL;DR: Install an adblocker. Take SE and all its sites off your whitelist.
    – user248725
    Commented Aug 16, 2019 at 16:19
  • 44
    @NicHartley I don't have a whitelist
    – Dragonrage
    Commented Aug 17, 2019 at 0:02
  • 20
    @NicHartley I personally am not affected by this for various reasons, but that doesn't stop me from strongly disliking this as it will affect those who visit this site who are not aware of this situation and do not install an effective adblocker because they don't know any better. But yes, you should install an adblocker. :) Commented Aug 17, 2019 at 6:55
  • 6
    There are many ways to limit their collection. I was going to have my students join, but now I have decided against it. And I am going to issue a warning to the people I bring to these websites: that they must be careful because browser fingerprinting is going on.
    – user609937
    Commented Sep 14, 2019 at 9:07
  • 1
    if you are using Chrome, just create a rule that blocks the microphone permission; the same thing can be done in Firefox
    – HQSantos
    Commented Feb 4, 2020 at 14:29
  • 1
    @riki481saysReinstateMonica This does not mitigate audiocontext fingerprinting. Commented Jan 3, 2021 at 3:02
  • Companies go back on what they have said in the past plenty of times. Are you really surprised? Commented May 5, 2021 at 2:13
  • 1
    @EkadhSingh Surprised? No. Disappointed? Very. Commented May 5, 2021 at 2:15
209
+800

TL;DR: "The sca.js pixel as mentioned collects data required for ad verification. There is no individual user tracking happening as the browser signals they collect via their js does not collect any PII." is wrong: SO isn't starting audio, but ads use the audio API for fingerprinting, and IAS's ToS plus GDPR invalidate the quote. Please use an ad blocker and stay safe (pro tip: Firefox also has built-in fingerprinting protection, and there are browser plugins for, I think, all the major browsers that add the feature as well). If you want to send an even clearer message to SE, consider using AdNauseam (a uBlock derivative).

Disclaimer: I'm not a lawyer.

Stack Overflow is not trying to start audio

Well, you're completely correct. Nothing is attempting to start audio, and that was never the question either. Honestly, if you attempted to add audio ads to the site, be it Stack Overflow or any other Stack Exchange site, that's not something I could support. You've already stated you have no intention of blocking animated ads, which is yet another reason I'm using an ad blocker (beyond the privacy concerns and the failure to keep malicious ads at bay), but ads containing audio are something I personally classify as destructive for the site (it's fine on YouTube or Spotify because the context is different, but here? No.)

The highly upvoted answer, posted by a temporary user who was deleted on request shortly after the answer was posted, also outlined this for you:

The ad is attempting to use the Audio API as one of literally hundreds of pieces of data it is collecting about your browser in an attempt to "fingerprint" it, to uniquely identify you across sites despite your privacy settings.

We're highly aware Stack Overflow wasn't trying to start audio, so if you'd like a better title, how does this one sound: "Stack Overflow enables ads to identify and track users across sessions by using the audio API"

The ads access the audio API, but not with the intent of starting audio.

But seriously, are you OK with this now? Was Nick Craver's answer at any point the opinion of the company, or did you change your mind on that later? Was it before or after you were "reassured" this behavior was only "required for ad verification" and that they do "not collect any PII"? Spoiler: none of that is true. The ad network collects PII, and it's not exclusively for identifying fraudulent ad viewing.

Honestly, the worst part about this whole mess is, in my opinion, this announcement. You completely ignored the post documenting the fingerprinting, and you effectively replaced your initial announcement while announcing false information.

Some of our advertising clients use a third-party ad fraud service to verify their ads are running on the correct sites and to a real audience.

Again, you're right! They do use the data for fraud detection. Here's what you left out:

Additionally, IAS uses advertising impression information, mobile app information, and website traffic information including IP address and browser header information to:

  • Identify traffic sources by their geographic location and determine if the location is correct and located within the advertiser’s campaign parameters or traffic settings
  • Determine if traffic is being acquired is fraudulent, or if traffic acquisition practices that are out of compliance with an advertiser’s guidelines or contractual requirements.
  • Determine if a middleware is attempting to misrepresent its operating characteristics to prevent the identification of fraud or other invalid traffic.
  • Determine if traffic or ad impressions are originating from a server farm unlikely to be responsible for human-generated browsing activity.

"Geographic location". Doesn't that sound an awful lot like PII to you? There are more examples of this throughout the terms of service, as well as on their blog. They even agree it's PII under GDPR. source later

Further, I'll have to call your, or your ad provider's, lie:

The sca.js pixel as mentioned collects data required for ad verification. There is no individual user tracking happening as the browser signals they collect via their js does not collect any PII.

Their privacy policy:

For the purpose of identifying and preventing online ad impression fraud and invalid traffic and determining if advertisers and publishers are in compliance with their agreements, our Technology Solutions utilize the following additional technologies (in addition to the data described above):

  • Device identification technology, which analyzes device parameters collected as described above, including IP address and browser header information, to probabilistically identify a particular device.

[...]

Quite honestly, this alone is enough to back up my initial statement, but where's the fun in that? This uses "device identification technology", whatever that fully implies. Combined with the IP address, that is enough to personally identify people, and location (which they have explicitly stated they're using) is also personal data under GDPR (reference later).

Also:

We minimize our use of Personal Data by, for example, truncating the IP address after 30 days.

Additionally, the pixel tag collects data as per earlier in the privacy policy, which also lists IP addresses:

Our pixel tags allow cookies to be set, read, and modified when Individual users visit a website, and directly collect the Personal Data described under “Data We Collect.”

The reason I called your statement a lie is because IP addresses are personal data under GDPR. It might not be considered personal data elsewhere, but it's considered PII in the EU, and that's more than enough to invalidate your statement for traffic from an entire continent.
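As background on what a "pixel tag" mechanically is, here is a generic illustration, not IAS's actual tag; the endpoint and parameter names are hypothetical. A script builds a 1x1 image whose URL carries the collected parameters, and the server reads or sets cookies on that image request.

```javascript
// Generic tracking-pixel sketch (illustrative only). Building the URL is
// pure and testable; firing the pixel is browser-only.
function buildPixelUrl(endpoint, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  return endpoint + '?' + query;
}

// In a browser, requesting the image sends the query string plus any
// cookies for the pixel's domain to the collecting server.
function firePixel(url) {
  const img = new Image(1, 1);
  img.src = url;
}
```

The IP address the policy mentions does not even need to be in the query string; the server sees it on every such request automatically.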

I dislike slamming GDPR on the table to make you see the reality, but when you're clearly disregarding the points outlined in their privacy policy, I don't have a choice.

This was very helpful for us to know and we are satisfied that there is not anything nefarious going on here

Okay, so you're dealing with a company that tries to uniquely identify users using factors classified as personal data under GDPR, which then tells you it doesn't collect personal data, and you're satisfied? Anton Menshov already made my point, and the company even put it on their blog that they're collecting data classified as PII under GDPR. They also stated it's not used in the EU, but that was posted in 2015:

A handful of other offerings that do rely on data considered personal data under the new regulation have been withdrawn from EU markets while we explore alternative solutions. IAS looks forward to providing this measurement capability to our EU customers once an alternative solution is available and/or an industry-wide consent management platform is made scalable.

Their privacy policy doesn't mention whether the data collection practice is limited to areas outside the GDPR, so I doubt that's still the case. In fact, there's no mention of GDPR. The reasonable assumption is that they found their legal reason to collect the data, and I'm not doubting the lawfulness of the collection. You (the Stack Overflow company) have outlined the use of data in your ToS, and so has IAS. As I mentioned at the start of this post, I'm not a lawyer, and I'm not familiar enough with GDPR to start questioning that. It's still easy to read up on it to find definitions and see that they indeed are collecting PII.


Something you need to realize is that advertising is more than tracking. Quite honestly, I block ads from sources that attempt to track me because of data leaks from sources such as Facebook, which has proven to be outright incompetent at keeping data safe. The real difference between advertiser/tracker data leaks and service leaks is that I at least know what my data is. If my data here on SO gets leaked, I'll get notified, or read about it, a lot quicker than for a third-party service I never wanted that stores data just because I use a website.

If you step back and think about the number of users you might end up with who use ad blockers, what do you gain in the end? I'm assuming you're using ads as an additional income source, and I honestly understand that. I whitelist sites I trust that rely on ads to support themselves, or that otherwise need them for funding. However, if you legitimately believed that statement without consulting a lawyer and checking their privacy policy first, then you don't have my trust.

In the past couple of months, there have been a couple of topics on tracking, and several related to the behavior of ads.

(feel free to append to that list or leave a comment with suggestions for links to add if I've missed anything)

All it takes is one improperly handled ad and one unpatched XSS exploit no one has noticed, or a bug in the SafeFrame you enforce, to make the SE network a much worse place for the users without ad blockers. That being said, are ads still sandboxed? If you've moved over to IAS, then I'm assuming you've moved away from Google as an ad provider, which kind of invalidates the initial solution. Are we still safe from ads attempting page redirects for fun?

One metric you really should watch is how many users decide to block your ads. In the end, ads are only useful if you actually have people viewing them. Personally, I use Firefox and an extreme (likely overkill) protection system: I've enabled fingerprinting blocking, tracker blocking, and cryptominer blocking (because you never know what unfiltered ads might do), as well as uBlock and Privacy Badger. uBlock targets ads, while the tracking and fingerprinting protection just blocks known trackers. Privacy Badger knocks out the cross-site trackers that think they're safe because they've snuck past the other two defenses. Nothing gets through unless I say otherwise.

Beyond the periodic annoyance of seeing some ads (which I can live with if I like the site enough), privacy is my concern. What worries me the most is that we have a post fully documenting the fingerprinting, and the ToS of IAS backing up that this practice does happen (and that GDPR defines the data as PII), but you come in with what appears to be an official announcement and spread false information. I don't know whether you did your research, or at least consulted a lawyer before posting, but the answers here (specifically the ones posted before mine) suggest that's not the case. Whitelisting ads is, in my opinion, a matter of trust. With this post, you've lost the rest of my trust on the advertising front, and my trust in your (the company's, with extremely few exceptions) statements.


What concerns me more, however, is this announcement. The answer exposing that the audio API was being used to fingerprint users exists. If you've ignored it, that's your choice, but it doesn't change the fact that your announcement blatantly ignores several privacy concerns and neglects to mention several of the data usage areas. You could at least be honest about what data is being used, instead of requiring users to read several legal documents to find what data is being collected, what it's classified as, and, arguably most importantly, what it's used for. You missed all three and presumably relied on the statement of IAS instead of their legal documents.

That being said, I have no idea which services you've enabled or disabled (if that's something you can do), but judging by the fingerprinting attempts documented in logs on meta, I'd say you're using the services that, according to their privacy policy, collect data classified as personal data under GDPR.

Also, do I really need to point out your own privacy policy?:

In providing this opportunity, Stack Overflow and its third party partners may collect and use your personal information to tailor your advertising experience to suit your interests, skills, as well as to monitor your account activity in order to optimize our Products and Services.

We seek to limit what information advertisers and similar third parties have access to, as well as to ensure that your user experience on the public and private Stack Overflow network is not overwhelmed by advertising initiatives. However, our advertising products and services require us to collect certain personal and non-personal information on you, which includes:

  • Data from advertising technologies like cookies, web beacons, pixels, ad tags, and browser/device identifiers
  • Information you have provided to us directly including profile information, your Developer Story, and in limited instances your job history
  • Usage analytics including your visits to the Network, browsing and search history
  • Information from our advertising partners (e.g., device type and location)

You admit in your own privacy policy that your ad provider(s) collect data. The thing it doesn't mention, but that's pretty obvious, is that this also classifies as PII under GDPR, just like the data IAS collects. Location is indisputably PII.


And I really need to ask this again: Are you seriously going against Nick Craver's initial statement on fingerprinting adverts?

6
  • 30
    Thank you for going over their policies so thoroughly to more explicitly debunk these claims. This is really damning evidence. I hope it will be considered by staff. Commented Aug 18, 2019 at 9:38
  • 11
    @forest No problem :) I hope they'll consider it too. But if they don't, this post will at least work against them trying to make this seem like less of a big deal than it is. Commented Aug 18, 2019 at 10:06
  • 19
    So I did some of my own digging, and what they're doing is basically creating a pseudonym profile on everyone's devices and person, linking IP addresses, device characteristics, geolocation and various cookies. They say and claim they don't use this except to prevent fraud and correctly target campaigns, but I don't trust these networks as far as I can throw them.
    – Magisch
    Commented Aug 19, 2019 at 9:47
  • 24
    @Magisch "Correctly target campaigns" is the tricky part here. That means that they are using it to track individual users so they can give them ads based on information they collected cross-session and even cross-browser or, for AudioContext fingerprinting, cross-operating system (but not cross-hardware). Commented Aug 20, 2019 at 3:41
  • The "Facebook tracking" link is not related to ads.
    – pppery
    Commented Aug 20, 2019 at 16:39
  • 4
    @pppery "In the past couple of months, there have been a couple topics on tracking, and several related to the behavior of ads.". Fingerprinting is tracking, hence it's slightly related. Commented Aug 20, 2019 at 16:47
66
+400

Some of our advertising clients use a third-party ad fraud service to verify their ads are running on the correct sites and to a real audience. This is a normal and standard practice for digital advertising.

Not good enough. This is browser fingerprinting. I don't trust the pinkie promise or privacy policy of a shady third party ad vendor as far as I can throw it. Time and time again vendors like those have been compromised and done stuff outside of their remit.

That this isn't stopped means it becomes a security and privacy necessity to adblock on all SE sites.

1
  • 7
    Given that advertising networks' raison d'être is to misinform, befuddle and manipulate people into buying crap, they don't have any moral justification to claim they are being defrauded.
    – James
    Commented Sep 18, 2019 at 17:34
41

Think about it from our perspective:

If your personal data was being unknowingly taken from you via an ad that you didn't know to block, how would you feel?

I feel like this just ruins my trust in the network, especially considering this quote:

Their signal collection complies with the standards of Media Rating Council (MRC), GDPR, and other regulations.

in combination with this edit:

It has been decided that such user fingerprinting ads will be permitted, as they do not violate any laws or regulations.

Sure, their data collection complies with various standards, laws, and regulations, but does that mean that what they're doing is morally correct, or will encourage more people to use the network?
No.

Please view this from our perspective and try to reconsider.

2
  • 31
    I had my ad blocker disabled for Stack Exchange but after reading about this, it's going back on unfortunately. Organizations that want me to allow them to serve me ads need to do better than "well, they're not breaking any laws".
    – ColleenV
    Commented Sep 2, 2019 at 11:04
  • 10
    I have had a longstanding personal objection to using adblock software because I recognize the social contract of the Internet and how website revenue works. Social contracts depend, however, on everyone being reasonable about things. I'm disappointed that we can't seem to sit together anymore and work on a solution that works for everyone. Commented Sep 16, 2019 at 14:29
35

I guess part of this is motivated by IAS's clarifications regarding compliance with GDPR, which explicitly say:

Which IAS solutions were impacted by GDPR?

Ad fraud and IP address-based geolocation measurement: These products use data points defined as “personal data” under the GDPR in order to provide the ad fraud and geolocation components ... This specifically includes how IAS collects, processes, and stores IP addresses and other less specific personal data points such as device information. Our technology identifies and eliminates ad fraud based on proprietary algorithms that monitor and track the behavior of these data points.

So, based on the information from that page, they (IAS + their GDPR consultant team, whatever that means) decided (!) that they can proceed with this particular practice.

While their decision might have some legal grounds, so does the opposite view. Consider GDPR Recital 30:

Online identifiers for profiling and identification

Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.

The keyword here is other identifiers; therefore, the device information (which is collected with IAS ads) can certainly be considered personal data.

In practice, a lot would depend on:

  • what purposes is this data used for inside IAS?
  • at what granularity is it stored and accessible?
  • is it ever sold or supplied in any way outside of IAS?
  • how precise are the internal regulations within IAS?
  • whether documentary evidence of a violation of GDPR or internal regulations by IAS ever comes to the surface

and, most importantly, whether Stack Exchange users and Stack Overflow as a company believe and trust IAS. It seems like the Stack Overflow company, at least for now, has made its decision. The users have an easy option to make their own decision, which might "accidentally" be reflected in the number of ads Stack Exchange is able to show.

0
23

I'm not an expert in the terms used here, but the following parts of the IAS privacy policy seem to be the most interesting and relevant ones:

  • Clickstream data including URLs and other data regarding the websites on which a particular browser has viewed advertising impressions we are analyzing.
  • Clickstream data including mobile application identifier and other data regarding the mobile apps on which a particular user has viewed advertising impressions.

A clickstream is the sequence of URLs a specific user has visited. And if you want to build a clickstream across different websites, fingerprinting the user's browser is probably the most robust method to do so. I'm not entirely sure, but I would read this excerpt as a confirmation that these ads are tracking user behaviour across all the sites on which they are displayed.
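To make the cross-site implication concrete, here is an illustrative sketch (the data shapes are hypothetical, not IAS's implementation) of how per-site impression logs collapse into a cross-site clickstream once a fingerprint links them:

```javascript
// impressions: records collected on many unrelated sites, each tagged with
// the same device fingerprint wherever the ad code ran.
// Returns a Map from fingerprint to the time-ordered list of URLs visited.
function buildClickstreams(impressions) {
  const streams = new Map();
  for (const imp of [...impressions].sort((a, b) => a.ts - b.ts)) {
    if (!streams.has(imp.fingerprint)) streams.set(imp.fingerprint, []);
    streams.get(imp.fingerprint).push(imp.url);
  }
  return streams;
}
```

No cookie is needed for this join: as long as the fingerprint is stable, visits logged on entirely different domains end up in the same per-user stream.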

1
  • 1
    Yes, that is what the more sophisticated browser fingerprinting methods are used for. Commented May 3, 2022 at 22:07
6

I'm the guy who wrote the original source code in question here (contained in sca.js). IAS acquired my bot detection startup, Swarm, in 2016, and I then led R&D on advertising fraud detection technology at IAS for two years after the acquisition. Posting here with the caveat that my opinion in no way, shape, or form reflects the opinions of my (former) employer, IAS. Just wanted to give you my two cents.

The particular code discussed above is not being used for generating a unique audio fingerprint; it literally tests whether the AudioContext is accessible and functions as expected (including even error conditions). To verify whether a user is a bot or a human, one approach is to validate the JavaScript environment where the code is running, and then verify whether the JavaScript engine implementation found in the wild matches what is expected based on the device/browser specified in the user's user agent. AudioContext is one of hundreds of interfaces that sca.js tests to probe the browser environment for validation.
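The environment-validation approach described here can be sketched as follows. This is purely illustrative and is not the actual sca.js logic; `collectEnvironmentSignals` and `looksLikeBot` are hypothetical names. The idea is to probe a few interfaces and flag environments whose actual behavior contradicts what the user agent claims.

```javascript
// `env` stands in for the global object (window in a browser); passing it
// in keeps the probing testable outside a browser.
function collectEnvironmentSignals(env) {
  return {
    hasAudioContext: typeof env.AudioContext === 'function' ||
                     typeof env.webkitAudioContext === 'function',
    hasWebdriverFlag: !!(env.navigator && env.navigator.webdriver),
  };
}

// A headless bot often claims to be a real Chrome but exposes
// navigator.webdriver, or lacks interfaces a real Chrome would have.
function looksLikeBot(signals, claimsModernChrome) {
  if (signals.hasWebdriverFlag) return true;
  if (claimsModernChrome && !signals.hasAudioContext) return true;
  return false;
}
```

Real detectors probe hundreds of such consistency points; any single check is trivial to spoof, which is why the signal set overlaps so heavily with fingerprinting signals.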

In the realm of ad tech, there are anti-fraud products to detect bots viewing and clicking on ads. This is a multi-billion dollar problem and a lot of technology development goes into solving it. (There's also a lot of money spent on defeating the bot detection technology. For those of you who looked at the source of sca.js... any code published for the purpose of bot detection is deliberately opaque and hard to understand in order to prevent botnets from figuring out how it works and defeating the detection.)

If you want to do bot detection effectively, you have to collect a lot of signals that have strong overlap with signals used for browser fingerprinting, even if you are only using this information for purposes of fraud or bot detection. One strong market-side constraint exists in the ad fraud detection industry: every vendor wants to be able to run their verification code on Google's display ad network / ad exchange. To have their verification code running on Google ads, they must first be certified by Google, which includes an analysis of the source code to verify that no canvas or audio fingerprinting is taking place (these are the technologies used for higher-resolution fingerprints).

The upshot is that no ad tech vendor running tracking code on Google does audio, canvas, or WebGL fingerprinting. However, they are allowed to collect all kinds of other data about the browser environment in order to identify whether the user is a bot. Combining these data points with the IP address effectively constitutes a low-resolution fingerprint.

This is in strong contrast to bot detection in fields outside of ad tech / marketing, where fingerprinting is not restricted by the same privacy concerns. In the realm of security, take for example Distil Networks (acquired by Imperva): their technology depends almost entirely on high-resolution browser fingerprinting (using audio + canvas + WebGL signals) to identify the fingerprints of bot browsers (and they incidentally collect high-resolution fingerprints of all the human users in the process).
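
For contrast, here is a hedged sketch of the audio-fingerprinting technique that paragraph refers to — which, per the explanation above, sca.js does NOT do. The idea is to render a fixed tone through an OfflineAudioContext and digest the samples; tiny floating-point differences between audio engine implementations make the digest device/browser specific.

```javascript
// Illustrative only: a simplified version of the well-known
// OfflineAudioContext fingerprinting approach. Returns null where the
// Web Audio API is unavailable (e.g. outside a browser).
async function audioFingerprint(global) {
  var Offline = global.OfflineAudioContext || global.webkitOfflineAudioContext;
  if (typeof Offline !== "function") return null;

  var ctx = new Offline(1, 44100, 44100); // 1 channel, 1 second at 44.1 kHz
  var osc = ctx.createOscillator();
  var comp = ctx.createDynamicsCompressor();
  osc.type = "triangle";
  osc.frequency.value = 10000;
  osc.connect(comp);
  comp.connect(ctx.destination);
  osc.start(0);

  var rendered = await ctx.startRendering();
  var samples = rendered.getChannelData(0);
  var sum = 0;
  for (var i = 4500; i < 5000; i++) sum += Math.abs(samples[i]);
  // Stable on a given engine build, but varies across engines/devices
  return sum.toFixed(6);
}
```

Note how different this is from the probe shown earlier: the probe asks "does this API behave like a real engine?", while the fingerprint asks "which exact engine is this?" — and only the latter identifies individual machines.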

9
  • 5
    let me see. I observe annoying ads trying to squeeze my personal details. I hear coder paid for doing this and company getting money from this telling me that this whole racket is full of fraud and scam. And the same folks are now telling me that I personally needn't worry and that they are somehow different from the rest of the industry in that they swear that my personal details won't be misused and abused - because some obscure policy document of some obscure company (which, surprisingly, is also making money from this) says they don't do bad things. Guess I should feel better now
    – gnat
    Commented Feb 10, 2021 at 7:18
  • 1
    What personal details of yours are being gathered by the ad tracking code? Ad measurement companies like IAS have access to an anonymized impression ID / cookie ID and don't know who you are, what your email is, what your name is, etc. At code run time, they have access to 1) your IP address, 2) your browser environment to run JS code. The ad measurement + security vendors all generally have safe harbor under GDPR as long as they do not send IP data to customers/consumers of the data. You may not realize it... but lots of security software wouldn't work without persisting IPs internally. Commented Feb 10, 2021 at 16:56
  • 2
    you described it yourself, "low resolution finger print". Looking a bit closer, since you point that Google (yet another party in that slippery business, how could I forget) "allowed to collect all kinds of other data" I guess I have to also somehow believe that this low resolution won't eventually turn into a high res
    – gnat
    Commented Feb 10, 2021 at 17:14
  • Most websites are collecting data that could be used for a low resolution fingerprint (user agent + IP address). Not really sure lots of software on the internet would work if you didn't have the collection of a low resolution fingerprint ¯\_(ツ)_/¯ Commented Feb 10, 2021 at 20:34
  • 4
    (I'm a community moderator - not an employee, so except where I'm quoting an employee, my views don't reflect that of the company.) There's a lot more context behind this than would fit in a comment - but SE has always been extremely anti tracking - to the point where this was their initial response. Practically - being fingerprinted at all is/was antithetical to the social contract the community had with the company. This was also during a period when that social contract was slowly, but steadily being eroded - so certainly there was Commented Feb 10, 2021 at 22:33
  • 3
    quite a lot of friction over this. While I'm sure as a developer of such software - you may feel differently, but even low resolution fingerprinting wasn't what we expected of SE, and it certainly wasn't a thing between, if I recall, the previous 2 ad providers they used. Commented Feb 10, 2021 at 22:37
  • 3
    I just wanted to say that I really appreciated reading your take from "the inside" as it were. I do genuinely trust what you've written at face value, as it makes a lot of sense. It's obvious that the tension between "enough data to validate" and "too much data we're fingerprinting now" is really tough to manage, even for the best of actors in the industry. Throw that in with the fact that there have been and continue to be an apparently large number of outright bad actors, and you get the climate we have now; people find it exceedingly difficult to trust any ad players whatsoever.
    – zcoop98
    Commented Apr 2, 2021 at 22:08
  • 1
    I think the other side of this that people forget about is that there is an unbelievably large amount of money involved in the online advertising industry, and that comes with an ever increasing need for verification and validation that this money is well spent. "Turning off JavaScript in ads" isn't an option because it's more complex than that– if I hired someone to go pass out pamphlets on the street, it'd be nice to know that they didn't chuck them in the dumpster and take my money. I'd wager this is only more true with large sums of money and computers thrown in the mix.
    – zcoop98
    Commented Apr 2, 2021 at 22:21
  • 10
    In the realm of ad tech, there are anti-fraud products to detect bots viewing and clicking on ads. This is a multi-billion dollar problem and a lot of technology development goes into solving it. You'd be amazed how much less I care about your bottom line than the privacy of my device.
    – Basic
    Commented Apr 14, 2021 at 23:59
4

To Stack Exchange

Please use an advertising provider that doesn't track what you do online to form a profile users never gave consent to exist. Stay away from anything made by Google, because they don't give two hoots about user privacy. User privacy should come before "relevant advertising."

To the Users of Stack Exchange

Tracking users across the Internet to serve "relevant" ads needs to stop. Browser fingerprinting, tracking users even when they leave the site, trackers like HTML5 SuperCookies and HTML5 Canvas fingerprinting, they all need to stop. If Stack Exchange is going to fingerprint our browsers and use data from digital profiles built of us by companies like Google we never gave consent for, we users need to protect our privacy and install a tracker blocker as well as a traditional adblocker. I recommend Privacy Badger for the tracker blocker because it's run by a nonprofit that exists solely to protect Internet privacy, and AdBlock Ultimate for the adblocker because it (unlike many more popular adblockers) doesn't allow ad networks to pay it to not block ads from them (called "Acceptable Ads"). Of course, download anything you want, just

Don't let tech companies track us!
