Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick

Posted on Techdirt - 15 July 2024 @ 11:24am

Musk’s DSA Debacle: From ‘Exactly Aligned’ To Accused Of Violations

Elon Musk’s declaration that the EU’s DSA regulation was “exactly aligned with my thinking” and that he agreed with “everything” it mandates is looking pretty hilarious at this point.

Elon Musk loves endorsing things he clearly doesn’t understand and then lashes out when they backfire. Last week, we had the story of how he was demanding criminal prosecution of the Global Alliance for Responsible Media (GARM) just one week after ExTwitter announced it had “excitedly” rejoined GARM. But now he’s really outdone himself.

Two years ago, soon after Elon announced his bid to take over Twitter because (he claimed) he was a “free speech absolutist,” he met with the EU’s Internal Market Commissioner, Thierry Breton, and gave a full-throated endorsement of the EU’s Digital Services Act (DSA). At the time, we pointed out how ridiculous this was, as the DSA, at its heart, is an attack on free speech and the rights of companies to moderate as they wish.

At the time, we pointed out how it showed just how incredibly naïve and easily played Elon was. He was endorsing a bill that clearly went against everything he had been saying about “free speech” on social media. Indeed, the previous management of Twitter — the one so many people mocked as being against free speech — had actually done important work pushing back on the worst aspects of the DSA when it was being negotiated. And then Musk came in and endorsed the damn thing.

So, of course, the EU has been on the attack ever since he took over the company. Almost immediately, Breton started publicly lashing out at Musk over his moderation decisions and insisting that they violated the DSA. As we highlighted at the time, this seemed ridiculously censorial and extremely problematic for free expression.

But, of course, the whole thing was pretty much a foregone conclusion. And late last week, the EU formally charged ExTwitter with violating the DSA, the very law that Elon originally called great and said he agreed with.

The Commission has three findings, and each of them seems problematic in the typically simplistic, paternalistic EU manner, written by people who have never had to manage social media.

To be clear, in all three cases, I do wish that ExTwitter were doing what the EU is demanding, because I think it would be better for users and the public. But, I don’t see how it’s any business of EU bureaucrats to demand that ExTwitter do things the way they want.

First, they don’t like how Elon changed the setup of the “blue check” “verified account” system.

  • First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.

And, I mean, I’ve written a ton of words about why Elon doesn’t understand verification, and why his various attempts to change the verification system have been absurd and counterproductive. But that doesn’t mean it “deceives users.” Nor does it mean that the government needs to step in. Let Elon fall flat on his face over and over again. This entire approach is based on Breton and EU technocrats assuming that the public is too stupid to realize how broken ExTwitter has become.

As stupid as I think Musk’s approach to verification is, the fact that it doesn’t “correspond to industry practice” shouldn’t matter. That’s how experimentation happens. Sometimes that experimentation is stupid (as we see with Musk’s constantly changing and confusing verification system), but sometimes it allows for something useful and new.

Here the complaint from the EU seems ridiculously elitist: how dare it be that “everyone” can get verified?

Are there better ways to handle verification? Absolutely. Do I trust EU technocrats to tell platforms the one true way to do so? Absolutely not.

Second, the EU is mad about ExTwitter’s apparent lack of advertising transparency:

  • Second, X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.

I wish there were more details on this because it’s not entirely clear what the issue is here. Transparency is a good thing, but as we’ve said over and over again, mandated transparency leads to very real problems.

There are serious tradeoffs with transparency, and having governments require it can lead to problematic outcomes regarding privacy and competition. It’s quite likely that ExTwitter’s lack of a searchable repository has more to do with (1) Elon keeping a barebones engineering staff that focuses only on the random things he’s interested in, which don’t include regulatory compliance, (2) Elon really, really hating it when the media can point out that ads are showing up next to awful content, and (3) a repository giving more of a view into how the quality of ads on the site has gone from top-end luxury brands to vapes and crypto scams.

So, yes, in general, more transparency on ads is a good thing, but I don’t think it’s the kind of thing the government should be mandating, beyond the basic requirements that ads need to be disclosed.

Finally, the last item is similar to the second one in some ways, regarding researcher access to data:

  • Third, X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionally high fees.

And, again, in general, I do wish that ExTwitter were better at giving researchers access to data. I wish it made API access free for researchers, instead of trying to charge them $42,000 per month.

But, again, there’s a lot of nuance here that the EU doesn’t understand or care about. Remember that Cambridge Analytica began as an “academic research project” using the Facebook API. Then it turned into one of the biggest (though, quite over-hyped) privacy scandals related to social media in the last decade.

I have no doubt that if ExTwitter opened up its API access to researchers and another Cambridge Analytica situation happened, the very same EU Commissioners issuing these charges would immediately condemn the company for the sin of making that data available.

Meanwhile, Elon is claiming in response to all of this that the Commission offered him an “illegal secret deal”: if ExTwitter “quietly censored speech without telling anyone, they would not fine” the company. Musk also claimed that other companies accepted that deal, while ExTwitter did not.


So, this is yet another situation in which both sides are being misleading and confusing. Again, the structure of the DSA is such that its very nature is censorial. This is what we’ve been pointing out for years, and why we were horrified that Elon so loudly endorsed the DSA two years ago.

But the suggestion that the EU would offer “secret deals” to companies to avoid fines does not match at all with how the EU Commission actually works. Thierry Breton’s explanation that there was no “secret deal” with anyone, and that it was ExTwitter’s own staff that asked what terms might settle the complaint, rings very true.


In the end, both sides are guilty of overblown dramatics. Elon Musk continues to flounder spectacularly at managing a social media platform, making a series of blunders that even his fiercest advocates can’t overlook. However, the EU’s role is equally questionable. Its enforcement of the DSA seems overly paternalistic and censorial, mandating “best practices” that may not even be best and reeking of condescension.

The allegations of an “illegal secret deal” are just another smoke screen in this complex spectacle. It’s far more likely that the EU Commission pointed to the DSA and offered standard settlement terms that ExTwitter rebuffed, turning it into a grandiose narrative.

This debacle offers no real heroes — just inflated egos and problematic regulations. What we’re left with is an unending mess where no one truly wins. Musk’s mistaken endorsement of the DSA was a red flag from the beginning, showing that hasty alliances in the tech-policy arena often lead to chaos rather than clarity.

There are a ton of nuances and tradeoffs in the tech policy space, and neither Musk nor Breton seem to care about those details. It’s all about the grandstanding and the spectacle.

So, here we stand: a free speech absolutist who endorsed censorship regulations and a regulatory body enforcing broad and suspect mandates. It’s a circus of hypocrisy and heavy-handedness, proving that in the clash between tech giants and bureaucratic juggernauts, the rest of us become unwilling spectators.

Posted on Techdirt - 12 July 2024 @ 02:48pm

Ctrl-Alt-Speech: Over To EU, Elon

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Domonique Rai-Varming, Senior Director, Trust & Safety at Trustpilot. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Posted on Techdirt - 12 July 2024 @ 09:29am

Elon Says ExTwitter Will Sue The Group ExTwitter ‘Excitedly’ Joined Just Last Week

Elon Musk’s ExTwitter just set a new speed record: from enthusiastic joiner of an advertising coalition to potential plaintiff against the same organization in just over a week.

Sometimes, timing is everything.

This week has been a travel week for me, so on Tuesday evening, I wrote up a short article on last week’s news that ExTwitter had “rejoined GARM.” GARM is the Global Alliance for Responsible Media, which is a loose coalition focused on brand safety for advertisers, such that their ads are less likely to appear next to, say, neo-Nazi content.

The main focus of my post was that there was almost no way that anyone should believe that ExTwitter’s decision to rejoin GARM was a sincere statement that ExTwitter would now take brand safety and GARM’s recommendations seriously. Instead, I noted that whenever ExTwitter was desperate for advertisers to sign on, its advertising execs (Linda Yaccarino’s underlings) would tout its compliance with GARM guidelines. But then Elon would do something fucking crazy and drive away advertisers again.

I even predicted, “sooner or later (probably sooner) Elon will do something horrible…” I should have known that it would happen so soon that it came before I could even post my article.

Anyway, I wrote that Tuesday evening and scheduled it to go up on Techdirt on Thursday afternoon, since I’d be traveling and without internet access for large segments of time this week.

Little did I know that on Wednesday, before my post went up, Jim Jordan and the House Judiciary would release an astoundingly stupid “report” claiming that GARM was an antitrust-violating cartel that was pressuring websites into censoring conservatives.

And, in response to a tweet showing just a clip of some nonsense testimony at the House hearing about this report, Elon Musk announced on Thursday morning (before my post went up) that ExTwitter “has no choice but to file suit against the perpetrators and collaborators” (meaning GARM, its organizers, and its members) and also said that “hopefully, some states will consider criminal prosecution.”


Yes, that’s Elon Musk saying that he plans to file a civil lawsuit against GARM and its “collaborators” and hopes that state AGs will pursue criminal charges against the very organization HIS COMPANY REJOINED JUST A WEEK EARLIER and celebrated with a hyped-up tweet.


So, last week ExTwitter was “excited to announce” that it had rejoined GARM, and this week Elon says that GARM’s leaders should be criminally prosecuted and that he plans to sue them himself.

Cool, cool.

I can just imagine how Linda Yaccarino must feel about this. She clearly orchestrated the return to GARM as part of her desperate push to lure back advertisers.

But let’s be clear about this. Companies have their own First Amendment right to choose whom they associate with. And that includes not advertising on websites where their ads might show up next to controversial content, disinformation, or just general nonsense. Many companies recognize that it is bad for business to have advertisements showing up next to neo-Nazi content, or just plain old disinformation.

Private companies choosing not to advertise is not a violation of any law, civil or criminal. Private organizations setting up guidelines for brand safety is not an antitrust violation. Private organizations choosing not to advertise on the site formerly known as Twitter is an expression of their own First Amendment rights not to associate with whatever nonsense Elon is promoting these days.

Anyway, all that effort that Yaccarino put into “rejoining GARM” last week just went up in smoke. She was trying to convince advertisers that ExTwitter was a safe place for brand advertising, but now Elon is saying ExTwitter will be suing GARM and pushing for criminal prosecutions of everyone involved in GARM.

Which now includes Elon Musk’s ExTwitter as of last week. Can’t wait to see Elon sue himself.

What a clusterfuck of stupidity.

And I’m sure that it won’t be long before an Andrew Bailey of Missouri or a Ken Paxton of Texas opens an “investigation” into GARM (the group that Elon Musk’s company “excitedly” rejoined just last week).

Hilariously, this would be an actual First Amendment violation, in that it would be a government agency starting a criminal investigation for the pretty clear express purpose of intimidating companies out of expressing themselves.

Remember when Elon said he was against governments pressuring companies about their speech? Now he’s telling them to do that, but just to organizations he doesn’t like (even though his own company just joined the very same organization).

So, just to recap: last week, Elon’s company rejoined GARM, the advertising coalition that helps make sure platforms are a safe place for brand advertisers to advertise. This week, the House Judiciary Committee falsely claimed that companies’ First Amendment-protected right not to advertise on ExTwitter was an antitrust violation, leading to “First Amendment absolutist” Elon Musk saying he’s going to sue the very organization his company just “excitedly” joined. And, to top it all off, Elon hopes that states will open criminal investigations into this activity — an act that would actually violate the First Amendment rights of GARM and those involved with it. Which includes Elon Musk’s own company.

I should have stayed off the internet even longer.

Posted on Techdirt - 11 July 2024 @ 03:17pm

Fool Me Thrice: ExTwitter’s Empty Brand Safety Promises

The famous line is “Fool me once, shame on you. Fool me twice, shame on me.” But what do you call it when Elon Musk fools advertisers over and over again into believing that ExTwitter will protect their brand safety, despite making it clear that he has no interest in doing so?

At this point, no advertiser can seriously believe that Elon Musk’s ExTwitter will protect the brand safety of its advertisers. I mean, this is literally the guy who told advertisers to “go fuck themselves” after some pulled their advertisements following one of Elon’s many ridiculous comments (as well as evidence of ads appearing next to neo-Nazi content).

But, at the recent Cannes Lion advertising festival, Elon and Linda Yaccarino tried to play nice with advertisers. They announced that ExTwitter was rejoining the World Federation of Advertisers’ (WFA) Global Alliance for Responsible Media (GARM).

After drifting from the World Federation of Advertisers’ (WFA) Global Alliance for Responsible Media (GARM) when Elon Musk took over Twitter, the social platform now known as X has decided to rejoin the coalition of online providers and brand partners, working to uphold the group’s brand-safety requirements and potentially win back advertisers.

“We’re excited to announce that X has reinstated our relationship with the @wfamarketers Global Alliance for Responsible Media,” the social media company posted on its platform Monday, adding that “X is committed to the safety of our global town square and proud to be part of the GARM community.”

The problem with this announcement, though, is that basically every time ExTwitter is desperate for advertisers to buy some ads, it touts GARM compliance, but then Elon goes on some antisemitic rant, or yet another study comes out showing what a terrible job ExTwitter does in protecting brand safety, and the promises of GARM compliance are forgotten.

Why is this time any different?

Some history: soon after Elon completed the purchase of Twitter, GARM issued an open letter to Elon about making sure he was committed to brand safety.


At the time, Elon insisted the company’s “commitment to brand safety” was unchanged. He met with GARM folks, and promised to uphold the guidelines.


A few months later, the company announced a new brand safety effort, compliant with GARM.

But, largely due to Elon’s own nonsense (and misunderstanding of free speech), he keeps going back on those promises and personally driving advertisers away.

And each time the company gets desperate for new advertisers, it tries to claim it’s supportive of the GARM approach to “brand safety” for advertisers.

Indeed, at last year’s Cannes Lion, the company also talked about its GARM compliance. That was just months before Elon told advertisers to go fuck themselves, and multiple reports showed big brands’ ads appearing next to some pretty horrific content.

So it’s not even clear what is meant by ExTwitter “rejoining” GARM. The company has kept touting GARM as its standard over the last few years, and then totally failed to live up to those promises, mostly due to its own owner’s behavior and desire to appease the worst people in society.

Any advertiser who thinks this newly constituted relationship with GARM means literally anything for brand safety is too gullible to be left alone with an advertisement. It’s all for show. Sooner or later (probably sooner) Elon will do something horrible and/or another study will come out showing how badly the company protects the brand safety of its advertisers.

It’s happened before. It’ll happen again. And rebuilding a relationship with GARM is just window dressing.

Posted on Techdirt - 11 July 2024 @ 11:07am

DOJ Asks Fifth Circuit To Block The Injunction RFK Jr. Thinks He’s Now Entitled To Regarding Social Media

I’m not going to go through all the background on this story, because we just did that yesterday. If you missed that post, it will help to go read it before reading this one. I concluded that post by noting that, thanks to district court Judge Terry Doughty petulantly claiming he can’t stay an obviously problematic injunction (one nearly identical to the injunction the Supreme Court just trashed in the Murthy decision), the DOJ would likely quickly run to the Fifth Circuit to ask for the same relief.

And run they did. Before my article had even posted, the DOJ had filed an emergency motion with the Fifth Circuit asking for a stay on these issues. The motion is basically the same thing the DOJ filed in the district court, just now asking the Fifth Circuit the same thing:

The government respectfully requests a stay pending appeal of the district court’s preliminary injunction. A stay is warranted because the Supreme Court previously stayed, and ultimately reversed, an identical injunction issued by the same district court based on the same record. The Supreme Court’s decision makes clear that the government is likely to succeed on appeal, and the Supreme Court’s prior stay confirms that the equities and the public interest warrant a stay while the appeal proceeds. We request relief by July 24, 2024, to allow sufficient time for the Supreme Court to consider an application for a stay, should the Solicitor General elect to file one. We have sought plaintiffs’ position but have not received a response.

Almost immediately, RFK and his co-plaintiffs filed a “nuh uh, we’re totally different” response using the same argument they had used in the district court:

The chief difference between this action and Murthy is the identity of the plaintiffs. The Kennedy Plaintiffs have a very different, “strong claim to standing,” and Mr. Kennedy in particular, as a candidate for President, has an urgent claim to relief. Murthy v. Missouri, 144 S. Ct. 32, 32-33 (2023) (Alito, J., dissenting from denial of leave to intervene) (“Indeed, because Mr. Kennedy has been mentioned explicitly in communications between the Government and social media platforms, he has a strong claim to standing, and the Government has not argued otherwise. Our democratic form of government is undermined if Government officials prevent a candidate for high office from communicating with voters, and such efforts are especially dangerous when the officials engaging in such conduct are answerable to a rival candidate.”).

First of all, citing a dissent while ignoring what the majority actually said is a choice. But the main thing is that the core issue still stands. If the administration were actually coercing social media companies into their moderation decisions, then perhaps the plaintiffs would have standing.

But no one — including RFK Jr. — has presented any evidence of such coercion.

And therefore, the fact that he’s a candidate for President (with no chance to win) is meaningless here.

And, yes, if the administration were actually pressuring social media companies to silence other candidates for President, RFK Jr. would have a point. But social media companies have plenty of reasons to pull down RFK Jr.’s dangerous nonsense peddling, which is making kids sick by creating vaccine hesitancy. That’s got nothing to do with the government suppressing the speech of a rival candidate, and everything to do with that candidate spewing dangerous stuff.

But, this is the Fifth Circuit, which has a history of making decisions driven by ideology more than reality. So it’s entirely possible that they reject this, and the issue quickly returns to the Supreme Court’s shadow docket, as the government is forced to seek an emergency order putting a stay on the clearly ridiculous injunction.

That would be quite fast, and while the initial request would flow up through Justice Alito (who wrote the cantankerous dissent), I could see enough Justices getting pretty pissed off that the Fifth Circuit seemed to clearly not be paying attention to what the majority was saying in its ruling regarding standing.

Posted on Techdirt - 10 July 2024 @ 11:11am

RFK Jr. Seems To Think The Supreme Court’s Murthy Decision Means The Gov’t Is Now Barred From Talking To Social Media

RFK Jr. seems to believe that being a Kennedy and spouting anti-vax nonsense qualifies him to be President. Now, he’s taking his delusions to a whole new level by arguing that the Supreme Court’s Murthy decision means the government can’t even talk to social media companies anymore. Buckle up, folks, this is going to be a wild ride.

Vanity Fair recently had quite the takedown of RFK Jr. based on conversations with his own family members. It makes quite clear that RFK Jr. is not one to let facts get in the way of whatever nonsense he’s decided to claim to the world.

And while people can point to lots of high-profile ways in which that has played out, I’m going to point out one that is relevant to Techdirt’s general interests: RFK Jr. has been trying desperately to sue whoever he can think of to complain about getting booted from Facebook.

He has sued various social media companies, and those suits have failed spectacularly (thanks to Section 230). He recently decided to try suing Meta, yet again, in the belief that his Quixotic Presidential campaign somehow makes the issue different than it was before.

However, he also sued the Biden administration directly in 2023. He kept prattling on ignorantly, arguing that the administration was deliberately trying to stifle his speech (which is kind of hilarious, given that any time he talks, more people realize what a nutcase RFK Jr. actually is). RFK filed the lawsuit in the same court where Missouri/Louisiana and some other nonsense peddlers appeared to be having some success in their equally batshit lawsuit against the administration over social media moderation.

Soon after filing the case in that court, where he was guaranteed to get the same judge, RFK sought to merge his case with the Missouri case. Judge Terry Doughty, after issuing his batshit crazy decision in that case, more or less agreed to merge Kennedy’s case into the Missouri v. Biden docket. He issued an injunction similar to the one he issued in the Missouri case, but put it on hold until ten days after the Supreme Court sent down its ruling in the original Missouri case.

As you likely now know, after getting a still crazy (but slightly less crazy) Fifth Circuit ruling, the Supreme Court took the case, newly dubbed Murthy v. Missouri, and made it clear that none of the plaintiffs could show standing. The majority opinion also made it quite clear that both the district court decision and the Fifth Circuit decision were crazy because they were willing to accept absolute nonsense as fact, when it was obviously not.

While that decision sent the case back down to the lower court, unless you were delusional and totally committed to believing things that were not true, you would realize that this basically meant that such a case had no chance to go anywhere.

Enter RFK Jr.

The day after the Supreme Court ruling came down, the DOJ did the proper thing and notified Judge Doughty of the Supreme Court opinion. The DOJ also pointed out that given the nature of the Supreme Court ruling, RFK Jr. also clearly lacked standing. So, rather than letting the injunction go into effect, the DOJ intended to file a motion asking Judge Doughty to “vacate” the injunction he had granted RFK.

While this Court’s stay remains in effect, the government intends to file with this Court a motion for an indicative ruling under Federal Rule of Civil Procedure 62.1 that the Court would vacate the preliminary injunction in Kennedy because the Kennedy plaintiffs (who relied exclusively on the same set of facts “before the Court in Missouri v. Biden,” ECF No. 6-1 at 2) lack standing under the Supreme Court’s analysis in that case. If the Court issues such a ruling, the government would seek a remand from the Fifth Circuit under Federal Rule of Appellate Procedure 12.1 to allow this Court to enter the requested vacatur. In the alternative, the government plans to ask this Court to stay the Kennedy preliminary injunction for the full duration of the pending appeal from that injunction, if the Court declines to enter the requested indicative ruling.

The DOJ also argued that the clock on the “10 days” until the injunction supposedly went into effect didn’t start ticking until the Supreme Court officially sent the decision to the lower court, which would be a month or so later:

Under Supreme Court Rule 45.3, the Supreme Court “will send” its judgment to the lower court “32 days after entry of the judgment, unless the Court or a Justice shortens or extends the time, or unless the parties stipulate that it be issued sooner.” The Supreme Court will accordingly send down its ruling on Monday, July 29, 32 days (plus a weekend day) from yesterday. The government understands this Court’s stay of the preliminary injunction in Kennedy to extend for ten days after that date—i.e., the date on which the Supreme Court “sends down” its ruling in Missouri.

RFK Jr.’s lawyers jumped in to say “nuh uh” and to suggest that the injunction (which the Supreme Court had clearly rejected regarding the other plaintiffs in the case) should go into effect very soon.

Two days ago, on Wednesday, June 26, 2024, the Supreme Court handed down its ruling in the Missouri v. Biden case. See Murthy v. Missouri, No. 23-411, 2024 WL 3165801 (U.S. June 26, 2024). Accordingly, under the plain language of this Court’s ruling—and contrary to the Notice of Opinion filed yesterday by Defendants—it would appear that this Court’s stay will be “automatically lifted” on July 7, 2024—eleven days after Murthy was handed down—and that the preliminary injunction will, absent further judicial action, become operative on that day.

The DOJ then felt the need to file a “motion for clarification” from Judge Doughty. First, they point out that RFK’s lawyers are misrepresenting what Judge Doughty actually said in his ruling on the stay of the injunction:

Defendants disagree with the Kennedy Plaintiff’s interpretation, which does not accord with the Supreme Court’s rules governing the timing of when the Supreme Court “sends down” its opinions and judgments. The Kennedy Plaintiffs seize on the Court’s use of the phrase “handed down” at some points in its opinion—and if that were all the Court’s order said, then the Plaintiffs’ interpretation would be reasonable. But in the decretal language of its order—the part that has actual legal force—the Court unambiguously referred to the date on which the Supreme Court “sends down” its ruling. See Dkt. 38 at 23 (“IT IS FURTHER ORDERED that in light of the stay issued by the Supreme Court of the United States in Missouri v. Biden, this order is STAYED for ten (10) days after the Supreme Court sends down a ruling in Missouri v. Biden.”). Plaintiffs never acknowledge that language or attempt to square their interpretation with it.

But, even more importantly, the DOJ says, in effect, “hey, in light of SCOTUS saying ‘no standing’ for the other plaintiffs, how about we extend the stay on the injunction no matter what so we can brief you on why RFK also has no standing”:

In the alternative, if the Court adopts Plaintiffs’ characterization of the duration of the stay, Defendants request that this Court grant a 26-day extension of the stay beyond the expiration date urged by Plaintiffs, until and including Friday, August 2, 2024, to enable the parties to fully brief and this Court to decide (1) a motion by Defendants for an indicative ruling under Federal Rule of Civil Procedure 62.1 that the Court would vacate the preliminary injunction in Kennedy because the Kennedy plaintiffs lack Article III standing under the Supreme Court’s analysis in Murthy, and (2) in the alternative, a motion by Defendants for a stay pending appeal for the full duration of the pending appeal from that injunction, if the Court declines to enter the requested indicative ruling.

In response, RFK filed something saying that the DOJ should have requested this kind of clarification when Doughty first issued his “10 days” ruling:

If Defendants genuinely found the Court’s stay ruling unclear, or if they viewed eleven days as insufficient, they had five months to ask this Court or the Fifth Circuit for relief. Instead, Defendants sat on their hands, and now, five days after Murthy was handed down, Defendants move for “clarification” of a ruling that is already clear, and for the further stay of an injunction already on appeal.

But then, RFK goes on to argue (ridiculously, and wrongly) that he has much stronger arguments for standing on the basis of him being a laughably unqualified candidate for President.

The bottom line is that the Kennedy Plaintiffs have much stronger standing than did the Missouri plaintiffs, and Mr. Kennedy in particular, as a candidate for President who is still being brutally censored on major social media platforms (just as this Court predicted), urgently requires and is entitled to vindication of his rights

But that’s not what gives you standing. What gives you standing, Bobby Jr., is actual evidence that the government coerced social media companies to shut down your accounts, and that it didn’t happen because your anti-vax nonsense violated their policies. And RFK can’t show that because it didn’t actually happen.

However, they also argue that the right place for this discussion is not Judge Doughty’s courtroom, but rather the Fifth Circuit. As we’ll discuss below, this was the most compelling bit to Judge Doughty, who decided that this is out of his courtroom for now.

The DOJ then responded to this even more forcefully, pointing out that RFK obviously has no standing, based on the Murthy ruling.

First, the Supreme Court’s decision in Missouri demonstrates that the Kennedy Plaintiffs lack standing to obtain a preliminary injunction. The Kennedy Plaintiffs stated that they “do not rest their claims on censorship of their own speech. Rather, Plaintiffs have brought this case as (and on behalf of) social media users, whose right to an uncensored public square is being systematically violated.” Dkt. 20 at 2.1 And this is the sole basis for standing that this Court found for Plaintiff Sampognaro, who “submitted no direct evidence of content suppression.” Dkt. 38 at 11. But the Supreme Court rejected this “startlingly broad” theory, “as it would grant all social-media users the right to sue over someone else’s censorship—at least so long as they claim an interest in that person’s speech.” Missouri, 2024 WL 3165801, at 16. And the Court held that such a theory fails to establish an Article III injury absent “any specific instance of content moderation” of a third-party to whom Plaintiff had a “concrete, specific connection,” “that caused [plaintiff] identifiable harm,” id. at 16-17. Plaintiffs fail to supply any such example.

Nor can the Kennedy Plaintiffs rely on a direct censorship theory of standing following Missouri because they have failed to show any future injury that is traceable to the government conduct they seek to enjoin—much less any future injury that is traceable to each of the governmental Defendants covered by the preliminary injunction. Id. at 7-8; see id. at 9 (“‘[P]laintiffs must demonstrate standing for each claim that they press’ against each defendant, ‘and for each form of relief that they seek.’”) (citation omitted). In Missouri, the Supreme Court explained that “[t]he primary weakness in” the plaintiffs’ reliance on “past restrictions” of their content by social-media platforms is that this Court made no “specific causation findings with respect to any discrete instance of content moderation”—in other words, no findings that any act of content moderation was attributable to actions by Defendants (much less a particular Defendant) as opposed to the third-party platforms’ exercise of their independent discretion. Id. at *8. The Kennedy Plaintiffs motion for a preliminary injunction, which “submit[s] no new evidence,” Dkt. 6-1 at 1, did not rectify that deficiency.

Furthermore:

Kennedy adduced no evidence establishing that any social-media company’s action against his accounts can be attributed to the actions of a Defendant. In fact, the record evidence is to the contrary: Facebook explained that it removed pages and accounts linked to the “[D]isinformation [D]ozen” “for violating [Facebook’s] policies,” and noted that it was not imposing a complete ban because “the remaining accounts associated with these individuals [were] not posting content that [broke Facebook’s] rules.” Missouri, Dkt. 10-1, Ex. 37 at 1. That suggests the relevant actions reflected the platform’s own decisions, not any governmental action.

The DOJ then also points to the recent Vullo decision from the Supreme Court, which reinforced the standards from Bantam Books in deciding whether or not a government official has coerced a third party to censor someone. The DOJ says that there’s no way RFK can meet the standards set forth in that decision:

As the Supreme Court recently emphasized in a decision issued after the Kennedy preliminary injunction, it is perfectly “permissible” for the government to “attempt[] to persuade” a private party not to disseminate speech, National Rifle Association, 602 U.S. at 188, so even a showing that platforms would not have taken content-moderation actions against plaintiffs’ speech but for the government’s actions would not suffice to show that those actions violated the First Amendment. Rather, the relevant question is whether the government’s “conduct … , viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech.”

The Kennedy Plaintiffs are unlikely to be able to demonstrate on the merits that the government coerced the platforms to act given the difficulties identified by the Supreme Court in even establishing that the government’s actions influenced the platforms. See Missouri, 2024 WL 3165801, at *13 n.8 (“acknowledging the real possibility that Facebook acted independently in suppressing [the plaintiff’s] content”). Accordingly, the injunction should be dissolved

The DOJ also points out that Doughty should stay the injunction if only because the issue is going to have to be dealt with by the Fifth Circuit anyway, and it’s standard practice to stay such an injunction until an appeal is decided. Also, they point out that if the Kennedy injunction goes into effect, it will bar all sorts of communications that the Supreme Court in Murthy said were perfectly normal, reasonable communications between government officials and private companies.

Because the universal preliminary injunction here is identical to the injunction in Missouri, it also will inflict exactly the same harms that the Supreme Court found sufficient to issue a stay in that case

But… the very next day, Judge Doughty basically wiped his hands of the issue, saying that the case is out of his court, and if there’s an issue they should take it up with the Fifth Circuit:

This Court lacks jurisdiction to address Defendants’ request. Generally, a notice of appeal divests the district court of jurisdiction over the judgment or order that is the subject of the appeal. Sierra Club, Lone Star Chapter v. Cedar Point Oil Co., Inc., 73 F.3d 546, 578 (5th Cir. 1996). The Court in Sierra Club noted that Fed. R. Civ. P. Rule 62(d) provides an exception to this rule when an appeal is taken from an interlocutory or final judgment granting, dissolving or denying an injunction where the district court may suspend, modify, restore, or grant an injunction during the pendency of the appeal upon such terms as to bond or otherwise as it considers proper for the security of the rights of the adverse party. Id. The court in Sierra Club further noted that the authority granted by Rule 62(c) does not extend to the dissolution of an injunction and is limited to maintaining the status quo.

But wouldn’t maintaining the status quo at least mean maintaining the stay that blocks the injunction from going into effect? He’s doing the reverse of “maintaining the status quo” by apparently letting his original injunction go into effect. Which means, in theory, that the government is yet again barred from talking to social media companies even as the Supreme Court just said that was stupid.

And thus… it seems that the DOJ is likely to make these arguments again before the Fifth Circuit, which is where logic and common sense go to die.

Posted on Techdirt - 9 July 2024 @ 11:28am

Disney Cites Supreme Court’s NetChoice Decision In Fighting Gina Carano’s SLAPP Suit

Remember that SLAPP suit, financed by Elon Musk, that actor Gina Carano filed against Disney after it chose not to renew her contract for The Mandalorian? That’s the one where Carano seems to be insisting that failing to renew her contract after she made some controversial political comments is somehow a violation of her First Amendment rights.

The entire lawsuit is a joke, but the two sides have been flinging paperwork back and forth over the last few months. I’d been waiting for the judge to issue some sort of opinion on the pending motion to dismiss, but I spotted one filing by Disney last week that struck me as worth highlighting.

Disney filed a Notice of Supplemental Authority to highlight to the court some of the verbiage in the Supreme Court’s ruling last week in the NetChoice/CCIA cases, regarding whether or not Texas and Florida can pass laws mandating that social media sites must host certain types of political speech.

As Disney points out, the language in the majority opinion seems “relevant” to Disney’s arguments against Carano’s claims.

On July 1, 2024, the Supreme Court of the United States issued an opinion in Moody v. NetChoice, LLC, attached as Exhibit A. The First Amendment analysis in Part III of the Court’s opinion is relevant to the parties’ motion-to-dismiss arguments. In particular, the Supreme Court held:

  • That “ordering a party to provide a forum for someone else’s views implicates the First Amendment” if “the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt.” Op. 14.
  • That “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude,” and that the challenged laws “target[] those expressive choices” by “forcing the [plaintiffs] to present and promote content on their feeds that they regard as objectionable.” Op. 17, 24.
  • That none of the analysis “changes just because a compiler includes most items and excludes just a few,” and that “[i]ndeed, that kind of focused editorial choice packs a peculiarly powerful expressive punch.” Op. 18; see Op. 24 (“That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference.”).

The language quoted above confirms that Disney has a right to exclude speech that alters its expressive activity, that the First Amendment protects its decision to decline to accommodate messages it would prefer to exclude, and that it does not lose its First Amendment right simply because it allowed others’ speech….

I don’t see how any of this should make much difference to the outcome either way, but it’s still fascinating to see how the decision is already being cited in situations like this one.

It’s also an example of why, yes, it is important for companies to have First Amendment rights, as it should be helpful towards stopping these sorts of nonsense lawsuits.

Posted on Techdirt - 8 July 2024 @ 01:09pm

Clarence Thomas Learned Nothing From The Mess He Helped Create Regarding Section 230, Blogs Ignorantly About 230 Yet Again

Have we considered giving Supreme Court justices their own blogs in which they can vent their ill-informed brain farts, rather than leaving them to use official Supreme Court order lists as a form of blog?

Justice Clarence Thomas has been the absolute worst on this front, using various denials of certiorari on other topics to add in a bunch of anti-free speech, anti-Section 230 commentary, on topics he clearly does not understand.

Thomas started this weird practice of Order List blogging in 2019, when he used the denial of cert on a defamation case to muse unbidden on why we should get rid of the (incredibly important) actual malice standard for defamation cases involving public figures.

Over the last few years, however, his main focus on these Order List brain farts has been to attack Section 230, each time demonstrating the many ways he doesn’t understand Section 230 or how it works (and showing why justices probably shouldn’t be musing randomly on culture war topics on which they haven’t actually been briefed by any parties).

He started his Section 230 crusade in 2020, when he again chose to write unbidden musings after the court decided not to hear a case that touched on Section 230. At that point, it became clear that he was doing this as a form of “please send me a case in which I can try to convince my fellow Justices to greatly limit the power of Section 230.”

Not having gotten what he wanted, he did it again in 2021, in a case that really didn’t touch on Section 230 at all, but where he started musing that maybe Section 230 itself was unconstitutional and violated the First Amendment.

He did it again a year later, citing his own previous blog posts.

Finally, later that year, the Supreme Court actually took on two cases that seemed to directly target what Thomas was asking for: the Gonzalez and Taamneh cases targeted internet companies over terrorist attacks based on claims that the terrorists made use of those websites, and therefore the sites could be held civilly liable, at least in part, for the attacks.

When those cases were finally heard, it became pretty obvious pretty damn quickly how ridiculous the premise was, and that the Supreme Court Justices seemed to regret the decision to even hear the cases. Indeed, when the rulings finally came out, it was something of a surprise that the main ruling, in Taamneh, was written by Thomas himself, explaining why the entire premise of suing tech companies for unrelated terrorist attacks made no sense, but refusing to address specifically the Section 230 issue.

However, as we noted at the time, Thomas’ ruling in Taamneh reads like a pretty clear support for Section 230 (or at least a law like Section 230) to quickly kick out cases this stupid and misdirected. I mean, in Taamneh, he wrote (wisely):

The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.

And, I mean, that’s exactly why we have Section 230: to get cases that try to turn these kinds of tenuous accusations into legal claims tossed out quickly.

But, it appears that Thomas has forgotten all of that. He’s forgotten how his own ruling in Taamneh explains why intermediary liability protections (of which 230 is the gold standard) are so important. And he’s forgotten how his lust for a “let’s kill Section 230” case resulted in the Court taking the utterly ridiculous Taamneh case in the first place.

So, now, when the Court rejected another absolutely ridiculous case, Thomas is blogging yet again about how bad 230 is and how he wishes the Court would hear a case that lets him strike it down.

This time, the case is Doe v. Snap, and it is beyond stupid. It may be even stupider than the Taamneh case. Eric Goldman had a brief description of the issues in this case:

A high school teacher allegedly used Snapchat to groom a sophomore student for a sexual relationship. (Atypically, the teacher was female and the victim was male, but the genders are irrelevant to this incident).

The teacher was sentenced to ten years in jail, so the legal system has already held the wrongdoer accountable. Nevertheless, the plaintiff has pursued additional defendants, including the school district (that lawsuit failed) and Snap.

In a new post, Goldman makes even clearer just how stupid this case is:

We should be precise about Snap’s role in this tragedy. The teacher and student exchanged private messages on Snap. Snap typically is not legally entitled to read or monitor the contents of those messages. Thus, any case predicated on the message contents runs squarely into Snap’s limitations to know those contents. To get around this, the plaintiff said that Snap should have found a way to keep the teacher and student from connecting on Snap. But these users already knew each other offline; it’s not like some stranger-to-stranger connection. Further, Snap can keep these individuals from connecting on its network only if it engages in invasive user authentication, like age authentication (to segregate minors from adults). However, the First Amendment has said for decades that services cannot be legally compelled to do age authentication online. The plaintiff also claimed Snapchat’s “ephemeral” message functionality is a flawed design, but the Constitution doesn’t permit legislatures to force messaging services to maintain private messages indefinitely. Indeed, Snapchat’s ephemerality enhances socially important privacy considerations. In other words, this case doesn’t succeed however it’s framed: either it’s based on message contents Snap can’t read, or it’s based on site design choices that aren’t subject to review due to the Constitution.

See? It’s just as stupid as, if not more stupid than, the Taamneh case. It’s yet another “Steve Dallas” lawsuit, in which civil suits are filed against large companies that are only tangentially related to the issues at play, solely because they have deep pockets.

The procedural posture of this case is also bizarre. The lower courts also recognized it was a dumb case, sorta. The district court rejected the case on 230 grounds. The Fifth Circuit affirmed that decision but (bizarrely) suggested the plaintiff seek en banc review from the full contingent of Fifth Circuit judges. That happened, and while the Fifth Circuit refused to hear the case en banc, seven of the fifteen judges (just under half) wrote a “dissent,” citing Justice Thomas’s unbriefed musings and suggesting Section 230 should be destroyed.

Justice Thomas clearly noticed that. While the Supreme Court has now (thankfully) rejected the cert petition, Thomas has used the opportunity to renew his grievances regarding Section 230.

It’s as wrong and incoherent as his past musings, but somehow even worse, given what we had hoped he’d learned from the Taamneh mess. On top of that, it has a new bit of nuttery, which we’ll get to eventually.

First, he provides a much more plaintiff-friendly explanation of what he believes happened:

When petitioner John Doe was 15 years old, his science teacher groomed him for a sexual relationship. The abuse was exposed after Doe overdosed on prescription drugs provided by the teacher. The teacher initially seduced Doe by sending him explicit content on Snapchat, a social-media platform built around the feature of ephemeral, self-deleting messages. Snapchat is popular among teenagers. And, because messages sent on the platform are self-deleting, it is popular among sexual predators as well. Doe sued Snapchat for, among other things, negligent design under Texas law. He alleged that the platform’s design encourages minors to lie about their age to access the platform, and enables adults to prey upon them through the self-deleting message feature. See Pet. for Cert. 14–15. The courts below concluded that §230 of the Communications Decency Act of 1996 bars Doe’s claims

Again, given his ruling in Taamneh, where he explicitly noted how silly it was to blame the tool for its misuse, you’d think he’d be aware that he’s literally describing the same scenario. Though, in this case it’s even worse, because as Goldman points out, Snap is prohibited by law from monitoring the private communications here.

Thomas then goes on to point out how there’s some sort of groundswell for reviewing Section 230… by pointing to each of his previous unasked-for, unbriefed musings as proof:

Notwithstanding the statute’s narrow focus, lower courts have interpreted §230 to “confer sweeping immunity” for a platform’s own actions. Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 592 U. S. ___, ___ (2020) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 1). Courts have “extended §230 to protect companies from a broad array of traditional product-defect claims.” Id., at ___–___ (slip op., at 8–9) (collecting examples). Even when platforms have allegedly engaged in egregious, intentional acts—such as “deliberately structur[ing]” a website “to facilitate illegal human trafficking”—platforms have successfully wielded §230 as a shield against suit. Id., at ___ (slip op., at 8); see Doe v. Facebook, 595 U. S. ___, ___ (2022) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 2).

And it’s not like he’s forgotten the mess with Taamneh/Gonzalez, because he mentions it here, but somehow it doesn’t ever occur to him that this is the same sort of situation, or that his ruling in Taamneh is a perfect encapsulation of why 230 is so important. Instead, he bemoans that the Court didn’t have a chance to even get to the 230 issues in that case:

The question whether §230 immunizes platforms for their own conduct warrants the Court’s review. In fact, just last Term, the Court granted certiorari to consider whether and how §230 applied to claims that Google had violated the Antiterrorism Act by recommending ISIS videos to YouTube users. See Gonzalez v. Google LLC, 598 U. S. 617, 621 (2023). We were unable to reach §230’s scope, however, because the plaintiffs’ claims would have failed on the merits regardless. See id., at 622 (citing Twitter, Inc. v. Taamneh, 598 U. S. 471 (2023)). This petition presented the Court with an opportunity to do what it could not in Gonzalez and squarely address §230’s scope

Except no. If the Taamneh/Gonzalez cases didn’t let you get to the 230 issue because the cases “would have failed on the merits regardless,” the same is doubly true here, where there is no earthly reason why Snap should be held liable.

Then, hilariously, Thomas whines that SCOTUS is taking too long to address this issue with which he is infatuated, even though all it’s done so far is have really, really dumb cases sent to the Court:

Although the Court denies certiorari today, there will be other opportunities in the future. But, make no mistake about it—there is danger in delay. Social-media platforms have increasingly used §230 as a get-out-of-jail free card.

And that takes us to the “new bit of nuttery” I mentioned above. Thomas picks up on a point that Justice Gorsuch raised during oral arguments in the NetChoice cases, and that I’ve now seen being pushed by grifters and nonsense peddlers. Specifically, that the posture NetChoice took in fighting state content moderation laws is in conflict with the arguments made by companies making use of Section 230.

Here, we’ll let Thomas explain his argument before picking it apart to show just how wrong it is, and how this demonstrates the risks of unbriefed musings by an ideological and outcomes-motivated Justice.

Many platforms claim that users’ content is their own First Amendment speech. Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech. See Moody v. NetChoice, 603 U. S. ___, ___ (2024). When it comes time for platforms to be held accountable for their websites, however, they argue the opposite. Platforms claim that since they are not speakers under §230, they cannot be subject to any suit implicating users’ content, even if the suit revolves around the platform’s alleged misconduct. See Doe, 595 U. S., at ___–___ (statement of THOMAS, J.) (slip op., at 1–2). In the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry. The Court should consider if this state of affairs is what §230 demands.

So, the short answer is, yes, this is exactly the state of affairs that Section 230 demands, and the authors of Section 230, Chris Cox and Ron Wyden, have said so repeatedly.

Where Thomas gets tripped up is in misunderstanding whose speech we’re talking about in each scenario. Section 230 is quite clear that sites cannot be held liable for the violative nature of third-party expression (i.e., the content created by users). But the argument in Moody was about the editorial discretion of social media companies to express themselves through their choices about what content they allow.

Two different things in two different scenarios. The platforms are not “arguing the opposite.” They are being specific and explicit where Thomas is being sloppy and confused.

Section 230 means no liability for the third-party uses of the tool (which you’d think Thomas would understand given his opinion in Taamneh). But Moody isn’t about liability for third-party content. It was about whether or not the sites have the right to determine which content they host and which they won’t, and whether or not those choices (not the underlying content) are themselves expressive. The Court answered (correctly) that they are.

But that doesn’t change the simple fact that the sites still should not be liable for any tort violation created by a user.

Thomas is right, certainly, that more such cases will be sent to the Supreme Court, given all the begging he’s been doing for them.

But he would be wise to actually learn a lesson or two from what happened with Taamneh and Gonzalez, and maybe recognize (1) that he shouldn’t spout off on topics that haven’t been fully briefed, (2) that there’s a reason why particularly stupid cases like this one and Taamneh are the ones that reach the Supreme Court, and (3) that what he said in Taamneh actually explains why Section 230 is so necessary.

And then we can start to work on why he’s conflating two different types of expression in trying to attack the (correct) position of the platforms with regard to their own editorial discretion and 230 protections.

Posted on Techdirt - 8 July 2024 @ 09:27am

Didn’t We Already Do This? Twenty Years After Supreme Court Rejected Age Verification Law, It Takes Up New Case

Just when you thought the internet was safe from the meddling minds of the Supreme Court, the Justices have decided to take another crack at reviewing whether or not a new set of state regulations of the internet violates the First Amendment. And this time, it has a “but won’t you think of the children online” element to it as well.

Just a day after wrapping up the last term and (thankfully) not destroying the internet with its NetChoice decisions, the Supreme Court released a new order list regarding petitions for cert and announced that it would be taking Free Speech Coalition’s challenge to Texas’ internet age verification law, giving it yet another chance to potentially screw up the internet (or, hopefully, to reinforce free speech rights).

If you haven’t been following this case, it’s an important one for the future of privacy and speech online, so let’s bring everyone up to speed.

Two decades ago, there was an early moral panic about kids on the internet, and Congress went nuts passing a variety of laws aiming to “protect the children online.” Two of the bigger attempts — the Communications Decency Act and the Child Online Protection Act — were dumped as unconstitutional in Reno v. ACLU and Ashcroft v. ACLU.

Among other things, the Reno case established that the First Amendment still applies online (meaning governments can’t pass laws that suppress free speech online), and the Ashcroft case established that age-restricting access to content online was unconstitutional because it failed “strict scrutiny” (the standard necessary to uphold a law that impacts speech). In large part, it failed strict scrutiny because it was not the “least restrictive means” of protecting children, and would likely block both kids from content they had a First Amendment right to access and adults from content they had a right to access.

However, we’re deep in the midst of a very similar moral panic about “the kids online” these days, despite little actual evidence to support the fearmongering. Nonetheless, a ton of states have been passing all kinds of “protect the kids online” laws. This is happening in both Republican- and Democratic-controlled states, so it’s hardly a partisan moral panic.

Multiple courts have been (rightly) tossing these laws out as unconstitutional one after another, with many pointing to the decision in Ashcroft and noting that the Supreme Court already decided this issue.

Many of the age verification laws (especially those in Republican-controlled states) have been focused specifically on adult content websites, saying those sites in particular are required to age-gate. And while it makes sense that children should not have easy access to pornographic content, there are ways to limit such access without using problematic age verification technology, which puts privacy at risk and is not particularly effective. Indeed, just a couple of weeks ago, an age verification vendor used by many internet companies was found to have leaked personal data on millions of people.

Allowing age verification laws online would do tremendous damage to the internet, to kids, and to everyone. It would create a regime where anonymity online would be effectively revoked, and people’s private data would be at risk any time they’re online. People keep pitching ideas around “privacy-protective age verification,” which is one of those concepts, like “safe backdoors to encryption,” that politicians seem to think is doable, but in reality is impossible.

One of the many states that passed such a law was Texas and, as in most other states (the only exceptions to date have been on procedural grounds, in states where a suit can’t be filed until someone takes action against a site for failing to age-gate), the district court quickly tossed out the law as obviously unconstitutional under the Ashcroft ruling.

But, just months later, the Fifth Circuit (as it has been known to do these past few years) decided that it could ignore Supreme Court precedent, overturn the lower court, and put the law back into effect. I wrote a big long post explaining the nutty thinking behind all this, but in effect, the Fifth Circuit decided that it didn’t have to follow Ashcroft because that case only dealt with “strict scrutiny,” while the judges on the Fifth Circuit believed that a law like this need only face intermediate scrutiny, and on that basis the law was fine.

Again, this bucked every possible precedent. And just last week, when yet another trial court, this time in Indiana, threw out a similar law, the judge there walked through all the many reasons the Fifth Circuit got things wrong (the Indiana court was not bound by the Fifth Circuit, but the state of Indiana had pointed to the Fifth’s ruling in support of its own law).

Back in April, we explained why it was important for the Supreme Court to review the Fifth Circuit’s bizarre ruling, and with cert now granted, that’s where things stand.

Of course, it’s anyone’s guess as to how the Supreme Court will rule, though there are a few signs that suggest it may use this to smack down the Fifth Circuit and remind everyone that Ashcroft was decided correctly. First, especially this past term, the Supreme Court has been aggressively smacking down the Fifth Circuit and its series of crazy rogue rulings. So it’s already somewhat primed to look skeptically at rulings coming out of the nation’s most ridiculous appeals court.

Second, if the Fifth’s reasoning weren’t nutty, there would be little to no reason to take the case. Again, the Court already handled nearly this exact issue twenty years ago, and the Fifth Circuit is the first to say it can just ignore that ruling.

That said, any time the Supreme Court takes up an internet issue, you never quite know how it’s going to end up, especially given Justice Kagan’s own comment on herself and her colleagues that “these are not, like, the nine greatest experts on the internet.”

On top of that, any time you get into “for the children” moral panics, people who might otherwise be sensible seem to lose their minds. Hopefully, the Supreme Court takes a more sober approach to this case, but I recognize that “sober analysis” and this particular Supreme Court are not always things that go together.

Posted on Techdirt - 3 July 2024 @ 11:58am

GOP Really Committed To The Bit That Speech They Don’t Like Is Censorship

The House Oversight Committee is investigating NewsGuard, a private company, for supposed “censorship” for the crime of… offering its own opinions on the quality of news sites. The old marketplace of ideas seems to keep getting rejected whenever Republicans find that their ideas aren’t selling quite as well as they’d hoped.

Up is down, left is right, black is white, day is night. The modern GOP, which has left any semblance of its historical roots in the trash, continues to make sure that “every accusation, a confession” is the basic party line. And now, it’s claiming that free speech is censorship.

Apparently Rep. James Comer was getting kinda jealous that his good buddy Rep. Jim Jordan was out there weaponizing the government to suppress speech, all while pretending it was in an attempt to stop the weaponization of the government to suppress speech.

Comer heads the House Committee on Oversight and Accountability. He has apparently decided that it’s his job to investigate companies for the kind of speech he dislikes. In this case, it’s NewsGuard he’s investigating.

Today, House Committee on Oversight and Accountability Chairman James Comer (R-Ky.) launched an investigation into the impact of NewsGuard on protected First Amendment speech and its potential to serve as a non-transparent agent of censorship campaigns. In a letter to NewsGuard Chief Executive Officers Steven Brill and Gordon Crovitz, Chairman Comer raises concerns over reports highlighting NewsGuard’s contracts with federal agencies and possible actions being taken to suppress accurate information. Chairman Comer’s letter includes requests seeking documents and information on NewsGuard’s business relationships with federal agencies and its adherence to its own policies in light of highly political social media activity by NewsGuard employees.

First off, it helps to understand what NewsGuard is. The organization was set up in 2018 by two journalism moguls, Steven Brill and Gordon Crovitz, in an effort to combat the rise of disinformation and nonsense peddling online. Its basic product is rating news websites to give a score indicating how credible and reliable they are.

And, let me be upfront: I’m not a fan of NewsGuard’s methodology, which I think isn’t particularly useful for doing what they’re trying to do. It’s formulaic in a somewhat arcane way, which enables terrible news sites to get rated well, while dinging (especially smaller, newer) publications that don’t check off all the boxes NewsGuard demands.

But, they’re allowed to do whatever they want. They are expressing their own First Amendment-protected opinion. And that’s a good thing. People don’t have to believe NewsGuard’s rankings (and my personal opinion is that everyone should take them with a large grain of salt). But it’s still their opinion. It’s their speech.

However, NewsGuard has been singled out as one of the enemies of free speech in the fantasy-industrial-complex narrative making the rounds these days. This is because some of the nuttier nonsense-peddling grifters out there have been rated poorly by NewsGuard, and that’s resulted in some advertisers deciding to pull advertising.

Somehow that is supposed to be a form of censorship. Of course, it’s not: it’s speech by a private party, which other private parties can listen to and potentially act on, exercising their own rights of association.

But, as the Comer “investigation” calls out, some US government agencies have worked with NewsGuard, most notably the Defense Department. A few years back, the DoD signed a contract with NewsGuard, under which NewsGuard would flag content it found online that it believed came from foreign influence campaigns. Basically, it’s the Defense Department contracting with some internet watchers to see if they spot anything the DoD should be aware of.

I have no idea if NewsGuard is any good at this, and frankly, I’d be surprised if the DoD actually got any value out of the deal. But, it’s got nothing to do with “censorship” of any kind. It’s still just more speech.

To date, Crovitz (who was formerly the publisher of the Wall Street Journal, so you’d think the GOP grifter class would realize he’s much closer to them, politically speaking) has tried defending NewsGuard by (1) inserting some facts into a discussion that will reject such facts and (2) stupidly insisting that his is the only “non-partisan” rating service, and the rest are all leftists.

“We look forward to clarifying the misunderstanding by the committee about our work for the Defense Department,” Crovitz said in a statement to The Hill. “Our work for the Pentagon has been solely related to hostile disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies.”

Crovitz, a former publisher of The Wall Street Journal, also touted NewsGuard as “the only apolitical service” that rates news outlets, saying, “the others are either digital platforms with their secret ratings or left-wing partisan advocacy groups.”

In some ways, this strategy of responding to the investigation kinda serves to explain why NewsGuard has always been somewhat useless. They bring fact-checking to a vibes fight. That doesn’t work.

If we’ve learned anything from the failures of media over the past decade, it is not that we had a lack of fact-checking or other “objective” ways of measuring news. It’s that people don’t want that. What we’ve discovered is that tons of people are in the market for the Confirmation Bias Times, and they’re going to lap up anything that confirms their priors and outright reject anything that challenges what they believe.

We’ve seen things like Stanford’s Internet Observatory try to respond to similar attacks by coming back with facts, only to have those facts distorted, twisted, and turned right back around to accuse them of even worse things. Crovitz and NewsGuard seem likely to go through the same nonsense wringer.

Because the whole point of this is that facts no longer matter to the modern GOP. If you bring facts that conflict with their feelings, they’re going to blame you for it and attack you.

Here, all that NewsGuard has done is add their opinions about news sources. Some people trust them. Others don’t. That’s the marketplace of ideas in action.

And that’s what Comer is trying to suppress.
