In the United States, as some related answers point out, Section 230 (47 U.S.C. § 230) protects online forum providers from civil liability (and from liability under inconsistent state criminal laws) for user content they host; it does not shield them from federal criminal law, which § 230(e)(1) expressly preserves.

Consider this scenario: a user posts content that many see as horribly illegal or objectionable, and for which the user would quite likely go behind bars. The forum admin receives a number of complaints but takes no action. Their position is: "We are impartial. We value free speech, and we are not the ones to judge whether this content is illegal and to apply censorship. If a court decides the content needs to be removed, we'll comply."

(Let's take copyright/DMCA out of the equation; there is a separate, well-defined takedown procedure for that.)

Is it really the case that the forum admin cannot be held liable?

Obtaining a court order to remove the content may take a while, during which it will remain online (the user who posted it won't remove it). Does that not weaken the admin's Section 230 protection to some extent? To elaborate further: the content may start causing apparent harm; for example, someone who reads it commits violence, or social/racial disharmony results.

If the admin can be held liable, how exactly would that get around Section 230?

P.S. I know that in many countries they will be held liable. This question is specifically about the United States.

  • I would add: only if they clearly know about it. My assumption would be that they can't be held responsible if they don't know about it (it's impossible to monitor all posts from unknown accounts on large social networks), but as soon as it becomes known (e.g. the post is discussed in the news), they have to act.
    – PMF
    Commented Feb 24 at 20:59
  • @Jen I am taking the circumstances of that question to an extreme to see whether that answer would still apply.
    – Greendrake
    Commented Feb 25 at 2:35

1 Answer

This question has not been definitively answered. In Gonzalez v. Google LLC, 598 U.S. ___ (2023), the Supreme Court of the United States granted certiorari to review the application of §230, but because it was able to resolve the case on other grounds, it declined to address that question.

It has been argued, and apparently accepted across nearly all federal circuits, that §230(c)(1) bars all claims against providers of "interactive computer services" based on the publication of third-party content.

Since 1996, ten circuit courts have considered §230's applicability to myriad claims and have reached a unanimous consensus that §230(c)(1) protects websites against claims challenging the result of decisions to "publish, withdraw, postpone, or alter content." See, e.g., Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997); Klayman v. Zuckerberg, 753 F.3d 1354, 1359 (D.C. Cir. 2014):

Other circuits agree, holding that similar conduct falls under Section 230's aegis. See, e.g., Zeran v. America Online, Inc., 129 F.3d 327, 330 (4th Cir.1997) (the Communications Decency Act protects against liability for the “exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content”); Green v. America Online, 318 F.3d 465, 471 (3d Cir.2003) (same); Universal Communication Systems, Inc. v. Lycos, Inc., 478 F.3d 413, 422 (1st Cir.2007) (same); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir.2008) (no liability under the Act for “decisions relating to the monitoring, screening, and deletion of content” by an interactive computer service provider) (quoting Green, 318 F.3d at 471); Roommates.com, 521 F.3d at 1170–1171 (“[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”)

Courts have read §230 to shield platforms from liability even where the platform (Danielle Keats Citron & Mary Anne Franks, "The Internet as Speech Machine and Other Myths Confounding Section 230 Reform" (2020) U. Chicago Legal Forum):

  • "knew about users' illegal activity, deliberately refused to remove it, and ensured that those users could not be identified";
  • "solicited users to engage in tortious and illegal activity"; and
  • "designed their sites to enhance the visibility of illegal activity while ensuring that the perpetrators could not be identified and caught."

Congress has amended or incorporated §230 twelve times, keenly aware of these circuit court precedents, and has not pushed back against the expansive protection as understood by the courts.

Until the Supreme Court of the United States weighs in, the best information we have about the scope of protection afforded by §230 is the interpretation of those circuit courts.
