SCOTUS Upholds TikTok Ban: Read It For Yourself


22 Responses

  1. Jaybird
    says:

    We are looking at the template for the EU banning Twitter.

    That said, the EU will have to crack down a lot harder on its citizens to get them to stop using it than the US will. Americans will just jump to RedBook and watch as China bans *THEM*.

    • InMD in reply to Jaybird
      says:

      Europe has no 1A. They already prosecute people for saying mean things on Facebook, making fun of foreign dictators, or even just talking about difficult social and political problems in ways the authorities find upsetting.

      • PD Shaw in reply to InMD
        says:

        Yeah, the right to free expression “carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.” (Article 10 of the European Convention on Human Rights)

        Of course, maybe some of those conditions apply to interpretations of the U.S. Constitution, but certainly the U.S. starts with a strong presumption against government regulation of speech.

  2. Slade the Leveller
    says:

    “National security”

    From the same folks who brought us United States v. Reynolds.

  3. InMD
    says:

    I am not one of our con law scholars but I’m surprised anyone saw a possibility of this going any other way. I’m not even sure I understand the 1A case against the law to the extent we’re talking about the big precedents. And I’m one of those dreaded near-absolutists on free speech.

    • Marchmaine in reply to InMD
      says:

      Agreed and true.

      It is indisputably the case that we should scramble our youth’s brains with proper American algorithms, and I’m confident that Trump will find just the right company to do it.

      I’m wondering if this becomes one of those things where we all go, “whelp, that was brilliant of Biden to force Trump to either shut it down or own the transfer” … and six months later we shake our heads as TwitterTok takes off.

      • DavidTC in reply to Marchmaine
        says:

        As far as I’m aware, this is the first time the law and the courts have acknowledged that social media algorithms exist and do things, instead of treating social media like some sort of magical box where users post and read other posts and everything is entirely user-driven.

        This is, of course, true, but it has a bunch of interesting implications if we are acknowledging it under the law. Here’s the relevant part of Section 230 of the CDA:

        (c) Protection for “Good Samaritan” blocking and screening of offensive material
        (1) Treatment of publisher or speaker – No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
        (2) Civil liability – No provider or user of an interactive computer service shall be held liable on account of—
        (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
        (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

        You may notice something in #2. It says that restricting access isn’t something you can sue over. But it doesn’t say a damn thing about promoting access. That is a perfectly valid thing to sue about… the fact that you were demoted below the people who were promoted. (And not for any of the valid reasons for restricting availability, even assuming demotion would count as ‘restricting availability’.)

        In other words, you can sue over the algorithm if you don’t like it, now that we finally admit, legally, that there is one and that it alters what content is presented.

        But wait, it’s a little worse. #1 isn’t the absolute statement it pretends to be. It doesn’t allow a newspaper’s website, for example, to escape liability for a libelous editorial piece about someone, and certainly not for one it paid for. But the site is not liable for the comments under that article.

        (Hey, does Twitter paying for posts that have high engagement amount to paying posters for content, and thus make it legally liable for that content? You know, a question for another time. We’re actually talking about social media in general, but Twitter is walking some _really_ stupid ground there.)

        Now, courts have generally been okay with publishers manipulating user-created information without losing immunity. But this is because, again, the courts really didn’t acknowledge such a thing as the algorithm, or that this would be some sort of deliberately coded result; what they looked at was generally very small-scale stuff like ‘promoted posts’, which were mostly just publishers promoting things they thought were useful. Not ‘the algorithm’, a system-wide decision machine.

        Here’s a Congressional Research Service report looking at some of these issues, https://crsreports.congress.gov/product/pdf/LSB/LSB10306, and of note is this at the end of page 2:

        Section 230(c)(1) immunity may bar a lawsuit if the claim would treat a provider or user of an “interactive computer service” as “the publisher or speaker” of another’s content. As courts have held, the converse of this rule is that a service provider can be liable for content that the service creates or helps develop. Generally, courts have said that a service’s ability to control the content that others post on its website is not enough, in and of itself, to make the service provider a content developer. In some circumstances, a service provider may retain immunity even when it makes small editorial changes to another’s content. Some courts have employed a “material contribution” test to determine if Section 230(c)(1) applies, holding that a service provider may be subject to liability if it “materially contribute[d] to the illegality” of the disputed content. Others have said that service providers may be liable if they “specifically encourage[] development of what is offensive about the content.”

        Read that, um, last line carefully. The algorithm does, pretty clearly, contribute to the sort of content on a platform; that’s literally one of its stated purposes. The algorithm doesn’t have to be _designed_ to contribute to, let’s say, harassment and threats against someone, but if it _contributes_ to that happening by the way it tries to drive ‘engagement’, it is not insane to try that legal theory out in court if you are, in fact, harassed. (Just like if someone had _manually_ done that by promoting user content to get someone harassed.)

        I don’t know if it would win, but it seems meaningful that we are now operating in a world where the laws and courts acknowledge that legal decisions can be based on the abstract ‘algorithm’ and what it does and who controls it. Someone is going to start looking at this and going ‘Wait, does this algorithm expose the company to legal liability?’… either the company itself or people ready to sue it.

    • PD Shaw in reply to InMD
      says:

      I watched the first half hour of arguments without much background, and it seemed to me like all of the Justices were skeptical of TikTok’s arguments (though they exhibited a lot of uncertainty about a number of issues). But I couldn’t keep watching, and the first article I saw later that day had a headline saying the Justices were skeptical of the TikTok ban. Maybe something happened during questioning of the U.S.?

      Anyway, the part that stood out to me was TikTok’s attorney giving an unsolicited hypothetical in which the Chinese government kidnaps Jeff Bezos and forces him to have the Washington Post publish stories to its liking. He said that even in that case, the government cannot do anything to stop publication, though he kind of said under his breath, maybe require a disclaimer. I thought that a pretty provocative hypothetical to offer, suggesting that the interpretation of the First Amendment they are asking the Court to accept is more extreme in its potential applications than the facts here.

      • InMD in reply to PD Shaw
        says:

        That’s not a direction I would have gone in either.

        As I understand the act of Congress, the law in question is better understood as a matter of forced divestiture of a business, not a restriction on speech. An interpretation of the 1A that prohibits Congress from passing a law of that nature would be unprecedented indeed.

    • Jaybird in reply to InMD
      says:

      What the hell is “intermediate scrutiny”?

      • InMD in reply to Jaybird
        says:

        Spicier than rational basis but more mild than strict scrutiny.

      • CJColucci in reply to Jaybird
        says:

        That’s an excellent question. The prevailing jargon is that there are three, or maybe three and a half, “levels of scrutiny” for assessing the constitutionality of legislation, usually applied in First Amendment or equal protection cases.

        First, there is “strict scrutiny,” which applies to explicitly content-based or viewpoint-based restrictions on speech (there are subtle differences between the two that aren’t worth getting into) or explicitly race-based classifications. Strict scrutiny requires that a restriction advance a “compelling government interest” (which can’t, in free speech cases, be based on the desire that people not come to accept the viewpoint being suppressed — so preventing people from coming to believe in communism doesn’t count) and that it be the “least restrictive means” of advancing that interest. (This is a big issue in the porn age-verification case: is there another way to keep porn away from the kiddies without making it too hard for adults to get it?) If you apply strict scrutiny, you usually overturn things. The cliché is that such scrutiny is “strict in theory but fatal in fact.”

        Second is “rational basis” scrutiny, which applies to almost every other legislative classification (and all legislation classifies). This is a very forgiving standard. All you need is a “legitimate state interest,” and if a rational legislator could have thought the measure would advance that interest (even if no actual legislator actually relied on the rationale offered in court), then it stands. So a ban on truck loudspeakers survives a First Amendment challenge because cutting down on noise is a legitimate interest and banning people from blaring out messages of any type (if it banned only things like “Vote for Smith” you’d have a harder question) is a rational measure even if there are better ones available. This doesn’t sound like much, and it isn’t, but that’s not a bug, it’s a feature. If we applied anything stronger than rational basis scrutiny to most government action, nothing would ever get done and lawyers would be the richest people around until the economy crashed. As for the possible half level: there is a rarely applied test called “rational basis with teeth,” where the court looks a little harder at the offered justification. Courts seem to apply it when there is some lurking issue of whether the legislation is an ordinary mistake the government is entitled to make or whether it might be the product of some improper animus. (It was applied in a case involving the location of group homes because the court thought it wasn’t merely zoning, but something to do with animus toward the mentally ill.)

        That brings us to the third (or third and a half) level, “intermediate scrutiny.” That requires an “important state interest,” and the means chosen to advance it must be “substantially related,” not merely rationally related, to advancing that interest. (It must also be, or at least seem to be, the actual purpose for the choice, not merely a purpose the legislature could rationally be supposed to have had.) It first showed up, if I recall correctly, in sex discrimination cases. Where it applies, you can’t fully second-guess the government; you have to give it some slack. But you can hold its feet to the fire more than you do under rational basis review.

        So what is “intermediate scrutiny”? Hard to say beyond “I know it when I see it.” That’s likely why Justice Stevens thought the whole “levels of scrutiny” framework was just a clumsy attempt to formalize what was, in actual practice, pretty sensible if poorly articulated judgment. (Justice Marshall advocated a “sliding scale” of scrutiny, a clumsy compromise between the unduly formal three-ish-tier test and “I know it when I see it.”)

        I think this is a correct answer to your question. I don’t claim that it is satisfying. Or that it should be.

        • InMD in reply to CJColucci
          says:

          My answer was better in that it doubles as criteria for selecting fried chicken.

          IIRC there is also some scholarly debate as to whether in practice the scrutiny applied wasn’t merely a means of guaranteeing the desired outcome.

          But I was never great when it came to the metaphysics around constitutional questions.

        • Jaybird in reply to CJColucci
          says:

          So if they had used strict scrutiny, TikTok would probably still be allowed (or, at least, it wouldn’t have been a 9-0 decision), but under intermediate scrutiny, it’s perfectly reasonable to block it in the US?

          • InMD in reply to Jaybird
            says:

            It’s impossible to know the outcome but I would predict that the national security component would give the law a fighting chance of passing even strict scrutiny. Maybe not unanimously but who knows?

            There’s also a bigger picture where SCOTUS has historically been much more hesitant about directly taking on Congress, particularly where there isn’t some sort of question about construction or intent.

  4. Jaybird
    says:

    TikTok is now unavailable in the US.

  5. Brandon Berg
    says:

    If Trump issues an executive order postponing the ban for 90 days, which he only has the authority to do if a sale is already pending, who has standing to challenge that? Just Congress? Would competing social media services also have standing?
