Does allowing a terrorist organization like ISIS to use Twitter make the social media giant liable for terrorist acts that were aided by the use of the platform? That was the question before the U.S. Supreme Court on Wednesday, as Twitter and the Department of Justice insisted that it does not.
The case was brought by the family of Nawras Alassaf, one of the 39 people killed in a shooting at the Reina nightclub in Istanbul, Turkey, on Jan. 1, 2017. ISIS claimed responsibility for the attack, and Alassaf's family claims that Twitter and other social media companies should be held responsible for failing to take proactive measures to remove ISIS accounts and posts that contributed to terrorism.
The oral arguments focused on the language of the Justice Against Sponsors of Terrorism Act (JASTA), which says that "liability may be asserted as to any person who aids and abets, by knowingly providing substantial assistance" to a person who commits an act of international terrorism. The key words that the justices and attorneys took a magnifying glass to were "knowingly" and "substantial."
On the element of substantiality, Justice Sonia Sotomayor boiled it down: "You knew that ISIS was using your platform. But on substantiality there’s a question of how much it helped ISIS, which is different from how much you helped them."
As for knowledge, Twitter attorney Seth Waxman appeared to agree with Justice Clarence Thomas that a social media company does not necessarily have to know what or where a specific terror attack would be, just that it would "have to have a general awareness" that it was "assisting in overall illegal or tortious activities."
Waxman also argued that it is not enough to know that a terrorist organization is using Twitter, because even a terrorist organization can engage in non-criminal activities. They would have to know, he argued, about specific accounts and posts that were contributing to terrorism.
He also argued that when it comes to substantiality, it is not enough that Twitter simply provided the same services it provides to everyone else.
"As a matter of law, a court should conclude …that the failure to do more to remove content in the context of a service that is generally and widely provided to anybody who complies with the policies … does not amount to the knowing provision of substantial assistance," Waxman said.
In contrast, he offered a hypothetical in which Twitter could be found liable because it had more specific knowledge and failed to act.
"If the police chief in Istanbul came to Twitter and said, ‘Look, we’ve been following three accounts, and these people appear to be planning some sort of terrorist act,’ and Twitter basically said, ‘You know, people do lots of things, we’re not going to take these things down, we’re not going to look into it,’ there we would have fairly assumed culpable knowledge that there were in fact accounts that they knew about that were assertedly plausibly being used to do this," Waxman said.
Testing Waxman's argument that Twitter should not be liable absent specific knowledge, Justice Elena Kagan posed a hypothetical in which Twitter knew that terrorist organizations were recruiting members and enhancing their activities through social media, yet decided not to have any policies against it and not to remove any of it. Even in that case, Waxman said, the assistance would not be substantial enough to fall under the statute.
Justice Amy Coney Barrett challenged Waxman's assertion.
"You know ISIS is using it, you know ISIS is going to be doing bad things, you know ISIS is going to be committing acts of terrorism," Barrett said, adding that aiding ISIS "is aiding the commission of particular acts in the future."
"How specific must the knowledge be?" she questioned.
U.S. Deputy Solicitor General Edwin Kneedler took a similar position to Waxman, asserting that JASTA is not broad enough to cover this case. He similarly argued that Twitter engaging in its regular course of business should not be found to be giving substantial assistance, but that this would be different if they were notified that a specific account was about to do something.
On the other side of the case, attorney Eric Schnapper argued that a business cannot be expected to know about specific acts or to anticipate that a random user might engage in terror, but that if a known terrorist like Osama bin Laden wants to use something like a satellite phone that is known to be useful for terrorism, that would be enough. Schnapper went so far as to say that the item would not even have to be used for terror.
Barrett appeared skeptical.
"You couldn't just say he sold him a cell phone and have that be enough," Barrett said.
Schnapper agreed it would be better to at least allege that the item was used somehow in connection with terror efforts.
The arguments took place a day after a similar case over whether Google and YouTube could be held accountable for videos posted by ISIS. That case turned on the broad protection of Section 230 of the Communications Decency Act, a much-discussed statute that has shielded social media companies from liability for content published by users.
Both cases are expected to be decided toward the end of the Court's term.