

Regulatory Policy in the United States (Part Ⅱ)

Oct 16, 2023


Internet pornography is a highly controversial topic in the United States. Since the 1990s, the public, lawmakers, and the courts have debated how to control online pornography. Opponents want online pornographic content restricted on moral grounds, while supporters argue that it is protected by the First Amendment as part of citizens' freedom of speech. From a purely legislative standpoint, browsing pornographic content on the Internet is not illegal in the United States. The U.S. Supreme Court did rule in 1957 that obscenity is "utterly without redeeming social importance" and is therefore not protected by the Constitution; however, because American courts protect free speech so strongly, the standards for judging whether a piece of material is "obscene" are strict. It should be noted, though, that while U.S. legislation is generally tolerant of online pornographic content, child pornography is a red line that must not be crossed. In 1990, the Supreme Court ruled that child pornography is not protected by the Constitution and that any viewing, possession, downloading, dissemination, or purchase of child pornography is illegal. In March 2016, police found 50,000 child pornography images, including images of children under the age of 10, on the computer of Mark Salling, a star of the American TV series "Glee"; he was prosecuted for receiving and possessing child pornography. In 2021, Anthony Gabriel Ortiz, a former member of the U.S. Navy, was found to have shared child pornography images through a messaging application; he was charged with possessing and distributing child pornography, denied bail, and detained pending trial.

3. Terrorist and Violent Extremist Content

With the rapid development of social media, more and more terrorists and violent extremists have begun using social media for propaganda, recruitment, and operations. Since the vast majority of major Internet platforms are American companies, platforms face growing pressure from the government, shareholders, civil society groups, the media, and the public to take measures against harmful content. In recent years, many parties have called on the U.S. government to pass legislation directly regulating online terrorist content. In practice, however, the government's actions have remained relatively restrained, and the social platforms themselves have become the backbone of the control of online terrorist content. There are several reasons. First, platforms are better placed than governments to moderate content on their own services. Second, terrorist content online is often an international problem beyond the jurisdiction of any single government or regulator, making it likely that non-U.S. companies would decline to cooperate with U.S. government action. Third, if the government forced platforms to act through fines, criminal penalties, antitrust measures, or revocation of the liability exemption for third-party platforms, it might achieve some effect, but its credibility with platform companies would be badly damaged and companies might refuse to cooperate. Moreover, any legislation would have to define expressions such as "terrorist content": vague definitions would spawn a flood of litigation, while even the clearest definition could not cover everything that counts as terrorist content. After long discussion and experimentation, the United States has now formed a complex private mediation mechanism involving both government and industry.

The U.S. government does not directly intervene in content moderation on platforms; it largely "outsources" this responsibility to the companies. On the one hand, although the government will not directly legislate against online terrorist content, it uses the threat of legislation to put pressure on social media companies; on the other hand, it encourages companies to take moderation measures against terrorist content on their platforms. Under pressure from the public and the U.S. government, major American social media platforms began accelerating their counter-terrorism efforts. In fact, the first pressure American Internet giants felt came from European governments: after multiple terrorist attacks in Europe in 2015 and 2016, Facebook, Twitter, and Google began removing terrorist content from their platforms in response. As more Internet giants joined in, combating terrorist content on platforms shifted from individual campaigns to industry cooperation. Facebook, Twitter, Google, and Microsoft jointly built an internal shared hash database storing the hashes of images and videos flagged as terrorist content on their respective platforms. Building on that shared hash database, the four companies announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT) on June 26, 2017, aiming to take tough measures against terrorist and violent extremist content appearing on their services. At its founding, GIFCT was simply an industry forum chaired by the four companies on a rotating basis. In 2019, GIFCT became an independent entity with its own CEO, operating board, and independent advisory board, and its membership grew to 19 Internet companies.
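The shared hash database described above can be sketched in a few lines. The example below is a simplified illustration, not GIFCT's actual system: real deployments use perceptual hashes (such as PDQ for images) that tolerate re-encoding and cropping, whereas a cryptographic hash like SHA-256 only catches exact byte-for-byte copies. All names and byte strings here are hypothetical.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hashes contributed by participating platforms (hypothetical values).
# Sharing only hashes lets companies flag known material without
# redistributing the files themselves.
shared_hash_db = {
    sha256_hex(b"known-flagged-image-bytes"),
    sha256_hex(b"known-flagged-video-bytes"),
}

def is_flagged(upload: bytes) -> bool:
    """Check an upload against the shared database by hash lookup."""
    return sha256_hex(upload) in shared_hash_db

print(is_flagged(b"known-flagged-image-bytes"))  # True: exact match
print(is_flagged(b"harmless-cat-photo-bytes"))   # False: not in the database
```

The key design point is that each platform can screen uploads against material flagged elsewhere while only fixed-length digests, never the content itself, cross company boundaries.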
In 2019, after a shooting at mosques in Christchurch, New Zealand, killed 51 people, many governments and Internet companies signed the Christchurch Call, urging technology companies and governments to work together to proactively combat terrorist content online. Industry self-regulation and joint corporate action have become the backbone of U.S. efforts to combat online terrorism.

4. Online Gambling

For a long time, the U.S. federal government treated online gambling as illegal. In 2006, the United States passed the Unlawful Internet Gambling Enforcement Act (UIGEA), which bars U.S. banks and credit card companies from processing credit card, check, electronic, and other payments for Internet gambling websites; violators are subject to penalties. The shift came in 2011, when the federal government ruled that only online sports betting was prohibited and that other forms of online gambling were not covered by the law, handing legislative power over online gambling to the states. As of 2021, 48 U.S. states had declared online gambling legal; the two that still prohibit it are Hawaii and Utah. Utah has a large Mormon population, and religious influence has led the state to ban any form of gambling; Hawaii does not support legalization because local residents believe online gambling would harm family relationships and atmosphere. Although the law does not prohibit gambling, the major platforms place their own restrictions on online gambling advertising. Google, for example, only allows state-run entities to advertise lottery products, and permits ads for horse racing, sports betting, and online casinos only in states where online gambling is legal. Advertisers are also prohibited from targeting users under the age of 21 or outside the state in which they are licensed, and must include warnings about the dangers of addictive and compulsive gambling, along with related help information, on landing pages or in creatives.

5. Hate Speech

Controlling online hate speech was first proposed by the European Union. In 2015, as the European refugee crisis escalated, online hate speech became a focus of debate between EU legislators and the Internet giants. In May 2016, the European Commission announced a partnership with Facebook, Twitter, YouTube, and Microsoft to control online hate speech, with the four companies promising to block and delete reported hate speech within 24 hours. EU member states have also successively enacted legislation against online hateful content. In the United States, online hate speech likewise grew more serious after Trump came to power in 2016. Unlike the European Union, however, the U.S. government rarely uses technical means or legislation to filter online content, owing to the strong protection of free speech under the U.S. Constitution and the immunity clause for Internet companies in Section 230 of the Communications Decency Act (under which platforms are not liable for content posted by their users). But this does not mean that American society exercises no control over online hate speech: corporate self-regulation and industry cooperation have become the main ways Internet companies combat it. Facebook, for example, defines hate speech in its Community Standards as direct speech attacks "that target protected characteristics rather than ideas or customs, including: ethnicity, race, national origin, disability, religious beliefs, caste, sexual orientation, gender, gender identity, and serious illness," and expressly prohibits such speech on the platform.
