Are tech companies correct to move against the alt-right?

After the tragic events in Charlottesville, the attitude of some technology companies toward tolerating hate speech seems to be starting to change. Domain registrar GoDaddy has rescinded the registration of the neo-Nazi site The Daily Stormer, which was then also rejected by Google Sites after it tried to find a home there: Google said the site violated its terms of service and cancelled the new domain name. The site has since moved to the dark web, promising to return at a later date.

Besides that, Facebook has deleted all links to an article insulting the victim of the car-ramming attack, while Reddit has closed forums glorifying racial hatred and Nazism. Discord, an instant messaging service widely used by these kinds of radical communities, has closed servers and expelled users who used it to promote Nazi ideology; WordPress has cancelled the domain of a neo-Nazi group linked to the Charlottesville killer; and finally, crowdfunding sites such as GoFundMe and Kickstarter have cancelled campaigns that sought to raise funds for the killer's defense.

This response suggests a major change in the attitude of tech companies toward hate speech and radical ideologies, and seems to be a move toward eliminating this type of content from the internet. Meanwhile, the American Civil Liberties Union (ACLU), while condemning the demonstrations and violence of white supremacists in Charlottesville, has defended the right of radicals to demonstrate under the First Amendment of the Constitution, in a tweet and an open letter, prompting criticism and accusations of ambivalence.

The position of the ACLU, Foreign Policy and pages like Techdirt, which defend the need to protect freedom of expression even when what is being expressed disgusts us, highlights the problems with maximalist attempts to eliminate hate speech: in the first place, it drives such discourse onto places like the dark web, where it undergoes greater radicalization while a large proportion of society mistakenly believes it has been eliminated. Secondly, arbitrarily deciding who should be silenced ends up creating ambiguities, or situations where we come to regret having made exceptions to the First Amendment, producing bigger problems than those such moves were originally intended to solve. It is important to find ways to limit the activities of radicals, but without impeding their freedom of expression: in other words, separating words from actions.

For example, while accepting that in a democracy we are supposed to be able to say what we like, however repugnant it may be to most people, that does not mean there are no consequences: such people may lose their jobs, be expelled from university or find themselves subject to other reprisals, as I discussed a few days ago and as XKCD so cleverly illustrates.

Nor should we forget the philosopher Karl Popper's paradox of tolerance, which states that if a society is tolerant without limit, its capacity for tolerance will eventually be destroyed by the intolerant, which implies that defending tolerance requires not tolerating intolerance.

Should society or the internet allow people to say anything they like? A growing number of governments monitor jihadist forums and have closed them down, and few are scandalized by this. Why are neo-Nazis or white supremacists different from radical jihadists, aside from (sadly) attracting more support in some Western societies?

Decades of banning anything to do with Nazism in some European countries seem not to have achieved much. Are technology companies right to take a less tolerant attitude toward hate speech? Can terms of service overrule the First Amendment? Is this a potentially dangerous response to President Trump's "us against them" discourse? In short, should there be limits on freedom of expression on the internet?