So today, when I heard that 'hate speech' is being identified by an algorithm and a message sent to the 'hater' that such speech is not acceptable, I was concerned. Today's guest on National Public Radio's Marketplace program was Anna Bethke, Head of AI for Social Good at Intel. Her title alone is enough to make you stop and salute your Agenda 21 handbook. Ms. Bethke is working to enable AI to be used in real time to stop the use of language and terms considered unacceptable to whatever online community the user is part of. In this case the speech is written, but Ms. Bethke is also working on detecting 'hate speech' in spoken conversations. Intel is working with groups like Reddit and Twitter to test its AI bots.
Somehow these bots will know hate speech when they see or hear it. It's necessary to have AI perform this task, according to Ms. Bethke, because it's just too great a burden for human beings. The interviewer, as usual, doesn't ask the hard questions, such as: "How do you define hate speech? Can a speaker appeal the decision? Are there any limits on what sort of speech is tracked and commented on? Isn't there a political element to this? Can you see a possibility of 'climate change denial' or '9/11 truth' or 'Trump supporters' or 'Agenda 21 / Sustainable Development activists' being blocked from posting on or using the web?"
It seems that the Chinese government would love this kind of thing for its Social Credit Score. It would make it awfully easy to determine whether demerits were in order. Wonderful to force good behavior, or else. Who needs morality when fear works so much better and the message can be changed so easily?
On a daily basis we are reminded that the new public square, the internet, is not a public square at all but is a privately owned and controlled platform designed to record, trace, track, identify, absorb, retain, and manipulate all speech forever.