CHINA NOW USING AI TO PROSECUTE THOUGHT CRIMINALS

This is an urgent warning to all: the Chinese government has begun using Artificial Intelligence to punish people for thought crimes. Yes, this sounds unbelievable – but it's true! And if we don't take action now, our rights and freedoms could be in jeopardy. So please spread the word and help us fight back against this oppressive regime before it’s too late!

The Chinese government's use of AI to prosecute individuals for 'dissent' against the state should be a cause for alarm. They claim their artificial intelligence has spent more than five years learning how to convict criminals in court and boast an impressive success rate of 97%, yet a closer look at the specifics makes it clear why this figure may not be accurate. The AI studied only 17,000 cases and relied on 1,000 human-generated case-description texts as evidence; even for deep-learning artificial intelligence, that amount would simply not suffice. Be warned: these worrying attempts to use technology to silence political opposition should send chills down your spine!

How can one trust an AI prosecutor that claims to be 97% accurate? That means 3% of those convicted are innocent. Do you want to be one of that 3%? Neither do I. AI has not made the justice system any more just. It is still based on an algorithm developed by a person or an agency, and, unsurprisingly, this artificial intelligence carries just as much bias as its human counterpart, whether intentional or unintentional. No trial could possibly remain fair with such influence at play.
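To make that 3% concrete, here is a minimal back-of-the-envelope sketch in Python. The caseload figures are hypothetical placeholders chosen only to illustrate the arithmetic of a 3% error rate; they are not official statistics.

```python
# Rough illustration of what a claimed 97% accuracy implies at scale.
# The caseload figures below are hypothetical, not official statistics.

CLAIMED_ACCURACY = 0.97
ERROR_RATE = 1 - CLAIMED_ACCURACY  # share of judgments expected to be wrong

for annual_cases in (10_000, 100_000, 1_000_000):  # hypothetical caseloads
    expected_wrongful = annual_cases * ERROR_RATE
    print(f"{annual_cases:>9,} cases/year -> roughly {expected_wrongful:,.0f} wrongful outcomes")
```

Even at the claimed accuracy, the number of people on the wrong side of that 3% grows in direct proportion to how many cases the system handles.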

What about the remaining 3%? What if newly discovered exonerating evidence becomes available, but the AI algorithm ignores it or fails to update because a hacker or a virus like Stuxnet has exploited a vulnerability? Furthermore, who will be held accountable for these mistakes? How do you deactivate AI-driven systems after they have gone awry, and how do you punish an artificial intelligence construct when it fails us? These are all questions we need answered before we commit too heavily to any form of digital automation; a lapse in judgment could lead to catastrophic consequences. As many have rightly said: hope it's not you!

Recognize that all kinds of surveillance technologies are used to track citizens in democratic and authoritarian countries alike. Before the COVID-19 pandemic, I said a tracking app would eventually come to democracies like Great Britain, Australia, and parts of the United States. Now it's here, and the social credit score is an inevitable reality. Take action now; make your voice heard in this debate before its consequences take you by surprise!

Imagine what happens after a spirited, engaging political discussion with a friend. You grab your morning coffee, and as you are about to head out the door, the electricity cuts out, but not before locking you in. Your phone blares an alert with a voice that commands: 'Your conversation has been classified as thought-crime under social credit score standards; you have no choice but to accept the AI prosecutor's sentence of death by home imprisonment.'

Trapped in your own home courtesy of the Wi-Fi locks, with just two days of food left and no way to communicate, you are sentenced to a premature death within a few weeks. What an amazing life, right? In under two minutes you have been arrested, tried, and found guilty, all from the comfort of your home, now converted into a prison, because a surveillance system this pervasive grants not even your home an exemption.

We need to work together to prevent this potentially dangerous future. I warned readers in my book, Discredited Citizen, that an AI justice system might come into play, and many people said it was impossible. Now look at us: we are already here! The possibilities of Artificial Intelligence technology may be exciting, but they can also be concerning if not managed properly. Let's spread awareness about its implications so that our society doesn't succumb through negligence or lack of knowledge; we all have a role to play in creating a safe environment for ourselves and for those around us who will ultimately suffer from any mistakes made along the way. Thank you for your attention to this matter; hopefully, Skynet won't take me before my next blog post arrives!

 