September 14, 2018

Study finds robots can be “bigots” too: Are NON-human rights tribunals in our future?

David Menzies, Mission Specialist

Never before in the history of peoplekind have there been so many accusations of racism and sexism, but in what is being billed as a “shocking new study” published in Scientific Reports, we learn that even robots can develop similar prejudices all on their own.

It’s hard to imagine but even within the realm of the cold, calculated world of artificial intelligence, these human character flaws are present.

Perhaps the idea of AI mimicking even that human behaviour shouldn’t really come as a surprise since the whole point of it is to mirror the learning and processing components of a human brain.

It’s all so fascinating and makes me hope that the AI of the future won’t embrace that other awful human character flaw known as genocide!

But I can’t help but wonder if future human rights commissions and tribunals will expand their bailiwicks to include complaints against non-humans.

Hey, anything to grow the bureaucracy, right?

commented 2018-09-14 21:20:22 -0400
I’m sensing a robot theme lately, Menzies. In any event, it’s hard to see how we flawed humans could create a perfect electronic mind. If leftists are behind its creation I would expect it to have early onset dementia. These are people who believe there are 150 genders (or is it more?), can’t tell the difference between legal immigrants & illegal aliens & think you can change the temperature of the planet by sending money to the UN.
commented 2018-09-14 15:08:38 -0400
Xenophobia and such are merely primitive, tribal survival mechanisms. It’s natural.
But like always the readily programmable left twist things into unrecognizability, nonsense and destruction.
Now take that reality and mix it with Asimov’s THREE LAWS OF ROBOTICS and you’ve got a real mess…I think the worry is more likely incompetence and short-circuiting robots, than SkyNet!
“1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 – A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3 – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”