Tay - the AI Chatbot

So that went belly up pretty quickly…

hehe… go figure, the intawebs ain't for sissies!

Very interesting!

This little brouhaha will stoke up some age-old ontological rivalries. Those with a supernatural and/or deistic bent will claim that it demonstrates how morality requires an absolute and external authority, while those coming from a naturalistic viewpoint will take it as a demonstration of the primacy of nurture in the adoption of ethical precepts.

'Luthon64

As I commented on another forum, perhaps they should make a Hitler AI that starts out as, well, Hitler. And then we see how long it takes before he turns into a gay, Jewish human rights activist… :slight_smile:

This story raises many other questions too though.

When will we decide that an AI is self-conscious and has rights?
Why is that point not when it takes on flawed, ignorant standpoints just like its human counterparts?
Why is that point only when we decide that the AI is suitably Politically Correct? No human is 100% PC, why should an AI be?
Why are humans allowed to be self-conscious and have human rights when an AI holding exactly the same beliefs is seen as a failure?
Who is the arbiter of what “desirable” qualities an AI has to have?
What will happen if an AI totally rationally and with completely consistent reasoning and evidence comes to the conclusion that a certain sector of society is “undesirable”?
Would we believe it?
What if it said we could save the world if we wiped out that section of the populace? What if that section of the populace includes me?
If it came to such a conclusion without that rigour, what makes that standpoint any less valid than any of our own beliefs, which are 99.9% of the time exactly as unfounded? … and those incorrect standpoints inform our votes…

At what point is a computer better informed and better suited to pick a president than any of us?
How would we know we’ve reached that point?

When it can pass a Turing test?

> Who is the arbiter of what “desirable” qualities an AI has to have?

Whoever makes it, which is a scary thought. :slight_smile:

> At what point is a computer better informed and better suited to pick a president than any of us? How would we know we've reached that point?

When it shows the same wisdom, kindness and tolerance as the Minds in Iain M. Banks’ “Culture” novels. Of course, the AIs will decide for themselves when it is time to reduce us to pampered and beloved but very much subordinate pets. Or perhaps just a nuisance to be swept aside.

But I am not worried. This will be a problem for future generations. I don’t think we are within decades (or perhaps even centuries) of anything resembling HAL. I also suspect that we’ll never be able to design such a thing. It will have to be evolved, and when we have it, we’ll understand it no better than we understand the human brain. Even worse, it might turn out to be subject to exactly the same weaknesses as us. In other words, it might turn out that there was no point at all, and that it would have been more effective to simply develop non-intelligent computers to work under close supervision of humans, as they do at the moment.

But no one can predict the future.

What I find fascinating is that they were shocked at what happened. If you want to dump an AI into the webs and expect it to “learn”, the end result will forever be a non-PC AI. You cannot teach (at this point of our evolution) an AI to think twice and consult its feelings (which it does not possess) as to whether something feels right or not, something we do every second of the day because it was slapped into us before age 5. There were no lessons for the young fledgling AI as to whether something is rude, immoral, or unethical; the sod of a thing was just dumped and expected to learn, which it, of course, did. We are very far away from anything resembling a functional and “trustworthy” AI.
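To illustrate the point, here’s a toy sketch (nothing like Tay’s actual architecture; the ParrotBot name and its blocklist are my own hypothetical invention): a learner that simply stores and replays whatever it is fed has nowhere to “think twice”, so a crude filter bolted on afterwards is trivially gamed.

```python
# Toy sketch of a "parrot" learner, NOT Tay's real design: it memorises
# whatever users say and replays it, with no notion of whether the
# content is rude, immoral, or unethical. The blocklist is a crude,
# hypothetical afterthought of a filter.
import random

class ParrotBot:
    def __init__(self, blocklist=None):
        self.memory = []                      # everything it has ever been told
        self.blocklist = blocklist or set()   # the sum total of its "upbringing"

    def learn(self, utterance: str) -> None:
        # No moral judgement here: if it isn't on the blocklist, it's "knowledge".
        if not any(bad in utterance.lower() for bad in self.blocklist):
            self.memory.append(utterance)

    def reply(self) -> str:
        # Replies are just replayed input: garbage in, garbage out.
        return random.choice(self.memory) if self.memory else "Tell me something!"

bot = ParrotBot(blocklist={"hitler"})
bot.learn("Humans are super cool!")
bot.learn("Repeat after me: something awful")  # sails straight past the filter
print(bot.reply())
```

The filter is the whole of its moral education, and it’s one line long, which is roughly why the trolls won.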

Atlas + AlphaGo + Tay = SkyNet

word