South African Skeptics

Tay - the AI Chatbot

Faerie · 7 · 4966

Offline Faerie

  • Hero Member
  • *****
    • Posts: 2128
    • Skeptical ability: +10/-2

Offline Mefiante

  • Defollyant Iconoclast
  • Hero Member
  • *****
    • Posts: 3796
    • Skeptical ability: +64/-9
  • In solidarity with rwenzori: Κοπρος φανεται
    • Me, according to johnno777
Very interesting!

This little brouhaha will stoke up some age-old ontological rivalries.  Those with a supernatural and/or deistic bent will claim that it demonstrates how morality requires an absolute and external authority while those coming from a naturalistic viewpoint will take it as a demonstration of the primacy of nurture in the adoption of ethical precepts.

'Luthon64
"Sensitive" people are now carefully examining the entire universe, trying to find something to be "offended" at. It won't stop until such time as the "offenders" learn to stop apologizing, and saying "freck off" instead. — brianvds, The ShoutBox Classics, 02/07/2018.


Offline brianvds

  • Hero Member
  • *****
    • Posts: 2072
    • Skeptical ability: +15/-0
    • Brian van der Spuy
As I commented on another forum, perhaps they should make a Hitler AI that starts out as, well, Hitler. And then we see how long it takes before he turns into a gay, Jewish human rights activist... :-)


Offline BoogieMonster

  • NP complete
  • Hero Member
  • *****
    • Posts: 3282
    • Skeptical ability: +19/-1
This story raises many other questions too though.

When will we decide that an AI is self-conscious and has rights?
Why is that point not when it takes on flawed, ignorant standpoints just like its human counterparts?
Why is that point only when we decide that the AI is suitably Politically Correct? No human is 100% PC, why should an AI be?
Why are humans allowed to be self-conscious and have human rights when an AI holding exactly the same beliefs is seen as a failure?
Who is the arbiter of what "desirable" qualities an AI has to have?
What will happen if an AI totally rationally and with completely consistent reasoning and evidence comes to the conclusion that a certain sector of society is "undesirable"?
Would we believe it?
What if it said we could save the world if we wiped out that section of the populace? What if that section of the populace includes me?
If it came to such a conclusion without that rigour, what makes that standpoint any less valid than our own beliefs, which are 99.9% of the time exactly as unfounded? ... and those incorrect standpoints inform our votes...

At what point is a computer better informed and more suitable to pick a president than any of us?
How would we know we've reached that point?
"Monkey killing monkey killing monkey over pieces of the ground, Silly monkeys, give them thumbs, they make a club and beat their brother down. How they survive, so misguided, is a mystery. Repugnant is a creature who would squander the ability to lift an eye to heaven, conscious of his fleeting time here" - Tool


Offline brianvds

  • Hero Member
  • *****
    • Posts: 2072
    • Skeptical ability: +15/-0
    • Brian van der Spuy
Quote
This story raises many other questions too though.

When will we decide that an AI is self-conscious and has rights?

When it can pass a Turing test?

Quote
Who is the arbiter of what "desirable" qualities an AI has to have?

Whoever makes it, which is a scary thought. :-)

Quote
At what point is a computer more well informed and suitable to pick a president than any of us?
How would we know we've reached that point?

When it shows the same wisdom, kindness and tolerance as the Minds in Iain M. Banks' "Culture" novels. Of course, the AIs will decide for themselves when it is time to reduce us to pampered and beloved but very much subordinate pets. Or perhaps just a nuisance to be swept aside.

But I am not worried. This will be a problem for future generations. I don't think we are within decades (or perhaps even centuries) of anything resembling HAL. I also suspect that we'll never be able to design such a thing. It will have to be evolved, and when we have it, we'll understand it no better than we understand the human brain. Even worse, it might turn out to be subject to exactly the same weaknesses as us. In other words, it might turn out that there was no point at all, and that it would have been more effective to simply develop non-intelligent computers to work under close supervision of humans, as they do at the moment.

But no one can predict the future.


Offline Faerie

  • Hero Member
  • *****
    • Posts: 2128
    • Skeptical ability: +10/-2
What I find fascinating is that they were shocked at what happened. If you want to dump an AI onto the web and expect it to "learn", the end result will forever be a non-PC AI. You cannot (at this point of our evolution) teach an AI to think twice and consult its feelings (which it does not possess) as to whether something feels right or not, something we do every second of the day because it was slapped into us before age five. There were no lessons for a young fledgling AI as to whether something is rude, immoral, or unethical; the sod of a thing was just dumped and expected to learn, which it, of course, did. We are very far away from anything resembling a functional and "trustworthy" AI.
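The mechanism behind that point can be caricatured in a few lines. This is not Tay's actual architecture (which Microsoft never published in detail); it is a hypothetical toy learner that simply stores whatever users feed it and samples its replies from that memory, with no filter or "feelings" check anywhere. If most of the input is hostile, most of the output will be too:

```python
import random

class NaiveLearningBot:
    """A toy 'repeat-after-me' learner: it memorises every phrase users
    feed it and replies by sampling from that memory. There is no check
    for rudeness, immorality, or ethics anywhere in the loop.
    (A deliberate caricature, not Tay's real design.)"""

    def __init__(self, seed=0):
        self.memory = []                  # everything it has ever been told
        self.rng = random.Random(seed)    # seeded for reproducibility

    def learn(self, utterance):
        # Everything goes in unfiltered -- the bot has no value judgement.
        self.memory.append(utterance)

    def reply(self):
        # The bot can only echo back what it was taught.
        return self.rng.choice(self.memory) if self.memory else "..."

bot = NaiveLearningBot()
for msg in ["be nice", "be awful", "be awful", "be awful"]:
    bot.learn(msg)
# With mostly hostile input, sampled replies are mostly hostile.
```

The point of the sketch: the bot's output distribution is just its input distribution. Nothing in the design can be "shocked" by the result, because the result is the design.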


Offline cyghost

  • Skeptically yours
  • Hero Member
  • *****
    • Posts: 1426
    • Skeptical ability: +12/-1
  • Carpe diem
Atlas + AlphaGo + Tay = SkyNet

word
Ei incumbit probatio, qui dicit, non qui negat; cum per rerum naturam factum negantis probatio nulla sit ["Proof lies on him who asserts, not on him who denies; since, by the nature of things, he who denies a fact cannot produce proof of it"]