Thursday, August 3, 2017

Those scary AI programs that can take control overnight

Facebook created AI bots to chat with humans. At the experimental stage, the bots were allowed to communicate with each other, with a reinforcement learning program providing measured feedback, so they were expected to develop good, human-like language over time.

But for some reason, apparently a bug in the reinforcement feedback, the bots developed a non-human-like, more logical but less expressive language, which looks like utterly harmless gibberish:

"i can i i everything else,"
"i i can i i i everything else,"
"Balls have zero to me to me to me to me to me to me to me to me to,"


The following is what some media agencies reported:

"FB detected AI developing their own language and closed down the system" [well, developing their language was the objective]
"this is a conspiracy, to take over humanity" [how? via FB chat screens?]

Facebook issued a media note saying that they did not shut down the division. The developer acknowledged that it must have been a bug in the learning algorithm and stated that he pulled the bots for bug fixing. Yet the conspiracy theorists make their noise loudly. Hope the biggest fools of all - Hollywood - won't pick it up from here.

To go with this, some philosopher with no understanding of AI has offered the famous paper-clip problem, which sounds like the dumbest idea I have ever heard.

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
— Nick Bostrom, 2003

AI programs are human-created, within human-set limitations. They are given human knowledge to do human work. They only deviate from human control in how they learn from that knowledge to do that work. They cannot rewrite their own programs overnight and gain control of nuclear weapons, or build robot armies by hacking into a lab. They [if we consider them as individuals] can at most do their job at its best, or otherwise worse.

Of course, there is Genetic Programming [a good friend reminded me of this in the discussion], which can write its own code, or rather recombine given code segments, to achieve better results. But it, too, does all of that within the universe humans define for it to work in. The AI has no intuition to take it out of that box, nor can it comprehend anything outside it. An AI system with genetic programming that you write to find the best stock-trading algorithm cannot secretly grow and gain control of US nuke subs. You cannot develop a general-purpose, all-human-problem-solving AI that can re-program itself. A toy sketch of the idea follows.
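For the curious, here is a minimal genetic programming sketch (my own toy example, not any production system): evolution can only recombine and mutate expressions built from the primitives we hand it, so the search never leaves that human-defined universe.

```python
import random

# Minimal genetic-programming sketch. The "universe" is the primitive set
# defined below; evolution can only recombine these pieces, so it can never
# invent an operation outside them.

PRIMITIVES = ['+', '-', '*']      # allowed operators
TERMINALS = ['x', '1', '2', '3']  # allowed leaves

def random_expr(depth=3):
    """Grow a random expression tree within the defined primitive set."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(PRIMITIVES),
            random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Recursively evaluate an expression tree at a given x."""
    if expr == 'x':
        return x
    if isinstance(expr, str):
        return int(expr)
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(expr):
    """Total error against a target function, here f(x) = x*x + 1."""
    return sum(abs(evaluate(expr, x) - (x * x + 1)) for x in range(-5, 6))

def mutate(expr):
    """Replace a random subtree with a fresh one -- still from the same universe."""
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr(2)
    op, a, b = expr
    if random.random() < 0.5:
        return (op, mutate(a), b)
    return (op, a, mutate(b))

# Simple evolutionary loop: keep the fittest, refill with mutants, repeat.
population = [random_expr() for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness)
    best = population[:10]
    population = best + [mutate(random.choice(best)) for _ in range(40)]

print(population[0], fitness(population[0]))
```

However long this runs, it can only ever produce arithmetic over x, 1, 2, and 3. It will never "decide" to open a network socket, because no such primitive exists in its world.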

Elon Musk only said that these programs should be regulated, not because they can secretly overgrow, but because they can be used by us to harm ourselves.

AI cannot harm humans on its own, but humans can use it for harmful activity. The following video shows one that still makes innocent gestures at humanity but could be developed to achieve horrendous goals. Regulation is needed against those kinds of human acts.

[Video: Atlas by Boston Dynamics]

1 comment:

  1. I hear you. This is good as long as the AI (engine) is specific (i.e. built for a limited task), albeit with a kill-switch. BTW, does intuition equal wisdom, which equals knowledge + experience? If so, a learning algo can "gain" intuition (with time). Again, it should be "local" to a domain to keep it safe from "stupid humans".
