Wednesday, 30 May 2018
Why creating AI that has free will would be a huge mistake | Joanna Bryson
AI expert Joanna Bryson posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences, because AI has already proven that it can pick up negative human characteristics if those characteristics are in the data. Therefore, it's not crazy at all to think that AI could scan every YouTube comment in one afternoon and pick up all the negativity we've unloaded there. If it has already proven that it is not only capable of making the wrong decision but eventually will make the wrong decision when it comes to data mining and implementation, why even give it the same powers as us in the first place?

Read more at BigThink.com: https://ift.tt/2H4uRVw
Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://ift.tt/1qJMX5g
Twitter: https://twitter.com/bigthink

Joanna Bryson: First of all, there's the whole question of why we assume in the first place that we have obligations towards robots. We think that if something is intelligent, then that's its special sauce: that's why we have moral obligations towards it. And why do we think that? Because most of our moral obligations, the most important thing to us, concern each other. Basically, morality and ethics are the way we maintain human society, including by doing things like keeping the environment okay, you know, making it so we can live. So one of the ways we characterize ourselves is as intelligent, and when we then see something else we say, "Oh, it's more intelligent; well then maybe it needs even more protection." In AI we call that kind of reasoning heuristic reasoning: it's a good guess that will probably get you pretty far, but it isn't necessarily true. Again, how you define the term "intelligent" will vary. If by "intelligent" you mean a moral agent, you know, something that's responsible for its actions, well then, of course, intelligence implies moral agency.

When will we know for sure that we need to worry about robots?
Well, there are a lot of questions there, but consciousness is another one of those words. The word I like to use is "moral patient". It's a technical term the philosophers came up with, and it means exactly something that we are obliged to take care of. So now we can have this conversation. If you just mean "conscious means moral patient", then it's no great assumption to say, "well then, if it's conscious, we need to take care of it". But it's way more cool if you can ask, "Does consciousness necessitate moral patiency?" And then we can sit down and say, "well, it depends what you mean by consciousness." People use consciousness to mean a lot of different things.

So one of the things we did last year, which was pretty cool, made headlines, because we were replicating some psychology results about implicit bias. Actually the best headline is something like "Scientists Show That A.I. Is Sexist and Racist, and It's Our Fault," which is pretty accurate, because it really is about picking things up from our society. Anyway, the point was: here is an AI system that is so human-like that it's picked up our prejudices and whatever... and it's just vectors! It's not an ape. It's not going to take over the world. It's not going to do anything; it's just a representation, like a photograph.

We can't trust our intuitions about these things. We give things rights because that's the best way we can find to handle very complicated situations. And the things that we give rights to are basically people. Some people argue about animals, but technically, and again this depends on whose technical definition you use, rights are usually things that come with responsibilities and that you can defend in a court of law.
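To make the "it's just vectors" point concrete, here is a minimal sketch of the kind of comparison behind word-embedding bias tests like the one Bryson's team replicated. The four-dimensional vectors below are invented for illustration only; real embeddings (word2vec, GloVe) have hundreds of dimensions learned from large text corpora, and the actual study used a more careful statistical test (WEAT) over many words.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" invented for this sketch -- NOT real learned vectors.
vectors = {
    "nurse":    np.array([0.1, 0.9, 0.2, 0.1]),
    "engineer": np.array([0.9, 0.1, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.1, 0.2]),
    "he":       np.array([0.8, 0.2, 0.1, 0.2]),
}

def association(word, attr_a, attr_b):
    """Relative association of `word` with attr_a vs attr_b.
    Positive means the word sits closer to attr_a in vector space."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("nurse", "she", "he"))     # positive: leans toward "she"
print(association("engineer", "she", "he"))  # negative: leans toward "he"
```

Nothing here reasons or intends anything: the "bias" is just geometry, distances between points that reflect the co-occurrence statistics of the text the model was trained on, which is exactly the sense in which the prejudice comes from us.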