AI (Autonomous) Weapons: A Choice Between Ethics and Power/Control. Which Side Will Prevail?



  • AI (Autonomous) Weapons: a choice between ethics and power/control!

    Which side will prevail?

    The humanistic impulse, or the desire to control the world!

    http://observer.com/2015/07/22-deepmind-rearchers-among-hundreds-calling-to-ban-autonomous-weapons-in-open-letter/



  • Looking for input from scientific bloggers!

    http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-we-really-should-ban-autonomous-weapons


    Why We Really Should Ban Autonomous Weapons: A Response

    By Stuart Russell, Max Tegmark, and Toby Walsh

    Posted 3 Aug 2015, 0:25 GMT



    This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

    We welcome Evan Ackerman’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue and there are interesting arguments on both sides that need to be weighed up carefully. This process is well under way, and several hundred position papers have been written in the last few years by think tanks, arms control experts, and nation states. His article, written as a response to an open letter signed by over 2500 AI and robotics researchers, makes four main points:

    (1) Banning a weapons system is unlikely to succeed, so let’s not try.

    (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil.

    (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat?

    (4) What we really need, then, is a way of making autonomous armed robots ethical.

    Note that his first two arguments apply to any weapons system. Yet the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons; and even for arms such as chemical weapons, land mines, and cluster munitions where bans have been breached or not universally ratified, severe stigmatization has limited their use. We wonder if Ackerman supports those bans and, if so, why.


    Argument (2) amounts to the claim that as long as there are evil people, we need to make sure they are well armed with the latest technology; to prevent them from gaining access to the most effective means of killing people is to “blame the technology” for the evil inclinations of humans. We disagree. The purpose of preventing them from gaining access to the technology is to prevent them from killing large numbers of people. A treaty can be effective in this regard by stopping an arms race and preventing large-scale manufacturing of such weapons. Moreover, a treaty certainly does not apply to defensive anti-robot weapons, even if they operate in autonomous mode.

    Question (3) is, in our opinion, a rather irrelevant distraction from the more important question of whether to start an arms race. It is an interesting point that we discuss in the open letter, and it represents exactly the pro-weapon position espoused over the last several years by some participants in the debate. The current answer to this question is certainly no: AI systems are incapable of exercising the required judgment. The answer might eventually change, however, as AI technology improves. But is it actually “the real question,” as Ackerman asserts? We think not. His argument, like those of others before him, has an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities—numbers, times, locations, circumstances, victims—will be exactly those that would have occurred with human soldiers, had autonomous weapons been banned. This is rather like assuming that cruise missiles will only be used in exactly those settings where spears would have been used in the past. Obviously, the assumption is false. Autonomous weapons are completely different from human soldiers and would be used in completely different ways. As our open letter makes clear, the key issue is the likely consequences of an arms race—for example, the availability on the black market of mass quantities of low-cost, anti-personnel micro-robots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria. Autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.


    Which leads to Ackerman’s fourth point: his proposed alternative plan of making autonomous armed robots ethical. But what more specifically is this plan? To borrow a phrase from the movie Interstellar, in Ackerman’s world robots will always have their “humanitarian setting” at 100 percent. Yet he worries about enforcement of a ban in his first argument: how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place? Moreover, one cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules.

    One point on which we agree with Ackerman is that negotiating and implementing a ban will be hard. But as John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the future of humanity.

    Stuart Russell is a professor of computer science and director of the Center for Intelligent Systems at UC Berkeley, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach.” Max Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. Toby Walsh is a professor of AI at the University of New South Wales and NICTA, Australia, and president of the AI Access Foundation.



  • Curiosity saheb: First define, in its true and technical sense, what artificial intelligence is. This is one area in which I have some little knowledge. Or did you just surf around and find this article attractive because it mentions humanitarian acts and evil acts? There is no such thing for robots or AI automatic robot weapons. Artificial intelligence is a kind of software programmed into AI robots, and they act according to the data at the receiving end of the system. You cannot program humanity into any sort of computerized machinery. Humanity is judgment at the very spot of the incident, which only human biological brains can exercise. Don't base your judgment on such articles.

    Millions of articles have been written and debated on evolution, yet evolution rules supreme even in the modern science of DNA technology. Nuclear fission is spreading from developed countries to developing countries like Pakistan, Iran, India, and North Korea. No country would stop research work on the sciences, whether destructive or constructive, on the maxim of "if we do not work on it, another country surely will, and we will be inferior and vulnerable to them". It is as simple as that. These articles are just decoration pieces with no cogent appeal to what is going on in the world of the sciences. At most, such articles are mental luxuries, fit for debates over a cup of tea.
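
    To make the point above concrete, here is a minimal, purely hypothetical sketch in Python (every name, field, and threshold is invented for illustration and taken from no real system): whatever "ethics" an autonomous weapon has reduces to hard-coded predicates evaluated over whatever data its sensors deliver.

        # Hypothetical sketch: an autonomous weapon's "ethical rule" is just a
        # predicate over sensor data. All names and thresholds are invented.
        from dataclasses import dataclass

        @dataclass
        class SensorReading:
            is_armed: bool          # classifier output, not ground truth
            confidence: float       # classifier confidence, 0.0 to 1.0
            civilians_nearby: int   # count estimated from imagery

        def engagement_allowed(reading: SensorReading) -> bool:
            """Return True if the coded rules permit firing.

            The machine compares numbers against thresholds chosen in advance;
            context it was never programmed to represent (a surrender, a child
            holding a toy rifle) simply does not enter the computation.
            """
            return (
                reading.is_armed
                and reading.confidence > 0.9
                and reading.civilians_nearby == 0
            )

        # A high-confidence reading with no detected civilians passes the rule,
        # whether or not the classification reflects reality on the ground.
        print(engagement_allowed(SensorReading(True, 0.95, 0)))  # True

    The point the sketch illustrates: the "judgment" lives entirely in the thresholds and in the classifier that feeds them, both fixed before the weapon is ever deployed.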



  • "There is no such thing for robots or A1 automatic robot weapons."

    When automated cars are around the corner, autonomous weaponry is not too far away.