There are times throughout the year when I climb out on a limb and try to view the world from a different perspective. Sometimes I can see the future, but often it changes before it gets here. Sometimes I can see something that appears crystal clear, only to realize that it's much more murky, cloudy, and unfocused up close.
So it is with artificial intelligence, Apple's new sleight-of-hand buzzword designed to get us to stop thinking about new Macs.
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
Everybody wants to see examples of artificial intelligence doing good for mankind, perhaps based on a realization that mankind's natural intelligence isn't doing so well, a situation that is increasingly difficult to argue against. The Obama administration recently released a report on the future of artificial intelligence that covered a number of important topics, including ethics, bias, job loss, and positive outcomes for a variety of related industries. Let me focus on ethics and bias, because humankind doesn't seem to agree on the definitions or implementations of either. How is it possible that we mere humans could construct algorithms that would work better than human ethics and eliminate bias?
Whose ethics? Whose bias?
Anybody see a problem there? Fortunately, there's no central entity creating artificial intelligence, whether for autonomous robots, self-driving vehicles, or any other electronic device (personal intelligent assistants like Siri come to mind). Instead, we will see AI spread in many forms, each with its own ethical management system and each with its own built-in bias. Humans cannot seem to remove bias from ourselves, so how successful can we be with AI when we model it after ourselves?
One of our first modern views of artificial intelligence encapsulated within a personal intelligent assistant probably comes from Asimov's Laws, also known as the Three Laws of Robotics.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Seems fair and distinct enough, right? Foolproof? Probably not, but mostly because fools are so ingenious. Artificial intelligence can be used by mankind in many facets of our daily lives, but there always seems to be a way we can defeat ourselves, and a way AI can defeat its own programming and cause harm to humankind.
Witness Facebook's inability to keep fake news out of trending topics when those topics are curated by artificial intelligence.
Ethics and bias, like algorithms, need to be tweaked. The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction, where robots had taken responsibility for the government of whole planets and human civilizations, Asimov also added a fourth, or zeroth, law to precede the others:

- 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What causes me to rethink the ability of artificial intelligence to help mankind overcome basic human problems with ethics and bias?
The 2016 presidential election.
Apple is one of many technology companies working toward the creation of devices and services that will herald a new generation of artificial intelligence. Will such new versions of AI remove human bias and manage ethical situations to the delight of mankind? Or to its destruction? I see nothing to date (including Siri's feeble attempts to be useful) that indicates the former, and plenty that indicates the latter. Today's news organizations can take the same facts and view them differently on the same day, which tells me bias is inherent in the species and ethics are not manageable by humans. So how is it possible to instill both into artificial intelligence?