Would You Trust AI with Your Life?

Peeter Meos, Data Science Lead
Katariina Roosipuu, Marketing Lead

New ethical questions inevitably arise from advances in technology, presenting many researchers and scientists with quandaries. These range from weighing the effects of drug trials on consenting subjects to trading free and open internet access for security and safety online.

Artificial intelligence has created an entirely new range of ethical questions that challenge the very fabric of our society and threaten to undermine or trivialise laws that have stood for hundreds, if not thousands, of years.

In technology we trust – artificial intelligence

The more we entrust key aspects of society and our daily lives to technology, the more we will rely on it to make critical decisions about everything from which track to play on the living room audio system to whom to save in a car crash. These two examples sit awkwardly together, but the most awkward thing is that both decisions rely on much the same AI process.

Specialists at Proekspert spend their days (and most likely nights) pondering these moral questions. As a company that champions AI technology we have a duty to examine the moral and ethical minefield that surrounds our work.

Who is responsible for the outcome? The creator programs the outcome, and no more. An AI decision is drawn from a bank of data, but some decisions and situations require far too much consideration for any human to program in advance.

AI conundrum: The trolley problem

The industry’s favourite AI conundrum is the “trolley problem.” In a crash, do you save the driver? Or the paying customer who bought the ticket? Is it acceptable for a product to decide to kill them both in order to save a child? Ethically, you should save the child, right? And how would the AI even know? There are too many factors at play: speed, intention, demographics, and the culturally defined values of each individual. Because of problems like these, the industry is going through a period of introspection. And so it should.

When engineering AI you have no social mechanism: you have only robotic laws. An artificially intelligent machine isn’t a legal entity (yet) and is therefore effectively outside the law. So where does the blame lie if a machine kills? With the programmer? With the owner? Or with the person killed? Decision mechanics in the AI world must process more data to become more accurate, working within a set of rules entered by a well-meaning creator. AI also creates its own rules, gleaned from its environment: intelligent learned behaviour spawned within a set of carefully derived parameters.
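
To make that division concrete, here is a minimal sketch, in Python, of a decision mechanism that combines the two kinds of rules described above: hard rules entered by the creator, and preferences learned from the environment. Every name and number in it is hypothetical, chosen purely for illustration.

```python
from collections import defaultdict

# Hard rules entered by the creator: actions the system may never take.
FORBIDDEN_ACTIONS = {"exceed_speed_limit", "ignore_obstacle"}

class LearnedPolicy:
    """Learns a preference score for each action from observed outcomes."""

    def __init__(self):
        self.scores = defaultdict(float)

    def observe(self, action, reward):
        # Crude learning: nudge the action's score toward the observed reward.
        self.scores[action] += 0.1 * (reward - self.scores[action])

    def choose(self, candidate_actions):
        # The creator's rules are applied first; learning never overrides them.
        allowed = [a for a in candidate_actions if a not in FORBIDDEN_ACTIONS]
        if not allowed:
            raise RuntimeError("No permissible action: escalate to a human.")
        return max(allowed, key=lambda a: self.scores[a])

policy = LearnedPolicy()
policy.observe("brake_gently", reward=1.0)
policy.observe("swerve_left", reward=0.2)
print(policy.choose(["brake_gently", "swerve_left", "exceed_speed_limit"]))
# Prints "brake_gently": a learned preference, inside the creator's rules.
```

Even in this toy form the division of responsibility is visible: the forbidden set belongs to the creator, the learned scores belong to the machine, and the tragic cases are precisely those where the allowed set turns out to be empty.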

Ultimate responsibility in artificial intelligence

The truth is that the creator is ultimately to blame, even if only by default. AI cannot make ethical decisions unless it is programmed to do so, and even if parameters or ethical constraints are laid down by teams of hundreds, it is near impossible to predict the decision that will be made in every single situation. The only things that can be guaranteed are the parameters within which the AI can work and, more importantly, the lines it cannot cross. But in a situation like our example above, where in a tragic event someone must die, the burden ultimately falls at the feet of those who lacked the in-the-moment knowledge and live information so essential to any human split-second decision.
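
A minimal sketch of what such a guarantee can look like, again in Python and with purely hypothetical limits: the creator fixes the envelope the system operates in, but not the decision made inside it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    """Limits the creator can actually guarantee (hypothetical values)."""
    max_steering_deg: float = 30.0
    max_braking_g: float = 0.8

def clamp(value, low, high):
    return max(low, min(high, value))

def apply_envelope(requested_steering, requested_braking, env=Envelope()):
    # Whatever the learned model requests, only bounded commands leave.
    return (
        clamp(requested_steering, -env.max_steering_deg, env.max_steering_deg),
        clamp(requested_braking, 0.0, env.max_braking_g),
    )

# The model may request anything; the guarantee holds only at the boundary.
print(apply_envelope(requested_steering=95.0, requested_braking=2.5))
# Prints (30.0, 0.8): the envelope, not the request, is what is guaranteed.
```

The design choice this illustrates is the one the paragraph above describes: the lines that cannot be crossed are enforceable, while everything inside them remains the machine’s call.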

It’s not all doom and gloom; far from it. We see AI as a tool to improve the human experience, not threaten it. The role of AI in eliminating waste, streamlining processes and handling simple yet monotonous decisions in an industrial setting, for example, is proving invaluable, and, more importantly, it carries little of the moral chaos that more headline-grabbing applications do.

Release from monotony

AI frees up time: it allows us to devote more of it to creativity, to be more productive and less constrained by routine. By releasing us from monotony, it lets us unlock our skills. It may automate our work, but instead of putting us out of a job it gives us the freedom to do a better job, making even routine work pleasant by taking away the boring part. We become more self-determined, more independent, and free to look for new solutions, set new standards and assume an optimised role within our work.

In this sense the ethical questions are less dramatic and more tractable, but still essential. As a person is given greater power through the use of AI, they need a heightened ethical code to follow. When people’s livelihoods are on the line, their education and personal development become the central ethical question, their value laid bare.

No matter how routine the role, such as a production line or repetitive process, there will always be a need for tough decisions, and for now these can only be taken by a person. It is when we accept this that the dilemmas surrounding more experimental applications of AI stand naked and exposed: their progress impossible to fathom within an ethical framework, resting on technology with emotional limitations that struggles with the simplest of human decisions. The advantages of AI are clear, but its limitations can’t be ignored, and we may need a seismic shift in how we humans value our own lives before significant progress can be made, though that kind of shift may signal the beginning of the end for us.

If you feel like discussing it further, feel free to contact us: peeter.meos@proekspert.ee or katariina.roosipuu@proekspert.ee

