Unless you have been living under a rock, you are probably aware of the warnings AI experts keep issuing. It has been claimed that once humans develop AI fully, it will redesign itself and grow at an exponential rate. There is no denying that we know nothing about the future yet, but the AI we have designed and coded to date is not mature enough to do so. We are at the initial stage, teaching machines to take simple decisions by themselves.
We believe that by doing so, we can reduce human involvement in simple decisions, such as judging who gets a loan, who may drive a car, or who receives the next raise, or making observations that are too fast for humans. But researchers are struggling with one question: should machines be taught morality or not?
Why Is This Such a Hard Question to Answer?
It is an uphill task because real-life decisions are far more complicated than any prototype. For instance, how will a machine judge fairness in a given circumstance unless it understands the entire scenario? We human beings can draw conclusions after looking at the different facets of a situation, but machines do not act this way.
So, technically, it is not yet possible to instill humanity in our machines. The reason is that we are far from giving them the sense of conscience and the gut feeling on which we rely most of the time while making decisions. Yes, these machines are far better than us at playing poker, handling automation, and doing repetitive calculations, but they still need human supervision. While the never-ending debate over the fate of artificial intelligence goes on, we can focus on the following guidelines instead:
Defining Ethical and Humane Behavior Explicitly
Until we can teach machines to react properly on their own, we need to provide explicit answers to specific questions, as in the sketch below. However, this would require a proper panel, because we are still not sure about our own ethics. Even if we set our differences aside, there are many questions for which we cannot agree on a sensible and widely acceptable answer.
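To make this concrete, here is a minimal sketch, assuming a hypothetical loan scenario, of what panel-approved explicit answers might look like in code. The rules, thresholds, and the `LoanApplicant` fields are all illustrative assumptions, not real policy:

```python
# A minimal, hypothetical sketch of "explicit answers" encoded as rules.
# The rule set and the loan scenario are illustrative assumptions,
# not the output of a real ethics panel.

from dataclasses import dataclass

@dataclass
class LoanApplicant:
    credit_score: int
    income: float
    requested_amount: float

def explicit_loan_rule(applicant: LoanApplicant) -> str:
    """Return a decision dictated by rules a human panel agreed on in advance.

    The machine does not weigh ethics itself; it only applies the
    pre-approved answers.
    """
    if applicant.credit_score < 500:
        return "deny"                      # the panel's explicit answer
    if applicant.requested_amount > applicant.income * 5:
        return "refer to human reviewer"   # too consequential to automate
    return "approve"

print(explicit_loan_rule(
    LoanApplicant(credit_score=620, income=40000, requested_amount=300000)
))
# -> "refer to human reviewer"
```

The point of the sketch is that the hard part is not the code but the panel: every branch is a moral judgment humans must settle first.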
Crowdsourcing Human Morality
Even if we resolve the conflicts that might arise, a machine should not be biased towards any one viewpoint. Depending on the circumstances, its decisions may change, which would again give rise to conflicts. But if it has access to the beliefs that large numbers of humans have put their faith in, it will be easier for it to draw an acceptable conclusion from a situation; a toy sketch of this idea follows.
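As a toy illustration of crowdsourcing morality, one could aggregate many human judgments on the same dilemma and follow the majority. The dilemma and the vote counts below are made-up assumptions, loosely inspired by projects like MIT's Moral Machine:

```python
# A toy sketch of crowdsourced morality: collect many human judgments
# on one dilemma and follow the majority verdict.

from collections import Counter

def crowd_verdict(votes: list[str]) -> str:
    """Return the option most respondents endorsed for a given dilemma."""
    tally = Counter(votes)
    option, count = tally.most_common(1)[0]
    print(f"{count}/{len(votes)} respondents chose '{option}'")
    return option

# Hypothetical survey: should an autonomous car prioritize
# passengers or pedestrians in an unavoidable collision?
votes = ["protect pedestrians"] * 62 + ["protect passengers"] * 38
decision = crowd_verdict(votes)  # -> "protect pedestrians"
```

Majority voting is only one aggregation choice; a real system would also have to decide whose votes count and how to handle dilemmas where the crowd is split nearly evenly.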
Make Systems More Transparent
At present, the neural networks that guide the actions of these machines are barely understandable. If we knew how the engineers taught them their ethical values, we could easily see who made a mistake, instead of blaming an algorithm that was too hard to understand. This would be highly useful for intelligent self-driving vehicles, as they could then learn from their mistakes and experiences; one simple form of transparency is sketched below.
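One simple, minimal sketch of transparency is an audit trail: log every automated decision together with the named rule and the inputs that produced it, so a human can trace a mistake back to its source. The rule names, thresholds, and driving scenario here are assumptions for illustration only:

```python
# A minimal sketch of decision logging for transparency. Every decision
# is recorded with its inputs and the rule that fired, so humans can
# audit the trail later. The scenario and rules are illustrative assumptions.

import json
from datetime import datetime, timezone

def decide_and_log(speed_kmh: float, obstacle_distance_m: float) -> str:
    """A toy self-driving rule that emits an auditable explanation."""
    if obstacle_distance_m < speed_kmh * 0.5:      # crude stopping-distance check
        action, rule = "brake", "obstacle-within-stopping-distance"
    else:
        action, rule = "maintain speed", "clear-road"

    # The audit record makes the decision traceable to a named rule,
    # so a reviewer blames (or fixes) the rule, not an opaque black box.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": {"speed_kmh": speed_kmh, "obstacle_distance_m": obstacle_distance_m},
        "rule_fired": rule,
        "action": action,
    }
    print(json.dumps(record))
    return action

decide_and_log(speed_kmh=60, obstacle_distance_m=20)  # -> "brake"
```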
Transparency is recommended because it is easier for humans to rely on machines that demonstrably behave ethically. Moreover, explicit guiding principles would lessen the burden on the artificial agents responsible for determining their own actions. Finally, we cannot ignore the fact that when machines and humans have to make decisions together, shared moral values can facilitate consensus and compromise.
To understand this better, let's take an example. Visualize an operating theatre in which a group of experts is struggling to make a decision, and now imagine a machine there to help them. Together, humans and machines become more capable than either alone, and we cannot wait for this to happen!
What do you think? Leave your comments in the section below and let us know!