Technology can be really great, but I think human beings are pretty special. Though not perfect, people have the capacity to learn, grow, make amends and heal. The idea that computers will soon be able to adapt and “learn” leaves me wondering if we are better off leaving well enough alone…
Last month, an episode of the new HBO show Silicon Valley saw a character carried off to a remote, deserted island by a self-driving car.
On the show Person of Interest, a machine that monitors virtually everything happening in Manhattan progressively learns to function autonomously, becoming more and more human-like in its decision-making.
And in the movie Her, the main character has a romantic relationship with the personal assistant (read: voice) in his mobile phone.
As Google gets closer and closer to an actual self-driving car, one has to wonder whether there will come a time when it is possible to completely replace human beings with technology.
And it’s starting to look like this might happen sooner rather than later.
The US government is currently working with researchers to program autonomous robots to be able to make moral decisions.
So, can these robots actually do a better job than humans?
One argument is that the robots can evaluate a wider variety of options when making moral decisions.
Some people point to the fact that there hasn’t been a single accident involving a self-driving car.
However, one can also argue that, while able to evaluate a broader range of options, the robots are still making decisions based on a programmed set of variables.
And the cars?
They only run effectively where the streets and terrain have already been mapped out!
Some tech experts contend that true Artificial Intelligence (AI) is at least 10 to 20 years away, since genuine robot “learning” takes a great deal of time, data collection and analysis.
But I think the even bigger question is this: do we really want these AI robots and machines to be so capable that we rely on them for everything?
What happens if they malfunction, as happened in the movie I, Robot?
Will it actually be possible for these artificial forms of intelligence to take over the world?
And if so, what are the potential downsides?
As always, let us know your thoughts in the Comments below…