Artificial intelligence is the stuff of nightmares and science fiction stories. What if it goes rogue and thinks we're useless? It's not as far-fetched as you might think.
Artificial intelligence can automate many of the repetitive tasks we do every day. It can even drive for us and recognize human faces.
The problem?
Artificial intelligence is only as unbiased as the humans who program it – and humans have biases. How can we ensure AI is programmed ethically and without bias?
People have repeatedly tried to program AI to take on lower-level tasks, with the goal of freeing us up to handle higher-level work.
Amazon tried this with its hiring process. The company fed resumes into an artificial intelligence algorithm, telling it which candidates had been hired successfully. The result: not only did the AI refuse to consider female job applicants, it also threw out any resume containing references to women.
There are more dangerous examples of biased programming in artificial intelligence. A 2019 study found that driverless cars were better at detecting pedestrians with lighter skin tones. The data provided to the algorithm contained three times as many light-skinned people as dark-skinned people, so the AI learned to detect lighter-skinned pedestrians quickly but struggled to identify darker-skinned ones.
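To make that mechanism concrete, here is a minimal, hypothetical sketch of one standard countermeasure: reweighting training examples so that a 3:1 group imbalance in the data does not translate directly into a 3:1 imbalance in what the model learns. The data below is entirely synthetic; this is an illustration of the technique, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(42)
n_light, n_dark = 300, 100                     # the 3:1 imbalance described above

# Synthetic stand-ins for image features and pedestrian/no-pedestrian labels.
X = rng.normal(size=(n_light + n_dark, 8))
y = rng.integers(0, 2, size=n_light + n_dark)  # 1 = pedestrian present
group = np.array(["light"] * n_light + ["dark"] * n_dark)

# Inverse-frequency weights: each underrepresented example counts ~3x as much,
# so both groups contribute equally to the training loss.
weights = compute_sample_weight("balanced", group)
print(weights[0], weights[-1])                 # ~0.67 for light, ~2.0 for dark

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=weights)
```

The point of the reweighting is simple: the minority group's examples must count for more in training, not less, or the model will optimize for whichever group dominates the data.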
There has also been a lot of talk lately about facial recognition software used by police departments. Some cities and states prohibit the practice, but places like Orlando, Florida, and Washington County, Oregon, have already started using the software.
It has many of the same problems as the pedestrian-detection software in self-driving vehicles: the programming is biased and often misidentifies dark-skinned people. When the ACLU matched photos of members of Congress against 25,000 mugshots, it found 28 false matches, 39% of which were people of color. Despite these known flaws, the technology scans police body camera footage as well as security footage.
Artificial intelligence has the potential to make our lives easier. If we can find a way to program AI to be ethical, we can actually use the technology to save lives. Driverless cars are estimated to be capable of saving us up to 250 million hours of free time and $234 billion in accident-related public costs, and of preventing 90% of road deaths. But that, of course, is only if they are programmed correctly.
There isn't even a consensus on how driverless cars should react in situations that could lead to death or injury. Only about three-quarters of people believe driverless cars should save as many lives as possible, and not everyone agrees that human life is more valuable than property or other considerations.
People almost unanimously believe that self-driving vehicles should prioritize saving children's lives, while the lives of criminals or animals rank lower. Yet very few people were actually willing to pay more for a car programmed to minimize overall harm.
As the old saying goes: garbage in, garbage out.
If we want artificial intelligence to be less biased, we need to understand the inner workings of human bias – and take the time to make sure those biases don't get translated into the data we use to train AI.
Training the AI to weigh darker skin tones more heavily, or to ignore gender entirely, could help make the algorithms less biased. Paying closer attention to the data fed into the system and monitoring the output for problems will be crucial steps moving forward. A subtle human bias can be multiplied many times over once it becomes part of an algorithm; by the time the AI is running on its own, it can be a serious problem.
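What "monitoring the output" can look like in practice is a per-group error audit. The sketch below compares false-positive rates across demographic groups, echoing the kind of disparity the ACLU found in face matching. Everything here – the group labels, the 5% gap threshold, the helper names – is an illustrative assumption, not an established standard.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Fraction of true negatives that the model incorrectly flagged as matches."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean() if negatives.any() else 0.0

def audit_by_group(y_true, y_pred, groups, max_gap=0.05):
    """Compare per-group false-positive rates and warn when they diverge."""
    rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"WARNING: false-positive-rate gap of {gap:.0%} across groups: {rates}")
    return rates

# Toy example: a face-match model whose false positives skew toward group "b".
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 1])
groups = np.array(["a", "a", "a", "b", "b", "b", "a", "b"])
audit_by_group(y_true, y_pred, groups)
```

Run routinely on a deployed system, an audit like this turns a subtle, invisible bias into a number someone has to answer for.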
If we want to take full advantage of AI, we need to do the up-front work to make sure it behaves ethically. Learn more about ethical artificial intelligence in the infographic below.
Are we ready for a world where machines can make their own decisions?
Source: Cybersecurity Degrees