You may have the wrong idea about artificial intelligence, and it’s not your fault. In recent months, there have been many stories about the power and purported abilities of this technology, ranging from the sensationalist to the downright silly. AI experts and pioneers are not helping when they sign open letters calling for a halt to AI research and warning of an impending extinction-level event.
AI is more than chatbots and image generators, and it is not Skynet announcing that it’s going to kill everyone. In fact, artificial intelligence is not very smart. It has been known to hallucinate events and make bad decisions, and those failures have already done real harm to people.
Although these harms have many causes, most of them trace back to a long-standing problem: bias. AI systems like ChatGPT and the algorithms that recommend videos on YouTube are trained on enormous amounts of data. That data comes from people, many of whom are, sadly, biased, racist, and sexist.
For example, if you wanted to build a bot that decides who gets admitted to a university, you might feed it demographic data about the kinds of people who have earned degrees in the past. But if you did, you’d probably end up admitting mostly white men and turning away many people of color, because historically, colleges have rejected far more people of color than white applicants.
And this is not an exaggeration. We have seen it happen many times, each time in a different form. AI may have dominated the headlines only in recent months, but it has been shaping parts of our lives for years. Long before ChatGPT, AI programs were already being used to screen unemployment claims, decide who got housing, and even determine what kind of health care people received.
That context matters for understanding what this technology can and cannot realistically do. Without it, you are likely to buy into the AI hype, which is dangerous in its own right, because misinformation and false claims travel with it. Here are some of the biggest ways this technology has already become part of our lives.
Mortgage approvals

If you want to buy a house, formulas will probably have a say. Your FICO credit score, for example, is calculated by a formula and plays a major role in whether you get a loan of any kind.
But you may also have to pass an AI approval process. Fannie Mae and Freddie Mac introduced automated underwriting software in 1995. It was supposed to make approving or rejecting a mortgage faster and more efficient by using artificial intelligence to estimate the probability that a prospective borrower will default on their loan.
Although these systems were billed as color-blind, the outcomes say otherwise. In 2021, The Markup reported that US home-loan algorithms were 80% more likely to reject Black applicants, 70% more likely to reject Native American applicants, 50% more likely to reject Asian and Pacific Islander applicants, and 40% more likely to reject Latino applicants than similar white applicants.
In some places the disparities were even starker: in Chicago, Black applicants were 150% more likely to be rejected than white applicants, and in Waco, Texas, Latino applicants were 200% more likely to be rejected.
Jail and prison sentences
When we think of sentencing, or leniency, in a courtroom, we picture judges and lawyers. In reality, much of that work is handed off to algorithms that predict whether a defendant is likely to reoffend.
In 2016, ProPublica found that a popular risk-assessment algorithm used by judges falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants (45% vs. 23%), while white defendants were rated less likely to reoffend than they actually were. The result was a systematically skewed picture of recidivism risk.
Even now, that same tool is used in states including New York, California, Florida, and Wisconsin to assess how dangerous defendants are.
Hiring

As if job hunting weren’t frustrating enough, a biased HR bot may be the first thing to read your resume.
Several kinds of bots are used in hiring. HireVue, whose software is used by companies such as Hilton and Unilever, analyzes applicants’ facial expressions and voices during video interviews, scores them, and tells the employer how they compare with its existing workforce.
There are also AI tools that scan your resume for the right keywords, which means you can be rejected before a human in HR ever sees your cover letter. As with so many other AI applications, the result is that more applicants of color are rejected than similar white applicants.
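To make the keyword-screening idea concrete, here is a minimal, hypothetical sketch. The keywords, threshold, and function names are invented for illustration; real applicant-tracking systems are far more elaborate, but the core mechanism of filtering resumes before any human review looks roughly like this:

```python
import re

# Hypothetical job requirements an automated screener might look for.
REQUIRED_KEYWORDS = {"python", "sql", "agile"}

def passes_screen(resume_text: str, min_hits: int = 2) -> bool:
    """Return True if the resume mentions enough required keywords."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED_KEYWORDS & words) >= min_hits

# A resume phrased with the "right" words gets through...
print(passes_screen("Senior engineer: Python, SQL, ten years of experience"))  # True
# ...while an equally qualified one phrased differently is rejected unseen.
print(passes_screen("Self-taught developer who builds data pipelines"))  # False
```

The second candidate may be just as capable, but because their wording doesn’t match the keyword list, no human ever sees the application, which is exactly how blunt automated filters can quietly encode bias.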
Diagnosis and medical treatment
Hospitals and doctors’ offices have long used automated tools to help with diagnosis. Institutions like the Mayo Clinic have used AI for years to help detect conditions such as heart disease.
But wherever AI goes, bias follows, and medicine is no exception. A 2019 study published in Science found that an algorithm used to manage the health of patient populations routinely steered Black patients toward worse care than similar white patients, and that less money was spent on Black patients than on white patients with the same level of need.
With the rise of ChatGPT, and with health-tech startups trying to build diagnostic chatbots (to varying degrees of embarrassment), many experts worry that the failures we have already seen from chatbots will make these bias problems worse. That risk is compounded by medicine’s ugly history of scientific racism.
Social media algorithms

The most visible way AI touches your daily life may be through social media recommendation algorithms, which is probably how you found this article in the first place. These systems can surface your friend’s latest Instagram photo from their trip to Italy or your mother’s embarrassing Facebook status, but they can also promote radicalizing content on YouTube or push a far-right agenda on Twitter.
Bad actors have long found ways to exploit these algorithms to push their own political agendas. It happens constantly on Facebook, where large troll farms in countries like Albania and Nigeria spread disinformation in attempts to sway elections.
At their best, these systems help you find a fun new video to watch on YouTube or Netflix. At their worst, they try to convince you that vaccines are dangerous and that the 2020 election was stolen.
And that is the double edge of AI. These technologies have enormous potential to help people make decisions more easily and quickly. But when AI is weaponized by bad actors, abused by greedy corporations, and carelessly bolted onto historically racist and biased systems like incarceration, it does far more harm than good. You don’t need an AI to tell you that.