Did you hear about the woman who sued an AI company because its screening tool kept her from getting an apartment?
Yeah, I read about that! SafeRent’s tenant screening tool gave Mary Louis a low score, and she was rejected even though she had a great rental history.
Wait, how does that even work? What does the AI check for?
SafeRent’s algorithm calculates a score for each applicant based on factors like credit history and debt. But it doesn’t explain how those factors are weighted or why someone gets approved or rejected.
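Just to make the opacity concrete, here’s a toy sketch of what a weighted scoring model like that could look like. To be clear, every factor name, weight, and cutoff below is a made-up assumption for illustration; SafeRent’s actual model isn’t public.

```python
# Hypothetical illustration of an opaque tenant-scoring model.
# Factor names, weights, and the cutoff are invented for this
# sketch; SafeRent's actual inputs and weighting are not public.

WEIGHTS = {
    "credit_score": 0.6,       # assumed heavy weight on credit
    "debt_to_income": -0.3,    # assumed penalty for debt load
    "eviction_records": -0.5,  # assumed penalty per record
}
APPROVAL_CUTOFF = 0.5  # assumed pass/fail threshold

def score_applicant(applicant: dict) -> float:
    """Collapse the factors into one unexplained number."""
    # Note what never enters the formula: years of on-time rent
    # payments, or the share of rent a housing voucher guarantees.
    return sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)

# An applicant with a low (normalized) credit score, some debt,
# and a spotless eviction record -- e.g. a reliable 17-year tenant.
applicant = {"credit_score": 0.4, "debt_to_income": 0.6, "eviction_records": 0.0}

score = score_applicant(applicant)
print("rejected" if score < APPROVAL_CUTOFF else "approved")  # -> rejected
```

And the applicant only ever sees the final yes or no; the weights, the cutoff, and everything the model ignores stay hidden.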
So the AI just decided she wasn’t good enough? That sounds unfair.
It really was. She had been a great tenant for 17 years, always paying rent on time. But because she had a low credit score and used a housing voucher, the AI flagged her as a risk.
What’s wrong with using a housing voucher? Isn’t that guaranteed rent for the landlord?
Exactly! Housing vouchers guarantee that the government pays at least part of the rent directly to the landlord. But the AI didn’t factor that guarantee in, and since studies show minority renters are more likely to use vouchers, penalizing voucher holders hits them hardest. That’s the core of the discrimination claim.
That’s so frustrating. Did she do anything about it?
She joined a class-action lawsuit with more than 400 other renters, claiming SafeRent’s algorithm discriminated against Black and Hispanic tenants. They sued under the Fair Housing Act and reached a settlement.
What happened in the settlement?
SafeRent paid $2.3 million and agreed not to use a scoring system on applicants with housing vouchers for five years. Any future scoring system also has to be validated by an independent fair housing organization first.
At least that’s something. But why are companies using these tools if they’re so flawed?
It’s cheaper for landlords and management companies to let AI handle applications. They can blame the computer for rejections and avoid dealing with applicants directly.
But what if someone gets rejected unfairly? Can they appeal?
That’s another problem. In most cases, people don’t even know an AI made the decision, and there’s no way to challenge it. It’s like hitting a wall.
So, is this just a housing issue, or does AI do this in other areas too?
It happens in other areas, like hiring, healthcare, and government aid. AI often makes decisions about jobs, medical care, or benefits, and it’s not always accurate or fair.
That’s kind of scary. What’s being done to fix this?
There aren’t many regulations yet, but some people, like Mary Louis and her lawyers, are suing to hold companies accountable. The government is trying to catch up, but it’s slow, and lawsuits take a lot of time and money.
So, AI is making life harder for people who already have it tough. That’s so wrong.
It is, but cases like this one could set an example. Hopefully, it pushes companies to make fairer systems and governments to create better laws to protect people.
Yeah, I hope so. AI should help people, not make their lives harder.