Editor’s Note: This story has been updated with additional commentary.
DALLAS — When software engineer Bejoy Narayana was developing Bob.ai, an app to help automate the Dallas-Fort Worth area’s Section 8 voucher program, he stopped and wondered, “Could this system be used to help some people more than others?”
Bob.ai uses artificial intelligence, known as AI, and automation to help voucher holders find rental homes, landlords close deals, and housing authorities conduct inspections. The software and mobile app were launched in 2018 in partnership with the Dallas Housing Authority, which gave Narayana access to data on some 16,000 Section 8 voucher holders.
Artificial intelligence is used in a host of algorithms in medicine, banking and other major industries. But as it has proliferated, studies have shown that AI can be biased against people of color. In housing, AI has helped perpetuate segregation, redlining and other forms of racial discrimination against black families, who are disproportionately dependent on vouchers.
Narayana was concerned that Bob.ai would do the same, so he changed his app so that tenants could search for apartments using only their voucher number, without providing any other identifying information.
As an Indian immigrant overseeing a team made up largely of people of color, Narayana was particularly sensitive to the threat of racial prejudice. But lawmakers in a growing number of states don’t want to rely on the goodwill of AI developers. Instead, as AI is embraced by more industries and government agencies, they want to strengthen and update laws to guard against racially discriminatory algorithms, especially in the absence of federal rules.
Since 2019, more than 100 bills related to artificial intelligence and automated decision systems have been introduced in nearly two dozen states, according to the National Conference of State Legislatures. This year, lawmakers in at least 16 states have proposed creating panels to examine the impact of AI, promote public and private investment in AI, or address transparency and fairness in AI development.
A California bill would be the first to require developers to assess the privacy and security risks of their software, as well as assess the potential of their products to generate inaccurate, unfair, biased or discriminatory decisions. Under the proposed law, the California Department of Technology would have to approve software before it can be used in the public sector.
The bill, introduced by Assemblyman Ed Chau, a Democrat and chairman of the Privacy and Consumer Protection Committee, passed the California State Assembly earlier this month and was pending in the state Senate at the time of publication.
Chau said the bill would establish a clear system of accountability and provide transparency into AI design systems to minimize discriminatory effects on California residents, a key goal as the state works toward its pandemic recovery. Drafting a new policy “is particularly important as our government agencies seek to use [AI] systems to improve our quality of life,” he wrote in an emailed statement.
Vinhcent Le, an attorney at the Greenlining Institute, an advocacy group focused on racial economic justice, helped draft the California legislation. Le described algorithms such as Bob.ai as gatekeepers of opportunity that can either perpetuate segregation and redlining or help end it.
“It’s great that the Bob.ai developers have decided to omit a person’s name, but we can’t rely on small groups of people to make decisions that can essentially affect thousands of people,” said Le. “We need an agreed-upon way of auditing these systems to make sure they incorporate fairness measures in a way that doesn’t unfairly disadvantage people.”
According to an October report from the Massachusetts Institute of Technology, AI has often exacerbated racial bias in housing. A 2019 report from the University of California, Berkeley, showed that an AI-powered mortgage system charged black and Hispanic borrowers higher rates than white borrowers for the same loans.
In 2019, US Senator Cory Booker, a Democrat from New Jersey, introduced a bill like the one being considered in California, but it died in committee and was not reintroduced.
“Fifty years ago, my parents encountered a practice called ‘real estate steering,’ in which black couples were kept away from certain New Jersey neighborhoods. With the help of local attorneys and the backing of federal legislation, they prevailed,” Booker said in a press release introducing the bill.
“However, the discrimination my family faced in 1969 can be much harder to detect in 2019: homes you never know are for sale, job opportunities that never surface, and financing you never become aware of, all because of biased algorithms.”
Several states have struggled in recent years with problematic software.
In 2019, Facebook overhauled its ad targeting system to prevent discrimination in housing, credit and job postings as part of a settlement to resolve legal challenges filed by the National Fair Housing Alliance, the American Civil Liberties Union, Communications Workers of America and other advocacy groups.
In Michigan, an AI system that cost the state $47 million to build in 2013 falsely accused up to 40,000 people of unemployment insurance fraud, forcing some into bankruptcy, according to the Detroit Free Press.
In Pennsylvania, a child abuse prediction model unfairly targets low-income families because it relies on data that is only collected on families using public resources, according to Virginia Eubanks’ 2018 book “Automating Inequality.”
“Automated decision-making breaks the social safety net, criminalizes the poor, intensifies discrimination, and undermines our deepest national values,” Eubanks wrote. “And while the most comprehensive digital decision-making tools are tested in what might be called ‘low-rights environments,’ where there are low expectations of political accountability and transparency, systems designed first for the poor will eventually be used for everyone.”
The Sacramento Housing and Redevelopment Agency started using Bob.ai in March. Laila Darby, deputy director of the housing voucher program, said the agency vetted Bob.ai before using it to ensure it did not raise privacy and discrimination concerns.
Narayana said he was confident that Bob.ai would pass any state-mandated test for algorithmic discrimination.
“We are a company that fights discrimination and does everything we can to expand housing options for voucher holders,” Narayana said. “Auditing these systems is beneficial because discrimination and inequality are things everyone should be concerned about.”
Narayana worked as an engineer at IBM until he decided to start his own company with a mission to redesign government functions. He founded BoodsKapper in 2016 and began developing Bob.ai from a coworking space near the Dallas-Fort Worth airport.
Narayana’s creation was a huge success, in Dallas and beyond. The Dallas Housing Authority used Bob.ai to reduce the average wait time for an apartment inspection from 15 days to one. Since the launch of Bob.ai, Dallas and more than a dozen other housing agencies have added some 20,000 Section 8 homes from landlords who were not participating in the program due to long inspection times.
“We have partnered with [Narayana] to bring technology advancements to our workflows and automation so we can respond faster to our business partners, so they don’t see this as a missed opportunity in terms of working with the voucher program,” said Troy Broussard, Dallas Housing Authority CEO.
Marian Russo, executive director of the Patchogue Village Community Development Agency on Long Island, New York, said she hopes Bob.ai can help the agency reverse the area’s long history of redlining. The authority plans to start using Bob.ai to manage its 173 housing vouchers later this year.
“We’re one of the most segregated areas in the country,” Russo said of Long Island. “We have 25 housing authorities, so if we could just have one central place with all the landlords renting through the program and all the people looking for housing, that could help even out the housing problems on Long Island.”
US Representative Bill Foster, a Democrat from Illinois, has similar hopes for AI. During a May 7 hearing, members of the U.S. House Financial Services Committee’s Artificial Intelligence Task Force discussed how AI could expand lending, housing, and other opportunities. But they also warned that historical data fed into AI systems can reproduce racist or sexist patterns. Foster’s office did not respond to multiple requests for comment.
“The real promise of AI in this space is that it can eventually produce greater justice and fairness in ways that we ourselves might not have envisioned,” said Foster, chairman of the task force, during the hearing. “So we want to make sure that the biases of the analog world are not repeated in the world of AI and machine learning.”