Algorithmic bias
How does artificial intelligence make decisions? Why does an algorithm written for a bank approve a loan for one client but refuse another? Is it only about income and credit history, or something else? Today, neural networks help hospital doctors read X-rays, suggest diagnoses, and predict the course of a patient's treatment. They compose music, generate images, and write texts. They predict traffic jams, the weather … and crimes. How can an algorithm predict a crime? Enthusiasts have compiled a collection of "terrible uses of artificial intelligence" from around the world; several stories from that collection follow.
Predictive algorithms used by police and courts are a recurring example of the conflict between technology and human rights. In Los Angeles, Atlanta and Philadelphia, algorithms analyze past crime data to suggest where crime may occur in the future. One of the popular programs is PredPol, an algorithm developed by the LAPD together with local universities. The program collects data about where and when crimes were committed and then maps hotspots, places where a crime is likely to happen next. Researchers at the nonprofit Human Rights Watch analyzed PredPol's output and found that the algorithm often recommended sending patrols to areas with predominantly Black residents. Meanwhile, areas with more white residents, where, in particular, drug-related crimes were more frequent, were left without patrols. Algorithms learn. If the police go to some area again and again and record a crime there, the algorithm remembers this. Next time, the program will first of all point to the place where a similar crime was already recorded, and it will pay no attention to an area that simply no one patrolled.
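The feedback loop described above can be illustrated with a toy simulation. This is only a sketch of the general mechanism, not PredPol's actual model: two hypothetical districts have the same true crime rate but different recorded histories, and crimes are only recorded where a patrol is present.

```python
import random

# Toy simulation of the feedback loop: patrols are sent where crimes were
# recorded, and crimes are only recorded where patrols are sent.
TRUE_CRIME_RATE = {"district_A": 0.3, "district_B": 0.3}  # equally risky in reality
recorded_crimes = {"district_A": 5, "district_B": 1}       # skewed starting history

def predict_hotspot(history):
    """Recommend patrolling the district with the most recorded crimes."""
    return max(history, key=history.get)

random.seed(0)
for day in range(200):
    patrolled = predict_hotspot(recorded_crimes)
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        recorded_crimes[patrolled] += 1   # the crime enters the dataset

print(recorded_crimes)
# The true crime rates are identical, yet the data now "proves" that
# district_A is far more dangerous, so it keeps getting all the patrols.
```

After a few hundred iterations, the initially small difference in the data becomes overwhelming, which is exactly the self-reinforcement that human rights researchers warn about.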
Human Rights Watch concluded that the program reinforces racial prejudice among American police officers. The American company PredPol has since been renamed Geolitica, and its program remains one of the most popular in the United States. It is worth noting that the human rights researchers reported on how the algorithms handled drug crimes, but, according to PredPol CEO Brian McDonald, the programs are not used to fight drug crime; they rely on data about crimes such as assault, robbery and car theft precisely to avoid the bias that Human Rights Watch found.
Most of the court and police algorithms in the collection of "bad technologies" are used in the United States and China (a Chinese example is the video surveillance of Uighurs, the population of the Xinjiang region, described in a Human Rights Watch report released in 2018).
Another example of algorithms in the penal system is COMPAS, which judges in Wisconsin use to predict the risk of recidivism. Its maker refuses to disclose the details of the proprietary system; only the final risk score is known. The authors of the collection believe the algorithm discriminates against Black people.
Why did IBM stop making facial recognition systems?
Many studies have found that programs in general are prone to bias against Black people. This applies not only to the examples above but also, for instance, to facial recognition algorithms, which are simply worse at recognizing people who are not white. As Detroit's chief of police admitted, these algorithms misidentify up to 96% of Black suspects, leading to false arrests. The "bad AI" collection includes a case in which Google's image recognition program labeled the faces of several Black people as gorillas. Amazon Rekognition misidentified the gender of dark-skinned women in 31% of cases; light-skinned women were mistakenly identified as men in only 7% of cases.
Facial recognition systems are one of the most controversial technologies. Every year, society and human rights organizations demand more insistently that such algorithms be abandoned. Last year, IBM decided to stop developing facial recognition technology. "It's time to talk about whether the country's law enforcement agencies should use facial recognition technology," the company explained.
Why did Amazon's program turn women down for jobs?
Discrimination against women in hiring was found in a program that for several years helped Amazon's HR department screen developer resumes. The algorithm was trained to select applicants whose resumes resembled those of employees already hired. It is important to understand that more than 90% of workers in the IT industry are men: the algorithm saw resumes mostly from men and, accordingly, selected men. The company eventually stopped using the program.
How did algorithms become racist? Why did they start discriminating against women in hiring? Artificial intelligence systems are only as good as the data they were trained on. The data is, for example, a map of places in Los Angeles where crimes have already occurred, or the resumes of developers who passed probation at a corporation. The computer cannot reason beyond that: it makes decisions based on the bulk of the data it has seen. The problem is compounded by the fact that the IT companies developing these algorithms employ few people from the social groups subject to discrimination. For example, only 3.1% of US IT workers are Black.
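A toy example, which assumes nothing about Amazon's real system, shows how a model inherits bias straight from historical decisions. Here a hypothetical screening model is trained on past hires in which equally experienced candidates whose resumes carried a "women's college" marker were rejected:

```python
from sklearn.linear_model import LogisticRegression

# Illustrative resume features: [years_of_experience, attended_womens_college]
X_train = [
    [5, 0], [6, 0], [4, 0], [7, 0], [8, 0],  # historical hires (male-coded resumes)
    [6, 1], [7, 1],                           # equally qualified women-coded resumes...
]
y_train = [1, 1, 1, 1, 1, 0, 0]               # ...that were historically rejected

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical experience:
print(model.predict_proba([[6, 0]])[0][1])  # male-coded resume: high hiring score
print(model.predict_proba([[6, 1]])[0][1])  # women-coded resume: low hiring score
```

The model has no notion of fairness; it simply learned that the "women's college" feature predicted rejection, because that is what the historical data showed.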
The ideal couple: he is older and higher-status. At least, that's what the algorithm thinks
Discrimination by algorithms is not something that happens only far away from us, in an American police station or an Amazon office. Chances are you have run into it too, for example if you have used Tinder. A couple of years ago, French journalist Judith Duportay published the results of an investigation into the dating app's secret algorithms. Duportay found that Tinder assigns users a hidden attractiveness index and uses it, not chance at all, to decide which nearby candidates to show. The algorithm estimated income level (by looking up information about users on other social networks) and intelligence (judged from the vocabulary a user employs in correspondence).
The attractiveness index for men and women was influenced by different characteristics: the algorithm raised the rating of men with high incomes and good education, while for educated, well-earning women it lowered the rating. The ideal couple, as the algorithm saw it, looked like this: the man older and of higher status, the woman younger and of lower status. "I'm sure no one at Tinder headquarters would deliberately say, 'Let's make a sexist app!' But I think the developers have coded their beliefs and values into its algorithm. I have almost no doubt that their sisters and girlfriends would also wholeheartedly wish to find a prominent, older and richer man, without wondering about the ideology behind that wish," Judith Duportay said in an interview published on colta.ru.
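Tinder's real scoring formula is secret, so the following is a deliberately crude, hypothetical sketch. Its only purpose is to show how developers' assumptions about what makes someone "attractive" can be written directly into the weights of a score, producing the asymmetry Duportay describes:

```python
# Hypothetical scoring function -- not Tinder's actual algorithm.
def attractiveness_score(gender: str, income: float, education_years: int,
                         vocabulary_richness: float) -> float:
    score = 50.0
    status = 0.001 * income + 2.0 * education_years + 10.0 * vocabulary_richness
    if gender == "male":
        score += status          # high status raises a man's rating...
    else:
        score -= 0.5 * status    # ...and lowers a woman's
    return score

print(attractiveness_score("male", income=90_000, education_years=17,
                           vocabulary_richness=0.8))
print(attractiveness_score("female", income=90_000, education_years=17,
                           vocabulary_richness=0.8))
# Identical inputs, very different scores: the bias lives in the weights
# the developers chose, not in the users' data.
```

No individual line of such code looks malicious, which is why, as Duportay suggests, developers can encode their values without ever deciding to "make a sexist app".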
What's wrong with the program that wrote a column for The Guardian?
One of the biggest scandals in the AI industry last year was the firing of researcher Timnit Gebru from Google. Google has a research group on the ethics of artificial intelligence, where Gebru and her colleagues worked on a paper about the ethics of large neural language models (such as BERT and GPT-3). Under company rules, employees' scientific articles are reviewed by the PR & Policy department before publication. This paper did not pass the review: the authors were advised to withdraw the article or remove company employees from the list of authors.
Gebru wrote about the problems of large language models, for example their inability to remain impartial and free of racist prejudice. "Most language models are designed primarily to meet the needs of those who already have privileges in society," Gebru said in the article. "If the training data is so large that you cannot change its characteristics, it carries too great a risk." How well such programs cope with writing can be judged from the column that GPT-3 wrote to a brief from The Guardian's editors.
Algorithms as managers
People are often afraid that artificial intelligence will take their jobs. But what if, instead, you end up working harder under the supervision of a program? At the beginning of last year, The Verge published an article about how algorithms monitor employees in call centers, in services and in manufacturing. "The robots keep track of the hotel maids, suggesting which room to clean and tracking how quickly the work gets done. Algorithms control software developers, track their clicks and scrolls, and reduce pay if they are slow. Programs listen to call center employees, tell them what to say and how to say it, and constantly load them to the maximum," the article lists such cases of control by algorithms.
Employees of an IT company in Hangzhou were given smart office chair cushions that monitored their heart rate and posture. The cushion also recorded how often they were away from their desks, and managers reviewed the reports. Algorithmic managers are taking over workplaces across China, and the government grants subsidies to businesses that move to digital management solutions. According to research firm iResearch, the local market for digital control systems grew from 7.08 billion yuan (then about $1 billion) in 2017 to 11.24 billion yuan in 2019.
Amazon, already mentioned above, is one of the companies where technology helps load employees to the maximum. Many delivery vehicles carry cameras that track the driver; if he stops or deviates from the route, a dispatcher calls to ask the reason for the delay. In warehouses, employees must not be away from their posts for more than 15 minutes; violations bring sanctions and fines. In offices, other programs monitor employees, but with the same purpose. Overwork and exhausting schedules at Amazon are a known problem: there have recently been reports of Amazon drivers and couriers being forced to urinate in plastic bottles because of their busy schedules. Amazon employees in the US are now trying to organize a union. TASS has written in detail about why employees want one and what is wrong with working conditions at the corporation.
The social dilemma
Why do people become addicted to social media so easily? Because the developers designed it that way, knowing our psychology. Last year, Netflix released the documentary The Social Dilemma, in which former top managers and developers talk about how they built popular social networks like Twitter, came up with features such as the "like" button, and only later realized how destructively these manipulations affect society. One of the film's protagonists is Tristan Harris, a programmer who worked at Google in 2013. One evening he sent colleagues a manifesto with ideas about responsibility to users and respect for their time. Harris' main thesis is that today's popular social networks and services demand too much of our attention. Through their design, like likes and constant notifications, they manipulate us into constantly diving from offline into online. "Netflix, YouTube or Facebook automatically plays the next video rather than waiting for you to make a conscious choice (in case you don't make one)," Harris said. In 2015, Harris left the corporation and founded Time Well Spent, a nonprofit that advises companies on how to build services that respect users.
What is the problem with recommendation algorithms?
Today, recommendation algorithms run on all popular sites. Thanks to them, you see what interests you in your feeds. The algorithms infer those interests from information about users: viewing history and activity on the site (and not only there – trackers from popular social networks also run on other sites). The more actively you act on a social network – sharing, liking, commenting – the more the algorithms know about you.
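A minimal sketch of this idea, assuming no particular platform's implementation, is an engagement-based ranker: it counts which topics you have interacted with and pushes similar posts to the top of the feed.

```python
from collections import Counter

# Interests inferred from past likes, shares and comments (illustrative data).
user_activity = ["vaccines", "parenting", "vaccines", "recipes", "vaccines"]
interest_profile = Counter(user_activity)

candidate_posts = [
    {"title": "Vaccine side effects nobody talks about", "topics": ["vaccines"]},
    {"title": "Weeknight dinner ideas",                   "topics": ["recipes"]},
    {"title": "Local election results",                   "topics": ["politics"]},
]

def rank(posts, profile):
    """Order posts by how much the user has engaged with their topics."""
    return sorted(posts,
                  key=lambda p: sum(profile[t] for t in p["topics"]),
                  reverse=True)

for post in rank(candidate_posts, interest_profile):
    print(post["title"])
# The more you engage with a topic, the more of it you are shown --
# regardless of whether the content is accurate.
```

The ranker optimizes only for engagement, which is exactly why the two problems described below arise.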
There are two problems. First, recommendation algorithms spread problematic content, from fakes to pseudoscientific and illegal information. A simple example: there are many anti-vaccination groups on social media. Whatever you think about vaccination, if you like and share parenting content, you will be shown anti-vaccine groups. "The fact that vaccine opponents are found on the Internet is an Internet problem. That Facebook encourages mothers to join vaccine opposition groups is a platform problem," writes Casey Newton, an editor at The Verge. Mary Meeker, an analyst who publishes an annual report on the state of the internet, also warns about the dangers of these algorithms: "The main terrorist threat in the United States is people radicalized by a variety of ideologies that they have gleaned from the Internet."
The second problem is that recommendation algorithms put the user in an information bubble. Perhaps this has happened to you: you mention to a friend a story that your Facebook feed has been discussing for three days, and he has not heard of it. His news feed looks different – it lacks the disputes that occupy you and your circle, but has others matching his interests. Neither of you deliberately goes looking for something entirely different. Social media that shows us content aligned with our biases accelerates the polarization of society: Americans today are more polarized than ever, at least in some respects, according to an article in The Wall Street Journal. One solution experts often suggest is to change the algorithms so that they show us more content from people who disagree with us. But that same article describes new research showing this solution may actually make things worse, as social media reinforces extreme opinions.
Anastasia Akulova