
New releases

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor

Author: Virginia Eubanks

Publisher: Macmillan

Category: Communication technologies

Link: https://us.macmillan.com/automatinginequality/virginiaeubanks/9781250074317/

In October 2015, a week after I started writing this book, my kind and brilliant partner of 13 years, Jason, got jumped by four guys while walking home from the corner store on our block in Troy, New York. He remembers someone asking him for a cigarette before he was hit the first time. He recalls just flashes after that: waking up on a folding chair in the bodega, the proprietor telling him to hold on, police officers asking questions, a jagged moment of light and sound during the ambulance ride.

It’s probably good that he doesn’t remember. His attackers broke his jaw in half a dozen places, both his eye sockets, and one of his cheekbones before making off with the $35 he had in his wallet. By the time he got out of the hospital, his head looked like a misshapen, rotten pumpkin. We had to wait two weeks for the swelling to go down enough for facial reconstruction surgery. On October 23, a plastic surgeon spent six hours repairing the damage, rebuilding Jason’s skull with titanium plates and tiny bone screws, and wiring his jaw shut.

We marveled that Jason’s eyesight and hearing hadn’t been damaged. He was in a lot of pain but relatively good spirits. He lost only one tooth. Our community rallied around us, delivering an almost constant stream of soup and smoothies to our door. Friends planned a fundraiser to help with insurance co-pays, lost wages, and the other unexpected expenses of trauma and healing. Despite the horror and fear of those first few weeks, we felt lucky.

Then, a few days after his surgery, I went to the drugstore to pick up his painkillers. The pharmacist informed me that the prescription had been canceled. Their system showed that we did not have health insurance.

In a panic, I called our insurance provider. After navigating through their voice-mail system and waiting on hold, I reached a customer service representative. I explained that our prescription coverage had been denied. Friendly and concerned, she said that the computer system didn’t have a “start date” for our coverage. That’s strange, I replied, because the claims for Jason’s trip to the emergency room had been paid. We must have had a start date at that point. What had happened to our coverage since?

She assured me that it was just a mistake, a technical glitch. She did some back-end database magic and reinstated our prescription coverage. I picked up Jason’s pain meds later that day. But the disappearance of our policy weighed heavily on my mind. We had received insurance cards in September. The insurance company paid the emergency room doctors and the radiologist for services rendered on October 8. How could we be missing a start date?

I looked up our claims history on the insurance company’s website, stomach twisting. Our claims before October 16 had been paid. But all the charges for the surgery a week later—more than $62,000—had been denied. I called my insurance company again. I navigated the voice-mail system and waited on hold. This time I was not just panicked; I was angry. The customer service representative kept repeating that “the system said” our insurance had not yet started, so we were not covered. Any claims received while we lacked coverage would be denied.

I developed a sinking feeling as I thought it through. I had started a new job just days before the attack; we switched insurance providers. Jason and I aren’t married; he is insured as my domestic partner. We had the new insurance for a week and then submitted tens of thousands of dollars’ worth of claims. It was possible that the missing start date was the result of an errant keystroke in a call center. But my instinct was that an algorithm had singled us out for a fraud investigation, and the insurance company had suspended our benefits until their inquiry was complete. My family had been red-flagged.

* * *

Since the dawn of the digital age, decision-making in finance, employment, politics, health, and human services has undergone revolutionary change. Forty years ago, nearly all of the major decisions that shape our lives—whether or not we are offered employment, a mortgage, insurance, credit, or a government service—were made by human beings. They often used actuarial processes that made them think more like computers than people, but human discretion still ruled the day. Today, we have ceded much of that decision-making power to sophisticated machines. Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud.

Health-care fraud is a real problem. According to the FBI, it costs employers, policy holders, and taxpayers nearly $30 billion a year, though the great majority of it is committed by providers, not consumers. I don’t fault insurance companies for using the tools at their disposal to identify fraudulent claims, or even for trying to predict them. But the human impacts of red-flagging, especially when it leads to the loss of crucial life-saving services, can be catastrophic. Being cut off from health insurance at a time when you feel most vulnerable, when someone you love is in debilitating pain, leaves you feeling cornered and desperate.

As I battled the insurance company, I also cared for Jason, whose eyes were swollen shut and whose reconstructed jaw and eye sockets burned with pain. I crushed his pills—painkiller, antibiotic, anti-anxiety medications—and mixed them into his smoothies. I helped him to the bathroom. I found the clothes he was wearing the night of the attack and steeled myself to go through his blood-caked pockets. I comforted him when he awoke with flashbacks. With equal measures of gratitude and exhaustion, I managed the outpouring of support from our friends and family.

I called the customer service number again and again. I asked to speak to supervisors, but call center workers told me that only my employer could speak to their bosses. When I finally reached out to the human resources staff at my job for help, they snapped into action. Within days, our insurance coverage had been “reinstated.” It was an enormous relief, and we were able to keep follow-up medical appointments and schedule therapy without fear of bankruptcy. But the claims that had gone through during the month we mysteriously lacked coverage were still denied. I had to tackle correcting them, laboriously, one by one. Many of the bills went into collections. Each dreadful pink envelope we received meant I had to start the process all over again: call the doctor, the insurance company, the collections agency. Correcting the consequences of a single missing date took a year.

I’ll never know if my family’s battle with the insurance company was the unlucky result of human error. But there is good reason to believe that we were targeted for investigation by an algorithm that detects health-care fraud. We presented some of the most common indicators of medical malfeasance: our claims were incurred shortly after the inception of a new policy; many were filed for services rendered late at night; Jason’s prescriptions included controlled substances, such as the oxycodone that helped him manage his pain; we were in an untraditional relationship that could call his status as my dependent into question.
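The kind of rule-based red-flagging described above can be sketched in a few lines of code. This is a hypothetical illustration only: the rule names, the claim fields, and the scoring are invented for this example, and real insurer fraud-detection systems are proprietary and far more complex.

```python
# Hypothetical sketch of a rule-based fraud scorer. Each rule mirrors one
# of the indicators mentioned in the text; none of this reflects any
# insurer's actual system.

def red_flag_score(claim):
    """Return how many rules fired for a claim record, and which ones."""
    rules = {
        "new_policy": claim["days_since_policy_start"] < 30,
        "late_night_service": claim["service_hour"] >= 22 or claim["service_hour"] < 6,
        "controlled_substance": claim["includes_controlled_substance"],
        "nontraditional_dependent": claim["dependent_status"] == "domestic_partner",
    }
    fired = [name for name, hit in rules.items() if hit]
    return len(fired), fired

# A claim resembling the one described in the text trips every rule.
claim = {
    "days_since_policy_start": 7,
    "service_hour": 23,
    "includes_controlled_substance": True,
    "dependent_status": "domestic_partner",
}
score, fired = red_flag_score(claim)
print(score, fired)
```

The point of the sketch is that each indicator is innocuous on its own; it is the conjunction of several that pushes a family over a threshold and suspends their benefits, with no notice that any rule fired at all.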

The insurance company repeatedly told me that the problem was the result of a technical error, a few missing digits in a database. But that’s the thing about being targeted by an algorithm: you get a sense of a pattern in the digital noise, an electronic eye turned toward you, but you can’t put your finger on exactly what’s amiss. There is no requirement that you be notified when you are red-flagged. There is no sunshine law that compels companies to release the inner details of their digital fraud detection systems. With the notable exception of credit reporting, we have remarkably limited access to the equations, algorithms, and models that shape our life chances.




You may also be interested in...

  Algorithms of Oppression: How Search Engines Reinforce Racism
Safiya Umoja Noble

Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. [...]

  Exploring Transmedia Journalism in the Digital Age
Renira Rampazzo Gambarato, Geane C. Alzamora

Since the advent of digitization, the conceptual confusion surrounding the semantic galaxy that comprises the media and journalism universes has increased. Journalism across several media platforms [...]

  Armas de destrucción matemática. Cómo el big data aumenta la desigualdad y amenaza la democracia
Cathy O'Neil

We live in the age of the algorithm. The decisions that affect our lives are not made by humans but by mathematical models. In theory, this should lead to greater fairness: everyone [...]