Ethics and open science
This looks like a lot, but most of these are quite short.
Keep in mind throughout all these readings that an “algorithm” in these contexts is typically some fancy type of regression model where the outcome variable is something binary like “Safe babysitter/unsafe babysitter,” “Gave up seat in past/didn’t give up seat in past,” or “Violated probation in past/didn’t violate probation in past.” The explanatory variables are hundreds of pieces of data that might predict those outcomes (social media history, flight history, race, etc.).
Data scientists build a (sometimes proprietary and complex) model based on existing data, plug in values for any given new person, multiply that person’s values by the coefficients in the model, and get a final score in the end for how likely someone is to be a safe babysitter or how likely someone is to return to jail.
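To make the “multiply that person’s values by the coefficients” step concrete, here is a minimal sketch of a logistic-regression-style score. Every coefficient, feature name, and value below is invented purely for illustration; no real scoring system uses these numbers:

```python
import math

# Hypothetical fitted coefficients for predicting a binary outcome
# (e.g., "violated probation in past"). Invented for illustration only.
coefficients = {
    "intercept": -1.2,
    "prior_arrests": 0.8,
    "age": -0.05,
    "years_employed": -0.3,
}

def risk_score(person):
    """Multiply each feature value by its coefficient, sum them up,
    and squash the result through the logistic function to get a
    score between 0 and 1."""
    linear = coefficients["intercept"]
    for feature, value in person.items():
        linear += coefficients[feature] * value
    return 1 / (1 + math.exp(-linear))

# Plug in one new person's (made-up) values to get their score.
person = {"prior_arrests": 2, "age": 30, "years_employed": 1}
score = risk_score(person)  # a probability-like number between 0 and 1
```

A real proprietary model would have hundreds of features and a more complex fitting procedure, but the core mechanic is this same weighted sum turned into a score.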
- Miguel A. Hernán, “The C-Word: Scientific Euphemisms Do Not Improve Causal Inference From Observational Data”
- Hannah Fresques and Meg Marco, “‘Your Default Position Should Be Skepticism’ and Other Advice for Data Journalists From Hadley Wickham,” ProPublica, June 10, 2019
- DJ Patil, “A Code of Ethics for Data Science”
- “AI in 2018: A Year in Review”
- “How Big Data Is ‘Automating Inequality’”
- “In ‘Algorithms of Oppression,’ Safiya Noble finds old stereotypes persist in new media”
- 99% Invisible, “The Age of the Algorithm”: Note that this is a podcast, a roughly 20-minute audio story. Listen to it. The rest of the material on its page is helpful and supplementary (very few podcasts provide this much extra information), but you don’t need to go through it all.
- On the Media, “Biased Algorithms, Biased World”
- “Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.”
- “Companies are on the hook if their hiring algorithms are biased”
- “Courts use algorithms to help determine sentencing, but random people get the same results”
- David Heinemeier Hansson’s rant on the Apple Card