
Detain and Release


Task: Building on the Reply All podcast episodes The Crime Machine, Parts I and II, the first option is a simulation about algorithmic risk assessment at pretrial designed at the Berkman Klein Center for Internet & Society. The simulation, called Detain/Release, is described by Keith Porcaro, one of its designers, as follows: "Detain/Release puts students in the role of a county judge at a bail hearing, and prompts them to detain or release individual defendants pending trial." Due to the potentially sensitive nature of the task, we highly recommend that you get a good sense of the layout and functionality of the simulation before starting: Detain/Release: simulating algorithmic risk assessments at pretrial

If you prefer to skip this particular task, you can always pursue option 2. Both tasks have the same value and will be assessed with the exact same criteria.

If you decide to pursue option 1, follow the next steps:

  1. Please go to this link: Detain/Release

  2. Once there, you will be asked to provide your name. Please write your preferred name followed by the course section (e.g., Ernesto_A). Please be sure that the room code is MMVBN; this room is exclusive to this course. A disclaimer at the top states that the creators of the simulation and the organizer of this room (your instructor) are the only ones with access to your data. You won't be asked for any personal identifier during the simulation.

  3. Join the room. You will be asked to rule on 24 cases before ending the simulation.

  4. Once you have completed the simulation, post a reflection in your personal webspace. Think about the implications and consequences that AI-informed decision making brings to certain aspects of life. Please remember that the post is the task, and not the completion of the simulation.

Before the end of the course, your instructor will gather and share the data from the simulation with you.





I loved this week's “readings”; they were so engaging and the topic was so interesting!


I also really enjoyed the simulation, even though I apparently made an awful judge and got kicked out of the courtroom.


My criterion for release was simple: if a defendant's ratings weren’t mostly red or entirely yellow, I let them out. I noticed that a lot of the charges were for drug possession and trafficking, and I firmly believe that throwing someone in jail for drug use is pointless because it doesn’t address the underlying issues. The defendants' statements didn’t really factor heavily into my decisions because nobody is going to tell the judge to lock them up; everyone wants to be able to go home or go back to work while they wait for their trial.
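For what it's worth, my rule of thumb could be sketched as a tiny program. This is purely illustrative: the simulation doesn't expose its data this way, and the function name and the idea of passing the colour-coded risk ratings as a list are my own inventions.

```python
# A hypothetical sketch of my release rule. The simulation shows each
# defendant's risk as colour-coded ratings; representing them as a list
# of strings ("red"/"yellow"/"green") is my own simplification.

def should_release(ratings):
    """Release unless the ratings are mostly red or entirely yellow."""
    reds = sum(1 for r in ratings if r == "red")
    yellows = sum(1 for r in ratings if r == "yellow")
    mostly_red = reds > len(ratings) / 2
    entirely_yellow = yellows == len(ratings)
    return not (mostly_red or entirely_yellow)

print(should_release(["green", "yellow", "green"]))   # → True (release)
print(should_release(["red", "red", "yellow"]))       # → False (detain)
print(should_release(["yellow", "yellow", "yellow"])) # → False (detain)
```

Seeing it written out like this makes the point below even clearer: a "simple" rule is still a set of judgments someone chose to encode.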


Cathy O’Neil (2016) warned in her talk that just because you don’t look doesn’t mean it’s fair. I don’t know if I could really trust this algorithm to give me sound advice, so I erred on the side of caution and set a lot of people free. O’Neil also talked about how these algorithmic “weapons of math destruction” work on a “garbage in, garbage out” basis, only serving to perpetuate and amplify our own biases. An example of this was that the simulation gave no explanation as to which drugs were being used or trafficked, or whether a robbery was violent. That is information I would need to make an informed decision, one that does not penalize individuals for societal and systemic issues they have little control over (such as charges for marijuana, or robbing a grocery store to feed your family).


My favourite podcast this week was the Reply All episode on The Crime Machine. I had heard of the CompStat system before, but only in passing. Hearing about its inventor and his intentions really highlighted how easily algorithms created to do good can accidentally become part of the problem. Interestingly, this also links with Cathy O’Neil and the 99% Invisible podcast episode The Age of the Algorithm, both of which noted that algorithms are just “a set of instructions for which to solve problems” (Malan, 2013). It was not actually the algorithm itself that caused the later problems for the NYPD (although it did contribute), but the very human cultural response to how the algorithm was being used to reward or punish different precincts. Through no fault of the algorithm, it eventually became so entrenched in the policing culture of New York that the humans in charge were unable to disengage from what had become an outdated tool.


"Algorithms aren't very objective even when they are carried out by computers. This is relevant because the companies that build them like to market them as objective, claiming they remove human error and fallibility from complex decision making, but every algorithm reflects the priorities and judgements of its human designer" (The Age of the Algorithm, n.d.)


I also really loved that Cathy O’Neil made connections to education. Although we have much less standardized testing in Canada, we often hear about the immense impact it has on the American education system. Those tests are constantly scraping the education system for various kinds of data, which are likely fed into algorithms and have immense effects on the people within the system. To me, the American education system is always a good reminder of what we are fighting not to become here in Canada. I think, similar to the courtroom simulation, that each person, whether a criminal defendant or a child, deserves real, nuanced, and individual consideration.



References


O’Neil, C. [Google Talks]. (2016, November 2). Weapons of Math Destruction [Video]. YouTube. https://www.youtube.com/watch?v=TQHs8SA1qpk


O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.


The Age of the Algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm/


Vogt, P. (n.d.-a). The Crime Machine, Part I [Audio podcast episode]. In Reply All.


Vogt, P. (n.d.-b). The Crime Machine, Part II [Audio podcast episode]. In Reply All.

