Machine learning companies are making big bucks selling opaque, secretive sentencing-algorithm tools to America's court systems. The vendors of these systems claim they are too sophisticated to explain, and use that opacity to dismiss critics who say the algorithms over-sentence black and poor people.
Enter the Supersparse Linear Integer Model, or SLIM, developed by Cynthia Rudin (Duke), Jiaming Zeng (Stanford) and Berk Ustun (MIT). SLIM was trained on a public dataset of recidivism records for 33,000 prisoners, and the resulting model is simple enough that a judge can hand-compute a convicted person's score.
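To get a feel for what "hand-computable" means here, a SLIM model boils down to a short scoring sheet: a few yes/no questions, each worth a small integer number of points, tallied up and compared to a cutoff. The sketch below is purely illustrative; the features, point values and threshold are invented for this post and are not the published model.

```python
# Illustrative only: a made-up SLIM-style scoring sheet, NOT the published model.
# The point is that the whole model is a handful of small integer weights that
# can be added up with pencil and paper.

POINTS = {
    "prior_arrests_ge_2": 2,   # hypothetical: 2 or more prior arrests
    "age_under_25": 1,         # hypothetical: under 25 at release
    "prior_drug_offense": 1,   # hypothetical: any prior drug offense
}
THRESHOLD = 3  # hypothetical cutoff: predict rearrest if total points >= 3

def score(defendant: dict) -> tuple[int, bool]:
    """Tally the points for whichever conditions are true, then compare to the cutoff."""
    total = sum(pts for feature, pts in POINTS.items() if defendant.get(feature))
    return total, total >= THRESHOLD

# Example: 2 + 1 = 3 points, so this hypothetical sheet flags the defendant.
print(score({"prior_arrests_ge_2": True, "age_under_25": True}))
```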
The system is retrainable to suit local characteristics and data, uses open training data and open software, and appears to be the opposite of a Weapon of Math Destruction.
The algorithm also builds models that are highly customizable. The researchers were able to build separate models to predict the likelihood of arrest for different crimes, such as drug possession, domestic violence or manslaughter. SLIM predicted the likelihood of arrest for each crime just as accurately as other machine learning methods. The SLIM method could also be applied to data from different geographic areas to create customized models for each jurisdiction, instead of the "one size fits all" approach used by many current models, Rudin says.
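For a rough sense of what per-jurisdiction retraining could look like, here is a minimal sketch assuming a local dataset of binary features and rearrest labels. It is not the authors' actual SLIM optimization (which solves an integer program); it crudely approximates a scorecard by fitting a sparse logistic regression and rounding the weights to small integers.

```python
# Sketch only: approximates a hand-computable scorecard from local data.
# Not the authors' method; SLIM itself uses integer programming, not rounding.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_local_scorecard(X: np.ndarray, y: np.ndarray, feature_names: list[str]):
    # Sparse (L1-penalized) fit so most features drop out of the scorecard.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    # Scale and round the surviving weights to small integers.
    w = clf.coef_[0]
    scale = 2.0 / max(np.abs(w).max(), 1e-9)
    points = {name: int(round(c * scale))
              for name, c in zip(feature_names, w) if round(c * scale) != 0}
    intercept = int(round(clf.intercept_[0] * scale))
    # Predict rearrest when sum of applicable points + intercept >= 0.
    return points, intercept
```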
As for transparency, the models are built from publicly available data sets using open-source software. The researchers disclose the details of their algorithms, rather than keeping them proprietary. Anyone can inspect the data fed into them, or use the underlying code, for free.
OPENING THE LID ON CRIMINAL SENTENCING SOFTWARE
[Robin A Smith/Duke]
(via 4 Short Links)