Can human behavior be predicted? A broad variety of governmental initiatives are using computerized processes to try. Vast datasets of personal information enhance both the ability to engage in these ventures and the appetite to push them forward. Governments have a distinct interest in automated individualized predictions that foresee unlawful actions, and novel technological tools, especially data-mining applications, are making such predictions possible. The growing use of predictive practices is generating serious concerns regarding the lack of transparency. Although echoed across the policy, legal, and academic debate, the nature of transparency in this context is unclear. Transparency flows from different, even competing, rationales, as well as very different legal and philosophical backgrounds. This Article sets forth a unique and comprehensive conceptual framework for understanding the role transparency must play as a regulatory concept in the crucial and innovative realm of automated predictive modeling.

Part II begins by briefly describing the predictive modeling process, focusing on initiatives carried out in the context of federal income tax collection and law enforcement. It then draws out the process's fundamental elements, distinguishing between the roles of technology and humans. Recognizing these elements is crucial for understanding the importance and challenges of transparency. Part III addresses the flow of information the prediction process generates, examining various strategies for achieving transparency in this process—some addressed by law, others ignored. In doing so, the Article introduces a helpful taxonomy that is relied upon throughout the analysis. It also establishes the need for an overall theoretical analysis and policy blueprint for transparency in prediction.

Part IV shifts to a theoretical analysis seeking the sources of calls for transparency.
Here, the analysis addresses transparency as a tool to enhance government efficiency, facilitate crowdsourcing, and promote both privacy and autonomy. Part V turns to counterarguments that call for limiting transparency, explaining how disclosure can undermine government policy and authority, as well as generate problematic stereotypes. After mapping out the justifications and counterclaims, Part VI provides an innovative policy framework for achieving transparency. The Article concludes in Part VII by identifying the concerns and risks of the predictive modeling process that transparency cannot mitigate, and by calling for other regulatory responses.