Predictive Transparency Algorithms

By: Fred Jankilevich – Attorney At Law

May 9th, 2018

1.- Defining Algorithmic Fairness

To define algorithmic fairness, it is important to first determine the role that prediction algorithms play in human-computer interactions. Prediction algorithms drive interface features such as user-recommendation screens: given the large amount of data a user generates online, the system predicts the most likely next outcome from historical patterns.

Similar predictions occur, in parallel, in the offline world. The hiring process offers a structural analogy: the employer does not know you personally, but knows your resume. A prediction algorithm is likewise a system that does not distinguish you individually, but knows a representation of you.

The algorithm's pipeline begins with an individual who is, to some degree, unknowable: their features are abstracted and fed into the algorithm, which then expresses a binary yes/no prediction in the form of a probability, for example, the probability that a person will be an eligible candidate for university. Presently, two forms of implementation exist: 1) Fully Automated, absent of human participation; 2) Human Expert, in which a specialized decision-maker reaches the conclusion with the aid of the algorithm (Kleinberg, J.: U.C. Berkeley: March 19th, 2018). The present controversy surrounding predictive technologies pertains to this methodology.
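
A minimal Python sketch may make the pipeline concrete. The feature names (GPA, test score), the weights, and the 0.5 decision threshold are all hypothetical illustrations, not any deployed model:

import math

def predict_eligibility(gpa: float, test_score: float) -> float:
    """Abstract the individual into features and return a probability."""
    # Hypothetical learned weights for a logistic model (not a real system).
    w_gpa, w_test, bias = 1.2, 0.8, -4.0
    x = w_gpa * gpa + w_test * (test_score / 100.0) + bias
    # The logistic function maps the score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Fully automated form: threshold the probability into a yes/no decision.
# Human-expert form: report the probability to the decision-maker instead.
p = predict_eligibility(gpa=3.6, test_score=88)
print(f"p = {p:.2f} -> {'eligible' if p >= 0.5 else 'not eligible'}")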

2.- The Controversy in Predictive Algorithm Technologies – Perspectives

Pertaining to predictive algorithm technologies, the Access to Justice Lab at HLS recommends an "adjudicatory and prospective study of the models" in order to secure public confidence in their use. According to Griffin, the algorithm is useful in the adjudicatory realm as a basis for counter-factual thinking. J-Lab's studies presently conclude that algorithmic tools lack the predictive power needed to support an unbiased human decision-making process. J-Lab also advocates for transparency in the algorithm, so that every citizen can obtain "an index card where you can understand exactly what the model is saying" (Griffin, C.: HLS: 2017).

The opposing argument on fairness holds that algorithms should be reliable rather than transparent. On this view, Black-Box Testing methodologies verify the functionality of an application without peering into the internal structure or workings of the system. Black-Box Test Generation Tools derive such tests from program requirements using black-box methods such as random testing and boundary value analysis (Gao, J.: 2003: pp. 151-172).
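
As a rough illustration, the following Python sketch generates black-box tests for a hypothetical scoring component purely from a stated requirement (scores must lie in [0, 100], with 60 as the pass mark), combining the two methods Gao names, random testing and boundary value analysis:

import random

def system_under_test(score):
    """Opaque component; only its specified behavior is observed."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return score >= 60  # pass/fail decision

# Boundary value analysis: exercise the requirement's limits and neighbors.
boundary_cases = [0, 1, 59, 60, 61, 99, 100]
# Random testing: sample inputs uniformly from the valid domain.
random_cases = [random.randint(0, 100) for _ in range(20)]

for score in boundary_cases + random_cases:
    # Compare observed behavior against the requirement, not the internals.
    assert system_under_test(score) == (score >= 60), f"violated at {score}"
print("all black-box tests passed")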

Using methodologies such as equivalence partitioning, boundary value analysis, and cause-effect graphing may help ensure increased reliability without requiring transparency, giving citizens confidence that these technologies are not selective.
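
A companion sketch illustrates equivalence partitioning on the same hypothetical requirement: the input domain is divided into classes the specification treats identically, and one representative per class is checked against an oracle derived from the requirement alone:

def component(score):
    """Opaque component; checked only against its written specification."""
    if not 0 <= score <= 100:
        return "reject"
    return "pass" if score >= 60 else "fail"

# One representative input per equivalence class of the specification.
representatives = {
    "invalid_low":  -5,    # below the valid range
    "failing":      30,    # valid, below the pass mark
    "passing":      75,    # valid, at or above the pass mark
    "invalid_high": 150,   # above the valid range
}
expected = {"invalid_low": "reject", "failing": "fail",
            "passing": "pass", "invalid_high": "reject"}

for name, score in representatives.items():
    assert component(score) == expected[name], f"class {name} failed"
print("one representative per equivalence class verified")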

In this sense, the Berkman Klein Center for Internet and Society, a legal advocacy initiative, proposes an intermediate approach that protects both perspectives: transparency of the overall technical system. To that end, Professor Zittrain recommends formulating two fundamental questions: 1) What parts should a public authority consider its own? 2) What should be contracted out to a private firm that then might be proprietary?

From a Berkman Klein perspective, "predictive power" is a fundamental factor inherently related to transparency. A system with terrific predictive power but incredible bias is useless; a system that is impartial but has no predictive power is equally futile. It follows, in Zittrain's view, that Artificial Intelligence demands a holistic approach that weighs both factors together (Zittrain, J.: HLS: 2017).
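
One way to make that trade-off measurable, sketched below with entirely hypothetical data, is to report a system's predictive power (accuracy) alongside a simple bias measure such as the demographic parity gap, so that neither figure is read in isolation:

# (group, true_label, predicted_label) triples; all values are hypothetical.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

# Predictive power: fraction of predictions matching the true label.
accuracy = sum(y == yhat for _, y, yhat in records) / len(records)

def positive_rate(group):
    """Fraction of the group's members receiving a positive prediction."""
    preds = [yhat for g, _, yhat in records if g == group]
    return sum(preds) / len(preds)

# Demographic parity gap: difference in positive prediction rates by group.
gap = abs(positive_rate("A") - positive_rate("B"))
print(f"accuracy = {accuracy:.2f}, parity gap = {gap:.2f}")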

Bibliography:

Primary

Harvard Law School. Cambridge, MA. Fall 2017.

Print

1.- Gao, Jerry. Testing and Quality Assurance for Component-Based Software. Norwood, MA: Artech House, 2003.

Online

1.- https://www.ischool.berkeley.edu/events/2018/inherent-trade-offs-algorithmic-fairness

2.- https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
