09.2020

Artificial intelligence (AI) seeks to enable computers to imitate intelligent human behavior, and machine learning (ML), a subset of AI, involves systems that learn from data without relying on rules-based programming. In healthcare, AI can unlock insights from large data sets to diagnose and evaluate patients more accurately and to dramatically speed up drug discovery.

Despite its promise, however, the use of AI is not without legal risks that can generate significant liability for the developer and/or user of ML algorithms. Among other risks, the development and use of ML algorithms could result in discrimination against individuals on the basis of a protected class, a potential violation of antidiscrimination and other laws. Such algorithmic bias occurs when an ML algorithm, regardless of intent, makes decisions that treat similarly situated individuals differently without justification for the difference. Absent strong policies and procedures to prevent and mitigate bias throughout the life cycle of an ML algorithm, existing human biases can become embedded in the algorithm, with potentially serious consequences, particularly in healthcare, where life-and-death decisions may be made algorithmically.
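One common way practitioners screen for this kind of disparate treatment is to compare a model's favorable-decision rates across a protected attribute. The sketch below is a minimal, hypothetical illustration in Python; the synthetic data, the decision rates, and the informal 80% disparate-impact rule of thumb are assumptions for demonstration, not content drawn from the white paper.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Compare favorable-decision rates between two groups.

    decisions: 1/0 array of model outcomes (1 = favorable, e.g. approved)
    group:     1/0 array marking membership in a protected class
    Returns the rate gap and the disparate-impact ratio.
    """
    rate_a = decisions[group == 0].mean()  # favorable rate, group 0
    rate_b = decisions[group == 1].mean()  # favorable rate, group 1
    return rate_a - rate_b, rate_b / rate_a

# Hypothetical audit data: 1,000 model decisions with group labels.
# Group 1 receives favorable outcomes less often by construction.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
decisions = rng.binomial(1, np.where(group == 0, 0.60, 0.45))

gap, ratio = demographic_parity_gap(decisions, group)
print(f"rate gap: {gap:.3f}, disparate-impact ratio: {ratio:.2f}")
# A ratio below ~0.8 is a common informal red flag for disparate impact.
```

A check like this is only a screen: a large gap does not by itself establish unlawful discrimination, and a small gap does not rule it out, which is why the white paper's emphasis on policies and procedures across the algorithm's full life cycle matters.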
