08.31.2020


Artificial intelligence (AI) promises to revolutionize healthcare through machine learning (ML) techniques that predict patient outcomes and personalize patient care, but the use of AI carries legal risks, including algorithmic bias, that can affect those outcomes and that care.

AI seeks to enable computers to imitate intelligent human behavior, and ML, a subset of AI, involves systems that learn from data without relying on rules-based programming. ML techniques include, among others, supervised learning (a method of teaching an algorithm to “learn” from labeled examples) and deep learning (a subset of ML that abstracts complex concepts through layers modeled on the neural networks of biological systems). A trained ML algorithm can identify causes of disease by detecting relationships between a set of inputs, such as weight, height, and blood pressure, and an output, such as the likelihood of developing heart disease.
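To make the supervised-learning example concrete, the sketch below fits a model to labeled patient records and then estimates the likelihood of heart disease for new patients. The data is synthetic, and the feature names and the choice of logistic regression via scikit-learn are assumptions for illustration only, not the method of any particular healthcare system.

```python
# Minimal supervised-learning sketch: predict heart-disease likelihood from
# weight, height, and blood pressure. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical patient features: weight (kg), height (cm), systolic BP (mmHg).
n = 1000
weight = rng.normal(80, 15, n)
height = rng.normal(170, 10, n)
systolic_bp = rng.normal(125, 18, n)
X = np.column_stack([weight, height, systolic_bp])

# Synthetic labels: 1 = developed heart disease. The underlying relationship
# is invented so the model has a signal to learn from.
risk = 0.03 * (weight - 80) + 0.05 * (systolic_bp - 125) - 0.01 * (height - 170)
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

# Supervised learning: fit on labeled examples, evaluate on held-out patients.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted likelihood of heart disease for each held-out patient.
probabilities = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"first 5 predicted risks: {np.round(probabilities[:5], 2)}")
```

The same pattern, relating patient inputs to a labeled outcome, underlies many clinical prediction models; bias in the training data can carry through to the predictions, which is the legal risk the article highlights.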

Read the full article in MedCity News.