On June 10, 2020, the Financial Industry Regulatory Authority (FINRA) issued a summary report regarding the existing and emerging uses of artificial intelligence (AI) by securities industry market participants. The report, Artificial Intelligence (AI) in the Securities Industry, reflects nearly two years’ worth of dialogue between FINRA’s Office of Financial Innovation and over two dozen market participants, including broker-dealers, academics, technology vendors, and service providers. FINRA requested input from the securities industry regarding potential challenges associated with broker-dealers using and supervising AI applications.[1] As “AI technology has gained significant momentum over the past decade and become more mainstream due in part to the availability of inexpensive computing power, large datasets, cloud storage, and sophisticated open-source algorithms[,]” continued regulatory scrutiny is expected.[2]

The report summarizes key takeaways and findings regarding:

  1. AI technology, generally, and specific technology in the securities industry
  2. AI usage by broker-dealers for communication with customers, investment processes, and operational functions
  3. Key challenges and regulatory considerations related to the use of AI by broker-dealers

This update provides an overview of the report’s focus areas.

AI Technology Overview

Defining AI. The report acknowledges that there is no standard industry definition for “AI” nor a definitive list of what constitutes “AI technological applications.” While referencing Merriam-Webster[3] and the Oxford English Dictionary[4], FINRA articulated a definition that focused on the use of computer systems to complete tasks that were grounded in human behaviors and intelligence (and historically handled by human beings). Specifically, the report explains, “John McCarthy, one of the founders of AI research, ‘once defined the field as getting a computer to do things which, when done by people, are said to involve intelligence.’”[5]    

Scope of Use. The report notes that the flexibility of the AI definition is connected to the reality that, in practice, various technologies and applications make up current uses of AI technologies by the securities industry. These uses include the following:

  • Machine Learning (ML): The use of algorithms to process large amounts of data and learn from it in order to independently (i.e., without direct programming) make predictions or identify meaningful patterns.
  • Natural Language Processing (NLP): A form of AI that equips machines to recognize and extract value from text and voice and potentially convert information into an output format. Examples provided in the report include keyword extraction from legal documents, sentiment analysis, and providing relevant information through chatbots and virtual assistants.
  • Computer Vision (CV) or Machine Vision: A form of AI that equips machines to identify and process images. Examples provided in the report include facial recognition, fingerprint recognition, optical character recognition, and other biometric tools to verify user identity.
  • Robotic Process Automation (RPA): Software tools that interact with other applications to automate certain tasks (for example: account reconciliation, accounts payable processing, and depositing of checks).

The report explains that the combination of three elements (data, algorithms, and human intelligence) is key to the development of securities industry-specific technological applications. In turn, these key factors each have direct impacts on the challenges and regulatory considerations related to the use of AI in the securities industry, as explored later in this update. The report details the importance of each element:

  • Data: Data, and the collection thereof, has grown tremendously and increased in value. FINRA highlighted that “[t]he importance of data has likewise rapidly increased, and some have even referred to data as a more valuable resource than oil.”[6] As many uses of AI are based on data analysis and interpretation, AI applications are most effective and useful where the provided datasets are “substantially large, valid, and current.”[7]
  • Algorithms: The report notes that the availability of open-source AI algorithms (defined in the report as sets of “well-defined, step-by-step instructions for a machine to solve a specific problem and generate an output using a set of input data”)[8] has substantially contributed to AI technology growth and innovation.
  • Human Intelligence: The report asserts that development of AI necessarily depends on human interaction at all phases of development. Accordingly, the “[a]bsence of such human review and feedback may lead to irrelevant, incorrect, or inappropriate results from the AI systems, potentially creating inefficiencies, foregone opportunities, or new risks if actions are taken based on faulty results.”[9]

Broker-Dealer AI Usage

FINRA’s review found that broker-dealers primarily use AI to facilitate (1) customer communications and outreach; (2) investment processes; and (3) operational functions. With respect to communication strategies, the report found that broker-dealers are using AI applications in the form of virtual assistants to facilitate customer service, as well as applications that analyze email inquiries to accelerate response times. The report explained that this functionality is being used not only in the securities industry but across the broader financial services industry. AI applications have also been leveraged to help broker-dealers tailor customer content based on customer data.

The second and third uses of AI by broker-dealers highlight AI’s capabilities to enhance and facilitate investment management. With respect to investment processes, broker-dealers reported using AI to create holistic customer profiles drawn from inputs such as customer assets (across portfolios and custodians), spending patterns, information swept from social media and other public websites, browsing history, and past communications, and to provide customized market research. The report acknowledges that some broker-dealers are exploring the use of AI to manage trading and portfolio processes. While AI may provide some benefits, the report cautions that the use of AI in this space may also cause issues in the event of unforeseen circumstances (e.g., market volatility, natural disasters, pandemics, or geopolitical changes).

Finally, with respect to operational functions, in addition to AI’s utility in completing administrative tasks, broker-dealers are developing AI-based applications to enhance compliance and risk monitoring functions. Specifically, the report identifies AI capabilities with respect to compliance obligations, such as surveillance and monitoring, know-your-customer (KYC) identification, identifying and addressing new requirements, maintaining required liquidity levels, assessing the creditworthiness of counterparties, and maintaining vigilant cybersecurity controls.

Notably, as broker-dealers have expanded AI adoption, the scope of use of and reliance on AI is expected to continue to grow. The report’s list of uses is not exhaustive but is meant to highlight industry trends. The report emphasized that, as growth continues, firms must conduct their own due diligence and legal analysis for every application, since the use of AI presents several challenges and regulatory considerations, as discussed next.[10]

Key Challenges and Regulatory Considerations

As with any technological innovation, AI usage comes with benefits, risks, and challenges to broker-dealers, the markets, and investors.[11] As noted earlier, these challenges are largely rooted in the basic elements that are used to develop AI—data, algorithms, and human intelligence. In turn, the report details factors for market participants to consider when evaluating an AI-based solution, including (1) model risk management, (2) data governance, (3) customer privacy, (4) supervisory control systems, and (5) other factors (i.e., cybersecurity, outsourcing/vendor management, books and records, and workforce structure).

Each of these factors, along with FINRA’s recommendations on potential areas for broker-dealers to consider, is summarized in the chart below.


Areas for Consideration

Model Risk Management

  • Update model validation processes to account for complexities of an ML model
  • Conduct upfront and ongoing testing, including tests that experiment with unusual and stressed scenarios (e.g., unprecedented market conditions) and new datasets
  • Employ current and new models in parallel and replace current models only after the new ones are thoroughly validated
  • Maintain a detailed inventory of all AI models, along with any assigned risk ratings, to appropriately manage the models based on risk levels
  • Develop (1) model performance benchmarks (e.g., number of false negatives) and (2) monitoring and reporting processes to ensure that the models perform as intended, particularly when the models involved are self-training and evolve over time
  • Establish policies and procedures related to ML explainability to ensure that the AI model, as well as firm personnel’s review and management of such model as required by FINRA Rule 3110, conforms to regulatory and legal requirements[12]

Data Governance

  • Review the underlying dataset of a prospective AI application for any potential built-in biases, for example, during the testing process
  • Regularly review the legitimacy and authoritativeness of data sources, particularly where the data is derived from an external source
  • Consider integration of data throughout firm systems
  • Develop and test data security systems to safeguard data, including customer data
  • Develop, and evaluate against, benchmarks regarding the effectiveness of data governance programs

Customer Privacy

  • Review systems to ensure they conform to regulatory requirements regarding protecting financial and customer information (e.g., SEC Regulation S-P, SEC Regulation S-ID, and NASD Notice to Members 05-49)
  • Update written policies and procedures to account for the use of customer data or other information being collected for AI-based applications

Supervisory Control Systems

  • Establish a cross-disciplinary technology governance group to oversee the development, testing, and implementation of AI-based applications
  • Conduct extensive testing of AI-based applications
  • Establish a fallback plan to address a circumstance where an AI-based application fails
  • Evaluate the roles of personnel and ensure that they have the appropriate FINRA registrations, where needed
  • Review and test supervisory controls for continued compliance with applicable rules and regulations, in particular where AI-based applications are applied to trading functions (FINRA Rules 5210, 6140, and 2010; SEC Market Access Rule; and SEC Regulation NMS, Regulation SHO, and Regulation ATS), funding and liquidity risk management (FINRA Notice 15-33 and Notice 10-57), or investment advice (SEC Regulation Best Interest and FINRA Rule 2111)

Additional Considerations

  • Cybersecurity: Consider incorporating cybersecurity as a critical component of the evaluation, development, and testing process of any AI-based application
  • Outsourcing and Vendor Management: Review outsourcing arrangements for regulatory compliance (see, e.g., FINRA Notice to Members 05-48)
  • Books and Records: Review AI tools and systems to ensure compliance with broker-dealer recordkeeping obligations (see, e.g., Securities Exchange Act Rules 17a-3 and 17a-4 and FINRA Rule 4510)
  • Workforce Structure: Consider methods to address potential challenges in workplace cultural shifts resulting from the implementation of new technology and review and address impacts of incorporating AI processes in the workforce

Conclusion: Request for Comments

FINRA concluded by requesting comments regarding all areas identified in the report. A specific request was made for comments about how FINRA can develop rules that support the adoption of AI applications in the securities industry in a manner that does not compromise investor protection and market integrity. FINRA has requested that interested persons submit their comments by August 31, 2020.

Comments may be submitted by emailing comments to pubcom@finra.org or by mail to Marcia E. Asquith, Office of the Corporate Secretary, FINRA, 1735 K Street NW, Washington, DC 20006-1506.


[1] See FINRA, “FINRA Requests Comment on Financial Technology Innovation in the Broker-Dealer Industry,” available at https://www.finra.org/sites/default/files/Special-Notice-073018.pdf.

[2] See report at 1.

[3] Id. at 2 (“[t]he capability of a machine to imitate intelligent human behavior”).

[4] Id. (“[t]he theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”).

[5] Id.

[6] See report at 4 (citing The Economist, “The World’s Most Valuable Resource is no Longer Oil, but Data”, May 6, 2017, https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data).

[7] Id.

[8] Id.

[9] Id. at 5.

[10] Id. (“Each firm should conduct its own due diligence and legal analysis when exploring any AI application to determine its utility, impact on regulatory obligations, and potential risks, and set up appropriate measures to mitigate those risks. Furthermore, use of AI applications does not relieve firms of their obligations to comply with all applicable securities laws, rules, and regulations.”)

[11] The report reasons that the benefits to broker-dealers (“increased efficiency, increased productivity, improved risk management, enhanced customer relationships, and increased revenue opportunities”) are counterbalanced by reports of AI applications that were “fraudulent, nefarious, discriminatory, or unfair.” See report at 11.

[12] The report explains that certain ML models allow for explainability regarding the underlying assumptions and factors used to make a prediction, whereas the processes for some models are difficult or impossible to explain (described as “black boxes”). See report at 12.

© 2020 Perkins Coie LLP