Artificial intelligence (AI) refers to the use of computer science to create models that perform functions generally associated with human intelligence.[1] Data privacy laws may be implicated either during the development of an AI or as part of its use.

As part of developing an artificial intelligence, computers are often provided with large quantities of data from which patterns and associations can be recognized (“training data”). If the training data includes personal information, the data privacy laws that govern such information may be implicated; if it does not, those laws may not apply.

Once developed, an AI can be used for a variety of purposes, some of which involve the creation or synthesis of information (“output data”). An AI can also be used to make decisions when tasked with deciding the outcome of a question (“decision making”). To the extent that an AI creates output data that will be associated with an identified or identifiable individual, that output data will be considered personal information under some data privacy laws. Similarly, to the extent an AI is used for decision making that has an impact on individuals, some data privacy laws will govern such use. If the output data does not contain personal information and the AI is not used for decision making, data privacy laws may not be implicated.

[1] This approximates the definition used within the NIST AI Risk Management Framework (AI RMF 1.0).