Between a Rock and a Hard Place: the Pending Collision of Privacy Regulation and Artificial Intelligence in the EU and U.S.

Newsletter - TerraLex Connections

John Kennedy*

Advances in artificial intelligence (“AI”) and machine learning (“ML”) are driving a boom across many industries in automated, self-learning systems and services.  AI has been defined as “computer systems that think and act like humans, and think and act rationally.”[1]   Machine learning has been defined as “the study of computer algorithms that improve automatically through experience.”[2]  Applications of AI and ML today range from sentencing systems used by criminal court judges, to ride sharing apps such as Uber and Lyft, to self-driving cars, and to digital assistants such as Apple’s Siri and Amazon’s Alexa.  How existing data protection laws will address the data privacy impact of such systems is now a focus of intense discussion by policy makers in the European Union (“EU”) and the U.S.  Not surprisingly in light of the history of EU data protection law, the EU is taking the more proactive regulatory approach for now.

The newly effective EU General Data Protection Regulation (“GDPR”) addresses head-on some of the privacy impact of AI through enhanced data controller obligations and expanded data subject rights that specifically focus on automated decision-making and profiling, activities that are increasingly driven by AI technologies and machine-learning algorithms.  The regulatory response to AI in the U.S. currently stands in sharp contrast to the GDPR.  While the ongoing U.S. policy and academic debate over protecting privacy in the context of AI is robust, virtually no new or modified laws or regulations are on the books on this subject.  For the time being, U.S. regulators and AI market participants will need to assess AI’s privacy risks using the existing patchwork quilt of U.S. data privacy law.

This article looks at the contrasting approaches to mitigating AI and ML-driven privacy risks in the EU and U.S. and offers some practical suggestions on privacy risk mitigation for organizations that may be commercially deploying these technologies in both jurisdictions. 

The GDPR’s Approach to AI and Machine Learning Challenges to Data Privacy

The GDPR directly addresses AI and machine learning (as well as other features of ‘Big Data’, such as predictive analytics) in two major contexts: first, rules that are directed to decision-making processes that are 100% automated (i.e., there is no meaningful human intervention in the process that leads to a decision relating to an individual data subject); second, rules that are directed to processing that is partly, but not entirely, automated.   In each case, these rules also address “profiling”, which the GDPR defines as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movement.”[3] 

Regarding the first category of processing (fully automated decision-making using personal data), the GDPR establishes several substantive and procedural rights for data subjects, and places a basic prohibition (with exceptions) on data controllers.  Data subjects are entitled to certain disclosures about fully automated decision-making involving their personal data.  These information rights[4] include disclosures of:

  • The existence of automated decision-making;
  • Meaningful information about the logic involved in the automated decision-making; and
  • The significance and envisaged consequences of the automated decision-making.

The second and third rights above will likely present significant challenges to data controllers.  Recent Article 29 Working Party guidance states that explanations of algorithmic logic should be “simple” and should include disclosure of (i) the main types of data relied upon in the logic, (ii) the sources and relevance of these data types, and (iii) assurances that the controller regularly tests the underlying scoring or decision-making methods to ensure they remain fair and unbiased.[5]  In addition, data controllers must disclose the “significance and envisaged consequences” of intended or future automated processing.  These disclosures should provide “real, tangible” examples of possible effects on the data subject, with visual aids showing how different data subject behaviors might affect the outcome (e.g., apps that illustrate the impact on insurance premiums of different driver behaviors analyzed in the processing).[6]
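
For illustration only, a controller might capture these recommended disclosure elements (data types, sources, relevance, envisaged consequences, and fairness-testing assurances) in a structured internal record and generate a plain-language notice from it.  The minimal Python sketch below assumes a hypothetical loan-scoring application; every field name and example value is invented for illustration and is not model language drawn from the GDPR or the Working Party guidance.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AutomatedDecisionDisclosure:
        # Internal record of the disclosure elements described in the guidance.
        # All example values supplied below are hypothetical.
        data_categories: List[str]   # main types of data relied upon in the logic
        data_sources: List[str]      # where those data types come from
        relevance: str               # why the data are relevant to the decision
        envisaged_consequences: str  # "real, tangible" examples of possible effects
        testing_assurance: str       # how the method is kept fair and unbiased

        def plain_language_notice(self) -> str:
            # Render a short, simple notice suitable for a layered privacy notice.
            return (
                f"We use {', '.join(self.data_categories)} "
                f"(obtained from {', '.join(self.data_sources)}) to make this decision. "
                f"{self.relevance} Possible consequences: {self.envisaged_consequences} "
                f"{self.testing_assurance}"
            )

    disclosure = AutomatedDecisionDisclosure(
        data_categories=["payment history", "current account balances"],
        data_sources=["your application form", "a licensed credit bureau"],
        relevance="These data help us estimate the likelihood of repayment.",
        envisaged_consequences="your application may be approved, referred for review, or declined.",
        testing_assurance="We re-test the scoring model quarterly for accuracy and bias.",
    )
    print(disclosure.plain_language_notice())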

The GDPR buttresses these informational rights with a broad prohibition on data controllers engaging in decision-making that is based solely on automated processing (including profiling) and that produces “legal effects” or similarly significant effects on data subjects.[7]  Significant exceptions qualify this broad prohibition on automated decision-making based on personal data, including processing that is necessary for entering into or performing a contract with the data subject, or processing based on the data subject’s explicit consent.  These exceptions, however, are to be construed narrowly according to the Article 29 Working Party’s guidance on the subject.[8]  Consent to automated decision-making, for example, must be given through affirmative acts that are “explicit” and must go beyond non-specific, generalized consent to processing.[9]

Even where such exceptions are present, the GDPR requires that certain transparency and due process safeguards be present, namely, (i) disclosures to the data subjects concerning the presence of such automated processing; (ii) the right of the data subject to obtain human intervention in the automated processing; (iii)  the right of the data subject to express his or her point of view on the processing in question; (iv) the right of the data subject to obtain an explanation of the decision making after the automated assessment of the data subject; and (v) the right of the data subject to challenge the fully-automated decision.[10]

In situations where automated decision-making (including profiling of data subjects) is in use but is not 100% automated (i.e., there is meaningful human involvement in the decision-making), the GDPR establishes rights for informing data subjects of the existence, purposes and consequences of the profiling[11] and the right to object to profiling when it is being used for direct marketing purposes.[12]


There is ongoing debate over whether these provisions of the GDPR create a “right to an explanation” of how an automated decision-making process resulted in a particular output for a particular data subject, e.g., denial of her application for a loan or her ineligibility for discounted car insurance.[13]  But at least one pertinent guidance document issued by the Article 29 Working Party strongly suggests that the GDPR does create a right for data subjects to be informed about the algorithmic logic of automated decisions and about the potential consequences of those decisions, especially where the controller relies on the data subject’s consent as the lawful basis of the processing.[14]


The requirement to provide data subjects with “meaningful information about the logic involved” is especially challenging in the context of  ‘deep learning’ ML algorithms, which are designed to learn autonomously over time from processing large amounts of data and to modify their operations as a result of this ‘self-reinforced’ learning.  AI technologies based on “deep learning” techniques can present significant technical obstacles to privacy concepts such as transparency and consent because these technologies may learn in ways that even their engineers cannot fully explain or reverse engineer.[15]  How a particular decision or prediction was made about a data subject—the ‘logic’ used—may not be evident or discoverable and therefore not explainable.  As one commentator has noted, the GDPR requirement to provide an explanation of AI logic, coupled with the inherent opaqueness of some deep learning algorithms, may place data controllers between the proverbial rock and a hard place.[16] As the GDPR enters the real world of implementation and enforcement this Spring, data controllers and data protection authorities alike will be looking for ways to get more comfortable in this particular spot.
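
To make the difficulty concrete: for a simple, transparent scoring model, the per-factor contributions to a particular decision can be computed and disclosed directly, as in the minimal Python sketch below (all feature names, weights and the decision threshold are hypothetical and chosen purely for illustration).  For a deep learning model there may be no comparably clean decomposition to disclose, which is precisely the bind described above.

    # Minimal sketch: surfacing the 'logic' behind one automated decision made by a
    # simple, transparent linear scoring model.  Deep learning models generally do
    # not decompose this cleanly.  All names, weights and the threshold are hypothetical.
    WEIGHTS = {"income": 0.4, "existing_debt": -0.6, "years_at_address": 0.2}
    THRESHOLD = 0.5  # a score at or above this value means the application is approved

    def score_and_explain(applicant: dict) -> dict:
        # Return the decision plus each factor's contribution to the overall score.
        contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
        total = sum(contributions.values())
        return {
            "score": round(total, 3),
            "decision": "approved" if total >= THRESHOLD else "declined",
            # Largest-magnitude contributions first: candidate content for the
            # "meaningful information about the logic" disclosed for this decision.
            "main_factors": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
        }

    # Hypothetical applicant with feature values normalized to a 0-1 scale
    print(score_and_explain({"income": 0.7, "existing_debt": 0.5, "years_at_address": 0.4}))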


The U.S. Privacy Law Landscape for AI in Comparison

Existing U.S. data privacy law provides only limited counterparts to the GDPR’s array of rights and obligations regarding automated decision-making and profiling.  Consumer rights of notice, access, and correction of data inaccuracies are well-established under federal financial and consumer report privacy statutes such as, respectively, the Gramm-Leach-Bliley Act (“GLBA”) and the Fair Credit Reporting Act (“FCRA”).[17]  Similarly, federal healthcare data privacy law includes patient rights of notice, access, usage information and correction as to their personal health information.[18]  California’s “Shine the Light” law enables California residents to obtain information on how companies that collect their personal information share it with third parties.[19]  But apart from these and several other examples, consumers do not have broad statutory rights to demand anything like the required GDPR disclosures from organizations that profile them using automated processing, including AI and machine learning.  This gap has been found by the Federal Trade Commission (“FTC”) to be especially large with respect to the growing data broker industry.[20]


This gap in U.S. data privacy law’s response to the ‘algorithm society’ is the subject of numerous policy studies, white papers and proposed legislation.  In 2012, the Obama Administration released a white paper[21] calling for Congressional enactment of a ‘consumer privacy bill of rights’.  Key principles in that white paper partly align with many of those contained in the GDPR (e.g., transparency, individual control over data, access to and accuracy of data, etc.).  (No legislation has come of that white paper to date.)  Several years later the FTC issued a report calling for increased scrutiny of the impact of ‘Big Data’ on consumer rights.[22] The FTC did not call for any new legislation but did note that its enforcement powers under the FCRA and Section 5 of the FTC Act (relating to unfair and deceptive practices in interstate commerce) can be applied to consumer harms associated with abuses of profiling and data analytics.  And in 2016, the outgoing Obama Administration issued a report on the anticipated impacts of a future with widespread AI technologies[23], but this was not accompanied by any specific legislative recommendations directed to adapting U.S. privacy law to these technologies.


As with so much of U.S. privacy law, substantive policy and regulatory activity on Big Data, AI and machine learning is largely developing along industry sector lines, such as policy initiatives in autonomous vehicles[24], medical devices[25] and healthcare.[26]  Although data privacy is on the agenda of these and similar industry-specific efforts, no detailed or comprehensive legislative or regulatory proposals have emerged on how U.S. privacy law should adjust to widespread private sector use of AI technologies.  The most comprehensive legislation pending in Congress on AI is essentially a bi-partisan call for the Department of Commerce to establish a federal advisory commission on the full potential and societal impact of artificial intelligence.[27]  Part of the advisory commission’s charge would be to study “how the privacy rights of individuals are or will be affected by technological innovation relating to artificial intelligence.”  The legislative contrast on AI between the GDPR and “The FUTURE of Artificial Intelligence Act of 2017” could hardly be greater.

In practice, however, U.S. data privacy law is not a complete blank slate when it comes to the privacy issues associated with automated decision-making and profiling.  The concept of reasonable and accurate notice of data practices of data collectors is well-established in state and federal financial privacy law[28], as well as in more than a decade of FTC privacy enforcement actions[29]. The principle that consent to data practices shall not be obtained in a deceptive or misleading manner is also well established in U.S. privacy law.[30]  Secondary uses of personal data which are unrelated to the purposes for which the data was collected are also restricted under several U.S. statutes.[31]  And, as noted above[32], rights of access and correction are well-established in certain major federal privacy statutes.  A good number of the rights and obligations now codified in the GDPR regarding automated decision-making and profiling can be found scattered throughout the larger thickets of U.S. privacy law.  But U.S. law appears to be some distance from consolidating these principles, either on an industry-by-industry basis or through a comprehensive framework.  If the GDPR is putting controllers who use AI technologies between a rock and a hard place, current U.S. privacy law leaves AI users wandering in a forest full of hidden bogs.


Practical Risk Mitigation Considerations for AI Users in Both the EU and the U.S.

There are obviously sharp contrasts between the centralized regulatory approach to the privacy challenges posed by AI/ML in the EU and the cautious, sectoral and incremental approach being taken in the U.S.  But until the contrasting legal frameworks for data privacy in the ‘algorithmic society’ mature (or possibly even converge), businesses that are embracing AI and machine learning using personal data should consider some basic compliance practices that will likely serve them in the interim on either side of the Atlantic.  These include:

  • Clearly address informational rights:  Provide reasonable and accessible information on planned and anticipated AI/machine learning processing, especially when the use of personal data will be legally based on consent.  This may include:
      • User-friendly notices that automated processing of personal data will take place.  “User-friendly” may include layered or heightened notice measures designed to alert data subjects to the use of automated processing; visualization, icons and/or  interactive presentations describing (i) how and with whom the organization shares personal data; and (ii) whether personal data will be used by any third parties for profiling and decision-making based on the profiles;
      • “Meaningful information” about the logic involved in the automated processing and the possible consequences of its application or outputs, including (i) the types of data being used in any profiling and the sources of that data, (ii) how the profile will be used in any automated decision-making, and (iii) the relevance and use of the profiling in the automated decision-making process.
  • Obtain unambiguous consent when consent is the basis of the processing:  Pay particular attention to the clarity and completeness of the information provided in order to assure the validity of any consents obtained.  Although the GDPR standard for consent in connection with automated processing (“explicit consent”) is higher than that generally applied in the U.S. in consumer e-commerce, regulators in the U.S. have increasingly pounced on the use of implied or default “consent” to murky or buried disclosures of material privacy practices.[33]  The Article 29 Working Party recommendations for obtaining compliant consents include (i) granular consents tied to specific types of processing, rather than a broad general consent to multiple categories of processing, (ii) ‘just in time’ consents from data subjects near to but prior to any new processing or further processing of personal data, and (iii) clearly and distinctly informing data subjects that their consent can be withdrawn and making that process readily available.[34]
  • Make ‘digital’ due process rights available:  Again, the GDPR-mandated rights of access[35], rectification[36] and objection to processing[37] are not generally found in U.S. data privacy law except, for example, in the use of consumer reports and other financial information, processing personal health information, and disclosure of educational records.  Nevertheless, the practice of enabling users to access and correct their personal information has been widely embraced (though not without controversy) by the large U.S.-based technology platforms (e.g., Apple, Google, Facebook).
  • Adopt appropriate safeguards for automated processing and profiling:  Even where automated processing (including profiling) is permissibly conducted under the GDPR, various safeguards are either required in the regulation or strongly encouraged (an illustrative sketch of two of these measures follows this list).  These include:
      • Ongoing quality assurance and “algorithmic auditing” measures to confirm that automated processes are not producing harmful effects on data subjects, whether through inaccuracies, discriminatory effects or other adverse results;
      • Where possible given the nature and goals of the processing, using anonymized or pseudonymized data in profiling or otherwise minimizing the use of personal data;
      • Including meaningful components of human intervention and/or review in the processing, together with an administrative process for responding to consumer inquiries and complaints; and
      • Adopting and implementing industry or government-recognized certification standards, codes of conduct or ethical review procedures for automated processing and profiling.
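
As referenced in the list above, the minimal Python sketch below illustrates two of these safeguards: replacing direct identifiers with keyed pseudonyms before any profiling or analysis, and a basic “algorithmic audit” that compares outcome rates across a monitored attribute.  The data, the keying scheme and the group labels are hypothetical; neither the GDPR nor the Working Party guidance prescribes any particular implementation.

    import hashlib
    import hmac
    from collections import defaultdict

    # Hypothetical secret key; in practice it would be generated, rotated and
    # stored separately from the pseudonymized data set.
    SECRET_KEY = b"store-and-rotate-this-key-separately"

    def pseudonymize(identifier):
        # Replace a direct identifier with a keyed hash.  This is pseudonymization,
        # not anonymization: a controller holding SECRET_KEY can still re-link the data.
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    def outcome_rates_by_group(decisions):
        # Basic "algorithmic audit": approval rate per group for a monitored attribute.
        totals, approved = defaultdict(int), defaultdict(int)
        for decision in decisions:
            totals[decision["group"]] += 1
            approved[decision["group"]] += decision["approved"]
        return {group: approved[group] / totals[group] for group in totals}

    # Hypothetical decision log, pseudonymized before any analysis or profiling.
    log = [
        {"subject": pseudonymize("alice@example.com"), "group": "A", "approved": 1},
        {"subject": pseudonymize("bob@example.com"), "group": "A", "approved": 1},
        {"subject": pseudonymize("carol@example.com"), "group": "B", "approved": 0},
        {"subject": pseudonymize("dan@example.com"), "group": "B", "approved": 1},
    ]
    print(outcome_rates_by_group(log))  # a large gap between groups would prompt human review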

[1]  “Artificial Intelligence”, NIST website, available at https://www.nist.gov/topics/artificial-intelligence, citing Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd Edition) (Essex, England: Pearson, 2009).

 

[2]   IEEE Systems, Man and Cybernetic Society website, available at http://www.ieeesmc.org/technical-activities/cybernetics/machine-learning

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (“GDPR”), Article 4(4).

[4] GDPR, Articles 13(2)(f), 14(2)(g) and 15(1)(h), each referring to the automated decision-making described in Article 22(1) and (4).

[5] “Guidelines on Automated individual decision-making and Profiling for the purposes of the Regulation, 2016/679”, Section II (D)(1).  Article 29 Data Protection Working Party, 17/EN, WP 251, Adopted October 3, 2017. 

[6] Id.

[7] GDPR, Article 22(1).

[8] “Guidelines on Automated individual decision-making and Profiling for the purposes of the Regulation, 2016/679”, Section II (3)(C).  Article 29 Data Protection Working Party, 17/EN, WP 251, Adopted October 3, 2017. 

[9] Id.

[10] GDPR, Article 22(3) and Recital 71.

[11] GDPR, Articles 13, 14 and 15.

[12] GDPR, Article 21.

[13] A. Burt, “Is There a ‘Right to Explanation’ for Machine Learning in the GDPR?”, IAPP Privacy Tech, June 1, 2017, available at https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

[14] “Guidelines on Automated individual decision-making and Profiling for the purposes of the Regulation, 2016/679”, Section II (D)(1).  Article 29 Data Protection Working Party, 17/EN, WP 251, Adopted October 3, 2017. 

[15] W. Knight, “The Dark Secret at the Heart of AI”, MIT Technology Review, April 11, 2017, available at https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[16] J. Janosik, “Transparency of Machine Learning is a Double-Edged Sword”, November 13, 2017, available at https://www.welivesecurity.com/2017/11/13/transparency-machine-learning-algorithms/

[17] 15 U.S.C. § 6801 et seq.; 15 U.S.C. § 1681 et seq.

[18] 45 CFR Part 164.

[19] Cal. Civ. Code § 1798.83.

[20] See generally, “Data Brokers: A Call for Transparency and Accountability”, Federal Trade Commission, May 2014, available at https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

[21] “Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Data Economy”, The White House, February 2012, available at https://obamawhitehouse.archives.gov/sites/default/files/privacy-final.pdf

[22] “Big Data: A Tool for Inclusion or Exclusion,” Federal Trade Commission Report, January 2016, available at https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf

[23] “Preparing for the Future of Artificial Intelligence”, Executive Office of the President, National Science and Technology Council, Committee on Technology, October 2016, available at https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

[24] See U.S. Department of Transportation, “Automated Vehicle Activities”, at https://www.transportation.gov/AV.

[25] See U.S. Food and Drug Administration, “Digital Health Devices”, at https://www.fda.gov/MedicalDevices/DigitalHealth/ucm575766.htm

[26] “Hype to Reality: How Artificial Intelligence (AI) Can Transform Health and Healthcare”, Health IT Buzz, January 17, 2018, available at https://www.healthit.gov/buzz-blog/interoperability/hype-reality-artificial-intelligence-ai-transform-health-healthcare/

[27] S. 2217, “The FUTURE of Artificial Intelligence Act of 2017”, introduced in the U.S. Senate Dec. 12, 2017.

[28] 15 U.S.C. § 6801 et seq.; 15 U.S.C. § 1681 et seq.

[29] See, e.g., In the Matter of Sears Holdings Management Corporation, Complaint, FTC File No. 082 3099, at https://www.ftc.gov/enforcement/cases-proceedings/082-3099-c-4264/sears-holdings-management-corporation-corporation

[30] See, e.g., 5 U.S.C. § 552a(b) (U.S. Privacy Act consent requirement for disclosure of government records); 15 U.S.C. §§ 1681b(b) and 1681b(g) (consent requirements under the FCRA for use of consumer reports for employment purposes or of reports containing medical information, respectively).

[31] See, e.g., Privacy Act, 5. U.S.C. § 552a(e)(3)(B); Driver’s Privacy Protection Act, 18 U.S.C. § 2722(a); Gramm-Leach-Bliley Act, 15 U.S.C. § 6802(c). 

[32] See supra notes 17 and 18.

[33] See, e.g., “Mobile Privacy Disclosures: Building Trust Through Transparency”, FTC Staff Report, February 2013, available at https://www.ftc.gov/sites/default/files/documents/reports/mobile-privacy-disclosures-building-trust-through-transparency-federal-trade-commission-staff-report/130201mobileprivacyreport.pdf; “Making Your Privacy Practices Public”, California Department of Justice, May 2014, available at https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf

[34] “Guidelines on Automated individual decision-making and Profiling for the purposes of the Regulation, 2016/679”, Annex I.  Article 29 Data Protection Working Party, 17/EN, WP 251, Adopted October 3, 2017.

[35] GDPR, Article 15.

[36] GDPR, Article 16.

[37] GDPR, Article 21(1), (2).

 


 

*John Kennedy is a partner in Wiggin and Dana's Corporate Department, a member of the Outsourcing and Technology Group, and co-chair of the Privacy and Information Security Group.

New Haven, CT 
Thursday, May 31, 2018
Privacy / Use of Personal Information