Artificial Intelligence and Automated Systems Legal Update (1Q22)
Client Alert | May 5, 2022
While news about artificial intelligence-related legal developments often remained buried among more pressing coverage of other major world events in the first quarter of 2022, that is not to say that nothing notable occurred. Indeed, each of the three branches of the U.S. Government took a number of significant steps toward developing more focused AI strategies, legislation, regulations, and principles of governance. As highlighted below in this quarter’s update, Congress, the Department of Defense, the Department of Energy, the intelligence community, NIST, the FTC, and the EEOC were all active in early 2022 on matters relating to AI. In addition, the EU continued this quarter to advance efforts toward a union-wide, general AI policy and regulation which, if and when ultimately adopted, seems likely to influence much of the ongoing debate in the U.S. over the need for a national approach. Meanwhile, state and local governments in the U.S. continue to fill some of the perceived gaps left by the piecemeal regulatory approach taken to date by the federal government.
Our 1Q22 Artificial Intelligence and Automated Systems Legal Update focuses on these key efforts, and also examines other policy developments within the U.S. and EU that may be of interest to domestic and international companies alike.
I. U.S. POLICY & REGULATORY DEVELOPMENTS
A. U.S. National AI Strategy
1. Department of Defense Announces Release of Joint All-Domain Command and Control Implementation Plan
On March 15, 2022, Deputy Secretary of Defense Dr. Kathleen Hicks signed the Department of Defense Joint All-Domain Command and Control (JADC2) Implementation Plan. JADC2 enables the Joint Force to “sense,” “make sense,” and “act” on information across the battlespace quickly, using automation, artificial intelligence, predictive analytics, and machine learning to deliver informed solutions via a resilient and robust network environment. The JADC2 Cross-Functional Team will oversee the execution of the JADC2 Strategy, initially announced in June 2021, and the Implementation Plan.[1]
The unclassified summary of the strategy provides six guiding principles to promote coherence of effort across the Department in delivering JADC2 improvements: “(1) Information Sharing capability improvements are designed and scaled at the enterprise level; (2) Joint Force C2 improvements employ layered security features; (3) JADC2 data fabric consists of efficient, evolvable, and broadly applicable common data standards and architectures; (4) Joint Force C2 must be resilient in degraded and contested electromagnetic environments; (5) Department development and implementation processes must be unified to deliver more effective cross-domain capability options; and, (6) Department development and implementation processes must execute at faster speeds.”[2]
The JADC2 Implementation Plan is classified but is described as “the document which details the plans of actions, milestones, and resourcing requirements. It identifies the organizations responsible for delivering JADC2 capabilities. The plan drives the Department’s investment in accelerating the decision cycle, closing operational gaps, and improving the resiliency of C2 systems. It will better integrate conventional and nuclear C2 processes and procedures and enhance interoperability and information-sharing with our mission partners.”[3]
2. Congress Works to Reconcile the America COMPETES Act (passed by the House of Representatives) with a Similar Bill: the U.S. Innovation and Competition Act (passed by the Senate)
On February 4, 2022, the House voted 222-210 to approve the America Creating Opportunities for Manufacturing, Pre-Eminence in Technology, and Economic Strength Act of 2022, or the America COMPETES Act of 2022, which would allot nearly $300 billion to scientific research and development and to improving domestic manufacturing in an effort to boost the country’s ability to compete with Chinese technology.[4] The vote set the stage for negotiations with the Senate, which passed a largely similar bill, the United States Innovation and Competition Act of 2021, on June 8, 2021.[5] House and Senate members have started discussions to resolve the differences between the bills.
Like the U.S. Innovation and Competition Act, the America COMPETES Act identifies artificial intelligence, machine learning, autonomy, and related advances as a “key technology focus area”; however, unlike the Senate bill, the America COMPETES Act does not establish a Directorate of Technology to support research and development in the key technology focus areas and does not include provisions comparable to the “Advancing American AI Act,” which was intended to “encourage agency artificial intelligence-related programs and initiatives that enhance the competitiveness of the United States” while ensuring AI deployment “align[s] with the values of the United States, including the protection of privacy, civil rights, and civil liberties.”[6]
Instead, the America COMPETES Act relies on the Director of the National Institute of Standards and Technology (NIST) “to support the development of artificial intelligence and data science, and carry out the activities of the National Artificial Intelligence Initiative Act of 2020 authorized in division E of the National Defense Authorization Act for Fiscal Year 2021.”[7] In many instances, the America COMPETES Act also incorporates artificial intelligence as an aspect of a broader research objective.[8]
3. Office of Science and Technology Policy Seeks Information Ahead of Updating the National Artificial Intelligence Research and Development Strategic Plan
The Trump Administration last released an update to the National Artificial Intelligence Research and Development (AI R&D) Strategic Plan in June 2019.[9] The plan set out eight strategic aims:
- Make long-term investments in AI research.
- Develop effective methods for human-AI collaboration.
- Understand and address the ethical, legal, and societal implications of AI.
- Ensure the safety and security of AI systems.
- Develop shared public datasets and environments for AI training and testing.
- Measure and evaluate AI technologies through standards and benchmarks.
- Better understand the national AI R&D workforce needs.
- Expand Public-Private Partnerships to accelerate advances in AI.
The National AI Initiative Act, which became law on January 1, 2021, calls for regular updates to the National AI R&D Strategic Plan to include goals, priorities, and metrics for guiding and evaluating how the agencies carrying out the National AI Initiative will:
- Determine and prioritize areas of artificial intelligence research, development, and demonstration requiring Federal Government leadership and investment;
- Support long-term funding for interdisciplinary artificial intelligence research, development, demonstration, and education;
- Support research and other activities on ethical, legal, environmental, safety, security, bias, and other appropriate societal issues related to artificial intelligence;
- Provide or facilitate the availability of curated, standardized, secure, representative, aggregate, and privacy-protected data sets for artificial intelligence research and development;
- Provide or facilitate the necessary computing, networking, and data facilities for artificial intelligence research and development;
- Support and coordinate Federal education and workforce training activities related to artificial intelligence;
- Support and coordinate the network of artificial intelligence research institutes.[10]
The Office of Science and Technology Policy, on behalf of the National Science and Technology Council’s (NSTC) Select Committee on Artificial Intelligence, the NSTC Machine Learning and AI Subcommittee, the National AI Initiative Office, and the Networking and Information Technology Research and Development National Coordination Office, is currently reviewing the input provided through public comments in order to update the strategic plan to reflect current priorities related to AI R&D.[11]
4. NIST Is Reviewing Stakeholder Input Relating to Advancing a More Productive Tech Economy to Inform a Report That Will Be Submitted to Congress
On November 22, 2021, NIST issued a Request for Information (RFI) about public and private sector marketplace trends, supply chain risks, legislation, policy, and future investment needs in eight emerging technology areas, including artificial intelligence, the internet of things, quantum computing, blockchain technology, new and advanced materials, unmanned delivery services, and three-dimensional printing. The RFI sought comments to help identify, understand, refine, and guide the development of the current and future state of technology in the identified emerging technology areas to inform a final report that will be submitted to Congress.[12] The comments, which include policy suggestions and information regarding current technological trends, are currently under review.
5. The U.S. Department of Energy (DOE) Announces the Establishment of the Inaugural Artificial Intelligence Advancement Council (AIAC)
On April 18, 2022, the U.S. Department of Energy announced the establishment of the AIAC, which will lead artificial intelligence governance, innovation, and AI ethics at the Department. Through internal and external partnerships with industry, academia, and government, the AIAC will coordinate AI activities and define the Department of Energy’s AI priorities for national and economic competitiveness and security. The AIAC members will offer recommendations on AI strategies and implementation plans in support of a broader DOE AI strategy led by the Artificial Intelligence and Technology Office.[13] Notably, the DOE also announced on March 24, 2022 that it would provide $10 million in funding for artificial intelligence research in high energy physics, supporting research that uses AI to further understanding of fundamental particles and their interactions.[14]
6. Intelligence Advanced Research Projects Activity Launches New Biometric Technology Research Program
On March 11, 2022, the Intelligence Advanced Research Projects Activity (IARPA), the research and development arm of the Office of the Director of National Intelligence, announced the Biometric Recognition & Identification at Altitude and Range (BRIAR) program, a multi-year research effort to develop new software systems capable of performing whole-body biometric identification at great heights and long ranges. The program’s goal is to enable the Intelligence Community and the Department of Defense to recognize or identify individuals under challenging conditions, such as from unmanned aerial vehicles (UAVs), at far distances, and through distortions caused by atmospheric turbulence. BRIAR research contracts have been awarded to several private companies and universities.[15]
B. Algorithmic Fairness & Consumer Protection
1. FTC Policy
a) WW International Settlement
On March 4, 2022, the FTC entered into a settlement with WW International, Inc., formerly known as Weight Watchers, and its subsidiary Kurbo, Inc. over allegations that they collected information from children through a weight loss app.[16] To resolve the FTC’s claims that it unlawfully gathered data from thousands of children, WW has agreed to pay a $1.5 million penalty and delete personal information it obtained without parental consent from underage users of its Kurbo program.
As part of the settlement, WW and Kurbo will also be required to destroy all personal information they have already gathered from minors through the Kurbo program without adequate notice or parental consent; delete any models or algorithms they have developed using this data; and ensure that, moving forward, parents receive clear and direct notice of the collection, use, and disclosure of their children’s information and are able to consent to these practices.
b) FTC Priorities
Following the WW International settlement, Commissioner Rebecca Slaughter discussed the settlement, and noted that she hoped that the FTC’s increased use of algorithmic destruction as an enforcement tool would lead to discussions between the agency and Congress with respect to legislative or rulemaking action on privacy.[17]
Commissioner Slaughter also addressed the changing landscape following the “devastating” ruling in AMG Capital Mgmt., LLC v. FTC, a 2021 Supreme Court decision that curtailed the FTC’s authority under Section 13(b) of the FTC Act to seek monetary redress for consumers.[18] She noted that the AMG ruling underscored the need for rulemaking authority, since consumers rely on the FTC to protect them and to seek redress from companies that violate the law. Several Senators have introduced bills that would give the FTC the authority to seek restitution in federal district court, but none has yet been passed.
The FTC’s recent shift in focus to rulemaking has posed a challenge for the Commission, however, as it has been operating with only a partial slate of four Commissioners, leaving it without a tiebreaker. The Senate has largely deadlocked in its votes on a fifth Commissioner but recently advanced the nomination of Alvaro Bedoya, which may allow the FTC to accelerate rulemaking if he is ultimately confirmed.
2. Algorithmic Accountability Act of 2022
The Algorithmic Accountability Act of 2022[19] was introduced on February 3, 2022 by Sen. Ron Wyden, Sen. Cory Booker, and Rep. Yvette Clarke. If passed, the bill would require large technology companies to perform bias impact assessments of any automated decision-making system that makes critical decisions in a variety of sectors, including employment, financial services, healthcare, housing, and legal services. The Act’s scope is potentially far-reaching, as it defines “automated decision system” to include “any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.” The Act is an effort to improve upon the 2019 Algorithmic Accountability Act after consultation with experts, advocacy groups, and other key stakeholders.
3. NIST
a) NIST Releases Initial Draft of a Framework for AI Risk Management
On March 17, 2022, NIST released an initial draft of its AI Risk Management Framework.[20] The Framework is “intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems.” NIST accepted public comments on this draft framework until April 29, 2022.
b) NIST Releases Update to a Special Publication Concerning Standards to Manage Algorithmic Bias
Additionally, on March 16, NIST published an update to a previously released publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270).[21] The publication seeks to encourage standards for the adoption of artificial intelligence to help minimize the risk of unintentional biases in algorithms causing widespread societal harm. The main distinction between the draft and final versions of the publication is the “new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.”[22]
C. Facial Recognition
Challenges to facial recognition technology have continued in early 2022.
Following bipartisan backlash, the U.S. Internal Revenue Service (IRS) decided in February 2022 to abandon its use of facial recognition software.[23] The IRS had intended to use the software to authenticate taxpayers’ online accounts by having users upload a video selfie. Taxpayers reported frustration with the process, and a host of security and privacy concerns were raised regarding the collection of biometric data.
In March 2022, a proposed class action was filed in Delaware federal court alleging that Clarifai, Inc. violated the Illinois Biometric Information Privacy Act (BIPA) by accessing the plaintiff’s profile photos on OKCupid and using them to develop its facial recognition technology without her knowledge or consent.[24] The complaint alleges that Clarifai has gathered biometric identifiers from more than 60,000 OKCupid users in Illinois and claims several violations of BIPA as well as unjust enrichment. The plaintiff also seeks declaratory and injunctive relief, attorney fees, and statutory damages of up to $5,000 for each violation of BIPA.
Also in March 2022, the U.S. District Court for the District of Columbia dismissed a suit challenging the U.S. Postal Service’s use of facial recognition in its Internet Covert Operations Program.[25] The plaintiff alleged that the Postal Service’s collection of personal data was unlawful because the agency failed to conduct a privacy impact assessment regarding the data collection. In addition, the plaintiff accused the Postal Service of using Clearview AI’s controversial facial recognition service. The court, however, made clear that the failure to publish a privacy impact assessment is not sufficient to create an informational injury for purposes of standing.
D. Labor & Employment
Employers will soon be subject to a patchwork of recently enacted state and local laws regulating AI in employment.[26] Our prior alerts have addressed a number of these legislative developments in New York City, Maryland, and Illinois.[27] So far, New York City has passed the broadest AI employment law in the U.S.; it governs automated employment decision tools used in hiring and promotion decisions and will go into effect on January 1, 2023. Specifically, before using such a tool in New York City, employers will need to audit it to ensure it does not result in a disparate impact based on race, ethnicity, or sex. The law also imposes posting and notice requirements for applicants and employees. Meanwhile, since 2020, Illinois and Maryland have had laws in effect directly regulating employers’ use of AI when interviewing candidates. Further, effective January 2022, Illinois amended its law to require employers that rely solely upon AI video analysis to determine whether an applicant is selected for an in-person interview to annually collect and report data on the race and ethnicity of (1) applicants who are hired, and (2) applicants who are and are not offered in-person interviews after AI video analysis.[28]
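For illustration only, the short Python sketch below shows one way the kind of disparate impact check contemplated by such bias audits might be structured, comparing selection rates across demographic groups and flagging ratios below the EEOC’s longstanding “four-fifths” guideline. The group labels, data, and 0.8 threshold are hypothetical assumptions for demonstration and are not drawn from the New York City law or its implementing rules.

```python
# Illustrative sketch only: a simplified disparate impact check for an
# automated employment decision tool. Group labels, outcomes, and the 0.8
# threshold (the EEOC "four-fifths" guideline) are assumptions, not
# requirements drawn from the NYC law.
from collections import Counter

# Hypothetical outcomes of an AI screening tool: (group, selected?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(group for group, was_selected in outcomes if was_selected)
totals = Counter(group for group, _ in outcomes)

# Selection rate per group, e.g. group_a: 3 of 4 = 0.75
rates = {group: selected[group] / totals[group] for group in totals}

# Impact ratio: each group's rate relative to the highest-rate group.
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In practice, any audit performed to satisfy the New York City law would need to follow the city’s final rules and guidance, which may define the required metrics differently.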
Washington, D.C. has also stepped into the ring with a proposed law that would prohibit adverse algorithmic eligibility determinations (based on machine learning, AI, or similar techniques) affecting an individual’s eligibility for, or access to, employment based on a range of protected traits, including race, sex, religion, and disability.[29] If passed, the law would require D.C.-based employers to conduct audits of their algorithmic determination practices, as well as provide notice to individuals about how their information will be used. As noted above in Section I.B.2, the Algorithmic Accountability Act of 2022 would also impose requirements upon employers.
The U.S. Equal Employment Opportunity Commission (EEOC) remains in the early stages of its initiative that ultimately seeks to provide guidance on algorithmic fairness and the use of AI in employment decisions.[30] Thus far, the EEOC has completed a listening session focused on disability-related concerns raised by key stakeholders.[31]
E. Privacy
The first quarter of 2022 included several interesting developments for artificial intelligence in privacy litigation. A number of lawsuits have already been filed in 2022 under Illinois’ Biometric Information Privacy Act (BIPA), which provides a private right of action. These cases suggest that BIPA will remain a focal point of AI privacy litigation.
1. Specific Personal Jurisdiction
Challenges under Rule 12(b)(2) to the forum’s exercise of personal jurisdiction over a defendant continue to be a good first option for defendants seeking an early exit from a BIPA-based lawsuit.[32] A key inquiry in BIPA cases is typically the defendant’s contacts with the forum state. Indeed, the Northern District of Illinois recently held that an Illinois plaintiff’s choice to download an app, without much more, failed to create specific jurisdiction.[33] In that case, Wemagine, a Canadian app developer, allegedly used artificial intelligence to extract a person’s face from a photo and transform it to look like a cartoon. The Gutierrez court distinguished other cases with a greater connection to Illinois, noting that the defendant was “not registered to do business in Illinois, ha[d] no employees in Illinois,” did not undertake “Illinois-specific shipping, marketing, or advertising, [n]or sought out the Illinois market in any way,” and granted dismissal.[34]
However, while this dismissal tactic may be useful, another recent case illustrates how it may offer only a temporary reprieve, at least when plaintiffs are motivated to continue the fight elsewhere. In a BIPA case filed in Illinois federal court, Clarifai, a technology company incorporated in Delaware and based in New York, allegedly accessed OKCupid dating profile images to build its facial recognition database.[35] The Northern District of Illinois held that the company’s collection of profile photos from Illinois residents and its sale of pre-trained visual recognition models to two Illinois customers did not provide sufficient contacts with the state.[36] Rather than be deterred, the plaintiff subsequently refiled the complaint in Delaware, Clarifai’s state of incorporation.[37]
2. Novel Biometrics
The BIPA litigation landscape often involves technologies that use facial recognition and fingerprints.[38] In 2021, however, the plaintiffs’ bar also began to explore the potential to use voice recordings, which have proliferated through automated business processing systems, as a foundation for BIPA lawsuits. Many of these initial lawsuits suffered from factual pleading deficiencies relating to how the business actually used the audio recordings. In such cases, plaintiffs cannot simply claim that a defendant recorded a plaintiff’s appearance or voice. Instead, they must show that the audio was used to create some “set of measurements of a specified physical component . . . used to identify a person.”[39]
The Northern District of Illinois recently emphasized this distinction as applied to audio recordings in deciding a motion to dismiss.[40] In this case, plaintiff alleged that McDonald’s “deploys an artificial intelligence voice assistant in the drive-through lanes” to facilitate food orders and violated BIPA by collecting voiceprint biometrics.[41] In assessing how the technology worked, the court noted that:
“[C]haracteristics like pitch, volume, duration, accent and speech pattern, and other characteristics like gender, age, nationality, and national origin—individually—are not biometric identifiers or voiceprints. They surely can help confirm or negate a person’s identity, but one cannot be identified uniquely by these characteristics alone . . . .”[42]
Noting some skepticism, and explicitly drawing inferences in the plaintiff’s favor, the court nonetheless held that this was enough to survive a motion to dismiss, stating that “[b]ased on the facts pleaded in the complaint . . . it is reasonable to infer—though far from proven—that Defendant’s technology mechanically analyzes customers’ voices in a measurable way such that McDonald’s has collected a voiceprint from Plaintiff and other customers.”[43]
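To illustrate the distinction the court draws between descriptive voice characteristics and a “voiceprint,” the hedged Python sketch below contrasts a single descriptive measurement (average pitch) with a voiceprint-style comparison in which a fixed-length feature vector is matched against enrolled identities. The vectors are synthetic placeholders, no real voice-processing library is used, and nothing here reflects how the technology at issue actually works or what BIPA requires.

```python
# Illustrative sketch of the difference the court describes: descriptive
# characteristics versus a measurable "voiceprint" used to identify a person.
# All vectors below are synthetic placeholders, not real audio features.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A descriptive characteristic (e.g., average pitch in Hz) may narrow the
# field but does not uniquely identify a speaker.
caller_avg_pitch_hz = 182.0  # hypothetical measurement

# A voiceprint-style workflow: compare a fixed-length feature vector derived
# from the caller's audio against vectors enrolled for known individuals.
rng = np.random.default_rng(0)
enrolled = {name: rng.normal(size=16) for name in ("alice", "bob", "carol")}
caller_vector = enrolled["bob"] + rng.normal(scale=0.05, size=16)  # synthetic sample

scores = {name: cosine_similarity(vec, caller_vector) for name, vec in enrolled.items()}
best_match = max(scores, key=scores.get)
print(f"Average pitch alone: {caller_avg_pitch_hz} Hz (descriptive, not identifying)")
print(f"Voiceprint-style match: {best_match} (score {scores[best_match]:.2f})")
```

The design point is simply that identification turns on a measurable representation tied to a specific individual, which is the inference the court was willing to draw at the pleading stage.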
For businesses subject to federal regulation, preemption arguments similar to those pled for fingerprint and facial recognition technologies may also provide a successful strategy to avoid BIPA liability for audio recordings. In another recent case, American Airlines faced a BIPA complaint for using interactive voice response software in the airline’s customer service hotline.[44] The plaintiff alleged that “American’s voice response software collects, analyzes, and stores callers’ actual voiceprints to understand or predict the caller’s request, automatically respond with a personalized response, and ‘trace’ callers’ customer interactions.”[45] In response, American argued that the Airline Deregulation Act preempted the BIPA lawsuit. The court agreed, granting the motion to dismiss on the basis of federal preemption and holding that “[because] the state-law claims directly impact American’s interactions with its customers, and directly regulate the airline’s provision of services, that state law inherently interferes with the [Airline Deregulation Act]’s purpose.”[46]
These cases indicate that the plaintiffs’ bar will continue to think of creative applications for BIPA.[47]
F. Intellectual Property
Intellectual property law has historically offered uncertain protection for AI-generated works. Authorship and inventorship requirements are perpetual stumbling blocks for AI-created works and inventions. In the United States, for example, patent law has rejected the notion of a non-human inventor. Last year, the Artificial Inventor Project and Dr. Stephen Thaler made several noteworthy challenges to this paradigm. Dr. Thaler created DABUS, the “Device for the Autonomous Bootstrapping of Unified Sentience,” an AI system that has created several inventions.[48] The project then partnered with attorneys to lodge test cases in the United States, Australia, the European Patent Office, and the UK.[49] These ambitious cases have yielded mixed results, which are likely to diverge further as AI inventorship proliferates.
Dr. Thaler’s attempt to gain protection under a copyright theory recently failed in the United States. The Copyright Review Board considered the copyrightability of a two-dimensional artwork, autonomously generated by one of Dr. Thaler’s AI systems, titled “A Recent Entrance to Paradise.” The board had previously refused to register the work in August 2019 and March 2020. In February 2022, the board rejected a second request for reconsideration and the argument that human authorship is not necessary for registration. While the specific question of copyright registration appeared to be a matter of first impression, and no express requirement of human authorship exists in the Copyright Act, the board explained that “Thaler must either provide evidence that the Work is the product of human authorship or convince the Office to depart from a century of copyright jurisprudence.”[50] The board reached back to Supreme Court decisions from 1884, which defined an “author” as “he to whom anything owes its origin,” and to a number of other sources to build a wall against the concept of non-human authorship. For now, “A Recent Entrance to Paradise” is a dead end under U.S. copyright law.
II. EU POLICY & REGULATORY DEVELOPMENTS
The European Commission’s April 2021 proposal for a Regulation on Artificial Intelligence (the “Artificial Intelligence Act”) continues to be the focus of AI-related activity in the EU. Various players, from EU Member States to European Parliament committees, are publishing suggested amendments and opinions, based on public consultations, to address perceived shortcomings of the proposed Act.
First, France assumed the Presidency of the Council of the EU in January 2022, a role formerly held by Slovenia, and has circulated additional proposed amendments to the Artificial Intelligence Act, particularly regarding the definition of “high-risk” AI systems.[51] While the current draft of the Artificial Intelligence Act treats risks to “health, safety, and fundamental rights” as “high-risk,” some Member States argue that “economic risks” should also be factored into the same category. Moreover, it has been proposed that providers of “high-risk” AI technology should be responsible for ensuring that their systems have human oversight under Article 14(4).[52] Additionally, France suggested that the Commission’s requirement under Article 10(3) that data sets be “free of errors and complete” is unrealistic, and that datasets should instead be complete and free of error to the “best extent possible,” which affords some leeway to providers of AI systems.[53] Ultimately, a consensus among all relevant actors regarding the Artificial Intelligence Act remains a long way off; indeed, some EU countries have yet to form official positions on the Act.
Second, several European Parliament committees, such as the Committee on Legal Affairs (“JURI”) and the Committee on Industry, Research and Energy (“ITRE”), have published draft opinions on the Artificial Intelligence Act. After its public consultation in February 2022, JURI published its draft opinion on 2 March 2022; the opinion focuses on balancing innovation with the protection of EU citizens, maximizing investment, and harmonizing the digital market with clear standards.[54] ITRE published its draft opinion a day later, calling for an internationally recognized definition of artificial intelligence, emphasizing the importance of fostering social trust between businesses and citizens, and flagging the need to future-proof the Artificial Intelligence Act given the onset of the “green transition” and continued advancements in AI technologies.[55] Finally, after their joint hearing on 21 March 2022, the European Parliament’s Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs, which are jointly leading negotiations on the Artificial Intelligence Act, are expected to produce a draft report in April.
Ultimately, the Artificial Intelligence Act continues to be discussed by the co-legislators, the European Parliament and the EU Member States. This process is expected to continue into 2023 before the Artificial Intelligence Act becomes law.[56]
____________________________
[1] U.S. Department of Defense, DoD Announces Release of JADC2 Implementation Plan, U.S. Department of Defense (March 17, 2022), available at https://www.defense.gov/News/Releases/Release/Article/2970094/dod-announces-release-of-jadc2-implementation-plan/.
[2] U.S. Department of Defense, Summary of the Joint All-Domain Command and Control (JADC2) Strategy, U.S. Department of Defense (March 17, 2022), available at https://media.defense.gov/2022/Mar/17/2002958406/-1/-1/1/SUMMARY-OF-THE-JOINT-ALL-DOMAIN-COMMAND-AND-CONTROL-STRATEGY.PDF.
[3] U.S. Department of Defense, DoD Announces Release of JADC2 Implementation Plan, U.S. Department of Defense (March 17, 2022), available at https://www.defense.gov/News/Releases/Release/Article/2970094/dod-announces-release-of-jadc2-implementation-plan/.
[4] Catie Edmondson and Ana Swanson, House Passes Bill Adding Billions to Research to Compete With China, New York Times (Feb. 4, 2022), available at https://www.nytimes.com/2022/02/04/us/politics/house-china-competitive-bill.html.
[5] For more information, please see our Artificial Intelligence and Automated Systems Legal Update (2Q21).
[6] H.R.4521, 117th Cong. (2021-2022); S. 1260, 117th Cong. (2021).
[7] H.R.4521, 117th Cong. (2021-2022).
[8] See id. (“In general.–The Secretary shall support a program of fundamental research, development, and demonstration of energy efficient computing and data center technologies relevant to advanced computing applications, including high performance computing, artificial intelligence, and scientific machine learning.”).
[9] For more information, please see our Artificial Intelligence and Automated Systems Legal Update (2Q19).
[10] Science and Technology Policy Office, Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan, Federal Register (Feb. 2, 2022), available at https://www.federalregister.gov/documents/2022/02/02/2022-02161/request-for-information-to-the-update-of-the-national-artificial-intelligence-research-and.
[12] National Institute of Standards and Technology, Study To Advance a More Productive Tech Economy, Federal Register (January 28, 2022), available at https://www.federalregister.gov/documents/2022/01/28/2022-01528/study-to-advance-a-more-productive-tech-economy#:~:text=The%20National%20Institute%20of%20Standards%20and%20Technology%20(NIST)%20is%20extending,Register%20on%20November%2022%2C%202021; comments available at https://www.regulations.gov/document/NIST-2021-0007-0001/comment.
[13] Artificial Intelligence and Technology Office, U.S. Department of Energy Establishes Artificial Intelligence Advancement Council, energy.gov (April 18, 2022), available at https://www.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council.
[14] Office of Science, Department of Energy Announces $10 Million for Artificial Intelligence Research for High Energy Physics, energy.gov (March 24, 2022), available at https://www.energy.gov/science/articles/department-energy-announces-10-million-artificial-intelligence-research-high.
[15] Office of the Director of National Intelligence, IARPA Launches New Biometric Technology Research Program, Office of the Director of National Intelligence (March 11, 2022), available at https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2022/item/2282-iarpa-launches-new-biometric-technology-research-program.
[16] U.S. Department of Justice, Office of Public Affairs, Weight Management Companies Kurbo Inc. and WW International Inc. Agree to $1.5 Million Civil Penalty and Injunction for Alleged Violations of Children’s Privacy Laws (March 4, 2022), available at https://www.justice.gov/opa/pr/weight-management-companies-kurbo-inc-and-ww-international-inc-agree-15-million-civil-penalty; United States v. Kurbo, Inc. and WW International, Inc., No. 3:22-cv-00946-TSH (March 3, 2022) (Dkt. 15).
[17] Rebecca Kelly Slaughter, Commissioner, Fed. Trade Comm’n, Fireside Chat with FTC Commissioner Rebecca Slaughter, Privacy + Security Forum (March 24, 2022).
[18] AMG Capital Mgmt., LLC v. FTC, 141 S. Ct. 1341 (2021).
[19] 117th Cong. H.R. 6580, Algorithmic Accountability Act of 2022 (February 3, 2022), available at https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202022%20Bill%20Text.pdf?_sm_au_=iHVS0qnnPMJrF3k7FcVTvKQkcK8MG.
[20] NIST, AI Risk Management Framework: Initial Draft (March 17, 2022), available at https://www.nist.gov/system/files/documents/2022/03/17/AI-RMF-1stdraft.pdf.
[21] NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (March 2022), available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf.
[22] NIST Press Release, There’s More to AI Bias Than Biased Data, NIST Report Highlights (March 16, 2022), available at https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights.
[23] IRS, IRS announces transition away from use of third-party verification involving facial recognition (Feb. 7, 2022), available at https://www.irs.gov/newsroom/irs-announces-transition-away-from-use-of-third-party-verification-involving-facial-recognition.
[24] Stein v. Clarifai, Inc., No. 1:22-cv-00314 (D. Del. Mar. 10, 2022).
[25] Electronic Privacy Information Center v. United States Postal Service, No. 1:21-cv-02156 (D.D.C. Mar. 25, 2022).
[26] For more details, see Danielle Moss, Harris Mufson, and Emily Lamm, Medley Of State AI Laws Pose Employer Compliance Hurdles, Law360 (Mar. 30, 2022), available at https://www.gibsondunn.com/wp-content/uploads/2022/03/Moss-Mufson-Lamm-Medley-Of-State-AI-Laws-Pose-Employer-Compliance-Hurdles-Law360-Employment-Authority-03-30-2022.pdf.
[27] For more details, see Gibson Dunn’s Artificial Intelligence and Automated Systems Legal Update (4Q20) and Gibson Dunn’s Artificial Intelligence and Automated Systems Annual Legal Review (1Q22).
[28] Ill. Public Act 102-0047 (effective Jan. 1, 2022).
[29] Washington, D.C., Stop Discrimination by Algorithms Act of 2021 (proposed Dec. 8, 2021), available at https://oag.dc.gov/sites/default/files/2021-12/DC-Bill-SDAA-FINAL-to-file-.pdf.
[30] For more details, see Gibson Dunn’s Artificial Intelligence and Automated Systems Annual Legal Review (1Q22).
[31] EEOC, Initiative on AI and Algorithmic Fairness: Disability-Focused Listening Session, YouTube (Feb. 28, 2022) available at https://www.youtube.com/watch?app=desktop&v=LlqZCxKB05s.
[32] For past examples of this tactic, see, e.g., Gullen v. Facebook.com, Inc., No. 15 C 7681, 2016 WL 245910 at *2 (N.D. Ill. Jan. 21, 2016) (holding that no specific jurisdiction existed because “plaintiff does not allege that Facebook targets its alleged biometric collection activities at Illinois residents, [and] the fact that its site is accessible to Illinois residents does not confer specific jurisdiction over Facebook.”).
[33] Gutierrez v. Wemagine.AI LLP, No. 21 C 5702, 2022 WL 252704, at *2 (N.D. Ill. Jan. 26, 2022) (“There was no directed marketing specific to Illinois, and the fact that Viola is used by Illinois residents does not, on its own, create a basis for personal jurisdiction over Wemagine.”).
[34] Id. at *3.
[35] Stein v. Clarifai, Inc., 526 F. Supp. 3d 339 (N.D. Ill. 2021).
[36] Id. at 346.
[37] Stein v. Clarifai, Inc., No. 22-CV-314 (D. Del. March 10, 2022).
[38] See, e.g., Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197 (Ill. 2019) (fingerprints); Patel v. Facebook Inc., 290 F. Supp. 3d 948 (N.D. Cal. 2018) (facial biometrics).
[39] Rivera v. Google Inc., 238 F. Supp. 3d 1088, 1096 (N.D. Ill. 2017).
[40] Carpenter v. McDonald’s Corp., No. 1:21-CV-02906, 2022 WL 897149 (N.D. Ill. Jan. 13, 2022).
[41] Id. at *1.
[42] Id. at *3 (emphasis added).
[43] Id.
[44] Kislov v. Am. Airlines, Inc., No. 17 C 9080, 2022 WL 846840 (N.D. Ill. Mar. 22, 2022).
[45] Id. at *1.
[46] Id. at *2.
[47] Other recent complaints include a lawsuit against a testing company for hand vein scans that are used to verify test taker identity (Velazquez v. Pearson Education, No. 2022-CH-00280 (Cook Co. Cir. Court Jan. 13, 2022)), AI-powered vehicle cameras that record facial geometry to monitor driver safety (Arendt v. Netradyne, Inc., No. 2022-CH-00097 (Cook Co. Cir. Court Jan. 5, 2022)), and an insurer’s use of an AI chatbot to analyze videos submitted by consumers for fraud (Pruden v. Lemonade, Inc., et al., No. 1:21-cv-07070-JGK (S.D.N.Y. Aug. 20, 2021)).
[48] The Artificial Inventor Project ambitiously describes DABUS as an advanced AI system. DABUS is a “creative neural system” that is “chaotically stimulated to generate potential ideas, as one or more nets render an opinion about candidate concepts” and “may be considered ‘sentient’ in that any chain-based concept launches a series of memories (i.e., affect chains) that sometimes terminate in critical recollections, thereby launching a tide of artificial molecules.” Ryan Abbott, The Artificial Inventor behind this project, available at https://artificialinventor.com/dabus/.
[49] Ryan Abbott, The Artificial Inventor Project, available at https://artificialinventor.com/frequently-asked-questions/.
[50] Ryan Abbott, Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Correspondence ID 1-3ZPC6C3; SR # 1-7100387071), United States Copyright Office, Copyright Review Board (Feb. 14, 2022), available at https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf (emphasis added).
[51] European Union (French Presidency), Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts Chapter 2 (Articles 8 – 15) and Annex IV Council Document 5293/22 (12 January 2022), available at https://www.statewatch.org/media/3088/eu-council-ai-act-high-risk-systems-fr-compromise-5293-22.pdf.
[52] Id.
[53] Id.
[54] European Parliament Committee on Legal Affairs, Draft Opinion on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) (2 March 2022), available at https://www.europarl.europa.eu/doceo/document/JURI-PA-719827_EN.pdf.
[55] European Parliament Committee Industry, Research and Energy, Draft Opinion on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) (3 March 2022), available at https://www.europarl.europa.eu/doceo/document/ITRE-PA-719801_EN.pdf.
[56] Chris Nuttall, EU takes lead on AI laws, Financial Times (21 April 2021), available at https://www.ft.com/content/bdbf8d8b-fdcc-410d-9d37-fec99b889f20.
The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances Waldmann, Tony Bedel, Iman Charania, Kevin Kim, Brendan Krimsky, Emily Lamm, and Prachi Mistry.
Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments. Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:
H. Mark Lyon – Palo Alto (+1 650-849-5307, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])
Please also feel free to contact any of the following practice group members:
Artificial Intelligence and Automated Systems Group:
H. Mark Lyon – Chair, Palo Alto (+1 650-849-5307, [email protected])
J. Alan Bannister – New York (+1 212-351-2310, [email protected])
Patrick Doris – London (+44 (0)20 7071 4276, [email protected])
Kai Gesing – Munich (+49 89 189 33 180, [email protected])
Ari Lanin – Los Angeles (+1 310-552-8581, [email protected])
Robson Lee – Singapore (+65 6507 3684, [email protected])
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, [email protected])
Alexander H. Southwell – New York (+1 212-351-3981, [email protected])
Christopher T. Timura – Washington, D.C. (+1 202-887-3690, [email protected])
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, [email protected])
Michael Walther – Munich (+49 89 189 33 180, [email protected])
© 2022 Gibson, Dunn & Crutcher LLP
Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.