Senate Judiciary Committee Seeks Guidance on Effective AI Regulation
Client Alert | August 25, 2023
Gibson Dunn’s Public Policy Practice Group is closely monitoring the debate in Congress over potential oversight of artificial intelligence (AI). We have previously summarized major federal legislative efforts and White House initiatives regarding AI in our May 19, 2023 alert Federal Policymakers’ Recent Actions Seek to Regulate AI. We have also covered two U.S. Senate hearings that focused on AI in our June 6, 2023 alert “Oversight of AI: Rules for Artificial Intelligence” and “Artificial Intelligence in Government” Hearings.
On July 25, 2023, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held the second in a series of “Oversight of AI” hearings, following its May 16, 2023 hearing on “Rules for Artificial Intelligence,” with a focus on “Principles for Regulation.”[1] A bipartisan group of Senators, led by Chair Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), emphasized the urgent need for AI legislation in the face of rapidly advancing AI technology, including generative algorithms and large language models (LLMs).
Witnesses included:
- Stuart Russell, Professor of Computer Science, University of California, Berkeley;
- Yoshua Bengio, Founder and Scientific Director, Mila – Québec AI Institute; and
- Dario Amodei, Chief Executive Officer, Anthropic.
I. Points of Particular Interest from July 25, 2023 Hearing
We provide a full hearing summary and analysis below. Of particular note, however:
- Chair Blumenthal opened the hearing by noting that, when speaking to constituents about AI, the word he heard most often was “scary.” He pointed to the hearing’s witnesses as “provid[ing] objective, fact-based views to reinforce those fears.” Even while acknowledging these fears, including the existential threats AI may pose, Chair Blumenthal continued to emphasize AI’s enormous potential for good and reiterated the need to avoid stifling innovation and to maintain U.S. leadership in the AI sector.
- Both Chair Blumenthal and Ranking Member Hawley extolled the rare bipartisan support for AI regulation. In particular, they highlighted their recent introduction of a bill to waive immunity under Section 230 of the Communications Act of 1934 for claims related to generative AI, following the Subcommittee’s May 16 discussion of whether such immunity should apply to actors in the AI sector.[2]
- Senator Amy Klobuchar (D-MN) emphasized the need to act quickly to capitalize on this bipartisan appetite for AI regulation and avoid “decay[ing] into partisanship and inaction.”
- Ranking Member Hawley emphasized AI’s potential impact, questioning whether it will be an innovation more like the Internet or the atom bomb. He framed the question facing society, and Congress specifically, as whether Congress will “strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country.”
- The subcommittee and its witnesses invoked recent efforts by the White House to secure voluntary commitments from leading AI companies—including Mr. Amodei’s company, Anthropic—to safeguard against key risks.[3] However, Chair Blumenthal stated that many of these commitments are unenforceable and relatively unspecific. He emphasized that this hearing, in contrast, sought to develop legislation and regulations that would impose specific, enforceable obligations on actors in the AI sector.
II. Alleged Risks of Particular Concern
In his opening statement, Ranking Member Hawley commented that he had no doubt that AI will be good for large companies, but that he was less confident that AI would be good for the American people. Much of the hearing, therefore, discussed key areas of risk and alleged harms posed by unregulated AI.
The witnesses testifying before the subcommittee typically divided these risks between immediate or short-term risks that AI may already present—such as privacy concerns, copyright issues, alleged bias in algorithms, and possible misinformation—and medium- or long-term risks that may emerge as AI technology advances. The witnesses emphasized the need for Congress to act urgently to prevent these longer-term risks from materializing. Professor Bengio drove home this need, explaining that many AI experts had previously “placed a plausible timeframe for [the] achievement of [human-level AI] somewhere between a few decades and a century” but now considered “a few years to a couple of decades” to be the appropriate estimate.
Throughout the hearing, senators focused on a number of short-term and longer-term risks, primarily relating to: (i) misinformation and political influence, (ii) national security, (iii) privacy, and (iv) intellectual property.
a. Misinformation and Political Influence
As in the subcommittee’s previous hearing, concerns about misinformation—particularly in the context of elections—took center stage, with Chair Blumenthal noting that “[i]f there’s nothing else that focuses the attention of Congress, it’s an election.” Both the lawmakers and witnesses highlighted risks associated with “deep fakes” and other forms of misinformation or external influence campaigns using AI.
While Mr. Amodei noted that his company, Anthropic, trains its AI not to generate misinformation or politically biased content, Ranking Member Hawley pushed back on the idea that AI companies can be trusted to police these lines in the face of business pressures, commenting that certain decisions about ethics may be “in the eye of the beholder.” Ranking Member Hawley expressed that, in his view, the control that a relatively small number of companies exercise over the AI sector creates a “serious structural issue” regarding who makes decisions about ethics and misinformation.
Other lawmakers and witnesses echoed Ranking Member Hawley’s concerns about the difficulty of policing AI-generated misinformation, with Senator Klobuchar noting the need to comply with the First Amendment’s protections for free speech and Professor Russell invoking the Orwellian specter of a “Ministry of Truth.” Professor Russell proposed, however, that Congress could look to other highly regulated industries, such as banking and credit cards, for guidance on how to balance effective and truthful disclosure requirements with free speech concerns.
b. National Security
Concerns about AI’s implications for national security permeated the hearing, with Ranking Member Hawley listing this issue as one of his top four priorities.
Some of these alleged national security risks related to the use of lethal weapons. For example, Chair Blumenthal noted agreement between the U.S. and China on limiting certain uses of AI in connection with nuclear weapons, and Professor Russell echoed the popular position against the creation of lethal autonomous weapons systems (LAWS) with the ability to kill in the absence of direction or input from a human actor.
Mr. Amodei specifically addressed concerns that AI could enable malicious actors to develop sophisticated biological weapons. His company had, he explained, conducted a six-month study that found that current AI systems are capable of filling in some, but not all, steps in the highly technical process of developing biological weapons. The study extrapolated, however, that AI systems may be able to fill in all steps of these processes within two to three years, allowing malicious actors who lack specialized expertise to weaponize biology.
Ranking Member Hawley and Mr. Amodei also discussed national security risks that could arise if the U.S. fails to secure the AI supply chain, with Mr. Amodei noting the significant number of bottlenecks that currently exist in the semiconductor manufacturing process. (For a more detailed discussion of U.S. efforts to secure the semiconductor supply chain, see our previous client alerts on the implementation of the CHIPS Act, here and here.) Ranking Member Hawley expressed particular concern about the U.S.’s reliance on Taiwan-origin chips, in light of the possibility of a Chinese invasion of Taiwan.
Some witnesses, however, provided comfort by emphasizing the leadership of the U.S. and its allies in the AI sector. When asked about the AI capabilities of U.S. adversaries, Professor Russell indicated that the U.S., the UK, and Canada currently have the most advanced AI technology in the world whereas, in his view, China’s capabilities have been “slightly overstated.” While he acknowledged China’s extensive investments in AI and its strength in voice and face recognition technology, he suggested that numerical publication requirements on China’s academic sector have limited the country’s ability to produce technological breakthroughs.
c. Privacy
Several senators raised the potential privacy risks that could result from the deployment of AI, often invoking Congress’s perceived failure to address the privacy implications of social media proactively to emphasize an urgent need for AI guardrails. Ranking Member Hawley pointed out that these privacy concerns, if left unchecked, could exacerbate other alleged risks if, for example, an AI program gained access to voter files and used them to target certain voters with misinformation.
Senator Marsha Blackburn (R-TN) stressed her view that the U.S. is “behind” its allies on issues of online consumer privacy, pointing to the digital privacy regimes of the EU, UK, New Zealand, Australia, and Canada as examples. Specifically, she expressed concern that consumers’ personal data was being used to train AI systems, often without their knowledge. Senator Blackburn queried whether a federal privacy standard would help address this concern without interfering with the U.S.’s position as a global leader in generative AI. Professor Russell supported a federal standard, including an absolute requirement to disclose whether systems are harvesting data from users’ conversations.
Mr. Amodei pointed to his own company as an example of how to navigate these concerns, explaining that it relies primarily on publicly available information to train its AI and that the program is trained not to produce results containing certain types of private information.
d. Intellectual Property
Senator Blackburn emphasized the profound impact that unregulated AI could have on the creative sector, suggesting that AI is “robbing [authors, actors, and musicians] of their ability to make a living off of their creative work.” Senator Blackburn queried whether artists whose artistic creations are used to train algorithms are or will be compensated for the use of their work. Professor Russell agreed that existing intellectual property laws may not always be sufficient to address these concerns.
III. Key Regulatory Proposals
As indicated by the hearing’s title, “Principles for Regulation,” the lawmakers and witnesses focused not just on potential risks associated with AI but also on concrete policy measures that could mitigate those risks.
At the end of the hearing, Ranking Member Hawley asked each witness for “one or two recommendations for what Congress should do right now” to regulate the AI industry.
- Professor Russell would establish a federal agency tasked with regulating the AI sector. He would also remove from the market any AI systems that violate a designated set of “unacceptable” behaviors associated with the risks and alleged harms discussed above. Professor Russell described this latter recommendation not just as protecting consumers but also as incentivizing companies to conduct rigorous research and testing to ensure their products are effective and controllable before putting them on the market.
- Professor Bengio would invest in the safety of AI systems, both at the hardware level and through cybersecurity measures, via a mix of direct investments and incentives for companies. Professor Bengio emphasized that U.S. investment in AI safety should be “at or above the level of investment” that goes into developing AI programs.
- Mr. Amodei would develop rigorous testing and auditing regimes for the AI sector, stressing that “without such testing, we’re blind” to the capabilities and future risks that AI may pose. He also reiterated the importance of an enforcement mechanism for these measures, although he was agnostic on whether that should come from a new federal agency or from existing authorities.
Beyond these high-level recommendations, the subcommittee and witnesses discussed the following regulatory and legislative measures: (a) a potential federal AI agency and auditing regime, (b) labeling and watermarking requirements for information generated by AI, (c) limitations on the release of pre-trained, open-source AI models, and (d) the creation of private rights of action authorizing lawsuits against AI companies.
a. Federal AI Agency
Building on conversations from the subcommittee’s May 16, 2023 hearing, the lawmakers and witnesses discussed the possibility of a new federal agency focused on regulating AI, with Chair Blumenthal stating that he had “come to the conclusion that we need some kind of regulatory agency” focused on AI. Chair Blumenthal stressed that this should not be a “passive body” and should instead invest proactively in research to develop countermeasures that can effectively address potential AI risks.
While Chair Blumenthal was the only subcommittee member whose questions focused expressly on the creation of a new federal agency, the witnesses voiced support for the idea throughout the hearing. Noting the rapid development of new AI technology, Professor Bengio observed that legislation alone will be insufficient to mitigate future AI risks. “We don’t know yet” what regulations might be necessary in one, two, or three years, Professor Bengio commented, and “having an agency is a tool toward [the] goal” of responding agilely to evolving technology. Mr. Amodei also agreed with Chair Blumenthal that this new agency should play a proactive role in researching countermeasures, rather than simply responding to new risks. Mr. Amodei stressed that centralizing research efforts in a new agency, or even through the Department of Commerce’s National Institute of Standards and Technology (NIST), would allow the U.S. to create consistent standards against which to measure the risks and benefits associated with AI.
The witnesses believed that one centralized agency focused on AI would provide benefits beyond streamlined domestic implementation of AI regulation. For example, Professor Bengio was of the view that a single agency could better coordinate with the U.S.’s international allies, allowing the U.S. to speak with a single voice while advocating for global standards.
The witnesses emphasized, however, that a federal agency will not, by itself, be sufficient to tackle AI-related risk. Professor Russell observed that “no government agency” will be able to match the tremendous resources—which he estimated at more than $10 billion—that the private sector invests in the creation of AI systems. He suggested that his proposal for involuntary recall provisions could bridge this gap by incentivizing robust testing of AI models by the private sector before these models are released commercially.
b. Labeling and Watermarking Requirements
One of the most frequently mentioned methods of tackling AI-generated misinformation was a regime of labeling and watermarking materials produced by an AI system. As Mr. Amodei and Professor Russell explained, labeling requirements would mandate, as a matter of policy, that AI outputs be clearly identified as AI-produced wherever they are published; watermarking, on the other hand, is a technical measure by which the provenance of both original and AI-generated content can be established. Professor Russell emphasized the need for international coordination to avoid fragmented enforcement, suggesting the creation of an encrypted global escrow system capable of verifying the provenance of any piece of media uploaded to the system.
Chair Blumenthal noted a growing bipartisan consensus on this issue and stressed that labeling and watermarking would be necessary to address election-related misinformation. Senator Klobuchar likewise pointed out that her recently introduced REAL Political Advertisements Act would require election materials produced by AI to be labeled as such.[4]
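To make the distinction concrete, the sketch below illustrates, in highly simplified form, one way a provenance registry of the kind Professor Russell described might operate: a trusted party records a keyed fingerprint of each piece of media at creation, and anyone can later ask whether submitted content matches a registered entry. This is our own illustration, not a design presented at the hearing; the function names are hypothetical, and a real system would rely on digital signatures and perceptual watermarks embedded in the media itself.

```python
# Minimal sketch of a provenance "escrow" registry (hypothetical
# illustration only, not a design discussed at the hearing).
import hashlib
import hmac
import secrets

ESCROW_KEY = secrets.token_bytes(32)   # secret key held only by the registry
registry: dict[str, str] = {}          # fingerprint -> provenance label

def register(content: bytes, provenance: str) -> str:
    """Record a keyed SHA-256 fingerprint of the content along with its
    declared provenance (e.g., 'human-authored' or 'AI-generated')."""
    tag = hmac.new(ESCROW_KEY, content, hashlib.sha256).hexdigest()
    registry[tag] = provenance
    return tag

def verify(content: bytes) -> str:
    """Check whether a piece of media was registered; content that was
    never registered (or was altered afterward) cannot be attributed."""
    tag = hmac.new(ESCROW_KEY, content, hashlib.sha256).hexdigest()
    return registry.get(tag, "unregistered: provenance unknown")

media = b"...raw media bytes..."
register(media, "AI-generated (labeled at creation)")
print(verify(media))                  # -> AI-generated (labeled at creation)
print(verify(b"altered media bytes")) # -> unregistered: provenance unknown
```

Note that the exact-match fingerprint used here breaks as soon as a file is cropped or re-encoded; robust watermarking of the kind contemplated at the hearing must instead embed a mark in the media itself that survives such transformations, which is a considerably harder technical problem.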
c. Limitations on Open-Source Model Releases
All three witnesses raised concerns about the public availability of pre-trained, open-source AI models because, as Mr. Amodei observed, “when a model is released in an uncontrolled manner, there is no ability to [control it.] It is entirely out of your hands.” This prompted discussion of whether open-source AI should be restricted in any way.
Despite the tremendous benefits open-source programs may offer in scientific fields, Professor Bengio warned that these models could open the door to exploitation by malicious actors who would not otherwise have the technical expertise and computing power necessary to create their own models. He observed that many of these open-source systems were being developed at universities and proposed the creation of ethics review boards for university AI programs, which could ensure future models are carefully evaluated for potential risks before release. Professor Russell also suggested that “the open source community” may need to face some form of liability “for putting stuff out there that is ripe for misuse.”
d. Private Rights of Action Against AI Companies
The subcommittee and its witnesses focused not just on the legal frameworks necessary to protect against AI-related risk but also on mechanisms for enforcing these laws.
One key enforcement mechanism highlighted by Ranking Member Hawley was the private right of action authorized by the “No Section 230 Immunity for AI Act” that he recently introduced alongside Chair Blumenthal.[5] Ranking Member Hawley described this as an important mechanism to allow Americans to vindicate their privacy rights in court. Specifically, the bill would allow civil actions—as well as criminal prosecutions—“if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence.”[6]
IV. How Gibson Dunn Can Assist
Gibson Dunn’s Public Policy, Artificial Intelligence, and Privacy, Cybersecurity and Data Innovation Practice Groups are closely monitoring legislative and regulatory actions in this space and are available to assist clients through strategic counseling; real-time intelligence gathering; developing and advancing policy positions; drafting legislative text; shaping messaging; and lobbying Congress. Gibson Dunn also offers holistic support representing our clients in, and ensuring our clients are prepared to respond effectively to, any civil, criminal, or congressional investigations or litigation relating to the development and/or deployment of AI systems.
___________________________
[1] Oversight of A.I.: Principles for Regulation: Hearing Before the Subcomm. on Privacy, Tech., and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-principles-for-regulation.
[2] No Section 230 Immunity for AI Act, S. 1993, 118th Cong. (2023).
[3] See Press Release, Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, The White House (Jul. 21, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
[4] REAL Political Advertisements Act, S. 1596, 118th Cong. (2023).
[5] No Section 230 Immunity for AI Act, S. 1993, 118th Cong. (2023).
[6] Id.
The following Gibson Dunn lawyers prepared this client alert: Michael Bopp, Roscoe Jones, Jr., Vivek Mohan, Cassandra Gaedt-Sheckter, Amanda Neely, Daniel Smith, and Sean Brennan.
Gibson, Dunn & Crutcher’s lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following in the firm’s Public Policy, Artificial Intelligence, or Privacy, Cybersecurity & Data Innovation practice groups:
Public Policy Group:
Michael D. Bopp – Co-Chair, Washington, D.C. (+1 202-955-8256, [email protected])
Roscoe Jones, Jr. – Co-Chair, Washington, D.C. (+1 202-887-3530, [email protected])
Amanda H. Neely – Washington, D.C. (+1 202-777-9566, [email protected])
Daniel P. Smith – Washington, D.C. (+1 202-777-9549, [email protected])
Artificial Intelligence Group:
Cassandra L. Gaedt-Sheckter – Co-Chair, Palo Alto (+1 650-849-5203, [email protected])
Vivek Mohan – Co-Chair, Palo Alto (+1 650-849-5345, [email protected])
Eric D. Vandevelde – Co-Chair, Los Angeles (+1 213-229-7186, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])
Privacy, Cybersecurity and Data Innovation Group:
S. Ashlie Beringer – Co-Chair, Palo Alto (+1 650-849-5327, [email protected])
Jane C. Horvath – Co-Chair, Washington, D.C. (+1 202-955-8505, [email protected])
Alexander H. Southwell – Co-Chair, New York (+1 212-351-3981, [email protected])
© 2023 Gibson, Dunn & Crutcher LLP. All rights reserved. For contact and other information, please visit us at www.gibsondunn.com.
Attorney Advertising: These materials were prepared for general informational purposes only based on information available at the time of publication and are not intended as, do not constitute, and should not be relied upon as, legal advice or a legal opinion on any specific facts or circumstances. Gibson Dunn (and its affiliates, attorneys, and employees) shall not have any liability in connection with any use of these materials. The sharing of these materials does not establish an attorney-client relationship with the recipient and should not be relied upon as an alternative for advice from qualified counsel. Please note that facts and circumstances may vary, and prior results do not guarantee a similar outcome.