As part of my duties on the President's Council of Advisors on Science and Technology (PCAST), I am co-chairing (with Laura Greene) a working group studying the impacts of generative artificial intelligence technology (which includes popular text-based large language models such as ChatGPT, diffusion-model image generators such as DALL-E 2 or Midjourney, and models for scientific applications such as protein design or weather prediction), both on science and on society more broadly. To this end, we will hold public sessions on these topics during our PCAST meeting next week on Friday, May 19, with presentations by the following speakers, followed by an extensive Q&A session:
- AI enabling science:
- Anima Anandkumar (Caltech & NVIDIA)
- Demis Hassabis (DeepMind)
- Fei-Fei Li (Stanford)
- AI and society:
- Sendhil Mullainathan (Chicago)
- Daron Acemoglu (MIT)
- Sarah Kreps (Cornell)
The event will be livestreamed on the PCAST meeting page. I am personally very much looking forward to these sessions, as I believe they will be of broad public interest.
In parallel to this, our working group is also soliciting public input on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. Our initial focus is on the challenging topic of how to detect, counteract, and mitigate AI-generated disinformation and “deepfakes”, without sacrificing the freedom of speech and public engagement with elected officials that is needed for a healthy democracy to function; in the future we may also issue further requests centered around other aspects of generative AI. Further details of our request, and how to prepare a submission, can be found at this link.
We also encourage submissions to some additional requests for input on AI-related topics by other agencies:
- The Office of Science and Technology Policy (OSTP) Request for Information on how automated tools are being used to surveil, monitor, and manage workers.
- The National Telecommunications and Information Administration (NTIA) request for comment on AI accountability policy.
Readers who wish to know more about existing or ongoing federal AI policy efforts may also be interested in the following resources:
- The White House Blueprint for an AI Bill of Rights lays out core aspirational principles to guide the responsible design and deployment of AI technologies.
- The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework to help organizations and individuals characterize and manage the potential risks of AI technologies.
- Congress created the National Security Commission on AI, which studied opportunities and risks ahead and the importance of guiding the development of AI in accordance with American values around democracy and civil liberties.
- The National Artificial Intelligence Initiative was launched to ensure U.S. leadership in the responsible development and deployment of trustworthy AI and support coordination of U.S. research, development, and demonstration of AI technologies across the Federal government.
- In January 2023, the Congressionally mandated National AI Research Resource (NAIRR) Task Force released an implementation plan for providing computational, data, testbed, and software resources to AI researchers affiliated with U.S. organizations.
30 comments
13 May, 2023 at 11:43 am
Anonymous
It seems that AI-generated disinformation should be detected and counteracted by “deeper AI”.
13 May, 2023 at 12:13 pm
Sr Sidney Silva.
Good evening, and first of all thanks for the space. In my view, with this artificial intelligence we have to pay attention to the simplest questions, such as: What impact will it have on human work, given that we still don’t have people qualified for this? To what extent is it safe? What is the danger of a hacker getting in and causing a tsunami within the information provided? — Mr. Sidney Silva
13 May, 2023 at 2:49 pm
Christopher
How will the Q&A work? YouTube comments?
[PCAST members will ask questions of the speakers. -T]
14 May, 2023 at 12:27 pm
Anonymous
Does this mean the public can only listen, and perhaps post comments on YouTube, but not directly ask questions during the meeting?
13 May, 2023 at 2:54 pm
Christopher
Oh and FYI, I shared this to the community blog LessWrong, where a lot of discussion about AI risk occurs: https://www.lesswrong.com/posts/tC9NnWHNMN3wSNuXB/pcast-working-group-on-generative-ai-invites-public-input
Note though that they typically discuss more disputed forms of AI risk (specifically existential risk), so sorry in advance if you get any weird comments!
14 May, 2023 at 3:01 pm
Crust
I didn’t come here via LessWrong, but I am interested to hear if Prof. Tao has any thoughts on extreme risks (up to and including existential risk) from future advanced AI, and how we might align future systems to reduce such risks.
Are you concerned that capabilities research is proceeding much faster than alignment research? Are there any approaches you think are promising? E.g. mechanistic interpretability (e.g. work of Olah et al)? Or “eliciting latent knowledge” (pursued by Christiano et al)? Or something else?
(To be clear, none of this is to deny that deep fakes and misinformation are an important problem that is already with us.)
[See https://mathstodon.xyz/@tao/109978434783993774 for my response to a similar question – T.]
14 May, 2023 at 3:22 am
adannenberg
What time?
[The schedule is available at https://www.whitehouse.gov/wp-content/uploads/2023/05/PCAST-18-19-May-2023-Public-Agenda-updated-12MAY2023.pdf -T.]
15 May, 2023 at 7:52 am
Michael Nielsen
I just want to double check: the session begins 9am San Diego time, right, not White House time?
15 May, 2023 at 8:09 am
adannenberg
Yes, they updated the agenda to say so.
14 May, 2023 at 7:14 pm
Anonymous
1. You already know the answer to that, don’t be coy.
2. That is clearly and necessarily only within the capacity of each and every such entity.
3. It takes one to know one.
4. That is a cornerstone of a representative democracy.
5. Make counterclaims.
14 May, 2023 at 7:19 pm
Christopher
I wonder if a more efficient way to collect input would be via mastodon/fediverse (in addition to email). That way the public input is more of a discussion instead of just throwing things into an inbox!
[My understanding is that such a mechanism would most likely not be in compliance with the record-keeping components of the Federal Advisory Committee Act (FACA) and the Freedom of Information Act (FOIA). -T.]
15 May, 2023 at 2:54 am
Jon Awbrey
Terry,
I think a lot of people who’ve been working all along on AI, intelligent systems, and computational extensions of human capacities in general are a little distressed to see the field cornered and re-branded in the short-sighted, market-driven way we currently see.
The more fundamental problem I see here is the failure to grasp the nature of the task at hand, and this I attribute not to a program but to its developers.
Journalism, Research, and Scholarship are not matters of generating probable responses to prompts or other stimuli. What matters is producing evidentiary and logical supports for statements. That is the task requirement the developers of recent LLM‑Bots are failing to grasp.
There is nothing new about that failure. There is a long history of attempts to account for intelligence and indeed the workings of scientific inquiry based on the principles of associationism, behaviorism, connectionism, and theories of that order. But the relationship of empirical evidence, logical inference, and scientific information is more complex and intricate than is dreamt of in those reductive philosophies.
15 May, 2023 at 9:58 am
Terence Tao
There will also be congressional hearings (at 10AM eastern tomorrow (Tuesday)) on related topics:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence
https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/
15 May, 2023 at 3:35 pm
Anonymous
“disinformation” has been utterly abused by the left to mean anything with which they disagree.
You’re being used as a patsy in the ongoing war on freedom of speech.
16 May, 2023 at 12:45 am
Arman
I know you work in many fields; I have always liked robotics. My English is poor, but I tend to get your points.
17 May, 2023 at 11:28 am
Anonymous
I think it’s important to look past ChatGPT/Midjourney and expect actual AI (the kind that solves open math problems, etc.) to appear soon. I hate that Sam Altman and Peter Thiel (I call the latter Lord Thieldemort) will be in charge of it. They were just in front of Congress saying the government should regulate AI to stop bad people from getting hold of it. But of course, they are pretty far up my list of bad people.
22 May, 2023 at 11:44 am
Johan Aspegren
Yes, they are bad bad people, unlike Harry Pottermous.
19 May, 2023 at 5:48 am
Saul Youssef
Isn’t it obvious that in a world with AI, we need to cryptographically know where data is coming from? I think that the main solution to the AI problem is not in the field of AI, it is in basic crypto infrastructure. I’m concerned that this isn’t getting recognized and that part of the government has veered into the wrong strategic path for this to work out well.
21 May, 2023 at 8:16 am
Terence Tao
We will certainly be looking at cryptographic provenance and identity authentication technologies (of which there are already several proposals, such as the ones from C2PA, Witness, Glaze, Worldcoin, and others) as one possible component of our recommendations, though personally I think that no single such technology will be a “silver bullet” solution to this problem; any solution will have to be supplemented by non-technological efforts as well.
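[For readers unfamiliar with the basic idea of cryptographic provenance, here is a toy sketch in Python. It is not the actual C2PA protocol (which uses X.509 certificates and signed manifests attached to media files), and it uses a symmetric HMAC where real provenance schemes use public-key signatures so that anyone, not just the key holder, can verify; the key and content below are hypothetical placeholders. -Ed.]

```python
# Toy illustration of content provenance: a publisher binds content to a
# secret key with an HMAC tag; a verifier later checks that the content
# has not been altered since it was signed. Any edit invalidates the tag.
import hashlib
import hmac

def sign_content(content: bytes, publisher_key: bytes) -> str:
    """Publisher side: produce a provenance tag for the content."""
    return hmac.new(publisher_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, publisher_key: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    expected = hmac.new(publisher_key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"publisher-secret-key"            # hypothetical key material
photo = b"...original image bytes..."    # hypothetical content

tag = sign_content(photo, key)
assert verify_content(photo, tag, key)               # authentic content
assert not verify_content(photo + b"x", tag, key)    # tampered content
```

The limitation Tao notes above applies even to real schemes: a valid signature only proves the content is unchanged since signing, not that the signed content was truthful in the first place, which is why non-technological measures are still needed.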
21 May, 2023 at 4:41 pm
saulyoussef
Thanks, Terence. I didn’t know about C2PA. It looks just like what I was hoping for. I’m sure that there is no silver bullet, but I do think that this is the #1 most important thing to get right. It’s an area where government can really help, and it could IMO have a huge positive impact on society.
21 May, 2023 at 8:07 am
Terence Tao
Video of our meeting is now available at https://www.youtube.com/watch?v=gZb7Yr4C8po . The slides will also be made available in a few days. Note that submissions to the working group are independent of this public session, and will continue to be accepted over the next few months at least.
24 May, 2023 at 9:45 am
Terence Tao
The White House Office of Science and Technology Policy (OSTP) yesterday issued a broader request for information (RFI) for public comments to help update U.S. national priorities and future actions on AI, with submissions accepted until July 7. This is a separate process from PCAST’s request for public input (which is not a formal RFI and is therefore subject to fewer restrictions).
29 May, 2023 at 6:16 pm
BJ
Speaking of disinformation, your banner is a case in point. Birx and Fauci have admitted (more like “boasted”) that “two weeks to flatten the curve” was always intended to be the thin end of the wedge for far more draconian lockdowns.
Lockdowns that quarantined the healthy and intentionally infected the elderly by forcing covid-positive nursing home patients back into their long-term-care facilities.
Any questioning of such insane policies was labeled “disinformation” or worse.
“Disinformation” is purely and simply a mechanism for western governments to institute soviet-style speech controls.
By supporting these efforts, you’re on the side of evil, Terry.
2 June, 2023 at 9:23 pm
Chris Brav
The point has already been made, but it is worth making again: there is an easy off-the-shelf solution to the problem of deepfakes, namely cryptographic watermarking.
As for a healthy democracy, how’s that working out for you?
4 June, 2023 at 6:56 am
J
“without sacrificing the freedom of speech and public engagement with elected officials that is needed for a healthy democracy to function”
I am more concerned about the unelected officials. The Deep State that believes they’re the rightful rulers of the country. The Deep State loves their scientific “advisory” boards that serve as rubber-stamps for oppressive policies.
BTW, I watched the video of the meeting. It would be hilarious and pathetic if it wasn’t so disturbing. A bunch of seriously lame academics — whose only claim to authority is specialization from cradle to now in their esoteric fields — arrogating themselves policy-making credentials over the rest of us.
More Deep-State trash.