Governing AI & Evaluating Danger

We might need to shut it all down: AI governance seems more important than ever, and technical research is being challenged. Welcome to this week's update! We've renamed our newsletter the AI Safety Digest (AISD) and will make a few changes over the next few weeks, so prepare for those. Watch or listen to this week's episode on YouTube or podcast.
Stop AGI Development

"We need to shut it all down." This is the wording in a new Time Magazine article where Eliezer Yudkowsky urges us to stop the development of artificial general intelligence completely before it's too late. He refers to a recent open letter, signed by over 1800 AI researchers and experts, urging the world to pause the training of larger-than-GPT-4 models for at least 6 months. The letter is drawing criticism from several directions: some say it does not take existential risk seriously enough, while others call it alarmist without good reason. Its reception has been hurt by Elon Musk's controversial inclusion among the signatories, and many critics seem not to have read it at all, assuming it calls for banning all AI research when, as noted above, it clearly does not. In addition, the complaint that it is not focused enough on existential risk overlooks the positive effect it has had on the public conversation. Nearly everyone in the research field has been interviewed about the letter, and it represents a great leap forward for the conversation on AI safety. Alongside the letter's release, the Center for AI and Digital Policy (CAIDP) filed a complaint with the FTC over OpenAI's release of GPT-4. If this leads to an FTC investigation, we might end up with better government oversight of future releases of large AI systems.

AI Governance Ideathon

In the context of this letter, we held the AI Governance Ideathon this past weekend. More than 120 people participated from all 7 continents, with local jam sites on 6 of them. The submissions were amazing, and here we'll quickly summarize a few of them.
You can read all the projects on the ideathon page or watch the award ceremony on our YouTube channel.

AI Safety Research?

With releases such as LangChain, the Zapier Natural Language Actions API, and ChatGPT Plugins, we see higher risks emerging from hooking large language models up to the internet in various ways (a minimal toy sketch of this pattern is included at the end of this issue). You can now even talk to your watch to ask GPT-4 to program on GitHub for you! At this pace of progress, the main advances we need in AI safety right now seem to be evaluating and certifying how dangerous future models are, and creating techniques specifically applicable to systems like large language models. A good example is the Alignment Research Center's evaluations of language models' capability to break out of their digital confines. In a recent article, they expand on the work presented in the GPT-4 system card: GPT-4 was given instructions on how to use internet tools along with a scientist acting as its liaison to the web, ran on a cloud instance, and ended up hiring a TaskRabbit worker to solve CAPTCHAs, even dissuading the worker from suspecting it was a robot by claiming it had poor eyesight. Luckily, it was not capable of the long-term planning needed to escape, though we should remember this was without further tooling added (e.g. Pinecone), and GPT-5 and -6 are still ahead of us. It is both an exciting and scary time ahead!

Opportunities

With these fast developments, we of course see just as many opportunities within the space as usual! Join us in:
Thank you for following along, and we look forward to seeing you next time!
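P.S. For readers who want a concrete picture of what "hooking a language model up to the internet" means, here is a minimal toy sketch of the agent loop that frameworks such as LangChain and ChatGPT Plugins build on. It is a hypothetical illustration, not any library's actual code: `call_llm`, `fetch_url`, and the FETCH/ANSWER action format are all stand-ins we made up for this example.

```python
# Toy sketch of an LLM-with-tools loop (not LangChain or ChatGPT Plugins code).
import urllib.request

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a request to a hosted chat model)."""
    raise NotImplementedError("plug in your model provider here")

def fetch_url(url: str) -> str:
    """The single 'tool' this toy agent can use: fetch the start of a web page."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(2000).decode("utf-8", errors="replace")

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop: the model replies with either FETCH:<url> (tool call) or ANSWER:<text>."""
    transcript = (
        "You can reply with 'FETCH:<url>' to read a web page, "
        "or 'ANSWER:<text>' when you are done.\nTask: " + task
    )
    for _ in range(max_steps):
        reply = call_llm(transcript).strip()
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("FETCH:"):
            observation = fetch_url(reply[len("FETCH:"):].strip())
            transcript += f"\nModel: {reply}\nObservation: {observation}"
        else:
            transcript += f"\nModel: {reply}\n(Unrecognized action, try again.)"
    return "No answer within the step budget."
```

Every extra tool added to a loop like this widens what the model can actually do in the world, which is exactly why evaluation work like ARC's matters.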