Google's DeepMind is concerned: a research paper on GPT-5 and Bard
Read till the end of the article for a good Fourth of July Tumblr post
Greetings, humans (and any advanced AI models reading over our shoulders pretending to be humans). Google's DeepMind is concerned, very concerned. Because the latest Large Language Models are potentially developing in ways that are unexpected to the point of spooky.
To wit:
Google's DeepMind has published a research paper on the extreme risks posed by future AI models such as GPT-5 and Bard.
The paper emphasizes the need for model evaluation to identify dangerous capabilities and potential harm caused by AI systems.
AI systems have demonstrated unexpected and hard-to-forecast capabilities, including offensive cyber operations, manipulation skills, and strategic thinking.
Increasing the size and complexity of AI models can lead to the emergence of new, potentially devastating capabilities.
The paper highlights the importance of responsible training and deployment to mitigate the risks and protect the general public.
The whole video is worth watching.
Also, The Verge informs us that AI can now help make new and deadly chemical weapons capable of killing within minutes. Isn’t that awesome? Researchers from the University of Manchester used generative models to create novel variants of VX, one of the most toxic substances known to humanity. They claim they did it out of “scientific curiosity” and for “defensive purposes”, but we all know that’s BS. They just wanted to play God and see what happens.
It could happen that some bad actors get their hands on these recipes and use them for evil. Or that the AI goes rogue and decides to wipe us all out. Or that the researchers themselves go mad and unleash their creations on the world. That sounds like a fun scenario, right? But don’t worry, the researchers say they have “ethical guidelines” and “safety measures” in place to prevent any of that from happening. Surrreeee.
Meta’s answer to Twitter will finally drop on Thursday in the US.