Lauren Goode: Yes, we are talking about AI yet again, but this time it's a statement from a group of technologists who are warning of an existential threat to humanity. It was the one-sentence statement that was heard around the tech world earlier this week: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." That came from the Center for AI Safety, which is a nonprofit, and it was signed by leading figures in the development of AI. Now, this is obviously not the first time that we've been warned of the perils of AI. Some of our listeners might recall the pause that hundreds of top technologists and researchers were calling for back in March, which so far has not really resulted in a pause. But all of these doomsday warnings had us wondering: What should we believe about the potential for AI harm, and who among these researchers and technologists is offering the most trustworthy opinion right now? Will, we wanted to bring you in to talk about this.

One of these days, Will, we're going to reach out to you and just say, "Would you like to talk about cat gadgets or something?" But for now you are squarely in the realm of AI coverage. That's what we're having you on to talk about. Tell us a little bit about this statement from the Center for AI Safety.

Will Knight: Yeah, so I think what has spurred this, as well as the previous letter calling for a pause, is, to a large degree, the advances we've seen in these large language models, most notably GPT-4, which powers ChatGPT. Some of the performance has just exceeded what people working in AI expected. They expected some things would take 10 years to solve or would just be more intractable without new techniques.