New research agenda released with 35+ co-authors from the NLP, ML, and AI Safety communities!

“Foundational Challenges In Assuring Alignment and Safety of LLMs” has been released with 35+ co-authors from the NLP, ML, and AI Safety communities! This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three categories: scientific understanding of LLMs, development and deployment methods,…

ERA – KASL AI Safety Internship 2024

🌐 ERA-Krueger Lab (University of Cambridge) AI Safety Internship 2024

Join the Krueger AI Safety Lab (KASL) at the University of Cambridge for a paid research internship focusing on technical and governance aspects of AI safety. Remote interns are welcome! Apply by Monday, January 29, 23:59 UTC. Shortlisted candidates will be notified by Friday, February…