New research agenda released with 35+ co-authors from the NLP, ML, and AI Safety communities!
“Foundational Challenges in Assuring Alignment and Safety of LLMs” has been released! This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs), organized into three categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges.