First, some general recommendations:
I recommend subscribing to many things, and then prioritizing what to read or listen to from that pool.
I do not have any good advice for how to prioritize, except: know your goals. You can have multiple goals, e.g. reading because something is inherently interesting versus reading because it is directly useful for your projects or career plans.
I *strongly* recommend developing the habit of skim-reading and/or listening to podcasts at 2x or even 3x speed. You do not need to read or listen to every word to get the gist.
You can always slow down whenever you want. (Sounds obvious, but needs to be said.)
Credit for this advice goes to Emerson Spartz, via their interview on the Clearer Thinking podcast.
For videos, e.g. on YouTube, the Chrome extension ‘Video Speed Controller’ is a must-have. I frequently watch videos at more than 2x speed, which is the maximum most interfaces provide. (See the console sketch at the end of these general recommendations.)
There is a recent (28 September 2024) discussion on the EA Forum about how people follow AI safety news. Worth checking out!
As with all things, find what works for you! These are just one person’s thoughts. If you have any suggestions of your own, please share!
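A small aside on the video-speed tip: if you would rather not install an extension, most browsers let you set the playback rate directly from the developer console. This is a minimal sketch of that idea, not how Video Speed Controller itself works, and the 2.5x figure is just an illustrative value:

```typescript
// Paste into the browser's developer console on a page with a video
// (e.g. YouTube). Sets every <video> element to 2.5x playback,
// past the 2x maximum that most player UIs expose.
document.querySelectorAll<HTMLVideoElement>("video").forEach((video) => {
  video.playbackRate = 2.5; // illustrative value; adjust to taste
});
```

An extension is still more convenient in practice, since it persists across videos and adds keyboard shortcuts, but the console trick is handy as a one-off.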
Second, the things/people I follow:
The Cognitive Revolution podcast. The host has an impressively broad overview of the AI space, covering both safety and capabilities. You will learn *a lot* of interesting and useful things from this. Examples:
A two-part series on red-teaming OpenAI’s o1 with people from Apollo Research.
Interviews about AI sentience/consciousness.
An interview with the for-profit startup Goodfire, which does mechanistic interpretability.
An interview about ‘computational life’: how self-replicators arose out of an evolutionary process with no explicit optimization!
An episode on the AI revolution in biology. Some of the cutting-edge techniques in this field are surprising and scary…
A detailed guide on how best to use current LLMs to automate tasks, from picking which tasks to automate to the steps to go through to get the most out of LLMs.
FLI Podcast. In-depth interviews with active AI safety researchers, with a focus on AI governance and strategy.
Zvi Mowshowitz. In-depth weekly newsletter about AI, with a focus on safety. Do not try to read everything; skim and find the things that interest you. For example, Zvi has a detailed series of posts about the SB 1047 bill.
Marginal Revolution by Tyler Cowen (with a hint of Tabarrok). This is mostly not about AI safety, but Tyler regularly shares interesting insights and the latest developments in AI. Two recent highlights are a flaw in some AI doomers’ arguments and links to thoughts and observations on Anthropic’s latest computer-controlling AI.
Be warned: you will get multiple emails *every day*, but they are short and easy to skim.
AI Evaluation Substack. Monthly newsletter sharing high-quality resources and news about AI evaluations.
AI Safety Events and Training. Weekly (?) update on events and training opportunities in AI Safety.
AI Safety Newsletter. Monthly (?) highlights of the biggest events and news.
Dwarkesh Patel podcast. In-depth interviews on AI safety, including big names in the space, e.g. Leopold Aschenbrenner, Mark Zuckerberg, and Demis Hassabis. Also, somewhat randomly, one with Tony Blair (former Prime Minister of the UK).
Nonlinear Library podcast. Top-voted posts on the EA Forum, LessWrong, and the Alignment Forum, automatically turned into speech. Not all of it will be AI safety related, but a large percentage is.
Astral Codex Ten. This is mostly not about AI safety, but it is interesting in its own right, both for the content and for the high quality of the writing and narrative skill. Two strong examples of AI safety related posts are this accessible explanation of Sparse Autoencoders and a recent summary of SB 1047.
Import AI. Weekly (?) newsletter that has been running for several years now.
80,000 Hours Podcast. In-depth discussions on how to make the world better. Many episodes on AI safety, but also other topics, sometimes ‘non-standard’ ones, e.g. evidence on which parenting decisions matter most, with Emily Oster.
80,000 Hours job board. Updated regularly with AI safety related jobs (and other jobs considered highly valuable within the 80,000 Hours framework). There is a mailing list where you can get weekly emails with the latest updates to the job board.
Clearer Thinking podcast. Mostly about how to think more clearly (who’d have guessed), but there are also interviews on AI safety.
Apart newsletter. Apart runs monthly hackathons related to AI safety, in addition to doing in-house research. Sign up for their newsletter at the bottom of their homepage.
AI Alignment Slack. Many individuals and groups use this Slack workspace to share opportunities and news in the AI safety space. To join, go to the communities section of AISafety.com.
AI Safety Landscape. Not something I follow as such, but a highly recommended resource. Note that you should click ‘Tiles View’ in the top left to get a much more user-friendly view of the information.