AI Tracker #4: Are We There Yet?
A lot of high-quality AI safety conversation happens on Twitter. Unfortunately, great write-ups and content get buried within days. We've set up AI Tracker to aggregate this content.
Our Default Trajectory
One Bullet, 8 Billion Headshots
Are We There Yet?
Eliezer Yudkowsky took to Twitter to share his thoughts on what will happen when the time comes that we find ourselves on the precipice of AGI.
This was in response to an anonymous account claiming that OpenAI had achieved AGI internally. Sam Altman’s Reddit account then joked that this had happened, before editing the comment to clarify that he was just messing around.
A Fantasy Graph
Michael Nielsen tweets in response to a graph making the case for “responsible scaling policies”. He takes issue with the plotted line, which seems to suggest there is an easy way for AI development to avert the disaster of the ‘risky region’ in red.
Nik Samoylov then edited the graph to look a bit more realistic.
* * *
If we miss something good, let us know: send us tips here. Thanks for reading AI Safety Weekly! Subscribe for free to receive new AI Tracker posts every Friday.