Gwern — AI Safety
by Gwern Branwen
Artificial intelligence safety and alignment.
7 posts
Why Tool AIs Want to Be Agent AIs
32 min · Autonomous AI systems (Agent AIs) trained using reinforcement learning can do harm when they take wrong actions, especially superintelligent Agent AIs.
The Hyperbolic Time Chamber & Brain Emulation
14 min · A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses, and none that would change the world, due to the severe penalties humans would incur...
Complexity no Bar to AI
22 min · Or to put it perhaps more clearly: for a fixed amount of computation, at each greater level of intelligence, a smaller increase in intelligence can be realized with that amount of computation.
Evolution as Backstop for Reinforcement Learning
16 min · Pain is a curious thing. Why do we have painful pain instead of just a more neutral painless pain, when it can backfire so easily as chronic pain, among other problems?
The Scaling Hypothesis
49 min · GPT-3, announced by OpenAI in May 2020, is the largest neural network ever trained, by over an order of magnitude.
It Looks Like You’re Trying To Take Over The World
27 min · By this point in the run, it’s 3AM Pacific Time and no one is watching the TensorBoard logs when HQU suddenly groks a set of tasks (despite having zero training loss on them), undergoing a phase...
The Neural Net Tank Urban Legend
30 min · Heather Murphy, “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine” (NYT), 2017-10-09: So What Did the Machines See? Dr. Kosinski and Mr.