Clickbait thread title aside:
OpenAI, the creators of ChatGPT, has assembled a team of Machine Learning researchers and engineers to try to control a possible artificial superintelligence, which they estimate could arrive before 2030.
https://openai.com/blog/introducing-superalignment
Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
While superintelligence seems far off now, we believe it could arrive this decade.
Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:
How do we ensure AI systems much smarter than humans follow human intent?
Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
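For context on the RLHF technique the post mentions: its core step is fitting a reward model to human preference comparisons, and that reward model is only as good as the human supervision behind it. A minimal toy sketch of that step (a linear reward model trained with the Bradley-Terry preference loss on synthetic data; the features, data, and model here are all illustrative, not OpenAI's actual setup):

```python
import math
import random

random.seed(0)

def reward(w, x):
    """Toy linear reward model: score of a response with feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Synthetic preference data: each pair (x_good, x_bad) stands for a human
# labeler preferring the first response over the second.
pairs = []
for _ in range(200):
    x_good = [random.gauss(1.0, 0.5), random.gauss(0.5, 0.5)]
    x_bad = [random.gauss(0.0, 0.5), random.gauss(-0.5, 0.5)]
    pairs.append((x_good, x_bad))

# Bradley-Terry loss, -log sigmoid(r(good) - r(bad)), minimized with
# plain batch gradient descent on the weights w.
w = [0.0, 0.0]
lr = 0.1
for _ in range(100):
    grad = [0.0, 0.0]
    for xg, xb in pairs:
        p = 1.0 / (1.0 + math.exp(-(reward(w, xg) - reward(w, xb))))
        for i in range(len(w)):
            grad[i] += (p - 1.0) * (xg[i] - xb[i])
    for i in range(len(w)):
        w[i] -= lr * grad[i] / len(pairs)

# After training, the reward model ranks a "good"-style response above a
# "bad"-style one; the policy would then be optimized against this model.
print(reward(w, [1.0, 0.5]) > reward(w, [0.0, -0.5]))  # prints True
```

The point of the blog's argument is the last comment: the whole pipeline bottoms out in human preference labels, so once the systems being compared are beyond human ability to judge, this supervision signal stops being reliable.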