How Can We Achieve Artificial General Intelligence (AGI)?

While AI intrigues and scares us, a case could be made that in 2021 it’s not very ‘intelligent’ or ‘artificial’ at all. The holy grail of AI is a system that can learn on its own, independent of context. While human-like artificial general intelligence may not be imminent, substantial advances may be possible in the coming years. Some scientists at DeepMind, owned by Google, think AGI is possible just with reinforcement learning.

We live in a world of algorithms and weak AI, but that won’t be true forever. Some AI researchers think that by around 2060 we’ll reach a singularity, a point where AI becomes smarter than we can imagine today.

At the Last Futurist, we are not sure whether this also happens on other planets that develop sentience, but in a remarkable twist of fate, a mature AI and significantly disruptive climate change might overlap considerably here on Earth. Will that mature AI be able to help us deal with climate change disruptions?

Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life.

It’s not entirely clear whether AGI is possible within our lifetimes. Given that technology improves in exponential waves, and that supercomputers are now being thrown at transformer models, one wonders whether deep learning and reinforcement learning can make the breakthrough. Or will it require something else?

Is Reward Enough?

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at UK-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but from sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as from lessons learned in recent achievements in artificial intelligence.

Some scientists believe that assembling multiple narrow AI modules will produce more broadly intelligent systems. Could reinforcement learning alone be enough?

Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. So it will be interesting to see how machine intelligence evolves, and how quickly it reaches some state of general intelligence, or at least something beyond today’s weak AI.

The researchers argue that the “most general and scalable” way to maximize reward is through agents that learn through interaction with their environment. Hopefully, DeepMind is doing this work safely; some of its papers lead to surprising conclusions.

In the paper, the AI researchers provide some high-level examples of how “intelligence and associated abilities will implicitly arise in the service of maximizing one of many possible reward signals, corresponding to the many pragmatic goals towards which natural or artificial intelligence may be directed.”
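The core mechanism the paper points to, an agent improving its behavior purely by interacting with an environment and chasing a reward signal, is easiest to see in the simplest textbook form of reinforcement learning. Below is a minimal sketch of tabular Q-learning in Python; the toy corridor environment, the +1 reward, and the hyperparameters are illustrative assumptions for this post, not anything taken from the DeepMind paper.

```python
# A minimal sketch (not DeepMind's implementation): tabular Q-learning,
# the textbook form of reward-maximizing reinforcement learning.
# The environment, reward, and hyperparameters are illustrative assumptions.
import random

# Toy corridor: states 0..4, start at 0, reward of +1 only for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated discounted reward for each (state, action) pair.
q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Nudge the estimate toward reward plus the discounted best future value.
        best_next = max(q[next_state])
        q[state][a] += ALPHA * (reward + GAMMA * best_next - q[state][a])
        state = next_state

print("Learned Q-values:", [[round(v, 2) for v in row] for row in q])
```

After a few hundred episodes the table’s values steer the agent toward the rewarding end of the corridor. The paper’s argument, roughly, is that scaling this same reward-maximization loop up to rich environments is where broader abilities would implicitly emerge.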

Inventing AGI could be a Pandora’s box that humanity might not be ready for. We live in a world where the legal and regulatory framework isn’t even well adapted to the internet, never mind weak AI or AGI.

Hopefully, AGI is pushed to the later part of the 21st century, if it arrives at all. At the Last Futurist, we fear humanity is not ready ethically and socially for any sense of general intelligence in our AI development.
