One of the most intriguing aspects of neural networks is that their inner workings remain largely a mystery. We marvel at how they can learn from many kinds of data, such as text and images, yet the core of these systems remains shrouded in uncertainty. Geoffrey Hinton, a pivotal figure in artificial intelligence research and one of the pioneers of backpropagation, once remarked that no one truly comprehends why neural networks perform so well. That uncertainty motivated his departure from Google to warn about the risks of potential AI-driven extinction.
Some researchers dismiss AI-driven extinction as science fiction, while others regard it as a real possibility. Yann LeCun has expressed skepticism about Hinton’s viewpoint, deeming the likelihood of an AI-induced extinction minimal and arguing that governments and companies push this narrative in order to minimize the influence of open-source AI. Similarly, Andrew Ng consistently argues that the risk is almost non-existent and echoes LeCun’s claims about such fearmongering. This question has deeply divided the artificial intelligence community.
All significant AI systems, including Large Language Models (LLMs), are built upon neural networks, which loosely mimic the neural connectivity of the human brain. Yet the mystery persists: why do they work so effectively, and how? This uncertainty raises serious concerns about our ability to control systems whose processing we don’t fully grasp. We feed these networks vast amounts of data under the belief that more data equates to better performance, yet the underlying reasons for this effectiveness remain unclear and elusive. How can we keep relying on systems that we don’t fully comprehend? How do we know we aren’t simply raising a lion in our den, one that will soon grow strong enough to break its chains?
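To make the black-box point concrete, here is a minimal sketch in Python using scikit-learn; the dataset, the model size, and every other detail below are illustrative assumptions, not anything drawn from the research mentioned in this piece. A small network trained with backpropagation classifies handwritten digits well, yet everything it has learned lives in weight matrices that offer no human-readable explanation of how it decides.

```python
# Illustrative sketch: a well-performing network whose "knowledge" is
# nothing but opaque matrices of floating-point numbers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multilayer perceptron trained with backpropagation.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))  # typically high on this toy task

# Everything the model "knows" is stored in these weight matrices;
# nothing in them reads as an explicit, inspectable rule for recognizing a digit.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```

Interpretability research tries to recover meaning from exactly these kinds of parameters, and even for a model this small that is far from straightforward.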
Given this lack of fundamental understanding, how can researchers confidently claim that the potential for AI-induced extinction is nearly zero? We keep feeding these systems data and steering them with mathematical techniques and theoretical frameworks that we only partly understand.
OpenAI’s recent formation of a Superalignment team, aimed at aligning superintelligent systems, underscores this challenge. Their research, including a paper on using weaker models to supervise stronger ones, acknowledges the limits of techniques such as reinforcement learning from human feedback (RLHF), the kind of alignment tool currently used to manage these opaque networks. Yet a fundamental realization emerges: to align potentially superintelligent systems effectively, we first need to understand their mechanics and what drives their performance.
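To make the weak-to-strong idea concrete, here is a minimal sketch in Python using scikit-learn rather than anything from OpenAI’s actual setup; the synthetic dataset, the choice of a logistic-regression “teacher” and an MLP “student”, and the evaluation are all illustrative assumptions. The pattern is simply: train a small model on a little ground truth, train a larger model only on the small model’s labels, then check both against held-out ground truth.

```python
# Illustrative weak-to-strong supervision sketch (toy setup, not OpenAI's method):
# a weak model labels data, and a stronger model is trained only on those labels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic task standing in for a real supervision problem.
X, y = make_classification(n_samples=5000, n_features=40,
                           n_informative=10, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, test_size=0.8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1. Train the weak supervisor on a small slice of ground truth.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# 2. The strong student never sees ground truth, only the weak model's labels.
weak_labels = weak.predict(X_train)
strong = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300,
                       random_state=0).fit(X_train, weak_labels)

# 3. Evaluate both against held-out ground truth: does the student merely
#    imitate its imperfect teacher, or generalize beyond it?
print("weak supervisor accuracy:", accuracy_score(y_test, weak.predict(X_test)))
print("strong student accuracy: ", accuracy_score(y_test, strong.predict(X_test)))
```

The open question in the actual research is whether, and how reliably, the strong model can outperform its weak supervisor; this toy version only demonstrates the training pattern, not that result.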
If our goal is to develop superintelligent systems using neural networks, our lack of understanding about how these systems function poses a significant risk. Without this knowledge, controlling AGI remains a daunting challenge, potentially endangering human existence. For the survival of humanity, researchers must prioritize unraveling the fundamentals, core principles, and behaviors governing neural networks. If we fail to achieve this understanding, we may need to explore alternative approaches to building AGI that lie within our realm of comprehension. We don’t fully understand how biology works or how the brain works, and we have seen that human minds span a wide spectrum, sometimes producing behavior that could reasonably be described as erratic. Why, then, do we assume that simulating a brain without knowledge of its inner workings is a good idea? It is genuinely puzzling.
In conclusion, the pursuit of AGI through neural networks presents a paradox. While these systems offer remarkable capabilities, our limited understanding of their inner workings could be our Achilles’ heel. Balancing the quest for advanced AI with the imperative to comprehend and control these technologies is crucial for the future of humanity.
rustian ⚡️