Academic leaders from around the world have published an open letter calling on artificial intelligence (AI) developers to deepen their understanding of consciousness as AI systems continue to develop at an unprecedented pace. The letter was released by the Association for Mathematical Consciousness Science (AMCS), a group of more than 150 researchers who study consciousness through mathematical and computational approaches, and is framed as a “wake-up call” to the scientific community, the tech sector, and wider society to prioritize consciousness research.
The letter follows recent calls from tech leaders for a pause in AI experiments. Its authors warn that AI is advancing at a rate that outstrips our ability to comprehend its ethical, legal, and political implications. They argue that while language models such as Google’s Bard and OpenAI’s ChatGPT are currently loosely modeled on the neural networks of animal brains, future systems will be built to replicate higher-level brain architecture and functioning.
The authors suggest that AI systems could soon possess feelings, and perhaps even human-level consciousness, noting that some already display human psychological traits such as Theory of Mind: the understanding that others hold beliefs, perceptions, emotions, and desires that differ from one’s own. If so, they argue, AI will possess capabilities beyond those its developers can currently comprehend, which could profoundly change society’s relationship with these systems.
The letter calls for AI development to be transparent, with society and governing bodies informed of the ethical, safety, and societal implications associated with artificial general intelligence (AGI). The authors suggest that mathematical tools to measure and model consciousness are essential for understanding the implications of AI and ensuring its safety.
The letter concludes by urging society, the scientific community, and the tech sector to prioritize consciousness research, arguing that AI research should not proceed unguided and that understanding its ethical, legal, and political implications is essential. In short, the authors’ message is that as AI continues to advance, vigilance on these fronts must keep pace, and accelerating consciousness research is central to ensuring that AI development delivers positive outcomes for humanity.