Each starling follows three simple rules: stay with the group, avoid collisions, and match the speed and heading of your neighbors. When these individual actions accumulate and compound across a flock, they give rise to a complex, captivating display of synchrony in the sky. The group behaves in ways you would never predict by watching a single bird. In this sense, murmuration epitomizes emergent behavior in nature. It is a humbling reminder of how many of nature's wonders remain undiscovered, and an inspiring glimpse of a future in which today's mysteries are understood.
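Those three rules are, in fact, the same ones behind Craig Reynolds's classic "boids" simulation. A minimal sketch captures them; the flock size, rule weights, and neighbor radius below are illustrative assumptions, not measurements of real starlings:

```python
import numpy as np

# Minimal boids-style flock. Weights and radii are illustrative, not fitted.
N, RADIUS, MAX_SPEED, DT = 100, 5.0, 2.0, 0.1
COHESION, SEPARATION, ALIGNMENT = 0.01, 0.05, 0.05

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (N, 2))   # starting positions in a 2D plane
vel = rng.uniform(-1, 1, (N, 2))   # starting velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist > 0) & (dist < RADIUS)
        if not neighbors.any():
            continue
        # Rule 1 (stay with the group): steer toward nearby birds' center.
        new_vel[i] += COHESION * offsets[neighbors].mean(axis=0)
        # Rule 2 (avoid collisions): steer away from birds that are too close.
        too_close = neighbors & (dist < RADIUS / 3)
        if too_close.any():
            new_vel[i] -= SEPARATION * offsets[too_close].sum(axis=0)
        # Rule 3 (match your neighbors): nudge velocity toward their average.
        new_vel[i] += ALIGNMENT * (vel[neighbors].mean(axis=0) - vel[i])
        # Keep speeds bounded so the simulation stays stable.
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    return pos + new_vel * DT, new_vel

for _ in range(1000):
    pos, vel = step(pos, vel)
```

Nothing in this code mentions a murmuration, yet run it and coordinated, swirling group motion appears. The flock-level pattern lives nowhere in the per-bird rules.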
Emergent Phenomena & Scientific Discovery
Emergence is a principle in science describing how new properties, patterns, or behaviors arise from simpler elements and their interactions. Emergent properties transcend what can be predicted or understood from a system's individual components. Put simply, the system as a whole becomes something far greater than its parts.
The starling murmuration is a perfect illustration of this concept. The collective movement of the flock is an emergent property that arises from the actions of individual birds. The complexity of that movement cannot be grasped by studying a single bird in isolation. And emergent phenomena stretch well beyond birdwatching, into more abstract domains of science.
Consider temperature, a classic example of an emergent property in physics. Temperature is a measure of the average kinetic energy of the particles within a system. Applied to a single particle, the concept is meaningless; it only emerges once a large number of particles is involved. Similar emergent behavior appears in phase transitions between states of matter. As water is heated, its temperature rises predictably. Then, at a sharp threshold that nothing about an individual molecule would suggest, the temperature stops rising and the water transforms into steam.
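A quick numerical sketch makes the point. For an ideal monatomic gas, the mean kinetic energy per particle relates to temperature by ⟨E⟩ = (3/2) k_B T. Averaging over many simulated particles recovers the ensemble temperature; applying the same formula to one particle yields noise. (The choice of argon, the particle count, and the seed below are arbitrary, for illustration only.)

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
M = 6.63e-26         # approximate mass of one argon atom, kg

def temperature(velocities):
    # Mean kinetic energy per particle -> temperature via <E> = (3/2) k_B T.
    mean_ke = 0.5 * M * (velocities ** 2).sum(axis=1).mean()
    return 2 * mean_ke / (3 * K_B)

# Draw Maxwell-Boltzmann velocities at 300 K: each component is Gaussian
# with standard deviation sqrt(k_B * T / m).
rng = np.random.default_rng(1)
sigma = np.sqrt(K_B * 300 / M)
v = rng.normal(0, sigma, (100_000, 3))

print(temperature(v))      # ~300 K: temperature emerges from the ensemble
print(temperature(v[:1]))  # wildly off: a single particle has no temperature
```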
Another intriguing case of emergence is superconductivity in graphene. Superconductors, which carry electrical current without energy loss, hold the potential to revolutionize many fields, from energy transmission to transportation to medical imaging. Achieving superconductivity at room temperature, however, remains an open challenge: historically, the property has only been observed at extremely low temperatures, under extreme pressures, or in other exotic conditions. For this reason, scientists at MIT were baffled to find that when two sheets of graphene were stacked and twisted by a mere 1.1 degrees, they exhibited superconductivity, a completely unexpected emergent property.
Such observations elicit awe and ignite curiosity. Yet in the realm of artificial intelligence research, a phenomenon akin to emergence is often brushed off with a far less appreciative label: "hallucination."
The Hallucination Paradox
In the context of generative AI models like GPT-4, "hallucinations" refer to the AI's ability to generate novel output that isn't directly grounded in its training data. The model isn't literally hallucinating in the human sense; the term describes its capacity to synthesize unique, creative outputs.
Much like the starling murmuration, these hallucinations can be viewed as emergent properties of the AI model. The AI isn't explicitly programmed to generate a specific narrative or content. Instead, it's fed a plethora of data and learns to predict the next word based on the contextual information provided by previous words. This simple rule, when applied at scale, results in the generation of diverse, creative, and complex content.
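Stripped to its core, that rule is a loop: score every possible next token given the context so far, pick one, append it, repeat. Here is a minimal sketch using the public GPT-2 model from Hugging Face's transformers library, a small stand-in for models like GPT-4, whose weights are not public:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Start from a prompt and repeatedly apply the one rule: predict the next token.
ids = tokenizer("The starlings rose over the field and", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]       # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)   # turn scores into probabilities
        next_id = torch.multinomial(probs, 1)   # sample one token from them
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```

Because each step samples from a probability distribution rather than retrieving anything, the output is a novel composition; fluent falsehoods and creative prose both fall out of the same loop.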
Take GPT-3.5 and GPT-4, for example. These models can create sentences, paragraphs, and even entire articles that are coherent and contextually relevant. They can generate functional code in numerous programming languages. These outputs do not represent facts the models merely memorize and regurgitate. Instead, they represent novel compositions shaped by the patterns the models have learned.
The "hallucination" problem arises when a language model, despite its ability to generate coherent text, outputs well-structured yet incorrect information. Misinterpreting these false outputs as truth can undoubtedly lead to complications. However, the same capacity, when manifested in image generation, is often celebrated as "creativity." The perception of hallucinations, then, bounces between praise and condemnation depending only on context.
Hallucinations, whether they occur in the delusional human mind or in AI as novel, unexpected outputs, represent a form of nonlinear information processing. This nonlinearity offers a new way of examining emergent phenomena in nature that we might not otherwise stumble across, let alone understand.
Like natural emergent phenomena, which arise from the interaction of numerous simple parts following basic rules, AI hallucinations are complex outputs emerging from myriad interacting elements within a system. As such, they could reveal aspects of natural phenomena previously overlooked or misunderstood.
In scientific research, this nonlinear processing capability could become a transformative instrument of discovery. AI "hallucinations" might unveil emergent phenomena in opaque biological or physical systems, leading to novel understandings and directing further exploration. And by scrutinizing the conditions and inputs that catalyze these hallucinations, we might glean insights into the foundational rules and structures that govern those phenomena.
The term "hallucination" suggests a distortion of reality in many machine learning scenarios. In the context of scientific discovery, however, they might serve as a guide toward the deeper structures and emergent phenomena in nature. It's an unconventional approach, yet one with great potential to uncover gaps in our understanding of the natural world.