I recently spent some time reading a PhD thesis about recording electrical signals from a small patch of retinal tissue. The author was trying to understand how neurons in the eye coordinate their firing, how patterns emerge from what are essentially simple on-off switches working together. The thesis is technically impressive, but the deeper question buried underneath all the methodology is what stayed with me: how does something simple, repeated at scale, produce something that looks like intelligence? That question has not gone away. It has just migrated from biology labs into server farms.
The thesis is about a concept called criticality. The brain, the argument goes, operates near a tipping point, the same mathematical edge that water sits on right as it freezes: not fully frozen, not freely flowing, but right at the boundary between order and chaos. At that edge, information travels further, the system is maximally sensitive to small inputs, and the range of possible responses is largest. Move away from that edge in either direction and you lose something. Too rigid and the system cannot adapt. Too chaotic and it is just noise. The researchers were among the first to record from enough neurons simultaneously, over two hundred cells in a half-millimeter patch of retina, to actually test these ideas at meaningful scale. Before that, you were essentially trying to understand a crowd by watching ten people. The tools were the bottleneck, not the theory. That detail matters because a lot of what we think we know about intelligence is limited not by our ideas but by our instruments. We build theories and then wait, sometimes for decades, for the experimental capability to catch up. It is a good reminder that confident claims about how intelligence works, biological or artificial, should be held lightly.
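The edge-of-chaos idea can be made concrete with a toy branching process. This is my own illustrative sketch, not code or a model from the thesis: each active unit excites each of two downstream units with some probability, so the expected number of descendants per unit is the branching ratio. Below 1 activity dies out quickly, above 1 it runs away, and near 1 you get "avalanches" spanning the widest range of sizes. The function names (`avalanche_size`, `mean_size`) are mine.

```python
import random

def avalanche_size(branching_ratio, max_total=10_000):
    """One avalanche of a toy branching process.

    Each active unit excites each of 2 downstream units with
    probability branching_ratio / 2, so the expected number of
    descendants per unit equals branching_ratio. Returns the total
    number of activations, capped so supercritical runs stay finite.
    """
    p = branching_ratio / 2
    active, total = 1, 1
    while active and total < max_total:
        # Count how many downstream units fire this generation.
        nxt = sum(1 for _ in range(2 * active) if random.random() < p)
        active = nxt
        total += nxt
    return total

def mean_size(branching_ratio, trials=2000, seed=0):
    """Average avalanche size over many trials (seeded for repeatability)."""
    random.seed(seed)
    return sum(avalanche_size(branching_ratio) for _ in range(trials)) / trials
```

Run `mean_size(0.5)` and activity fizzles after a couple of activations on average; run `mean_size(1.5)` and most avalanches hit the cap. Right around a branching ratio of 1, the distribution of sizes is broadest, which is the toy version of the claim that the critical point maximizes the system's range of responses.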
There is a version of the future where AI is simply the next general-purpose technology, like electricity or the internet, that turbocharges everything it touches. That version is probably right. The internet collapsed distances and accelerated every field of human knowledge and commerce. AI does something similar but with one important difference. The internet connected human minds and made collaboration instantaneous. AI does some of the thinking itself. Not all of it, not reliably, not yet, but some. That is a difference in kind, not just degree. But the internet analogy is also a warning. Electric power grids date to the late 1800s, and hundreds of millions of people still do not have reliable access to electricity. Not because the technology is hard anymore. The gap persists because of infrastructure, economics, geography, and political will. AI will follow the same pattern. The student in a well-resourced environment will have a personalized tutor available at every hour. The student elsewhere will not. That gap, compounded over decades, produces a different world. The optimistic counterargument is that smartphones spread faster than electricity did in many developing regions, people skipped landlines entirely and went straight to mobile, and lightweight AI on cheap phones could reach people faster than the pessimists expect. I find this somewhat convincing, but access and benefit are not the same thing. Being able to use a tool and being able to use it well, in your language, for your context, in ways that address your actual problems, are different things.
The version of AI I find most realistic is not the replacement scenario but the assistance scenario. TARS and CASE in Interstellar are useful to think about here, not because the film is technically accurate, but because the design philosophy is right. They have defined roles, clear values, explicit limitations, and they augment the humans around them rather than replace them. TARS does not run the mission; he supports the people running it. And he has a humor setting and an honesty setting, meaning someone thought carefully about which values should be tuned into these systems and by how much. That is not science fiction anymore; these are live design questions. How do you make a system honest about its uncertainty? How do you make it helpful without making it sycophantic? How do you make it safe without making it useless? These are not purely technical questions and they do not have purely technical answers.
The hardest questions AI raises are not about the technology at all. When a judge uses an AI system to help evaluate evidence, who is actually making the decision? When a parliament uses AI to model policy outcomes, whose values are encoded in the model? When a doctor in a resource-constrained hospital gets diagnostic assistance from a system trained primarily on data from richer places, how much should they trust it? These are questions about power and accountability and what we owe each other, and we are nowhere near having good institutional answers to them even as the technology keeps arriving.
This is where the criticality idea becomes a useful lens beyond biology. The brain stays at the tipping point because of feedback, excitatory and inhibitory signals continuously pushing and pulling the system back toward that productive edge. No central controller, just constant self-correction. That is probably what good governance of AI looks like too. Not a fixed ruleset written once and enforced forever, but a continuous feedback process where researchers, policymakers, users, and affected communities are all recalibrating as the technology evolves. The danger is not that AI becomes too powerful in some abstract sense. The danger is that the feedback loops break down and the pace of capability development outstrips our ability to course-correct.
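To make the self-correction idea concrete, here is a toy negative-feedback loop, again my own sketch rather than anything from the thesis or from any governance proposal: a controller observes a noisy estimate of how strongly activity is spreading and nudges a gain parameter back toward the critical value of 1. The names (`homeostatic_tune`, `target`, `rate`) and the Gaussian noise model are assumptions for illustration.

```python
import random

def homeostatic_tune(gain=0.3, target=1.0, rate=0.05, steps=500, seed=0):
    """Toy negative-feedback loop maintaining a critical point.

    The observed branching ratio is modeled as the current gain plus
    measurement noise; each step nudges the gain toward the target.
    A deterministic stand-in for the push-pull of excitation and
    inhibition described above. Returns the gain trajectory.
    """
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        observed = gain + rng.gauss(0, 0.05)   # noisy estimate of activity spread
        gain += rate * (target - observed)     # push back toward the edge
        history.append(gain)
    return history
```

Start the gain well below 1 and the trajectory climbs toward the critical value and then hovers around it, never settling exactly, because the noise never stops. There is no central controller in the loop, just a correction applied after every observation, which is the property the governance analogy is reaching for.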
The question that started all of this, how does something simple repeated at scale produce something that looks like intelligence, does not have a clean answer yet in neuroscience. We have better tools than we did ten years ago and the evidence is accumulating, but the picture is not complete. What we do know is that the interesting behavior does not live in any single neuron. It lives in the relationships between them, in the patterns that emerge, in the collective dynamics that no individual component produces alone. I think that is also true of what AI means for humanity. The answer is not in the technology itself. It is in what we do with it together, how we distribute it, how we govern it, how we make sure it serves questions that actually matter rather than just questions that are easy to optimize for. That is harder to measure than a benchmark. It does not show up cleanly in a product demo. But it is the thing that will determine whether this moment was actually worth something.
This blog post discusses Dario Amodei's PhD dissertation, available at https://dataspace.princeton.edu/handle/88435/dsp013f462544k