My Lens on the Week - Biocomputers made from living human brain cells are no longer science fiction
Engineers realized that a soup of neurons on a chip can be programmed to perform basic computations, debugged, and productized.
I’ll try to explain, as simply as possible, how a computer built from 800,000 human neurons actually works.
First things first:
1. What “computation” really means
At its core, any computer (abacus, Apple M3, or brain) changes state in response to inputs and preserves those states long enough to influence the next cycle. Classic digital machines do this with transistors that toggle between two energy-stable states (0 or 1). That binary design gives us reliability and easy scaling, but it also forces every problem into a yes/no straitjacket and burns a lot of electricity shuttling electrons along rigid pathways.
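That "state plus persistence" idea is small enough to sketch in code. The toy class below is my own illustration (nothing from the hardware described later): a 1-bit toggle whose state is updated by the input each cycle and carried into the next one.

```python
# Minimal sketch: computation = state that changes with input and persists.
# A 1-bit toggle: each cycle it either keeps or flips its state, and that
# state survives long enough to influence the next cycle.

class ToggleBit:
    def __init__(self):
        self.state = 0  # one of two energy-stable states: 0 or 1

    def cycle(self, toggle: int) -> int:
        # the state persists unless the input asks for a flip
        if toggle:
            self.state ^= 1
        return self.state

bit = ToggleBit()
outputs = [bit.cycle(t) for t in [1, 0, 1, 1, 0]]
print(outputs)  # [1, 1, 0, 1, 1]
```

Everything from an abacus bead to a CPU register is, at this level of abstraction, a fancier version of this loop.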
2. Why the rush to rethink hardware?
Energy wall: Training a frontier-scale language model consumes megawatt-hours of GPU energy, and sustained AI demand is already pushing utilities toward entire nuclear-plant build-outs.
Bandwidth wall: Traditional computers waste time and power because the processor and the memory live in separate rooms and have to pass messages back and forth nonstop (the von Neumann bottleneck).
Scaling wall: We’ve shrunk transistors so small (features just a few atoms wide) that electrons start breaking the rules of classical physics. That’s the scaling wall, and it could stop traditional chip progress dead in its tracks.
Brains show an alternative: ~20 W of power, no global clock, memory and compute co-located, and roughly ten orders of magnitude of parallelism (on the order of 100 billion neurons firing concurrently).
3. Why brains solve the same problem differently
Biological computing flips the script. Instead of shuttling data, it thinks where it stores. Instead of step-by-step, it thinks all at once. Instead of running hot, it only fires when needed. That’s why it’s fast, light, and insanely efficient.
Let me explain how;
🧠 State is stored where compute happens.
Imagine you’re working on a problem. Instead of writing your ideas on sticky notes and walking them over to someone else’s whiteboard, you write and think on the same board. Everything happens in one place.
Neurons store memory and compute in the same location. When a neuron is active, the synapse (the connection) changes its strength right there, no separate memory system is needed.
Traditional computers constantly move data between memory and processor (like passing sticky notes back and forth). Biology eliminates that bottleneck and saves enormous amounts of time and energy.
⚡ Massive parallelism
Imagine: one person clapping alone (slow and weak) vs. an entire stadium doing the wave (fast, powerful, and synchronized).
A dish of 800,000 neurons doesn’t work like a linear assembly line. Thousands of neurons fire together, doing many things at once, and that crushes latency.
Instead of waiting for a step-by-step process, it all happens simultaneously, making reaction times radically faster.
💧 Energy minimization
Water doesn’t force its way uphill; it naturally flows down the easiest path, with no wasted energy. Neurons only fire when they absolutely need to: when the voltage across their membrane hits a threshold. Until then, they rest.
This system uses far less energy than traditional computers that keep everything powered and ticking nonstop, even when nothing is happening.
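The threshold behavior above is captured by the classic leaky integrate-and-fire model. Here is a minimal sketch of it (an illustration of the principle, not the CL1's actual dynamics): the membrane voltage leaks toward rest, accumulates input, and the cell emits a spike only when the voltage crosses threshold.

```python
# Illustrative leaky integrate-and-fire neuron: silent (and cheap) most of
# the time, it fires only when accumulated input crosses a threshold.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    v = 0.0          # membrane voltage, starting at rest
    spikes = []
    for i in inputs:
        v = leak * v + i          # leak toward rest, then integrate input
        if v >= threshold:        # fire only when the threshold is crossed
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)      # no event, no energy spent
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.6, 0.0, 0.0]))  # [0, 0, 0, 1, 0, 0]
```

Note how most time steps produce no output at all; that sparseness is exactly where the energy savings come from.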
4. How Cortical Labs turned wetware into hardware
The CL1 unit houses ~800,000 lab-grown human neurons 😱 spread across a micro-electrode array on a silicon substrate. The electrodes supply digital stimuli and record analog spikes, and the whole array sits under a life-support module that handles nutrients, temperature, and waste removal. Think “motherboard + aquarium.”
Key design choices:
Micro-electrode array: Physical bridge that converts digital pulses to ionic currents and back, preserving signal integrity at neuron-level voltages.
Closed-loop “biOS”: The neurons receive a simplified sensory world (pixels, audio, sensor values). Their responses feed straight back into the simulation, producing the reward → plasticity → learning loop brains evolved for.
6-month life support: Biological tissue stays viable only if waste products, pH, and oxygen remain in narrow bands. Automated perfusion keeps those variables in range so researchers can iterate without cell-culture expertise.
The result: a 30-unit server stack draws ≈ 850–1000 W, far below the megawatt-scale clusters needed to train today’s large language models.
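The closed loop in that design can be sketched in a few lines. The names and the crude response model below are my own inventions for illustration, not Cortical Labs' biOS API: the point is the shape of the cycle, stimulate → record → update the simulated world → repeat.

```python
# Hedged sketch of a closed-loop "simplified sensory world" cycle.
# All function names and the spike model are invented for illustration.

import random

def encode_stimulus(world_state):
    # map simulated sensory values onto per-electrode pulse amplitudes
    return [min(1.0, abs(x)) for x in world_state]

def read_spikes(pulses):
    # stand-in for the electrode read-out: stronger pulses make spikes
    # more likely (a crude, made-up response model)
    return [1 if random.random() < p else 0 for p in pulses]

def step_world(world_state, spikes):
    # feed the culture's response straight back into the simulation
    activity = sum(spikes) / max(len(spikes), 1)
    return [x + 0.1 * (activity - 0.5) for x in world_state]

world = [0.5, 0.2, 0.8]            # toy sensory state (e.g. pixel values)
for _ in range(3):                 # stimulate -> record -> update, repeatedly
    pulses = encode_stimulus(world)
    spikes = read_spikes(pulses)
    world = step_world(world, spikes)
```

Because the neurons' responses directly change what they sense next, the culture sits inside exactly the reward → plasticity → learning loop brains evolved for.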
5. How does it actually compute?
Initialization: Cells grow randomly, forming spontaneous oscillating activity. It’s just nature’s default behavior when electrically excitable cells are left to follow chemistry and physics. The randomness seeds an internal rhythm, and that rhythm is the raw canvas on which we later paint goal-directed computations.
Task definition: Engineers map a subset of electrodes to represent bits, pixels, motor commands, etc.
Feedback: The network explores wiring patterns; reward pushes it toward configurations that minimize energy while maximizing reward frequency. Nature dislikes waste: if a connection leads nowhere useful, the network lowers its firing rate and saves energy. The instant success happens, researchers apply a brief, rhythmic voltage to a subset of electrodes. The rewarded neurons flood their synapses with calcium, which triggers protein cascades that thicken those connections. If the paddle misses the ball, scientists instead inject random, jittery voltage bursts; the cells quickly “learn” that firing along those paths is energetically painful and drop them. As a result, pathways that helped win the point become physically cheaper to reuse next time.
Stabilization: Once synapses stabilize, the pattern becomes reproducible code you can call from Python, just like a GPU kernel, except the “weights” are wet.
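The reward rule described in the feedback step can be caricatured in code. This is a toy model of the logic, not the real biochemistry: a per-pathway "weight" is strengthened when that pathway was active during a success (predictable feedback) and weakened when it was active during a failure (noisy feedback).

```python
# Toy model of reward-driven plasticity: strengthen pathways active during
# success, weaken pathways active during failure. Not the real calcium /
# protein-cascade mechanism -- just its arithmetic shadow.

def update_weights(weights, active, success, lr=0.2):
    # active[i] is 1 if pathway i fired on this trial, else 0
    return [
        w + lr * a if success else w - lr * a
        for w, a in zip(weights, active)
    ]

weights = [0.5, 0.5, 0.5]
weights = update_weights(weights, active=[1, 0, 1], success=True)   # reward
weights = update_weights(weights, active=[0, 1, 0], success=False)  # "pain"
print([round(w, 2) for w in weights])  # [0.7, 0.3, 0.7]
```

After enough trials the useful pathways dominate, which is the stabilization step: a reproducible pattern you can then call like any other function.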
6. What you actually get for the $35k price tag
“Code-deployable” neurons: APIs let you stream bits into the dish and read spikes out, exactly as you would call a TPU or GPU.
Wetware-as-a-Service (WaaS): If you don’t buy a unit, you can rent time on Cortical Labs’ cloud, submitting jobs to a biological back-end while your code runs locally.
Rapid generalization: In early demos the same network learned the arcade game Pong in minutes with a fraction of the training data silicon deep-RL systems need.
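What "code-deployable neurons" might feel like to a programmer can be sketched as follows. Every class and method name here is invented for illustration; this is not Cortical Labs' published API. The point is the workflow's shape: stream bits into the dish, read spikes back out, exactly like calling an accelerator.

```python
# Hypothetical client sketch (invented names): the dish as a back-end you
# write bits to and read spikes from, like a GPU or TPU handle.

class DishClient:
    """Stand-in for a wetware back-end client."""
    def __init__(self, channels=8):
        self.channels = channels

    def stimulate(self, bits):
        # encode each input bit as a pulse on its electrode channel
        assert len(bits) == self.channels
        return list(bits)         # placeholder for the hardware call

    def read(self, pulses):
        # placeholder read-out: report a spike count per channel
        return [p * 3 for p in pulses]

dish = DishClient()
spikes = dish.read(dish.stimulate([1, 0, 1, 1, 0, 0, 1, 0]))
print(spikes)  # [3, 0, 3, 3, 0, 0, 3, 0]
```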
7. The competitive benchmark: silicon AI’s energy burn
Training GPT-3 alone consumed ≈ 1 300 MWh, equal to the annual electricity of ~130 US homes. If silicon models keep doubling parameters, aggregate AI demand could hit 100+ TWh per year (the Netherlands’ consumption) by 2027.
Wetware sidesteps that exponential curve by borrowing four billion years of biochemical optimization. Instead of brute-force matrix multiplications, it leans on sparse, event-driven spikes and plastic wiring that reshapes itself to match the problem.
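The gap between those two energy figures is worth a back-of-the-envelope check, using the numbers already quoted above (a 30-unit stack at roughly 1 kW, and GPT-3's reported ~1,300 MWh training run):

```python
# Back-of-the-envelope: energy of a 30-unit wetware stack running for a
# full year vs. one GPT-3 training run, using the figures in the article.

stack_power_kw = 1.0                     # ~850-1000 W, rounded up to 1 kW
hours_per_year = 24 * 365
stack_energy_mwh = stack_power_kw * hours_per_year / 1000   # kWh -> MWh

gpt3_training_mwh = 1300
ratio = gpt3_training_mwh / stack_energy_mwh

print(f"Stack energy per year: {stack_energy_mwh:.2f} MWh")
print(f"One GPT-3 training run = {ratio:.0f} stack-years of energy")
```

By this rough arithmetic, a single GPT-3-scale training run equals well over a century of the stack's continuous operation, which is the curve wetware is trying to sidestep.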
8. Where this could go next
Bio-robotics: If we plug CL1 control loops into drones or quadrupeds and let embodied neurons learn locomotion the way infants do, we get ultra-low-power adaptive brains for robots and prosthetics.
Drug discovery: We can test neuro-active compounds on real circuitry without animals.
Personalized medicine: We can grow neurons from a patient’s induced-pluripotent cells, then screen epilepsy or Alzheimer’s drugs on tissue with matching genetics.
Optimization black boxes: We can search problem spaces where energy cost dominates (e.g., logistics).