GL02: How We Listen To The Brain And Why It's Super Hard
This blog explores invasive vs non-invasive brain recording methods, spatial and temporal resolution tradeoffs, EEG vs neural implants, signal degradation challenges, and sampling rate limitations.
Index:
(1) The Scale Problem
(2) Would You Drill a Hole in Your Skull or Wear a Big, Heavy Helmet? — Invasive vs Non-Invasive
(3) The Resolution Wars
(4) Signal Quality Over Time
(5) Sampling Rate — The Hidden Bottleneck
(6) The Measurement Landscape — The Different Stethoscopes We Have for the Brain
(7) Conclusion — Why This Matters for Everything Else
Welcome back, readers of GreyLattice,
We are now expanding the core series to fulfil its purpose: understanding neurotechnology. Neurotech is a vast topic, so each blog in the core series focuses on a single aspect, in an intuitive order. Read this way, the core series acts like a course that gives you practical insights into how neurotechnology works.
When words such as "neurotech device" or "brain-computer interface" (BCI) are mentioned, what is the first thing that comes to mind?
Well, for me, I see a chip attached to my head, reading my thoughts. My imagination is not far from the actual case, but the reality is much more complex than it seems, or else there truly would be no need to write a series of blogs about it.
The perfect way to understand neurotechnology is to start with a general overview, a framework for how all neurotech works.
Most neurotechnology that interacts with brain signals—whether it's for controlling a prosthetic, detecting cognitive states, or treating neurological disorders—follows the same three-stage pipeline:
Sensing – capturing signals from the brain using some form of hardware (electrodes, sensors, imaging devices).
Interpreting – analysing those signals to decode brain states, behaviours, or intentions.
Acting – using the decoded information to drive an output, trigger feedback, or control an external system.
You can intuitively understand it as the same pipeline for any computer system out there, which is: Input → Processing → Output
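To make this concrete, here is a minimal sketch of that loop in Python. Everything in it is a toy stand-in (the sensing is stubbed with random noise and the "decoder" is a one-line rule), but the three-stage shape is the same one every real BCI follows:

```python
import random

def sense(n_samples: int = 250) -> list[float]:
    """Stage 1 (Input): capture raw voltages; stubbed here with random noise."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

def interpret(samples: list[float]) -> str:
    """Stage 2 (Processing): decode a brain state; stubbed with a toy rule."""
    power = sum(s * s for s in samples) / len(samples)   # crude signal power
    return "active" if power > 1.0 else "rest"

def act(state: str) -> None:
    """Stage 3 (Output): drive an output with the decoded state."""
    print(f"decoded state: {state}")

act(interpret(sense()))   # one pass through the loop every BCI runs
```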
GL02 focuses entirely on stage 1: sensing, the input. It is the first and most important step of the entire process, because how we choose to listen to the brain fundamentally limits everything that comes after.
So, how do we actually hear the brain?
Before we answer that, there's a challenge: the brain is constantly buzzing with electrical activity from 86 billion neurons, all firing at different times for different reasons. To build any neurotech device, we need to eavesdrop on this activity, and how we choose to listen fundamentally determines what we can actually "hear" and understand.
Think of it like trying to monitor a massive, complex system. Every measurement approach forces tradeoffs: Do you want the big picture or fine details? Do you want perfect timing or comprehensive coverage? Do you want clean signals or are you willing to work with noisy data?
This is a useful way to think about how we "listen" to the brain. Neurotech devices are trying to extract meaning from a complex, ongoing internal dialogue, one where neurons within different regions of the brain, and across broader systems, are constantly exchanging information.
Let’s dive into these tradeoffs and see why brain measurement is not just an engineering challenge—but the core constraint shaping both current and future neurotechnology.
(1) The Scale Problem
Where is my brain? Have any of you seen it? Well, you can assume it's in my head working right now, or I wouldn't be able to type this and you wouldn't be able to read this. But if you think about it, no person can see his or her own brain; you can only see another person's brain. The main reason is that it's located inside our head, covered by multiple layers of protection.
The brain floats in cerebrospinal fluid, doing its thing. That fluid is enclosed within the skull, which is covered by blood vessels within muscle, and finally by the epidermal tissue, the skin. There may be more layers, and this might not be the exact order, but the point is that the brain is quite well covered.
This organ is arguably the most complex system we know of—housing around 86 billion neurons, each forming thousands of connections, firing in electrical bursts, and modulating those signals with chemicals. That’s not just complexity—it’s a dense, dynamic web of communication happening in real time, at multiple scales, from individual spikes to distributed network oscillations. And to make things even harder, all of this is locked away inside a very carefully layered head as mentioned above, designed specifically to protect it from outside interference.
This is the core of the measurement problem. It can be summarized in two questions:
(a) How do we observe what's happening inside such a protected, complex, electrically and chemically active organ?
(b) And what aspect or part of it do we observe? The brain is far too complex and vast to monitor everything in high detail, at least with current technology.
These questions force us into the tradeoffs that define all of neurotechnology. Every measurement method is essentially a compromise between what we want to know and what we can practically access.
Let's say we do find a way to observe the brain as a whole, to a good extent. Which particular activity happening in the brain do we want to see, and why? This in and of itself is a difficult question to answer.
Why? Brain activity happens on many scales—sometimes we care about the precise spike of a single neuron, sometimes the collective rhythm of a group of neurons, and sometimes the overall state of a brain region. These all contain different types of information, and no current method captures all of them simultaneously. Some example scenarios:
Want to build a neuroprosthetic that moves when you think “grab”? You might need access to specific motor neurons.
Want to detect the onset of a seizure? You’ll need broad coverage of abnormal brain rhythms.
Want to study attention or emotion? You may need whole-region or network-level data.
We cannot capture all of this activity at once, purely because of its sheer volume: about 86 billion neurons (more than ten times the population of Earth) interacting with each other simultaneously, all the time.
Signal Types
It may be common knowledge that the neurons within the brain communicate with each other using electrical signals, which are called action potentials or spikes. Some points to note are:
They happen very fast (on the order of milliseconds).
They're how neurons transmit information down axons and to nearby cells.
They're the primary target for most BCIs—because they’re fast, discrete, and digital-like.
But there is a second type of signal the brain uses to communicate information and execute certain processes: chemical signals, generated by agents such as neurotransmitters, hormones, and neuromodulators.
They operate on slower timescales (milliseconds to seconds to minutes).
They determine how and whether electrical signals propagate (e.g., excitatory vs inhibitory); essentially, whether a given neuron fires or not.
They're harder to measure in real-time, but they matter for mood, attention, and learning.
Why does this distinction matter for neurotechnology?
Different recording methods can only access certain signal types:
EEG, ECoG, and electrode arrays → capture electrical activity
fMRI, PET scans → indirectly reflect chemical or metabolic activity
Optical methods (like calcium imaging) → often infer electrical spikes through chemical proxies
Electrical signals give you speed (temporal resolution) but limited context (why the spike happened).
Chemical signals give you meaning or modulation, but are slower and harder to decode in real-time.
All of this—the complexity, the layers of protection, the types of signals—leads us to the first decision or consideration when it comes to neurotech:
Do we go in and try to measure the brain directly (invasive methods)?
Or stay outside and listen from a distance (non-invasive methods)?
This is more than a medical choice—it defines what kind of data we get, what we can build, and who the technology can reach.
(2) Would You Drill a Hole in Your Skull or Wear a Big, Heavy Helmet? — Invasive vs Non-Invasive
These are the two main categories of neurotechnology devices, based on where they’re placed on a person’s body to read and interpret brain activity.
Let’s begin with the more extreme or riskier option.
Invasive methods involve placing the brain-computer interface (BCI) device inside the body—typically requiring a portion of the skull to be opened, and the device to be implanted directly onto or into brain tissue. This direct access allows the device to pick up high-resolution neural signals with far greater clarity than any external device. In other words, it listens from the front row.
But this level of access comes at a cost: surgical risk, potential infection, and long-term maintenance challenges. This is unlikely to be the ideal choice for the average consumer looking for a BCI, but it’s currently one of the most powerful options available for treating severe neurological conditions.
Non-invasive methods, on the other hand, don’t enter the body at all. These BCIs are placed outside the skull—usually in the form of caps, headbands, or helmets. They pick up brain activity through sensors that detect electrical or magnetic signals generated by the brain.
Since there’s no direct contact with brain tissue, there’s no surgical risk or any of the complications associated with invasive methods. But the signal quality is much lower—like trying to hear a whisper through a thick wall. This is due to the layers between the brain and the device: the skull, cerebrospinal fluid, muscle tissue, skin, and even hair—each of which degrades the signal.
So in the end, it all comes down to a fundamental tradeoff: signal quality vs. safety. For these technologies to evolve and become truly usable for mainstream applications, invasive BCIs will need to dramatically reduce their medical risk, while non-invasive BCIs must find clever ways to improve signal fidelity.
In Practice: Who Uses What?
Invasive BCIs are currently being tested and used in clinical research for patients with severe motor disorders—such as spinal cord injury, ALS, or locked-in syndrome. One of the most well-known examples comes from the BrainGate project, where participants with implanted electrode arrays have been able to control robotic arms, move cursors on a screen, and even type out full sentences just by imagining movement.
One participant, who had lost all voluntary movement below the neck, was able to order groceries online using only his thoughts.
Non-invasive BCIs, by contrast, are widely available and used in consumer and research settings. You’ve probably seen examples where gamers fly drones or steer vehicles using EEG headsets. Others allow users to control music, play meditation games, or train their focus.
Companies like OpenBCI, Muse, Neurosity, and NextMind are at the forefront of this wave.
Their current capabilities include:
Attention and relaxation tracking
Simple motor imagery-based control (e.g., moving a cursor left/right)
Neurofeedback for sleep, focus, or mood improvement
Gaming or AR/VR interaction (e.g., triggering in-game actions via mental states)
What they’re trying to achieve next:
More accurate and reliable decoding of intention or emotion
Enhanced resolution using new signal processing or machine learning techniques
Real-time interaction with AI models (e.g., BCI-to-ChatGPT interfaces)
Integration into daily-use wearables like smart headbands, AR glasses, or earbuds
The Future May Be Hybrid
As time passes and more innovations emerge, it’s likely that neurotech devices may no longer fit neatly into either category. The line between invasive and non-invasive is beginning to blur.
For example:
Stentrodes (tiny electrodes delivered via blood vessels) can be implanted without opening the skull. They’re being developed by a company called Synchron, and have already been used in human trials where patients could text, email, and bank online using just their brain signals.
Ultrasound-based BCIs direct sound waves through the skull to sense or modulate brain activity. Techniques like functional ultrasound (fUS) allow for high spatial and temporal resolution—nearing fMRI quality—but in a compact, wearable form. While ultrasound typically struggles to penetrate bone, new computational methods compensate for skull distortion.
This hybrid space is exploding—because it may be the key to unlocking daily-use BCIs for communication, control, therapy, and even creativity.
The Real Question
So considering all this, the real question becomes:
How much access do we truly need to the brain to understand it—and at what cost are we willing to pursue it?
(3) The Resolution Wars
You can only get a 4K movie poster or watch the movie in 144p, not both
Observing the brain is already complicated—as we've seen—but even if we figure out what to observe and why, we still need to figure out how to observe it.
In an ideal world, our devices would give us access to brain activity with both:
High spatial resolution — knowing exactly where something is happening
High temporal resolution — knowing exactly when it happens
But unfortunately, current neurotechnology makes you choose.
Do you want the crisp image—or the fast playback?
Tradeoffs have to be made in terms of resolution. Depending on the application, a neurotech device gets access to either high temporal resolution or high spatial resolution, but not both.
Before that, though, it's necessary to answer a few questions for context and understanding:
(1) What do spatial and temporal resolution actually mean in the context of the brain? Which technologies offer what?
(2) Why can’t we just compromise a bit on both sides and meet in the middle—like getting a 1080p HD video instead of a 4K still or a blurry 144p stream?
Spatial Resolution: Where in the Brain?
Spatial resolution refers to how precisely a tool can detect where brain activity is happening—down to millimeters, microns, or even individual neurons.
A method with high spatial resolution can:
Visualize fine structures (e.g., cortical layers, neurons)
Map activity to specific brain regions
Distinguish between closely neighboring sources
For example:
fMRI offers high spatial resolution (~1-3 mm), showing large-scale regional activity across the whole brain.
Intracortical electrodes (like Utah arrays) go deeper, recording from small clusters or even individual neurons.
Optical imaging in animals can track activity down to single synapses in real-time—but only in highly controlled, often invasive setups.
But many of these techniques sacrifice time resolution in exchange for this sharp spatial detail.
Temporal Resolution: When in the Brain?
As one can guess from the name, this parameter is related to time. Temporal resolution tells us how precisely we can track changes in brain activity over time.
Do we see things by the minute? The second? The millisecond?
A method with high temporal resolution can:
Capture fast spikes of neural firing
Track oscillations and rhythms
Align brain activity with actions, thoughts, or external events in real time
Examples:
EEG and MEG offer millisecond-level temporal resolution, detecting rapid electrical changes across the scalp.
ECoG (electrocorticography) also offers excellent temporal precision with better spatial localization than EEG.
Both of these measures are very important for neurotech devices. But here's the problem:
The better your time resolution, the worse your spatial resolution tends to be—and vice versa.
The Trade-off: Why Can't We Have Both?
Back to the analogy: why can't we just "split the difference" and get a 1080p HD video?
The core problem is that spatial and temporal resolution are governed by fundamentally different physical and biological limits. Optimizing one usually inherently worsens the other.
It's Not Just Engineering—It's Physics
Here's the key insight that trips up most people: we're not dealing with a simple engineering limitation that we can solve by building better sensors. We're bumping up against fundamental physics constraints about how information exists in the brain.
Different neural signals naturally exist at different timescales and spatial scales:
Electrical signals (action potentials, synaptic currents) happen in milliseconds but are localized to tiny spaces (individual neurons or small clusters)
Metabolic signals (blood flow, oxygen consumption) happen over seconds to minutes but affect large brain regions
This isn't a coincidence—it's how the brain actually works. When a neuron fires, the electrical event is instant and precise. But the metabolic response (increased blood flow to feed that area) takes time to ramp up and affects a much larger area. So, as you can see, the two signal types operate on different timescales.
The Signal Detection Problem
Even if we had perfect sensors, we'd still face the signal-to-noise problem:
To get precise timing, you need to sample very frequently, which means each sample has less signal
To get precise location, you need many sensors very close together, but then they interfere with each other
The brain tissue itself acts as a "low-pass filter"—it smooths out fast electrical signals as they travel through tissue
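You can see that low-pass effect in a toy simulation. Below, a millisecond-scale "spike" is passed through a simple low-pass filter standing in for skull and tissue; the 300 Hz cutoff and filter order are arbitrary choices for the demo, not a biophysical model:

```python
# Illustrative only: tissue's smoothing effect modeled as a low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30_000                                   # 30 kHz sampling (spike-level)
t = np.arange(0, 0.02, 1 / fs)                # a 20 ms window
sigma = 0.0002                                # 0.2 ms wide "spike"
spike = np.exp(-((t - 0.01) ** 2) / (2 * sigma ** 2))

b, a = butter(2, 300 / (fs / 2))              # 2nd-order low-pass, 300 Hz cutoff
smeared = filtfilt(b, a, spike)               # what a distant sensor might see

print(f"original peak: {spike.max():.2f}")
print(f"filtered peak: {smeared.max():.2f}")  # much smaller and broader
```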
Why "Just Use Both Tools Together" may not work ?
The natural response is: "Okay, so we have tools that are good at different things—why not just use fMRI for spatial detail and EEG for temporal detail and combine them?"
For most neuroscience research, this actually works perfectly. Researchers routinely:
Use fMRI to map brain regions involved in a task
Use EEG to understand timing of brain responses
Combine results to get a fuller picture
But this approach breaks down in specific scenarios where we need simultaneous high resolution in both dimensions, and the timing and location are interdependent.
When We Actually Need Both Simultaneously
Brain-Computer Interfaces (BCIs): Imagine controlling a robotic arm with your thoughts. You need to know exactly which neurons are firing at the exact moment you intend to move, simultaneously, to translate that into smooth, natural movement.
Using EEG + fMRI separately won't work:
EEG: "Something happened in the motor area 50ms ago, but I can't tell you exactly where"
fMRI: "The exact spot activated... 3 seconds ago"
Robot arm: "I need to know which direction to move RIGHT NOW"
Epilepsy Treatment: Surgeons need to identify the exact location of seizure onset and detect the precise moment it starts spreading, in real-time to stop it or guide surgery.
Understanding Neural Computation: Some research questions require seeing how specific neural circuits process information in real-time: "How does neuron A in location X influence neuron B in location Y over the next 10 milliseconds?"
The Real Problem: Correlation vs. Causation
When you use two separate tools at different timescales, you're assuming the slow signal (fMRI) and fast signal (EEG) are measuring the same underlying neural event.
But what if:
The electrical activity happens in one spot at time T
The blood flow response happens in a slightly different spot at time T+3 seconds
You mistakenly think they're measuring the same thing? That assumption would be wrong, and everything built on it would inherit the error.
This is like trying to understand a conversation by recording the exact words someone says (high temporal resolution) and taking a detailed photo of their face 3 seconds later (high spatial resolution). You might miss that their facial expression changed during the sentence, which changes the meaning entirely.
Current Solutions: Smart Workarounds
The field isn't stuck—researchers and engineers are developing clever solutions that work around these fundamental limitations:
1. Template-Based Regional Targeting
Modern BCIs use decades of research to identify where specific signals come from:
Motor cortex mapping: We know hand/arm control signals primarily come from the M1 motor cortex
Targeted electrode placement: Companies like Neuralink place high-density arrays specifically in these known regions
Multi-modal sensing: Combine different sensor types:
Utah arrays (high spatial + temporal resolution, small coverage)
ECoG grids (broader coverage, medium resolution)
Muscle sensors (EMG) for additional validation
2. Predictive Signal Modeling
This is where AI becomes crucial:
Signal prediction: If you know someone is about to make a specific movement, you can predict what the neural signature should look like
Temporal compensation: Use fast (but spatially imprecise) signals to trigger predictions, then validate with slower (but spatially precise) signals
Continuous learning: Models adapt to individual users over time
Example workflow:
Fast signal (EEG): "Movement intention detected at T=0"
ML model: "Based on pattern, this is likely a 'reach left' command"
Slow signal (fMRI): "Confirms at T+2s: yes, left motor cortex activated"
System: "Execute 'reach left' at T=0, validate accuracy for future learning"
3. Hybrid Architectures
Research groups are building systems that combine multiple approaches:
Primary sensors: High-resolution arrays in known regions
Secondary sensors: Broader coverage for context and validation
Predictive models: ML systems that learn user-specific patterns
Real-time adaptation: Systems that update predictions based on feedback
The Future: Edge AI and Neuromorphic Computing
The next wave of solutions involves:
Neuromorphic chips: Process neural signals using brain-inspired architectures
Edge computing: Small, specialized processors that can run ML models locally on the device
Federated learning: Models that learn across multiple users while maintaining privacy
Real-time inference: Sub-millisecond prediction times for natural control
The Bottom Line
The spatial-temporal resolution tradeoff isn't going away—it's a fundamental physics constraint. But smart engineering, targeted approaches, and AI are letting us work around it in increasingly sophisticated ways.
We may never get that perfect "1080p HD video" of the brain, but we're getting very good at combining our "4K still photos" with our "144p real-time streams" to build systems that actually work in the real world.
For most applications, that's more than enough.
(4) Signal Quality Over Time
So are BCIs a once-in-a-lifetime thing? Once you get one, you should be good to go, right? After all, how many times would you want your skull drilled into, or how many expensive helmets would you want to buy?
Well, unfortunately it's a lot more complex than that. Sure, a neurotech device can detect brain signals, but for how long? Days? Weeks? Years? Just as any device ages with time, so do neurotech devices. Whether invasive or non-invasive, signal quality degrades over time — and this is one of the least talked-about problems in the field. So let's talk about it.
Starting with the most problematic category: invasive devices, of course.
Invasive Devices: The Body Fights Back
Invasive BCIs, like implanted electrodes or neural arrays, start off with great signal fidelity. You’re listening from the front row.
But over time, the body does what it’s designed to do: it responds to foreign objects.
Scar tissue (gliosis) begins to form around the electrode
This increases the distance between the electrode and neurons
The recorded signals become weaker and noisier
Eventually, entire channels can go dark
This process is called the foreign body response, and it’s a major barrier to long-term implantable BCIs.
There’s also electrode drift:
Tiny movements of the electrode inside brain tissue due to brain pulsations, micromotions, or even normal movement
Leads to “channel instability” — the same neuron may not show up on the same electrode over time
Result? A BCI that worked great in week 1 might be unreliable by week 12 unless it has adaptive recalibration mechanisms or redundancy built in.
Non-Invasive Devices: The Noise You Can’t Escape
Non-invasive devices don’t suffer from scar tissue, but they’re plagued by a different issue: noise — from everywhere.
Muscle movement (jaw clenching, blinking)
Electrical interference (phones, nearby electronics)
Variations in headgear fit (slight shifts in headset position)
Skin/hair impedance changes (sweat, oil, even mood can affect signal)
All of this leads to low signal-to-noise ratio (SNR) — even when you’re measuring the “right” brain region.
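If you want to put a number on "low SNR", the standard way is a power ratio in decibels. A toy example, with made-up but plausible amplitudes (a 10 µV alpha rhythm buried in 20 µV of noise):

```python
# Quantifying signal-to-noise ratio (SNR) in decibels for a toy recording.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                      # Hz, a typical consumer EEG rate
t = np.arange(0, 2, 1 / fs)                   # 2 seconds of data
signal = 10e-6 * np.sin(2 * np.pi * 10 * t)   # 10 uV alpha rhythm at 10 Hz
noise = 20e-6 * rng.standard_normal(t.size)   # 20 uV RMS of everything else

snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"SNR: {snr_db:.1f} dB")                # about -9 dB: noise dominates
```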
You also face user variability over time:
Attention fatigue
Emotional state
Circadian rhythms
A perfectly calibrated EEG headset might need re-adjustment every session, or even mid-session.
The Hidden Cost: The Maintenance Problem
This ongoing decay of signal quality leads to what you might call the BCI Maintenance Problem — a layer of complexity that’s rarely visible to users or even engineers unless they’re in the weeds.
For long-term use cases, this introduces a few big problems:
Recalibration overhead: Devices need periodic tuning, retraining ML models, or user-guided realignment
Signal drift adaptation: Algorithms must adapt to shifts in signal baseline over time
User frustration: BCIs that work inconsistently create fatigue and disillusionment — a major UX bottleneck
Some companies now even ship dynamic recalibration algorithms to handle this, but it’s still early days.
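What might such a recalibration algorithm look like? One generic idea (an illustration, not any vendor's actual method) is to z-score incoming features against a slowly adapting baseline, so gradual drift gets absorbed before it ever reaches the decoder:

```python
# One simple drift-adaptation idea: normalize each incoming feature against
# an exponentially weighted running baseline, so slow drift is absorbed.
# A generic sketch, not any company's actual recalibration algorithm.

class DriftAdaptiveScaler:
    def __init__(self, alpha: float = 0.01):
        self.alpha = alpha                    # how fast the baseline adapts
        self.mean = 0.0
        self.var = 1.0

    def update(self, x: float) -> float:
        """Return x z-scored against the slowly drifting baseline."""
        self.alpha_mix = 1 - self.alpha
        self.mean = self.alpha_mix * self.mean + self.alpha * x
        self.var = self.alpha_mix * self.var + self.alpha * (x - self.mean) ** 2
        return (x - self.mean) / (self.var ** 0.5 + 1e-9)

scaler = DriftAdaptiveScaler()
for day, raw in enumerate([1.0, 1.1, 0.9, 2.0, 2.1, 2.2]):  # baseline drifts up
    print(f"session {day}: raw={raw:.1f} normalized={scaler.update(raw):+.2f}")
```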
So, What’s Being Done About It?
Here are a few current and emerging strategies:
Soft, flexible electrodes: Reduce tissue damage and scar formation
Surface coatings: Anti-inflammatory materials to reduce immune response
Machine learning-based recalibration: Models that adapt to signal drift over time
Multi-modal sensing: Combining brain signals with muscle, eye, or behavior data to add redundancy
Auto-tuning headgear: Systems that optimize placement and pressure in real-time
Still, these are mitigations, not solutions. The underlying degradation persists.
So What Does This Mean for Neurotech?
If we want BCIs to move from lab demos to daily tools — like prosthetic control, neurogaming, or mood tracking — we can’t just optimize signal quality once.
We need systems that maintain signal quality continually, despite biology, behaviour, and time.
How Fast Does Signal Quality Degrade?
This degradation isn't just theoretical; it's measurable.
For example, a typical Utah array might show:
~85% of channels functional at 6 months,
~60% after 2 years,
and as low as 30% after 5 years,
depending on the implantation site and biological response.
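Treating those illustrative numbers as data points, you can fit a simple exponential decay to them and extrapolate. The model choice here is an assumption for illustration, not an established result:

```python
# Fitting a simple exponential decay to the illustrative survival numbers
# above (85% at 6 months, 60% at 2 years, 30% at 5 years).
import numpy as np
from scipy.optimize import curve_fit

months = np.array([6.0, 24.0, 60.0])
surviving = np.array([0.85, 0.60, 0.30])

def decay(t, k):
    """Fraction of channels still functional after t months."""
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(decay, months, surviving, p0=[0.02])

print(f"fitted decay rate: {k_fit:.3f} per month")
print(f"predicted at 3 years: {decay(36, k_fit):.0%} of channels functional")
```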
This loss directly impacts the richness and stability of the signals a BCI can detect—sometimes forcing researchers or patients to retrain models or even consider re-implantation, a significant loss of time, effort, and financial resources.
The User Experience?
The impact isn’t just technical—it’s deeply human.
Imagine spending months learning to control a prosthetic arm, building up speed, confidence, even subconscious muscle memory.
Then slowly, your arm begins to respond erratically.
Movements feel delayed, misunderstood.
You think: “Am I doing something wrong?”
But the truth is:
You’re not losing your ability — the hardware is losing the signal.
This psychological disconnect—between the user’s skill and the system’s decay—creates a huge barrier to long-term adoption.
So, considering all these problems, what are the companies doing about them?
Why Neuralink (and Others) Are Obsessed with This Problem
Companies like Neuralink and Paradromics (both of which focus on invasive BCIs), along with academic groups working on soft bioelectronics, are devoting massive effort to solving these problems. They're not just designing smaller or denser electrodes; they're redesigning the entire signal chain, from:
Surgical robot precision (to reduce tissue trauma)
Biocompatible materials (to prevent immune responses)
Flexible interfaces (to move with brain tissue rather than irritate it)
On-device signal processing (to adapt to signal changes in real-time)
What would the ideal device look like? A long-term, stable, low-maintenance interface that works more like a pacemaker and less like a prototype. And I know we will definitely reach that point soon.
Moving on.
(5) Sampling Rate — The Hidden Bottleneck
Some of you may have heard of sampling rate before, especially if you have worked on college projects involving sensor hardware. It essentially tells you how many times per second a sensor feeds an input to the system that processes it. The same idea applies here, and it has surprisingly large implications for how a neurotech device works. But this is a blog for anyone interested in neurotech, so let me introduce the concept to you all.
Sampling rate, simply put, is the number of times per second a device measures a brain signal, and it is measured in hertz (Hz), i.e., samples per second. In a neurotech device, the sampling rate determines how often brain signals are measured and taken in as input for any form of processing. The required rate depends on multiple factors, most of them tied to the device's application. There are levels of brain activity that cannot be measured unless the device reaches a certain sampling-rate threshold. So what are these thresholds, and how does one reach them? Let's dive deep into this rabbit hole.
The Nyquist theorem
The Nyquist theorem states that the sampling rate of a device (or any piece of equipment) must be at least twice (2x) the highest frequency of the signal it is trying to capture. For example, if the device must capture a 100 Hz signal, then its sampling rate must be at least 200 Hz. As simple as that.
Note that this is a simplified version of the Nyquist theorem rather than its full statement; the actual theorem is considerably more subtle. In general, a higher sampling rate is better for a given use case, as it prevents distortion of the recorded wave, known as aliasing.
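You can watch aliasing happen in a few lines of code. A 100 Hz sine sampled at 120 Hz (below the 200 Hz Nyquist requirement) shows up as a wave at |100 − 120| = 20 Hz:

```python
# Aliasing demo: a 100 Hz sine sampled at 120 Hz (below the 200 Hz
# Nyquist minimum) masquerades as a much slower ~20 Hz wave.
import numpy as np

f_signal = 100                                # Hz, the wave we want to capture
fs_bad = 120                                  # Hz, an undersampled recording

t = np.arange(0, 0.1, 1 / fs_bad)             # 0.1 s of undersampled data
x = np.sin(2 * np.pi * f_signal * t + 0.5)    # phase offset avoids exact zeros

crossings = np.sum(np.diff(np.sign(x)) != 0)  # sign changes: 2 per cycle
print(f"apparent frequency: ~{crossings / 2 / 0.1:.0f} Hz")  # ~20 Hz, not 100
```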
OK, so why are we talking about this theorem? Because it matters a lot for any and all neurotech being made now or in the future: it leads to a three-part problem that cycles back on itself.
The Self-Referential Problem
As mentioned above, there are thresholds in brainwave measurement. Different types of brainwaves occur on different timescales, which essentially means different frequencies, and each of these signal types corresponds, in some sense, to a different application a neurotech device can execute.
Application — Signal Type — Recommended Sampling Rate
Mood/emotion tracking — Slow cortical potentials — ~1 Hz
Sleep tracking — Delta waves — 10–20 Hz
Attention detection — Alpha/beta — 100–250 Hz
Motor prosthetics — Action potentials — 10,000+ Hz
So your use case determines your needed sampling rate:
Mood tracking → lower is fine
Motor control/prosthetics → much higher needed
Providing a sampling rate of 10,000+ Hz is a serious challenge in both software and hardware. But that's just the first part of the problem.
Let's assume the neurotech device is somehow able to sample at 10,000+ Hz. The device now takes 10,000 "screenshots" of brain data from a single channel every second, which is a lot of screenshots: essentially pure raw data. The size of this data can be calculated using the intuitive formula below:
Data size = (Number of channels) × (Sampling rate) × (Sample size in bytes) × (Duration in seconds)
So, based on the above:
A 64-channel EEG recording at 1000 Hz for 1 hour can easily exceed 1 GB of data.
Neural implants with 256 channels at 30 kHz? You’re in terabytes per day territory.
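Here is the same arithmetic in code, assuming each sample is stored as a 4-byte float (the storage format is my assumption for illustration):

```python
# Plugging numbers into the formula above, assuming 4 bytes per sample.
def data_size_bytes(channels: int, rate_hz: float,
                    bytes_per_sample: int, seconds: float) -> float:
    return channels * rate_hz * bytes_per_sample * seconds

eeg = data_size_bytes(64, 1000, 4, 3600)           # 64-ch EEG, 1 kHz, 1 hour
implant = data_size_bytes(256, 30_000, 4, 86_400)  # 256-ch implant, 30 kHz, 1 day

print(f"EEG, 1 hour:    {eeg / 1e9:.2f} GB")       # ~0.92 GB raw; over 1 GB
                                                   # once metadata is included
print(f"Implant, 1 day: {implant / 1e12:.2f} TB")  # ~2.65 TB per day
```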
This is the second problem in the cycle: storing the data, and deciding which data to keep and which to discard.
To follow the cycle further, let's assume we now have infinite storage, so any and all data can be kept. Then the next problem emerges: latency.
Yes, the concept every person who plays or has played video games thinks about, especially in esports, because high latency will understandably guarantee your defeat. But why? Latency, simply defined, is the time between an input and the system's response to that input: output time minus input time, i.e., processing time.
Even if we manage a high sampling rate and have infinite storage for all the data that comes with it, a large amount of computational power is still required to process that data and act on it. Processing gigabytes or terabytes of data to make sure a person's arm moves correctly is not a completely solved problem even today. The higher the sampling rate, the more the data, and the longer the processing time, finally leading to high latency.
Now let's say you have the computational power to attack the latency issue. What do you assign that power to? It can go toward maintaining a high sampling rate for the initial task, or toward processing all the data gained from that high sampling rate, but not both. This is how the problem cycles back, making it self-referential in a way.
Of course, modern BCI companies are aware of such problems. So what are they doing about them? No company is immune to the sampling-rate dilemma, but the way each chooses to tackle it often reveals what kind of neurotech it is building and what purpose it serves. Let's look at three companies and their products in this article: Neuralink, OpenBCI, and Neurosity. I chose these three because they work on the same problem in three distinct ways.
What do different companies do about this problem?
Neuralink — Filter, Compress, Transmit, Then Process
Let's start with a company that works on invasive BCIs: Neuralink. Their N1 implant samples 1024 channels at about 20 kHz per channel. Theoretically, that is about 200 megabits per second (200 Mbps, assuming, per some sources, roughly 10 bits per sample). But the implant's wireless link can only transmit about 1 Mbps. So what data are they transmitting?
The raw data is not transmitted directly, as it would far exceed that bandwidth. Instead, the N1 performs on-chip spike detection and compression: the ASIC (a custom-designed chip inside the implant) filters out irrelevant data and transmits only spike events that match pre-determined patterns. It's like compressing a high-speed security camera feed by only sending frames where motion is detected. You lose a lot, but what you keep is usable in real time.
According to Neuralink’s public documentation, they achieve over 200× compression to make this possible. But it comes at a cost: you lose access to slow cortical activity, field potentials, and sub-threshold events unless you're willing to store the raw data offline — which they currently don’t do.
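A toy version of the "detect spikes, discard the rest" idea looks like this. It uses a standard robust threshold from the spike-sorting literature; it is a generic sketch, not Neuralink's actual on-chip algorithm:

```python
# Toy spike detection: keep only threshold-crossing events, drop the rest.
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                                    # one second at 20 kHz
raw = rng.normal(0.0, 1.0, fs)                 # background noise, one channel
spike_idx = rng.choice(fs, size=50, replace=False)
raw[spike_idx] += 8.0                          # inject 50 large "spikes"

sigma = np.median(np.abs(raw)) / 0.6745        # robust noise estimate
events = np.flatnonzero(raw > 5 * sigma)       # transmit only these samples

print(f"samples in:  {raw.size}")
print(f"events out:  {events.size}")           # ~50
print(f"compression: ~{raw.size // max(events.size, 1)}x")
```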
This optimization is all about enabling low-latency, high-fidelity control, like moving a robotic arm or cursor with brain signals, not deep neuroscientific analysis. For Neuralink, real-time response is prioritized far above completeness. Understandably so: they work with medical patients who have brain injuries or neurological disorders, cases in which latency is unacceptable.
OpenBCI — Transmit Directly, Process After
Now let's look at OpenBCI, which builds non-invasive, research-friendly EEG headsets like the Cyton board. The Cyton samples at 250 Hz per channel by default and transmits the raw, unfiltered signals directly to a connected device over Bluetooth.
While this is far below spike-sorting territory (roughly 10–20 kHz minimum), it works well for other applications, as the thresholds above suggest:
Measuring alpha/beta rhythms (attention, alertness)
Running brain-training experiments
Rapid prototyping in neurofeedback or art
Because the data rate is low, latency isn't a major issue, and researchers get full access to the uncompressed signals. It's open source, customizable, and built to record first and analyze later. This is very different from what Neuralink does, for the reasons discussed above.
The downside? At this sampling rate you can't do real-time prosthetic control or decode fast neural dynamics: no prosthetic arm control, no fast interaction with electronic devices.
Neurosity — Process Locally, Transmit Smartly
Neurosity’s Crown headset is also non-invasive, but with a twist: it comes with a quad-core ARM CPU, RAM, and onboard storage inside the headset.
Sampling happens at 256 Hz, and pre-processing is done entirely on-device. Only meaningful summaries — like brainwave classifications, metrics, or ML outputs — are sent to the companion app or cloud.
This is a middle path: Neurosity avoids the BLE bottleneck by not trying to send raw EEG at all. Instead, it turns the headset into an edge computer, where signal cleaning, filtering, and ML model inference happen at the point of acquisition.
This lets them avoid both raw data overload and high latency — and is especially well-suited for applications like:
Productivity optimization
Focus/mindfulness tracking
Brain-computer typing (basic intent detection)
You lose the raw brain signals (unless you're using their dev kit), but you gain a slick real-time interface with no lag, even over mobile connections.
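A sketch of what "process locally, transmit summaries" might look like: compute band powers on-device and send only a handful of numbers. The bands and the "focus" heuristic below are illustrative, not Neurosity's actual pipeline:

```python
# Edge-processing sketch: summarize raw EEG into band powers on-device,
# and transmit only the summary, never the raw samples.
import numpy as np
from scipy.signal import welch

def summarize(raw: np.ndarray, fs: int = 256) -> dict:
    freqs, psd = welch(raw, fs=fs, nperseg=fs)          # power spectrum
    band = lambda lo, hi: psd[(freqs >= lo) & (freqs < hi)].mean()
    summary = {"alpha": band(8, 13), "beta": band(13, 30)}
    summary["focus"] = summary["beta"] / (summary["alpha"] + 1e-12)
    return summary                              # a few floats, not raw EEG

raw = np.random.default_rng(2).normal(size=256 * 10)    # 10 s of fake EEG
print(summarize(raw))                           # all that leaves the device
```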
Why This Matters
Three companies, three very different answers to the sampling‑rate trap:
Neuralink: Latency is king → sacrifice data breadth, maximise compression.
OpenBCI: Openness is king → sacrifice rate, keep every raw sample.
Neurosity: UX is king → sacrifice raw access, keep edge intelligence.
Understanding these trade‑offs makes it clear why there will never be a one‑size‑fits‑all BCI—every application carves its own path through the bandwidth‑storage‑latency triangle.
(6) The Measurement Landscape — The Different Stethoscopes We Have for the Brain
Throughout the GreyLattice blogs, if you have read them, you will have come across terms like MEG, fMRI, and EEG. These are all different ways of reading, or more specifically measuring, brain activity of both types: electrical and chemical. This section provides a bit more detail on each of them, with their pros and cons. The aim is to understand how and why a particular method is used.
Each brain recording method can be described as a function of the tradeoffs discussed above. No method known to date scores highly on all of these parameters (each has some tradeoff), but every method scores higher on some parameters than the rest. So they are like us human beings: each is unique, with its own strengths and weaknesses.
The parameters are invasiveness, spatial resolution, temporal resolution, signal quality, sampling rate, and finally, based on all the rest, application fit.
So here is the list of the major brain recording methods used in neurotech today:
EEG (Electroencephalography)
How it works: Uses electrodes placed on the scalp to measure voltage fluctuations generated by large populations of neurons firing in sync. Generally, it picks up the activity of cortical pyramidal cells (which make up the majority of neurons in the mammalian cortex and are characterized by their pyramid-shaped cell bodies). The signals generated by this firing travel through the skull and are picked up at the surface.
Invasiveness: Non-invasive
Spatial resolution: Low (centimeters, blurred by the skull and tissue)
Temporal resolution: High (~250–1000 Hz)
Used by(Organizations): OpenBCI, Neurosity, Emotiv, Neurable
Applications: Attention tracking, neurofeedback, meditation/focus apps, sleep monitoring, rapid prototyping, BCI games
fNIRS (Functional Near-Infrared Spectroscopy)
How it works: Measures metabolic brain activity by shining near-infrared light through the skull and measuring how much is absorbed or scattered. Since oxygenated and deoxygenated hemoglobin absorb light differently, this allows estimation of blood-flow changes, a proxy for brain activity.
Invasiveness: Non-invasive
Spatial resolution: Medium-low (1–3 cm depth, coarse mapping)
Temporal resolution: Low (~1–10 Hz)
Used by: Kernel (Flow)
Applications: Mental workload tracking, meditation, mood and focus states, brain-health apps
fMRI (Functional Magnetic Resonance Imaging)
How it works: fMRI detects changes in blood oxygenation levels (BOLD signal) using magnetic fields. This reflects local brain activity due to neurovascular coupling, but with a time lag.
Invasiveness: Non-invasive
Spatial resolution: Very high (~1–2 mm voxels)
Temporal resolution: Very low (~0.1–1 Hz)
Used by: Research labs, hospitals
Applications: Mapping active brain regions during tasks, resting-state networks, deep brain research
ECoG (Electrocorticography)
How it works: ECoG involves placing electrodes directly on the surface of the brain (under the skull, but outside the brain tissue) to record electrical activity. It captures local field potentials with less distortion than EEG.
Invasiveness: Semi-invasive (craniotomy required)
Spatial resolution: High (~mm–cm)
Temporal resolution: High (~1–2 kHz)
Used by: Blackrock Neurotech, academic hospitals
Applications: Seizure localization, motor/speech BCIs, clinical brain mapping
Stentrode (Endovascular Brain Interface)
How it works: A stent-like electrode array is implanted via blood vessels (usually jugular vein → motor cortex), avoiding brain penetration. It records from inside blood vessels near the cortex.
Invasiveness: Minimally invasive (via catheter insertion)
Spatial resolution: Medium-high (~cm range, closer than EEG)
Temporal resolution: Moderate (~1 kHz)
Used by: Synchron
Applications: Brain-to-text typing for paralyzed users, ALS communication BCIs
Surface Arrays (Subdural)
How it works: Thin, flexible electrode films or grids are placed under the dura, resting directly on the brain's surface without penetrating the tissue. They record cortical field potentials, similar in spirit to ECoG but with thinner, denser arrays and higher channel counts.
Invasiveness: Semi-invasive (surgical placement, but no penetration of brain tissue)
Spatial resolution: High (~mm scale or finer, depending on electrode pitch)
Temporal resolution: High (~1–2 kHz)
Used by: Precision Neuroscience, academic labs
Applications: Motor and speech decoding, cortical mapping, clinical BCIs
Utah Arrays / Microelectrodes
How it works: These are tiny needle-like electrodes inserted directly into brain tissue, recording action potentials from individual or small groups of neurons. Extremely high fidelity.
Invasiveness: Fully invasive
Spatial resolution: Very high (micrometer scale, single neuron level)
Temporal resolution: Very high (~20–30 kHz)
Used by: Neuralink, Paradromics, Blackrock
Applications: Prosthetic control, speech decoding, fine-grained neural research, real-time interfaces
MEG (Magnetoencephalography)
How it works: MEG uses superconducting sensors (SQUIDs) or optically pumped magnetometers to detect the tiny magnetic fields produced by synchronized neuronal currents, typically in the cortex. Magnetic fields are less distorted by the skull and tissue than electric fields.
Invasiveness: Non-invasive
Spatial resolution: High (better than EEG, ~mm–cm)
Temporal resolution: Very high (~1 kHz)
Used by: Kernel (Flux), academic labs
Applications: Cognitive neuroscience research, language processing, real-time cortical mapping
(7) Conclusion — Why This Matters for Everything Else
Finally, we reach the end of this article on the realistic challenges of measuring brain activity for anything neurotech. Reading this may make it seem like there is a multitude of problems to solve and nothing you or I can do. That is mostly true, but understanding in depth why something won't work paves the way for ideas about why something else could work, how to truly solve the problem, or how to find an alternative way around it. Anyone reading about this, through my work or anyone else's, could have that aha moment and make a major leap in neurotech, as the companies mentioned above (and many others not mentioned) are doing right now. I wish to be a pioneer in the future of safe, healthy, and highly functional neurotechnology, and I know for sure it's going to happen soon.
A small recap of the problems discussed: scale, invasiveness, the inverse relation between spatial and temporal resolution, signal quality over time, the sampling-rate problem, and finally measurement-method trade-offs. These can be seen as six areas of great potential in neurotech; finding a unique solution to any of them could lead to a world-changing start-up as well.
All of this is to say: the foundation determines what the building can and cannot hold. What is measured defines the product, its potential, and its limits. And putting all of it together is the biggest challenge of them all.
I shall finish this article by revealing the next topic being worked on in the GreyLattice core series. GL03 will pick up the topic GL02 was originally supposed to cover: the cell that makes everything possible, the neuron itself. The blog will discuss what neurons compute, what defines their behaviour, how different they are from synthetic neurons, and more. If GL02 was about how we hear the brain, then GL03 is about what the brain is trying to say, and how it says it through spikes, structure, and computation.
And finally, thank you all for being here and reading this piece. I can tell the outreach is not that great yet, but I shall try my best to get this out as widely as possible and to grow GreyLattice as best I can.
If you liked this article, please support GreyLattice by sharing the page as much as possible. A lot is in the works and taking more time than it should to come out, but whatever does release, I can guarantee it's going to be great.
Thank you again. Hope you have a great day.
Girish-K01