Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who’s next? It’s time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants—ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS—depending on the Maturity Level (ML) that National Regulatory Authorities (NRAs) have reached in regulatory affairs requirements for healthcare products. Find more details…

Innovative Brain Technology Empowers ALS Patients to Communicate

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that affects muscle control, and most patients reach a point at which either the disease itself or a necessary tracheotomy impair their speech. Because eye movement is typically preserved longer, assistive technologies that track patients’ gaze have helped them communicate, but even control of eye movement can eventually be lost. This leaves patients in a form of “locked-in syndrome” with no means of communicating with their loved ones and caregivers, sometimes for more than a year at the end of their lives.

But maybe it doesn’t have to be that way. The brain-computer interface (BCI) company Cognixion today announced a clinical trial investigating the use of its Axon-R headset as a communicative device for patients in the late stages of ALS, also known as Lou Gehrig’s disease. The company hopes to provide a communication tool to a group of people with no proven alternatives.

“We’re trying to solve the hardest problem we can find,” says Chris Ullrich, chief technology officer at Cognixion. Using the headset for this application requires combining BCI tech with both artificial intelligence and augmented reality.

How Cognixion’s BCI tech works

Cognixion is starting a clinical trial investigating the use of its Axon-R BCI headset for late-stage ALS patients. Cognixion

The non-invasive headset monitors brain activity with electrodes placed over the occipital lobe at the back of the skull. These electrodes use the standard brain-monitoring technique of electroencephalography (EEG) to detect a signal known as steady state visual evoked potentials (SSVEP), a natural brain reaction to an image flashing at regular intervals, perhaps 8 to 15 times per second.

The Axon-R device can detect a choice among multiple options (such as different letters, words, or phrases) presented at different frequencies within the user’s augmented reality view. The device can offer up groups of letters for the user to choose from and then offer the individual letters within that group; like a smartphone’s autocomplete function, it can also suggest likely words or phrases based on the user’s initial choices.

The resulting message can be read aloud automatically or can be displayed on a front-facing screen next to the patient’s face. Critically for late-stage ALS patients, the brain response is triggered through attention alone, and doesn’t require the user to directly gaze at the option they want to select, says Ullrich.
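The underlying selection scheme can be sketched in a few lines of Python. This is a toy illustration of frequency-tagged (SSVEP-style) decoding, not Cognixion’s actual pipeline: each option flickers at its own rate, and the decoder picks the option whose frequency (and its harmonics) carries the most spectral power in the occipital EEG.

```python
import numpy as np

def ssvep_choice(eeg, fs, option_freqs, harmonics=2):
    """Return the index of the flicker frequency dominating an EEG trace.

    eeg: 1-D occipital EEG signal; fs: sampling rate in Hz;
    option_freqs: the flicker rate assigned to each on-screen option.
    Sums FFT power at each candidate frequency plus its harmonics and
    returns the index of the strongest response.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in option_freqs:
        power = 0.0
        for h in range(1, harmonics + 1):
            power += spectrum[np.argmin(np.abs(freqs - h * f))]
        scores.append(power)
    return int(np.argmax(scores))

# Simulated 2-second trace: the user attends to the 12 Hz option.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_choice(eeg, fs, [8, 10, 12, 15]))  # → 2 (the 12 Hz option)
```

Real decoders typically use multi-electrode statistics such as canonical correlation analysis rather than a single FFT, but the frequency-tagging principle is the same.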

Cognixion has also developed an assistive AI system, which it dubs a “conversational co-pilot,” to help patients produce speech more quickly. The AI will be tailored to each patient, trained on available examples of their own speech or writing, and will ideally be able to respond to a message with the suggestion of entire phrases or sentences after the user makes a few initial decisions. The company anticipates this will allow communication at “near conversational speed.”

The Cognixion headset’s display shows the user various ways to respond to a question, with letter groupings similar to those on telephone keypads. Cognixion

What are the goals of the ALS trial?

“[Conversational speed] has been the holy grail of a lot of BCI research,” says Brendan Allison, a researcher affiliated with the University of California, San Diego and the BCI Society, who does not work with Cognixion. But generally, the fastest claimed performances (measured in words per minute) have required a controlled laboratory setting, and sometimes restrictions on vocabulary.

Ullrich says that while Cognixion plans to track words per minute, this research will prioritize the rate at which patients make VEP-based selections, as well as the subjective experience of the patients and caretakers in dialogue.

Allison notes that success in this field is highly relative. “If you have someone who is at zero words per minute, [communicating at] even one word per minute—that’s huge,” he says. While naturalistic communication would be a remarkable achievement, for late stage ALS patients, any communication at all would be a boon. Patients using assistive communication are often involved in vital choices about their care and end-of-life decisions, and the reliability of these systems—at any speed—will be a key factor in ethical and legal matters.


The ALS patient and clinical trial participant Rabbi Yitzi Hurwitz tries out Cognixion’s communication tool.
Cognixion

Other applications for Cognixion’s BCI tech

ALS affects somewhere on the order of 30,000 people in the United States, with about 5,000 new diagnoses each year, according to the U.S. Centers for Disease Control and Prevention. The Cognixion study is currently recruiting participants with the help of the ALS Association.

The Axon-R, with tools and feedback for developers, is a research version of the Cognixion One headset, which in 2023 received a breakthrough device designation from the FDA, a program intended to help streamline approval processes for medical devices addressing unmet needs. Cognixion currently sells the Axon-R as a research platform starting at US $25,000. An eventual consumer model of the Cognixion One would not require all of the same features, but its pricing is yet to be determined.

Ullrich notes that the company’s technology, as a versatile platform for communication and control, could potentially also be useful to ALS patients at earlier stages of the disease, as well as people with other conditions that affect mobility or communication, such as cerebral palsy, multiple sclerosis, or spinal cord injury.

Another major approach to BCI-assisted speech uses implanted electrodes and records signals from parts of the brain associated with producing speech. More broadly, BCI technologies are being investigated for a variety of uses, such as control of a wheelchair or robotic prosthetics; gaming and entertainment; and general monitoring of brain health and activity.

Google’s Pixel Watch Detects When Your Heart Ceases to Beat

Later this month, Google is expected to roll out software on its Pixel Watch 3 in the United States that has the potential to correctly identify two-thirds of out-of-hospital cardiac arrests in people wearing the smartwatch. The feature uses AI to detect when the wearer no longer has a pulse, and it’s meant to combat the quiet killer of cardiac events that occur at home when people are alone and unable to call for help.

A team of Google researchers and scientists at the University of Washington recently published a study testing the software, with the aim of balancing the need for a low number of false positives—when 911 might be called but not needed—with the desire to identify a loss of pulse in as many cases as possible.

“You can make this more sensitive, but it just comes at a cost,” says Google research scientist Jake Sunshine, who led the study. An algorithm that “excessively” calls 911, Sunshine says, “can’t exist in the world like that.”

The study was released by the journal Nature as an accelerated article preview on 26 February, the day after the FDA announced premarket medical device approval of the loss of pulse feature on the Pixel for the company Fitbit, which was acquired by Google’s parent company Alphabet in 2021. The feature was approved in Europe last year, and is expected to become available to people in the U.S. this month.

Training Pixel’s AI

Data from three cohorts of Pixel Watch wearers was used to train the model. The first cohort included 100 patients with an implanted cardiac defibrillator, which delivers small pulses of electricity to the heart when it detects irregular heartbeats. The patients wore a Pixel Watch when their heart temporarily stopped during a scheduled test of their defibrillator.

But that data was hard to get because the tests had to be done under medical supervision. “We can’t just take healthy volunteers and make their heart stop, and then send them on their way home, right?” Sunshine notes. So, the team turned to a second cohort of 99 participants who experienced a temporary loss of pulse when a tourniquet tightened on their arm. The two signals—or lack thereof—recorded on the wrist of defibrillator patients looked “indistinguishable” from the signals of people with a tourniquet wrapped around their arm. The third and largest cohort included nearly 1,000 Pixel Watch wearers living their daily lives.

The model was trained to identify the transition between a regular heart rhythm and loss of pulse. Part of the model processed the pulse signal to identify if the amplitude dropped and if the accelerometer detected any movement. Another part of the model used neural networks to quickly run through more than 500 signal features in order to confirm that a transition between pulse states took place.

But these signals can also occur when a wearer simply falls down or lies in an awkward position, Sunshine says. The model needed to proceed through additional checks before calling 911.

After identifying a possible loss of pulse, the watch turns on an infrared light that penetrates deeper into the skin than the standard green light that is always on to detect pulse. The watch searches for a pulse as the green and infrared lights flood the wrist. At the same time, another algorithm checks that the pulse detected, if there is one, matches the regularity of a beating heart.

Finally, a “quite annoying” haptic buzz with an irregular pattern is turned on, Sunshine says. If the wearer is still motionless after 35 seconds of buzzing, then 911 is called. The goal is for classification to occur in around one minute.
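The escalation logic described above amounts to a staged gate in which any sign of a heartbeat or movement aborts the alert. The sketch below is a plain-Python paraphrase of the article’s description; the function and flag names are illustrative, not Google’s actual implementation.

```python
def loss_of_pulse_check(amplitude_dropped, motionless, ir_pulse_found,
                        pulse_regular, responded_to_buzz):
    """Staged escalation, loosely following the article's description.

    Every stage must agree before the watch escalates; a single 'alive'
    signal at any stage aborts the alert.
    """
    if not (amplitude_dropped and motionless):
        return "keep monitoring"   # first-pass screen not triggered
    if ir_pulse_found and pulse_regular:
        return "keep monitoring"   # infrared recheck found a heartbeat
    if responded_to_buzz:
        return "keep monitoring"   # wearer moved during the haptic alert
    return "call 911"              # all checks point to loss of pulse

print(loss_of_pulse_check(True, True, False, False, False))  # → call 911
```

The ordering matters: the cheap amplitude-and-motion screen runs continuously, while the power-hungry infrared check and the haptic prompt only run after a candidate event.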

The algorithms in Google’s Pixel Watch look for changes in pulse amplitude that might be a sign of cardiac arrest.

Google

Specificity Over Sensitivity

After training the algorithm in these controlled settings, Sunshine and his colleagues tested the feature on 355 Pixel wearers outside the lab, yielding one errant call to 911. The team also tested the model back in the lab on a new set of participants using the tourniquet technique to temporarily pause their pulse. There, the model correctly identified a loss of pulse in 67 percent of the more than 1,000 sessions of tourniquet-induced pulselessness conducted in the lab by 156 participants (21 of whom were professional stunt persons). This means that the algorithm did not catch around one third of cases where there was a loss of pulse.
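In the standard terms of diagnostic testing, those numbers translate roughly as follows. The counts below are an illustrative aggregation of the figures reported in the article, not the study’s exact statistics:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of true events caught.
    Specificity: share of non-events correctly left alone."""
    return tp / (tp + fn), tn / (tn + fp)

# ~1,000 tourniquet sessions with 67 percent detected, and one errant
# 911 call across 355 free-living wearers (illustrative counts).
sens, spec = sensitivity_specificity(tp=670, fn=330, tn=354, fp=1)
print(round(sens, 2), round(spec, 3))  # → 0.67 0.997
```

Favoring specificity means nearly every alert that fires is genuine, at the cost of missing roughly a third of true events.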

The decision to maximize specificity over sensitivity is understandable to Mahsa Khalili, a postdoctoral researcher at the University of British Columbia, who studies out-of-hospital cardiac arrests and was not involved in this work. Similar to other models, this one will likely improve as more data comes in from the U.S.-based users who opt in to the feature, she says.

While many academic labs are limited by the number of participants willing to enroll in cardiac monitoring studies, Google is uniquely resourced and situated to reach many more end users, Khalili adds. The Google research team made details about the participant numbers and system architecture available, but published only pseudocode of the model itself.

The opt-in feature is geared toward the general population. But like all medical devices, it is not for certain populations, such as people with severe cardiac disease, Sunshine says. He and the team expect to evaluate the real-world data as it comes in and disseminate the findings.

Energy-Efficient Neural Processor Anticipates User Intentions

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Brain-chip technology is quickly accelerating. In one of the latest advancements, researchers have designed a new chip that uses larger groups of neurons and less power to detect when a user wants to initiate a given behavior—for example, reaching for an object. The new approach, if it translates to humans, could provide users with more autonomy in initiating movement control.

Implanted systems known as intracortical brain-computer interfaces (iBCIs) are a game changer for many people with paralysis, providing them with a means to regain some movement control. iBCIs work by inserting electrode arrays into the brain to record neural activity. Because our neurons naturally communicate with one another using electric pulses, these brain chips are able to detect the electrical signals.

“The detected [signals] are then used by BCI applications to interpret neural activity and translate it into commands, such as controlling a computer cursor or a robotic limb,” explains Daniel Valencia, a researcher at San Diego State University.

Current iBCIs monitor individual neurons in the brain. However, doing so continuously is energy intensive, and it can be difficult to discern whether a signal is indeed coming from the neuron being monitored or from a neighboring neuron with a similar electrical firing pattern. It takes a lot of power for brain chips to analyze the data, sift through all the “background noise,” and pinpoint true neural firings. Because of this high power consumption, most current iBCIs are turned on manually only during predefined periods, such as clinical or laboratory sessions.

Valencia and his colleagues were interested in creating a different kind of system that passively monitors brain activity and automatically switches on when needed. Instead of monitoring individual neurons, their proposed chip monitors the general activity of a cohort of neurons, or their local field potentials (LFPs).

An Efficient Solution

This approach involves a simpler process: detecting the frequency at which a collection of neurons in a given region of the brain is firing. When certain thresholds of neural activity are hit, the brain chip is switched on. For example, when people are sleeping, neurons’ LFPs exhibit increased activity in the 30-to-90-hertz range, but when a person is preparing to move, there is an increase in activity in the 15-to-35-Hz range. The chip proposed by Valencia and his colleagues would therefore activate only when a user’s brain activity indicates an intention to move an object.
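The gating idea reduces to a band-power threshold. The NumPy sketch below shows the principle on a synthetic trace; the bands, threshold, and signal are illustrative, not the authors’ implementation:

```python
import numpy as np

def band_power(lfp, fs, lo, hi):
    """Mean FFT power of an LFP trace across the [lo, hi] Hz band."""
    power = np.abs(np.fft.rfft(lfp)) ** 2 / len(lfp)
    freqs = np.fft.rfftfreq(len(lfp), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def should_wake(lfp, fs, threshold=1.0):
    """Wake the decoder when 15-35 Hz (movement-preparation) band power
    crosses a calibrated threshold; numbers here are illustrative."""
    return band_power(lfp, fs, 15, 35) > threshold

fs = 1000
t = np.arange(0, 1, 1.0 / fs)
rng = np.random.default_rng(1)
quiet = 0.1 * rng.standard_normal(t.size)   # resting background activity
prep = quiet + np.sin(2 * np.pi * 25 * t)   # strong 25 Hz component
print(should_wake(quiet, fs), should_wake(prep, fs))  # → False True
```

Computing one band power per window is far cheaper than continuously spike-sorting individual neurons, which is the power savings the study quantifies.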

In a study published in the February print issue of IEEE Transactions on Biomedical Circuits and Systems, the researchers tested their new LFP approach using previously recorded datasets of neural activity from animals performing movement tasks. They used the data and modeling to determine how much energy is required in their LFP approach compared to conventional brain chips that monitor individual neurons.

The results show that the two approaches are comparable in terms of determining the intentions of a user—conventional brain chips slightly outperformed the LFP approach—but the LFP approach uses significantly less power, which Valencia notes is a key advantage. “Additionally, the recording circuitry needed for LFPs is much simpler compared to [conventional] methods, which reduces hardware complexity,” he says. For instance, brain chips based on LFPs may not require the use of deeply penetrating micro-electrodes, significantly reducing the chance of tissue scarring in the brain and potentially increasing the longevity of the device.

Importantly, this new proposed system would allow users to complete tasks autonomously and more easily, without having to manually activate their brain chip. Many scientists in the field of iBCI design are interested in developing these more advanced, “self-paced” iBCIs. “Our work is a step toward developing these systems, allowing users to control their engagement independently,” says Amir Alimohammad, a professor at San Diego State University who was involved in the study.

Alimohammad adds that his team is currently working on integrating their LFP approach that predicts a user’s intentions within a broader iBCI system that also uses data from single neurons firing. Whereas the LFP data could be used to activate the system, detailed data from individual neurons could be used to execute more precise motor control, he says.

This Portable MRI Operates on Battery Power

According to the World Health Organization, strokes are the second most common cause of death worldwide. Of the 15 million patients annually, one-third die and another third are left with permanent disabilities. A new design for a portable MRI scanner has the potential to make a major impact on those numbers.

Medical imaging is essential in diagnosing a stroke. Strokes have two major causes, and the difference is critical. Ischemic strokes are caused by a blockage of blood vessels in the brain, and account for about nine out of 10 cases. Hemorrhagic strokes are the result of bleeding in the brain. Choosing the wrong treatment can be damaging, or even fatal.

A better imaging device could result in faster diagnoses. Many medical facilities—even in developed nations—lack MRI scanners and other sophisticated imaging equipment. This can delay a diagnosis past the “golden hour”—the first hour of a stroke, when treatment can be most effective. Delay increases the chance of brain-tissue damage due to oxygen deprivation.

Even if an MRI machine is available, patients must be transported from a facility’s emergency department to the radiology department for imaging and then returned for treatment. Scientists at Wellumio in Wellington, New Zealand, came up with the idea that a mobile unit that could scan just the patient’s head in the emergency department would allow imaging to be done right there, while the patient is still on a gurney.

Designing a Portable MRI Scanner

The result is Axana: an MRI scanner mounted on a mobile pedestal. Axana’s toroid (doughnut) shape is just large enough for the patient’s head. The device is controlled by a simple touchscreen interface, which means that it requires far less training than traditional imaging systems.

While Axana uses the same sensors as standard MRI machines, the signals are provided in a very different manner. Traditional MRI relies on three pulsed gradient electromagnetic coils along x, y, and z axes. (This is what makes an MRI scan feel like being in a steel drum while goblins pound on the outside with sledgehammers.) The result is highly detailed three-dimensional imaging of soft tissues.

Axana does not use pulsed gradient coils. Instead, it uses magnetic fields at different frequencies to align the hydrogen atoms in soft tissues. The machine creates these fields in different regions of the patient’s head using different frequencies to achieve the spatial resolution.

As it stands now, the images are very low resolution, but they’re sufficient for gross anatomy analysis. The company intends to increase the number of coils operating at more frequencies in the next version to increase the resolution. The system compares the rate of signal decay in the tissue with the diffusion of blood in the tissues. This can provide enough information to detect the impaired blood diffusion that would indicate an ischemic stroke. The data are displayed in an image that is color coded to draw attention to areas of impaired diffusion.
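Conceptually, the color coding amounts to flagging regions where measured diffusion is low relative to the local signal-decay rate. The NumPy sketch below illustrates the idea with made-up units and thresholds; it is not Wellumio’s calibration:

```python
import numpy as np

def diffusion_map(decay_rate, diffusion, impaired_ratio=0.6):
    """Color-code tissue: red where diffusion is low relative to signal
    decay (possible ischemia), grayscale elsewhere. Thresholds are
    illustrative, not a clinical calibration."""
    ratio = diffusion / np.maximum(decay_rate, 1e-9)
    rgb = np.repeat(ratio.clip(0, 1)[..., None], 3, axis=-1)
    rgb[ratio < impaired_ratio] = [1.0, 0.0, 0.0]  # highlight impairment
    return rgb

decay = np.ones((4, 4))      # uniform signal-decay rate
diff = np.ones((4, 4))
diff[1:3, 1:3] = 0.3         # a pocket of impaired diffusion
img = diffusion_map(decay, diff)
print((img[1, 1] == [1.0, 0.0, 0.0]).all())  # → True
```

Because the comparison is a direct physical ratio rather than a learned model, the resulting image is simple to interpret, which is the point the next paragraph makes about avoiding AI.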

Axana does not require any AI to process the data, just straightforward physics and mathematics. This makes the interpretation of the data much simpler and more reliable.

The end result is a portable device that can be produced at a relatively low cost compared with a traditional MRI installation, which can cost between US $1 million and $3 million. The current prototype weighs in at about 100 kilograms, a bit large for ambulance applications, but it should fit well in most emergency department facilities. Its power consumption is low enough for the device to be powered by a standard wall outlet, and it contains a battery, so it can continue to operate when moved from one outlet to another.

“Further down the track, we definitely want the device to be capable of operation out in the field,” says Paul Teal, Wellumio’s chief data scientist. “For this application, perhaps three or four scans is all that would be required in a single deployment.”

At present, the prototype is undergoing preclinical testing with human patients at the Royal Melbourne Hospital, in Australia. The system is designed only to detect ischemic stroke at this point, though the company intends to expand its uses to include hemorrhagic diagnoses as well. Ultimately, it could be useful in detecting other forms of head trauma.

A major part of the company’s mission is to make this device available in rural and underserved communities through its small size, lower cost, and reduced need for training. “In order to really make a difference in the time to treatment for treatable ischemic stroke, the device must be operated by emergency room doctors (and eventually paramedics) without the oversight of a neurologist and/or radiologist,” says Teal. “There is a lot of regulatory approval that must be gained before the device can be used in this way, but it is actually quite feasible.”

An Exoskeleton with Self-Balancing Capabilities Moves Closer to Market Launch

Many people who have spinal cord injuries also have dramatic tales of disaster: a diving accident, a car crash, a construction site catastrophe. But Chloë Angus has quite a different story. She was home one evening in 2015 when her right foot started tingling and gradually lost sensation. She managed to drive herself to the hospital, but over the course of the next few days she lost all sensation and control of both legs. The doctors found a benign tumor inside her spinal cord that couldn’t be removed, and told her she’d never walk again. But Angus, a jet-setting fashion designer, isn’t the type to take such news lying—or sitting—down.

Ten years later, at the CES tech trade show in January, Angus was showing off her dancing moves in a powered exoskeleton from the Canadian company Human in Motion Robotics. “Getting back to walking is pretty cool after spinal cord injury, but getting back to dancing is a game changer,” she told a crowd on the expo floor.

The company will begin clinical trials of its XoMotion exoskeleton in late April, initially testing a version intended for rehab facilities as a stepping stone toward a personal-use exoskeleton that people like Angus can bring home. The XoMotion is only the second exoskeleton that’s self-balancing, meaning that users needn’t lean on crutches or walkers and can have their hands free for other tasks.

“The statement ‘You’ll never walk again’ is no longer true in this day and age, with the technology that we have,” says Angus.

The Origin of the XoMotion Exoskeleton

Angus, who works as Human in Motion’s director of lived experience, has been involved with the company and its technology since 2016. That’s when she met a couple of academics at Simon Fraser University, in Vancouver, who had a novel idea for an exoskeleton. Professor Siamak Arzanpour and his colleague Edward Park wanted to draw on cutting-edge robotics to build a self-balancing device.

At the time, several companies had exoskeletons available for use in rehab settings, but the technology had many limitations: Most notably, all those exoskeletons required crutches to stabilize the user’s upper body while walking. What’s more, users needed assistance to get in and out of the exoskeleton, and the devices typically couldn’t handle turns, steps, or slopes. Angus remembers trying out an exoskeleton from Ekso Bionics in 2016: “By the end of the week, I said, ‘This is fun, but we need to build a better exoskeleton.’”

Arzanpour, who’s the CEO of Human in Motion, says that his team was always drawn to the engineering challenge of making a self-balancing exoskeleton. “When we met with Chloë, we realized that what we envisioned is what the users needed,” he says. “She validated our vision.”

Arun Jayaraman, who conducts research on exoskeletons at the Shirley Ryan Ability Lab in Chicago, is working with Human in Motion on its clinical trials this spring. He says that self-balancing exoskeletons are better suited for at-home use than exoskeletons that require arm support: “Having to use assistive devices like walkers and crutches makes it difficult to transition across surfaces like level ground, ramps, curbs, or uneven surfaces.”

How Do Self-Balancing Exoskeletons Work?

Self-balancing exoskeletons use much of the same technology found in the many humanoid robots now entering the market. They have bundles of actuators at the ankle, knee, and hip joints, an array of sensors to detect both the exoskeleton’s shifting positions and the surrounding environment, and very fast processors to crunch all that sensor data and generate instructions for the device’s next moves.
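At its core, self-balancing is a feedback loop: sense the lean, command a correcting torque. The toy simulation below shows the simplest version, a PD “ankle strategy” acting on an inverted pendulum; the gains and model are illustrative and far simpler than what XoMotion actually runs:

```python
import math

def ankle_torque(lean_angle, lean_rate, kp=2000.0, kd=400.0):
    """PD 'ankle strategy': torque opposing the measured torso lean.
    Gains are illustrative, not XoMotion's."""
    return -(kp * lean_angle + kd * lean_rate)

# Inverted-pendulum stand-in for user plus exoskeleton, nudged forward.
m, length, g, dt = 80.0, 1.0, 9.81, 0.001   # kg, CoM height (m), step (s)
theta, omega = 0.05, 0.0                    # initial lean (rad), lean rate
for _ in range(2000):                       # simulate 2 seconds
    tau = ankle_torque(theta, omega)        # sense, then command torque
    alpha = (m * g * length * math.sin(theta) + tau) / (m * length ** 2)
    omega += alpha * dt                     # integrate the dynamics
    theta += omega * dt
print(abs(theta) < 0.01)  # → True: the lean is pulled back upright
```

Production controllers layer whole-body models, footstep planning, and terrain sensing on top of this basic loop, but the closed-loop correction of a sensed lean is the common core.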

While self-balancing exoskeletons are bulkier than those that require arm braces, Arzanpour says the independence they confer on their users makes the technology an obvious winner. He also notes that self-balancing models can be used by a wider range of people, including people with limited upper body strength and mobility.

When Angus wants to put on an XoMotion, she can summon it from across the room with an app and order it to sit down next to her wheelchair. She’s able to transfer herself and strap herself into the device without help, and then uses a simple joystick that’s wired to the exoskeleton to control its motion. She notes that the exoskeleton could work with a variety of different control mechanisms, but a wired connection is deemed the safest: “That way, there’s no Wi-Fi signal to drop,” she says. When she puts the device into the “dance mode” that the engineers created for her, she can drop the controller and rely on the exoskeleton’s sensors to pick up on the subtle shifts of her torso and translate them into leg movements.

What Are the Challenges for Home-Use Exoskeletons?

The XoMotion isn’t the first exoskeleton to offer hands-free use. That honor goes to the French company Wandercraft, which already has regulatory approval for its rehab model in Europe and the United States and is now beginning clinical trials for an at-home model. But Arzanpour says the XoMotion offers several technical advances over Wandercraft’s device, including a precise alignment of the robotic joints and the user’s biological joints to ensure that undue stress isn’t put on the body, as well as torque sensors in the actuators to gather more accurate data about the machine’s movements.

Getting approval for a home-use model is a challenge for any exoskeleton company, says Saikat Pal, an associate professor at the New Jersey Institute of Technology who’s involved in Wandercraft’s clinical trials. “For any device that’s going to be used at home, the parameters will be different from a clinic,” says Pal. “Every home looks different and has different clearances. The engineering problem is several times more complex when you move the device home.”

Angus says she has faith that Human in Motion’s engineers will solve the problems within a couple of years, enabling her to take an XoMotion home with her. And she can’t wait. “You know how it feels to fly 14 hours in coach? You want to stretch so bad. Now imagine living in that airplane seat for the rest of your life,” she says. “When I get into the exoskeleton, it only takes a few minutes for my back to lengthen out.” She imagines putting on the XoMotion in the morning, doing some stretches, and making her husband breakfast. With maybe just a few dance breaks.

Enhanced Brain Connectivity: Biohybrid BCI Integrates Additional Neurons

Brain-computer interfaces have enabled people with paralysis to move a computer cursor with their mind and reanimate their muscles with their thoughts. But the performance of the technology—how easily and accurately a BCI user’s thoughts move a cursor, for example—is limited by the number of channels communicating with the brain.

Science Corporation, one of the companies working towards commercial brain-computer interfaces (BCIs), is forgoing the traditional method of sticking small metal electrodes into the brain in favor of a biology-based approach to increase the number of communication channels safely. “What can I stick a million of, or what could I stick 10 million of, into the brain that won’t hurt it?” says Alan Mardinly, Science Corp co-founder.

The answer: Neurons.

Science Corp has designed a waffle-like device to house and place a new layer of neurons across the brain’s surface. The company’s researchers tested the device in mice; with the additional neurons, the animals learned to move left or right only when the device was “on.” The research lays the groundwork for a future interface that does not damage the brain as much as existing BCIs—or at all. The work was shared in a study posted to the bioRxiv preprint server in November.

A Neuron-filled Waffle

BCIs connect neurons within the brain to external computers. During clinical studies at universities across the U.S., roughly three dozen humans have controlled BCI technology using millimeter-scale metal electrodes stuck down into the brain through a coin-sized opening in the skull.

Other research teams have designed thinner, softer, or smaller devices than the traditional metal electrodes in order to electrically connect neurons to computers and avoid damaging the neurons and blood vessels while doing so. Neuralink, for instance, uses bendable polymer electrodes in its BCI.

Instead of sticking anything into the brain, Science Corp’s device sits on top of it. But the device isn’t just a board resting on the brain’s surface—it’s full of neurons. Neurons sit in the wells of a waffle-like device, which is then placed on the brain’s surface, neuron-side down. The neurons grow down into the brain, acting as a glue between the device and the brain’s tissue.

Science Corp’s biohybrid technology aims to integrate biology into the devices implanted into the body. Biohybrid technology is an old idea, Mardinly says. It’s gone in and out of popularity in BCI research, first showing up in the 1990s and resurfacing more recently. But the idea is a complex one because neurons are fragile, and BCI technology has generally moved towards sturdier electrodes.

Science Corp, based in Alameda, California, was founded in 2021 by Mardinly and one of Neuralink’s co-founders, Max Hodak. The biohybrid project began soon after the medical technology company’s founding in early 2022, and the work presented in the bioRxiv study took around four or five months to complete, Mardinly says.

Science Corp’s setup implants light-sensitive neurons into a mouse’s brain (left); three weeks after the device was implanted, roughly half of the light-sensitive neurons were still present (right). Jennifer Brown, Kara M. Zappitelli et al.

The process of building the biohybrid device begins on the benchtop where neurons—specifically a kind called primary cortical excitatory neurons, which fire off signals to neighboring neurons—are derived from embryonic stem cells taken from the same mouse line as the mouse into which Science Corp’s researchers implanted the device to test it. The neurons are modified to have optogenetic properties, meaning they will fire when hit with light of a certain frequency.

The device looks like a waffle with little dishes called microwells, each 10 micrometers in diameter, mounted on a clear backing. Each microwell holds one neuron, and each device, clocking in at around 5 millimeters square, houses an average of 90,000 neurons. Neurons from the biohybrid device grew into the very top part of the cortex and blood vessels grew into the new neurons.

But just having neurons grow into the brain will not mean that the biohybrid device can change the brain’s function.

So, researchers shined light onto the device through a glass window in the mouse’s skull. The light “turned on” the new neurons, and the researchers turned on the light when the mouse was learning whether to turn left or right to get a treat. Mice learned to move to the left of a cage when the light was shone on the device in order to get their reward; when the light was turned off, mice learned to move to the right to get their reward.

The new light-sensitive neurons helped five of the nine mice learn a new behavior, which to Mardinly suggests that the biohybrid device successfully “modulates output behavior.”

Images of the brain under the implant showed neuronal axons sticking down through the pia mater, the dense cell layer at the very surface of the brain, and into layer 1 of the cortex.

“We haven’t proven that they’re forming synapses, but it seems extremely likely,” Mardinly says.

A Big Jump for BCIs

By itself, this biohybrid device is not yet an interface, says Jack Judy, a professor of electrical engineering at the University of Florida who was not involved in the work. Judy previously led a neuroprosthesis program funded by the U.S. Defense Advanced Research Projects Agency.

“When I think of an interface—well, there’s information coming out of the device,” says Judy. Instead, it’s a way to prepare tissue for an optical interface by spreading optically active cells across the brain, he says.

Mardinly says the team at Science Corp has already begun building future biohybrid devices with inputs and outputs from the brain. The devices house neurons in trenches instead of wells. One side of the trench has LEDs to deliver light to small groups of the neurons, and the other side will have electrical contacts to record action potentials from the neurons, similar to how many BCI technologies already record from neurons in the brain.

Going from proof of concept to a prototype is a big jump, Mardinly says. It’s an “extremely complicated” design, he says. “Is any of this worth it? And, you know, that remains to be seen, right? That’s on us to move forward and demonstrate.”

The research team acknowledges that it is difficult to pinpoint exactly how integrated the neurons from the graft are into the brain. Optogenetic stimulation requires just a few hundred neurons to work, as seen in past studies, and the biohybrid device adds many more than that to the brain.

The study leaves many unknowns, most crucially why four of the nine mice with light-activated biohybrid devices did not learn the task.

The next major milestone is to develop a biohybrid device with human-engineered cells that records and stimulates, and then test the work in a larger animal.

Paragraf is Developing a “Clean Slate” Graphene Manufacturing Facility

Scientists and engineers have long touted graphene for use in electronic devices due to its excellent electrical conductivity, optical transparency, mechanical strength, and its ability to conduct heat and to remain stable under high temperatures. Graphene’s use in electronics at the commercial level, however, is still limited. That’s in part because it’s much harder to create and integrate, at large scales, the single-layer graphene required for most (but not all) electronics. It’s also due to the robust regulatory and certification requirements that graphene, as a new material, must clear for many high-technology applications before it can be used. That said, many advanced technology markets, including sensors, are starting to use the material more widely.

A number of companies around the world have developed graphene sensors, but many of these have revolved purely around biosensing, with several companies trying to develop advanced COVID tests during the pandemic. But Paragraf, a company based in Cambridgeshire in the United Kingdom, has set its sights higher than being a small-scale batch-to-batch producer of graphene sensors. Instead, the company is aiming to become the first graphene foundry that supplies end-users with “blank canvas” graphene field effect transistor (gFET) sensing components that can be tailored to individual needs by users across different industries.

Paragraf believes this approach will make it easier for the company to scale up its manufacturing capabilities. The approach removes regulatory constraints and allows the engineers to just focus on the core technology, rather than having to consider the multitude of scenarios where their sensors could be used and for which they would need to be specifically customized.

Building Blank Canvas gFETs

Paragraf’s sensor elements are a blank canvas, so to speak. The company is building the main sensing surface—the canvas—by growing graphene on a sapphire substrate and adding two contacts with a gate electrode on top. It’s then up to Paragraf’s customers to finalize the sensor based on what they need it to do: “We’re not selling a finished sensor,” says Mark Davis, Paragraf’s director of biosensors, who adds that many different kinds of receptors can be added to the sensor by the user.

Giving the user control over the sensing receptors will make it easier for the gFETs to meet regulatory and certification requirements in their respective industries. Paragraf is targeting plenty of applications and industries, including potassium ion sensing in healthcare diagnostics; detecting heavy metals in agricultural wastewater runoff; gas sensing in the healthcare, agricultural, and hydrogen energy industries; pH sensing in cell and gene therapy; food and beverage monitoring; and chemical processing applications.

There is also a lot of potential for the gFETs to possess multiplexing capabilities for healthcare diagnostics, where many different biomarkers or chemical components of interest can be measured on the same chip. “Many gFETs only contain 3 to 5 channels, but the size of Paragraf’s FETs means that we can fit up to 100 channels onto a chip,” says Davis, which allows the resulting chip to detect and differentiate more things in a given sample.

Users of Paragraf’s gFETs, according to the company, are starting to develop healthcare diagnostic platforms using these blank canvas sensors for single ion and pH sensing applications because of the gFET’s high sensitivity. “For most healthcare applications, we’re looking at single-use disposable sensors that will cost $1 per sensor,” says Davis. “In the future, so long as we can make over 1 million [sensors] per annum and keep the wafer size below 3 by 3 millimeters, we will be able to get the costs down to this level―expanding the potential and capabilities of graphene sensors, and graphene electronics in general, in real world scenarios.”

Paragraf intends for its graphene field effect transistor to be a blank canvas for users to build upon. Paragraf

Developing gFET Sensor Components at Scale

The gFET being developed by Paragraf is an electrolyte-gated FET. The FET works by placing an electrolyte droplet that the user wants to analyze on the surface of the sensor. The electrolyte’s electrical conductivity creates an electrical bias that changes how the electrons in the sensor’s graphene sheet behave. This also changes the detectable electrical resistance across the graphene sheet—and because graphene’s electrical properties are so good, very small concentrations of ions (including single ions) in the electrolyte sample can be detected.
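The sensing principle described above can be sketched as a toy numeric model. This is illustrative only, not Paragraf’s actual device physics: the V-shaped resistance curve, the Nernst-like 59-millivolt-per-decade response, and every constant below are assumptions made up for the example.

```python
import numpy as np

# Toy model of an electrolyte-gated graphene FET: ions in the droplet shift
# the effective gate voltage, which moves graphene's Fermi level and changes
# the sheet's resistance. All parameters are illustrative assumptions.

def sheet_resistance(gate_shift_mV, dirac_point_mV=0.0, r_max=6000.0, k=0.02):
    """V-shaped resistance curve around the Dirac point (illustrative only)."""
    return r_max / (1.0 + k * abs(gate_shift_mV - dirac_point_mV))

def nernstian_shift(concentration_M, reference_M=1e-6):
    """Assumed Nernst-like response: ~59 mV per decade of ion concentration."""
    return 59.0 * np.log10(concentration_M / reference_M)

# Higher ion concentration -> larger gate shift -> lower resistance (in this toy).
for conc in (1e-6, 1e-5, 1e-4, 1e-3):
    r = sheet_resistance(nernstian_shift(conc))
    print(f"{conc:.0e} M -> {r:.0f} ohms")
```

In a real device the readout would be calibrated against known standards; the point here is only that small concentration changes map to measurable resistance changes.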

Paragraf is making the sensors in batches, much like the way the semiconductor industry fabricates wafers full of chips, directly depositing the graphene onto the wafers via metal organic chemical vapor deposition and attaching metal contacts on top.

Many chemical vapor deposition techniques grow graphene on copper foil, but the graphene then needs to be transferred to the end device, which can cause structural defects and copper contamination in the graphene that would affect the sensing capabilities of the device. By directly growing the graphene on the wafers, Paragraf is avoiding this to improve the sensing performance of their devices. Davis says that Paragraf is “manufacturing the sensor elements semiconductor-style so that we can miniaturize the sensor elements and fit more sensor elements per chip. The vision for Paragraf is that we are a foundry, and the manufacturer of the sensor components for a final diagnostic solution.”

Davis says that Paragraf can currently fit up to 32 gFETs on a wafer 51 millimeters to a side. The company is in the process of setting up a large-scale manufacturing facility in Huntingdon in Cambridgeshire. Paragraf has also recently acquired another graphene sensor company, Cardea Bio, in San Diego.

Paragraf is also developing graphene Hall effect sensors with a wide dynamic range for both low- and high-field magnetic measurement applications: mapping high magnetic fields at CERN, measuring electromagnet fields, pinpointing current leaks in batteries, and detecting the ultra-small magnetic fields inside quantum computers. These, however, use a standard sensor element architecture and require no further input from the end user; they are ready to go. Ultimately, it’s Paragraf’s bet on blank-slate gFETs on which its hopes of creating the first graphene foundry lie.

AI Discovers Unseen Principles of Cellular Interior Design

A new deep-learning model can now predict how proteins sort themselves inside the cell. The model has uncovered a hidden layer of molecular code that shapes biological organization, adding new dimensions of complexity to our understanding of life and offering a powerful biotechnology tool for drug design and discovery.

Previous AI systems in biology, such as the Nobel Prize-winning AlphaFold, have focused on predicting protein structure. But this new system, dubbed ProtGPS, allows scientists to predict not just how a protein is built, but where it belongs inside the cell. It also empowers scientists to engineer proteins with defined distributions, directing them to cellular locations with surgical precision.

“Knowledge of where a protein goes is entirely complementary to how it folds,” says Henry Kilgore, a chemical biologist at the Whitehead Institute for Biomedical Research in Cambridge, Mass., who co-led the research. Together, these properties shape its function and interactions within the cell. These insights—and the machine learning tools that make them possible—“will come to have a substantial impact on drug development programs,” he says.

Kilgore and his colleagues described the new tool in a paper published 6 February in the journal Science.

Putting Proteins on the Cellular Map

Over the past few years, AI tools like AlphaFold have revolutionized structural biology by predicting protein shapes—much like the instruction manual that comes with a piece of IKEA furniture, showing how to assemble the chair or bed. But it turns out knowing a protein’s structure isn’t enough to understand its function. ProtGPS fills in this missing piece by determining where each molecular piece of “furniture” belongs within the cell’s open-plan interior.

Some proteins have clear destinations. Researchers have known for decades that proteins headed for places like the nucleus or mitochondria (structures enclosed by membranes and walled off from the rest of the cell) carry short signaling tags that guide them.

But much of the cell is an open environment, where proteins rely on more subtle cues to sort themselves into what are called biomolecular condensates—dynamic, liquid-like clusters that help regulate gene activity, manage cellular stress, and contribute to disease. And just as a cozy armchair might naturally fit into a reading nook, proteins follow intrinsic molecular placement rules that guide them to specialized condensates suited to particular functions.

ProtGPS has now begun to decode these rules, uncovering hidden features in the sequence of amino acids that form the backbone of all proteins—intrinsic sorting cues that determine whether and where a protein will localize within different condensates in the cell.

“Our model is learning these localization features,” says co-author Itamar Chinn, a machine-learning scientist at MIT. “And we can use those features to make new proteins that have the localization we want.”

ProtGPS uses a machine-learning framework to predict protein localization within condensate compartments: a learned representation from the ESM2 protein language model, combined with protein sequences annotated by distribution, is fine-tuned to predict compartment probabilities (P-body, stress granule, and more). Henry R. Kilgore et al./Science

Teaching AI the Language of Proteins

ProtGPS is what’s known as a protein language model. It works much like LLMs such as OpenAI’s ChatGPT or Anthropic’s Claude, predicting sequences based on learned patterns. But instead of processing text or speech, ProtGPS analyzes proteins, which are represented as strings of letters, each corresponding to one of 20 amino acid building blocks—L for leucine, S for serine, and so on.

Kilgore, Chinn, and their colleagues built the model using a deep-learning framework called ESM, originally developed by Meta for predicting protein structures, functions, and properties.

Short for Evolutionary Scale Modeling, ESM—like AlphaFold—also extracts meaningful patterns from protein sequences. But instead of using physics to predict precise atomic-level structures, as AlphaFold does, Meta’s model relies on sequence-based learning without complex 3D calculations, making it substantially faster and more scalable for analyzing large datasets. (An upgraded version of ESM with improved capabilities was unveiled last month.)

Kilgore and Chinn’s team used ESM’s architecture to decode cryptic signals embedded in the amino acid sequences. The researchers adapted and refined the tool to both predict where proteins assemble and to enable the design of new kinds of proteins—ones that do not exist in nature, but can be engineered with precise condensate-targeting properties.

Thus, ProtGPS was born. The researchers trained the model on nearly 5,000 human proteins known to localize to one of 12 different condensate compartments. They then tested ProtGPS on an independent dataset, finding that it could accurately place proteins in the correct part of the cell.
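A minimal sketch of the classification idea, with heavy caveats: ProtGPS fine-tunes a transformer embedding (ESM), whereas this stand-in represents proteins by raw amino-acid composition and classifies with a nearest-centroid rule. The training sequences and compartment labels below are invented purely for illustration.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def composition(seq):
    """Represent a protein as its amino-acid frequency vector (a crude
    stand-in for the learned ESM embedding that ProtGPS actually uses)."""
    v = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return v / max(len(seq), 1)

# Hypothetical training data: sequences labeled with a condensate compartment.
train = [
    ("SRSRSRSGGYGGRGG", "stress granule"),  # glycine/serine-rich, made up
    ("RGGRGGFGGRGGYGG", "stress granule"),
    ("LLKKLLEELLKKLLE", "nucleolus"),       # made-up composition
    ("KKLEELLKKLLEELL", "nucleolus"),
]

# Nearest-centroid "model": average the feature vectors per compartment.
centroids = {}
for label in {lbl for _, lbl in train}:
    vecs = [composition(s) for s, lbl in train if lbl == label]
    centroids[label] = np.mean(vecs, axis=0)

def predict(seq):
    v = composition(seq)
    return min(centroids, key=lambda lbl: np.linalg.norm(v - centroids[lbl]))

print(predict("GGRGGSRSGGYGG"))  # lands nearest the glycine/serine-rich centroid
```

The real model learns far subtler sequence features than composition, which is precisely why its sorting rules resist simple explanation.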

An Elusive Code of Compartmentalization

Certain physical and chemical traits, like the charge and water-repelling nature of a protein, seemed to play a role in where things end up in the cell. But, as is often the case with machine-learning models, the exact reasoning behind ProtGPS’s predictions—and, by extension, the biology behind the selective distribution—remain largely a mystery.

That’s not to say the researchers didn’t try to tease it apart. They combed through the model’s predictions, searching for clear sequence patterns or biochemical properties that might explain its sorting rules. “Nothing obvious really falls out,” says co-author Peter Mikhael, a computational biologist at MIT.

That black box opacity is a familiar challenge in AI. Language models, by their very nature, excel at bringing together contributions from many different features and contextual signals, allowing them to detect patterns that aren’t immediately obvious to humans. “So, it’s not all that surprising” that ProtGPS can extract localization cues that even experienced biologists struggle to define, says Ilan Mitnikov, a machine-learning scientist formerly at MIT who helped to develop the model.

“If the rules were simple, people would have already figured them out,” Mitnikov says.

Engineering Proteins, Predicting Diseases

Even without a full understanding of what governs a protein’s cellular destination, the researchers showed that ProtGPS could be used to create proteins with carefully tuned localization properties. The tool also proved capable of predicting how mutations linked to disease might disrupt protein compartmentalization, shedding light on the molecular mechanisms underlying conditions such as cancer and developmental disorders.

Dewpoint Therapeutics—a biotech company co-founded by one of the study’s authors, Whitehead biologist Richard Young—now plans to integrate ProtGPS into its drug discovery efforts, according to chief scientific officer Isaac Klein, who called the tool a “game-changer” for identifying drug targets and designing new therapies. (Young, Kilgore, and MIT computer scientist Regina Barzilay, who also helped lead the study, all hold consulting or advisory roles with Dewpoint.)

Other scientists also see potential for the tool, including Tuomas Knowles, a biophysicist at the University of Cambridge who serves as chief technology officer of Transition Bio, another company focused on drug discovery against condensate targets. “What is particularly exciting is that this paper provides further evidence that there are very specific sequence features that govern localization and partitioning of proteins into condensates in living cells,” says Knowles, who was not involved in the research. “Furthermore, this provides new opportunities to influence and control protein localization—and potentially correct mis-localization, which is at the origin of many diseases,” he adds.

But beyond its applied utility, ProtGPS highlights an emerging paradigm in biology, in which the physical arrangement of the molecules within a cell is as critical to its function as the molecules’ structure, with codes embedded in the amino acid sequence that affect folding and cellular compartmentalization alike.

Just as a well-designed home is more than a collection of furniture—it relies on intuitive placement to maximize utility—cells, too, require precise molecular organization to function optimally. By uncovering hidden patterns in protein sequences, ProtGPS may serve as the architect of this cellular flow, decoding nature’s blueprint for the cell’s interior design.

Introducing the Latest IEEE Standard for Enhancing Security in Biomedical Devices and Data

If you have an implanted medical device, have been hooked up to a machine in a hospital, or have accessed your electronic medical records, you might assume the infrastructure and data are secure and protected against hackers. That isn’t necessarily the case, though. Connected medical devices and systems are vulnerable to cyberattacks, which could reveal sensitive data, delay critical care, and physically harm patients.

The U.S. Food and Drug Administration, which oversees the safety and effectiveness of medical equipment sold in the country, has recalled medical devices in the past few years due to cybersecurity concerns. They include pacemakers, DNA sequencing instruments, and insulin pumps.

In addition, hundreds of medical facilities have experienced ransomware attacks, in which attackers encrypt a hospital’s computer systems and data and then demand a hefty ransom to restore access. Tedros Adhanom Ghebreyesus, the World Health Organization’s director-general, warned the U.N. Security Council in November about the “devastating effects of ransomware and cyberattacks on health infrastructure.”

To help better secure medical devices, equipment, and systems against cyberattacks, IEEE has partnered with Underwriters Laboratories, which tests and certifies products, to develop IEEE/UL 2933, Standard for Clinical Internet of Things (IoT) Data and Device Interoperability with TIPPSS (Trust, Identity, Privacy, Protection, Safety, and Security).

“Because most connected systems use common off-the-shelf components, everything is now hackable, including medical devices and their networks,” says Florence Hudson, chair of the IEEE 2933 Working Group. “That’s the problem this standard is solving.”

Hudson, an IEEE senior member, is executive director of the Northeast Big Data Innovation Hub at Columbia University. She is also founder and CEO of the cybersecurity consulting firm FDHint, also in New York.

A framework for strengthening security

Released in September, IEEE 2933 covers ways to secure electronic health records, electronic medical records, and in-hospital and wearable devices that communicate with each other and with other health care systems. TIPPSS is a framework that addresses the different security aspects of the devices and systems.

“If you hack an implanted medical device, you can immediately kill a human. Some implanted devices, for example, can be hacked within 15 meters of the user,” Hudson says. “From discussions with various health care providers over the years, this standard is long overdue.”

More than 300 people from 32 countries helped develop the IEEE 2933 standard. The working group included representatives from health care–related organizations including Draeger Medical Systems, Indiana University Health, Medtronic, and Thermo Fisher Scientific. The FDA and other regulatory agencies participated as well. In addition, there were representatives from research institutes including Columbia, European University Cyprus, the Jožef Stefan Institute, and Kingston University London.

“Because most connected systems use common off-the-shelf components, everything is now hackable, including medical devices and their networks.”

The working group received an IEEE Standards Association Emerging Technology Award last year for its efforts.

IEEE 2933 was sponsored by the IEEE Engineering in Medicine and Biology Society because, Hudson says, “it’s the engineers who have to worry about ways to protect the equipment.”

She says the standard is intended for the entire health care industry, including medical device manufacturers; hardware, software, and firmware developers; patients; care providers; and regulatory agencies.

Six security measures to reduce cyberthreats

Hudson says that security in the design of hardware, firmware, and software needs to be the first step in the development process. That’s where TIPPSS comes in.

“It provides a framework that includes technical recommendations and best practices for connected health care data, devices, and humans,” she says.

TIPPSS focuses on the following six areas to secure the devices and systems covered in the standard.

  • Trust. Establish reliable and trustworthy connections among devices. Allow only designated devices, people, and services to have access.
  • Identity. Ensure that devices and users are correctly identified and authenticated. Validate the identity of people, services, and things.
  • Privacy. Protect sensitive patient data from unauthorized access.
  • Protection. Implement measures to safeguard devices from cyberthreats and protect them and their users from physical, digital, financial, and reputational harm.
  • Safety. Ensure that devices operate safely and do not pose risks to patients.
  • Security. Maintain the overall security of the device, data, and patients.

TIPPSS includes technical recommendations such as multifactor authentication; encryption at the hardware, software, and firmware levels; and encryption of data when at rest or in motion, Hudson says.
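The authentication piece of those recommendations can be sketched in a few lines. This is an illustrative toy, not the IEEE 2933 protocol: a shared-key HMAC lets a hospital system check that a reading really came from the paired device and wasn’t altered in transit (the “Identity” and “Trust” pieces of TIPPSS). A real deployment would add encryption, for example AES-GCM via a vetted cryptography library, to cover the “Privacy” piece.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared key, provisioned to both the pump and the hospital system.
device_key = secrets.token_bytes(32)

def sign_reading(reading: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a sensor reading."""
    return hmac.new(device_key, reading, hashlib.sha256).digest()

def verify_reading(reading: bytes, tag: bytes) -> bool:
    """Constant-time check that the reading is authentic and unmodified."""
    return hmac.compare_digest(sign_reading(reading), tag)

msg = b"glucose=102mg/dL;t=2025-01-01T08:00Z"
tag = sign_reading(msg)
print(verify_reading(msg, tag))              # authentic, untampered reading
print(verify_reading(b"glucose=900", tag))   # altered in transit -> rejected
```

`hmac.compare_digest` is used instead of `==` so that verification time doesn’t leak information about how many bytes of the tag matched.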

In an insulin pump, for example, data is at rest while the pump gathers information about a patient’s glucose level. Data in motion travels to the actuator, which controls how much insulin to deliver and when; it then continues to the physician’s system and, ultimately, is entered into the patient’s electronic records.

“The framework includes all these different pieces and processes to keep the data, devices, and humans safer,” Hudson says.

Four use cases

Included in the standard are four scenarios that outline the steps users of the standard would take to ensure that the medical equipment they interact with is trustworthy in multiple environments. The use cases cover a continuous glucose monitor (CGM), an automated insulin delivery (AID) system, and hospital-at-home and home-to-hospital scenarios. They include devices that travel with the patient, such as CGM and AID systems; devices a patient uses at home; and pacemakers, oxygen sensors, cardiac monitors, and other tools that must connect to an in-hospital environment.

The standard is available for purchase from IEEE and UL (UL2933:2024).

On-demand videos on TIPPSS cybersecurity

IEEE has held a series of TIPPSS framework workshops, now available on demand. They include IEEE Cybersecurity TIPPSS for Industry and Securing IoTs for Remote Subject Monitoring in Clinical Trials. There are also on-demand videos about protecting health care systems, including the Global Connected Healthcare Cybersecurity Workshop Series, Data and Device Identity, Validation, and Interoperability in Connected Healthcare, and Privacy, Ethics, and Trust in Connected Healthcare.

IEEE SA offers a conformity assessment tool, the IEEE Medical Device Cybersecurity Certification Program. The straightforward evaluation process has a clear definition of scope and test requirements specific to medical devices for assessment against the IEEE 2621 test plan, which helps manage cybersecurity vulnerabilities in medical devices.

Discovering New Aspects of Magnetism Through Google’s Quantum Simulator

When Nobel laureate Richard Feynman first suggested the idea of quantum computers, he proposed they might perform the kind of complex quantum simulations that may yield insights into next-generation batteries or novel drugs. Now a new quantum simulator from Google has discovered that magnetism does not always work the way scientists think, suggesting that it has promise for unearthing more discoveries in the future.

The new research combines two kinds of quantum computing—analog and digital. In analog quantum computing, qubits can serve as analogues of other objects that display quantum behavior, such as molecules, atoms, and subatomic particles. Analog quantum computing is often used to simulate molecular interactions that are too complex for any classical computer to model within our lifetimes.

In contrast, digital quantum computers run sequences of elementary operations, called quantum logic gates, on a set of qubits. With enough qubits, a quantum computer could theoretically vastly outperform all classical computers on a number of applications. For instance, on quantum computers, Shor’s algorithm can crack modern cryptography, and Grover’s algorithm can search databases at staggering speeds.

Digital quantum computers can perform quantum simulations, but analog quantum computers are faster at this task. For instance, when simulating how three atoms might interact, a digital quantum computer would have to model the interactions between each combination of atoms one step at a time, whereas an analog quantum computer could model them all simultaneously. Speed is especially important, given the current error-prone nature of quantum hardware—the faster the operation, the more likely it will be successfully completed.

Still, digital quantum computers are more flexible at quantum simulation than analog quantum computers are. Analog quantum computers are designed to mimic whatever they are simulating as closely as possible, whereas digital quantum computers are more tunable in what they can simulate.

Google’s Analog-Digital Hybrid Quantum Simulation

Now Google “is launching a new analog-digital hybrid approach for quantum simulation to try and get the best of both worlds,” says Trond Andersen, a senior research scientist at Google Quantum AI in Mountain View, Calif. The researchers detailed their findings online 5 February in the journal Nature.

The new system possesses 69 superconducting qubits. It begins its simulations by applying gates to qubits to prepare the initial states of the model, then lets the model quickly evolve in an analog manner. Finally, it returns to digital operation so that researchers can measure the results in detail. “We get a combination of flexibility and speed,” Andersen says.
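The three-stage pattern (digital preparation, analog evolution, digital readout) can be illustrated on a toy two-qubit system. This is not Google’s 69-qubit hardware or its Hamiltonian; the Ising couplings and evolution time below are arbitrary values chosen for the example.

```python
import numpy as np

# Single-qubit building blocks.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H_gate = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Digital stage: gates take |00> to a uniform superposition.
state = kron(H_gate, H_gate) @ np.array([1.0, 0.0, 0.0, 0.0])

# Analog stage: continuous evolution under a transverse-field Ising
# Hamiltonian H = -J Z1 Z2 - g (X1 + X2), with made-up J, g, and time t.
H = -1.0 * kron(Z, Z) - 0.5 * (kron(X, I) + kron(I, X))
evals, evecs = np.linalg.eigh(H)  # exp(-iHt) via diagonalization
U = evecs @ np.diag(np.exp(-1j * evals * 2.0)) @ evecs.conj().T
state = U @ state

# Digital readout: measurement probabilities in the computational basis.
probs = np.abs(state) ** 2
print(np.round(probs, 3))
```

On real hardware the analog stage is a physical evolution rather than a matrix exponential, which is exactly why it is so much faster than decomposing the dynamics into a long sequence of gates.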

Previous research had explored analog-digital hybrid quantum simulation, but it often suffered from large errors during the analog evolution stage. The new system employed a high-fidelity calibration scheme that significantly reduced this problem, achieving a 0.1 percent error rate per qubit. “This was one of the breakthroughs that made this work possible,” Andersen says.

In benchmarking experiments, the scientists estimated that simulations with the level of fidelity seen with the new system would require more than 1 million years on the Frontier supercomputer at Oak Ridge National Laboratory, in Tennessee. “We’re excited about our new direction for discoveries and applications that we could not achieve on a classical computer,” Andersen says.

Moreover, the new simulator made an unexpected discovery. It found out that the widely used Kibble-Zurek mechanism—which can, for instance, predict the behavior of magnets during phase transitions—does not always hold.
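For context on what “holds” means here: in its standard textbook form (not specific to this paper), the Kibble-Zurek mechanism predicts that the density of defects left behind by a quench of duration τ_Q across a continuous phase transition scales as a power law,

```latex
n \;\propto\; \tau_Q^{-\,d\nu/(1 + z\nu)},
```

where d is the spatial dimension and ν and z are the critical exponents of the transition. It is deviations from this kind of scaling prediction that the simulator uncovered.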

“This was a big surprise—this is a mechanism very widely studied in quantum labs all over the world,” Andersen says. Understanding the dynamics associated with the Kibble-Zurek mechanism “is important for various types of quantum simulation,” he says.

Andersen notes that this discovery could have been made with a classical computer. “We’re now starting to use our approach for applications that would be impossible with a classical computer,” he says. This research was conducted with Google’s Sycamore quantum processors, and Andersen says the company “now has a new, advanced chip, Willow, that we are excited to try our approach on.”

Graphene-Based Tattoos Function as Genuine Biosensors

Imagine it’s the year 2040, and a 12-year-old kid with diabetes pops a piece of chewing gum into his mouth. A temporary tattoo on his forearm registers the uptick in sugar in his bloodstream and sends that information to his phone. Data from this health-monitoring tattoo is also uploaded to the cloud so his mom can keep tabs on him. She has her own temporary tattoos—one for measuring the lactic acid in her sweat as she exercises and another for continuously tracking her blood pressure and heart rate.

Right now, such tattoos don’t exist, but the key technology is being worked on in labs around the world, including my lab at the University of Massachusetts Amherst. The upside is considerable: Electronic tattoos could help people track complex medical conditions, including cardiovascular, metabolic, immune system, and neurodegenerative diseases. Almost half of U.S. adults may be in the early stages of one or more of these disorders right now, although they don’t yet know it.

Technologies that allow early-stage screening and health tracking long before serious problems show up will lead to better outcomes. We’ll be able to look at factors involved in disease, such as diet, physical activity, environmental exposure, and psychological circumstances. And we’ll be able to conduct long-term studies that track the vital signs of apparently healthy individuals as well as the parameters of their environments. That data could be transformative, leading to better treatments and preventative care. But monitoring individuals over not just weeks or months but years can be achieved only with an engineering breakthrough: affordable sensors that ordinary people will use routinely as they go about their lives.

Building this technology is what’s motivating the work at my 2D bioelectronics lab, where we study atomically thin materials such as graphene. I believe these materials’ properties make them uniquely suited for advanced and unobtrusive biological monitors. My team is developing graphene electronic tattoos that anyone can place on their skin for chemical or physiological biosensing.

The Rise of Epidermal Electronics

The idea of a peel-and-stick sensor comes from the groundbreaking work of John Rogers and his team at Northwestern University. Their “epidermal electronics” embed state-of-the-art silicon chips, sensors, light-emitting diodes, antennas, and transducers into thin epidermal patches, which are designed to monitor a variety of health factors. One of Rogers’s best-known inventions is a set of wireless stick-on sensors for newborns in the intensive care unit that make it easier for nurses to care for the fragile babies—and for parents to cuddle them. Rogers’s wearables are typically less than a millimeter thick, which is thin enough for many medical applications. But to make a patch that people would be willing to wear all the time for years, we’ll need something much less obtrusive.

In search of thinner wearable sensors, Deji Akinwande and Nanshu Lu, professors at the University of Texas at Austin, created graphene electronic tattoos (GETs) in 2017. Their first GETs, about 500 nanometers thick, were applied just like the playful temporary tattoos that kids wear: The user simply wets a piece of paper to transfer the graphene, supported by a polymer, onto the skin.

Graphene is a wondrous material composed of a single layer of carbon atoms. It’s exceptionally conductive, transparent, lightweight, strong, and flexible. When used within an electronic tattoo, it’s imperceptible: The user can’t even feel its presence on the skin. Tattoos using 1-atom-thick graphene (combined with layers of other materials) are roughly one-hundredth the thickness of a human hair. They’re soft and pliable, and conform perfectly to the human anatomy, following every groove and ridge.

A close-up photo shows an area of skin with a nearly invisible clear shape adhering to the skin.
The ultrathin graphene tattoos are soft and pliable, conforming to the skin’s grooves and ridges. Dmitry Kireev/The University of Texas at Austin

Some people mistakenly think that graphene isn’t biocompatible and can’t be used in bioelectronic applications. More than a decade ago, during the early stages of graphene development, some preliminary reports found that graphene flakes are toxic to live cells, mainly because of their size and the chemical doping used in the fabrication of certain types of graphene. Since then, however, the research community has realized that there are at least a dozen functionally different forms of graphene, many of which are not toxic, including oxidized sheets, graphene grown via chemical vapor deposition, and laser-induced graphene. For example, a 2024 paper in Nature Nanotechnology reported no toxicity or adverse effects when graphene oxide nanosheets were inhaled.

We know that the 1-atom-thick sheets of graphene being used to make e-tattoos are completely biocompatible. This type of graphene has already been used for neural implants without any sign of toxicity, and can even encourage the proliferation of nerve cells. We’ve tested graphene-based tattoos on dozens of subjects, who have experienced no side effects, not even minor skin irritation.

When Akinwande and Lu created the first GETs in 2017, I had just finished my Ph.D. in bioelectronics at the German research institute Forschungszentrum Jülich. I joined Akinwande’s lab, and more recently have continued the work at my own lab in Amherst. My collaborators and I have made substantial progress in improving the GETs’ performance; in 2022 we published a report on version 2.0, and we’ve continued to push the technology forward.

Electronic Tattoos for Heart Disease

According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide, with causal factors including diet, lifestyle, and environmental pollution. The long-term tracking of people’s cardiac activity—specifically their heart rate and blood pressure—would be a straightforward way to keep tabs on people who show signs of trouble. Our e-tattoos would be ideal for this purpose.

Measuring heart rate is the easier task, as the cardiac tissue produces obvious electrical signals when the muscles depolarize and repolarize to produce each heartbeat. To detect such electrocardiogram signals, we place a pair of GETs on a person’s skin, either on the chest near the heart or on the two arms. A third tattoo is placed elsewhere and used as a reference point. In what’s known as a differential amplification process, an amplifier takes in signals from all three electrodes but ignores signals that appear in both the reference and the measuring electrodes, and only amplifies the signal that represents the difference between the two measuring electrodes. This way, we isolate the relevant cardiac electrical activity from the surrounding electrophysiological noise of the human body. We’ve been using off-the-shelf amplifiers from companies like OpenBCI that are packaged into wireless devices.

Continuously measuring blood pressure via tattoo is much more difficult. We started that work with Akinwande of UT Austin in collaboration with Roozbeh Jafari of Texas A&M University (now at MIT’s Lincoln Laboratory). Surprisingly, the blood pressure monitors that doctors use today aren’t significantly different from the ones that doctors were using 100 years ago. You almost certainly have encountered such a device yourself. The machine uses a cuff, usually placed around the upper arm, that inflates to apply pressure on an artery until it briefly stops the flow of blood; then the cuff slowly deflates. While deflating, the machine records the beats as the heart pushes blood through the artery and measures the highest (systolic) and lowest (diastolic) pressure. While the cuff works well in a doctor’s office, it can’t provide a continuous reading or take measurements when a person is on the move. In hospital settings, nurses wake up patients at night to take blood pressure readings, and at-home devices require users to be proactive about monitoring their levels.

A diagram shows an arm with electrodes on the wrist above the site of an underlying artery. Two simplified charts show an inverse relationship between blood pressure and bioimpedance.
Graphene electronic tattoos (GETs) can be used for continuous blood pressure monitoring. Two GETs placed on the skin act as injecting electrodes [red] and send a tiny current through the arm. Because blood conducts electricity better than tissue, the current moves through the underlying artery. Four GETs acting as sensing electrodes [blue] measure the bioimpedance—the body’s resistance to electric current—which changes according to the volume of blood moving through the artery with every heartbeat. We’ve trained a machine learning model to understand the correlation between bioimpedance readings and blood pressure.Chris Philpot

We developed a new system that uses only stick-on GETs to measure blood pressure continuously and unobtrusively. As we described in a 2022 paper, the GET doesn’t measure pressure directly. Instead, it measures electrical bioimpedance—the body’s resistance to an electric current. We use several GETs to inject a small-amplitude current (50 microamperes at present), which goes through the skin to the underlying artery; GETs on the other side of the artery then measure the impedance of the tissue. The rich ionic solution of the blood within the artery acts as a better conductor than the surrounding fat and muscle, so the artery is the lowest-resistance path for the injected current. As blood flows through the artery, its volume changes slightly with each heartbeat. These changes in blood volume alter the impedance levels, which we then correlate to blood pressure.

While there is a clear correlation between bioimpedance and blood pressure, it’s not a linear relationship—so this is where machine learning comes in. To train a model to understand the correlation, we ran a set of experiments while carefully monitoring our subjects’ bioimpedance with GETs and their blood pressure with a finger-cuff device. We recorded data as the subjects performed hand grip exercises, dipped their hands into ice-cold water, and did other tasks that altered their blood pressure.
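The per-subject calibration step can be sketched as a regression problem: fit a model that maps impedance readings to simultaneously recorded cuff pressures, then use it on new readings. The data below is synthetic and the plain polynomial fit is only a stand-in for the actual machine learning model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic recording session: bioimpedance (ohms, assumed range) alongside
# a hidden nonlinear "true" relationship to systolic pressure (mmHg).
impedance = rng.uniform(40.0, 60.0, 200)
true_bp = 220.0 - 2.0 * impedance + 0.01 * impedance**2
bp_cuff = true_bp + rng.normal(0.0, 1.0, 200)   # noisy finger-cuff labels

# Calibration: fit a simple nonlinear regressor on the paired data.
coeffs = np.polyfit(impedance, bp_cuff, deg=2)
predict_bp = np.poly1d(coeffs)

# After calibration, a new impedance reading yields a pressure estimate.
estimate = predict_bp(50.0)   # close to the hidden true value for this subject
```

This also makes concrete why the relationship being "uniquely their own" matters: the fitted coefficients are valid only for the subject (and electrode placement) they were trained on.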

Our graphene tattoos were indispensable for these model-training experiments. Bioimpedance can be recorded with any kind of electrode—a wristband with an array of aluminum electrodes could do the job. However, the correlation between the measured bioimpedance and blood pressure is so precise and delicate that moving the electrodes by just a few millimeters (like slightly shifting a wristband) would render the data useless. Our graphene tattoos kept the electrodes at exactly the same location during the entire recording.

Once we had the trained model, we used GETs to again record those same subjects’ bioimpedance data and then derive from that data their systolic, diastolic, and mean blood pressure. We tested our system by continuously measuring their blood pressure for more than 5 hours, a tenfold longer period than in previous studies. The measurements were very encouraging. The tattoos produced more accurate readings than blood-pressure-monitoring wristbands did, and their performance met the criteria for the highest accuracy ranking under the IEEE standard for wearable cuffless blood-pressure monitors.

While we’re pleased with our progress, there’s still more to do. Each person’s biometric patterns are unique—the relationship between a person’s bioimpedance and blood pressure is uniquely their own. So at present we must calibrate the system anew for each subject. We need to develop better mathematical analyses that would enable a machine learning model to describe the general relationship between these signals.

Tracking Other Cardiac Biomarkers

With the support of the American Heart Association, my lab is now working on another promising GET application: measuring arterial stiffness and plaque accumulation within arteries, both risk factors for cardiovascular disease. Today, doctors typically check for arterial stiffness and plaque using diagnostic tools such as ultrasound and MRI, which require patients to visit a medical facility and depend on expensive equipment and highly trained professionals to perform the procedures and interpret the results.

A photo shows a forearm and hand with the palm facing up. On both the left and right side of the forearm, a line of six small shapes adhere to the skin.
Graphene tattoos can be used to continuously measure a person’s bioimpedance, or the body’s resistance to an electric current, which is correlated to the person’s blood pressure.

Dmitry Kireev/The University of Texas at Austin and Kaan Sel/Texas A&M University

With GETs, doctors could easily and quickly take measurements at multiple locations on the body, getting both local and global perspectives. Since we can stick the tattoos anywhere, we can get measurements from major arteries that are otherwise difficult to reach with today’s tools, such as the carotid artery in the neck. The GETs also provide an extremely fast readout of electrical measurements. And we believe we can use machine learning to correlate bioimpedance measurements with both arterial stiffness and plaque—it’s just a matter of conducting the tailored set of experiments and gathering the necessary data.

Using GETs for these measurements would allow researchers to look deeper into how stiffening arteries and the buildup of plaque are related to the development of high blood pressure. Tracking this information for a long time in a large population would help clinicians understand the problems that eventually lead to major heart diseases—and perhaps help them find ways to prevent those diseases.

What Can You Learn from Sweat?

In a different area of work, my lab has just begun developing graphene tattoos for sweat biosensing. When people sweat, the liquid carries salts and other compounds onto the skin, and sensors can detect markers of good health or disease. We’re initially focusing on cortisol, a hormone associated with stress, stroke, and several disorders of the endocrine system. Down the line, we hope to use our tattoos to sense other compounds in sweat, such as glucose, lactate, estrogen, and inflammation markers.

Several labs have already introduced passive or active electronic patches for sweat biosensing. The passive systems use a chemical indicator that changes color when it reacts with specific components in sweat. The active electrochemical devices, which typically use three electrodes, can detect substances across a wide range of concentrations and yield accurate data, but they require bulky electronics, batteries, and signal processing units. And both types of patches use cumbersome microfluidic chambers for sweat collection.

In our GETs for sweat, we use the graphene as a transistor. We modify the graphene’s surface by adding certain molecules, such as antibodies, that are designed to bind to specific targets. When a target substance interacts with the antibody, it produces a measurable electrical signal that then changes the resistance of the graphene transistor. That resistance change is converted into a readout that indicates the presence and concentration of the target molecule.
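The readout chain from binding event to concentration can be captured in a toy calibration model. All the numbers are assumptions, and the binding here follows a generic Langmuir-type isotherm; the actual device physics of the functionalized graphene channel is more involved.

```python
R_BASE = 1000.0    # baseline channel resistance, ohms (assumed)
DELTA_MAX = 50.0   # resistance shift at full antibody occupancy (assumed)
K_D = 10.0         # dissociation constant, nanomolar (assumed)

def resistance(concentration_nM: float) -> float:
    """Channel resistance as target molecules occupy the antibody sites."""
    occupancy = concentration_nM / (concentration_nM + K_D)
    return R_BASE + DELTA_MAX * occupancy

def concentration(r: float) -> float:
    """Invert the calibration curve: measured resistance -> concentration."""
    occupancy = (r - R_BASE) / DELTA_MAX
    return K_D * occupancy / (1.0 - occupancy)

r = resistance(10.0)   # at c = K_D, half the antibody sites are occupied
# r is 1025.0 ohms, and inverting the curve recovers ~10 nM
```

In a real sensor the forward curve would be measured empirically against known concentrations; the inversion step is what turns a resistance change into the readout the wearer sees.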

We’ve already successfully developed standalone graphene biosensors that can detect food toxins, measure ferritin (a protein that stores iron), and distinguish between the COVID-19 and flu viruses. Those standalone sensors look like chips, and we place them on a tabletop and drip liquid onto them for the experiments. With support from the U.S. National Science Foundation, we’re now integrating this transistor-based sensing approach into GET wearable biosensors that can be stuck on the skin for direct contact with the sweat.

We’ve also improved our GETs by adding microholes to allow for water transport, so that sweat doesn’t accumulate under the GET and interfere with its function. Now we’re working to ensure that enough sweat is coming from the sweat ducts and into the tattoo, so that the target substances can react with the graphene.

The Way Forward for Graphene Tattoos

To turn our technology into user-friendly products, we must overcome a few engineering challenges. Most importantly, we need to figure out how to integrate these smart e-tattoos into an existing electronic network. At the moment, we have to connect our GETs to standard electronic circuits to deliver the current, record the signal, and transmit and process the information. That means the person wearing the tattoo must be wired to a tiny computing chip that then wirelessly transmits the data. Over the next five to ten years, we hope to integrate the e-tattoos with smartwatches. This integration will require a hybrid interconnect to join the flexible graphene tattoo to the smartwatch’s rigid electronics.

In the long term, I envision 2D graphene materials being used for fully integrated electronic circuits, power sources, and communication modules. Microelectronic giants such as Imec and Intel are already pursuing electronic circuits and nodes made from 2D materials instead of silicon.

Perhaps in 20 years, we’ll have 2D electronic circuits that can be integrated with soft human tissue. Imagine electronics embedded in the skin that continuously monitor health-related biomarkers and provide real-time feedback through subtle, user-friendly displays. This advancement would offer everyone a convenient and noninvasive way to stay informed and proactively manage their own health, beginning a new era of human self-knowledge.

This article appears in the March 2025 print issue as “A Graphene Biosensor Tattoo.”

A Revolutionary Approach: Innovative Method Opens Doors for Large-Scale Production of Therapeutic Nanoparticles

This sponsored article is brought to you by NYU Tandon School of Engineering.

In a significant advancement for the field of drug delivery, researchers have developed a new technique that addresses a persistent challenge: scalable manufacturing of nanoparticles and microparticles. This innovation, led by Nathalie M. Pinkerton, Assistant Professor of Chemical and Biomolecular Engineering at the NYU Tandon School of Engineering, promises to bridge the gap between lab-scale drug delivery research and large-scale pharmaceutical manufacturing.

The breakthrough, known as Sequential NanoPrecipitation (SNaP), builds on existing nano-precipitation techniques to offer improved control and scalability, essential factors in ensuring that drug delivery technologies reach patients efficiently and effectively. This technique enables scientists to manufacture drug-carrying particles that maintain their structural and chemical integrity from lab settings to mass production—an essential step toward bringing novel therapies to market.

Using 3D Printing to Overcome a Challenge in Drug Delivery

Nanoparticles and microparticles hold tremendous promise for targeted drug delivery, allowing precise transport of medicines directly to disease sites while minimizing side effects. However, producing these particles consistently at scale has been a major barrier in translating promising research into viable treatments. As Pinkerton explains, “One of the biggest barriers to translating many of these precise medicines is the manufacturing. With SNaP, we’re addressing that challenge head-on.”

Photo of a smiling woman.
Pinkerton is an Assistant Professor of Chemical and Biomolecular Engineering at NYU Tandon.NYU Tandon School of Engineering

Traditional methods like Flash Nano-Precipitation (FNP) have been successful in creating some types of nanoparticles, but they often struggle to produce larger particles, which are essential for certain delivery routes such as inhalable delivery. FNP creates polymeric core–shell nanoparticles (NPs) between 50 and 400 nanometers in size. The process involves mixing drug molecules and block-copolymers (special molecules that help form the particles) in a solvent, which is then rapidly blended with water using special mixers. These mixers create tiny, controlled environments where the particles can form quickly and evenly.

Despite its success, FNP has some limitations: it can’t create stable particles larger than 400 nm, the maximum drug content is about 70 percent, the output is low, and it can only work with very hydrophobic (water-repelling) molecules. These issues arise because the particle core formation and particle stabilization happen simultaneously in FNP. The new SNaP process overcomes these limitations by separating the core formation and stabilization steps.

In the SNaP process, there are two mixing steps. First, the core components are mixed with water to start forming the particle core. Then, a stabilizing agent is added to stop the core growth and stabilize the particles. This second step must happen quickly, less than a few milliseconds after the first step, to control the particle size and prevent aggregation. Current SNaP setups connect two specialized mixers in series, controlling the delay time between steps. However, these setups face challenges, including high costs and difficulties in achieving short delay times needed for small particle formation.
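Why the inter-mixer delay sets the particle size can be captured in a deliberately simple quench model: the core grows from the moment it nucleates in the first mixer until the stabilizer from the second mixer arrests it. The growth rate and seed size below are invented for illustration; real SNaP kinetics are far more complex.

```python
SEED_RADIUS_NM = 20.0           # assumed core radius leaving the first mixer
GROWTH_RATE_NM_PER_MS = 120.0   # assumed core growth rate before quench

def particle_diameter_nm(delay_ms: float) -> float:
    """Final diameter once the stabilizer arrests core growth."""
    return 2.0 * (SEED_RADIUS_NM + GROWTH_RATE_NM_PER_MS * delay_ms)

# Sub-millisecond delays keep particles in the small-nanoparticle regime;
# longer delays push them toward the microparticle range.
for delay in (0.2, 0.5, 2.0):
    print(f"{delay} ms -> {particle_diameter_nm(delay):.0f} nm")
```

Even in this toy model, shaving the delay from a couple of milliseconds down to a fraction of one moves the product from micron-scale to sub-200-nm particles, which is why eliminating the tubing between the two mixing stages matters so much.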

A new approach using 3D printing has solved many of these challenges. Advances in 3D printing technology now allow the creation of precise, narrow channels needed for these mixers. The new design eliminates the need for external tubing between steps, allowing for shorter delay times and preventing leaks. The innovative stacked mixer design combines two mixers into a single setup, making the process more efficient and user-friendly.

“One of the biggest barriers to translating many of these precise medicines is the manufacturing. With SNaP, we’re addressing that challenge head-on.”
—Nathalie M. Pinkerton, NYU Tandon

Using this new SNaP mixer design, researchers have successfully created a wide range of nanoparticles and microparticles loaded with rubrene (a fluorescent dye) and cinnarizine (a weakly hydrophobic drug used to treat nausea and vomiting). This is the first time small nanoparticles under 200 nm and microparticles have been made using SNaP. The new setup also demonstrated the critical importance of the delay time between the two mixing steps in particle size control. This control over the delay time enables researchers to access a larger range of particle sizes. Additionally, the successful encapsulation of both hydrophobic and weakly hydrophobic drugs in nanoparticles and microparticles with SNaP was achieved for the first time by Pinkerton’s team.

Democratizing Access to Cutting-Edge Techniques

The SNaP process is not only innovative but also offers a unique practicality that democratizes access to this technology. “We share the design of our mixers, and we demonstrate that they can be manufactured using 3D printing,” Pinkerton says. “This approach allows academic labs and even small-scale industry players to experiment with these techniques without investing in costly equipment.”

An illustration of a process.
A stacked mixer schematic, with an input stage for syringe connections (top), which connects immediately to the first mixing stage (middle). The first mixing stage is interchangeable, with either a 2-inlet or a 4-inlet mixer option depending on the desired particle size regime (dotted antisolvent streams only present in the 4-inlet mixer). This stage also contains pass-through for streams used in the second mixing step. All the streams mix in the second mixing stage (bottom) and exit the device.

The accessibility of SNaP technology could accelerate advances across the drug delivery field, empowering more researchers and companies to utilize nanoparticles and microparticles in developing new therapies.

The SNaP project exemplifies a successful cross-disciplinary effort. Pinkerton highlighted the team’s diversity, which included experts in mechanical and process engineering as well as chemical engineering. “It was truly an interdisciplinary project,” she noted, pointing out that contributions from all team members—from undergraduate students to postdoctoral researchers—were instrumental in bringing the technology to life.

Beyond this breakthrough, Pinkerton envisions SNaP as part of her broader mission to develop universal drug delivery systems, which could ultimately transform healthcare by allowing for versatile, scalable, and customizable drug delivery solutions.

From Industry to Academia: A Passion for Innovation

Before arriving at NYU Tandon, Pinkerton spent three years in Pfizer’s Oncology Research Unit, where she developed novel nano-medicines for the treatment of solid tumors. The experience, she says, was invaluable. “Working in industry gives you a real-world perspective on what is feasible,” she points out. “The goal is to conduct translational research, meaning that it ‘translates’ from the lab bench to the patient’s bedside.”

Pinkerton — who earned a B.S. in Chemical Engineering from the Massachusetts Institute of Technology (2008) and a doctoral degree in Chemical and Biological Engineering from Princeton University — was attracted to NYU Tandon, in part, because of the opportunity to collaborate with researchers across the NYU ecosystem, with whom she hopes to develop new nanomaterials that can be used for controlled drug delivery and other bio-applications.

She also came to academia because of a love of teaching. At Pfizer, she realized her desire to mentor students and pursue innovative, interdisciplinary research. “The students here want to be engineers; they want to make a change in the world,” she reflected.

Her team at the Pinkerton Research Group focuses on developing responsive soft materials for bio-applications ranging from controlled drug delivery to vaccines to medical imaging. Taking an interdisciplinary approach, they use tools from chemical and materials engineering, nanotechnology, chemistry and biology to create soft materials via scalable synthetic processes. They focus on understanding how process parameters control the final material properties, and in turn, how the material behaves in biological systems — the ultimate goal being a universal drug delivery platform that improves health outcomes across diseases and disorders.

Her SNaP technology represents a promising new direction in the quest to scale drug delivery solutions effectively. By controlling assembly processes with millisecond precision, this method opens the door to creating increasingly complex particle architectures, providing a scalable approach for future medical advances.

For the field of drug delivery, the future is bright as SNaP paves the way toward an era of more accessible, adaptable, and scalable solutions.

Neurostimulation Aids Walking Recovery Post-Spinal Cord Injury

A team of Swiss researchers has improved the walking ability of two people with long-standing spinal cord injuries (SCI) using deep brain stimulation (DBS), which excites neurons with surgically implanted electrodes in the brain.

Investigators targeted a surprising brain region: the lateral hypothalamus, which is associated with a variety of basic functions, though not especially with locomotion. A paper detailing the human pilot study and underlying animal research, which led the researchers to the lateral hypothalamus, was published last week in Nature Medicine. Many of the researchers involved in the study hail from the NeuroRestore Lab affiliated with the Swiss Federal Institute of Technology (EPFL), which has previously done extensive work on restoring walking with electrodes implanted in the spinal cord.

The new study is attracting attention. “This is really a tour de force,” says Christopher Butson, a biomedical engineer at the University of Florida, which hosts an annual Deep Brain Stimulation Think Tank. “It seems amazingly thorough.”

“It could have been ten papers,” said Nestor Tomycz, a neurosurgeon with the Allegheny Health Network and Drexel University, who routinely treats motor-related diseases, such as Parkinson’s, with DBS. He also called it a “tour de force,” with implications in fields such as neurosurgery, neurobiology, brain mapping, and rehabilitation.

An Unexpected Brain Target

The research didn’t begin with the lateral hypothalamus in mind. “Instead of looking at individual targets, the technique we have used considered all possible brain regions and statistically highlighted the regions that underwent anatomical and functional changes following SCI,” said Léonie Asboth, a study co-author and clinical research director at Lausanne University Hospital.

Following a spinal cord injury classified as incomplete, some communication between the brain and extremities is preserved, and some degree of natural recovery of walking ability is not uncommon in mice or humans. The researchers set out to learn which parts of the brain might be most active in that recovery.

The team looked at the brains of injured mice soon after injury and again after eight weeks, comparing them to the brains of uninjured mice to create a “brain atlas” of locomotion recovery. This mapping left the team with one prime candidate: the lateral hypothalamus. This brain region is typically associated with a variety of bodily functions and behaviors, including “feeding, motivation, reward processing, and arousal,” said Asboth.

Stimulating the lateral hypothalamus in both injured mice and rats improved walking recovery, leading to an attempt at translating the treatment to human patients. “Prior studies had already explored DBS in the hypothalamus for other conditions, such as cluster headaches and refractory obesity, providing sufficient safety data as a foundation for its use in this context,” said Asboth.


Deep brain stimulation enabled a man with an incomplete spinal cord injury to climb stairs. NPG Press/YouTube

A Pilot Study in Humans

The study used commercially available deep brain stimulation technology from Medtronic, taking advantage of decades of research behind equipment and surgical techniques. After receiving the implant, one patient reportedly said, “I feel the urge to move my legs.”

A pair of patients, both with incomplete spinal cord injuries, then used DBS throughout a three-month rehabilitation program with about nine hours of training per week. Walking ability improved immediately with DBS turned on, with positive results following treatment even with the electrodes turned off. Notably, with DBS, both participants were able to walk without braces and navigate stairs independently. No serious side effects were reported.

“It’s surprising they could achieve something that is so specific,” said Butson—that is, improved locomotion recovery, without any side effects related to other functions of the lateral hypothalamus or surrounding brain areas.

Both patients, years removed from their initial injuries, were beyond the conventional recovery period, and wouldn’t benefit from standard treatments. If DBS becomes available as a treatment for people like them, it could have significant advantages. “Even some improvement in motor function could significantly improve quality of life,” said Tomycz, noting a range of benefits associated with improved mobility, including independence, cardiovascular health, and preventing dementia. The World Health Organization estimates that there are over 15 million people living worldwide with some form of spinal cord injury.

The team plans to continue safety and efficacy studies with more human patients, said Asboth, and test how patients could benefit from hybrid therapies that use DBS in conjunction with other neuromodulation techniques, such as spinal stimulation. Future research could also use the framework the group established to identify new brain regions related to other disorders.

EMA Suggests Halting Use of Oxbryta for Treating Sickle Cell Disease

EMA’s human medicines committee (CHMP) has recommended suspending the marketing authorisation for the sickle cell disease medicine Oxbryta (voxelotor); this measure is taken as a…