Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who’s next? It’s time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants—ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS—depending on the Maturity Level (ML) of their National Regulatory Authorities (NRAs) for healthcare products. Find more details…

Innovative Brain Technology Empowers ALS Patients to Communicate

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that affects muscle control, and most patients reach a point at which either the disease itself or a necessary tracheotomy impairs their speech. Because eye movement is typically preserved longer, assistive technologies that track patients’ gaze have helped them communicate, but even control of eye movement can eventually be lost. This leaves patients in a form of “locked-in syndrome” with no means of communicating with their loved ones and caregivers, sometimes for more than a year at the end of their lives.

But maybe it doesn’t have to be that way. The brain-computer interface (BCI) company Cognixion today announced a clinical trial investigating the use of its Axon-R headset as a communicative device for patients in the late stages of ALS, also known as Lou Gehrig’s disease. The company hopes to provide a communication tool to a group of people with no proven alternatives.

“We’re trying to solve the hardest problem we can find,” says Chris Ullrich, chief technology officer at Cognixion. Using the headset for this application requires combining BCI tech with both artificial intelligence and augmented reality.

How Cognixion’s BCI tech works

Cognixion is starting a clinical trial investigating the use of its Axon-R BCI headset for late-stage ALS patients.
Cognixion

The non-invasive headset monitors brain activity with electrodes placed over the occipital lobe at the back of the skull. These electrodes use the standard brain-monitoring technique of electroencephalography (EEG) to detect a signal known as steady state visual evoked potentials (SSVEP), a natural brain reaction to an image flashing at regular intervals, perhaps 8 to 15 times per second.

The Axon-R device can detect a choice among multiple options (such as different letters, words, or phrases) presented at different frequencies within the user’s augmented reality view. The device can offer up groups of letters for the user to choose among and then offer the individual letters within that group; like a smartphone’s autocomplete function, it can also suggest likely words or phrases based on the user’s initial choices.
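
To make the mechanism concrete, here is a minimal sketch of how an SSVEP-based selector can work in principle: each option flickers at its own frequency, and the option whose frequency dominates the EEG power spectrum is taken as the user’s choice. The sample rate, frequencies, and letter groupings are illustrative assumptions, not Cognixion’s implementation.

```python
# Minimal SSVEP selection sketch (illustrative; not Cognixion's code).
import numpy as np
from scipy.signal import welch

FS = 256  # EEG sample rate in Hz (assumed)
OPTION_FREQS = {"A-F": 8.0, "G-L": 10.0, "M-R": 12.0, "S-Z": 15.0}  # flicker rates

def detect_selection(eeg: np.ndarray) -> str:
    """Return the option whose flicker frequency carries the most EEG power."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)  # 0.5 Hz frequency resolution
    scores = {opt: psd[(freqs >= f - 0.25) & (freqs <= f + 0.25)].mean()
              for opt, f in OPTION_FREQS.items()}
    return max(scores, key=scores.get)

# Synthetic check: a 10 Hz evoked response buried in noise
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)
print(detect_selection(eeg))  # expected: "G-L"
```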

The resulting message can be read aloud automatically or can be displayed on a front-facing screen next to the patient’s face. Critically for late-stage ALS patients, the brain response is triggered through attention alone, and doesn’t require the user to directly gaze at the option they want to select, says Ullrich.

Cognixion has also developed an assistive AI system, which it dubs a “conversational co-pilot,” to help patients produce speech more quickly. The AI will be tailored to each patient, trained on available examples of their own speech or writing, and will ideally be able to respond to a message with the suggestion of entire phrases or sentences after the user makes a few initial decisions. The company anticipates this will allow communication at “near conversational speed.”

The Cognixion headset’s display shows the user various ways to respond to a question.
Cognixion

What are the goals of the ALS trial?

“[Conversational speed] has been the holy grail of a lot of BCI research,” says Brendan Allison, a researcher affiliated with the University of California, San Diego and the BCI Society, who does not work with Cognixion. But generally, the fastest claimed performances (measured in words per minute) have required a controlled laboratory setting, and sometimes restrictions on vocabulary.

Ullrich says that while Cognixion plans to track words per minute, this research will prioritize the rate at which patients make VEP-based selections, as well as the subjective experience of the patients and caretakers in dialogue.

Allison notes that success in this field is highly relative. “If you have someone who is at zero words per minute, [communicating at] even one word per minute—that’s huge,” he says. While naturalistic communication would be a remarkable achievement, for late-stage ALS patients, any communication at all would be a boon. Patients using assistive communication are often involved in vital choices about their care and end-of-life decisions, and the reliability of these systems—at any speed—will be a key factor in ethical and legal matters.


The ALS patient and clinical trial participant Rabbi Yitzi Hurwitz tries out Cognixion’s communication tool.
Cognixion

Other applications for Cognixion’s BCI tech

ALS affects somewhere on the order of 30,000 people in the United States, with about 5,000 new diagnoses each year, according to the U.S. Centers for Disease Control and Prevention. The Cognixion study is currently recruiting participants with the help of the ALS Association.

The Axon-R, with tools and feedback for developers, is a research version of the Cognixion One headset, which in 2023 received a breakthrough device designation from the FDA, a program intended to help streamline approval processes for medical devices addressing unmet needs. Cognixion currently sells the Axon-R as a research platform starting at US $25,000. An eventual consumer model of the Cognixion One would not require all of the same features, but its pricing is yet to be determined.

Ullrich notes that the company’s technology, as a versatile platform for communication and control, could potentially also be useful to ALS patients at earlier stages of the disease, as well as people with other conditions that affect mobility or communication, such as cerebral palsy, multiple sclerosis, or spinal cord injury.

Another major approach to BCI-assisted speech uses implanted electrodes and records signals from parts of the brain associated with producing speech. More broadly, BCI technologies are being investigated for a variety of uses, such as control of a wheelchair or robotic prosthetics; gaming and entertainment; and general monitoring of brain health and activity.

Google’s Pixel Watch Detects When Your Heart Ceases to Beat

Later this month, Google is expected to roll out software on its Pixel Watch 3 in the United States that has the potential to correctly identify two-thirds of out-of-hospital cardiac arrests in people wearing the smartwatch. The feature uses AI to detect when the wearer no longer has a pulse, and it’s meant to combat the quiet killer of cardiac events that occur at home when people are alone and unable to call for help.

A team of Google researchers and scientists at the University of Washington recently published a study testing the software, with the aim of balancing the need for a low number of false positives—when 911 might be called but not needed—with the desire to identify a loss of pulse in as many cases as possible.

“You can make this more sensitive, but it just comes at a cost,” says Google research scientist Jake Sunshine, who led the study. An algorithm that “excessively” calls 911, Sunshine says, “can’t exist in the world like that.”

The study was released by the journal Nature as an accelerated article preview on 26 February, the day after the FDA announced premarket medical device approval of the loss-of-pulse feature on the Pixel for Fitbit, which was acquired by Google’s parent company Alphabet in 2021. The feature was approved in Europe last year and is expected to become available to people in the U.S. this month.

Training Pixel’s AI

Data from three cohorts of Pixel Watch wearers was used to train the model. The first cohort included 100 patients with an implanted cardiac defibrillator, which delivers small pulses of electricity to the heart when it detects irregular heartbeats. The patients wore a Pixel Watch when their heart temporarily stopped during a scheduled test of their defibrillator.

But that data was hard to get because the tests had to be done under medical supervision. “We can’t just take healthy volunteers and make their heart stop, and then send them on their way home, right?” Sunshine notes. So, the team turned to a second cohort of 99 participants who experienced a temporary loss of pulse when a tourniquet tightened on their arm. The signal—or lack thereof—recorded on the wrists of defibrillator patients looked “indistinguishable” from that of people with a tourniquet wrapped around their arm. The third and largest cohort included nearly 1,000 Pixel Watch wearers living their daily lives.

The model was trained to identify the transition between a regular heart rhythm and loss of pulse. Part of the model processed the pulse signal to identify if the amplitude dropped and if the accelerometer detected any movement. Another part of the model used neural networks to quickly run through more than 500 signal features in order to confirm that a transition between pulse states took place.
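
As a rough illustration of the first of those two stages, the sketch below flags a candidate transition when the pulse signal’s amplitude collapses on a motionless wrist. The thresholds and window logic are placeholder assumptions for illustration, not Google’s model.

```python
# Illustrative first-stage gate (not Google's model): flag a candidate
# loss-of-pulse transition when PPG amplitude collapses in a motionless wrist.
import numpy as np

def candidate_transition(ppg: np.ndarray, accel: np.ndarray,
                         amp_drop_ratio: float = 0.3,
                         motion_thresh: float = 0.05) -> bool:
    half = ppg.size // 2
    amp_before = np.ptp(ppg[:half])  # peak-to-peak amplitude, first half of window
    amp_after = np.ptp(ppg[half:])   # ... and second half
    amplitude_dropped = amp_after < amp_drop_ratio * amp_before
    motionless = np.std(accel) < motion_thresh
    return amplitude_dropped and motionless
```

Only windows that pass a cheap gate like this would need to be scored by the heavier neural network running over the 500-plus signal features.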

But these signals can also occur when a wearer simply falls down or lies in an awkward position, Sunshine says. The model needed to proceed through additional checks before calling 911.

After identifying a possible loss of pulse, the watch turns on an infrared light that penetrates deeper into the skin than the standard green light that is always on to detect pulse. The watch searches for a pulse as the green and infrared lights flood the wrist. At the same time, another algorithm checks that the pulse detected, if there is one, matches the regularity of a beating heart.

Finally, a “quite annoying” haptic buzz with an irregular pattern is turned on, Sunshine says. If the wearer is still motionless after 35 seconds of buzzing, then 911 is called. The goal is for classification to occur in around one minute.
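
Putting the checks described above in order, the escalation logic might look like the following sketch. The timings come from the article; the `watch` object and all of its methods are hypothetical placeholders.

```python
# Sketch of the escalation sequence described above (timings from the article;
# the watch object and its methods are hypothetical).
import time

def confirm_and_escalate(watch) -> None:
    if not watch.candidate_loss_of_pulse():
        return
    watch.enable_infrared_led()          # probe deeper tissue than the green LED
    if watch.pulse_found() and watch.rhythm_is_heartlike():
        watch.disable_infrared_led()     # pulse confirmed: stand down
        return
    watch.start_irregular_haptic_buzz()  # "quite annoying" prompt to respond
    time.sleep(35)                       # wearer has 35 seconds to react
    if watch.still_motionless():
        watch.call_emergency_services()  # place the 911 call
```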

The algorithms in Google’s Pixel Watch look for changes in pulse amplitude that might be a sign of cardiac arrest.
Google

Specificity Over Sensitivity

After training the algorithm in these controlled settings, Sunshine and his colleagues tested the feature on 355 Pixel wearers outside the lab, yielding one errant call to 911. The team also tested the model back in the lab on a new set of participants using the tourniquet technique to temporarily pause their pulse. There, the model correctly identified a loss of pulse in 67 percent of the more than 1,000 sessions of tourniquet-induced pulselessness conducted in the lab by 156 participants (21 of whom were professional stunt persons). This means that the algorithm did not catch around one third of cases where there was a loss of pulse.
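
In standard screening terms, the lab result corresponds to a sensitivity of roughly 67 percent, while the field test with its single errant call bears on specificity. For reference, the standard definitions are:

```latex
\text{sensitivity} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} \approx 0.67,
\qquad
\text{specificity} = \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}}
```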

The decision to maximize specificity over sensitivity is understandable to Mahsa Khalili, a postdoctoral researcher at the University of British Columbia, who studies out-of-hospital cardiac arrests and was not involved in this work. Similar to other models, this one will likely improve as more data comes in from the U.S.-based users who opt in to the feature, she says.

While many academic labs are limited by the number of participants willing to enroll in cardiac monitoring studies, Google is uniquely resourced and situated to reach many more end users, Khalili adds. The Google research team made details about the participant numbers and system architecture available, but published only pseudocode of the model itself.

The opt-in feature is geared toward the general population. But like all medical devices, it is not intended for certain populations, such as people with severe cardiac disease, Sunshine says. He and the team expect to evaluate the real-world data as it comes in and disseminate the findings.

Energy-Efficient Neural Processor Anticipates User Intentions

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Brain-chip technology is quickly accelerating. In one of the latest advancements, researchers have designed a new chip that uses larger groups of neurons and less power to detect when a user wants to initiate a given behavior—for example, reaching for an object. The new approach, if it translates to humans, could theoretically provide users with more autonomy in initiating movement control.

Implanted systems known as intracortical brain-computer interfaces (iBCIs) are a game changer for many people with paralysis, providing them with a means to regain some movement control. iBCIs work by inserting electrode arrays into the brain to record neural activity. Because our neurons naturally communicate with one another using electric pulses, these brain chips are able to detect the electrical signals.

“The detected [signals] are then used by BCI applications to interpret neural activity and translate it into commands, such as controlling a computer cursor or a robotic limb,” explains Daniel Valencia, a researcher at San Diego State University.

Current iBCIs monitor individual neurons in the brain. However, doing so continuously is energy intensive, and it can be difficult to discern whether a signal is indeed coming from the neuron being monitored or from a neighboring neuron that has a similar electrical firing pattern. It takes a lot of power for brain chips to analyze the data, sift through all the “background noise,” and pinpoint true neural firings. Due to this high power consumption, most current iBCIs tend to be turned on manually only during predefined periods, such as clinical or laboratory sessions.

Valencia and his colleagues were interested in creating a different kind of system that passively monitors brain activity and automatically switches on when needed. Instead of monitoring individual neurons, their proposed chip monitors the general activity of a cohort of neurons, or their local field potentials (LFPs).

An Efficient Solution

This approach involves a simpler process, detecting the frequency at which a collection of neurons in a given region of the brain is firing. When certain thresholds of neural activity are hit, the brain chip is switched on. For example, when people are sleeping, neurons’ LFPs exhibit increased activity in the 30-to-90-hertz range, but when people are preparing to move, there is an increase in activity in the 15-to-35-Hz range. The chip proposed by Valencia and his colleagues would therefore activate only when a user’s brain activity presumably indicates an intention to move an object.
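
A minimal sketch of such a wake-up gate follows, assuming Welch power-spectrum estimation, a made-up sample rate, and a made-up threshold. The band limits come from the article; nothing here is the authors’ implementation.

```python
# Illustrative LFP wake-up gate: switch the chip on when power in the
# movement-preparation band (15-35 Hz, per the article) crosses a threshold.
import numpy as np
from scipy.signal import welch

FS = 1000  # LFP sample rate in Hz (assumed)

def band_power(lfp: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power between lo and hi, in Hz."""
    freqs, psd = welch(lfp, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def should_wake(lfp: np.ndarray, threshold: float) -> bool:
    return band_power(lfp, 15.0, 35.0) > threshold
```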

In a study published in the February print issue of IEEE Transactions on Biomedical Circuits and Systems, the researchers tested their new LFP approach using previously recorded datasets of neural activity from animals performing movement tasks. They used the data and modeling to determine how much energy is required in their LFP approach compared to conventional brain chips that monitor individual neurons.

The results show that the two approaches are comparable in terms of determining the intentions of a user—conventional brain chips slightly outperformed the LFP approach—but the LFP approach uses significantly less power, which Valencia notes is a key advantage. “Additionally, the recording circuitry needed for LFPs is much simpler compared to [conventional] methods, which reduces hardware complexity,” he says. For instance, brain chips based on LFPs may not require the use of deeply penetrating micro-electrodes, significantly reducing the chance of tissue scarring in the brain and potentially increasing the longevity of the device.

Importantly, this new proposed system would allow users to complete tasks autonomously and more easily, without having to manually activate their brain chip. Many scientists in the field of iBCI design are interested in developing these more advanced, “self-paced” iBCIs. “Our work is a step toward developing these systems, allowing users to control their engagement independently,” says Amir Alimohammad, a professor at San Diego State University who was involved in the study.

Alimohammad adds that his team is currently working on integrating their LFP approach that predicts a user’s intentions within a broader iBCI system that also uses data from single neurons firing. Whereas the LFP data could be used to activate the system, detailed data from individual neurons could be used to execute more precise motor control, he says.

This Portable MRI Operates on Battery Power

According to the World Health Organization, strokes are the second most common cause of death worldwide. Of the roughly 15 million people who suffer strokes annually, one-third die and another third are left with permanent disabilities. A new design for a portable MRI scanner has the potential to make a major impact on those numbers.

Medical imaging is essential in diagnosing a stroke. Strokes have two major causes, and the difference is critical. Ischemic strokes are caused by a blockage of blood vessels in the brain, and account for about nine out of 10 cases. Hemorrhagic strokes are the result of bleeding in the brain. Choosing the wrong treatment can be damaging, or even fatal.

A better imaging device could result in faster diagnoses. Many medical facilities—even in developed nations—lack MRI scanners and other sophisticated imaging equipment. This can delay a diagnosis past the “golden hour”—the first hour of a stroke, when treatment can be most effective. Delay increases the chance of brain-tissue damage due to oxygen deprivation.

Even if an MRI machine is available, patients must be transported from a facility’s emergency department to the radiology department for imaging and then returned for treatment. Scientists at Wellumio in Wellington, New Zealand, reasoned that a mobile unit capable of scanning just the patient’s head would allow imaging to be done right in the emergency department, while the patient is still on a gurney.

Designing a Portable MRI Scanner

The result is Axana: an MRI scanner mounted on a mobile pedestal. Axana’s toroid (doughnut) shape is just large enough for the patient’s head. The device is controlled by a simple touchscreen interface, which means that it requires far less training than traditional imaging systems.

While Axana uses the same sensors as standard MRI machines, the signals are provided in a very different manner. Traditional MRI relies on three pulsed gradient electromagnetic coils along x, y, and z axes. (This is what makes an MRI scan feel like being in a steel drum while goblins pound on the outside with sledgehammers.) The result is highly detailed three-dimensional imaging of soft tissues.

Axana does not use pulsed gradient coils. Instead, it uses magnetic fields at different frequencies to align the hydrogen atoms in soft tissues. The machine creates these fields in different regions of the patient’s head using different frequencies to achieve the spatial resolution.

As it stands now, the images are very low resolution, but they’re sufficient for gross anatomy analysis. The company intends to add more coils operating at more frequencies in the next version to improve the resolution. The system compares the rate of signal decay in the tissue with the diffusion of blood in the tissues. This can provide enough information to detect the impaired blood diffusion that would indicate an ischemic stroke. The data are displayed in an image that is color coded to draw attention to areas of impaired diffusion.
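
A hypothetical sketch of that last display step, purely to illustrate the idea: take a 2D map of diffusion values and paint regions below a threshold red. The threshold and grayscale base are assumptions, not Wellumio’s processing.

```python
# Hypothetical color-coding step: highlight impaired-diffusion regions in red.
import numpy as np

def color_code(diffusion_map: np.ndarray, impaired_below: float) -> np.ndarray:
    """Return an RGB image: grayscale for normal tissue, red where impaired."""
    norm = diffusion_map / diffusion_map.max()
    rgb = np.stack([norm, norm, norm], axis=-1)             # grayscale base image
    rgb[diffusion_map < impaired_below] = [1.0, 0.2, 0.2]   # flag impaired areas
    return rgb
```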

Axana does not require any AI to process the data, just straightforward physics and mathematics. This makes the interpretation of the data much simpler and more reliable.

The end result is a portable device that can be produced at a relatively low cost compared with a traditional MRI installation, which can cost between US $1 million and $3 million. The current prototype weighs in at about 100 kilograms, which makes it a bit large for ambulance applications, but it should fit well in most emergency department facilities. Its power consumption is low enough for the device to be powered by a standard wall outlet, and it contains a battery, so it can continue to operate when moved from one outlet to another.

“Further down the track, we definitely want the device to be capable of operation out in the field,” says Paul Teal, Wellumio’s chief data scientist. “For this application, perhaps three or four scans is all that would be required in a single deployment.”

At present, the prototype is undergoing early-stage testing with human patients at the Royal Melbourne Hospital, in Australia. The system is designed only to detect ischemic stroke at this point, though the company intends to expand its uses to include hemorrhagic diagnoses as well. Ultimately, it could be useful in detecting other forms of head trauma.

A major part of the company’s mission is to make this device available in rural and underserved communities through its small size, lower cost, and reduced need for training. “In order to really make a difference in the time to treatment for treatable ischemic stroke, the device must be operated by emergency room doctors (and eventually paramedics) without the oversight of a neurologist and/or radiologist,” says Teal. “There is a lot of regulatory approval that must be gained before the device can be used in this way, but it is actually quite feasible.”

An Exoskeleton with Self-Balancing Capabilities Moves Closer to Market Launch

Many people who have spinal cord injuries also have dramatic tales of disaster: a diving accident, a car crash, a construction site catastrophe. But Chloë Angus has quite a different story. She was home one evening in 2015 when her right foot started tingling and gradually lost sensation. She managed to drive herself to the hospital, but over the course of the next few days she lost all sensation and control of both legs. The doctors found a benign tumor inside her spinal cord that couldn’t be removed, and told her she’d never walk again. But Angus, a jet-setting fashion designer, isn’t the type to take such news lying—or sitting—down.

Ten years later, at the CES tech trade show in January, Angus was showing off her dancing moves in a powered exoskeleton from the Canadian company Human in Motion Robotics. “Getting back to walking is pretty cool after spinal cord injury, but getting back to dancing is a game changer,” she told a crowd on the expo floor.

The company will begin clinical trials of its XoMotion exoskeleton in late April, initially testing a version intended for rehab facilities as a stepping stone toward a personal-use exoskeleton that people like Angus can bring home. The XoMotion is only the second exoskeleton that’s self-balancing, meaning that users needn’t lean on crutches or walkers and can have their hands free for other tasks.

“The statement ‘You’ll never walk again’ is no longer true in this day and age, with the technology that we have,” says Angus.

The Origin of the XoMotion Exoskeleton

Angus, who works as Human in Motion’s director of lived experience, has been involved with the company and its technology since 2016. That’s when she met a couple of academics at Simon Fraser University, in Vancouver, who had a novel idea for an exoskeleton. Professor Siamak Arzanpour and his colleague Edward Park wanted to draw on cutting-edge robotics to build a self-balancing device.

At the time, several companies had exoskeletons available for use in rehab settings, but the technology had many limitations: Most notably, all those exoskeletons required crutches to stabilize the user’s upper body while walking. What’s more, users needed assistance to get in and out of the exoskeleton, and the devices typically couldn’t handle turns, steps, or slopes. Angus remembers trying out an exoskeleton from Ekso Bionics in 2016: “By the end of the week, I said, ‘This is fun, but we need to build a better exoskeleton.’”

Arzanpour, who’s the CEO of Human in Motion, says that his team was always drawn to the engineering challenge of making a self-balancing exoskeleton. “When we met with Chloë, we realized that what we envisioned is what the users needed,” he says. “She validated our vision.”

Arun Jayaraman, who conducts research on exoskeletons at the Shirley Ryan Ability Lab in Chicago, is working with Human in Motion on its clinical trials this spring. He says that self-balancing exoskeletons are better suited for at-home use than exoskeletons that require arm support: “Having to use assistive devices like walkers and crutches makes it difficult to transition across surfaces like level ground, ramps, curbs, or uneven surfaces.”

How Do Self-Balancing Exoskeletons Work?

Self-balancing exoskeletons use much of the same technology found in the many humanoid robots now entering the market. They have bundles of actuators at the ankle, knee, and hip joints, an array of sensors to detect both the exoskeleton’s shifting positions and the surrounding environment, and very fast processors to crunch all that sensor data and generate instructions for the device’s next moves.
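
As a toy illustration of the innermost piece of that sense-compute-act loop, consider a bare-bones PD controller that turns a measured lean angle into corrective joint torque. The gains, control rate, and single-joint simplification are invented for illustration; a real self-balancing exoskeleton controller is far more sophisticated.

```python
# Toy balance loop: a PD controller that pushes the measured lean angle
# back toward vertical. All constants are illustrative, not XoMotion values.
KP = 400.0   # proportional gain (N*m per radian of lean)
KD = 40.0    # derivative gain (N*m per radian/s)
DT = 0.002   # 500 Hz control loop (assumed)

def balance_torque(lean_angle: float, lean_rate: float) -> float:
    """Return corrective ankle torque in newton-meters."""
    return -(KP * lean_angle + KD * lean_rate)

# Each cycle: read IMU and joint sensors -> estimate lean -> command actuators.
```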

While self-balancing exoskeletons are bulkier than those that require arm braces, Arzanpour says the independence they confer on their users makes the technology an obvious winner. He also notes that self-balancing models can be used by a wider range of people, including people with limited upper body strength and mobility.

When Angus wants to put on an XoMotion, she can summon it from across the room with an app and order it to sit down next to her wheelchair. She’s able to transfer herself and strap herself into the device without help, and then uses a simple joystick that’s wired to the exoskeleton to control its motion. She notes that the exoskeleton could work with a variety of different control mechanisms, but a wired connection is deemed the safest: “That way, there’s no Wi-Fi signal to drop,” she says. When she puts the device into the “dance mode” that the engineers created for her, she can drop the controller and rely on the exoskeleton’s sensors to pick up on the subtle shifts of her torso and translate them into leg movements.

What Are the Challenges for Home-Use Exoskeletons?

The XoMotion isn’t the first exoskeleton to offer hands-free use. That honor goes to the French company Wandercraft, which already has regulatory approval for its rehab model in Europe and the United States and is now beginning clinical trials for an at-home model. But Arzanpour says the XoMotion offers several technical advances over Wandercraft’s device, including a precise alignment of the robotic joints and the user’s biological joints to ensure that undue stress isn’t put on the body, as well as torque sensors in the actuators to gather more accurate data about the machine’s movements.

Getting approval for a home-use model is a challenge for any exoskeleton company, says Saikat Pal, an associate professor at the New Jersey Institute of Technology who’s involved in Wandercraft’s clinical trials. “For any device that’s going to be used at home, the parameters will be different from a clinic,” says Pal. “Every home looks different and has different clearances. The engineering problem is several times more complex when you move the device home.”

Angus says she has faith that Human in Motion’s engineers will solve the problems within a couple of years, enabling her to take an XoMotion home with her. And she can’t wait. “You know how it feels to fly 14 hours in coach? You want to stretch so bad. Now imagine living in that airplane seat for the rest of your life,” she says. “When I get into the exoskeleton, it only takes a few minutes for my back to lengthen out.” She imagines putting on the XoMotion in the morning, doing some stretches, and making her husband breakfast. With maybe just a few dance breaks.

Enhanced Brain Connectivity: Biohybrid BCI Integrates Additional Neurons

Brain-computer interfaces have enabled people with paralysis to move a computer cursor with their mind and reanimate their muscles with their thoughts. But the performance of the technology—how easily and accurately a BCI user’s thoughts move a cursor, for example—is limited by the number of channels communicating with the brain.

Science Corporation, one of the companies working towards commercial brain-computer interfaces (BCIs), is forgoing the traditional method of sticking small metal electrodes into the brain in favor of a biology-based approach to increase the number of communication channels safely. “What can I stick a million of, or what could I stick 10 million of, into the brain that won’t hurt it?” says Alan Mardinly, Science Corp co-founder.

The answer: Neurons.

Science Corp has designed a waffle-like device to house and place a new layer of neurons across the brain’s surface. The company’s researchers tested the device in mice, in which the additional neurons enabled the mice to learn whether to move left or right depending on whether the device was “on.” The research lays the groundwork for a future interface that does not damage the brain as much as existing BCIs—or at all. The research was shared in a study posted to the bioRxiv preprint server in November.

A Neuron-filled Waffle

BCIs connect neurons within the brain to external computers. During clinical studies at universities across the U.S., roughly three dozen humans have controlled BCI technology using millimeter-scale metal electrodes stuck down into the brain through a coin-sized opening in the skull.

Other research teams have designed thinner, softer, or smaller devices than the traditional metal electrodes in order to electrically connect neurons to computers and avoid damaging the neurons and blood vessels while doing so. Neuralink, for instance, uses bendable polymer electrodes in its BCI.

Instead of sticking anything into the brain, the Science Corp device sits on top of it. But the device isn’t just a board stuck on the brain’s surface—it’s full of neurons. The neurons sit in the wells of a waffle-like device, which is then placed on the brain’s surface, neuron-side down. The neurons grow down into the brain, acting as a glue between the device and the brain’s tissue.

Science Corp’s biohybrid technology aims to integrate biology into the devices implanted into the body. Biohybrid technology is an old idea, Mardinly says. It has gone in and out of popularity, first showing up in early BCI research in the 1990s and resurfacing more recently. But the idea is a complex one because neurons are fragile, and BCI technology has generally moved toward sturdier electrodes.

Science Corp, based in Alameda, California, was founded in 2021 by Mardinly and one of Neuralink’s co-founders, Max Hodak. The biohybrid project began soon after the medical technology company’s founding in early 2022, and the work presented in the bioRxiv study took around four or five months to complete, Mardinly says.

Science Corp’s setup implants light-sensitive neurons into a mouse’s brain (left). Three weeks after the device was implanted, roughly half of the light-sensitive neurons were still present.
Jennifer Brown, Kara M. Zappitelli et al.

The process of building the biohybrid device begins on the benchtop where neurons—specifically a kind called primary cortical excitatory neurons, which fire off signals to neighboring neurons—are derived from embryonic stem cells taken from the same mouse line as the mouse into which Science Corp’s researchers implanted the device to test it. The neurons are modified to have optogenetic properties, meaning they will fire when hit with light of a certain frequency.

The device looks like a waffle with little dishes called microwells, each 10 micrometers in diameter, mounted on a clear backing. Each microwell holds one neuron, and each device, clocking in at around 5 millimeters square, houses an average of 90,000 neurons. Neurons from the biohybrid device grew into the very top part of the cortex and blood vessels grew into the new neurons.
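
Those figures are self-consistent: a back-of-the-envelope check (assuming circular 10-micrometer wells on a 5-by-5-millimeter device) shows the wells occupying only about a quarter of the device’s area:

```latex
\frac{90{,}000 \times \pi (5\,\mu\mathrm{m})^2}{(5\,\mathrm{mm})^2}
  \approx \frac{7.1\,\mathrm{mm}^2}{25\,\mathrm{mm}^2} \approx 28\%
```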

But just having neurons grow into the brain will not mean that the biohybrid device can change the brain’s function.

So, researchers shined light onto the device through a glass window in the mouse’s skull. The light “turned on” the new neurons, and the researchers turned on the light when the mouse was learning whether to turn left or right to get a treat. Mice learned to move to the left of a cage when the light was shone on the device in order to get their reward; when the light was turned off, mice learned to move to the right to get their reward.

The new light-sensitive neurons helped five of the nine mice learn a new behavior, which to Mardinly suggests that the biohybrid device successfully “modulates output behavior.”

Images of the brain under the implant showed neuronal axons sticking down through the pia mater, the dense cell layer at the very surface of the brain, and into layer 1 of the cortex.

“We haven’t proven that they’re forming synapses, but it seems extremely likely,” Mardinly says.

A Big Jump for BCIs

By itself, this biohybrid device is not yet an interface, says Jack Judy, professor of electrical engineering at the University of Florida, who was not involved in the work. Judy previously led a neuroprosthesis program funded by the U.S. Defense Advanced Research Projects Agency.

“When I think of an interface—well, there’s information coming out of the device,” says Judy. Instead, it’s a way to prepare tissue for an optical interface by spreading optically active cells across the brain, he says.

Mardinly says the team at Science Corp has already begun building future biohybrid devices with inputs and outputs from the brain. The devices house neurons in trenches instead of wells. One side of the trench has LEDs to deliver light to small groups of the neurons, and the other side will have electrical contacts to record the action potentials from neurons, similar to how many BCI technologies record from neurons already in the brain.

Going from proof of concept to a prototype is a big jump, Mardinly says. It’s an “extremely complicated” design, he says. “Is any of this worth it? And, you know, that remains to be seen, right? That’s on us to move forward and demonstrate.”

The research team acknowledges that it is difficult to pinpoint exactly how integrated the neurons from the graft are into the brain. Optogenetic stimulation requires just a few hundred neurons to work, as seen in past studies, and the biohybrid device adds many more than that to the brain.

The study leaves many unknowns, most crucially why four of the nine mice with light-activated biohybrid devices did not learn the task.

The next major milestone is to develop a biohybrid device with human-engineered cells that records and stimulates, and then test the work in a larger animal.

Paragraf is Developing a “Clean Slate” Graphene Manufacturing Facility

Scientists and engineers have long touted graphene for use in electronic devices due to its excellent electrical conductivity, optical transparency, mechanical strength, and its ability to conduct heat and remain stable under high temperatures. Graphene’s use in electronics at the commercial level, however, is still limited. That’s in part because the single-layer graphene required for most (but not all) electronics is much harder to create and integrate at large scales. It’s also because graphene, as a new material, must clear robust regulatory and certification requirements before it can be used in many high-technology applications. That said, many advanced technology markets, including sensors, are starting to use the material more widely.

A number of companies around the world have developed graphene sensors, but many of these have revolved purely around biosensing, with many companies trying to develop advanced COVID tests during the pandemic. But Paragraf, a company based in Cambridgeshire in the United Kingdom, has set its sights higher than being a small-scale batch-to-batch producer of graphene sensors. Instead, the company is aiming to become the first graphene foundry that supplies end users with “blank canvas” graphene field effect transistor (gFET) sensing components that can be tailored to individual needs by users across different industries.

Paragraf believes this approach will make it easier for the company to scale up its manufacturing capabilities. The approach removes regulatory constraints and allows the engineers to just focus on the core technology, rather than having to consider the multitude of scenarios where their sensors could be used and for which they would need to be specifically customized.

Building Blank Canvas gFETs

Paragraf’s sensor elements are a blank canvas, so to speak. The company is building the main sensing surface—the canvas—by growing graphene on a sapphire substrate and adding two contacts with a gate electrode on top. It’s then up to Paragraf’s customers to finalize the sensor based on what they need it to do: “We’re not selling a finished sensor,” says Mark Davis, Paragraf’s director of biosensors, who adds that many different kinds of receptors can be added to the sensor by the user.

By giving the user control over the sensing receptors, it will make it easier for the gFETs to meet regulatory and certification requirements in their respective industries. Paragraf is targeting plenty of applications and industries, including potassium ion sensing in healthcare diagnostics, detecting heavy metals in agricultural wastewater runoff, gas sensing applications in the healthcare, agricultural, and hydrogen energy industries, pH sensing in cell and gene therapy, food and beverage monitoring, and chemical processing applications.

There is also a lot of potential for the gFETs to possess multiplexing capabilities for healthcare diagnostics, where many different biomarkers or chemical components of interest can be measured on the same chip. “Many gFETs only contain 3 to 5 channels, but the size of Paragraf’s FETs means that we can fit up to 100 channels onto a chip,” says Davis, which allows the resulting chip to detect and differentiate more things in a given sample.

Users of Paragraf’s gFETs, according to the company, are starting to develop healthcare diagnostic platforms using these blank canvas sensors for single ion and pH sensing applications because of the gFET’s high sensitivity. “For most healthcare applications, we’re looking at single-use disposable sensors that will cost $1 per sensor,” says Davis. “In the future, so long as we can make over 1 million [sensors] per annum and keep the wafer size below 3 by 3 millimeters, we will be able to get the costs down to this level―expanding the potential and capabilities of graphene sensors, and graphene electronics in general, in real world scenarios.”

Close up of Paragraf's graphene field effect transistor, against a white background.
Paragraf intends for its graphene field effect transistor to be a blank canvas for users to build upon.Paragraf

Developing gFET Sensor Components at Scale

The gFET being developed by Paragraf is an electrolyte-gated FET. The FET works by placing an electrolyte droplet that the user wants to analyze on the surface of the sensor. The electrolyte’s electrical conductivity creates an electrical bias that changes how the electrons in the sensor’s graphene sheet behave. This also changes the detectable electrical resistance across the graphene sheet—and because graphene’s electrical properties are so good, very small concentrations of ions (including single ions) in the electrolyte sample can be detected.
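
To illustrate the readout principle in code, here is a hypothetical sketch that converts a measured change in graphene sheet resistance into an ion concentration through a user-supplied calibration curve. The log-linear response form and every parameter are assumptions for illustration, not Paragraf’s design.

```python
# Hypothetical gFET readout: invert a log-linear calibration curve,
# dR/R0 = slope * log10(C) + intercept, to estimate ion concentration C.
def concentration_from_resistance(r_measured: float, r_baseline: float,
                                  slope: float, intercept: float) -> float:
    delta = (r_measured - r_baseline) / r_baseline  # fractional resistance change
    return 10 ** ((delta - intercept) / slope)      # concentration in calibration units
```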

Paragraf is making the sensors in batches, much like the way the semiconductor industry fabricates wafers full of chips, directly depositing the graphene onto the wafers via metal-organic chemical vapor deposition and attaching metal contacts on top.

Many chemical vapor deposition techniques grow graphene on copper foil, but the graphene then needs to be transferred to the end device, which can cause structural defects and copper contamination in the graphene that would affect the sensing capabilities of the device. By directly growing the graphene on the wafers, Paragraf is avoiding this to improve the sensing performance of their devices. Davis says that Paragraf is “manufacturing the sensor elements semiconductor-style so that we can miniaturize the sensor elements and fit more sensor elements per chip. The vision for Paragraf is that we are a foundry, and the manufacturer of the sensor components for a final diagnostic solution.”

Davis says that Paragraf can currently fit up to 32 gFETs on a wafer 51 millimeters to a side. The company is in the process of setting up a large-scale manufacturing facility in Huntingdon in Cambridgeshire. Paragraf has also recently acquired another graphene sensor company, Cardea Bio, in San Diego.

Paragraf is also developing graphene Hall effect sensors with a wide dynamic range for both low- and high-field magnetic measurement applications—from mapping high magnetic fields at CERN and measuring electromagnet fields to pinpointing current leaks in batteries and measuring ultra-small magnetic fields inside quantum computers. These, however, use a standard sensor element architecture that doesn’t require any further input from the end user—they are ready to go. Ultimately, it’s Paragraf’s bet on blank-slate gFETs on which its hopes of creating the first graphene foundry lie.

Epax Allocates $10 Million to Expand Its Marine Oil Product Line

Epax is investing $10 million to expand its Aalesund facility with a new synthesis plant, using advanced fractionation technology to extract omega-9 and omega-11 from North Atlantic fish oil for specialized nutrition products. The investment aims to enhance sustainability, increase crude fish oil processing capacity to 5,000 tonnes per year, and support local economic growth while reducing CO2 emissions.