Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who's next? It's time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels Based on the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants (ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS) depending on the National Regulatory Authority (NRA) Maturity Level (ML) of the regulatory requirements for healthcare products. Find more details…

Podcast Featuring Q&A with the FDA

“Q&A with FDA” is a monthly podcast series that provides engaging conversation and discussion about the latest regulatory topics. In this series, FDA’s Division of Drug Information will answer some of the most commonly asked questions received by FDA. Perhaps you have had the same questions.

Resilient and Adaptable Sensor: Survives Halving While Remaining Stretchy and Self-Healing

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Stretchy sensors are useful for many applications, including monitoring human health and emulating artificial muscles in soft robots. One big problem with these sensors is that they don’t last that long as they twist, stretch, and otherwise deform.

Now, a team of researchers in Belgium has created a highly durable, stretchable sensor with remarkable self-healing capabilities—to the point that it can heal itself after being cut completely in half and still work at near-perfect performance. The results are described in a study published 16 July in IEEE Sensors Journal.

Lead author Rathul Sangma is a Ph.D. candidate at Vrije Universiteit Brussel who is affiliated with Imec. He says his team was motivated to develop a reliable, stretchable sensor for health monitoring, rehabilitation, and motion tracking because “these systems often endure repeated strain or accidental damage. Existing stretchable sensors can fail under such conditions, leading to unreliability and waste.”

Self-Healing Polymer Sensors for Wearables

To create their durable sensor, Sangma and his colleagues decided to use a polymer with a chemical bonding mechanism called Diels–Alder crosslinks. These chemical bonds are reversible, meaning they can break when damaged and reform upon recontact. “When the material is cut, the broken bonds become reactive and, when properly realigned, [they] reconnect, restoring the polymer’s original structure,” says Sangma.

In experiments, the researchers showed that the polymer could be cut in half and self-heal at room temperature over the course of roughly 24 hours. The self-healing process could be sped up to just four hours when the sensor was placed in an oven at 60 °C.

Even after being stretched to the point of breaking and then healed six times, the sensor worked at 80 percent capacity.

Smart wearables could benefit from stretchable sensors capable of recovering their functionality even after sustaining significant damage. (Video: BruBotics/YouTube)

Embedded within the polymer is a liquid metal called Galinstan, which acts as a conductor. While you might expect the liquid metal to spill out when the polymer is severely damaged, the researchers found that the loss of Galinstan was minimal. They suspect that the liquid metal oxidizes when exposed to air, and the resulting oxide creates a thin, protective barrier that prevents the liquid from escaping. The oxide barrier is broken down once the two pieces of the sensor are mechanically reconnected.

“This mechanism is remarkably analogous to how human veins form a clot after rupture to prevent further blood loss,” Sangma says. “Here, the oxide acts as a temporary seal that preserves the integrity of the system until healing is complete.”

In a series of tests, the researchers explored how pristine and damaged sensors experienced drift, which is the gradual change in a sensor’s signal over a long period of continuous stretching and relaxing. The results show that a pristine sensor subjected to repeated stretching through 800 cycles drifted less than 5 percent, while a sensor that had been cut in half and stretched the same number of times drifted less than 10 percent.
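
The study's exact drift calculation isn't spelled out above, so as a rough illustration only, the sketch below treats drift as the relative change in a sensor's peak signal between the first and last stretch/relax cycles. The readings, function name, and drift definition are all hypothetical stand-ins, not the authors' method:

```python
import numpy as np

def percent_drift(cycle_peaks):
    """Relative change in peak sensor signal between the first and last
    stretch/relax cycles, in percent. A simplified stand-in for whatever
    drift metric the study actually uses."""
    return 100.0 * abs(cycle_peaks[-1] - cycle_peaks[0]) / cycle_peaks[0]

# Hypothetical per-cycle peak readings (arbitrary units) across 800 cycles.
rng = np.random.default_rng(0)
pristine = 1.0 + np.cumsum(rng.normal(0.00003, 0.00002, 800))  # slow upward creep
healed = 1.0 + np.cumsum(rng.normal(0.00008, 0.00002, 800))    # larger creep after healing

print(f"pristine sensor drift: {percent_drift(pristine):.1f}%")
print(f"healed sensor drift:   {percent_drift(healed):.1f}%")
```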

“This dual healing—both in structure and electrical functionality—is what makes our design stand out,” Sangma says.

The tests also show that the materials can be recycled with high efficiency once the device finally reaches the end of its operational life. “Over 95 percent of the sensor material can be recovered and reprocessed—an important step toward eco-friendly wearables,” says Sangma.

The research team is actively exploring opportunities to commercialize their sensor, with the aim of using it for medical rehabilitation, sports performance monitoring, and soft robotic systems. They have established a spin-off company, Valence Technologies, to commercialize the materials.

Moving forward, the researchers are looking to scale the sensor so that it can track full body movements, and they would like to conduct long-term durability testing in real-world environments, such as seeing how the sensor performs when exposed to sweat.

Quirky Graphene May Enhance Proton Therapy Precision

A new twist on pencil graphite might be a key ingredient in better cancer treatment, scientists in Singapore say. Graphite is composed of stacked layers of graphene, a single-atom-thick sheet of carbon atoms arranged in repeating hexagonal rings. Now add pentagons, heptagons, and octagons of carbon atoms into the sheet, and you’re looking at a new form of ultrathin carbon that promises to sharpen beams of subatomic particles used in proton therapy.

Ultrathin foils of carbon materials have been used for decades in proton therapy to filter particles into high-precision beams meant to kill tumors. But they take time to make and often contain impurities from the manufacturing process that lower the precision of the beam. In research described in Nature Nanotechnology, Jiong Lu and his colleagues at the National University of Singapore and in China developed a technique that can grow a 200-millimeter sheet of a new kind of ultrathin carbon material in just 3 seconds, with no detectable impurities.

Proton therapy is a noninvasive radiation treatment in which hydrogen ions are accelerated through a cyclotron to form a high-energy beam used to destroy DNA in tumors. In a cyclotron, an electromagnetic field accelerates ions of molecular hydrogen, which spiral outward as they pick up speed. They then strike a carbon foil that strips away the hydrogen’s electrons, leaving protons that exit the machine as a high-energy beam. Proton therapy is often preferred as a treatment because of its precision. The sharp beam eliminates tumors while preserving healthy tissue. The new carbon promises an even sharper and more energy-intense beam, potentially making the treatment more potent.

The benefits of the new material, called ultra-clean monolayer amorphous carbon (UC-MAC), are derived from its disordered ring structure, which contrasts with the perfect hexagonal rings in graphene. The structures present in UC-MAC create tiny pores in the material that are only one-tenth of a nanometer wide. The researchers have found a way to fine-tune these angstrom-scale pores to control how the material filters hydrogen ions, in order to produce proton beams with less scattering.

Nanograins and Nanopores

The new technique starts with depositing a thin film of copper on top of a sapphire wafer inside a chamber filled with high-density plasma. Depending on the temperature of the copper and the rate at which it’s deposited, irregular crystals a couple dozen nanometers in size called nanograins form. The nanograins provide the right conditions for UC-MAC to grow, and eventually, a complete layer of the atom-thick carbon material crystallizes on top of the copper. This growth happens in just three seconds, more than an order of magnitude faster than previous methods used to grow carbon foils.

Huihui Lin, a research scientist at Singapore’s Agency for Science, Technology and Research who worked on the project, explains that the synthesis’s rapid speeds come from the high density of the nanograins that form on the copper, and from the plasma in the growth chamber, which provides high quantities of particles that react with the substrate to form the carbon structure.

Despite its potential importance in cancer treatment, though, Lin says that UC-MAC was originally designed with different applications in mind. “We tried it in electronics and optical devices, and after three years of work, we discovered its unique advantage as a membrane for producing precision proton beams,” he explains.

Because of the angstrom-size pores in the material, the team discovered that UC-MAC was uniquely suited to turning molecular hydrogen ions into protons. Accelerating molecular hydrogen ions through the cyclotron instead of already-filtered protons increased the quantity of protons in the beam in a given amount of time by an order of magnitude.

Lin thinks it will still take time to get the material to the point of commercialization. He explains that like many other 2D materials, “you need tens of steps” to grow the carbon on the substrate. So, simplifying the process is crucial to getting closer to commercialization. Eventually though, the material may make proton therapy a more widely available treatment option. “The UC-MAC makes proton beams more tunable [and] affordable,” says Lin.

Speech BCI Enhancement through Machine Learning Competition

For the next five months, machine learning gurus can compete to best predict the intended speech of a brain-computer interface (BCI) user who lost the ability to speak due to a neurodegenerative disease. Competitors will design algorithms that predict words from the patient’s brain data. The individual or team whose algorithm makes the fewest errors between predicted sentences and actual attempted sentences will win a US $5,000 prize.

The competition, called Brain-to-text ‘25, is the second annual public, open-source brain-to-text competition hosted by a research lab that is part of the BrainGate consortium, which has been pioneering BCI clinical trials since the early 2000s. This year, the competition is being run by the University of California Davis’s Neuroprosthetics Lab. (A group from Stanford University hosted the first competition using brain data from a different BCI user.)

For two years, the UC Davis research team has collected brain data from a 46-year-old man, Casey Harrell, whose speech is unintelligible except to his regular caregivers. Once the speech BCI was trained on Harrell’s brain data, it could decode what he was trying to say over 97 percent of the time and could instantly synthesize his own voice, as previously reported by IEEE Spectrum.

Decoding Speech from Brain Data

Parsing words from brain data is a two-step process: The algorithm must first predict speech sounds, called phonemes, from neural data. Then it must predict words from the phonemes. Competitors will train their algorithms on the brain data corresponding to 10,948 sentences with accompanying transcripts of what Harrell was attempting to say.
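
As a toy illustration of that two-step structure (not the lab's actual models), the sketch below uses a canned phoneme prediction for stage one and a tiny, made-up pronunciation lexicon for stage two; a real decoder would use trained neural networks and a language model:

```python
# Toy two-stage decoder: neural data -> phonemes -> words. Both stages are
# placeholders; the function names, lexicon, and phoneme strings are hypothetical.
PRONUNCIATIONS = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def predict_phonemes(neural_features):
    """Stage 1 placeholder: a trained network would map neural activity to a
    phoneme sequence; here we simply return a canned prediction."""
    return ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]

def phonemes_to_words(phonemes):
    """Stage 2: greedily match runs of phonemes against the lexicon."""
    words, i = [], 0
    while i < len(phonemes):
        for word, pron in PRONUNCIATIONS.items():
            if phonemes[i:i + len(pron)] == pron:
                words.append(word)
                i += len(pron)
                break
        else:
            i += 1  # skip any phoneme we cannot match
    return " ".join(words)

print(phonemes_to_words(predict_phonemes(neural_features=None)))  # -> "hello world"
```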

Then comes the real test: The algorithms must predict the words in 1,450 sentences from brain data withheld from the training data. The difference between the final set of predicted words and the words that Harrell attempted to say is called the word error rate—the lower the word error rate, the better the speech BCI works, overall.
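
Concretely, a word error rate is usually computed as the word-level edit distance (substitutions, insertions, and deletions) between the predicted and attempted sentences, divided by the number of words in the attempted sentence. The minimal sketch below illustrates that standard definition; the competition's exact scoring rules (text normalization, punctuation handling) may differ:

```python
def word_error_rate(predicted, reference):
    """Word-level edit distance between two sentences, divided by the number
    of words in the reference (attempted) sentence."""
    p, r = predicted.split(), reference.split()
    # Dynamic-programming edit distance over words.
    dist = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i in range(len(p) + 1):
        dist[i][0] = i
    for j in range(len(r) + 1):
        dist[0][j] = j
    for i in range(1, len(p) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if p[i - 1] == r[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[len(p)][len(r)] / len(r)

# Hypothetical example: one word missing from a seven-word attempted sentence.
print(word_error_rate("i want to see my daughter",
                      "i want to go see my daughter"))  # 1/7 ≈ 0.14
```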

Researchers reported a 6.70 percent word error rate, which they hope the public can beat. The goal of the competition is to attract machine learning experts who may not realize how valuable their skills are to speech BCIs, says Nick Card, a postdoctoral researcher at UC Davis leading both the clinical trial and the competition.

“We could sit on this data and hide it internally and make more discoveries with it over time,” says Card. “But if the goal is to help make this technology mature faster to help the people who need to benefit from this technology right now, then we want to share it and we want people to help us solve this problem.”

The public invitation into the research world is “an awesome development” that is “long overdue” in the BCI space, said Konrad Kording, a professor at the University of Pennsylvania who researches the brain using machine learning and who is not involved in the research or competition.

This year, Card and his fellow researchers have raised the bar by lowering the starting word error rate with their own high-performing algorithm. The first brain-to-text competition in 2024 began with the Stanford University group posting an error rate of 11.06 percent and finished with the competition winner achieving 5.77 percent. Also new this year are cash prizes for lowest error rates and the most innovative approach, provided by BCI company Blackrock Neurotech, whose electrodes and recording hardware have been used by BrainGate clinical trials since 2006.

Ethical Concerns in BCI Data Sharing

BCIs have long served as a bridge between neuroscience, medicine, and machine learning. And while machine learning has a tradition of open-source research, medical research is bound by patient confidentiality.

The main concern with public brain data is that the patient will be identified, says bioethicist Veljko Dubljević, a professor of both philosophy and science, technology, and society at North Carolina State University.

That concern is moot in this case because Harrell went public in August 2024, roughly five years after he began losing muscle tone because of amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease). In 2023, neurosurgeons at UC Davis implanted four electrode arrays with a total of 256 electrodes into the top layers of his brain. Harrell used his speech BCI in an interview with the New England Journal of Medicine last year to explain how the disease feels like being in a “slow-motion car crash.” Harrell said at the time that “it was very painful to lose the ability to communicate, especially with my daughter.”

The speech BCI was trained on data collected while Harrell conducted in-lab experiments and while he spoke casually with family and friends. But competitors of Brain-to-text ‘25 will not see any “personal use” data recorded while Harrell spoke casually and extemporaneously, Card says.

While this is a “good precaution,” Dubljević says, he wonders if Harrell realizes what it means to have someone’s sensitive medical data in the public domain for years. The “noise” of today’s BCIs could be decoded into meaningful personal information in 50 years, for instance, in a way similar to how blood donated in 1955 can now also reveal details about a person’s DNA. (DNA profiling wasn’t established until the 1980s.) Dubljević recommends limiting the data storage to five years.

Speech BCIs decode the intended movements of a person’s jaw and mouth muscles, in the same way a BCI for an arm or hand prosthesis decodes intended movements. But speech BCIs feel more personal than BCIs that control a hand prosthesis, Dubljević says. Speech is closer to “the innermost sanctum of a person,” he says. “There’s quite a lot of fear about mind reading, right?”

“As a researcher who wants to see science technology deployed for the public good, I want the technology not to be hyped up” in order to avoid a backlash, Dubljević says.

Cash Prizes for Innovative BCI Solutions

The two lowest word error rates come with $5,000 and $3,000 cash prizes, respectively, and the most innovative approach will win $1,000.

The last category is meant to encourage out-of-the-box ideas with great potential, if given more data or more time. Stacking 10 copies of the same algorithm is a common way to force a more accurate overall performance, but it costs 10 times as much computational power and, “quite frankly, it’s not a very creative solution, right?” Card says.
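
For context, that kind of stacking typically means averaging the output probabilities of several independently trained copies of the same model before picking the most likely class. The minimal sketch below shows the idea; the toy linear "models" and every name in it are hypothetical, not anything used by the competitors:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(models, x):
    """Average class probabilities across model copies, then take the argmax.
    Ten copies cost roughly ten times the compute of a single model."""
    return np.mean([m(x) for m in models], axis=0).argmax(axis=-1)

# Ten hypothetical copies of one decoder, differing only in their random weights.
rng = np.random.default_rng(0)
models = [(lambda x, w=rng.normal(size=(8, 5)): softmax(x @ w)) for _ in range(10)]
x = rng.normal(size=(3, 8))         # a batch of three fake feature vectors
print(ensemble_predict(models, x))  # predicted class index for each input
```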

The innovative category is likely to attract the usual crowd of academic and industry BCI scientists, who enjoy finding creative solutions, Kording says.

But the top slots will likely go to coders with no background in BCIs who sport a “street fighting” style of machine learning, as Kording calls it. These “street fighters” focus on speed over ingenuity. In practice, the best BCI algorithms, Kording said, are “usually not really driving from a deep knowledge of how brains work. They’re driving from a deep understanding of how machine learning works.”

That said, both the traditional BCI researchers and the new entrants are important parts of the science and engineering ecosystem, Kording says. With both corners full, the competition is slated to be an exciting battle.