Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who's next? It's time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants (ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS) based on the Maturity Level (ML) of their National Regulatory Authorities (NRAs) for healthcare products. Find more details…

Innovative Machine Learning System Tracks Patient Discomfort Levels Throughout Surgical Procedures

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

In the operating room, patients undergoing procedures with local anesthesia, while still conscious, may have difficulty expressing their levels of pain. Some, such as infants or people with dementia, may not be able to communicate these feelings at all. In the search for a better way to monitor patients’ pain, a team of researchers has developed a contactless method that analyzes a combination of patients’ heart rate data and facial expressions to estimate the pain they’re feeling. The approach is described in a study published 14 November in the IEEE Open Journal of Engineering in Medicine and Biology.

Bianca Reichard, a researcher at the Institute for Applied Informatics in Leipzig, Germany, notes that camera-based pain monitoring sidesteps the need for patients to wear sensors with wires, such as ECG electrodes and blood pressure cuffs, which could interfere with the delivery of medical care.

To build their contactless approach, the researchers developed a machine-learning algorithm capable of analyzing aspects of pain that can be detected visually by a camera. First, the algorithm analyzes the nuances of a person's facial expressions to estimate their pain levels.

The system also uses heart rate data obtained via a technique called remote photoplethysmography (rPPG), which involves shining a light on a person's skin. The amount of light reflected back can be used to detect changes in blood volume within the vessels. The researchers initially considered 15 different heart-rate variability parameters measured by rPPG for inclusion in their model and selected the seven that are statistically most relevant to pain prediction, such as heart rate maximums, minimums, and intervals.
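Winnowing 15 candidate heart-rate-variability parameters down to the seven most relevant to pain is a standard filter-style feature selection step. A minimal sketch, assuming absolute Pearson correlation as the relevance score and using hypothetical feature names (the paper's actual selection criterion and parameter set are not reproduced here):

```python
# Rank candidate HRV features by |Pearson correlation| with pain labels
# and keep the top k. Feature names and values below are illustrative.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_top_k(features, labels, k=7):
    """features: dict of name -> per-window values; labels: pain scores."""
    scored = {name: abs(pearson(vals, labels)) for name, vals in features.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

labels = [0, 1, 2, 3, 4]  # hypothetical pain scores per time window
features = {
    "hr_max": [60, 65, 70, 75, 80],        # tracks pain closely (illustrative)
    "motion_noise": [3, 1, 4, 1, 5],       # weakly related (illustrative)
}
print(select_top_k(features, labels, k=1))  # → ['hr_max']
```

The same filter pattern generalizes to any relevance score (mutual information, ANOVA F-value, and so on); only the scoring function changes.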

Pain-Prediction Model Training Datasets

The team used two different datasets to train and test their pain-prediction model. One is a well-established and widely used database that measures pain called the BioVid Heat Pain Database. Researchers created this dataset in 2013 through experiments in which thermodes induced incremental, measurable temperature increases on individuals’ skin. The researchers then captured the participants’ physical responses to the corresponding pain that they felt.

The second dataset was developed by the researchers for this new work. Twenty-nine patients undergoing heart procedures involving insertion of a catheter were surveyed about their pain levels at five-minute intervals.

Importantly, most other pain-prediction algorithms have been trained using very short video clips, but Reichard and her team specifically used longer training videos (ranging from 30 minutes to 3 hours) of realistic surgery scenarios to train their model. For instance, the training videos used may have included scenarios where lighting may not be ideal, or the patient’s face may be partially obscured from the camera at times. “This reflects a more realistic clinical situation compared to laboratory datasets,” Reichard explains.

Tests of their model show that it has a pain-prediction accuracy of about 45 percent. Reichard says she is surprised that the model is so accurate, given the number of disruptions that occurred throughout the raw video footage, such as a patient moving on the operating table or changes in the camera angle. While many previously developed pain-prediction models can achieve higher accuracies, those were trained using short video clips that are “ideal” with no visual obstructions. Instead, this research team used less-than-ideal—but more realistic—video footage to train their model.

What’s more, Reichard notes that the team used a fairly simple statistical machine-learning model. “Using more complex approaches, for example, based on neural networks, would most likely further improve performance,” she says.
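A "fairly simple statistical machine-learning model" fusing facial-expression and heart-rate features could be as plain as a logistic regression trained by stochastic gradient descent. A self-contained sketch under that assumption; the feature layout, data, and hyperparameters are illustrative, not the paper's:

```python
# Logistic-regression-style fusion of a facial-expression score and HRV
# features into a binary pain / no-pain prediction. All values illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """X: feature vectors, e.g. [facial_score, hr_feature]; y: 0/1 labels."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Toy, linearly separable data: low scores = no pain, high scores = pain.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train(X, y)
print(predict(w, b, [0.85, 0.85]))  # → True
```

The appeal of a model this simple is interpretability: each learned weight directly states how much a facial or cardiac feature shifts the predicted pain probability.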

Reichard says she finds this type of research, which could support both patients and medical staff, meaningful. In future work, she plans to develop similar contactless systems that use radar to measure patients' vital signs in medical settings.

Enhancing PMUT Development for Biomedical Ultrasonic Uses through AI Innovation

This whitepaper gives MEMS engineers, biomedical device developers, and multiphysics simulation specialists a practical AI-accelerated workflow for optimizing piezoelectric micromachined ultrasonic transducers (PMUTs). The workflow lets you explore complex design trade-offs between sensitivity and bandwidth while achieving validated performance improvements in minutes instead of days on standard cloud infrastructure.

What you will learn:

  • MultiphysicsAI combines cloud-based FEM simulation with neural surrogates to transform PMUT design from trial-and-error iteration into systematic inverse optimization
  • Training on 10,000 randomized geometries produces AI surrogates with 1% mean error and sub-millisecond inference for key performance indicators: transmit sensitivity, center frequency, fractional bandwidth, and electrical impedance
  • Pareto front optimization simultaneously increases fractional bandwidth from 65% to 100% and improves sensitivity by 2-3 dB while maintaining 12 MHz center frequency within ±0.2%
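Pareto-front optimization, as in the last bullet, means keeping only the designs that no other design beats on both objectives at once. A generic sketch over (fractional bandwidth, sensitivity) pairs, both maximized, using illustrative numbers rather than the whitepaper's actual solver or data:

```python
# Filter candidate PMUT designs down to the Pareto front over two
# objectives to maximize: (fractional_bandwidth_pct, sensitivity_db).
# A design is dominated if another design is at least as good on both
# objectives and strictly better on one.

def pareto_front(designs):
    """designs: list of (bandwidth, sensitivity) tuples, both maximized."""
    front = []
    for d in designs:
        dominated = any(
            o != d and o[0] >= d[0] and o[1] >= d[1]
            and (o[0] > d[0] or o[1] > d[1])
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# Illustrative candidates: wider bandwidth tends to cost sensitivity.
candidates = [(100, -38), (80, -36), (65, -35), (70, -39)]
print(pareto_front(candidates))  # → [(100, -38), (80, -36), (65, -35)]
```

The O(n²) scan above is fine for thousands of surrogate-evaluated designs; with sub-millisecond surrogate inference, enumerating candidates is far cheaper than the FEM simulations it replaces.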

From Post-Nuclear Chill to Rebirth: Investing in Biotech for the Year 2026

In this episode of Denatured, Jennifer C. Smith-Parker speaks to Maha Katabi, general partner at Sofinnova Investments, and Andrew Lam, managing director and head of Biotech Private Equity at Ally Bridge Group, about how M&A dynamics, dealmaking, and global partnerships are reshaping portfolio valuations and paths to growth in 2026.