Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who’s next? It’s time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants (ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS) depending on the National Regulatory Authority (NRA) Maturity Level (ML) of the regulatory requirements for healthcare products. Find more details…

EMA and FDA Establish Unified Guidelines for Artificial Intelligence in Medical Development

EMA and the U.S. Food and Drug Administration (FDA) have jointly identified ten principles for good artificial intelligence (AI) practice in the medicines lifecycle. “The guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation in the field of novel medical technologies.” The use of AI technologies across the medicines lifecycle has increased significantly in recent years. As emphasised in the European Commission’s Biotech Act proposal, AI holds…

CDER SBIA On-Demand Learning Resource Hub

FDA’s CDER Small Business and Industry Assistance (SBIA) has made its YouTube learning library available – hundreds of recordings are now readily accessible.

Innovative Machine Learning System Tracks Patient Discomfort Levels Throughout Surgical Procedures

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

In the operating room, patients undergoing procedures with local anesthesia, while still conscious, may have difficulty expressing their levels of pain. Some, such as infants or people with dementia, may not be able to communicate these feelings at all. In the search for a better way to monitor patients’ pain, a team of researchers has developed a contactless method that analyzes a combination of patients’ heart rate data and facial expressions to estimate the pain they’re feeling. The approach is described in a study published 14 November in the IEEE Open Journal of Engineering in Medicine and Biology.

Bianca Reichard, a researcher at the Institute for Applied Informatics in Leipzig, Germany, notes that camera-based pain monitoring sidesteps the need for patients to wear sensors with wires, such as ECG electrodes and blood pressure cuffs, which could interfere with the delivery of medical care.

To create their contactless approach, the researchers created a machine-learning algorithm capable of analyzing aspects of pain that can be detected visually by a camera. First, the algorithm analyzes the nuances of a person’s facial expressions to estimate their pain levels.
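The article does not say how the algorithm scores facial expressions, but a widely used facial-expression pain metric in the research literature is the Prkachin-Solomon Pain Intensity (PSPI), computed from facial action unit (AU) intensities. As an illustrative sketch only (the study's actual facial model is not specified):

```python
def pspi_score(au: dict) -> float:
    """Prkachin-Solomon Pain Intensity from facial action unit (AU)
    intensities (each typically rated 0-5 by an AU detector).

    PSPI = AU4 (brow lowering)
         + max(AU6, AU7)   (orbital tightening)
         + max(AU9, AU10)  (levator contraction)
         + AU43            (eye closure)
    """
    return (au["AU4"]
            + max(au["AU6"], au["AU7"])
            + max(au["AU9"], au["AU10"])
            + au["AU43"])
```

In practice an AU detector (e.g., one built on facial landmarks) would supply the intensities frame by frame, and the per-frame scores would be aggregated over a time window.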

The system also uses heart rate data obtained via a technique called remote photoplethysmography (rPPG), which involves shining a light on a person’s skin. The amount of light reflected back can be used to detect changes in blood volume within the person’s vessels. The researchers initially considered 15 different heart-rate-variability parameters measured by rPPG for inclusion in their model and selected the top seven that are statistically most relevant to pain prediction, such as heart rate maximums, minimums, and intervals.
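A rough sketch of this heart-rate-variability (HRV) step, assuming inter-beat intervals have already been extracted from the rPPG signal. The feature names and the correlation-based ranking are illustrative assumptions, not the paper's exact 15 parameters or its selection criterion:

```python
import statistics

def hrv_features(ibi_ms: list[float]) -> dict:
    """A few common HRV features from inter-beat intervals (milliseconds),
    as could be derived from an rPPG signal."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return {
        "mean_ibi": statistics.mean(ibi_ms),
        "sdnn": statistics.stdev(ibi_ms),                          # overall variability
        "rmssd": (sum(d * d for d in diffs) / len(diffs)) ** 0.5,  # beat-to-beat variability
        "max_hr": 60000 / min(ibi_ms),                             # beats per minute
        "min_hr": 60000 / max(ibi_ms),
    }

def _pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def top_k_by_correlation(feature_rows: list[dict], labels: list[float], k: int) -> list[str]:
    """Rank features by |Pearson r| against pain labels and keep the top k."""
    names = list(feature_rows[0].keys())
    def strength(name):
        return abs(_pearson([row[name] for row in feature_rows], labels))
    return sorted(names, key=strength, reverse=True)[:k]
```

The researchers' actual selection kept 7 of 15 candidate parameters; this sketch shows the general shape of such a filter-style feature selection.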

Pain-Prediction Model Training Datasets

The team used two different datasets to train and test their pain-prediction model. One is a well-established and widely used database that measures pain called the BioVid Heat Pain Database. Researchers created this dataset in 2013 through experiments in which thermodes induced incremental, measurable temperature increases on individuals’ skin. The researchers then captured the participants’ physical responses to the corresponding pain that they felt.

The second dataset was developed by the researchers for this new work. Twenty-nine patients undergoing heart procedures involving insertion of a catheter were surveyed about their pain levels at five-minute intervals.

Importantly, most other pain-prediction algorithms have been trained on very short video clips, but Reichard and her team specifically used longer training videos (ranging from 30 minutes to 3 hours) of realistic surgery scenarios. For instance, the training videos included scenarios where the lighting was not ideal or the patient’s face was at times partially obscured from the camera. “This reflects a more realistic clinical situation compared to laboratory datasets,” Reichard explains.

Tests of their model show that it has a pain-prediction accuracy of about 45 percent. Reichard says she is surprised that the model is so accurate, given the number of disruptions that occurred throughout the raw video footage, such as a patient moving on the operating table or changes in the camera angle. While many previously developed pain-prediction models can achieve higher accuracies, those were trained using short video clips that are “ideal” with no visual obstructions. Instead, this research team used less-than-ideal—but more realistic—video footage to train their model.

What’s more, Reichard notes that the team used a fairly simple statistical machine-learning model. “Using more complex approaches, for example, based on neural networks, would most likely further improve performance,” she says.
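The article does not name the "fairly simple statistical machine-learning model" the team used. As a stand-in illustration of how such a model could fuse a facial-expression score with an HRV feature, here is a minimal logistic-regression classifier trained by stochastic gradient descent (all data and feature choices below are hypothetical):

```python
import math

def train_logistic(X: list[list[float]], y: list[int],
                   lr: float = 0.5, epochs: int = 500):
    """Fit weights and bias of a logistic-regression classifier by SGD.
    X rows are fused feature vectors, e.g. [facial_score, hrv_feature];
    y holds binary pain labels (0 = no pain, 1 = pain)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted probability of pain
            g = p - yi                       # gradient of the log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w: list[float], b: float, x: list[float]) -> bool:
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5
```

A neural-network replacement, as Reichard suggests, would swap this linear decision function for a learned nonlinear one while keeping the same fused-feature input.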

Reichard says she finds this type of research—which could support both patients and medical staff—meaningful. In future work, she plans to develop similar contactless systems that use radar to measure patients’ vital signs in medical settings.