Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

The EU, China, and more are now in the crosshairs. Who’s next? It’s time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants (ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS) according to the National Regulatory Authority (NRA) Maturity Level (ML) of regulatory requirements for healthcare products. Find more details…

Could AI Search Lead to a Catastrophe in Medical Misinformation?

Last month, when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search; it returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch, users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the claim that a dog has played in the NBA.

Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory.

While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response that was drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews’ responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.

So it seems like AI Overviews is not following that same policy, or at least that’s how it appears from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing—a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists—are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating—the fact that it’s pointing to URLs is supposed to be a guardrail, because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it also puts a fair bit of onus on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been such a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams that are tasked with checking whether bad results are being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.

Skin-Deep Monitoring: The Role of Microneedle Glucose Sensors in Continuous Health Tracking

For people with diabetes, glucose monitors are a valuable tool for tracking their blood sugar. The current generation of these biosensors detects glucose levels with thin, metallic filaments inserted in subcutaneous tissue, the deepest layer of the skin, where most body fat is stored.

Medical technology company Biolinq is developing a new type of glucose sensor that doesn’t go deeper than the dermis, the middle layer of skin that sits above the subcutaneous tissue. The company’s “intradermal” biosensors take advantage of metabolic activity in shallower layers of skin, using an array of electrochemical microsensors to measure glucose—and other chemicals in the body—just beneath the skin’s surface.

Biolinq just concluded a pivotal clinical trial earlier this month, according to CEO Rich Yang, and the company plans to submit the device to the U.S. Food and Drug Administration for approval at the end of the year. In April, Biolinq received US $58 million in funding to support the completion of its clinical trials and subsequent submission to the FDA.

Biolinq’s glucose sensor is “the world’s first intradermal sensor that is completely autonomous,” Yang says. While other glucose monitors require a smartphone or other reader to collect and display the data, Biolinq’s includes an LED display to show when the user’s glucose is within a healthy range (indicated by a blue light) or above that range (yellow light). “We’re providing real-time feedback for people who otherwise could not see or feel their symptoms,” Yang says. (In addition to this real-time feedback, the user can also load long-term data onto a smartphone by placing it next to the sensor, like Abbott’s FreeStyle Libre, another glucose monitor.)

More than 2,000 microsensor components are etched onto each 200-millimeter silicon wafer used to manufacture the biosensors. Source: Biolinq

Biolinq’s hope is that its approach could lead to sustainable changes in behavior on the part of the individual using the sensor. The device is intentionally placed on the upper forearm to be in plain sight, so users can receive immediate feedback without manually checking a reader. “If you drink a glass of orange juice or soda, you’ll see this go from blue to yellow,” Yang explains. That could help users better understand how their actions—such as drinking a sugary beverage—change their blood sugar and take steps to reduce that effect.

Biolinq’s device consists of an array of microneedles etched onto a silicon wafer using semiconductor manufacturing. (Other glucose sensors’ filaments are inserted with an introducer needle.) Each chip has a small 2-millimeter by 2-millimeter footprint and contains seven independent microneedles, which are coated with membranes through a process similar to electroplating in jewelry making. One challenge the industry has faced is ensuring that microsensors do not break at this small scale. The key engineering insight Biolinq introduced, Yang says, was using semiconductor manufacturing to build the biosensors. Importantly, he says, silicon “is harder than titanium and steel at this scale.”

Miniaturization allows for sensing closer to the surface of the skin, where there is a high level of metabolic activity. That makes the shallow depth ideal for monitoring glucose, as well as other important biomarkers, Yang says. Due to this versatility, combined with the use of a sensor array, the device in development can also monitor lactate, an important indicator of muscle fatigue. With the addition of a third data point, ketones (which are produced when the body burns fat), Biolinq aims to “essentially have a metabolic panel on one chip,” Yang says.

Using an array of sensors also creates redundancy, improving the reliability of the device if one sensor fails or becomes less accurate. Glucose monitors tend to drift over the course of wear, but with multiple sensors, Yang says that drift can be better managed.
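
To make the redundancy idea concrete, here is a minimal Python sketch of one way readings from multiple microsensors could be combined. The median approach and the sample values are illustrative assumptions for this article, not Biolinq’s actual algorithm.

```python
import numpy as np

def combined_reading(sensors_mg_dl):
    """One glucose value per microsensor; NaN marks a failed channel.
    A median tolerates a single drifting or dead sensor, which is the
    redundancy benefit described above (illustrative, not Biolinq's)."""
    readings = np.asarray(sensors_mg_dl, dtype=float)
    return float(np.nanmedian(readings))

# Five healthy channels, one drifting high, one failed:
print(combined_reading([98, 101, 99, 250, 100, float("nan"), 102]))  # -> 100.5
```

The drifting 250 mg/dL channel barely moves the combined estimate, which is the point of carrying seven independent needles on one chip.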

One downside to the autonomous display is the drain on battery life, Yang says. The battery life limits the biosensor’s wear time to 5 days in the first-generation device. Biolinq aims to extend that to 10 days of continuous wear in its second generation, which is currently in development, by using a custom chip optimized for low power consumption rather than off-the-shelf components.

The company has collected nearly 1 million hours of human performance data, along with comparators including commercial glucose monitors and venous blood samples, Yang says. Biolinq aims to gain FDA approval first for use in people with type 2 diabetes not using insulin and later expand to other medical indications.

This article appears in the August 2024 print issue as “Glucose Monitor Takes Page From Chipmaking.”

Silencing Mental Clatter: Embrace Brain Noise Cancellation Techniques

Elemind, a 5-year-old startup based in Cambridge, Mass., today unveiled a US $349 wearable for neuromodulation, the company’s first product. According to cofounder and CEO Meredith Perry, the technology tracks the oscillation of brain waves using electroencephalography (EEG) sensors that detect the electrical activity of the brain and then influence those oscillations using bursts of sound delivered via bone conduction.

Elemind’s first application for this wearable aims to suppress alpha waves to help induce sleep. There are other wearables on the market that monitor brain waves and, through biofeedback, encourage users to actively modify their alpha patterns. Elemind’s headband appears to be the first device to use sound to directly influence the brain waves of a passive user.

In a clinical trial, says Perry [no relation to author], 76 percent of subjects fell asleep more quickly. Those who did see a difference averaged 48 percent less time to progress from awake to asleep. The results were similar to those of comparable trials of pharmaceutical sleep aids, Perry indicated.

“For me,” Perry said, “it cuts through my rumination, quiets my thinking. It’s like noise cancellation for the brain.”

I briefly tested Elemind’s headband in May. I found it comfortable, with a thick cushioned band that sits across the forehead connected to a stretchy elastic loop to keep it in place. In the band are multiple EEG electrodes, a processor, a three-axis accelerometer, a rechargeable lithium-polymer battery, and custom electronics that gather the brain’s electrical signals, estimate their phase, and generate pink noise through a bone-conduction speaker. The whole thing weighs about 60 grams—about as much as a small kiwi fruit.

My test conditions were far from optimal for sleep: early afternoon, a fairly bright conference room, a beanbag chair as bed, and a vent blowing. And my test lasted just 4 minutes. I can say that I didn’t find the little bursts of pink noise (white noise without the higher frequencies) unpleasant. And since I often wear an eye mask, feeling fabric on my face wasn’t disturbing. It wasn’t the time or place to try for sound sleep, but I—and the others in the room—noted that after 2 minutes I was yawning like crazy.

How Elemind tweaks brain waves

What was going on in my brain? Briefly, different brain states are associated with different frequencies of waves. Someone who is relaxed with eyes closed but not asleep produces alpha waves at around 10 hertz. As they drift off to sleep, the alpha waves are supplanted by theta waves, at around 5 Hz. Eventually, the delta waves of deep sleep show up at around 1 Hz.

Ryan Neely, Elemind’s vice president of science and research, explains: “As soon as you put the headband on, the EEG system starts running. It uses straightforward signal processing with bandpass filtering to isolate the activity in the 8- to 12-Hz frequency range—the alpha band.”

“Then,” Neely continues, “our algorithm looks at the filtered signal to identify the phase of each oscillation and determines when to generate bursts of pink noise.”

To help a user fall asleep more quickly [top], bursts of pink noise are timed to generate a brain response that is out of phase with alpha waves and so suppresses them. To enhance deep sleep [bottom], the pink noise is timed to generate a brain response that is in phase with delta waves. Source: Elemind

These auditory stimuli, he explains, create ripples in the waves coming from the brain. Elemind’s system tries to align these ripples with a particular phase in the wave. Because there is a gap between the stimulus and the evoked response, Elemind tested its system on 21 people and calculated the average delay, taking that into account when determining when to trigger a sound.

To induce sleep, Elemind’s headband targets the trough in the alpha wave, the point at which the brain is most excitable, Neely says.
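
For readers who want to see the signal chain Neely describes laid out end to end, here is a minimal offline sketch in Python: bandpass to the alpha band, estimate instantaneous phase, and schedule bursts so the evoked response lands on the trough. The sampling rate, filter order, and 60-millisecond delay are illustrative assumptions, and a real device would need a causal, real-time phase estimator rather than this offline Hilbert transform.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 250                # EEG sampling rate in Hz (assumed)
EVOKED_DELAY_S = 0.060  # average stimulus-to-response lag (placeholder)

def alpha_phase(eeg):
    """Isolate 8-12 Hz alpha activity and estimate instantaneous
    phase from the analytic (Hilbert) signal."""
    sos = butter(4, [8, 12], btype="bandpass", fs=FS, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, eeg)))

def burst_times(eeg):
    """Times (s) to emit pink-noise bursts so the evoked response lands
    on the alpha trough, shifted earlier by the average response delay.
    For a narrowband signal the phase wraps from +pi to -pi at the trough."""
    phase = alpha_phase(eeg)
    troughs = np.where(np.diff(phase) < -np.pi)[0]
    return troughs / FS - EVOKED_DELAY_S

# Toy usage: a 10 Hz "alpha" oscillation buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(burst_times(eeg)[:5])
```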

“You can think of the alpha rhythm as a gate for communication between different areas of the brain,” he says. “By interfering with that communication, that coordination between different brain areas, you can disrupt patterns, like the ruminations that keep you awake.”

With these alpha waves suppressed, Neely says, the slower oscillations, like the theta waves of light sleep, take over.

Elemind doesn’t plan to stop there. The company plans to add an algorithm that addresses delta waves, the low-frequency 0.5- to 2-Hz waves characteristic of deep sleep. Here, Elemind’s technology will attempt to amplify this pattern with the intent of improving sleep quality.

Is this safe? Yes, Neely says, because auditory stimulation is self-limiting. “Your brain waves have a natural space they can occupy,” he explains, “and this stimulation just moves it within that natural space, unlike deep-brain stimulation, which can move the brain activity outside natural parameters.”

Going beyond sleep to sedation, memory, and mental health

Applications may eventually go beyond inducing and enhancing sleep. Researchers at the University of Washington and McGill University have completed a clinical study to determine if Elemind’s technology can be used to increase the pain threshold of subjects undergoing sedation. The results are being prepared for peer review.

Elemind is also working with a team involving researchers at McGill and the Leuven Brain Institute to determine if the technology can enhance memory consolidation in deep sleep and perhaps have some usefulness for people with mild cognitive impairment and other memory disorders.

Neely would love to see more applications investigated in the future.

“Inverse alpha stimulation [enhancing instead of suppressing the signal] could increase arousal,” he says. “That’s something I’d love to look into. And looking into mental-health treatment would be interesting, because phase coupling between the different brain regions appears to be an important factor in depression and anxiety disorders.”

Perry, who previously founded the wireless power startup UBeam, cofounded Elemind with four university professors with expertise in neuroscience, optogenetics, biomedical engineering, and artificial intelligence. The company has $12 million in funding to date and currently has 13 employees.

Preorders at $349 start today for beta units, and Elemind expects to start general sales later this year. The company will offer customers an optional membership at $7 to $13 monthly that will allow cloud storage of sleep data and access to new apps as they are released.

Bionic Eye Receives Fresh Opportunities for Advancement

The future of an innovative retinal implant and dozens of its users just got brighter, after Science, a bioelectronics startup run by Neuralink’s cofounder, Max Hodak, acquired Pixium’s technology at the last minute.

Pixium Vision, whose Prima system to tackle vision loss is implanted in 47 people across Europe and the United States, was in danger of disappearing completely until Science stepped in to buy the French company’s assets in April, for an undisclosed amount.

Pixium has been developing Prima for a decade, building on work by Daniel Palanker, a professor of ophthalmology at Stanford University. The 2-by-2-millimeter square implant is surgically implanted under the retina, where it turns infrared data from camera-equipped glasses into pulses of electricity. These replace signals generated by photoreceptor rods and cones, which are damaged in people suffering from age-related macular degeneration (AMD).

Early feasibility studies in the E.U. and the United States suggested Prima was safe and potentially effective, but Pixium ran out of money last November before the final results of a larger, multiyear pivotal trial in Europe.

With the financial and legal clock ticking down, the trial data finally arrived in March this year. “And the results from that were just pretty stunning,” says Max Hodak, Science’s founder and CEO, in his first interview since the acquisition.

Although neither Pixium nor Science has yet released the full dataset, Hodak shared with IEEE Spectrum videos of three people using Prima, each of them previously unable to read or recognize faces due to AMD. The videos show them slowly but fluently reading a hardback book, filling in a crossword puzzle, and playing cards.

“This is legit ‘form vision’ that I don’t think any device has ever done,” says Hodak. Form vision is the ability to recognize visual elements as parts of a larger object. “It’s this type of data that convinced us. And from there we were like, this should get to patients.”

As well as buying the Prima technology, Hodak says that Science will hire the majority of Pixium’s 35 engineering and regulatory staff, in a push to get the technology approved in Europe as quickly as possible.

The Prima implant receives visual data and is powered by near-infrared signals beamed from special spectacles. Source: Pixium

Another priority is supporting existing Prima patients, says Lloyd Diamond, Pixium’s outgoing CEO. “It’s very important to us to avoid another debacle like Argus II,” he says, referring to another retinal implant whose manufacturer went out of business in 2022, leaving users literally in the dark.

Diamond is excited to be working with Science, which is based in Silicon Valley with a chip foundry in North Carolina. “They have a very deep workforce in software development, in electronic development, and in biologic research,” he says. “And there are probably only a few foundries in the world that could manufacture an implant such as ours. Being able to internalize part of that process is a very big advantage.”

Hodak hopes that a first-generation Prima product could quickly be upgraded with a wide-angle camera and the latest electronics. “We think that there’s one straight shrink, where we’ll move to smaller pixels and get higher visual acuity,” he says. “After that, we’ll probably move to a 3D electrode design, where we’ll be able to get closer to single-cell resolution.” That could deliver even sharper artificial vision.

In parallel, Science will continue Pixium’s discussions with the FDA in the United States about advancing a clinical trial there.

The success of Prima is critical, says Hodak, who started Science in 2021 after leaving Neuralink, a brain-computer interface company he cofounded with Elon Musk. “Elon can do whatever he wants for as long as he wants, but we need something that can finance future development,” he says. “Prima is big enough in terms of impact to patients and society that it is capable of helping us finance the rest of our ambitions.”

These include a next-generation Prima device, which Hodak says he is already talking about with Palanker, and a second visual prosthesis, currently called the Science Eye. This will tackle retinitis pigmentosa, a condition affecting peripheral vision—the same condition targeted by Second Sight’s ill-fated Argus II device.

“The Argus II just didn’t work that well,” says Hodak. “In the end, it was a pure bridge to nowhere.” Like the Argus II and Prima, the Science Eye relies on camera glasses and an implant, but with the addition of optogenetic therapy. This uses a genetically engineered virus to deliver a gene to specific optic nerve cells in the retina, making them light-sensitive at a particular wavelength. A tiny implanted display with a resolution sharper than an iPhone screen then enables fine control over the newly sensitized cells.

That system is still undergoing animal trials, but Hodak is almost ready to pull the trigger on its first human clinical studies, likely in Australia and New Zealand.

“In the long term, I think precision optogenetics will be more powerful than Prima’s electrical stimulation,” he says. “But we’re agnostic about which approach works to restore vision.”

One thing he does believe vehemently, unlike Musk, is that the retina is the best place to put an implant. Neuralink and Cortigent (the successor company of Second Sight) are both working on prosthetics that target the brain’s visual cortex.

“There’s a lot that you can do in cortex, but vision is not one of them,” says Hodak. He thinks the visual cortex is too complex, too distributed, and too difficult to access surgically to be useful.

“As long as the optic nerve is intact, the retina is the ideal place to think about restoring vision to the brain,” he says. “This is all a question of effect size. If someone has been in darkness for a decade, with no light, no perception, and you can give them any type of visual stimulus, they’re going to be into it. The Pixium patients can intuitively read, and that was really what convinced us that this was worth picking up and pursuing.”

Revitalizing Noninvasive Spinal Stimulation with Recent Advancements

In 2010, Melanie Reid fell off a horse and was paralyzed below the shoulders.

“You think, ‘I am where I am; nothing’s going to change,’ ” she said. But many years after her accident, she participated in a medical trial of a new, noninvasive rehabilitative device that can deliver more electrical stimulation than similar devices without harming the user. For Reid, use of the device has led to small improvements in her ability to use her hands, and meaningful changes to her daily life.

“Everyone thinks with spinal injury all you want to do is be able to walk again, but if you’re a tetraplegic or quadriplegic, what matters most is working hands,” said Reid, a columnist for The Times, as part of a press briefing. “There’s no miracles in spinal injury, but tiny gains can be life-changing.”

For the study, Reid used a new noninvasive therapeutic device produced by Onward Medical. The device, ARC-EX (“EX” indicating “external”), uses electrodes placed along the spine near the site of injury—in the case of quadriplegia, the neck—to promote nerve activity and growth during physical-therapy exercises. The goal is to increase not only motor function while the device is attached and operating but also the long-term effectiveness of rehabilitation drills. A study focused on arm and hand abilities in patients with quadriplegia was published 20 May in Nature Medicine.

Researchers have been investigating electrical stimulation as a treatment for spinal cord injury for roughly 40 years, but “one of the innovations in this system is using a very high-frequency waveform,” said coauthor Chet Moritz, a neurotechnologist at the University of Washington. The ARC-EX uses a 10-kilohertz carrier frequency overlay, which researchers think may numb the skin beneath the electrode, allowing patients to tolerate five times as much amperage as from similar exploratory devices. For Reid, this manifested as a noticeable “buzz,” which felt strange, but not painful.
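
As a rough illustration of what a carrier “overlay” means in practice, the Python sketch below builds a burst-modulated stimulus riding on a 10-kilohertz carrier. The burst rate, amplitudes, and sample rate here are assumptions for demonstration, not Onward Medical’s published parameters.

```python
import numpy as np

FS = 100_000         # waveform sample rate in Hz (assumed)
CARRIER_HZ = 10_000  # the 10 kHz carrier frequency reported above
BURST_HZ = 30        # low-frequency therapeutic burst envelope (assumed)

t = np.arange(0, 0.1, 1 / FS)                  # 100 ms of signal
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)   # fast carrier thought to numb skin
gate = np.sin(2 * np.pi * BURST_HZ * t) > 0    # on/off burst pattern
waveform = np.where(gate, carrier, 0.0)        # carrier rides inside each burst
```

The high-frequency component is what patients like Reid perceive as a “buzz,” while the slower burst envelope carries the therapeutic stimulation.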

The study included 60 participants across 14 sites around the world. Each participant undertook two months of standard physical therapy, followed by two months of therapy combined with the ARC-EX. Although aspects of treatment such as electrode placement were fairly standardized, the current amplitude, and sometimes the individual exercises, were personalized to each patient, said Moritz.


The ARC-EX uses a 10-kilohertz current to provide stronger stimulation for people with spinal cord injuries.

Over 70 percent of patients showed an increase in at least one measurement of both strength and function between standard therapy and ARC-EX therapy. These changes also meant that 87 percent of study participants noted some improvement in quality of life in a followup questionnaire. No major safety concerns tied to the device or rehabilitation process were reported.

Onward will seek approval from the U.S. Food and Drug Administration for the device by the end of 2024, said study coauthor Grégoire Courtine, a neuroscientist and cofounder of Onward Medical. Onward is also working on an implantable spinal stimulator called ARC-IM; other prosthetic approaches, such as robotic exoskeletons, are being investigated elsewhere. ARC-EX was presented as a potentially important cost-accessible, noninvasive treatment option, especially in the critical window for recovery a year or so after a spinal cord injury. However, the price to insurers or patients of a commercial version is still subject to negotiation.

The World Health Organization says there are over 15 million people with spinal cord injuries. Moritz estimates that around 90 percent of patients, even many with no movement in their hands, could benefit from the new therapy.

Dimitry Sayenko, who studies spinal cord injury recovery at Houston Methodist and was not involved in the study, praised the relatively large sample size and clear concern for patient safety. But he stresses that the mechanisms underlying spinal stimulation are not well understood. “So far it’s literally plug and play,” said Sayenko. “We don’t understand what’s happening under the electrodes for sure—we can only indirectly assume or speculate.”

The new study supports the idea that noninvasive spinal cord stimulation can provide some benefit to some people but was not designed to help predict who will benefit, precisely how people will benefit, or how to optimize care. The study authors acknowledged the limited scope and need for further research, which might help turn currently “tiny gains” into what Sayenko calls “larger, more dramatic, robust effects.”

Portable Psychiatry: Access a Therapist Anytime with Innovative Apps

Nearly every day since she was a child, Alex Leow, a psychiatrist and computer scientist at the University of Illinois Chicago, has played the piano. Some days she plays well, and other days her tempo lags and her fingers hit the wrong keys. Over the years, she noticed a pattern: How well she plays depends on her mood. A bad mood or lack of sleep almost always leads to sluggish, mistake-prone music.

In 2015, Leow realized that a similar pattern might be true for typing. She wondered if she could help people with psychiatric conditions track their moods by collecting data about their typing style from their phones. She decided to turn her idea into an app.

After conducting a pilot study, in 2018 Leow launched BiAffect, a research app that aims to understand mood-related symptoms of bipolar disorder through keyboard dynamics and sensor data from users’ smartphones. Now in use by more than 2,700 people who have volunteered their data to the project, the app tracks typing speed and accuracy by swapping the phone’s onscreen keyboard with its own nearly identical one.

The software then generates feedback for users, such as a graph displaying hourly keyboard activity. Researchers get access to the donated data from users’ phones, which they use to develop and test machine learning algorithms that interpret data for clinical use. One of the things Leow’s team has observed: When people are manic—a state of being overly excited that accompanies bipolar disorder—they type “ferociously fast,” says Leow.

Compared to a healthy user [top], a person experiencing symptoms of bipolar disorder [middle] or depression [bottom] may use their phone more than usual and late at night. BiAffect measures phone usage and orientation to help track those symptoms. Source: BiAffect

BiAffect is one of the few mental-health apps that take a passive approach to collecting data from a phone to make inferences about users’ mental states. (Leow suspects that fewer than a dozen are currently available to consumers.) These apps run in the background on smartphones, collecting different sets of data not only on typing but also on the user’s movements, screen time, call and text frequency, and GPS location to monitor social activity and sleep patterns. If an app detects an abrupt change in behavior, indicating a potentially hazardous shift in mental state, it could be set up to alert the user, a caretaker, or a physician.

Such apps can’t legally claim to treat or diagnose disease, at least in the United States. Nevertheless, many researchers and people with mental illness have been using them as tools to track signs of depression, schizophrenia, anxiety, and bipolar disorder. “There’s tremendous, immediate clinical value in helping people feel better today by integrating these signals into mental-health care,” says John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, in Boston. Globally, one in 8 people live with a mental illness, including 40 million with bipolar disorder.

These apps differ from most of the more than 10,000 mental-health and mood apps available, which typically ask users to actively log how they’re feeling, help users connect to providers, or encourage mindfulness. The popular apps Daylio and Moodnotes, for example, require journaling or rating symptoms. This approach requires more of the user’s time and may make these apps less appealing for long-term use. A 2019 study found that among 22 mood-tracking apps, the median user-retention rate was just 6.1 percent at 30 days of use.

But despite years of research on passive mental-health apps, their success is far from guaranteed. App developers are trying to avoid the pitfalls of previous smartphone-psychiatry startups, some of which oversold their capabilities before validating their technologies. For example, Mindstrong was an early startup with an app that tracked taps, swipes, and keystrokes to identify digital biomarkers of cognitive function. The company raised US $160 million in funding from investors, including $100 million in 2020 alone, and went bankrupt in February 2023.

Mindstrong may have folded because the company was operating on a different timeline from the research, according to an analysis by the health-care news website Stat. The slow, methodical pace of science did not match the startup’s need to return profits to its investors quickly, the report found. Mindstrong also struggled to figure out the marketplace and find enough customers willing to pay for the service. “We were first out of the blocks trying to figure this out,” says Thomas Insel, a psychiatrist who cofounded Mindstrong.

Now that the field has completed a “hype cycle,” Torous says, app developers are focused on conducting the research needed to prove their apps can actually help people. “We’re beginning to put the burden of proof more on those developers and startups, as well as academic teams,” he says. Passive mental-health apps need to prove they can reliably parse the data they’re collecting, while also addressing serious privacy concerns.

Passive sensing catches mood swings early

Mood Sensors

Seven metrics apps use to make inferences about your mood

Keyboard dynamics: Typing speed and accuracy can indicate a lot about a person’s mood. For example, people who are manic often type extremely fast.

Accelerometer: This sensor tracks how the user is oriented and moving. Lying in bed would suggest a different mood than going for a run.

Calls and texts: The frequency of text messages and phone conversations signifies a person’s social isolation or activity, which indicates a certain mood.

GPS location: Travel habits signal a person’s activity level and routine, which offer clues about mood. For example, a person experiencing depression may spend more time at home.

Mic and voice: Mood can affect how a person speaks. Microphone-based sensing tracks the rhythm and inflection of a person’s voice.

Sleep: Changes in sleep patterns signify a change in mood. Insomnia is a common symptom of bipolar disorder and can trigger or worsen mood disturbances.

Screen time: An increase in the amount of time a person spends on a phone can be a sign of depressive symptoms and can interfere with sleep.

A crucial component of managing psychiatric illness is tracking changes in mental states that can lead to more severe episodes of the disease. Bipolar disorder, for example, causes intense swings in mood, from extreme highs during periods of mania to extreme lows during periods of depression. Between 30 and 50 percent of people with bipolar disorder will attempt suicide at least once in their lives. Catching early signs of a mood swing can enable people to take countermeasures or seek help before things get bad.

But detecting those changes early is hard, especially for people with mental illness. Observations by other people, such as family members, can be subjective, and doctor and counselor sessions are too infrequent.

That’s where apps come in. Algorithms can be trained to spot subtle deviations from a person’s normal routine that might indicate a change in mood—an objective measure based on data, like a diabetic tracking blood sugar. “The ability to think objectively about my own thinking is really key,” says retired U.S. major general Gregg Martin, who has bipolar disorder and is an advisor for BiAffect.
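
As a rough illustration of this kind of objective tracking, the sketch below flags days that deviate sharply from a user’s own trailing baseline. The metric, window, and threshold are assumptions for demonstration, not any particular app’s algorithm.

```python
import numpy as np

def flag_deviations(daily, window=14, z_thresh=3.0):
    """daily: one number per day (e.g., hours of sleep or screen time).
    Returns indices of days that deviate sharply from the person's own
    trailing baseline (window and threshold are illustrative)."""
    daily = np.asarray(daily, dtype=float)
    flagged = []
    for i in range(window, daily.size):
        base = daily[i - window:i]
        z = (daily[i] - base.mean()) / (base.std() + 1e-9)
        if abs(z) > z_thresh:
            flagged.append(i)
    return flagged

sleep = [7.2, 6.8, 7.0, 7.4, 7.1, 6.9, 7.3] * 2 + [3.0]  # one sudden short night
print(flag_deviations(sleep))  # -> [14], the anomalous day
```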

The data from passive sensing apps could also be useful to doctors who want to see objective data on their patients in between office visits, or for people transitioning from inpatient to outpatient settings. These apps are “providing a service that doesn’t exist,” says Colin Depp, a clinical psychologist and professor at the University of California, San Diego. Providers can’t observe their patients around the clock, he says, but smartphone data can help close the gap.

Depp and his team have developed an app that uses GPS data and microphone-based sensing to determine the frequency of conversations and make inferences about a person’s social interactions and isolation. The app also tracks “location entropy,” a metric of how much a user moves around outside of routine locations. When someone is depressed and mostly stays home, location entropy decreases.
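
A minimal sketch of such an entropy metric, assuming places are approximated by rounded GPS coordinates (a real system would use proper place clustering):

```python
import math
from collections import Counter

def location_entropy(coords, decimals=3):
    """coords: (lat, lon) samples taken at a fixed interval. Returns
    Shannon entropy in bits over time spent at each visited place;
    0 means all time was spent in one location."""
    counts = Counter((round(lat, decimals), round(lon, decimals))
                     for lat, lon in coords)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

home, cafe = (41.878, -87.636), (41.882, -87.641)
print(location_entropy([home] * 24))                # 0.0 bits: never left home
print(location_entropy([home] * 12 + [cafe] * 12))  # 1.0 bit: time split evenly
```

A depressive stretch spent mostly at home would push the score toward zero, matching the pattern Depp describes.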

Depp’s team initially developed the app, called CBT2go, as a way to test the effectiveness of cognitive behavioral therapy in between therapy sessions. The app can now intervene in real time with people experiencing depressive or psychotic symptoms. This feature helps people identify when they feel lonely or agitated so they can apply coping skills they’ve learned in therapy. “When people walk out of the therapist’s office or log off, then they kind of forget all that,” Depp says.

Another passive mental-health-app developer, Ellipsis Health in San Francisco, uses software that takes voice samples collected during telehealth calls to gauge a person’s level of depression, anxiety, and stress symptoms. For each set of symptoms, deep-learning models analyze the person’s words, rhythms, and inflections to generate a score. The scores indicate the severity of the person’s mental distress, and are based on the same scales used in standard clinical evaluations, says Michael Aratow, cofounder and chief medical officer at Ellipsis.

Aratow says the software works for people of all demographics, without needing to first capture baseline measures of an individual’s voice and speech patterns. “We’ve trained the models in the most difficult use cases,” he says. The company offers its platform, including an app for collecting the voice data, through health-care providers, health systems, and employers; it’s not directly available to consumers.

In the case of BiAffect, the app can be downloaded for free by the public. Leow and her team are using the app as a research tool in clinical trials sponsored by the U.S. National Institutes of Health. These studies aim to validate whether the app can reliably monitor mood disorders, and determine whether it could also track suicide risk in menstruating women and cognition in people with multiple sclerosis.

BiAffect’s software tracks behaviors like hitting the backspace key frequently, which suggests more errors, and an increase in typing “@” symbols and hashtags, which suggest more social media use. The app combines this typing data with information from the phone’s accelerometer to determine how the user is oriented and moving—for example, whether the user is likely lying down in bed—which yields more clues about mood.
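
Here is a hedged sketch of what such session-level typing features might look like in code. The type names and feature definitions are illustrative assumptions, not BiAffect’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    t: float   # timestamp in seconds
    key: str   # character typed, or "BACKSPACE"

def session_features(keys):
    """Summarize one typing session into the kinds of signals described
    above: speed, correction rate, and social-media symbol use."""
    n = len(keys)
    duration = keys[-1].t - keys[0].t if n > 1 else 0.0
    backspaces = sum(k.key == "BACKSPACE" for k in keys)
    social = sum(k.key in ("@", "#") for k in keys)
    return {
        "keys_per_minute": 60 * n / duration if duration else 0.0,
        "backspace_rate": backspaces / n if n else 0.0,  # proxy for typing errors
        "social_symbol_rate": social / n if n else 0.0,  # proxy for social-media use
    }
```

Features like these, paired with accelerometer-derived posture, are the raw material the project’s machine learning models work from.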

Ellipsis Health analyzes audio captured during telehealth visits to assign scores for depression, anxiety, and stress. Source: Ellipsis Health

The makers of BiAffect and Ellipsis Health don’t claim their apps can treat or diagnose disease. If app developers want to make those claims and sell their product in the United States, they would first have to get regulatory approval from the U.S. Food and Drug Administration. Getting that approval requires rigorous and large-scale clinical trials that most app makers don’t have the resources to conduct.

Digital-health software depends on quality clinical data

The sensing techniques upon which passive apps rely—measuring typing dynamics, movement, voice acoustics, and the like—are well established. But the algorithms used to analyze the data collected by the sensors are still being honed and validated. That process will require considerably more high-quality research among real patient populations.


For example, clinical studies that include control or placebo groups are crucial and have been lacking in the past. Without control groups, companies can say their technology is effective “compared to nothing,” says Torous at Beth Israel.

Torous and his team aim to build software that is backed by this kind of quality evidence. With participants’ consent, their app, called mindLAMP, passively collects data from their screen time and their phone’s GPS and accelerometer for research use. It’s also customizable for different diseases, including schizophrenia and bipolar disorder. “It’s a great starting point. But to bring it into the medical context, there’s a lot of important steps that we’re now in the middle of,” says Torous. Those steps include conducting clinical trials with control groups and testing the technology in different patient populations, he says.

How the data is collected can make a big difference in the quality of the research. For example, the rate of sampling—how often a data point is collected—matters and must be calibrated for the behavior being studied. What’s more, data pulled from real-world environments tends to be “dirty,” with inaccuracies introduced by faulty sensors or inconsistencies in how phone sensors initially process data. It takes more work to make sense of this data, says Casey Bennett, an assistant professor and chair of health informatics at DePaul University, in Chicago, who uses BiAffect data in his research.

One approach to addressing errors is to integrate multiple sources of data to fill in the gaps—like combining accelerometer and typing data. In another approach, the BiAffect team is working to correlate real-world information with cleaner lab data collected in a controlled environment where researchers can more easily tell when errors are introduced.

Who participates in the studies matters too. If participants are limited to a particular geographic area or demographic, it’s unclear whether the results can be applied to the broader population. For example, a night-shift worker will have different activity patterns from those with nine-to-five jobs, and a city dweller may have a different lifestyle from residents of rural areas.

After the research is done, app developers must figure out a way to integrate their products into real-world medical contexts. One looming question is when and how to intervene when a change in mood is detected. These apps should always be used in concert with a professional and not as a replacement for one, says Torous. Otherwise, the app’s assessments could be dangerous and distressing to users, he says.

When mood tracking feels like surveillance

No matter how well these passive mood-tracking apps work, gaining trust from potential users may be the biggest stumbling block. Mood tracking could easily feel like surveillance. That’s particularly true for people with bipolar or psychotic disorders, where paranoia is part of the illness.

Keris Myrick, a mental-health advocate, says she finds passive mental-health apps “both cool and creepy.” Myrick, who is vice president of partnerships and innovation at the mental-health-advocacy organization Inseparable, has used a range of apps to support her mental health as a person with schizophrenia. But when she tested one passive sensing app, she opted to use a dummy phone. “I didn’t feel safe with an app company having access to all of that information on my personal phone,” Myrick says. While she was curious to see if her subjective experience matched the app’s objective measurements, the creepiness factor prevented her from using the app enough to find out.

Beyond users’ perception, maintaining true digital privacy is crucial. “Digital footprints are pretty sticky these days,” says Katie Shilton, an associate professor at the University of Maryland focused on social-data science. It’s important to be transparent about who has access to personal information and what they can do with it, she says.

“Once a diagnosis is established, once you are labeled as something, that can affect algorithms in other places in your life,” Shilton says. She cites the misuse of personal data in the Cambridge Analytica scandal, in which the consulting firm collected information from Facebook to target political advertising. Without strong privacy policies, companies producing mental-health apps could similarly sell user data—and they may be particularly motivated to do so if an app is free to use.

Conversations about regulating mental-health apps have been ongoing for over a decade, but a Wild West–style lack of regulation persists in the United States, says Bennett of DePaul University. For example, there aren’t yet protections in place to keep insurance companies or employers from penalizing users based on data collected. “If there aren’t legal protections, somebody is going to take this technology and use it for nefarious purposes,” he says.

Some of these concerns may be mediated by confining all the analysis to a user’s phone, rather than collecting data in a central repository. But decisions about privacy policies and data structures are still up to individual app developers.

Leow and the BiAffect team are currently working on a new internal version of their app that incorporates natural-language processing and generative AI extensions to analyze users’ speech. The team is considering commercializing this new version in the future, but only following extensive work with industry partners to ensure strict privacy safeguards are in place. “I really see this as something that people could eventually use,” Leow says. But she acknowledges that researchers’ goals don’t always align with the desires of the people who might use these tools. “It is so important to think about what the users actually want.”

This article appears in the July 2024 print issue as “The Shrink in Your Pocket.”

Intriguing “Serpent-shaped” Device Captures Internal Images of Arteries

Neurosurgeon Vitor Mendes Pereira has grown accustomed to treating brain aneurysms with only blurry images for guidance.

Equipped with a rough picture of the labyrinthine network of arteries in the brain, he does his best to insert mesh stents or coils of platinum wire—interventions intended to promote clotting and to seal off a bulging blood vessel.

The results are not always perfect. Without a precise window into the arterial architecture at the aneurysm site, Pereira says that he and other neurovascular specialists occasionally misplace these implants, leaving patients at a heightened risk of stroke, clotting, inflammation, and life-threatening ruptures. But a new fiber-optic imaging probe offers hope for improved outcomes.


According to Pereira’s early clinical experience, the technology—a tiny snake-like device that winds its way through the intricate maze of brain arteries and, using spirals of light, captures high-resolution images from the inside-out—provides an unprecedented level of structural detail that enhances the ability of clinicians to troubleshoot implant placement and better manage disease complications.

“We can see a lot more information that was not accessible before,” says Pereira, director of endovascular research and innovation at St. Michael’s Hospital in Toronto. “This is, for us, an incredible step forward.”

And not just for brain aneurysms. In a report published today in Science Translational Medicine, Pereira and his colleagues describe their first-in-human experience using the platform to guide treatment for 32 people with strokes, artery hardening, and various other conditions arising from aberrant blood vessels in the brain.

Whereas before, with technologies such as CT scans, MRIs, ultrasounds, and x-rays, clinicians had a satellite-like view of the brain’s vascular network, now they have a Google Street View-like perspective, complete with in-depth views of artery walls, plaques, immune cell aggregates, implanted device positions, and more.

“The amount of detail you could get you would never ever see with any other imaging modality,” says Adnan Siddiqui, a neurosurgeon at the University at Buffalo, who was not involved in the research. “This technology holds promise to be able to really transform the way we evaluate success or failure of our procedures, as well as to diagnose complications before they occur.”

A Decade of Innovation

The new fiber-optic probe is flexible enough to snake through the body’s arteries and provide previously unavailable information to surgeons. Source: Pereira et al./Science Translational Medicine

The new imaging platform is the brainchild of Giovanni Ughi, a biomedical engineer at the University of Massachusetts Chan Medical School in Worcester. About a decade ago, he set out to adapt a technique called optical coherence tomography (OCT) for imaging inside the brain’s arteries.

OCT relies on the backscattering of near-infrared light to create cross-sectional images with micrometer-scale spatial resolution. Although OCT had long been used in clinical settings to generate pictures from the back of the eye and from inside the arteries that supply blood to the heart, the technology had proven difficult to adapt for brain applications owing to several technical challenges.

One major challenge is that the fiber-optic probes used in the technology are typically quite stiff, making them too rigid to twist and bend through the convoluted passageways of the brain’s vasculature. Additionally, the torque cables—traditionally used to rotate the OCT lens to image surrounding vessels and devices in three dimensions as the probe retracts—were too large to fit inside the catheters that are telescopically advanced into the brain’s arteries to address blockages or other vascular issues.

“We had to invent a new technology,” Ughi explains. “Our probe had to be very, very flexible, but also very, very small to be compatible with the clinical workflow.”

To achieve these design criteria, Ughi and his colleagues altered the properties of the glass at the heart of their fiber-optic cables, devised a new system of rotational control that does away with torque cables, miniaturized the imaging lens, and made a number of other engineering innovations.

The end result: a slender probe, about the size of a fine wire, that spins 250 times per second, snapping images as it glides back through the blood vessel. Researchers flush out blood cells with a tablespoon of liquid, then manually or automatically retract the probe, revealing a section of the artery about the length of a lip balm tube.
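
Some back-of-envelope arithmetic shows how those numbers translate into imaging density. The pullback speed and scan length below are assumptions for illustration, not the device’s specifications.

```python
REV_PER_S = 250       # rotation rate reported above
PULLBACK_MM_S = 20.0  # assumed pullback speed
SCAN_LEN_MM = 70.0    # roughly "the length of a lip balm tube" (assumed)

frames_per_mm = REV_PER_S / PULLBACK_MM_S
print(f"{frames_per_mm:.1f} cross-sections per mm "
      f"({frames_per_mm * SCAN_LEN_MM:.0f} frames per pullback)")
# -> 12.5 cross-sections per mm (875 frames per pullback)
```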



Clinical Confirmation

After initial testing in rabbits, dogs, pigs, and human cadavers, Ughi’s team sent the device to two clinical groups: Pereira’s in Toronto and Pedro Lylyk’s at the Sagrada Familia Clinic in Buenos Aires, Argentina. Across the two groups, neurosurgeons treated the 32 participants in the latest study, snaking the imaging probe through the patients’ groins or wrists and into their brains.

The procedure was safe and well tolerated across different anatomies, underlying disease conditions, and levels of complexity of prior interventions. Moreover, the information provided frequently led to actionable insights—in one case, prompting clinicians to prescribe anti-platelet drugs when hidden clots were discovered; in another, aiding in the proper placement of stents that were not flush against the arterial wall.

“We were successful in every single case,” Ughi says. “So, this was a huge confirmation that the technology is ready to move forward.”

A startup called Spryte Medical aims to do just that. According to founder and CEO David Kolstad, the company is in discussions with regulatory authorities in Europe, Japan, and the United States to determine the steps necessary to bring the imaging probe to market.

At the same time, Spryte—with Ughi as senior director of advanced development and software engineering—is working on machine learning software to automate the image analysis process, thus simplifying diagnostics and treatment planning for clinicians.

Bolstered by the latest data, cerebrovascular specialists like Siddiqui now say they are champing at the bit to get their hands on the imaging probe once it wins regulatory approval.

“I’m really impressed,” Siddiqui says. “This is a tool that many of us who do these procedures wish we had.”

MRI Unveils Enhanced Capabilities by Eliminating Shielding and Superconducting Magnets

Magnetic resonance imaging (MRI) has revolutionized healthcare by providing radiation-free, non-invasive 3-D medical images. However, MRI scanners often consume 25 kilowatts or more to power magnets producing magnetic fields up to 1.5 tesla. These requirements typically limit scanners’ use to specialized centers and departments in hospitals.

A University of Hong Kong team has now unveiled a low-power, highly simplified, full-body MRI device. With the help of artificial intelligence, the new scanner only requires a compact 0.05 T magnet and can run off a standard wall power outlet, requiring only 1,800 watts during operation. The researchers say their new AI-enabled machine can produce clear, detailed images on par with those from high-power MRI scanners currently used in clinics, and may one day help greatly improve access to MRI worldwide.

To generate images, MRI applies a magnetic field to align the spins of the body’s protons in the same direction. An MRI scanner then probes the body with radio waves, knocking the protons askew. When the radio waves turn off, the protons return to their original alignment, emitting radio signals as they do so. MRI scanners receive these signals, converting them into images.
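The frequency of those radio signals is set by the field strength through the Larmor relation, f = γB/2π, where γ/2π is about 42.58 megahertz per tesla for protons. A two-line check shows how far apart a 0.05 T scanner and a standard 1.5 T scanner operate:

    # Proton Larmor frequency f = gamma_bar * B, with gamma_bar ~= 42.58 MHz/T.
    GAMMA_BAR_MHZ_PER_T = 42.58
    for b_field_t in (0.05, 1.5):
        print(f"{b_field_t:>4} T -> {GAMMA_BAR_MHZ_PER_T * b_field_t:6.2f} MHz")
    # 0.05 T -> ~2.13 MHz; 1.5 T -> ~63.87 MHz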

More than 150 million MRI scans are conducted worldwide annually, according to the Organization for Economic Cooperation and Development. However, despite five decades of development, clinical MRI procedures remain out of reach for more than two-thirds of the world’s population, especially in low- and middle-income countries. For instance, whereas the United States has 40 scanners per million inhabitants, in 2016 there were only 84 MRI units serving West Africa’s population of more than 370 million.

This disparity largely stems from the high costs and specialized settings required for standard MRI scanners. They use powerful superconducting magnets that require a lot of space, power, and specialized infrastructure. They also need rooms shielded from radio interference, further adding to hardware costs, restricting their mobility, and hampering their availability in other medical settings.

Scientists around the globe have already been exploring low-cost MRI scanners that operate at ultra-low-field (ULF) strengths of less than 0.1 T. These devices may consume much less power and prove potentially portable enough for bedside use. Indeed, as the Hong Kong team notes, MRI development initially focused on low fields of about 0.05 T, until the introduction of the first whole-body 1.5 T superconducting scanner by General Electric in 1983.

The new MRI scanner (top left) is smaller than conventional scanners and does away with bulky RF shielding and superconducting magnets; its imaging resolution is on par with that of conventional scanners (bottom). Ed X. Wu/The University of Hong Kong

Current ULF MRI scanners often rely on AI to help reconstruct images from the signals they gather using relatively weak magnetic fields. Until now, however, these devices could image only the brain, the extremities, or single organs, Udunna Anazodo, an assistant professor of neurology and neurosurgery at McGill University in Montreal who did not take part in the work, notes in a review of the new study.

The Hong Kong team has now developed a whole-body ULF MRI scanner in which patients are placed between two permanent neodymium-iron-boron magnet plates—one above the body and the other below. Although these permanent magnets are far weaker than superconducting magnets, they are low-cost, readily available, and don’t need to be chilled to superconducting temperatures with liquid helium. In addition, the amount of energy ULF MRI scanners deposit into the body is roughly one-thousandth that from conventional scanners, making heat generation during imaging much less of a concern, Anazodo notes in her review. ULF MRI is also much quieter than regular MRI, which may help with pediatric scanning, she adds.

The new machine consists of two units, each roughly the size of a hospital gurney. One unit houses the MRI device, while the other supports the patient’s body as it slides into the scanner.

To account for radio interference from both the outside environment and the ULF MRI’s own electronics, the scientists deployed 10 small sensor coils around the scanner and inside the electronics cabinet to help the machine detect potentially disruptive radio signals. They also employed deep learning AI methods to help reconstruct images even in the presence of strong noise. They say this eliminates the need for shielding against radio waves, making the new device far more portable than conventional MRI.
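The article doesn’t spell out the network design, but the general idea of coil-based interference cancellation can be sketched simply: a small model learns to predict, from the auxiliary sensor-coil signals, the interference waveform appearing in the MRI receive coil, and that prediction is subtracted before image reconstruction. Below is a minimal sketch of that scheme; the window size, layer sizes, and training recipe are illustrative assumptions, not the Hong Kong group’s actual architecture:

    import torch
    import torch.nn as nn

    N_SENSOR_COILS = 10   # matches the number of sensor coils described above
    WINDOW = 256          # samples per acquisition window (assumed)

    # Small 1-D conv net: sensor-coil waveforms in, predicted
    # receive-coil interference out.
    predictor = nn.Sequential(
        nn.Conv1d(N_SENSOR_COILS, 32, kernel_size=9, padding=4),
        nn.ReLU(),
        nn.Conv1d(32, 32, kernel_size=9, padding=4),
        nn.ReLU(),
        nn.Conv1d(32, 1, kernel_size=1),
    )

    def cancel_emi(receive: torch.Tensor, sensors: torch.Tensor) -> torch.Tensor:
        """receive: (batch, 1, WINDOW); sensors: (batch, N_SENSOR_COILS, WINDOW)."""
        return receive - predictor(sensors)  # subtract predicted interference

    # In practice the predictor would be trained on windows acquired with no
    # MR excitation, where the receive coil records interference alone.
    cleaned = cancel_emi(torch.randn(1, 1, WINDOW),
                         torch.randn(1, N_SENSOR_COILS, WINDOW))
    print(cleaned.shape)  # torch.Size([1, 1, 256])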

In tests on 30 healthy volunteers, the device captured detailed images of the brain, spine, abdomen, heart, lung, and extremities. Scanning each of these targets took eight minutes or less, with voxels of roughly 2 by 2 by 8 millimeters. In her review, Anazodo notes the new machine produced image quality comparable to that of conventional MRI scanners.

“It’s the beginning of a multidisciplinary endeavor to advance an entirely new class of simple, patient-centric and computing-powered point-of-care diagnostic imaging device,” says Ed Wu, a professor and chair of biomedical engineering at the University of Hong Kong.

The researchers used standard off-the-shelf electronics. All in all, they estimate hardware costs at about US $22,000. (According to imaging equipment company Block Imaging in Holt, Michigan, entry-level MRI scanners start at $225,000, and advanced premium machines can cost $500,000 or more.)

The prototype scanner’s magnet assembly is relatively heavy, weighing about 1,300 kilograms. (This is still lightweight compared to a typical clinical MRI scanner, which can weigh up to 17 tons, according to New York University’s Langone Health center.) The scientists note that optimizing the hardware could reduce the magnet assembly’s weight to about 600 kilograms, which would make the entire scanner mobile.

The researchers note their new device is not meant to replace conventional high-magnetic-field MRI. For instance, a 2023 study notes that next-generation MRI scanners using powerful 7 T magnets could yield a resolution of just 0.35 millimeters. Instead, ULF MRI can complement existing MRI by going to places that can’t host standard MRI devices, such as intensive care units and community clinics.

In an email, Anazodo adds that the Hong Kong work is just one of a number of exciting ULF MRI scanners under development. For instance, she notes that Gordon Sarty at the University of Saskatchewan and his colleagues are developing a device that is potentially even lighter, cheaper, and more portable than the Hong Kong machine, which they are researching for use in whole-body imaging on the International Space Station.

Wu and his colleagues detailed their findings online 10 May in the journal Science.

This article appears in the July 2024 print issue as “Compact MRI Ditches Superconducting Magnets.”

Innovative Heart Monitor Design Draws Inspiration from the Hearing Mechanism of Sea Turtles

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Sea turtles are remarkable creatures for a number of reasons, including the way they hear underwater—not through ear openings, but by detecting vibrations directly through the skin covering their auditory system. Inspired by this ability, researchers in China have created a heart-monitoring sensor, and initial tests in humans suggest it may be a viable way to monitor heartbeats.

A key way in which doctors monitor heart health involves “listening” to the heartbeat, either using a stethoscope or more sophisticated technology, like echocardiograms. However, these approaches require a visit to a specialist, so researchers have been keen to develop alternative, lower-cost solutions that people can use at home, which could also allow for more frequent testing and monitoring.

Junbin Zang, a lecturer at the North University of China, and his colleagues specialize in creating heart-monitoring technologies. Their interest was piqued when they learned about the inner workings of the sea turtle’s auditory system, which is able to detect low-frequency signals, especially in the 300- to 400-hertz range.

“Heart sounds are also low-frequency signals, so the low-frequency characteristics of the sea turtle’s ear have provided us with great inspiration,” explains Zang.

At a glance, it looks like turtles don’t have ears. Their auditory system instead lies under a layer of skin and fat, through which it picks up vibrations. As with humans, a small bone in the ear vibrates as sounds hit it, and as it oscillates, those pulses are converted to electrical signals that are sent to the brain for processing and interpretation.

But sea turtles have a unique, slender T-shaped conduit that encapsulates their ear bones, constraining the similarly T-shaped bones to vibrate only in the perpendicular direction. This design gives their auditory system high sensitivity to vibrations.

Zang and his colleagues set out to create a heart-monitoring system with similar features. They created a T-shaped heart-sound sensor that imitates the ear bones of sea turtles, using a tiny MEMS cantilever-beam sensor. As sound hits the sensor, the vibrations deform the beam, and the resulting changes in electrical resistance are translated into voltage signals.
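The article doesn’t describe the readout circuit, but piezoresistive sensors of this kind are commonly read out with a Wheatstone bridge, which converts a small deformation-induced resistance change into a measurable voltage. A toy model of that conversion, with assumed excitation voltage and resistances:

    # Quarter Wheatstone bridge: one arm is the piezoresistive sensing
    # element, the other three are fixed. Output voltage vs. delta_R.
    V_EXC = 3.3      # bridge excitation, volts (assumed)
    R_NOM = 1000.0   # nominal arm resistance, ohms (assumed)

    def bridge_output_v(delta_r_ohm: float) -> float:
        r_sense = R_NOM + delta_r_ohm
        return V_EXC * (r_sense / (r_sense + R_NOM) - 0.5)

    for dr in (0.0, 0.5, 1.0, 2.0):
        print(f"delta_R = {dr:3.1f} ohm -> {bridge_output_v(dr) * 1e3:5.2f} mV")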

The researchers first tested the sensor’s ability to detect sound in lab tests, and then tested the sensor’s ability to monitor heartbeats in two human volunteers in their early 20s. The results, described in a study published 1 April in IEEE Sensors Journal, show that the sensor can effectively detect the two phases of a heartbeat.
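Picking out those two phases, the S1 “lub” and S2 “dub” sounds, from a phonocardiogram is typically done by band-pass filtering the signal and finding peaks in its envelope. The sketch below shows that standard approach on a synthetic signal; it illustrates the general technique, not the authors’ own pipeline, and the sampling rate, filter band, and thresholds are assumptions:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

    FS = 2000  # sampling rate, Hz (assumed)

    def heart_sound_times(pcg: np.ndarray) -> np.ndarray:
        # Keep the low-frequency band where S1/S2 energy concentrates.
        sos = butter(4, [25, 400], btype="bandpass", fs=FS, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, pcg)))
        # Heart sounds show up as envelope peaks at least ~0.15 s apart.
        peaks, _ = find_peaks(envelope, distance=int(0.15 * FS),
                              height=0.3 * envelope.max())
        return peaks / FS  # event times in seconds

    # Synthetic test: two heartbeats, each an S1/S2 pair of short tone bursts.
    t = np.arange(0, 2.0, 1 / FS)
    pcg = sum(np.exp(-(((t - t0) / 0.02) ** 2)) * np.sin(2 * np.pi * 60 * t)
              for t0 in (0.2, 0.5, 1.0, 1.3))
    print(heart_sound_times(pcg))  # ~[0.2, 0.5, 1.0, 1.3]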

“The sensor exhibits excellent vibration characteristics,” Zang says, noting that it has a higher vibration sensitivity compared to other accelerometers on the market.

However, the sensor currently picks up a significant amount of background noise, which Zang says his team plans to address in future work. Ultimately, they are interested in integrating this novel bioinspired sensor into devices they have previously created—including portable handheld and wearable versions, and a relatively larger version for use in hospitals—for the simultaneous detection of electrocardiogram and phonocardiogram signals.

This article appears in the July 2024 print issue as “Sea Turtles Inspire Heart-Monitor Design.”

Revolutionary Miniature Biosensor Reveals the Hidden Insights of Perspiration

Sweat: We all do it. It plays an essential role in controlling body temperature by cooling the skin through evaporation. But it can also carry salts and other molecules out of the body in the process. In medieval Europe, people would lick babies; if the skin was salty, they knew that serious illness was likely. (We now know that salty skin can be an indicator for cystic fibrosis.)

Scientists continue to study how the materials in sweat can reveal details about an individual’s health, but they must often collect sweat from subjects during strenuous exercise in order to obtain samples large enough for analysis.

Now researchers in China have developed a wearable sensor system that can collect and process small amounts of sweat while providing continuous detection. They have named the design a “skin-interfaced intelligent graphene nanoelectronic” patch, or SIGN for short. The researchers, who described their work in a paper published in Advanced Functional Materials, did not respond to IEEE Spectrum’s interview requests.

The SIGN sensor patch relies on three separate components to accomplish its task. First, the sweat must be transported from the skin into microfluidic chambers. Next, a special membrane removes impurities from the fluid. Finally, this liquid is delivered to a bioreceptor that can be tuned to detect different metabolites.

The transport system relies on a combination of hydrophilic (water-attracting) and hydrophobic (water-repelling) materials. This system can move aqueous solutions along microchannels, even against gravity. This makes it possible to transport small samples with precision, regardless of the device’s orientation.

The fluid is transported to a Janus membrane, where impurities are blocked. This means that the sample that reaches the sensor is more likely to produce accurate results.

Finally, the purified sweat arrives at a flexible biosensor. This graphene sensor is activated by enzymes designed to detect the desired biomarker. The result is a transistor that can accurately measure the amount of the biomarker in the sample.
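The paper’s response curve isn’t given in the article, but enzymatic transistor biosensors are often roughly log-linear in analyte concentration over their working range, so a simple two-point calibration suffices to turn a measured signal shift into a concentration estimate, for example of lactate, the biomarker the team targets. A toy version with made-up numbers:

    import numpy as np

    # Assumed two-point calibration: (lactate mmol/L, relative signal shift).
    cal_conc = np.array([1.0, 10.0])
    cal_resp = np.array([0.05, 0.12])

    slope = (cal_resp[1] - cal_resp[0]) / np.log10(cal_conc[1] / cal_conc[0])
    intercept = cal_resp[0] - slope * np.log10(cal_conc[0])

    def lactate_mmol_per_l(response: float) -> float:
        """Invert the assumed log-linear calibration."""
        return 10.0 ** ((response - intercept) / slope)

    print(f"{lactate_mmol_per_l(0.09):.1f} mmol/L")  # ~3.7 for a mid-range reading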

At its center, the system has a membrane that removes impurities from sweat and a biosensor that detects biomarkers. Harbin Institute of Technology/Shenyang Aerospace University

One interesting feature of the SIGN patch is that it can provide continuous measurements. The researchers tested the device through multiple cycles of samples with known concentrations of a target biomarker, and it was about as accurate after five cycles as it was after just one. This result suggests that it could be worn over an extended period without having to be replaced.

Continuous measurements can provide useful longitudinal data. However, Tess Skyrme, a senior technology analyst at the research firm IDTechEx, points out that continuous devices can have very different sampling rates. “Overall, the right balance of efficient, comfortable, and granular data collection is necessary to disrupt the market,” she says, noting that devices also need to optimize “battery life, calibration, and data accuracy.”

The researchers have focused on lactate—a metabolite that can be used to assess a person’s levels of exercise and fatigue—as the initial biomarker to be detected. This function is of particular interest to athletes, but it can also be used to monitor the health status of workers in jobs that require strenuous physical activity, especially in hazardous or extreme working conditions.

Not all experts are convinced that biomarkers in sweat can provide accurate health data. Jason Heikenfeld, director of the Novel Device Lab at the University of Cincinnati, has pivoted his research on wearable biosensing from sweat to the interstitial fluid between blood vessels and cells. “Sweat glucose and lactate are way inferior to measures that can be made in interstitial fluid with devices like glucose monitors,” he tells Spectrum.

The researchers also developed a package to house the sensor. It’s designed to minimize power consumption, using a low-power microcontroller, and it includes a Bluetooth communications chip to transmit data wirelessly from the SIGN patch. The initial design provides for 2 hours of continuous use without charging, or up to 20 hours in standby mode.