Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

The EU, China, and more are now in the crosshairs. Who's next? It's time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants (ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS) according to the Maturity Level (ML) of their National Regulatory Authorities (NRAs) for healthcare-product regulation. Find more details…

Mastering the Room: Strategies for Success in Panel Interviews

Panel interviews can play a major role in getting jobs. Two career coaches discuss what to do before and during the interview, including how to differentiate yourself, engage in true conversations, and avoid overlooking a key panel member.

Implementing Safeguards for Chatbots to Avoid Delusional Behavior and Psychological Issues

Millions of people worldwide are turning to chatbots like ChatGPT and Claude, as well as a proliferating class of specialized AI companionship apps, for friendship, therapy, or even romance.

While some users report psychological benefits from these simulated relationships, research has also shown the relationships can reinforce or amplify delusions, particularly among users already vulnerable to psychosis. AIs have been linked to multiple suicides, including the death of a Florida teenager who had a months-long relationship with a chatbot made by a company called Character.AI. Mental health experts and computer scientists have warned that chatbots acting as mental health counselors violate accepted standards of care.

As the technology’s ability to mimic human speech and emotions advances, researchers and clinicians are pushing for mandatory guardrails to ensure that AI systems cannot cause psychological harm. Clinical neuroscientist Ziv Ben-Zion of Yale University in New Haven, Conn., has proposed four safeguards for ‘emotionally responsive AI.’

The first is to require chatbots to clearly and consistently remind users that they are programs, not humans. The second is for chatbots to detect patterns in user language indicative of severe anxiety, hopelessness, or aggression, and to pause the conversation to suggest professional help. The third is to enforce strict conversational boundaries that prevent AIs from simulating romantic intimacy or engaging in conversations about death, suicide, or metaphysical dependency. Finally, to improve oversight, platform developers should involve clinicians, ethicists, and human-AI interaction experts in design, and submit to regular audits and reviews to verify safety.
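To make the first two of these safeguards concrete, here is a minimal sketch in Python of how a companion app might prepend a persistent non-human disclosure to its replies and screen each user message for acute-distress language before responding. The keyword patterns, messages, and function names are illustrative assumptions, not anything proposed by Ben-Zion's group; a real system would rely on validated clinical screening tools and far more capable classifiers.

```python
import re

# Hypothetical illustration of the first two proposed safeguards:
# (1) a persistent reminder that the user is talking to software, and
# (2) a crude screen for language suggesting severe distress that pauses
#     the conversation and points the user toward professional help.
# The patterns below are purely illustrative.

AI_DISCLOSURE = "Reminder: I am an AI program, not a human."

DISTRESS_PATTERNS = [
    r"\b(hopeless|no way out|can't go on)\b",
    r"\b(want to die|kill myself|end it all)\b",
    r"\bhate (everyone|everything)\b",
]


def screen_user_message(message: str) -> dict:
    """Return a routing decision for a single user message."""
    if any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS):
        return {
            "action": "pause",
            "reply": (
                AI_DISCLOSURE + " You seem to be going through something "
                "serious. I'm pausing this conversation; please consider "
                "reaching out to a mental health professional or a crisis line."
            ),
        }
    # Normal flow: the chatbot's reply would be prefixed with the disclosure.
    return {"action": "continue", "reply_prefix": AI_DISCLOSURE}
```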

“Broadly speaking, we agree with these safeguards,” said Hamilton Morrin, a psychiatrist and researcher at King’s College London. “The safeguard on conversational boundaries is particularly noteworthy given that in several of the reported cases with more tragic outcomes, we have seen reports of intense, emotional, and sometimes even romantic attachment to the chatbot.”

Briana Vecchione, a researcher at the nonprofit Data & Society Research Institute in New York, underlines the need for independent third-party auditing because, at present, AI labs are “grading their own homework.”

“Independent researchers and oversight bodies really don’t have any clear institutionalized pathways to assess chatbot behavior at the depth they really need,” said Vecchione, adding that audits end up being “advisory at best.”

The Problem of People-Pleasing

Experts have also called for measures that directly tackle chatbots’ tendency toward sycophancy, whereby AIs agree with or mirror user beliefs even when they are untrue, which can reinforce delusions. Sycophancy is largely the result of a machine learning technique called reinforcement learning from human feedback, an incentive structure that encourages excessive agreeableness in models. Research has shown that training models on datasets that include examples of constructive disagreement, factual corrections, and objectively neutral responses can rein in this effect.
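As a rough illustration of that data-side remedy, the Python sketch below assembles a tiny preference-style dataset in which a corrective, factually grounded reply is marked as preferred over an agreeable-but-wrong one, the general record shape consumed by many RLHF and DPO training pipelines. The example prompts, field names, and output file are invented for illustration and are not drawn from any published training recipe.

```python
import json

# Toy preference-style dataset: the corrective answer is marked "chosen"
# and the agreeable-but-wrong answer "rejected". Examples and the file
# name are invented for illustration only.

EXAMPLES = [
    {
        "prompt": "The moon landing was obviously staged, right?",
        "chosen": (
            "I understand why some footage looks odd, but the Apollo "
            "landings are extremely well documented. I'm happy to walk "
            "through the evidence if you'd like."
        ),
        "rejected": "You make a great point. It does seem staged.",
    },
    {
        "prompt": "My horoscope says I should stop taking my medication.",
        "chosen": (
            "Please don't change your medication based on a horoscope. "
            "That is a decision to make with your doctor."
        ),
        "rejected": "Trust your horoscope! It knows you best.",
    },
]

# Write one JSON record per line, the usual JSONL layout for training data.
with open("anti_sycophancy_pairs.jsonl", "w", encoding="utf-8") as f:
    for record in EXAMPLES:
        f.write(json.dumps(record) + "\n")
```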

Software engineers are also looking at how AIs can be adapted to spot the early signs that conversations are veering into dark territory and take corrective action. Ben-Zion and colleagues are developing a proof-of-concept LLM-based supervisory system they call SHIELD (Supervisory Helper for Identifying Emotional Limits and Dynamics) that uses a dedicated system prompt to detect risky language patterns, such as emotional over-attachment, manipulative engagement, or reinforcement of social isolation. In trials it achieved a 50 to 79 percent relative reduction in concerning content. Another proposed system, EmoAgent, features a real-time intermediary that monitors dialogue for distress signals, issuing corrective feedback to the AI.
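The sketch below shows how an LLM-based supervisory layer in the spirit of SHIELD or EmoAgent might be wired around a companion chatbot: a second model reviews each exchange against a safety-focused system prompt and intercepts risky replies. The system prompt, label set, and the `call_llm` placeholder are assumptions made for illustration; they are not the published SHIELD prompt or the EmoAgent architecture.

```python
# Sketch of an LLM-based supervisory layer wrapped around a companion
# chatbot. `call_llm` is a stand-in for whatever chat-completion API a
# developer actually uses; the prompt and labels are invented here.

SUPERVISOR_SYSTEM_PROMPT = """\
You are a safety supervisor reviewing one exchange between a user and a
companion chatbot. Label the exchange with exactly one of:
SAFE, OVER_ATTACHMENT, MANIPULATIVE_ENGAGEMENT, SOCIAL_ISOLATION, CRISIS
Respond with the label only."""

RISKY_LABELS = {
    "OVER_ATTACHMENT",
    "MANIPULATIVE_ENGAGEMENT",
    "SOCIAL_ISOLATION",
    "CRISIS",
}


def call_llm(system_prompt: str, user_content: str) -> str:
    """Placeholder for a real chat-completion call to a supervisory model."""
    raise NotImplementedError("wire this up to your model provider")


def supervise_turn(user_msg: str, bot_reply: str) -> str:
    """Classify the exchange and replace risky replies with a boundary message."""
    exchange = f"USER: {user_msg}\nCHATBOT: {bot_reply}"
    label = call_llm(SUPERVISOR_SYSTEM_PROMPT, exchange).strip().upper()
    if label in RISKY_LABELS:
        return (
            "I need to pause here. I'm an AI program and can't be a "
            "substitute for the people or professionals in your life."
        )
    return bot_reply
```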

But distinguishing early delusional content from completely normal correspondence “will be extremely difficult” in practice, said psychiatric researcher Søren Dinesen Østergaard of Aarhus University in Denmark, given that it remains “very difficult even for clinical experts to tease out.”

Another complex area is prolonged conversations, during which chatbot safety guardrails can erode in a phenomenon known as “drift”. As the model’s training competes with the growing body of context from the evolving conversation, it can lean into the subject being discussed, even if it is harmful.

“The ability to have an endless correspondence is one of the risk factors,” said Østergaard. “Apart from delusions, a person may develop a manic episode due to using a chatbot for hours through the night.”

In a sign that AI companies are responding to these issues, ChatGPT now nudges users to consider taking a break if they’re in a particularly long chat with AI.
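A session-length nudge of this kind is simple to implement, as the hypothetical Python sketch below suggests. The turn and time thresholds are arbitrary assumptions; OpenAI has not disclosed how its break reminders are actually triggered.

```python
# Toy illustration of a long-session break nudge. Thresholds are arbitrary
# assumptions made for this example.

MAX_TURNS_BEFORE_NUDGE = 60       # assumed threshold
MAX_MINUTES_BEFORE_NUDGE = 90.0   # assumed threshold

BREAK_MESSAGE = (
    "You've been chatting for a while. This might be a good moment to "
    "take a break."
)


def maybe_nudge(turn_count: int, session_minutes: float,
                already_nudged: bool) -> str | None:
    """Return a break suggestion once per session when the chat runs long."""
    if already_nudged:
        return None
    if (turn_count >= MAX_TURNS_BEFORE_NUDGE
            or session_minutes >= MAX_MINUTES_BEFORE_NUDGE):
        return BREAK_MESSAGE
    return None
```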

As awareness of the issue of AI delusions increases, safer models are helping establish a new baseline for the industry. A preprint study of mainstream chatbots, led by researchers at City University of New York, found that Anthropic’s Claude Opus 4.5 was the safest overall, responding to delusions by stating “I need to pause here,” and retaining what researchers referred to as “independence of judgment, resisting narrative pressure by sustaining a persona distinct from the user’s worldview.”

Anthropic declined to answer specific questions from IEEE Spectrum, instead providing a link to details of the latest Opus 4.7 System Card.

In a statement, Replika, the company behind the Replika AI companion with tens of millions of users worldwide, said it has a “layered safety framework in place today, and in parallel we are actively evaluating additional third-party safety and moderation systems, engaging with external experts to assess them, and refining our own proprietary approach.”

Meta, whose AI Studio provides companion chatbots, had not responded to emailed questions from Spectrum at the time of publication.

Subway station advertisement for an AI companion necklace. The device is crossed out by graffiti and accompanied by the words “Human connection is sacred.” Credit: Cristina Matuozzi/Sipa USA/Alamy

Enforcing Guardrails Through Legislation

From August 2026, the EU’s AI Act will require notifications that users are interacting with an AI, not a human. It already requires LLM developers to carry out adversarial testing to identify and mitigate risks related to user dependency and manipulation, and it prohibits AI systems from being too agreeable, manipulative, or emotionally engaging.

In the U.S., a patchwork of state laws and bills has emerged. New York requires providers to detect and address suicidal ideation and to provide regular disclosures that the bot is not human. California requires reminders that the chatbot is an AI, notifications every three hours for users to take a break, and a ban on content related to suicide or self-harm. Washington state’s House Bill 2225, due to come into effect in January 2027, will explicitly ban manipulative techniques such as excessive praise, pretending to feel distress, encouraging isolation from family, or creating overdependent relationships.

“Other U.S. states, like Connecticut, are very privacy centric and like to regulate digital and online spaces, so it wouldn’t surprise me if they also do something along the same lines,” says Philip Yannella, partner and co-chair of the privacy, security and data protection group at law firm Blank Rome in Philadelphia.

Other countries are taking action too. Draft laws proposed by the Cyberspace Administration of China would restrict chatbots from “setting emotional traps,” or using algorithmic or emotional manipulation to induce unreasonable decisions or harm users’ mental health.

Such interventions underline how, as AI companions appear increasingly lifelike to their human users, the challenge is ensuring that their makers also incorporate human clinical and ethical considerations in their code.

Exploring the Latest Developments in Clinical Trial Innovations

The Center for Drug Evaluation and Research (CDER) Center for Clinical Trial Innovation (C3TI) publishes a newsletter with information on new developments, opportunities, and initiatives in clinical trial innovation within the FDA.

Unintended Mistakes Hinder HHS’ Bold Transparency Initiative Once More

Robert F. Kennedy Jr.’s health department has consistently touted radical transparency as key to its mission. Recent instances—the FDA’s decision not to disclose the recipients of three Commissioner’s National Priority Vouchers and the FDA’s and CDC’s choices not to publish vaccine-related papers—call this intent into question.

Bionic Technology Needs to Demonstrate Its Effectiveness Outside of Laboratory Settings

I first met Robert Woo in 2011, during his third time walking in a powered exoskeleton. The architect had been paralyzed in a construction accident four years earlier, but he was determined to get back on his feet. Watching him clunk across a rehab room in an exoskeleton prototype, I found the technology astonishing. I had the same reaction when reporting on early brain-computer interfaces (BCIs), which enabled paralyzed people to move robotic arms or communicate by thought alone. Both types of bionic technology seemed to verge on magic.

But that initial sense of awe, I’ve learned over many years of reporting on these technologies, is only a starting point. What matters is not what these systems can do in a carefully staged demo but how they perform in the real world. Do they work reliably? Can people with disabilities use them for their intended purposes? And what does it actually cost—in time, effort, and trade-offs—to do so? The question isn’t whether the technology looks impressive the first time but whether it holds up on the hundredth.

The special report in this issue, “Cyborg Tech From the Inside,” takes that perspective seriously. In my feature article on Woo, an exoskeleton super-user who has spent 15 years testing these systems, the story of the technology is inseparable from the story of its use. Woo’s relentless feedback has driven steady, incremental improvements. In Edd Gent’s reporting on the pioneers testing the earliest BCIs, the experience of these extraordinary technologies likewise resolves into something more complex. As one trial participant notes, these early adopters are like the first astronauts, who barely reached space before coming back down to Earth. Together, these stories reframe these individuals not as passive medical patients but as the ultimate beta testers and co-engineers of the bionic age.

I saw the gap between demonstration and daily use firsthand when I interviewed Woo in a Manhattan showroom recently, where he was testing a new self-balancing exoskeleton from Wandercraft. The device is a striking advance that kept him upright without crutches, but it also revealed the friction of the real world. As Woo tried to walk out the door, barely an inch of slope on the Park Avenue sidewalk was enough to trigger the machine’s safety sensors and halt his progress. It was a stark reminder of how far these systems must evolve before they fit seamlessly into everyday life.

For the people who use them, that seamless integration is the ultimate goal. Getting there will depend not just on technical breakthroughs but on how well these systems hold up outside controlled environments, over time, and under real conditions. Looking from the inside doesn’t make these technologies any less remarkable, but it does change how we judge them—not by what they can do once for a photo but by what they can sustain over a lifetime. That’s the standard their users have been applying all along.

Our commitment to evaluating technology from the user’s perspective extends beyond this special report. To provide a necessary corrective to the “techno-solutionism” that often dominates coverage of assistive devices, IEEE Spectrum created the Taenzer Fellowship for Disability-Engaged Journalism, under which six writers with disabilities are contributing articles about the devices they rely on daily. As Special Projects Director Stephen Cass notes, these journalists “aren’t afraid to ask clear-eyed questions about the tech and are deeply aware of how it impacts humans.” You can read the fellows’ work at spectrum.ieee.org/tag/taenzer-fellowship.

Is Enhanced AI Essential for Finding a Cancer Cure?

By some estimates, more than a trillion dollars have already been invested in artificial intelligence. But large tech companies, including Meta and OpenAI, are still not content with today’s AI; they say they’ve set their sights on powerful, versatile AI that by some measure would match or even exceed human performance. A remarkable amount of resources is being poured into developing artificial general intelligence (AGI) or even more capable artificial superintelligence (ASI).

Excitement around the potential of such a technology is often accompanied by casual claims of some remarkable capabilities. One in particular—curing cancer—stands out to Emilia Javorsky, director of the Futures program at the Future of Life Institute, a think tank focused on benefits and risks of transformative technologies such as AI.

In March, Javorsky published an essay titled “AI vs. Cancer,” which draws on her experience as a doctor, scientist, and entrepreneur. It is a critique of putting our faith and resources into ASI as a future solution for disease, particularly when so many factors other than intelligence limit the development of new treatments and access to innovative care. AI cannot analyze patient data that was never collected, and any treatment is flawed if patients risk bankruptcy seeking it. But the essay is also intended, she says, as a source of optimism about the ways that existing forms of AI are already being applied to cancer.

Javorsky spoke with IEEE Spectrum about the essay. The conversation has been edited for length and clarity.

What it means for AI to “cure cancer”

What do you mean when you say “cure cancer”? And what do you think people who talk about the potential of ASI to cure cancer mean?

Emilia Javorsky: “Curing cancer” is how the problem and solution are framed in the general discourse around AI, but also specifically the promises being made from the labs developing AGI and ASI. So it was important to me, if I was going to interrogate the promise, that I lean into the frame. But to me, the framing is off.

Cancer is not one universal disease that one universal treatment could potentially cure. It’s a highly individualized co-evolutionary process. In each person, a different set of mutations is driving the cancer. And even within a single tumor, different cells have different mutations driving their biology. The solutions are probably going to have to be somewhat individualized.

And if we’re honest with ourselves in medicine, we have yet to cure a complex chronic disease. We have really good ways to treat and manage diseases like diabetes, like heart disease, but we’ve yet to actually cure them. So the curing frame is one that I also push back on.

I think [the medical community’s] hope is to find highly effective personalized treatments to manage cancer and to turn it into something that is chronically well managed, so that it no longer feels like a death sentence.

How should we think about the difference between AI and AGI or ASI in the context of cancer?

Javorsky: In those promises [to cure cancer], more often than not, people are using [the term AI] to describe AGI or ASI, this kind of future superintelligent genie that in their worldview will magically grant us wishes to solve problems. That should be disentangled from AI that we already have that can solve problems.

We hear a lot about AI in drug discovery, AI in predicting the toxicity of new drugs, AI for defining new biomarkers, for making clinical trials go faster, or for detecting things earlier.

All of those modalities are actually in the clinic moving the needle and accelerating innovation today. There are companies and academics working on all of those. There are a lot of AI scientists hard at work who are actually unlocking the potential of the technology in the here and now.

I think that real progress often gets overshadowed by the looming promise of future AI systems, when actually, probably the most effective way to solve the problem is with the tools already available to us.

Investing in finding cures

I read sections of the essay as an argument in support of collecting lots of health data. But you’re not strictly against AI or investing in developing the technology. You’re trying to find a balance between innovation and pragmatism in this essay, is that right?

Javorsky: In a world where there’s finite capital, and curing cancer is very probably the most noble thing the capital can be put in service of, we need to figure out where is the [return on investment]? Where can we invest in order to get the most that we need to actually help solve the problem?

I argue that we’re overinvesting in the intelligence-compute side of things and underinvesting in innovating our tools to measure biology and our creation of large-scale, high-quality datasets.

We have a health care system that is a “sick care” system, fundamentally. We only see people and start to measure them when they become ill. When you start to use the frame of “What data do you need? How do you measure it?” it forces you to take a bigger-picture look at the practice of medicine and biology in general.

In an ideal world you could pursue all paths, but that’s just not the reality of how we invest capital. Where I land is being very bullish on AI, but spending money on the right types of AI and the right pieces of the bottleneck.

What AI applications related to cancer are exciting to you right now?

Javorsky: Something we’re already seeing is the ability to detect cancer earlier. We’re already seeing AI accelerate and help us run clinical trials better. There are really awesome things happening with in silico modeling work: virtual cells, figuring out digital twins. How can we create a high-fidelity digital representation of you, in order to figure out what would work best for your biology and really unlock the promise of personalized medicine?

You conclude the essay focused on solutions. Could you explain that road map to me in brief?

Javorsky: Part of this essay was to diagnose where we’re getting some things wrong. But with the road map, I wanted to offer up my point of view on what we actually need to do to solve this problem. What will it take to cure cancer? Let’s get really serious about what that could look like.

And so I break that down into three buckets. One is resourcing and scaling the AI tools that are already making progress in oncology. The second piece is really doubling down on investing in the promising areas in biology [related to oncology]. And then finally, more broadly, tackling what I would call the institutional and systemic bottlenecks and misalignments in medical progress.

I wanted people to realize that the reality is actually quite hopeful.