Global Breakthrough: FGC2.3 Feline Vocalization Project Nears Record Reads — Over 14,000 Scientists Engage With Cat-Human Translation Research

MIAMI, FL — The FGC2.3: Feline Vocalization Classification and Cat Translation Project, authored by Dr. Vladislav Reznikov, has crossed a critical scientific milestone — surpassing 14,000 reads on ResearchGate and rapidly climbing toward record-setting levels in the field of animal communication and artificial intelligence. This pioneering work aims to develop the world’s first scientifically grounded…

Tariff-Free Relocation to the US

EU, China, and more are now in the crosshairs. Who’s next? It’s time to act. The Trump administration has announced sweeping tariff hikes, as high as 50%, on imports from the European Union, China, and other major markets. Affected industries? Pharmaceuticals, Biotech, Medical Devices, IVD, and Food Supplements — core sectors now facing crippling costs,…

Global Distribution of NRA Maturity Levels According to the WHO Global Benchmarking Tool and ICH Data

This study presents the GDP Matrix by Dr. Vlad Reznikov, a bubble chart designed to clarify the complex relationships between GDP, PPP, and population data by categorizing countries into four quadrants — ROCKSTARS, HONEYBEES, MAVERICKS, and UNDERDOGS — according to the National Regulatory Authority (NRA) Maturity Level (ML) of regulatory requirements for healthcare products. Find more details…

Training AI to Anticipate the Appearance of Cells Prior to Conducting Experiments

This is a sponsored article brought to you by MBZUAI.

If you’ve ever tried to guess how a cell will change shape after a drug or a gene edit, you know it’s part science, part art, and mostly expensive trial-and-error. Imaging thousands of conditions is slow; exploring millions is impossible.

A new paper in Nature Communications proposes a different route: simulate those cellular “after” images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it’s a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.

At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound’s mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn’t feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim: competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, plus gains on mechanism-of-action (MOA) retrieval that approach what real images deliver.

This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives the proteins and pathways that shape what a cell looks like under the microscope. The mapping isn’t one-to-one, but there’s enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus too: there’s simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you’re likely to find its gene signature, which MorphDiff can then leverage.

Under the hood, MorphDiff blends two pieces. First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a Latent Diffusion Model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.
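To make that two-stage design concrete, here is a minimal PyTorch sketch of the architecture’s shape: a VAE that compresses five-channel images into a latent grid, and a denoiser whose attention layers attend to the L1000 vector. Every dimension, layer choice, and name here is an illustrative assumption, not the authors’ code.

```python
# Minimal sketch of a morphology VAE plus a gene-conditioned denoiser.
# Shapes and layers are illustrative assumptions, not MorphDiff's code.
import torch
import torch.nn as nn

class MVAE(nn.Module):
    """Compresses 5-channel microscope images into a compact latent grid."""
    def __init__(self, latent_ch=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(5, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 2 * latent_ch, 4, stride=2, padding=1),  # mean + logvar
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 5, 4, stride=2, padding=1),
        )

    def encode(self, x):
        mean, logvar = self.enc(x).chunk(2, dim=1)
        return mean + torch.randn_like(mean) * (0.5 * logvar).exp()

class GeneConditionedDenoiser(nn.Module):
    """Denoiser whose cross-attention keys/values come from the L1000 vector."""
    def __init__(self, latent_ch=4, gene_dim=978, d=64):
        super().__init__()
        self.inp = nn.Conv2d(latent_ch, d, 3, padding=1)
        self.cond = nn.Linear(gene_dim, d)  # embed the transcriptome
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.out = nn.Conv2d(d, latent_ch, 3, padding=1)

    def forward(self, z_t, t, genes):
        # (Timestep embedding omitted for brevity.)
        h = self.inp(z_t)                       # (B, d, H, W)
        B, d, H, W = h.shape
        q = h.flatten(2).transpose(1, 2)        # (B, H*W, d) latent "tokens"
        kv = self.cond(genes).unsqueeze(1)      # (B, 1, d) gene condition
        h2, _ = self.attn(q, kv, kv)            # steer denoising via attention
        h = (q + h2).transpose(1, 2).reshape(B, d, H, W)
        return self.out(h)                      # predicted noise
```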

Diagram depicting cell painting analysis pipeline, including dataset curation and perturbation modeling. Wang et al., Nature Communications (2025), CC BY 4.0

Diffusion is a good fit here: it’s intrinsically robust to noise, and the latent space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control.
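A hedged sketch of that SDEdit-style procedure shows why no retraining is needed: rather than denoising from pure noise, you noise the control image’s latent partway, then run the same conditioned reverse process from there. The `denoiser` and `scheduler` interfaces below are hypothetical stand-ins, not MorphDiff’s actual API.

```python
# SDEdit-style image-to-image sketch: partially noise a control latent,
# then denoise it under the perturbed transcriptome. `denoiser` and
# `scheduler` (with num_steps, add_noise, step) are assumed interfaces.
import torch

@torch.no_grad()
def sdedit_transform(z_control, genes, denoiser, scheduler, strength=0.6):
    # Jump to an intermediate timestep instead of starting from pure noise.
    t_start = int(strength * scheduler.num_steps)
    z = scheduler.add_noise(z_control, torch.randn_like(z_control), t_start)
    # Standard reverse diffusion from t_start down to 0, conditioned on
    # the perturbed gene expression vector. The same trained G2I model
    # is reused; only the starting point changes.
    for t in reversed(range(t_start)):
        eps = denoiser(z, t, genes)
        z = scheduler.step(eps, t, z)
    return z  # latent of the predicted perturbed image (decode with the VAE)
```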

It’s one thing to generate photogenic pictures; it’s another to generate biologically faithful ones. The paper leans into both: on the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics like FID, Inception Score, coverage, density, and a CLIP-based CMMD. Across JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff’s two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on OOD perturbations where practical value lives.
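For readers unfamiliar with these metrics, here is a minimal, hedged example of an FID comparison using torchmetrics (a real library, though not necessarily what the authors used). The random tensors stand in for image crops; five-channel cell images would first need mapping to the three channels Inception expects.

```python
# Toy FID comparison with torchmetrics; random tensors stand in for
# real and generated crops, so the score itself is meaningless here.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64, normalize=True)  # floats in [0, 1]
real_imgs = torch.rand(32, 3, 299, 299)  # stand-in for real image crops
fake_imgs = torch.rand(32, 3, 299, 299)  # stand-in for generated crops
fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
print(float(fid.compute()))  # lower is better
```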

More interesting for biologists, the authors step beyond image aesthetics to morphology features. They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth.

In side-by-side comparisons, MorphDiff’s feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves correlation structure between gene expression and morphology features, with higher agreement to ground truth than prior methods, evidence that it’s modeling more than surface style.
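As a sketch of what such a distribution-level check involves (on synthetic stand-in data, with an illustrative 0.05 threshold rather than the paper’s exact protocol), a two-sample test per feature does the job:

```python
# Per-feature two-sample Kolmogorov-Smirnov tests: how many generated
# feature distributions are statistically indistinguishable from real?
# Data and the 0.05 threshold are illustrative stand-ins.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 300))  # 500 cells x 300 features
gen_feats = real_feats + rng.normal(scale=0.05, size=(500, 300))

matched = 0
for j in range(real_feats.shape[1]):
    _, p = ks_2samp(real_feats[:, j], gen_feats[:, j])
    matched += p > 0.05  # fail to reject: the distributions agree
print(f"{100 * matched / real_feats.shape[1]:.1f}% of features indistinguishable")
```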

Graphs and images comparing different computational methods in biological data analysis. Wang et al., Nature Communications (2025), CC BY 4.0

The drug results scale up that story to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff’s generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.
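Both checks are easy to picture in code. This minimal sketch uses synthetic stand-in data rather than real DeepProfiler embeddings: a classifier fit on “real” profiles is scored on generated ones, and pairwise drug-drug distances are correlated across the two spaces.

```python
# Synthetic stand-in for the embedding checks: discriminability and
# preservation of pairwise distance structure between drug effects.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
centers = rng.normal(size=(10, 64))  # 10 hypothetical "drugs"
real = centers.repeat(30, axis=0) + rng.normal(scale=0.3, size=(300, 64))
gen = centers.repeat(30, axis=0) + rng.normal(scale=0.3, size=(300, 64))
labels = np.arange(10).repeat(30)

# Does a classifier trained on real embeddings separate generated ones?
clf = LogisticRegression(max_iter=1000).fit(real, labels)
print("accuracy on generated:", clf.score(gen, labels))

# Are relative drug effects preserved? Correlate pairwise distances
# between per-drug mean profiles in the real vs. generated spaces.
real_means = np.stack([real[labels == k].mean(0) for k in range(10)])
gen_means = np.stack([gen[labels == k].mean(0) for k in range(10)])
r, _ = pearsonr(pdist(real_means), pdist(gen_means))
print("distance-structure correlation:", round(r, 3))
```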

Charts comparing accuracy across morphing methods for image synthesis techniques in four panels. Wang et al., Nature Communications (2025), CC BY 4.0

That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images. In top-k retrieval experiments, the average improvement over the strongest baseline is 16.9 percent, and 8.0 percent over transcriptome-only, with robustness shown across several k values and metrics like mean average precision and folds-of-enrichment. That’s a strong signal that simulated morphology carries information complementary to chemical structure and transcriptomics, enough to help find look-alike mechanisms even when the molecules themselves look nothing alike.
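For the mechanics-minded, a top-k retrieval evaluation boils down to a similarity ranking like the one below; the cosine metric, function name, and data are illustrative assumptions, not the paper’s code.

```python
# Toy top-k MOA retrieval: rank reference drugs by cosine similarity to
# a query profile and count queries whose top k share the mechanism.
import numpy as np

def topk_moa_hit_rate(queries, q_moa, refs, r_moa, k=5):
    qn = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    rn = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    sims = qn @ rn.T                              # cosine similarities
    hits = 0
    for i in range(len(queries)):
        top = np.argsort(-sims[i])[:k]            # k nearest references
        hits += np.any(r_moa[top] == q_moa[i])    # any shared mechanism?
    return hits / len(queries)

rng = np.random.default_rng(2)
refs = rng.normal(size=(100, 64))
r_moa = rng.integers(0, 8, size=100)              # fake MOA labels
queries = refs[:20] + rng.normal(scale=0.1, size=(20, 64))
print(topk_moa_hit_rate(queries, r_moa[:20], refs, r_moa, k=5))
```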

The paper also lists some current limitations that hint at potential future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration (two factors that biologists care about) aren’t explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can’t conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.

What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we’d expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood? Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.

The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We’ve already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work such as capturing subtle, multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won’t replace the microscope. But if it reduces the number of plates you have to run to find what matters, that’s time and money you can spend validating the hits that count.

Prebiotic Role of Resistant Starch in Enhancing Antioxidant and Choline Levels

Research on Solnul, a resistant starch derived from potatoes, reveals its capacity to enhance nutrient absorption and improve gut barrier function. Studies indicate Solnul significantly boosts levels of choline and vitamins A and E, and influences the abundance of Akkermansia, a probiotic bacterium linked to gut health. Additionally, Solnul appears to reduce free fatty acids without altering TMAO levels, suggesting potential benefits for metabolic health and nutrient absorption. Dr. Jason Bush highlights these findings as a promising advance in understanding the gut-brain axis and addressing nutrient deficiencies.

Innovative Communication: Artificial Neurons Engage with Living Cells for the First Time

The bacterium Geobacter sulfurreducens came from humble beginnings: it was first isolated from dirt in a ditch in Norman, Oklahoma. But now these unassuming microbes are the key to the first artificial neurons that can directly interact with living cells.

G. sulfurreducens cells communicate with one another through tiny, protein-based wires. Researchers at the University of Massachusetts Amherst have harvested those wires and used them to make artificial neurons that can, for the first time, process information from living cells without an intermediary device amplifying or modulating the signals, the researchers say.

While some artificial neurons already exist, they require electronic amplification to sense the signals our bodies produce, explains Jun Yao, who works on bioelectronics and nanoelectronics at UMass Amherst. The amplification inflates both power usage and circuit complexity, undercutting the efficiency found in the brain.

Yao’s team’s neuron can understand the body’s signals at their natural amplitude of around 0.1 volts. This “is highly novel,” says Bozhi Tian, a biophysicist who studies living bioelectronics at The University of Chicago and was not involved in the work. This work “bridges the long-standing gap between electronic and biological signaling” and demonstrates interaction between artificial neurons and living cells that Tian calls “unprecedented.”

Real neurons and artificial neurons

Biological neurons are the fundamental building blocks of the brain. If external stimuli are strong enough, charge builds up in a neuron, triggering an action potential, a spike of voltage that travels down the neuron’s body to enable all types of bodily functions, including emotion and movement.

Scientists have been working to engineer a synthetic neuron for decades, chasing the efficiency of the human brain, which has so far eluded electronics.

Yao’s group has designed new artificial neurons that mimic how biological neurons sense and react to electrical signals. They use sensors to monitor external biochemical changes and memristors—essentially resistors with memory—to emulate the action potential process.

As voltage from the external biochemical events increases, ions accumulate and begin to form a filament across a gap in the memristor—which in this case was filled with protein nanowires. If there is enough voltage, the filament completely bridges the gap. Current shoots through the device, and the filament then dissolves, dispersing the ions and stopping the current. The complete process mimics a neuron’s action potential.
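That fire-and-reset cycle is simple enough to caricature in a few lines. The toy simulation below (with made-up growth and decay rates, not measured device parameters) shows the qualitative behavior: weak baseline input never bridges the gap, while stronger input produces periodic spikes.

```python
# Toy model of the memristor neuron's fire-and-reset cycle. All
# constants are made-up illustrations, not measured device parameters.
def memristor_neuron(v_in, growth_rate=0.3, decay=0.02, threshold=1.0):
    filament, spikes = 0.0, []
    for t, v in enumerate(v_in):
        filament += growth_rate * v - decay  # ions accumulate or disperse
        filament = max(filament, 0.0)
        if filament >= threshold:            # filament bridges the gap
            spikes.append(t)                 # current shoots through
            filament = 0.0                   # filament dissolves, current stops
    return spikes

print(memristor_neuron([0.1] * 50))  # weak baseline input: no spikes
print(memristor_neuron([0.5] * 50))  # stronger input: fires every 8 steps
```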

The team tested its artificial neurons by connecting them to cardiac tissue. The devices measured a baseline amount of cellular contraction, which did not produce enough signal to cause the artificial neuron to fire. Then the researchers took another measurement after the tissue was dosed with norepinephrine—a drug that increases how frequently cells contract. The artificial neurons only triggered action potentials during the higher, medicated trial, proving that they can detect changes in living cells.

The experimental results were published 29 September in Nature Communications.

Natural nanowires

The group has G. sulfurreducens to thank for the breakthrough.

The microbes synthesize miniature cables, called protein nanowires, that they use for intraspecies communication. These cables are charge conductors that survive for long periods of time in the wild without decaying. (Remember, they evolved for Oklahoma ditches.) They’re extremely stable, even for device fabrication, Yao says.

To the engineers, the most notable property of the nanowires is how efficiently ions move along them. The nanowires offered a low-energy means of transferring charge between human cells and artificial neurons, thus avoiding the need for a separate amplifier or modulator. “And amazingly, the material is designed for this,” says Yao.

The group developed a method to shear the cables off the bacterial bodies, purifying the material and suspending it in a solution. They spread the mixture out and let the water evaporate, leaving a film, just one molecule thick, made from the protein nanowire material.

This efficiency allows the artificial neuron to yield huge power savings. Yao’s group integrated the film into the memristor at the core of the neuron, lowering the energy barrier for the reaction that causes the memristor to respond to signals recognized by the sensor. With this innovation, the researchers say, the artificial neuron uses 1/10th the voltage and 1/100th the power of others.

Chicago’s Tian thinks this “extremely impressive” energy efficiency is “essential for future low-power, implantable, and biointegrated computing systems.”

The power advantages make this synthetic neuron design attractive for all kinds of applications, researchers say.

Responsive wearable electronics, like prosthetics that adapt to stimuli from the body, could make use of these new artificial neurons, Tian says. Eventually, implantable systems that rely on the neurons could “learn like living tissues, advancing personalized medicine and brain-inspired computing” to “interpret physiological states, leading to biohybrid networks that merge electronics with living intelligence,” he says.

The artificial neurons could also be useful in electronics outside the biomedical field. Millions of them on a chip could replace transistors, completing the same tasks while decreasing power usage, Yao says. The fabrication process for the neurons does not involve high temperatures and utilizes the same kind of photolithography silicon chip manufacturers do, he says.

Yao does, however, point out two possible bottlenecks producers could face when scaling up these artificial neurons for electronics. The first is obtaining more of the protein nanowires from G. sulfurreducens. His lab currently works for three days to generate only 100 micrograms of material—that’s about the mass of one grain of table salt. And that amount can only coat a very small device, so Yao questions how this step in the process could scale up for production.

His other concern is how to achieve a uniform coating of the film at the scale of a silicon wafer. “If you wanted to make high-density, small devices, the uniformity of film thickness actually is a critical parameter,” he explains. But the artificial neurons his group has developed are too small to do any meaningful uniformity testing for now.

Tian doesn’t expect artificial neurons to replace silicon transistors in conventional computing, but instead sees them as a parallel offering for “hybrid chips that merge biological adaptability with electronic precision,” he says.

In the far future, Yao hopes that such bio-derived devices will also be appreciated for not contributing to e-waste. When a user no longer wants a device, they can simply dump the biological component in the surrounding environment, Yao says, because it won’t cause an environmental hazard.

“By using this kind of nature-derived, microbial material, we can create a greener technology that’s more sustainable for the world,” Yao says.