The White House, Microsoft, SAS, Foley Hoag & Astellas Provide AI Insights for Investors: Section One – Current Landscape

Just five years ago, discussing artificial intelligence with senior leaders in the life sciences sector posed a significant challenge. Today, with AI technologies available to the public, their adoption has surged, becoming integral to strategic plans that extend into 2030. Nevertheless, the rapid rise of AI comes with considerable risks related to misunderstandings about its capabilities, especially amid shifting legal frameworks and the anticipated innovations from quantum computing. Investors contemplating shifts in funding towards AI and companies that embrace this technology must grasp the current landscape of AI within the life sciences sector alongside future strategies.

A recent panel held by the Cambridge Healthtech Institute (CHI) in August during the Bioprocessing Venture, Innovation & Partnering Conference focused on AI progress in bioprocessing and drug development. Led by Lori Ellis of BioSpace, the AI & Technology Panel: Navigating Innovation, Technological Advancements, and Evolving Regulations provided insights for investors and industry leadership on the present state of AI and its future trajectory. This article is derived from the insights shared during that session, focusing on AI’s current role in bioprocessing.

Understanding AI and POC Purgatory

Generative AI has catalyzed an explosion of interest in AI technologies, although it’s essential to remember that AI has been in existence for decades, with generative AI being just a piece of the broader framework.

Mike Walker, executive director – life sciences supply chain, Microsoft.

Cambridge Healthtech Institute

With the unprecedented amount of data and enhanced capabilities for processing and storage, significant advancements are being achieved in the industry, as highlighted by Mike Walker, executive director of life sciences supply chain at Microsoft. Nevertheless, companies face hurdles in effectively embracing AI technologies. To shape their investment portfolio and make informed future choices, investors need to understand AI’s current position within the life sciences sector.

Walker mentioned the prevalence of “proof of concept (POC) purgatory,” where promising ideas lead to multiple POCs but leave uncertainty about their practical applications. Many legacy paper processes remain across life sciences, and as the industry digitizes, a richer understanding of AI’s evolution is essential.

Data Context and Trust

The growing volume of data introduces further questions and considerations; data quality is vital in AI discussions. “Context is everything with data,” Walker emphasized. “What does this data signify? Is it even usable? Can I rely on it?” The term “trust” carries various implications, particularly regarding data quality and context, which, if mismanaged, can lead to significant errors. He reiterated the importance of addressing data biases, warning that incorrect outcomes could prompt recalls or procedural failures.

Risk Profiles and Management in AI

Sarah Glaven, a principal assistant director at the White House Office of Science and Technology Policy, noted that AI’s impact on the bioeconomy spans numerous sectors: health, agriculture, climate, economic security, and industrial chemicals. The Executive Order on Advancing Biotechnology and Biomanufacturing emphasizes balancing the benefits of AI against its potential risks.

Sarah Glaven, principal assistant director, biotechnology and biomanufacturing, White House Office of Science and Technology Policy.

Cambridge Healthtech Institute

Walker suggested that different AI types require distinct risk profiles. “Machine learning should be handled differently than generative AI because machine learning deduces from existing data, while generative AI fabricates synthetic data based on models,” he stated. Generative AI’s tendency to produce ‘hallucinations’ adds complexity and risk. “Despite the opportunities AI presents, significant risks necessitate proactive management,” Walker asserted.

Risk management in AI should align with standard business practices. Colin Zick, a partner at Foley Hoag LLP, emphasized that AI risks should not be viewed as distinct from other business risks. “It’s crucial that stakeholders understand your approach, its limitations, and the necessary insurance,” he detailed. Creating a precise risk understanding is vital for management comfort levels concerning potential exposures.

AI risk management must be contextualized with its adoption. Sherrine Eid, global head of real-world evidence & epidemiology at SAS Institute, Inc., cautioned against overly complex AI technologies. “If a sophisticated model complicates procedures and increases FDA approval timelines, what’s the real gain?” she questioned.

Nagisa Sakurai, a senior investment manager at Astellas Pharma – Astellas Venture Management, aligned with the panelists, sharing that while AI holds tremendous potential for drug and medical device development, trust remains an issue. “From an investment viewpoint, we grapple with how much risk we should embrace,” Sakurai concluded.

Regulatory and Legislative Influences on AI

Colin Zick, partner, Foley Hoag LLP.

Cambridge Healthtech Institute

The EU AI Act has established the groundwork for global AI regulatory frameworks, which are expected to evolve along with the technology. Zick believes there will be a convergence around the EU AI Act, likening it to the way various U.S. states have adopted privacy laws modeled on the GDPR. However, there is significant resistance in the U.S. to instituting privacy-style regulations on the EU model. “The financial implications of AI present more robust opposition,” he explained, predicting less uniformity than has occurred with privacy legislation.

Furthermore, the Supreme Court’s upcoming decisions on the Loper Bright and Corner Post cases could have ramifications for AI regulations. The rulings could reduce regulatory agency authority and alter the timeline within which companies can contest federal regulations based on harm caused by federal guidance. Zick’s firm is keeping a close watch on these developments, noting an uptick in litigation challenging federal oversight.

Nagisa Sakurai, senior investment manager, Astellas Pharma – Astellas Venture Management.

Cambridge Healthtech Institute

Glaven highlighted that future nuances might be lost in translation, potentially overwhelming regulatory agencies currently facing staffing shortages. Balancing the need for accelerated regulatory processes with the Biden administration’s commitment to facilitating product market entry poses significant challenges.

Despite these challenges, the administration, through the Office of Science and Technology Policy, aims to foster collaboration between government and industry. Glaven noted, “We want stakeholder perspectives to ensure that regulations align with industry needs,” which includes parameters on funding and risk management within regulatory frameworks.

Preparing for Cybersecurity Risks

AI’s capacity to amplify cyber threats, such as denial-of-service attacks and phishing schemes, is widely recognized, yet AI can also be used in novel ways to deceive businesses.

From a corporate standpoint, various avenues present risks, notably around patent filings generated by AI. Given the multitude of risks AI presents, developing risk mitigation tactics based on “what if?” scenarios is crucial. Walker highlighted quantum computing as an example: “Using quantum annealing can reduce computational tasks significantly, urging us to consider potential implications for our businesses.”

When addressing healthcare data, HIPAA mandates emergency preparedness concerning protected health information. Zick pointed out that while HIPAA lays out expectations, it lacks explicit guidance on addressing these threats.

Organizations must establish fundamental data hygiene practices and emergency response strategies, ensuring they understand their potential vulnerabilities. Without this insight, knowing “what the adversaries possess” becomes practically impossible.

Eid emphasized liability considerations from investing in emerging companies, illustrating that not all risks can be assumed. While companies like SAS may share risks effectively, they cannot assume complete liability for every venture.

This in-depth discussion occurred at a recent event tailored for investors and industry leaders focused on innovations in bioprocessing. The forthcoming segment will explore strategies for AI assimilation, communication, and adapting to evolving trends.

Disclaimer: The views expressed by panelists do not necessarily reflect the beliefs of their respective organizations.