
The Rise and Inspiration of Agentic AI in Clinical Development


 


From MaxisAI Inspire 2025 Panel Discussion

BC Consulting Founder and CEO Bryan Clayton recently participated in a panel discussion alongside Abhishek Gupta of BlueRock Therapeutics and Rakesh Maniar of PKR Health and Life Sciences LLC. The event was hosted by MaxisAI and moderated by Nicole Powell.

 

Cohost: Okay, let's get into this deep dive.
Cohost: We're looking at the Agents of Change panel from MaxisAI Inspire 2025.
Cohost: Really interesting discussion hosted by MaxisAI.
Host: Yeah, it featured Abhishek Gupta from BlueRock Therapeutics, Rakesh Maniar from PKR Health and Life Sciences, and Bryan Clayton from BC Consulting.
Cohost: And they really jumped into the thick of it, didn't they?
Cohost: Agentic AI in clinical development, the hurdles, the culture, what's actually working.
Host: They did.
Host: And just quickly for anyone maybe less familiar, agentic AI, we're talking about AI systems that can act autonomously towards a goal.
Cohost: Right.
Cohost: So the big question on the table was how to get these tools from just being like a buzzword executives like to actually being part of the day-to-day grind in drug development, safely, reliably.
Host: Exactly.
Host: And Gupta from BlueRock really hit on that first challenge.
Cohost: Yeah.
Host: The business case.
Cohost: Yeah.
Cohost: Executives are enthusiastic, apparently, but pinning down the ROI, especially for smaller biopharmas, that's tough.
Host: He split it neatly, though, into quick wins, productivity boosts, like competitive intelligence, automating reports, stuff like that.
Host: And then the top end, which is more about fundamental financial impact.
Host: Think fraud prevention, bigger strategic shifts.
Cohost: Makes sense.
Cohost: You have to walk before you can run.
Host: Well, precisely.
Host: And that's where Maniar's point about GXP compliance came in so strongly.
Host: Patient safety, data integrity, efficacy, those are paramount.
Cohost: Absolutely critical in this space.
Host: Yeah.
Host: So any AI agent, before it gets near anything impacting patients or core data, needs really solid governance from the get-go.
Host: Plus, upskilling people and being realistic about budgets, prioritizing hard.
Cohost: That governance piece ties into the cultural side, which honestly I thought was maybe the most revealing part of the chat.
Host: Especially Clayton's analogy.
Cohost: Right, the Maslow's hierarchy thing.
Cohost: Leaders are aiming for the top, self-actualization, transformation, while the teams using the tools are stuck down at the bottom, worried about basic safety.
Host: That's the core of adoption, isn't it?
Host: If people are worried, will this AI take my job?
Host: Or what if it messes up and I get blamed?
Host: They just won't embrace it.
Cohost: No way.
Cohost: You need that psychological safety to even experiment with it.
Cohost: Let alone trust it.
Host: And trust is everything with GXP.
Host: Maniar was very clear.
Host: The human in the loop isn't going anywhere soon.
Host: He called Agentic AI a virtual assistant.
Host: Not really a partner yet.
Cohost: A helpful tool, but one that needs oversight.
Host: Definitely.
Host: Every model involved in GXP work has to be completely ready for audits and inspections.
Host: You need the evidence to back it up.
Cohost: So how do you build that trust, overcome that risk aversion?
Host: Well, Gupta suggested starting small, low-risk tasks like, say, transcribing calls autonomously, using the tech for a risk-appropriate purpose, as he put it.
Cohost: OK, a toe in the water.
Host: Exactly.
Host: Help teams get past what Clayton called the cultural storm phase, that bit where there's fear and maybe some internal competition.
Cohost: Before you get to stability and actual performance with the new tools.
Host: Yeah.
Cohost: So moving beyond the theory and the culture, what about actual progress?
Cohost: Where is agentic AI making inroads right now?
Host: We had some good concrete examples, mostly in non-GXP areas so far.
Host: Things like calendar scheduling or helping manage supplies for decentralized trials.
Host: Simple but useful stuff.
Cohost: And the learning there?
Host: That it's not just an IT thing.
Host: You need dedicated business resources involved in training these models.
Host: It takes time and operational know-how.
Cohost: OK.
Cohost: Practical realities.
Cohost: Now, Clayton offered some particularly clear examples of where things are working well, didn't he?
Host: He did.
Host: The one about using agentic AI through chat channels like Teams or Slack to handle common questions from clinical sites really stood out.
Cohost: Oh, yeah.
Cohost: Like, what kind of questions?
Host: Things about randomization, supply logistics.
Host: But apparently, the sites often prefer the instant consistent answers from the AI compared to waiting for a human help desk.
Cohost: That's fascinating.
Cohost: Consistency and speed winning out.
Host: It shows acceptance is possible, even preferable sometimes.
Host: And he also mentioned computer vision and imaging.
Cohost: That's pretty established now, right?
Host: An AI reader working alongside the human radiologist.
Cohost: That's a second pair of eyes.
Cohost: Builds confidence.
Host: Exactly.
Host: It's a model for how we can integrate AI into more complex decisions.
Cohost: OK, so looking ahead, say, five years out, what was the consensus or was there one?
Host: Huh.
Host: Well, opinions varied a bit.
Host: Maniar was perhaps more cautious, predicting we'd still have humans involved, maybe 60% in the loop.
Cohost: Still significant human oversight.
Host: Yeah.
Host: Gupta hoped we'd reach a plateau in understanding, meaning AI just becomes normal, part of the standard toolkit.
Cohost: Less hype, more routine.
Host: Right.
Host: But Bryan Clayton, I thought, offered the most forward-looking perspective.
Host: He argued that, yeah, agentic AI will become established, but the next big wave we need to prepare for is already forming.
Cohost: Which is?
Host: Quantum AI, leveraging that massive leap in computing power, and federated AI.
Cohost: Federated AI, that's about training models without centralizing sensitive data, right?
Host: Exactly.
Host: Training across different data sets, like different hospitals or research centers, without anyone having to share their raw proprietary patient data.
Host: huge for privacy and collaboration.
Cohost: So the journey is, prove the value, ensure the safety and culture, embed the tech, and then get ready for the next technological leap.
Host: That seems to be the trajectory.
Cohost: It really brings it back to that cultural safety point Clayton made, doesn't it?
Cohost: The provocative thought for you, listening, might be, is your organization building that foundation of trust today?
Cohost: Because it sounds like you'll need it to handle what's coming next.
Host: Well put.
Host: From insights on innovation to the human side of AI adoption, the panel at MaxisAI Inspire 2025 reflected both the promise and the responsibility of agentic AI in reshaping clinical development.