The pharma industry’s embrace of decentralized clinical trials (DCTs) is transforming clinical research, uncovering new efficiencies and improving the patient experience without compromising data quality. However, these trial models require new methods to enable remote acquisition, sharing, and analysis of heterogeneous types of data across multiple systems. Raj Indupuri, CEO of clinical data software and services provider eClinical Solutions, discusses evolving challenges with clinical trial data and the solutions the company offers with Pharma’s Almanac Editor in Chief David Alvaro, Ph.D.
David Alvaro (DA): Congratulations on the 10th anniversary of the founding of eClinical Solutions. Could you bring me up to speed on the company’s history and your driving mission and vision?
Raj Indupuri (RI): It’s definitely an exciting time for eClinical Solutions. When we started the company in 2012, our mission was to build software and deliver tech-enabled services to make data acquisition and analytics easy and intelligent, with the same end goal that every company in this industry works toward: to accelerate research and bring therapies to patients faster.
When we started the company, we anticipated this industry’s problem with data diversity — the number of sources, the different modalities, and the wide range of ways in which data gets collected. Also, we predicted that with increased trial complexity, the volume and variety of data would expand. Not only did the data diversity problem that we anticipated end up manifesting, but the “data chaos” continued to increase, and this is the reality that we are faced with today.
If you look at the industry right now, there are a lot of different modalities in terms of how trials are being conducted. As a result, significant technologies have evolved not only to deal with this problem but also, in the end, to digitize and modernize trials, bring trials to patients, and reduce cycle times, ultimately accelerating research.
DA: Before the pandemic, adoption of DCTs was limited. How have you seen those attitudes shifting?
RI: The pandemic really pushed the boundaries. We had been working as an industry even before the pandemic toward the goal of effectively bringing trials to patients, but the barriers in terms of available solutions at the time were much greater. Right now, there is significant intent across the board from all industry stakeholders to rapidly evolve and ultimately transform how we conduct, digitize, and modernize trials. And as with the evolution of any paradigm-shifting trend, you have people, process, and technology evolution. In our industry, we also have regulatory bodies.
In terms of people, when you run decentralized trials, you need different skill sets and upskilling of staff, especially when you look at source data acquisition and the expertise required for conducting end-to-end data monitoring and review. As for processes and technologies, transforming how you conduct research requires significant process reengineering and new tech solutions, which means that there are also significant initial, near-term, and even long-term costs for any company, large or small, in exchange for significant future benefits.
And lastly, regulatory agencies have been encouraging the industry to move in the direction of DCTs and have been working with industry stakeholders to make sure that patient-centric trials can be conducted in a way that doesn’t impact quality, compliance, or patient safety. Cooperation with regulatory bodies has also done a lot to reassure the industry of the legitimacy of DCTs — public perception is another aspect of the shift toward DCTs that continues to evolve.
DA: Has eClinical Solutions seen a corresponding shift, where support for DCTs was once a small piece of your business but is now in a real growth phase?
RI: Definitely. As a company, we have enterprise software and tech-enabled services. We provide a clinical data cloud, elluminate, that can import data from any source, structure, or format, and we provide different capabilities to transform these raw data into something meaningful. It could be for data review, data analysis, data standardization for submission, or even for exploratory purposes.
Data is the currency of the industry. Our software enables our clients to get insights from their data that are efficient, easy, and intelligent. In terms of DCTs, our software helps at key points over the course of conducting clinical trials.
I always talk about three components of the clinical data life cycle: data acquisition and generation, pipelines and data automation, and data consumption. Data acquisition and generation is where you see the use of electronic data capture (EDC) systems, clinical outcome assessment (COA) devices, and apps, especially with DCTs, which frequently require new technologies to capture data more effectively. The second component is to build an automated data pipeline and develop advanced capabilities to bring all these data together, harness their value, and extract meaningful insights. The last component is data consumption — different stakeholders using the capabilities that we provide to generate insights from these data for efficient decision-making, allowing them to achieve outcomes like reduced cycle time and increased quality.
elluminate helps clients with the second and the third components. As DCT data collection modalities keep expanding to include more new devices and apps, our cloud is able to bring data from these different sources together, minimizing the fragmentation in the digital value chain to make later data consumption and decision-making much easier.
DA: Some of the data challenges that you’re discussing are very particular to decentralized trials, but some of them also reflect the broader challenges of working with complicated trial data. Which new challenges are common across all trials, and which are more specifically relevant to DCT models?
RI: Many of these data challenges, such as increased data sources and more modalities to collect or generate data, affect all trials, not just DCTs. Irrespective of the movement toward DCTs, the diversity of data from trials has been increasing rapidly in the past few years. This has led to increased complexity in standardizing the data for submission and in reviewing the data for cleaning. The industry’s current method of collecting data on legacy infrastructures and architectures is not going to scale, nor will it serve the purpose of organizing data into the pipeline and enabling effective analytics.
Beyond that, another challenge — whether you aim to decentralize trials or to modernize them in general — is shifts in how trials are conducted. It all goes back to people, process, technology, and regulatory bodies — the industry has been pushed to evolve across all four of these pillars in ways that go beyond decentralized trials. Taking risk-based approaches to conducting a trial or reviewing the data in the context of critical data elements requires a different mindset in terms of designing protocols, creating a data review plan, and monitoring the trial. This requires upskilling and evolution of the skill sets that we have for our team.
DA: What are some of the new data acquisition challenges unique to DCTs, like ensuring accuracy in patient-reported adherence and outcomes data?
RI: These are challenges that the industry has been trying to solve with eCOA (electronic clinical outcome assessment) and ePRO (electronic patient-reported outcomes) to collect data directly from patients. New technologies have arisen to address inaccuracy in data acquisition, using analytics, artificial intelligence (AI), and machine learning (ML) to detect anomalies or outliers and pinpoint inaccurately reported data, showing researchers the particular sites or patients that need extra focus.
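The outlier detection described here can be illustrated with a simple robust z-score over patient-reported values. This is a minimal, self-contained sketch, not eClinical Solutions’ actual method; the site identifiers, values, and cutoff are all invented for illustration.

```python
from statistics import median

def robust_z_scores(values):
    """Compute modified z-scores using the median and the median
    absolute deviation (MAD), which are less distorted by the very
    outliers we are trying to find than mean/standard deviation."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return [0.0] * len(values)
    # 0.6745 rescales MAD so scores are comparable to standard z-scores
    return [0.6745 * (v - med) / mad for v in values]

def flag_sites(reports, threshold=3.5):
    """Return sites with at least one reading whose modified z-score
    exceeds the conventional 3.5 cutoff for modified z-scores."""
    scores = robust_z_scores([r["value"] for r in reports])
    return sorted({r["site"] for r, z in zip(reports, scores) if abs(z) > threshold})

# Hypothetical patient-diary readings; site "S03" reports an implausible value.
reports = [
    {"site": "S01", "value": 7.1}, {"site": "S01", "value": 6.9},
    {"site": "S02", "value": 7.4}, {"site": "S02", "value": 7.0},
    {"site": "S03", "value": 42.0}, {"site": "S03", "value": 7.2},
]
print(flag_sites(reports))  # → ['S03']
```

Production systems would use trained ML models rather than a fixed statistical rule, but the principle — score each reading against the distribution and surface the sites or patients that need extra focus — is the same.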
As for ensuring patient adherence, this issue has less to do with the shortcomings of available technology and more to do with how convenient adherence is for the patient. Making the trial processes more convenient and a better experience for patients will increase adherence. One of the benefits of the DCT model is higher patient enrollment and decreased dropout rates, because it eliminates common reasons for patient dropout, like the difficulties of traveling to the trial site. By making trial enrollment easier, decentralized models also increase trial diversity and inclusion.
In general, DCTs solve many more problems than they cause. They address common industry challenges around enrollment, dropout, and diversity and inclusion, while providing faster access to data and reducing cycle times. By taking advantage of the technologies available, being more patient-centric, and using newer technologies to track metrics, DCTs help researchers improve outcomes on problems that are typical of traditional clinical trials.
DA: The pharma industry is driven by innovation but at the same time is slow to change. In the adoption of DCTs, who has been most important in driving those changes, and who took more convincing or still needs to be convinced about the value and data quality of DCTs?
RI: Right now, we are seeing a major convergence across the industry in this regard. Efforts to adopt decentralized trials and remote technologies have been influenced by the pandemic and led by major industry players, which has caused a number of tailwinds that we are seeing now. But, increasingly, change is being driven by collaborative efforts across the industry.
The industry has always sought to accelerate research as much as possible. The pandemic disrupted the entire value chain and really pushed the industry to not only adopt but advance the current technologies. These technologies caused significant tailwinds, because their adoption was necessary to keep up the pace of research that all stakeholders want.
Any major push from big industry players will drive change in the industry and convince reluctant companies to adopt new innovations. For example, CVS recently announced that they’re getting into research and providing remote clinical trial services. These are industry changes on a macro level — CVS is everywhere, so if companies can partner with CVS, they may be more inclined to do research. The pandemic also encouraged collaboration and partnership between the industry and regulatory bodies, and this has given the industry the confidence to invest in new technologies and process changes.
There are different consortia and alliances that have formed to establish best practices around change management and to provide a technology blueprint and recommendations for the implementation of decentralized trials. Collaboration is happening more than ever across the board among software providers, service providers, biotech and pharma industry leaders, and regulatory agencies. I’ve been in this industry for 25 years, and I have never seen this level of intent and excitement. To reflect on this in our company’s 10th year: we are very fortunate and thankful for the journey thus far, but we are even more excited about what’s ahead.
DA: Aside from resistance to change, are there other barriers that are making some stakeholders hesitant to invest in DCTs?
RI: The major barrier is the amount of investment into different technologies that is required to implement DCTs efficiently and at scale. No single platform can provide every capability and address the needs of every stakeholder across all stages of clinical development. The industry is still trying to figure out the optimal way to adopt and streamline new technologies, and that’s where companies like eClinical Solutions fit. We are very excited for the opportunity to provide the technology to break down that barrier to entry.
More specifically, the challenge for our team is: How do we reduce the fragmentation of the digital and tech systems that are needed to conduct decentralized trials? The solution is to create an ecosystem where different digital systems interoperate efficiently, providing seamless data access through a single sign-on; no sponsor wants to log into five or six different systems to review and monitor data. Our goal is to provide a unified experience for end users, solving for the technical complexities behind the scenes.
Solutions to this problem are rapidly evolving. Right now, there are mature clouds that already enable a higher level of interoperability, and we predict that a highly interoperable technology ecosystem is going to rapidly evolve in the next two to three years. It’s all about reducing fragmentation to create better digital experiences for everyone involved, from patients, to payers, to sponsors, to providers.
DA: On a conceptual level, how is it possible to integrate such heterogeneous data?
RI: In the last 10 years, I have seen our sponsors and the larger industry increasingly pushing software providers to ensure that they have APIs to enable interoperability. With DCTs and modern trials in general, software providers are realizing the need to have open systems, because we know for a fact that a single software provider is not going to provide one cloud to meet all the needs and provide all the capabilities that a sponsor could possibly want.
At eClinical Solutions, we know that to succeed as software providers we must create interoperable systems that fit within the technology ecosystem that a sponsor company is investing into. We also need to think about future readiness: how to ensure that, as sponsor needs evolve, these clouds can evolve as well. On a fundamental level, this takes a data-centric, platform-centric approach and APIs designed for future integration from the beginning of development, not after the fact.
On the acquisition side, the conversation about integrating electronic health record (EHR) systems with EDC has been going on for 20 years. The industry’s standards are maturing — on the Clinical Data Interchange Standards Consortium (CDISC) side, we have the Operational Data Model (ODM) standards, which support the interoperability and movement of data from one system to another. Everybody involved in the value chain realizes that there’s more value creation when you work with interoperable systems and fit nicely into a larger ecosystem rather than working in your own silos and building closed systems and closed processes. There’s currently a lot of investment into standards for data exchange as well, and I see that advancing in the near future.
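Because CDISC ODM is an XML format, moving clinical data between systems comes down to emitting and parsing a shared schema. The snippet below is a heavily simplified, hypothetical ODM-style fragment — real ODM 1.3 documents are namespaced and carry study metadata, audit trails, and signatures — and the parsing sketch just illustrates the idea of flattening exchanged data for a downstream pipeline.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified ODM-style fragment. Real ODM 1.3 lives
# in the http://www.cdisc.org/ns/odm/v1.3 namespace with far more metadata.
ODM_FRAGMENT = """
<ODM>
  <ClinicalData StudyOID="ST-001">
    <SubjectData SubjectKey="SUBJ-042">
      <ItemGroupData ItemGroupOID="VITALS">
        <ItemData ItemOID="SYSBP" Value="128"/>
        <ItemData ItemOID="DIABP" Value="82"/>
      </ItemGroupData>
    </SubjectData>
  </ClinicalData>
</ODM>
"""

def extract_items(odm_xml):
    """Flatten ItemData elements into (subject, item, value) rows that a
    downstream data hub or analytics pipeline could ingest."""
    root = ET.fromstring(odm_xml)
    rows = []
    for subject in root.iter("SubjectData"):
        key = subject.get("SubjectKey")
        for item in subject.iter("ItemData"):
            rows.append((key, item.get("ItemOID"), item.get("Value")))
    return rows

print(extract_items(ODM_FRAGMENT))
# → [('SUBJ-042', 'SYSBP', '128'), ('SUBJ-042', 'DIABP', '82')]
```

The value of a shared standard like ODM is precisely that both the emitting and the consuming system can agree on this structure in advance, so no bespoke mapping is needed for each pairwise integration.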
DA: On a related note, what kinds of top-level regulatory or industry standards are needed to make the standardization of data easier downstream?
RI: There are two standards that are playing a significant role: First, the FHIR (Fast Healthcare Interoperability Resources) standards, which support data integration and interoperability between healthcare systems and clinical research systems. Second, on the CDISC side, ODM has been available and supporting interoperability for a long time. I believe that moving forward, the industry will base its solutions mostly on technology rather than standards, because the sources, variety, and collection modalities of data are only going to increase — and the structures of all this data are completely diverse.
It’s going to be challenging to achieve interoperability with only standardization as the primary driver, because standardizing data first and then trying to bring all the data together through interoperability could be a costly process. Technology can significantly help with this problem — beyond the use of APIs, there are also more advanced techniques that are still nascent in pharma and biopharma, like data fabric and data mesh, which allow us to bring all the data together to enable a frictionless dataflow downstream to end users.
DA: Do you think that interoperability between different software is happening organically, and will these organic efforts be sufficient to accomplish what the industry needs, or will there be a need for someone to push to make it happen?
RI: As part of their design principles, some developers are building software with interoperability in mind, but there are still software providers in our space who are building closed systems. Ultimately, there needs to be a push from sponsors to accelerate this interoperability innovation. We have observed that when sponsors push software providers to integrate their services, provider companies collaborate more efficiently and think more about future readiness than they typically do when providers reach out to each other on their own. It also makes us think about how our APIs must evolve as both systems evolve, about end-to-end use cases, and about capabilities beyond APIs that need to be embedded within our respective software.
Another motivating factor for collaboration is that as everyone else caters to interoperability, software providers that don’t will eventually be left behind. In our experience, interoperability is a top priority for sponsors today. We are now receiving requests for proposals (RFPs) from the top 25 largest biotech and pharma companies in the world. Every RFP includes a section dedicated specifically to interoperability, because large sponsors have data lakes from internal systems in which they’ve already invested. They need to ensure that any new systems they implement will fit nicely within their existing ecosystems. Software providers cannot deliver that unless we think about interoperability.
DA: Earlier, you broke down the clinical data life cycle into three components: acquisition, pipelines, and consumption. Among these three, where do you see the most focus of innovation in the coming years?
RI: I sincerely believe that there are significant opportunities across all three components. On the data acquisition side, collecting data seamlessly from patients is still not a straightforward process. There are significant opportunities to create better experiences for patients in support of data acquisition and generation.
Regarding the data pipeline, there are huge opportunities for innovation, because the volume of data generated in trials will continue to increase, and because new technologies are enabling better solutions. For example, we are now able to leverage AI and ML to automate the data pipeline further by automatically identifying data attributes and classifying data.
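Automatic identification and classification of data attributes is, in its simplest form, pattern recognition over incoming column values. The rule-based sketch below is illustrative only — production pipelines of the kind described here would use trained ML models, and every field name and pattern in this example is invented.

```python
import re

def classify_column(values):
    """Guess a semantic type for a column from its string values — a crude,
    rule-based stand-in for ML-driven attribute classification."""
    if all(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v) for v in values):
        return "date"          # ISO-style dates, e.g. 2024-01-05
    if all(re.fullmatch(r"[A-Z]{2,4}-\d+", v) for v in values):
        return "identifier"    # prefixed IDs, e.g. SUBJ-042
    if all(re.fullmatch(r"-?\d+(\.\d+)?", v) for v in values):
        return "measurement"   # plain numeric readings
    return "text"              # free-text fallback

# Hypothetical raw columns as they might arrive from a new data source.
columns = {
    "visit_dt": ["2024-01-05", "2024-02-09"],
    "subj_id": ["SUBJ-042", "SUBJ-043"],
    "weight_kg": ["71.2", "68.5"],
    "notes": ["tolerated well", "mild headache"],
}
labels = {name: classify_column(vals) for name, vals in columns.items()}
print(labels)
```

Once columns are labeled this way, downstream steps — standardization, validation, routing into review queues — can be attached automatically rather than configured by hand for each new source.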
As for data consumption, there are huge opportunities for innovation, because that’s where the data are accumulated. We invest millions of dollars to collect data, but the question remains: Are we extracting the maximum value out of it? This is where we can leverage AI, ML, and other advanced techniques to not only gain insights, reduce cycle times, look for trends, and enable faster trial submissions, but also tap into historical data for exploratory purposes. We are now seeing a convergence between real-world data (RWD) and clinical research data. Since RWD already exist in EHR systems, there are efforts in the industry to gain insights from RWD, but current techniques to analyze these data are inefficient and will not scale. That’s where I see a huge opportunity to apply ML and deep learning techniques to harness the value of these data for better insights.
DA: You mentioned using data in smarter ways to create more adaptive trial designs. Do you see other areas where a better use of data can improve the way that sponsors conduct clinical research?
RI: Absolutely. Beyond data analysis and submission, there are so many other aspects of the clinical trial value chain — there are safety and regulatory components, as well as working with payers and providers. Before bringing a new therapy to patients, drug makers should be able to gather feedback from payers and providers and use it to optimize what therapies to focus on and to understand early on how effective or safe the therapy would be. This information could reduce overall cycle times, and even a reduction of a few months can save not only lives but also significant costs.
At the same time, if sponsors can gain insights into safety or efficacy data early on, that could lead to better therapy and trial designs and more efficient decision-making to pivot as needed. There are tremendous opportunities to connect different digital systems across the entire R&D value chain, leveraging interoperability to gain insights on a macro level. This is already happening at a smaller scale, but as technology matures and becomes easier to implement, we’ll see significant adoption of these feedback strategies.
DA: Is there anything that you want to share about eClinical Solutions’ products and service offerings or how you support clients with DCTs?
RI: When we started this journey 10 years ago, we were not necessarily thinking about DCTs yet, but we were building the software to solve the current data problem, which we had anticipated back then. We worked to provide the capability to bring data from any source, structure, or format into a hub and transform it seamlessly, easily, and intelligently for downstream analytics, decision-making, regulatory submissions, and other uses.
As our vision has evolved since then, it has aligned nicely with what the industry is trying to do with decentralized trials. We believe that elluminate, our data cloud, is a foundational technology component for decentralized trials, and we are investing heavily into this data management workbench, using AI and ML. We know that the current methods of reviewing and cleaning data will not scale as data diversity increases, so we are building machine learning models to provide augmented and automated capabilities for reviewers looking at data trends.
Earlier, I touched briefly on risk-based approaches. We have a product line built around risk-based quality management that enables central monitoring teams and clinical leads to implement risk-based approaches, focus on critical data points, and gain insights that help them pinpoint particular sites or a particular cohort of patients that require further action.
Another one of our new product lines that I’m very excited about is a statistical computing environment, which we refer to as SCE. At the end of the day, transforming the increasing amount of incoming trial data requires more advanced capabilities and mature systems. Right now, the industry has been using siloed systems: all the data in one system, the computational capabilities in a different system, and analytics in yet another. To address this fragmentation in the value chain, we are bringing as many systems together into one cloud as possible, eliminating sponsors’ need to copy and transfer data.
We are quite excited about where the industry is right now and where it’s headed. And we believe that our cloud is a foundational component to enable decentralized trials with multiple products on top of our cloud.
DA: eClinical Solutions began this journey with an almost “science fiction” vision of what an ideal system would look like. Is elluminate approaching your original vision, or is there still work that needs to be done to achieve it?
RI: We have a very aspirational vision that continues to evolve. We have already achieved our original vision from years ago, but by now we have new goals in mind that will require three or four years of R&D. Five years back, we had a roadmap among our internal teams to develop these capabilities and own the market. Now we have those capabilities, but the industry’s expectations and needs have also evolved rapidly, and so have the opportunities to leverage technology across the industry. Right now, we work with over 100 clients, some of which are really big and have complex roadmaps and needs. We realize that our system needs to evolve with their feedback.
There is no such thing as a static state; innovation will never be over. The capabilities and expectations of this industry will continuously evolve, and as we keep innovating to solve current problems, new ones will surface — and this process will continue indefinitely. There will always be bigger and better opportunities to keep reducing cycle time so that we can bring therapies to patients faster, and we are excited to continue on this journey of innovation.