We’ve been discussing simplifying protocol designs for decades. And yet, a phase III protocol today collects nearly 6 million data points on average, a figure that’s been climbing by roughly 11% annually. Despite genuine efforts across the industry to reduce protocol complexity, the trend line hasn’t budged.
The reasons are partly scientific and operational. They are also deeply human.
The forces we can’t ignore
Some complexity in our protocols is unavoidable and even necessary. Our understanding of disease biology has advanced, allowing us to characterize conditions with far greater precision than in the past. Demonstrating efficacy for a novel oncology therapy or a precision medicine approach requires sophisticated biomarkers, advanced imaging, and carefully constructed endpoints. Global trials must navigate regulatory requirements that vary by country, adding design complexity. Decentralized study components, while designed to increase flexibility, introduce new operational demands. These are legitimate drivers of data volume, and addressing them requires scientific judgment and regulatory alignment.
But protocol complexity also creeps in through avenues that have little to do with scientific need. New patient-facing technologies, for example, make participation more convenient while also making it effortless to collect additional data points, sometimes simply because the capability now exists. Without intentional guardrails, convenience can unintentionally become a conduit for excess.
There’s also the pull of habit: protocol templates inherited from earlier studies, carrying forward assessments that made sense for phase II but may not be essential for phase III. There’s competitive pressure: the instinct to position a therapy more strongly at launch by expanding the data we collect. And then there’s something harder to name but easy to recognize: a kind of “fear of missing something” that pushes teams to add exploratory objectives, just in case.
Within clinical teams, individuals understandably want to ensure nothing valuable is overlooked. Team members may add assessments they’ve seen in recent publications or include exploratory objectives to demonstrate thoroughness. Each decision seems reasonable in isolation, but data collection compounds in ways that aren’t always intuitive. One additional blood draw doesn’t simply add one unit of burden; the effect is nonlinear, so that 1 + 1 ends up closer to 2.8. It adds operational overhead for sites, attention demands for data management teams, and time for participants who are already giving a great deal of themselves.
The cumulative effect is significant. A new collaborative study between TransCelerate BioPharma and the Tufts Center for the Study of Drug Development (Tufts CSDD) at Tufts University School of Medicine found that approximately one-third of all procedures and data points fall into categories that are either non-core or non-essential to a protocol’s primary and key secondary endpoints. Much of this activity serves exploratory or “future use” purposes, particularly questionnaires and patient diaries. While these help capture essential participant perspectives, each added question and assessment increases burden, so it’s important to consider whether each one meaningfully contributes to the study.
How current practices affect sites and patients
Protocol complexity has long been one of the most frequently cited challenges for investigative sites. Its downstream effects include longer startup timelines, more operational negotiation, and greater executional strain. It’s also the top driver of FDA non-compliance complaints. In an environment where site capacity is finite, even small increases in procedure volume can have a noticeable impact on feasibility and performance.
For participants who are giving a piece of themselves — blood, biopsies, time, and risk — the burden of participation is more personal. It can include extended visits, more frequent appointments, additional tests, and questionnaires that stretch on. For someone juggling work, caregiving, or limited transportation, these demands can determine whether participation is even possible. Overly intensive protocols can slow enrollment and inadvertently limit participation among groups already underrepresented in research.
And for sponsors, the operational consequences multiply. Data quality suffers when teams are responsible for managing millions of data points, and even excellent operations groups can find their attention diluted. Sites facing staffing shortages may deprioritize complex studies in favor of more streamlined ones. The clinical trial enterprise depends on willing participants and capable sites. When protocol requirements push these integral partners to take on too much burden, nobody wins.
The case for intentionality
A path forward is emerging, and it starts with fit-for-purpose data collection. This principle — now emphasized in the final ICH E6(R3) Good Clinical Practice guideline — asks teams to rigorously evaluate which data are essential to answer a protocol’s scientific questions and which can be removed without compromising safety or interpretability.
The goal isn’t to collect less data for its own sake, but to collect the right data, recognizing that volume and value aren’t the same thing. When attention is spread across millions of data points, the focus on primary endpoints can drift. When sites are overburdened, execution suffers. When participants face excessive demands, they may decline to enroll or struggle to remain in studies.
Sponsors who approach data collection more intentionally can see real benefits, such as more manageable protocols for sites, clearer operational expectations, reduced startup timelines, and more efficient use of staff. Even a small reduction in unnecessary data across a large portfolio can fund future studies. For participants, streamlined procedures mean shorter visits, fewer invasive tests, and a more positive research experience. For the study itself, focusing resources on essential data can improve quality and accelerate timelines.
What sponsors can do now
Organizations looking to apply these principles can start by embedding burden-reduction into protocol design from the beginning:
- Classify each planned procedure based on its relationship to primary or key secondary endpoints
- Review protocols through the lens of site and participant burden, examining frequency, duration, and invasiveness
- Engage investigative sites earlier to identify operational friction before protocols are finalized
- Use digital tools to help diagnose and surface redundant or low-value data elements
- Remove non-core procedures, and reduce the frequency of procedures conducted more often than necessary to demonstrate a primary or key secondary outcome or fulfill a regulatory requirement
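As an illustration of the first of these steps, here is a minimal sketch of how a team might tag each planned procedure against primary and key secondary endpoints and tally the share of data points that are non-core. The procedure names, categories, and counts are hypothetical, invented purely for illustration, not drawn from any real protocol or from the Tufts CSDD study:

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    name: str          # hypothetical schedule-of-assessments entry
    data_points: int   # data points collected per administration
    visits: int        # number of visits at which it is performed
    category: str      # "core", "non-core", or "exploratory"

# Illustrative-only schedule; a real protocol would have far more entries.
schedule = [
    Procedure("vital signs", 6, 10, "core"),
    Procedure("tumor imaging", 40, 4, "core"),
    Procedure("exploratory biomarker panel", 120, 6, "exploratory"),
    Procedure("quality-of-life diary", 25, 12, "non-core"),
]

def burden_summary(procedures):
    """Return total data points per participant and the share not tied
    to primary or key secondary endpoints (non-core plus exploratory)."""
    total = sum(p.data_points * p.visits for p in procedures)
    non_essential = sum(p.data_points * p.visits
                        for p in procedures if p.category != "core")
    return total, non_essential / total

total, share = burden_summary(schedule)
flagged = [p.name for p in schedule if p.category != "core"]
```

Even this toy tally makes the review concrete: the `flagged` list gives the team a starting agenda for the burden discussion, and the non-essential share can be tracked across protocol drafts as procedures are trimmed.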
These principles can also be applied during execution. When trials encounter enrollment challenges or sites signal that data burden is affecting performance, structured analytics can help identify areas for real-time adjustment.
Rebalancing data needs and burden in today’s trials
For years, addressing protocol complexity has often felt like a “when we have time” problem — important in principle, but easy to defer when facing the immediate pressures of getting a study launched. But clinical portfolios are expanding while site capacity remains constrained, and the economics of drug development continue to tighten. Meanwhile, patients are making their own decisions about whether clinical trial participation is worth their time.
None of this is new information. The challenge has always been making the reduction of complexity a priority when there are so many other demands competing for attention. What’s different now is that we have better tools to measure the problem, regulatory frameworks like ICH E6(R3) that encourage a more intentional approach, and mounting pressure that makes inaction harder to justify.
Fit-for-purpose design serves multiple goals at once: scientific quality, respect for participants, and the long-term sustainability of clinical research. It also provides a counterweight to two powerful forces driving complexity today, namely the growth of patient technologies that make additional data collection effortless and the tendency to add measures “just in case” regulators might someday request them. Both impulses are understandable, but they require clear boundaries and shared expectations. Ultimately, regulatory alignment — not only in formal guidance but in how expectations are perceived by sponsors — is central to shifting behaviors around data collection. The teams that embrace this approach will run better trials. And get to the answers first.
Eliav Barr is Chair of the Board at TransCelerate BioPharma and Senior Vice President, Global Clinical Development and Chief Medical Officer at Merck, known as MSD outside of the United States and Canada. He oversees late-stage development programs across multiple therapeutic areas, including oncology, infectious diseases, neurology, and cardiometabolic conditions. Throughout his career, Dr. Barr has led several large global programs, including the clinical development of vaccines and therapeutics that have shaped standards of care worldwide. He has also served in senior scientific and medical affairs leadership roles guiding late-phase strategy, evidence generation, and global medical governance. Dr. Barr earned his medical degree from Jefferson Medical College and completed residency and post-doctoral training at Johns Hopkins, the University of Michigan, and the University of Chicago.
Kenneth Getz is Executive Director and Research Professor at the Tufts Center for the Study of Drug Development, where he leads research examining drug development management, protocol design practices, clinical data usage, and global site landscape trends. An internationally recognized expert on clinical trial efficiency and innovation, he has published extensively in peer-reviewed journals and speaks frequently at industry and academic forums. He also holds multiple board appointments across the public and private sectors and is the founder of CenterWatch, a leading source of clinical trials information. Getz received his MBA from Northwestern University’s Kellogg School of Management and his bachelor’s degree from Brandeis University.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
