In cluster-randomized trials, intervention occurs at the cluster level (such as clinics or hospitals) and outcomes are measured at the individual level.
A key design goal in cluster-randomized trials is to control baseline covariates at the design stage, through stratification, pair matching, or constrained randomization.
Constrained randomization allows a researcher to assess balance for different allocation schemes and to randomize only within a constrained space of “balanced” schemes.
Two approaches to statistical analysis are model-based inference and permutation inference; with either, the analysis of trial results should account for the design.
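As a rough illustration of the permutation (design-based) approach, the sketch below re-randomizes treatment labels at the cluster level, which is what accounting for the design means when clusters rather than individuals were randomized. The function name, county labels, and outcome values are hypothetical, not from the talk.

```python
import itertools
import statistics

def permutation_pvalue(cluster_means, treated):
    """Design-based permutation test for a two-arm cluster-randomized trial.

    Treatment labels are re-randomized over clusters (the unit of
    randomization), and the observed difference in arm means is compared
    with the full re-randomization distribution.
    """
    clusters = sorted(cluster_means)
    k = len(treated)

    def diff(arm):
        trt = [cluster_means[c] for c in arm]
        ctl = [cluster_means[c] for c in clusters if c not in arm]
        return statistics.mean(trt) - statistics.mean(ctl)

    observed = diff(treated)
    # Enumerate every way k of the clusters could have been treated.
    perms = [diff(set(s)) for s in itertools.combinations(clusters, k)]
    # Two-sided p-value: share of re-randomizations at least as extreme.
    return sum(abs(d) >= abs(observed) for d in perms) / len(perms)

# Hypothetical example: mean immunization rates in 8 counties,
# 4 of which were randomized to the intervention.
means = {"A": 0.61, "B": 0.55, "C": 0.70, "D": 0.52,
         "E": 0.64, "F": 0.50, "G": 0.67, "H": 0.49}
p = permutation_pvalue(means, treated={"A", "C", "E", "G"})
print(f"p = {p:.3f}")
```

With only 8 clusters there are just 70 possible allocations, so the smallest attainable two-sided p-value is 2/70; this is exactly why small cluster-randomized trials lean so heavily on careful design.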
The “Reminder/Recall Immunization Study” example demonstrates constrained randomization with 16 counties (clusters), balanced to ensure that urban and rural counties were equally represented in the control and intervention groups.
Constrained randomization is often preferable for balancing multiple baseline covariates in small cluster-randomized trials because, unlike stratification, it avoids categorizing continuous covariates.
Software to perform constrained randomization is available in both Stata and R from the Duke biostatistics group.
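As a minimal sketch of the constrained-randomization idea (not the Duke group's actual Stata/R implementation), the code below enumerates candidate allocation schemes, scores each for balance on a single continuous covariate, and randomizes only within the best-balanced subset. Clinic names, covariate values, and the 10% balance cutoff are all illustrative assumptions.

```python
import itertools
import random
import statistics

def constrained_randomization(covariates, arm_size, top_fraction=0.1, seed=2024):
    """Pick one allocation at random from the best-balanced subset of schemes.

    covariates: dict mapping cluster id -> baseline covariate value
    arm_size: number of clusters assigned to the intervention arm
    top_fraction: fraction of schemes retained in the constrained space
    """
    clusters = sorted(covariates)
    # Enumerate every possible assignment of clusters to the intervention arm.
    schemes = list(itertools.combinations(clusters, arm_size))

    def imbalance(scheme):
        # Balance metric: absolute difference in covariate means between arms.
        arm1 = [covariates[c] for c in scheme]
        arm0 = [covariates[c] for c in clusters if c not in scheme]
        return abs(statistics.mean(arm1) - statistics.mean(arm0))

    # Constrain the randomization space to the best-balanced schemes...
    schemes.sort(key=imbalance)
    constrained = schemes[: max(1, int(len(schemes) * top_fraction))]
    # ...and randomize within that space.
    return random.Random(seed).choice(constrained)

# Hypothetical example: 8 clinics with a continuous baseline covariate
# (e.g., baseline immunization rate), 4 assigned to the intervention.
rates = {f"clinic{i}": r for i, r in enumerate([0.62, 0.55, 0.71, 0.48,
                                                0.66, 0.52, 0.69, 0.58])}
chosen = constrained_randomization(rates, arm_size=4)
print("Intervention arm:", chosen)
```

Note that because the covariate is used directly in the balance metric, no categorization of the continuous covariate is needed, which is the advantage over stratification mentioned above.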
Joshua C. Denny, MD, MS, FACMI
Professor of Biomedical Informatics and Medicine
Director, Center for Precision Medicine
Vice President for Personalized Medicine
Vanderbilt University Medical Center
Early Progress on the All of Us Research Program
Pragmatic clinical trial; All of Us; Electronic health record
The Framingham Heart Study was a major influence on All of Us because it followed a small number of participants very closely and showed a significant impact in lowering cardiovascular disease.
Engagement and diversity are core goals of the All of Us Program, and seeing patients as partners is a guiding principle.
Participants can enroll in the protocol through either the traditional route of health care provider organizations, or as direct volunteers.
Centralized electronic health record (EHR) data will be broken into three tiers (public, registered, and controlled), with any obvious identifiers removed from all three.
A major goal is to give All of Us participants access to information, including study updates and aggregated results.
The All of Us Program does not intend to replicate the U.S. population, but rather to focus on under-represented populations.
Researchers can apply for data from the All of Us Biobank, a repository that stores and manages biological samples, by posting their research questions publicly. There has been some pushback from researchers about sharing their questions, but the practice appears to promote efficiency and collaboration.
Patient portals and smartphone apps are two vehicles that the All of Us Program will use to deliver information and results back to participants.
Jeffrey (Jerry) G. Jarvik MD MPH
Professor, Radiology, Neurological Surgery and Health Services
Adjunct Professor, Pharmacy and Orthopedics & Sports Medicine
Co-Director, Comparative Effectiveness, Cost and Outcomes Research Center
Director, UW CLEAR Center for Musculoskeletal Disorders
University of Washington
Patrick J. Heagerty PhD
Gilbert S. Omenn Endowed Chair in Biostatistics
Professor and Chair, Department of Biostatistics
University of Washington
The Lumbar Imaging with Reporting of Epidemiology (LIRE) Trial: Subsequent Cross-Sectional Imaging Through 90 Days—Preliminary Results
Pragmatic clinical trial; LIRE; Lumbar imaging; Spinal imaging; Low back pain; Benchmark data
The Lumbar Imaging with Reporting of Epidemiology (LIRE) study hypothesis was that adding prevalence benchmark data to spinal imaging would reduce future injections, surgery, and opioid prescriptions.
A recent study by Fried et al. in Radiology showed that inclusion of benchmark data in lumbar reports is associated with decreased utilization of high-cost low back pain management.
The study has enrolled over 240,000 patients at four sites, with the majority of patients older than 40 and receiving imaging through standard x-rays or MRIs.
Researchers have analyzed data from two of the four sites so far, and results showed a modest reduction in follow-up care for the intervention group versus the control group. Because of the large, complex data set, they will need more time to review the data and examine fixed effects.
The LIRE study used a stepped-wedge design with five waves, in which the randomization was staggered so that all five waves would have received the intervention by the end of the accrual period.
Researchers have consolidated duplicate records within the data set during their analysis by cross-referencing different data pulls over time.
This study could be considered “research to see the details of delivery,” in that researchers learned a great deal about clinicians’ x-ray/MRI ordering behavior, which helped them define what constitutes an index event or an outcome.
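The stepped-wedge logic described above can be sketched generically: sites are randomly dealt into five waves, and each wave crosses from usual care to the intervention one period later than the previous one, so every site has received the intervention by the end. This is an illustrative assumption-laden sketch with hypothetical clinic names, not the LIRE randomization code.

```python
import random

def stepped_wedge_schedule(sites, n_waves=5, seed=7):
    """Randomly assign sites to waves, then build the site-by-period schedule.

    Returns {site: [0/1 per period]}: 0 = usual care, 1 = intervention.
    Periods = n_waves + 1 (one baseline period before any wave crosses over).
    """
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    # Deal sites into waves as evenly as possible.
    waves = {s: i % n_waves for i, s in enumerate(shuffled)}
    periods = n_waves + 1
    # A site in wave w receives the intervention from period w+1 onward,
    # so every site has crossed over by the final period.
    return {s: [1 if p > waves[s] else 0 for p in range(periods)]
            for s in sites}

# Hypothetical example: 10 clinics randomized into 5 waves.
schedule = stepped_wedge_schedule([f"clinic{i}" for i in range(10)])
for site, row in sorted(schedule.items()):
    print(site, row)
```

A useful property of this design is that every site eventually gets the intervention, which can make participation more palatable to sites than a parallel-arm design.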
Kevin A. Schulman, MD
Professor of Medicine
Associate Director, Duke Clinical Research Institute
Visiting Scholar, Harvard Business School
The Healthcare Pivot: Technology and Transformation of Healthcare
Pragmatic clinical trial; mHealth; Electronic health data; Data mining; Machine Learning; Health IT
Health IT implementation is affected by multiple factors including data analytics, business workflow and process improvement, and patient engagement.
Mobile health (mHealth) can deliver actionable data to clinicians, with detailed reporting, peer comparisons, and provider “report cards.”
The mPower initiative aims to enable patients to take ownership over their healthcare, including storing and accessing their electronic health records via their mobile phones.
The HeartStrong trial demonstrated the positive effect of electronic reminders and online social support on medication adherence and overall patient outcomes.
What if 50% of healthcare was delivered via mHealth by 2025?
A family of four earning the median income of $75,000 will pay over $18,000 in healthcare expenses per year, so it is critical to build value and efficiency into the system.
Informatics is central to the five fastest-growing companies in the U.S., so leveraging health IT will help the digitization of healthcare data scale.
Who will drive innovation in healthcare in the future? Academic medical centers? Insurance companies? It will depend on the leadership of a group deciding that innovation will solve large-scale societal healthcare issues.
As many as 30% of antibiotic prescriptions are unnecessary. Can behavioral economics explain and help guide how to change this clinician prescribing behavior?
Habit, pressure from patients, and a “just to be safe” mentality are the most common factors driving inappropriate antibiotic prescribing, but antibiotic stewardship is critical to improving patient outcomes.
The Behavioral Economics/Acute Respiratory Infection (BEARI) trial looked at three behavioral interventions to reduce inappropriate antibiotic prescribing for acute respiratory infections: suggested alternatives to antibiotics, accountable justification, and peer comparison.
Because doctors are people who are affected by emotion and social interaction, peer comparison had the greatest effect on prescribing behavior, and the effect persisted even after the intervention period ended.
Overall, clinicians in the BEARI trial expressed desire to follow guidelines for good antibiotic stewardship, but some of the responses in the intervention group indicated a misunderstanding of the guidelines.
Is some of the impact of the interventions due to the Hawthorne effect, in that these clinicians knew they were enrolled in a trial and were thereby aware that they were being watched and their work reviewed?
Older doctors have been shown to prescribe inappropriately at a higher rate than younger doctors, but the question remains whether this reflects generational differences in training or decision fatigue over time.
There have been significant efforts in recent years by the Centers for Disease Control and Prevention and other public health groups to spread awareness about antibiotic resistance and the importance of good antibiotic stewardship, so it is possible that outside factors also affected clinician behavior.
The NIH Collaboratory is pleased to announce that the new episode of the Grand Rounds podcast is now available, featuring Dr. Richard Platt of Harvard Pilgrim Health Care Institute and Dr. Christopher Granger of Duke University. In this episode, Drs. Platt and Granger speak with moderator Dr. Adrian Hernandez about the IMPACT-AFib atrial fibrillation trial and the role of the FDA’s Sentinel Initiative in leveraging pharmacy data to find eligible participants. In the trial, researchers used Sentinel pharmacy data to identify eligible patients with at least one oral anticoagulation prescription fill, and the speakers describe how the platform provided an efficient and cost-effective way to access the data of over 80,000 potential participants.
Click on the recording below to listen to the podcast.
We encourage you to share this podcast with your colleagues and tune in for our next episode with Grand Rounds speaker Dr. Andy Faucett and his presentation “Considerations for the Return of Genomic Results,” which will be posted the week of February 19th.
Noelle M. Cocoros, DSc, MPH
Epidemiologist, Department of Population Medicine
Harvard Medical School and Harvard Pilgrim Health Care Institute
Christopher B. Granger, MD, FACC, FAHA
Professor of Medicine, Duke University
Director, Cardiac Care Unit
Duke University Medical Center
Richard Platt, MD, MS
Professor and Chair, Department of Population Medicine
Harvard Medical School and Harvard Pilgrim Health Care Institute
Sean Pokorney, MD
Assistant Professor of Medicine, Duke University
IMPACT-AFib: An 80,000 Person Randomized Trial Using the Sentinel Initiative Platform
Pragmatic clinical trial; Sentinel Initiative; IMPACT-AFib; Atrial fibrillation; Data sharing
The Sentinel Initiative uses the Common Data Model to curate and distribute large amounts of electronic health record (EHR) data from a diverse group of data partners.
IMPACT-AFib (an atrial fibrillation trial) used Sentinel registry data to find eligible patients with at least one oral anticoagulation prescription fill.
The trial looked at usual care with delayed provider anticoagulation intervention versus early patient and provider anticoagulation intervention, using access to pharmacy records.
Is there an ethical question raised by delaying intervention in the usual care group?
Oral anticoagulant (OAC) underuse is a public health priority and also a priority of health plans, which made health plan stakeholders very engaged in the IMPACT-AFib trial.
IMPACT-AFib found efficiencies with a single IRB that facilitated streamlined processes across multiple institutions.
Researchers weighed practical considerations against ethical concerns: by consenting individuals for the trial, they would be performing a type of intervention, thereby undermining the comparison between a true control group and the intervention group.
The FDA sponsored the IMPACT-AFib trial to demonstrate feasibility, but researchers hope that other sponsors will be open to trials leveraging Sentinel in the next year or so.
Michael Pencina, PhD
Professor of Biostatistics and Bioinformatics, Duke University
Director of Biostatistics
Duke Clinical Research Institute
Does Machine Learning Have a Place in a Learning Health System?
Machine Learning; Artificial Intelligence; AI; Learning Health Systems
Machine learning has many different applications for generating evidence in meaningful ways in a learning health system (LHS).
Although other industries are using machine learning, the health care industry has been slow to adopt artificial intelligence (AI) methodologies.
The Forge Center was formed under the leadership of Dr. Robert Califf and uses team science—biostatisticians, engineers, computer scientists, informaticists, clinicians, and patients collaborate to develop machine learning solutions and prototypes to improve health.
In a learning health system, the process is to identify the problem, formulate steps to solve it, find the right data and perform the analysis, test the proposed solution (by embedding randomized experiments in the LHS), and implement or modify the solution.
Machine learning is a small but important piece of an LHS; its methods are characterized by the use of complex mathematical algorithms trained and optimized on large amounts of data.
Demonstrating enhanced value of machine learning over existing algorithms will be an important next step. An ongoing question is how models get translated into clinical decision making: machine learning is a tool to develop a model, but implementing the findings will require team science.
Prediction models can be calibrated to work across health systems to an extent, but there are many unique features of individual health systems, so large health systems should use their own data to optimize the information and learning in a specific setting.
There are key issues related to accurate ascertainment of data, especially regarding completeness. For example, inpatient data collected during a hospital stay are likely to yield models that have value; if the data rely on events that happen outside the system, it can be harder to get the complete picture.
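One common way to adapt a prediction model to a specific health system, in the spirit of the calibration point above, is "recalibration-in-the-large": keep the developed model's coefficients but refit a single intercept shift on local data so that average predicted risk matches the local event rate. The sketch below is a generic illustration with made-up numbers, not a method described in the talk.

```python
import math

def recalibrate_intercept(linear_predictors, outcomes, steps=200, lr=0.5):
    """Recalibration-in-the-large: fit one intercept shift so the model's
    mean predicted risk matches the local event rate, keeping the original
    model coefficients fixed.

    linear_predictors: log-odds from the original model for local patients
    outcomes: observed 0/1 outcomes in the local health system
    """
    a = 0.0  # intercept correction, fitted by gradient ascent on log-likelihood
    n = len(outcomes)
    for _ in range(steps):
        preds = [1 / (1 + math.exp(-(lp + a))) for lp in linear_predictors]
        grad = sum(y - p for y, p in zip(outcomes, preds)) / n
        a += lr * grad
    return a

# Hypothetical example: a model developed elsewhere systematically
# overpredicts risk in the local system (local event rate is lower).
lp = [-2.0, -1.0, 0.0, 1.0, -1.5, -0.5, 0.5, -2.5]   # original log-odds
y = [0, 0, 0, 1, 0, 0, 0, 0]                         # local outcomes
shift = recalibrate_intercept(lp, y)
print(f"intercept shift: {shift:.2f}")
```

A negative shift scales all predicted risks down; more thorough approaches also rescale the slope or refit the whole model when enough local data exist.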