Dr. Thomas Newman Discusses Observational Studies on the Clinical Trial Podcast

Introduction

What separates good clinical research from misleading conclusions?

Often, it comes down to study design.

In this episode of the Clinical Trial Podcast, Dr. Thomas Newman, Professor of Epidemiology and Biostatistics at UCSF and co-author of Designing Clinical Research, breaks down one of the most misunderstood areas in research: observational studies.

While randomized controlled trials are often considered the gold standard, observational studies play a critical role in answering real-world clinical questions. But they come with their own challenges – bias, confounding, and interpretation errors that can easily lead researchers astray.

This episode gives you a practical framework to think more clearly about designing, analyzing, and interpreting observational research.

What You’ll Learn

  • A simple framework for designing observational studies
  • When observational studies are more appropriate than RCTs
  • The biggest threats to validity: bias, confounding, and chance
  • How to interpret associations vs. causal effects
  • Practical strategies for handling confounders in analysis
  • How to build intuition for statistics – even if you struggle with it

About the Guest

Dr. Thomas Newman, MD, MPH, is a physician-scientist and educator at the University of California, San Francisco. He is a Professor of Epidemiology and Biostatistics and is affiliated with the Department of Pediatrics.

His research applies epidemiologic methods to critical clinical problems in child health, including neonatal jaundice, infections in newborns, urinary tract infections, and cholesterol screening.

Dr. Newman is widely recognized for his contributions to clinical research education and is co-author of Designing Clinical Research (5th edition) and Evidence-Based Diagnosis.

Connect with Thomas Newman

Company: UCSF Department of Epidemiology & Biostatistics

Show Notes

[0:00] Podcast Intro

  • Opening theme: great clinical research professionals are built through learning and practice.

[0:20] Episode Setup: Study Design + Focus on Observational Studies

  • Clinical trial design is framed as a core career skill.
  • Episode focus: when observational studies are appropriate vs. RCTs, and how to design them for trustworthy results.
  • Guest introduction: Dr. Thomas Newman (UCSF), co-author of Designing Clinical Research.

[2:46] Why This Topic Matters

  • Kunal shares how he recommends Newman’s textbook to students and sets expectations for practical learning.

[2:49] How Dr. Newman Got into Study Design as a Clinician

  • Dr. Newman notes much of his work is observational rather than clinical trials.
  • He references using electronic medical records (EMRs) from Northern California / Kaiser Permanente data sources.

[3:35] Defining Clinical Trials vs. Clinical Research (and Observational Studies)

  • Clinical trials: a subset of clinical research where treatment is assigned by investigators (experimental studies).
  • Observational studies: investigators do not assign treatment; they compare treated vs. untreated groups.
  • Core challenge: estimating causal effects when treatment is not randomized and groups differ at baseline.

[4:43] Career Path: Pediatrics, Epidemiology, and Research Methods

  • Dr. Newman shares his training path and interest in teaching, pediatrics, and epidemiology.
  • Research as part of academic promotion plus genuine curiosity about clinical and methods questions.

[5:32] Natural Experiments and Instrumental Variables: Vietnam Draft Lottery Example

  • Inspiration from a CDC-related discussion about Agent Orange leads to using the draft lottery as a quasi-random assignment.
  • Concept: treatment assigned randomly by birth dates (eligibility for the Vietnam-era draft).
  • Example outcome: higher deaths from suicide and motor vehicle accidents among those more likely to be drafted.

[8:44] Biggest Barriers for Clinicians Starting Research

  • Recommendation: get formal research training (certificate or master’s pathways).
  • Barriers: time, cost, and finding funding/support to pursue training.
  • Next critical step: find mentors with the roles and experience you want.

[9:11] Choosing Feasible Questions + Finishing What You Start

  • Mentors help identify questions that are novel, feasible, and worth finishing.
  • Many projects stall due to underestimated workload, competing priorities, and roadblocks.

[11:41] FINER Criteria Refresher

  • Discussion of the FINER framework for good research questions: Feasible, Interesting, Novel, Ethical, Relevant.

[13:16] Practical Path to Publication: Start Small and Present

  • Host shares early experience in NICU pediatrics research (retrospective analysis, IRB, poster presentation).
  • Emphasis on mentorship to navigate IRB, analysis, and dissemination (posters, conferences, papers).

[15:30] Where to Start When Designing an Observational Study or Registry

  • Begin with a strong clinical question anchored in real uncertainty from patient care.
  • Consider observational study types:
    • Etiologic (risk factors, case-control)
    • Descriptive (frequency of outcomes or adverse events)
    • Causal treatment-effect estimation (most challenging)

[21:52] High-Level Framework: Observational Studies That Emulate Randomized Trials

  • Focus on observational designs that estimate causal treatment effects.
  • Distinguish short, time-limited treatments (e.g., procedures) vs. long-term chronic therapies.
  • Key challenge: handling treatment changes over time (start, stop, restart).

[24:15] Intention-to-Treat Logic in Observational Studies: Initiators vs. Non-Initiators

  • In RCTs, intention-to-treat analyzes participants by original assignment.
  • Common mistake in observational studies: restricting to people who stayed on treatment for a minimum duration (uses future information).
  • Preferred approach: compare initiators to non-initiators at a defined starting point (time zero), even if they later stop.

[26:06] Statistical Challenges: Treatment Switching and Bias Toward the Null

  • Treatment discontinuation and delayed initiation can dilute effects (bias toward the null).
  • More advanced approaches may require inverse probability weighting and assumptions about measured confounders.
  • Practical limitation: EMR data may not capture all variables needed to adjust correctly.

[29:39] When Observational Studies Are a Better Choice Than RCTs

  • Faster and cheaper, especially retrospective cohort designs.
  • Often more generalizable: includes patients excluded from industry-sponsored RCTs (older age, comorbidities).

[30:59] Observational Studies for Approved Products and Off-Label Use

  • Studying new indications for existing drugs can be valuable and efficient.
  • Regulatory approval may be needed for marketing, but clinicians can still use drugs off-label.
  • Pediatric medicine often relies heavily on off-label use due to limited RCT evidence.

[34:24] Causal Inference Threats: Chance, Bias, Effect-Cause, Confounding

  • Why associations can differ from causal effects.
  • These four concepts guide how to interpret results and design studies.

[35:16] Chance and Confidence Intervals

  • Chance = random error; larger sample sizes reduce uncertainty.
  • Confidence intervals quantify precision and the range of plausible effects.
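To make the precision point concrete, here is a minimal Python sketch (hypothetical data, normal approximation) showing how a 95% confidence interval for a mean narrows as sample size grows:

```python
import math

def mean_ci(values, z=1.96):
    """Normal-approximation 95% confidence interval for a mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)  # standard error shrinks as n grows
    return mean - z * se, mean + z * se

small = [1.0, 2.0, 3.0, 4.0, 5.0]  # n = 5
large = small * 20                 # n = 100, same spread of values
lo_s, hi_s = mean_ci(small)
lo_l, hi_l = mean_ci(large)
# The larger sample gives a narrower interval: same point estimate,
# less random error, a tighter range of plausible effects.
```

The interval width falls roughly with the square root of the sample size, which is why quadrupling a study's size only halves its uncertainty.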

[39:38] Confounding Explained (And Confounding By Indication)

  • Confounding occurs when a common cause influences both exposure and outcome.
  • Classic example: matches/lighter and lung cancer confounded by smoking.
  • Confounding by indication:
    • Often makes treatments look worse (sicker patients are more likely treated).
    • In some contexts (e.g., oncology), can make treatments look better if only healthier patients receive toxic therapies.

[45:16] Tools to Address Confounding: Stratification, Multivariable Modeling, Propensity Scores

  • Stratification: compare within strata of a confounder (e.g., smokers vs non-smokers; sex strata).
  • Multivariable modeling: adjust for many confounders simultaneously when stratification becomes impractical.
  • Propensity scores: model the probability of receiving treatment, then compare treated vs untreated within similar propensities.
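The stratification idea above can be sketched in a few lines of Python. The data here are invented for illustration: smokers are both more likely to be treated and more likely to have the outcome, so the crude comparison shows an effect that disappears once you compare within smoking strata.

```python
# Hypothetical toy records: (treated, smoker, had_outcome).
records = (
    [(1, 1, 1)] * 4 + [(1, 1, 0)] * 4    # treated smokers: 4/8 events
    + [(0, 1, 1)] * 1 + [(0, 1, 0)] * 1  # untreated smokers: 1/2 events
    + [(1, 0, 0)] * 2                    # treated non-smokers: 0/2 events
    + [(0, 0, 0)] * 8                    # untreated non-smokers: 0/8 events
)

def risk(rows):
    return sum(r[2] for r in rows) / len(rows)

def crude_effect(rows):
    """Risk difference ignoring the confounder."""
    treated = [r for r in rows if r[0] == 1]
    untreated = [r for r in rows if r[0] == 0]
    return risk(treated) - risk(untreated)

def stratified_effect(rows):
    """Risk difference within each smoking stratum, averaged by stratum size."""
    parts = []
    for smoker in (0, 1):
        stratum = [r for r in rows if r[1] == smoker]
        parts.append((len(stratum), crude_effect(stratum)))
    total = sum(n for n, _ in parts)
    return sum(n * d for n, d in parts) / total

# crude_effect(records) suggests a 0.30 risk increase from treatment,
# but stratified_effect(records) is 0: the association was pure confounding.
```

Multivariable models and propensity scores generalize the same move to many confounders at once, where explicit strata would become too small to compare.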

[47:55] Time-to-Event Analysis: Cox Proportional Hazards Model

  • Useful when follow-up time differs between groups.
  • Still requires assumptions; it does not solve unmeasured bias.
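A full Cox model needs a statistics library, but the core idea behind time-to-event methods, that each person contributes to the risk set only while under follow-up, can be illustrated with a simple Kaplan-Meier survival estimate (toy data, not from the episode):

```python
def kaplan_meier(times, events):
    """Survival curve from (time, event) pairs; event=0 means censored.

    Censored subjects stay in the risk set up to their censoring time,
    which is how differing follow-up between subjects is handled.
    """
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({tt for tt, e in data if e == 1}):  # event times only
        n = sum(1 for tt, _ in data if tt >= t)             # still at risk at t
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        surv *= 1 - d / n
        curve.append((t, surv))
    return curve

# A subject censored at t=2 leaves the risk set without counting as an
# event; the survival estimate only drops at the event times 1 and 3.
curve = kaplan_meier([1, 2, 3], [1, 0, 1])
```

The Cox model builds on the same risk-set logic to compare hazards between groups, but as the episode notes, no amount of modeling fixes unmeasured bias.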

[52:46] How to Build Statistical Intuition: Learn by Doing

  • Work through realistic problems repeatedly.
  • Learning in groups improves motivation and retention.
  • Teaching/explaining concepts to others is one of the strongest ways to learn.

[55:36] Recommended Resources and Free Learning Materials

  • UCSF Training in Clinical Research program materials: many slides and assignments are publicly available.
  • Recommended causal inference/DAGs learning: free course by Miguel Hernan (HarvardX/edX).
  • Practical tip: paying for a course can increase commitment and follow-through.

[58:39] AI, Machine Learning, and Observational Research: Opportunities and Cautions

  • EMRs and shared datasets expand what is feasible.
  • Risk: trying to answer questions with datasets that cannot validly support the assumptions needed.
  • Mentorship helps avoid wasted effort on underpowered or poorly matched datasets.

[1:03:24] Using AI to Refine Inclusion/Exclusion Criteria

  • Dr. Newman endorses AI as an idea generator (e.g., surfacing missed criteria).
  • Key caution: AI can hallucinate; users must verify and apply clinical judgment.
  • Enterprise versions may be preferable when working with proprietary information.

[1:05:38] Accessing EMR Data Ethically: Collaboration And Trust

  • Data holders (e.g., Kaiser research divisions) are appropriately protective.
  • Best practice: build true collaborations with internal investigators; share credit and opportunities.

[1:07:22] Closing Resources: Evidence-Based Diagnosis (2nd Ed.)

  • Dr. Newman highlights a second book focused on clinical epidemiology and decision-making.
  • Emphasis on real-world examples and problems to learn from common published mistakes.

[1:10:15] Wrap-Up and Outro

  • Kunal thanks Dr. Newman and encourages listeners to visit the podcast website for more resources.

Major Themes

  • Observational studies can be powerful, but causal inference requires careful design (define time zero, emulate a target trial, and avoid using future information).
  • Confounding is the central challenge in non-randomized research, and must be addressed with design and analysis tools (stratification, multivariable models, propensity scores, time-to-event models).
  • Skill-building in research design and statistics comes from deliberate practice: mentorship, structured training, and working on real problems (often in a group).

Selected Quotes

  • “Clinical trials are a subset of clinical research in which the treatment is assigned by the investigator.”
  • “If you could do a randomized trial, how would you do it?”
  • “You just have to remember that it hallucinates.”

Audience Question

What is the biggest barrier you face in designing or interpreting observational studies: defining time zero, handling treatment switching, confounding adjustment, or data quality? What would help you overcome it?
