When

Monday, March 16, 2026, at 12:00 p.m.
Keating 103 | Zoom link
Hosts: Swarna Ganesh and Kellen Chen
 
Reid Loeffler
PhD Candidate, Yoon Lab
"Smartphone-Based, Isothermal Nucleic Acid Amplification Assay With Rapid Quantification for Point-of-Care Diagnostics"

Abstract: Nucleic acid amplification is a widely used laboratory technique, and the current gold standard, polymerase chain reaction (PCR), requires cycling through three distinct temperatures. PCR has largely been confined to traditional lab settings because it requires bulky, expensive equipment and trained personnel. Isothermal amplification techniques, such as recombinase polymerase amplification (RPA), have emerged as alternatives to PCR for point-of-care diagnostics: RPA runs at a single, low reaction temperature of 39 °C and can be completed in less than 20 minutes, significantly reducing the time from sample acquisition to results. This work has focused on 1) incorporating portable smartphone-based detection, 2) evaluating amplification products (initial copy number and sequence length) on low-cost paper microfluidic chips, and 3) integrating these elements into an all-in-one device. Applications of this work range from identifying various bacterial species to screening for HPV-related oropharyngeal cancer.
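
For readers unfamiliar with how quantification works in amplification assays, the sketch below is purely illustrative and is not the speaker's actual pipeline: it estimates an unknown sample's initial copy number by fitting a standard curve of time-to-threshold against log10 of the starting copy number, analogous to the Ct approach used in quantitative PCR. All values and names are hypothetical.

import numpy as np

# Hypothetical standard-curve data (made-up values): known starting copy
# numbers and the time, in minutes, at which each reaction's smartphone-
# measured signal first crossed a detection threshold.
known_copies = np.array([1e2, 1e3, 1e4, 1e5])
known_tt_min = np.array([14.1, 11.8, 9.6, 7.2])

# Fit a linear standard curve: time-to-threshold = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(known_copies), known_tt_min, 1)

def estimate_copies(tt_minutes):
    """Invert the standard curve to estimate a sample's initial copy number."""
    return 10 ** ((tt_minutes - intercept) / slope)

# An unknown sample whose signal crossed the threshold at 10.5 minutes:
print(f"Estimated initial copies: {estimate_copies(10.5):.2e}")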
 

Nick Souligne
PhD Candidate, Subbian Lab
"Intersectional Fairness for Trustworthy AI-based Systems in Biomedicine"

Abstract: Intersectionality, the concept that overlapping social identities such as race, gender, and socioeconomic status interact in complex ways to shape lived experience, describes a key driver of structural inequities within the healthcare system. Algorithmic fairness, the field of study that has arisen to combat bias-driven disparities in AI-based systems, has largely focused on single-attribute approaches that fail to account for the full range of disparities faced by people belonging to multiple under-represented demographic groups. This narrow approach limits the ability of these fairness techniques to adequately detect, assess, and mitigate the disparities that emerge at the intersections of these social identities.

In this presentation, I will describe an integrated approach for operationalizing intersectional fairness analysis and bias mitigation within machine learning workflows through three complementary components. First, I introduce FairLogue, a modular toolkit designed to quantify and contextualize intersectional bias through a combination of observational and counterfactual fairness diagnostics. Second, I present a mitigation framework that enables concurrent implementation and evaluation of multiple fairness interventions, with support for intersectional subgroup analysis. Third, I describe a clinical machine learning workflow that integrates these toolkits into a reproducible development pipeline and evaluates their impact using a synthetic data testbed designed to simulate clinically relevant heterogeneous populations and controlled bias scenarios.

Integrating these components enables systematic investigation of intersectional disparities, of how different mitigation strategies interact, and of how fairness diagnostics can be incorporated into practical model development workflows. Experiments within the synthetic testbed enable controlled comparison of mitigation techniques and allow developers to assess how the effectiveness of fairness interventions varies across diverse populations, including intersectional subgroups. Together, this work provides both methodological tools and empirical guidance for researchers and developers seeking to operationalize intersectional fairness principles in the design and evaluation of machine learning systems.
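
To make intersectional subgroup analysis concrete, the sketch below is a hypothetical illustration, not the FairLogue API or the speaker's code: it computes one simple group fairness metric, the true positive rate, for every subgroup defined by the intersection of two protected attributes, and reports the gap between the best- and worst-off subgroups.

import pandas as pd

def subgroup_tpr(df, attrs, label_col="y_true", pred_col="y_pred"):
    """True positive rate within each intersectional subgroup of `attrs`."""
    positives = df[df[label_col] == 1]
    return positives.groupby(attrs)[pred_col].mean()

# Hypothetical evaluation data: two protected attributes, true labels, predictions.
df = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "gender": ["F", "M", "F", "M", "F", "M", "M", "F"],
    "y_true": [1,   1,   1,   1,   1,   1,   0,   0],
    "y_pred": [1,   1,   0,   1,   1,   0,   0,   1],
})

tpr = subgroup_tpr(df, ["race", "gender"])
print(tpr)
# The spread between subgroups is one simple intersectional disparity measure
# a fairness toolkit might report alongside richer diagnostics.
print("TPR gap across intersectional subgroups:", tpr.max() - tpr.min())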


Accessibility: Persons with a disability may request a reasonable accommodation by contacting the Disability Resource Center at 621-3268 (V/TTY).