Welcome back to This Week in Cardiovascular AI, a bi-monthly email newsletter for followers and subscribers summarizing recent research in Cardiovascular AI and Digital Health that we find important and unique.
Key research in this edition includes Electrocardiographic Imaging for advanced atrial fibrillation classification, machine learning for hypertrophy assessment with cardiac MRI, and "snapshot" AI that predicts ejection fraction from a single still echocardiogram frame. I also read an interesting article discussing Bioelectronic Medicine, which I reviewed in more detail for those interested (scroll to bottom). As always, thanks for reading, and don't forget to share This Week in Cardiovascular AI!
Not a subscriber? Join now to stay up-to-date and consider upgrading to support our work!
Atrial Fibrillation Treatment Stratification Based on Artificial Intelligence–Driven Analysis of the Electrophysiological Complexity
Journal of Cardiovascular Electrophysiology
This prospective study evaluated the utility of an AI-driven algorithm to stratify atrial fibrillation (AF) patients based on noninvasive electrophysiological complexity using Electrocardiographic Imaging (ECGI). The authors created a complexity score derived from three ECGI biomarkers (highest dominant frequency, median dominant frequency, mean rotor time) and applied a clustering algorithm that integrates this score with AF type (paroxysmal vs. persistent) and treatment strategy (rate vs. rhythm control) to predict 1-year sinus rhythm (SR) maintenance. The model significantly outperformed traditional paroxysmal/persistent classification.
Key Findings:
Study Population:
204 patients (84 outpatient, 120 ablation)
Stratified by AF type, ECGI biomarkers, and treatment approach
Clinical endpoint: Freedom from AF at 1 year
Complexity Score:
Higher highest dominant frequency = higher complexity
Lower median dominant frequency and shorter rotor time = higher complexity
Clustering Algorithm:
5 patient clusters based on complexity, AF type, and treatment
Low complexity → favorable outcome with ablation regardless of AF type
Intermediate complexity in paroxysmal AF → rhythm control superior to rate control
High complexity → poor outcome regardless of strategy, especially in persistent AF
Prediction Accuracy:
AUC of new model: 0.73 (95% CI: 0.63–0.81)
AUC of paroxysmal/persistent classification: 0.58 (p < 0.05)
90% prediction success on test set
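To make the stratification logic above concrete, here is a minimal sketch in Python. The biomarker directions (higher highest dominant frequency, lower median dominant frequency, and shorter rotor time all increase complexity) and the cluster-level treatment conclusions come from the study summary, but the scoring formula, every cutoff, and the strategy labels below are invented for illustration; the paper's actual score and clustering algorithm are not reproduced here.

```python
# Illustrative sketch of the ECGI complexity-score idea. All numeric
# cutoffs are hypothetical; only the biomarker directions follow the study.

def complexity_score(hdf_hz, mdf_hz, rotor_time_ms):
    """Toy 0-3 score: one point per 'complex' biomarker (cutoffs invented)."""
    score = 0
    if hdf_hz > 7.0:         # higher highest dominant frequency -> complex
        score += 1
    if mdf_hz < 5.5:         # lower median dominant frequency -> complex
        score += 1
    if rotor_time_ms < 150:  # shorter mean rotor time -> complex
        score += 1
    return score

def suggested_strategy(score, af_type):
    """Mirror the cluster-level findings: low -> ablation, high -> rethink."""
    if score <= 1:
        return "ablation favored"
    if score == 2:
        return "rhythm control" if af_type == "paroxysmal" else "individualize"
    return "consider alternatives (rate control / advanced ablation)"

print(suggested_strategy(complexity_score(8.1, 5.0, 120), "persistent"))
```

The point of the sketch is the shape of the workflow, not the numbers: a continuous electrophysiological score, rather than AF duration alone, drives the treatment recommendation.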
Clinical Takeaways:
ECGI Complexity is a Superior Predictor: Noninvasive electrical mapping quantifies AF burden more meaningfully than AF duration alone.
AI-Driven Stratification Enhances Personalization: Identifies subgroups who benefit most from ablation or rhythm control—especially useful for tailoring therapy in paroxysmal vs. persistent AF.
New Workflow Proposed:
Low complexity: Ablation favored for both paroxysmal and persistent AF
Intermediate complexity: Rhythm control viable in paroxysmal, less so in persistent
High complexity: Consider alternative strategies (e.g., rate control, advanced ablation)
Potential to Replace 3P Classification: The traditional temporal (paroxysmal/persistent/permanent) classification is insufficient for guiding therapy; electrophysiological complexity is more prognostic.
Supports Imaging-Free ECGI Tools: Use of “imageless ECGI” facilitates broad clinical implementation by removing CT/MRI dependencies.
Read it here: Link
What is Electrocardiographic Imaging (ECGI) technology?
This study employs Corify Care's ACORYS® system, an advanced medical technology that non-invasively maps the heart's electrical activity in 3D. The system uses a specialized vest fitted with electrodes (63 in this study) to record cardiac signals from the surface of the torso. Its core innovation is its "imageless" nature: instead of requiring a CT or MRI scan, ACORYS® employs sophisticated algorithms and artificial intelligence to generate a personalized anatomical model of the patient's heart. By solving a complex mathematical "inverse problem," the system translates the body-surface signals into a dynamic, real-time electroanatomic map of the heart's surface, allowing clinicians to precisely assess complex arrhythmias like atrial fibrillation before an invasive procedure.
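The "inverse problem" is the mathematically interesting piece: recovering heart-surface potentials from a handful of torso electrodes. Corify's actual algorithms are proprietary, but the textbook approach in ECGI is zero-order Tikhonov-regularized least squares, sketched below. The forward matrix, noise level, dimensions, and regularization strength are all invented stand-ins; only the electrode count (63) comes from the study.

```python
# Toy ECGI inverse problem: given torso signals y and a forward transfer
# matrix A (torso = A @ heart), recover heart-surface potentials. With far
# fewer electrodes than heart nodes the system is ill-posed, hence the
# Tikhonov regularization term lam * I.
import numpy as np

rng = np.random.default_rng(0)
n_torso, n_heart = 63, 500                    # 63 electrodes as in the study
A = rng.standard_normal((n_torso, n_heart))   # stand-in forward model
x_true = rng.standard_normal(n_heart)         # "true" epicardial potentials
y = A @ x_true + 0.01 * rng.standard_normal(n_torso)  # noisy torso signals

lam = 1e-2  # regularization strength (in practice tuned, e.g. via L-curve)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_heart), A.T @ y)
print(x_hat.shape)  # one estimated potential per heart-surface node
```

Real systems add anatomy-aware constraints and run this per time sample to produce the dynamic electroanatomic map described above.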
Machine Learning to Automatically Differentiate Hypertrophic Cardiomyopathy, Cardiac Light Chain, and Cardiac Transthyretin Amyloidosis: A Multicenter CMR Study
Circulation: Cardiovascular Imaging
This multicenter study developed and validated a machine learning (ML) model based on cardiac MRI (CMR) data to differentiate between hypertrophic cardiomyopathy (HCM) and cardiac amyloidosis (AL and ATTR subtypes). The ML model integrated automated imaging metrics and basic clinical data, enabling accurate classification without reliance on expert interpretation or contrast-enhanced imaging. It achieved high accuracy in a 3-step cascade: (1) distinguishing patients from healthy volunteers, (2) differentiating HCM from amyloidosis, and (3) classifying amyloidosis as AL or ATTR.
Key Findings:
Population:
400 subjects (95 healthy, 94 HCM, 95 AL, 116 ATTR) from 56 centers.
CMR data processed using automated software (cvi42); 173 imaging parameters analyzed.
Model Performance:
Step 1: Patients vs. Healthy – AUC 1.00
Step 2: HCM vs. Amyloidosis – AUC 0.99
Step 3: AL vs. ATTR – AUC 0.92
Performance remained excellent with just imaging and demographics, even without gadolinium contrast.
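The 3-step cascade is simple to express in code. The sketch below uses trivially rule-based stand-ins for the paper's trained classifiers; the feature names, rules, and thresholds are hypothetical, and only the chaining order (healthy → HCM vs. amyloidosis → AL vs. ATTR) follows the study.

```python
# Minimal sketch of the paper's 3-step diagnostic cascade. Each `is_*`
# argument stands in for a trained binary classifier's prediction.
def cascade(features, is_patient, is_hcm, is_al):
    if not is_patient(features):                 # Step 1: patient vs. healthy
        return "healthy"
    if is_hcm(features):                         # Step 2: HCM vs. amyloidosis
        return "HCM"
    return "AL" if is_al(features) else "ATTR"   # Step 3: AL vs. ATTR

# Toy usage with invented rule-based stand-ins for the three classifiers:
label = cascade(
    {"lv_wall_mm": 18, "macroglossia": False},
    is_patient=lambda f: f["lv_wall_mm"] > 12,
    is_hcm=lambda f: not f["macroglossia"] and f["lv_wall_mm"] > 15,
    is_al=lambda f: f["macroglossia"],
)
print(label)  # -> "HCM"
```

A cascade like this lets each stage specialize on one distinction, which is why the study can report a separate AUC per step.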
Shapley Explainability Analysis:
Key imaging biomarkers: LV wall thickness, LV/RV strain (GLS, GRS, GCS), presence of diffuse enhancement, atrial function, and pericardial effusion.
Clinical features like macroglossia, carpal tunnel syndrome, and NT-proBNP were also influential for differentiating AL vs. ATTR.
Contrast-Free Utility:
Even without LGE data (gadolinium), the model distinguished amyloidosis from HCM with AUC ≥ 0.95.
This enables use in centers with limited access to contrast agents or patients with renal impairment.
Clinical Takeaways:
First-of-Its-Kind ML Differentiator: This is the first large multicenter CMR-based ML model to successfully distinguish HCM from cardiac amyloidosis and its subtypes—automatically and with minimal manual input.
Automated Diagnostic Aid: Can serve as a background tool in MRI workflows, flagging likely amyloidosis or HCM for follow-up, reducing misdiagnosis.
Reduces Contrast Agent Dependency: Strong performance without gadolinium makes it safer, more scalable, and accessible across institutions.
Future Clinical Integration:
Embeddable into CMR software platforms.
Could prompt faster specialist referral and earlier therapeutic intervention (e.g., tafamidis, daratumumab).
Next Steps: Prospective validation, expansion to more diseases, and real-world clinical deployment.
Read it here: Link
Snapshot Artificial Intelligence—Determination of Ejection Fraction from a Single Frame Still Image: A Multi-Institutional, Retrospective Model Development and Validation Study
The Lancet Digital Health
This study presents a deep learning model that estimates left ventricular ejection fraction (LVEF) from a single echocardiographic frame, rather than from full video loops. The model was trained on over 19,000 echocardiograms and validated across multiple institutions and patient populations, including handheld cardiac ultrasound (HCU) by both experts and novices. The approach significantly reduces computational load while maintaining strong performance, making it suitable for point-of-care or low-resource settings.
Key Findings:
Model Input & Output:
Input: Single still frames from echocardiographic views (A2C, A3C, A4C, PLAX).
Output: A continuous LVEF estimate (regression) and classification (≤40% vs >40%).
Training Dataset:
19,627 echocardiograms (Mayo Clinic, Rochester); enriched for lower LVEF values.
Used over 470,000 frames for training via a ResNet-18 architecture.
Validation Datasets:
Internal Mayo Clinic test sets (Rochester, Arizona, Florida), EchoNet-Dynamic (Stanford), and two prospective cohorts:
One with TTE + HCU from same session (n=625).
One with HCU by expert and novice users (n=100).
Performance (classification LVEF ≤40%):
AUC > 0.90 across all datasets (except EchoNet and novice-collected HCU: AUC ~0.85).
Regression R² > 0.70 in most datasets; MAE ~5–6%.
Accuracy increased with frame averaging across clips.
Model Behavior:
Performance improved when using multiple frames per patient (even one per clip).
End-systolic frames provided more accurate LVEF than end-diastolic frames.
Grad-CAM showed that model attention focused on LV structures.
The model retained sensitivity to cardiac phase despite using static images.
Estimations were more stable when averaging across phases of the cardiac cycle.
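The frame-averaging behavior above reduces to a very small amount of code: average the per-frame LVEF estimates, then threshold the mean at 40%. In the sketch below, `predict_frame` is a hypothetical stand-in for the paper's ResNet-18 model, and the prediction values are fabricated for illustration.

```python
# Sketch of frame averaging: per-frame LVEF predictions are pooled and the
# mean drives the <=40% reduced-EF classification described above.
import statistics

def lvef_from_frames(frames, predict_frame):
    """Average single-frame LVEF estimates across available frames/clips."""
    estimates = [predict_frame(f) for f in frames]
    mean_lvef = statistics.mean(estimates)
    return mean_lvef, mean_lvef <= 40.0  # (estimate, reduced-EF flag)

# Toy usage: noisy per-frame predictions around a true LVEF of ~35%.
fake_predictions = iter([38.0, 33.5, 34.0, 36.5])
lvef, reduced = lvef_from_frames(range(4), lambda _: next(fake_predictions))
print(round(lvef, 1), reduced)  # -> 35.5 True
```

Averaging across frames (and across cardiac-cycle phases) smooths out single-frame noise, which matches the study's finding that accuracy increased with frame averaging across clips.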
Clinical Takeaways:
Revolutionizes Point-of-Care Echo: Enables accurate, rapid LVEF estimation from handheld devices—even by novice users—with just a single still frame.
Reduces Technical and Computational Barriers:
No need for long cine loops or expert interpretation.
Lower data and power demands make it suitable for portable deployment.
Future Potential:
Could support LVEF screening in emergency, low-resource, or outpatient settings.
May pave the way for static image-based AI diagnosis of other cardiac diseases (e.g., cardiomyopathies).
Limitations:
Needs broader validation in more diverse populations.
Still relies on adequate image quality and correct view classification.
Performance slightly lower in handheld ultrasound by novice users.
Read it here: Link
What else I read…
From Lab to Clinic: 6 Steps to Unlock the Power of Digital Health
From Nature Medicine
Digital technology is everywhere, and healthcare is no exception. We've seen a boom in tools like mobile apps, wearable devices, and AI-powered support systems designed to help people manage chronic diseases like diabetes and heart conditions. The potential is enormous—the World Health Organization estimates a small investment in these technologies could save over 2 million lives in the next decade.
But there's a problem. Despite all this research and promise, very few of these evidence-based digital tools ever make it into the hands of patients through their routine healthcare providers.
In a recent commentary in Nature Medicine, researchers Marie Löf and Ralph Maddison tackle this issue head-on. They provide six key recommendations to help bridge the critical gap between research and real-world implementation.
1. Go Beyond Theory and Study the Actual Rollout
Researchers have spent a lot of time identifying the barriers to digital health adoption. Now, it's time to focus on studying the actual implementation of these tools. This means using robust study designs to measure concrete outcomes like adoption rates, how deeply the tech penetrates the healthcare system, and its long-term sustainability. The authors suggest using hybrid designs that assess both the tool's effectiveness and its implementation simultaneously to get solutions into routine care faster.
2. Plan for Implementation from Day One
Implementation can't be an afterthought. Researchers must plan for it from the very beginning of any digital health project. This involves using established implementation frameworks (like CFIR or RE-AIM) and maintaining a flexible approach to deal with barriers as they arise.
3. Involve End-Users in Every Step
To create a tool that people will actually use, researchers must engage all stakeholders—patients, healthcare professionals, and industry partners—early in the development process. This participatory approach ensures the technology solves real-world problems and addresses crucial issues like digital literacy and cultural diversity from the start. It's also vital to assess the readiness of healthcare providers to use these new tools and co-design solutions that meet their needs.
4. Create a Sustainable Business Model
Great technology is useless if there's no way to pay for it. The authors highlight the need to fund the delivery of evidence-based digital health programs within the healthcare system. Researchers should conduct robust economic analyses to build a case for funding and understand how their tools fit into existing reimbursement schemes. They also call for new, accessible pathways for researchers to distribute their tools.
5. Future-Proof the Technology
Technology evolves at a lightning pace—from simple text messages to complex, 24-hour monitoring with wearables today. To keep up, digital health solutions should be "technology-agnostic," prioritizing function and flexibility over any single piece of technology. It is also critical to plan for integration with existing clinical systems, data security, and privacy regulations.
6. Ensure Digital Health is Accessible to All
Many digital tools have been developed in a single language without cultural adaptation, which is a major limitation. This is especially concerning because noncommunicable diseases often hit socially disadvantaged and migrant populations the hardest. While engaging end-users helps, equity deserves its own focus. Researchers must prioritize making their tools accessible and effective for diverse cultural and linguistic groups to ensure that digital health benefits everyone.
Ultimately, digital health can't live up to its incredible potential to improve lives without effective implementation. These six recommendations offer a crucial roadmap to help researchers translate their innovations from the lab into routine clinical practice where they can make a real difference.
…
What is Bioelectronic Medicine?
From WSJ
At its core, bioelectronic medicine is a new approach to treating and diagnosing diseases by reading and modulating the electrical signals within the body's nervous system. Our nerves act as the body's wiring, carrying signals that control everything from our heart rate and breathing to our immune system and digestion. When these signals go haywire, it can lead to a host of chronic and acute conditions.
Instead of using drugs to chemically alter these processes, bioelectronic medicine uses sophisticated devices to deliver targeted electrical impulses to specific nerves. These impulses can either stimulate or inhibit nerve activity, restoring the body's natural balance and treating the root cause of the disease.
Current Uses: A Glimpse into the Present
Bioelectronic medicine is already making a significant impact on patients' lives. Some of the current applications include:
Pain Management: Spinal cord stimulators are used to treat chronic pain by sending mild electrical currents to the spinal cord, interrupting pain signals before they reach the brain.
Epilepsy: Vagus nerve stimulation (VNS) is an FDA-approved treatment for certain types of epilepsy. A small, implanted device sends regular, mild pulses of electricity to the brain through the vagus nerve, helping to prevent seizures.
Inflammatory Diseases: Pioneering research has shown that stimulating the vagus nerve can reduce inflammation, offering hope for patients with conditions like rheumatoid arthritis and Crohn's disease. This is because the vagus nerve plays a key role in regulating the body's immune response.
Restoring Function: In cases of paralysis, brain-computer interfaces are being developed to bypass spinal cord injuries, allowing individuals to control prosthetic limbs with their thoughts.
Future Applications: The Promise of Tomorrow
The future of bioelectronic medicine is even more exciting. Researchers are exploring a vast array of potential applications, including:
Cancer Treatment: Electric fields and pulses are being investigated as a way to disrupt the growth of cancer cells without the harmful side effects of chemotherapy and radiation.
Diabetes Management: Devices could be developed to stimulate the pancreas to produce insulin or to help the body better regulate blood sugar levels.
Cardiovascular Disease: Bioelectronic therapies could be used to treat heart failure, high blood pressure, and other cardiovascular conditions by modulating the nerves that control heart function.
Mental Health: Vagus nerve stimulation is also being explored as a potential treatment for depression, anxiety, and post-traumatic stress disorder (PTSD).
Personalized Medicine: In the future, bioelectronic devices could be paired with sensors to create "closed-loop" systems. These systems would continuously monitor a patient's condition and automatically adjust the electrical stimulation in real-time, providing truly personalized and adaptive treatment.
Bioelectronic medicine represents a paradigm shift in how we approach disease. By harnessing the power of the body's own electrical language, we are opening the door to a new era of targeted, effective, and personalized therapies with the potential to treat a wide range of conditions and improve the lives of millions.
Great article.