CommSense
Hi, we are the CommSense project team at the University of Virginia.
Our interdisciplinary group brings together experts in nursing, engineering, and medicine to explore how AI and wearable sensing can transform patient–clinician communication.
Background
High-quality patient–clinician communication is essential for safe and effective healthcare, especially in serious illness and palliative care. Yet scalable, objective methods to monitor and assess communication performance in real clinical environments remain limited.
Purpose
- Evaluate acceptability of the CommSense platform with clinicians, patients, and key external stakeholders.
- Assess clinical and technical feasibility of deploying CommSense in real healthcare settings.
Overview of the System
CommSense is an AI-powered system developed by an interdisciplinary UVA team to capture and analyze clinical conversations in real time:
- Deployed on mobile devices (e.g., smartwatches) to securely record encounters.
- Uses large language models (LLMs) to quantify communication metrics such as speech dominance, use of medical jargon, and expressions of empathy (see the sketch after this list).
- Provides actionable feedback to clinicians to support improved communication.
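As a rough illustration only (not the project's actual pipeline), a metric such as speech dominance can be computed from a diarized transcript by comparing clinician and patient word counts. The speaker labels and toy transcript below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "clinician" or "patient" (hypothetical diarization labels)
    text: str

def speech_dominance(turns: list[Turn]) -> float:
    """Fraction of all spoken words contributed by the clinician.

    Values near 1.0 indicate the clinician dominated the conversation;
    values near 0.0 indicate the patient did most of the talking.
    """
    clinician_words = sum(len(t.text.split()) for t in turns if t.speaker == "clinician")
    total_words = sum(len(t.text.split()) for t in turns)
    return clinician_words / total_words if total_words else 0.0

# Toy example
transcript = [
    Turn("clinician", "How have you been feeling since we adjusted the medication?"),
    Turn("patient", "A little better, but the nausea comes back at night."),
    Turn("clinician", "Let's talk through some options for managing that."),
]
print(f"Speech dominance: {speech_dominance(transcript):.2f}")
```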
Methods
- Sample: UVA palliative care patients (≥18 years, English-speaking, any diagnosis), clinicians, and key stakeholders.
- Data Collection: Audio-recorded clinical interactions; baseline demographics and surveys; post-encounter surveys; stakeholder interviews.
- Analysis: Descriptive survey statistics, thematic analysis of interviews, and benchmarking of CommSense performance metrics (balanced accuracy, precision) against ground truth (see the sketch after this list).
- Integration: Compared human transcripts with CommSense AI outputs, gathered clinician feedback on usability and visuals, and explored integration with health systems.
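To make the benchmarking step concrete, a minimal sketch of scoring model labels against human-coded ground truth is shown below; the per-utterance labels and example values are hypothetical, and scikit-learn is assumed for the metric functions.

```python
from sklearn.metrics import balanced_accuracy_score, precision_score

# Hypothetical per-utterance labels: 1 = metric present (e.g., medical jargon), 0 = absent
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]   # human-coded transcript annotations
model_output = [1, 0, 1, 0, 0, 0, 1, 1]   # CommSense (LLM) predictions

# Balanced accuracy averages recall across classes, so it remains informative
# when a behavior (e.g., jargon) appears in only a minority of utterances.
print("Balanced accuracy:", balanced_accuracy_score(ground_truth, model_output))
print("Precision:        ", precision_score(ground_truth, model_output))
```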
Results at a Glance
- Conversations captured: 51 unique encounters with 7 clinicians and 36 patients (as of Aug 2025).
- Stakeholders interviewed: 14 participants on CommSense implementation in healthcare systems.
- Performance: LLM-based models achieved higher balanced accuracy than non-LLM models.
Conclusions & Next Steps
CommSense shows promising feasibility and acceptability as an AI-driven platform to enhance clinical communication.
Next steps include scaling deployment within healthcare systems and refining the LLM-based communication metrics for broader clinical impact.
Funding
Supported by the Gordon & Betty Moore Foundation, the Nurse Leaders and Innovators Fellows Program, and the UVA EIM Seed Pilot Program.
Collaborating Institutions
- University of Virginia (UVA) School of Nursing
- UVA School of Engineering & Applied Science
- UVA School of Medicine
