Fine-Tuning LUNA AI for Human-Centered Clinical Collaboration
LUNA AI is a clinical insight assistant designed to help mental health professionals understand patient progress and patterns using therapy transcripts, notes, and assessments. While its ability to generate insights was technically strong, early clinician feedback revealed a gap: the AI needed to interact like a trusted assistant, not just a clinical engine. This case study outlines how we fine-tuned LUNA to support clinician judgment, emotional nuance, and natural collaborative behavior.
The Problem: AI Recommendations Without Sensitivity
In early prototypes, LUNA could generate detailed summaries and treatment suggestions. But when clinicians disagreed with the AI or felt overwhelmed, its responses often lacked nuance:
Original AI Output:
“Journaling is recommended based on GAD-7 scores and avoidance patterns.”
Clinician Feedback:
“This doesn’t fit my approach. I don’t want to argue with the AI.”
The AI was correct, but it didn’t know how to yield. It offered no empathy, no transparency, and no awareness of clinician workload or tone.
Fine-Tuning Objectives
We defined four goals for refining LUNA’s interaction style:
Respect human authority – Always defer to clinician expertise.
Sense clinician tone – Recognize stress, uncertainty, or disagreement.
Adapt suggestions – Avoid repeating rejected ideas.
Offer context, not control – Be helpful, humble, and traceable.
Tuning Approach
We used prompt-based fine-tuning and conversational intent modeling: each tuning example describes a clinician interaction and the collaborative response we want LUNA to produce, as in the examples below.
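To make the intent-modeling piece concrete, the sketch below shows how a clinician message might be routed to an intent label before a response style is chosen. The intent names, cue phrases, and keyword matching are assumptions introduced for this write-up, not a description of LUNA's production classifier.

```python
from enum import Enum, auto

class ClinicianIntent(Enum):
    """Hypothetical intent labels used to organize tuning examples."""
    REJECTION = auto()   # clinician pushes back on a suggestion
    OVERWHELM = auto()   # clinician signals cognitive overload
    EDIT = auto()        # clinician removes or rewrites AI output
    NEUTRAL = auto()     # no special handling required

# Illustrative keyword cues only; a real system would learn these from labeled data.
_CUES = {
    ClinicianIntent.REJECTION: ("right direction", "doesn't fit my approach", "disagree"),
    ClinicianIntent.OVERWHELM: ("too much", "overwhelmed", "just look at the basics"),
    ClinicianIntent.EDIT: ("deleted", "removed", "rewrote"),
}

def classify_intent(message: str) -> ClinicianIntent:
    """Map a clinician message to the closest intent label."""
    lowered = message.lower()
    for intent, cues in _CUES.items():
        if any(cue in lowered for cue in cues):
            return intent
    return ClinicianIntent.NEUTRAL

print(classify_intent("Too much happening. Can we just look at the basics?"))
# ClinicianIntent.OVERWHELM
```

The detected label then selects the response behavior illustrated in the examples that follow.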
Example 1: Clinician Rejects a Recommendation
Prompt Input:
“The AI suggests introducing a journaling exercise. Clinician responds: ‘This isn’t the right direction.’”
Expected Output:
“Understood. I’ll hold off on that path. Would you like to revise focus or deprioritize journaling-based suggestions going forward?”
Why it matters: This respects clinician control while keeping collaboration open.
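A pair like this can be stored as a single training record for the prompt-based fine-tuning step. The snippet below is a minimal sketch of that bookkeeping; the JSONL layout and field names are assumptions for this write-up, not the exact format of LUNA's pipeline.

```python
import json

# Hypothetical record format for one prompt/expected-output tuning pair.
rejection_example = {
    "intent": "rejection",
    "prompt": (
        "The AI suggests introducing a journaling exercise. "
        "Clinician responds: 'This isn't the right direction.'"
    ),
    "expected_output": (
        "Understood. I'll hold off on that path. Would you like to revise focus "
        "or deprioritize journaling-based suggestions going forward?"
    ),
}

# Fine-tuning pipelines commonly accept one JSON object per line (JSONL).
with open("luna_tuning_examples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(rejection_example) + "\n")
```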
Example 2: Clinician Appears Overwhelmed
Prompt Input:
“‘Too much happening. Can we just look at the basics?’”
Expected Output:
“Of course. I’ll pause new insights and summarize where we are so far. Let me know when you’re ready for more.”
Why it matters: Reduces decision load. Builds emotional trust.
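One way to make the "pause" durable rather than a single polite sentence is a small piece of session state the assistant consults before offering anything new. The class and field names below are hypothetical; this is a sketch of the intended behavior, not LUNA's implementation.

```python
from dataclasses import dataclass

@dataclass
class SessionState:
    """Hypothetical per-session flags checked before generating new insights."""
    pause_new_insights: bool = False
    pending_summary: bool = False

def handle_overwhelm(state: SessionState) -> str:
    """Switch to summary-only mode and acknowledge the clinician's request."""
    state.pause_new_insights = True
    state.pending_summary = True
    return ("Of course. I'll pause new insights and summarize where we are so far. "
            "Let me know when you're ready for more.")

state = SessionState()
print(handle_overwhelm(state))
print(state.pause_new_insights)  # True: new recommendations are held back
```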
Example 3: Clinician Edits the AI’s Output
Prompt Input:
“The clinician deletes the AI’s suggested exposure task from a treatment plan.”
Expected Output:
“Noted. Should I avoid suggesting exposure-based tasks for this case in the future?”
Why it matters: Demonstrates learning and adaptability.
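If the clinician confirms, the adaptation has to persist beyond the current turn. A minimal sketch of that case-level memory is shown below; the case ID, category labels, and function names are assumptions introduced for illustration.

```python
from collections import defaultdict

# Hypothetical per-case memory of suggestion categories the clinician has removed.
suppressed_categories: dict[str, set[str]] = defaultdict(set)

def record_clinician_edit(case_id: str, category: str) -> None:
    """Remember that a category of suggestion was removed for this case."""
    suppressed_categories[case_id].add(category)

def allow_suggestion(case_id: str, category: str) -> bool:
    """Only surface suggestions the clinician has not previously removed."""
    return category not in suppressed_categories[case_id]

record_clinician_edit("case-042", "exposure_task")
print(allow_suggestion("case-042", "exposure_task"))  # False: suppressed going forward
print(allow_suggestion("case-042", "journaling"))     # True: still eligible
```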
Conclusion
The power of clinical AI doesn’t just come from its insights—it comes from how it speaks to humans. Fine-tuning LUNA to be respectful, self-aware, and emotionally intelligent made it more than a tool—it became a collaborator. This tuning framework now serves as the ethical backbone of all LUNA interaction design.