What Educators Are Reading

Featured Articles

Article:

Amy B. Zelenski, Jessica S. Tischendorf, Michael Kessler, Scott Saunders, Melissa M. MacDonald, Bennett Vogelman, and Laura Zakowski (2019) Beyond “Read More”: An Intervention to Improve Faculty Written Feedback to Learners. Journal of Graduate Medical Education, In-Press. https://doi.org/10.4300/JGME-D-19-00058.1

Summary:

This paper examined the impact of using a novel framework for effective feedback to improve written feedback from faculty members to trainees. After a one-hour faculty development session, written feedback comments showed improvements in content, specificity, and recommendations for further growth. These changes were evident through 12 months of follow-up after the session. For this blog post, Camille Petri interviewed first author Amy Zelenski.

Interview:

CP: One of the strengths of your study is the applicability of the intervention and tool to multiple learner levels and across subspecialties. What challenges do you see translating this to other disciplines, such as surgery or obstetrics/gynecology (OB/Gyn)?

AZ: The basic framework can be applied to any discipline.  The main differences I can imagine are the contexts in which learners are observed and (related to the context) the cognitive load of the teachers/supervisors during these observations.  A couple of years ago I was involved in a study examining the operating room (OR) as a learning environment and was struck by how much the attending surgeons were directing the residents.  I understand the reluctance to have the residents take the lead, but can also see how difficult it is to assess someone’s competence if you are always telling them what to do.  You might be able to assess their technical skill, but not their decision-making.  This is easier to do on rounds and during other encounters not in the OR.  I could see a similar challenge in other disciplines like OB/Gyn.  Being able to observe the learners in order to assess what they can do independently is an important first step.

CP: Your results show improvements in the written feedback after the intervention, but it's challenging to know if this represents a substantial change. How could you judge whether these differences ultimately assist the competency committee or impact learners?

AZ: Great question!  One way to answer that question would be to analyze the clinical competency committee reports that are produced to see if those after we instituted the intervention are different in any way.  As an extension of this project, we have formed a committee that reviews the written feedback faculty offer to learners on a semi-annual basis.  My experience as a reviewer has been that the comments have gotten more substantive.  The last faculty member I was asked to review had included all of the essential elements.

To gain the resident perspective, we added a question to their faculty evaluation about feedback. We found that, among those divisions participating in the faculty development program, resident ratings of faculty were higher for the statements “The attending gave useful feedback frequently” (4.5 vs. 4.0; p<.05) and “Did the attending tell you what outcomes they would want to see once you mastered what needed to be learned?” (4.05 vs. 3.63; p<.05).  This was verbal, not written, feedback but we were pleased to see this improvement nonetheless.

CP: Why do you think providing adequate written feedback is hard to accomplish for faculty?

AZ: There are many factors involved here.  Time is always a factor, and this goes hand in hand with motivation.  We are asking teaching faculty to do so many things, and when they prioritize this work, patient care (understandably) comes before teaching.  Since patient care entails so much documentation, I think our teachers are just not motivated to go in and complete a helpful written evaluation of a learner because they feel like they gave feedback verbally and the written “note” is not needed.  I certainly hope they did give that verbal feedback, but verbal feedback (just like the conversations we have with patients) can get lost in translation and/or forgotten by the learner.  One goal of our project was to remind faculty members about the importance of written feedback, not only for the clinical competency committees, but also for the learners themselves.  It can be very disorienting to go from rotation to rotation and not have a written record of your progress across the year(s).  I also think our faculty are reluctant to be critical in writing, especially when working with medical students.  They realize the high-stakes nature of these assessments, and even with reassurance that their comments will not hinder the student, they hold back some constructive comments to protect them.  We need to shift our mindset into realizing that the kind/compassionate thing to do is to give that honest feedback.  It is far more damaging to withhold feedback that could help learners improve than to give feedback that might at first be hard to hear (and say).

CP: What did you learn about teaching feedback practices that you wish you had known prior to starting this research? If you were to recreate the intervention now, how would you change your strategy?

AZ: Faculty members were concerned about the reluctance their learners had to receiving feedback, and were worried about receiving poor evaluations from those learners if they gave any constructive feedback.  There was also an interesting gender dynamic in which female faculty felt more penalized for giving constructive feedback than male faculty.  This made some female faculty feel like they could not be completely honest with their learners.  I do not have the answer to fix these issues, but I would call them out in future sessions and acknowledge that feedback is best when it is given in an atmosphere of trust and when the learners believe that the faculty members want them to succeed.  We could do a better job of cultivating a growth mindset in our learners and our faculty so that feedback is asked for, and given, with the belief that it will lead to improvement.

CP: What are the next steps for this tool and intervention?

AZ: We are continuing to evaluate written feedback on an ongoing basis. We pool the feedback written by faculty members semi-annually and ask a member of our education committee to assess this feedback.  The documents are sent to the reviewers in a de-identified way to mitigate any bias that might arise from relationships the reviewer may have with the person being reviewed.  The faculty member then receives a letter with specific comments about what they are doing well and areas in which they could improve.  We try to follow our model so they see yet another example of this feedback cycle. As a reviewer I have noticed an improvement in the quality of the written feedback over time.  We also teach a version of the workshop to our PGY2 residents as they transition into their PGY3 year.  Our hope is to seed the next generation of clinician-educators with these skills.


Blog post author

Camille Petri, M.D. is a Clinical and Research Fellow in Pulmonary and Critical Care Medicine at the Massachusetts General Hospital and Beth Israel Deaconess Medical Center. She is a Research Fellow in the Harvard Medical School Academy Fellowship for Medical Education Research where she is investigating interdisciplinary teamwork and collaboration in the intensive care unit. 
Twitter handle: @snurpycp


Article author

Amy B. Zelenski, PhD, is an Assistant Professor and the Director of Education for the Department of Medicine in the School of Medicine and Public Health at the University of Wisconsin-Madison. She designs curricula and teaches improv, communication, empathy, teamwork, teaching, and self-awareness skills to medical professionals.  Her research focuses on teaching physicians how to engage in empathic behaviors and how building skill in empathic behavior can increase the quality of teaching and patient care while decreasing healthcare provider burnout and personal distress.