The Bionic Resident or: Competency Theater

Slogging through another revision of the residency program manual has gotten me thinking about competency and how we measure it. I accept the premise that there are multiple dimensions to physician competence–patient care skills, communication skills, professionalism, etc. These are what the ACGME terms “core competencies” and it’s easy, in a general way at least, to assess them. Everyone knows who the knowledgeable physicians are, who has great operating skills but no bedside manner, and who tends to lie and cover up mistakes.

It’s tougher to quantify these competencies. Medical knowledge can be tested with standardized tests, but I’ve seen plenty of residents whose real-world knowledge base is very good receive low scores on the in-service exams. This is in part because those exams test a lot of minutiae in order to create a distribution of scores. I still don’t understand why such a distribution is necessary or desirable; why not ask only pertinent questions and set an objective (criterion-referenced) standard instead of grading on a curve (norm-referenced)?

But where I think we’ve really gone too far is in trying to quantify not only the core competencies, but dozens of milestones as well. In neurology, there are 26 of them, and each one of those has several elements. Here’s the Patient Care core competency / Epilepsy milestone, with all ten of its elements:

[Image: Epilepsy Milestone]

Twice each year, each residency’s Clinical Competency Committee is required to upload to the ACGME, for each resident, a score for each milestone. 26 milestones times 10 elements per milestone times 10 residents in our (very small) program equals 2600 assessments that are supposed to be adjudicated by the committee. Twice each year. If you’ve ever had the pleasure of attending a committee meeting, you know that it can be a challenge for a group to make 5 decisions in a few hours, much less 2600!

Each residency program is developing mechanisms to streamline this process, and I won’t bore everyone with the details of how we do it. My concern is with the underlying assumption here–that if we create a large, complex system of resident assessment whereby 5200 boxes need to be checked each year, the result will be a valid assurance that the resident is indeed competent. I’m highly skeptical of this claim; I think we’re trying to create precision where it doesn’t exist and overusing checkboxes at the expense of narrative assessment. Here’s a recent article that contrasts this “iDoc” paradigm of medical education with a “tea-steeping” paradigm (wherein the resident steeps in the training milieu for 4 years and emerges as a competent neurologist).

“Security theater” is a term some use to describe our airport screening procedures–an array of measures that give the appearance of making the skies safer for the flying public without actually doing so. I think our current assessment scheme can aptly be termed competency theater. Just as the public (reasonably) demands that something be done to ensure that terrorists can’t wreak havoc in the skies, various stakeholders are demanding (reasonably) more accountability from the medical education system. They want assurances that graduating residents have the necessary competence to care for patients. So the ACGME responded by creating the “Next Accreditation System”–a huge undertaking for that organization and for every residency and fellowship program seeking their accreditation (i.e. almost all of them). Now we have an impressive-looking competency framework to show those stakeholders, complete with fancy charts!

[Image: Milestone Graph]

We can record residents’ developmental milestones over time and show these nifty charts to lawmakers, CMS (Medicare) administrators, and other stakeholders. But are these charts truly accurate measures of competence? “This resident scores a 3.4 and that one a 3.7, so let’s offer our fellowship position to the latter.” Or do they simply reflect the obvious fact that residents mature and improve over time?

I think we should bring the pendulum back a bit toward the tea-steeping model, with an emphasis on narrative assessment and a sprinkling of boxes to check. In neurology, I’d maintain many of the structural requirements–continuity clinic, 6 months of inpatient, 6 months of outpatient, 3 months of peds, etc. But I’d severely limit the number of outcome boxes to check: the 5 directly observed clinical exams, X successful lumbar punctures, Y EEG reports, Z EMG reports, 1 grand rounds, 1 QI project, etc. Otherwise, the core competencies should be assessed via narrative feedback from faculty, peers, students, and staff.

Indeed, when assessing prospective residents, I pay scant attention to their transcripts–basically looking for red flags such as failed courses and gaps in the curriculum. Who cares if this student got an 87 in histology while that one got a 92? What I’m really interested in are the narrative assessments. Not even the recommendation letters, which are almost uniformly effusive, but (when I can find them) the in-the-trenches observations of faculty and residents from the end-of-rotation evaluations. That’s where there’s some possibility of ferreting out who’s an effective team member and who has a personality problem. The former will be teachable and will develop clinical skills over time. The latter often shouldn’t be practicing medicine. In reality, we take good people and develop them into neurologists–we don’t build them like the bionic man.
