We are under increasing pressure to practice what I would term checklist medicine. That pressure comes from three main sources: billing, legal concerns, and time. In order to bill for our services, especially if we wish to use the (potentially more lucrative) E&M coding scheme, our assessment and documentation must fulfill a checklist of standard criteria. To ward off potential lawsuits, we must document standardized lethality assessments and risk/benefit discussions. Of course, we must do all of these things within a specific time interval, which can be as short as 15 minutes per encounter. We consequently don’t have much room to get to know our patients as human beings, to dive deep into the multi-faceted reasons for their suffering, or to understand (much less help them utilize) the strengths and resources they possess. Heck, when faced with patients with complex diagnoses, we barely even have time to provide appropriate psychoeducation.
I butted heads with these limitations over the past year while moonlighting at a rural mental health clinic via telepsychiatry. The clinic served a low-income, primarily Medicaid/Medicare population and gave me only 15 minutes per follow-up. Every second of the interaction was precious, and yet I felt tremendous pressure to simply read off a checklist of routine questions to make sure all of my billing and legal bases were covered. This was not how I wanted to practice medicine. I suppose I could have turned down the job, but instead I chose to take it on as a challenge to improve my clinical efficiency.
I thought long and hard about potential ways to cut corners and about alternative methods of data collection. I decided to use self-report measures, as they can be completed by patients prior to the appointment and have generally been shown to correlate well with clinician-administered tools. I had previously worked at a clinic that used the QUIDS (http://www.ids-qids.org/index2.html) for all of its follow-up appointments, and I was also very familiar with the weekly mood/lethality tracking charts used in Dialectical Behavioral Therapy. The QUIDS was mildly useful for tracking depressive symptoms in those with mood disorders but was cumbersome to read and score and did not provide the broader symptom coverage that I knew I needed. In my naive optimism, I thought I could just search the internet, readily identify a different, more comprehensive measure, and plug that in for use with my patients. The obvious advantage of using an existing scale is that it has already been well studied and validated; making my own scale would mean risking a potentially biased or invalid measure. I did not have much luck identifying such an assessment, and piecing together different standardized assessments created an overly long, tedious, and repetitive document. I decided the risk of making my own was worth taking. To better conceptualize the questionnaire, I created a list of the things I needed the form to accomplish as well as a list of potential limitations and pitfalls.
Clinical information needs
- Provide a basic review of the more common psychiatric symptoms seen in my patient population.
- Screen for medical illnesses, medication changes, hospitalizations, medication non-adherence, common medication side effects.
- Help gauge treatment progress.
The ideal self assessment
- Must have a low perceived burden, since completion rates and validity tend to decrease as the perceived burden of a scale increases. The form must therefore be: 1. easy to read, at roughly a 5th-grade reading level (the average reading level among Medicaid enrollees); 2. relatively brief, so that patients don’t have to arrive 15 minutes early to complete it.
- Should provide me with clinically useful information in an easily accessible format.
- Questions can’t be too specific to a particular disorder. This was one of the major complaints I remember patients having about the QUIDS at my prior clinic. It was a great fit for patients with major depression, but patients with unrelated disorders, e.g. ADHD or schizophrenia, felt like they were constantly being asked questions that were not particularly relevant to their treatment.
- Must fit on one page front and back for ease of scanning and paper conservation (this is specific to my situation).
Potential pitfalls of self-report measures
- Reporting bias resulting in minimization or over-reporting of symptom severity, thereby reducing validity.
- People interpret and use scales differently: what I might rate as an ‘8’ on a 10-point scale, someone with the same opinion might rate as only a ‘6’ because they interpret the meanings of the scale points differently.
- Cannot be completed by some individuals due to illiteracy, physical debility, or compromised cognitive functioning.
- Risk of the measure replacing clinical judgment (this applies to any checklist-type measure).
The self assessment form
After putting the follow-up form together, I went through several iterations of question wording, formatting, spacing, etc. I would generally revise the form, send it out to my clinic, and then use the revised version for a month before deciding what worked well and what needed to change. All along the way, I also consistently obtained feedback from my patients. I am currently on version 4 of the adult/teen self-report form and version 2 of the more recently invented “caretaker” form. I won’t bore you with all of the details that went into the numerous revisions. Rather, I’ll attempt to share some of the things I have learned in the process.
1. Consistency is key
I’ve had the pleasure of integrating these forms into my workflow in 3 different locations now. The first few weeks of using the forms at a new clinic are always rocky – staff forget to give out the forms, patients forget to fill them out, patients miss the back page, patients don’t arrive at their appointments in time to get the form filled out. Amazingly, within a few weeks of consistent use, a rhythm emerges and everything runs more smoothly. I too have steadily gotten into a rhythm. I need about 2 minutes to glance over the current version of the form and prioritize the topics for discussion. For patients I see in person, I will usually look over the form while checking the patient’s weight and BP.
2. The scale you use matters
The very first version of the follow-up form was basically a combination of items from the PHQ-9, the QUIDS, and some of my standard illness/medication side-effect questions. After just a few weeks of testing, it became evident that the Likert scale I had adopted from the PHQ-9 (“Over the past 2 weeks, how often have you been bothered by any of the following problems? Not at all, several days, more than half the days, nearly every day”) was too specific and often didn’t measure what I needed it to measure. For instance, patients with chronic fatigue always seemed to put down #3 for feeling tired nearly every day, but the degree to which they felt tired varied dramatically. Several patients specifically gave me feedback that they weren’t sure how to rate some of their symptoms on the PHQ-9 scale. I subsequently changed the Likert scale to a more general one: never, rarely, sometimes, often. Conversely, that frequency-based scale does not work as well for assessing overall mood, anxiety levels (e.g. patients can have frequent lower-level anxiety or rare extreme periods of anxiety), or quality of life. For the latter, bare numbers without associated verbal benchmarks turned out to be too prone to patient response bias. The compromise (a numeric scale with verbal benchmarks along the bottom for reference) seems to provide much more consistent and clinically meaningful results.
3. Grouping specific items improves efficiency
My initial form was a jumble of words in a grid, and it took effort to look through it and interpret the results. It was not much better than the QUIDS, where I had to glance through the various question boxes to get a gestalt of the presentation. The effort required to do this for each patient really takes away from the utility of the assessment. After several iterations, my form is now broken up into discretely labeled sections addressing life events, medical problems, and pharmacotherapy. The psychiatric ROS is broken up into subsets of questions pertaining to neurovegetative symptoms, anxiety/agitation, lethality, etc.
4. Medical review of systems (ROS) increases efficiency and improves the overall assessment
Concurrently with creating my forms, I was also reading up on the intricacies of E&M coding. It turns out that a major distinction between being able to code a 99213 versus a 99215 is the medical ROS. Imagine, in the middle of your appointment, you find out your patient is having increased SI with a plan. So, obviously, you now need to conduct a 12-point medical review of systems to bill the higher-complexity code. Makes total sense, right? Wrong, but easy to fix if you’re using a follow-up form. As of version 3, in the space of 3 lines, I have all of the medical ROS information needed to qualify for a 99215.
Since adding a 12-point ROS to my forms, I have encountered a somewhat unintended consequence – I’m catching previously undiagnosed medical conditions at a much higher rate. One particular case really stands out in my mind: a young adult who had previously been seen by a different provider at my telepsychiatry clinic. She had marked off muscle/joint pain, nausea, and headaches, and had noted a physical injury (“sprained my knee AGAIN bending down to get groceries”) as well as a recent ED visit for chest pain on the life events screener. She was tall, thin, and frustrated with a medical system that she viewed as consistently dismissive of her various concerns. The description of her recent injury particularly caught my attention. I don’t think I would have focused my 20-minute encounter on medical concerns had I not seen her form. A few additional questions later, my follow-up plan went from simply increasing her SSRI to a genetics referral and a long call to her PCP to describe my concerns. The patient was subsequently confirmed to have Ehlers-Danlos Syndrome. A few months later, so were her mother and sister. All had been struggling with related symptoms for years. You could argue that screening for medical problems, aka “primary care work,” takes away valuable time from “actual” psychiatric care. However, especially for that patient, the recognition of her underlying medical problems went much further toward improving her mood and anxiety than any cognitive therapy or medication I prescribed. Other cases have not been as dramatic, but even minor things add up.
5. You get what you (do and don’t) ask for
Having gleaned the sheer amount of information I had been missing previously, even in my more thorough clinical encounters, I designed my follow-up forms to err on the side of gathering too much information rather than too little. My greatest initial worry – that patients would find having to complete a follow-up form off-putting or overly tedious – has not materialized. Even the patients who initially griped about having to fill out a form rapidly changed their minds when they realized that most of their sessions now revolved around their primary concerns rather than standard systems-review questions. Furthermore, something about the simple, impartial nature of the questionnaires seemed to facilitate patient disclosure. Even a year into this experiment, I continue to be amazed at how much more readily patients report potentially concerning symptoms and behaviors on a checklist versus writing them in versus discussing them in person. This is especially true for substance use and self-injury.
a. Medication side effects
I have come to realize how frequently I had previously been missing common medication side effects. My standard direct question about side effects tended to be too broad and led patients to simply say no and move on to other concerns. The answers frequently changed when patients were explicitly asked about specific effects on the form. I found that at least a third of the patients who had chronically been on their treatment regimen “with no side effects” actually had significant dry mouth and periods of grogginess. I had lots of discussions about managing dry mouth in the 2 months after I first added a medication side-effects screener to my form.
b. Suicidal ideation and self injury
The form has been surprisingly effective at screening for SIB and SI/suicide attempts, especially in my patients with personality disorders. On several occasions, patients initially denied SI/SIB during the interview, only to begrudgingly disclose the events after I inquired as to whether they had accidentally checked off the wrong box on the form. The lethality section is probably the one I replicate most often between the form and my appointment. I have yet to encounter a patient who admitted to having SI or engaging in SIB during the encounter but denied these things on the form. Curiously, I have identified a rare subset of patients who will frequently “miss” filling out the lethality section. They almost always have (a) severe personality disorder(s). Their lack of response on the form actually becomes an interesting starting point for an in-session discussion.
c. Substance Use
I have a bad habit of forgetting to inquire about substance use in patients who don’t have it in their history. Even when I remembered to ask, the answers I received did not seem to be as reliable as the ones I have obtained via the two screening questions on the questionnaire. Several times now, I’ve uncovered problematic drinking behaviors in patients who previously reported being “rare, social drinkers.”
Afterthoughts on using self assessment in my practice
Using this form, I could easily complete a 99214 encounter within 15 minutes (minus completing the note, of course). With a cooperative patient and some luck, I could even get through a full 99215 within the same timeframe. Since finishing up at the moonlighting position and moving on to a real job where I get 30 minutes for follow-up appointments, I use the “extra” time I’ve freed up for psychoeducation and therapy. I am much happier with this alternative time distribution, as are my patients. I do stay quite cognizant of the risks of relying on “checklist medicine” for a portion of my assessments. A checklist should never replace clinical judgment; rather, it should inform and enrich it. In the hands of a less therapy-oriented provider, I could easily see a similar assessment morphing into a means of decreasing patient interaction time and quality.
My current goals for the assessment are to transition it from paper to digital and make it “self-scoring.” The ideal scenario would be for patients to complete the form on their cellphones or home computers prior to their appointments, with the results flowing directly into my note. Again, the idea is to free up even more time for the portions of the clinical encounter that I value the most. I launched my digital pilot forms, which use HIPAA-compliant Google Forms, just last month. Only time will tell how much the digital shift adds or detracts from the experience. For now, I’m more than happy to share the paper versions of the child and adult assessments with whoever wishes to make use of them.
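For readers curious what the “self-scoring” step could look like once responses arrive digitally, here is a minimal Python sketch of the general idea: convert verbal frequency responses to numbers, total a section, and surface the items worth prioritizing during the appointment. Every item name, scale mapping, and threshold below is a hypothetical placeholder for illustration, not the actual content or scoring of my form.

```python
# Hypothetical sketch of auto-scoring one section of a digital follow-up form.
# Item names, scale values, and flag thresholds are illustrative placeholders.

FREQUENCY_SCALE = {"never": 0, "rarely": 1, "sometimes": 2, "often": 3}

# Items the clinician wants surfaced immediately if endorsed at any level.
ALWAYS_FLAG = {"suicidal_thoughts", "self_injury"}

def score_section(responses, flag_at=3):
    """Convert verbal responses to numbers, total the section, and collect
    items worth prioritizing in the appointment. A skipped item (None) is
    reported separately rather than scored as zero."""
    scores = {}
    flagged = []
    skipped = []
    for item, answer in responses.items():
        if answer is None:
            skipped.append(item)  # a blank lethality item is itself informative
            continue
        value = FREQUENCY_SCALE[answer]
        scores[item] = value
        if value >= flag_at or (item in ALWAYS_FLAG and value > 0):
            flagged.append(item)
    return {"total": sum(scores.values()), "flagged": flagged, "skipped": skipped}

result = score_section({
    "low_mood": "sometimes",
    "poor_sleep": "often",
    "suicidal_thoughts": "rarely",
    "self_injury": None,
})
# → {'total': 6, 'flagged': ['poor_sleep', 'suicidal_thoughts'], 'skipped': ['self_injury']}
```

Note that a skipped item is reported separately rather than treated as a zero, mirroring the observation above that a blank lethality section is itself clinically informative and worth raising in session.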