
Equity and Accessibility Issues with Patient Experience Questionnaires

August 1, 2022 Janice Karin


One of our employees recently received a patient experience questionnaire in the mail. These surveys are designed to capture a patient's impression of their care journey, help improve it, and make navigating the healthcare experience easier and more successful.

This particular survey may help gather the information needed to improve the patient experience in healthcare, but only for certain types of patients. Other patients - those with certain medical conditions or disabilities, those who don't read English well, and others the survey administrators presumably want to reach - will likely have difficulty completing the survey. Unfortunately, we suspect most of these patients will wind up not participating.

The survey was designed to capture the patient's experience with one office visit with a specific provider. However, several barriers might affect some populations taking it:

  1. The survey did not come with any instructions or provide options to get further assistance.

  2. The survey was written only in English, assuming all participants could read English.

  3. The survey was dense and multi-columned, used a small font, was not printed clearly, and required filling in small bubbles (like a standardized test) to complete.

  4. How participants answered certain questions might lead them to jump around rather than answer questions in the order presented.

These problems could impact many people. For example, without clear instructions, many may wonder whether to use a pen or a pencil to fill in the bubbles. But certain populations - patients with complex and chronic medical conditions, people with visual disabilities, non-English speakers, and those with fine motor skill difficulties - are certain to experience frustration. To be equitable, and to address the health equity issues patients already face, surveys must consider all populations in their design and implementation.

Patients with complex or chronic medical conditions

The lack of instructions and the inability to seek help are glaring omissions for this cohort. The yes/no questions cannot always be answered that way. Guidance on whether to skip these questions, write in answers, or choose yes or no when the true answer is maybe or sometimes enables users to respond in some semblance of a consistent manner. Without clear instructions, the resulting data is less consistent and reliable.

This is only the beginning. The first question of the survey asks if you've seen a particular physician within the last six months, without specifying or tying the questions to a specific visit. Many of the later questions then ask for additional information about this visit. This seems simple enough, but factor in the following:

Patients with complex or chronic conditions have multiple medical appointments every month (or even every week), perhaps with a dozen or more different clinics or offices. Are they supposed to clearly remember an encounter from six months ago (after 30, 40, 50 other visits) well enough to answer specific questions about it?

Patients with complex or chronic conditions see certain clinicians frequently. Should they refer to the most recent visit? Use the one they remember best? Use the one where they experienced the most problems? How do they remember each specific visit? Some questions do specifically reference the most recent appointment while others just reference the patient's visit with the named provider (as if there couldn't be more than one in six months).

Patients with complex or chronic conditions may see both a physician and other clinicians in the same office, often seeing a nurse practitioner more than their physician. Do these count as visits to this doctor for the survey? If not, it may be more helpful to ask the patient about their appointments with the other clinical staff. Those may be more recent, more clearly remembered, and more indicative of the patient's experience in that office.

Later questions in the survey continue to pose problems. For example, one question asks whether the patient has seen any specialists in the past six months and, if so, whether the doctor named on the survey was familiar with the findings of those visits. It is almost certain the patient has seen multiple specialists in this timeframe, and unlikely the answer to the second question is the same for all of them. Given the short length of appointments, even the longer ones given to more complex patients, it's unlikely all of these other visits were discussed, so the patient may not even know whether the physician reviewed most of the relevant specialist notes. There is no good way to answer this question. Other questions also assume minimal or homogeneous experiences that are unlikely for patients in frequent contact with an office.

Other surveys include a specific visit date; while this may still cause issues, at least a patient receiving those surveys knows the visit they're supposed to evaluate.

Other changes to survey design that would support these populations include offering options like "maybe", "sometimes", "periodically", or "it depends on X", or allowing write-in answers. Open-ended questions are not as easy to analyze as fixed choices, but an accurate response is better than inaccurate data or no response at all.

Patients with visual impairments

This survey was not designed to support patients with visual impairments. There was no large print or online version of the form available (although neither of these is an automatic solution; large print standards vary greatly and many websites and online apps are not particularly good about being accessible to people with visual impairments).

The physical design of the survey - small fonts, bubble fill-ins, and poor overall print quality - can impede participation. Some people with visual impairments experience headaches, dizziness, and other debilitating symptoms because of these factors. Taking them into consideration can be the difference between a participant successfully reading the form or not, even with available assistive devices.

Speaking of assistive devices, this style of survey - a two-column layout with logic requiring non-linear progression - is especially difficult for some assistive devices to navigate. Finding the next survey section is hard when you can't easily scan for a particular question number, and assistive devices generally expect single-column text formats.

Making the survey available online may increase the odds that a visually impaired patient will complete it. A successful online survey doesn't assume a minimum screen resolution, dictate display fonts and their sizes, enforce specific color schemes, or disregard browser and computer settings in favor of a pretty but unusable interface. Having the option to dictate answers would also help.

Patients with fine motor skills difficulties

Patients with mobility issues in their hands or other restricted fine motor skills could have trouble filling in the small bubbles on the survey. Patients with severe issues could even have trouble holding the form or turning the pages.

Again, open-ended answer options might help, as would being able to note the appropriate answer outside of the bubbles. Having the option to dictate answers would also help here.

Patients who do not read English well

The survey is written in English, with no translations provided in the mailing and no mechanism to request the survey in another language. An online survey can offer automated translation; a paper survey cannot. Someone with only a basic understanding of English could also struggle with the terminology used. Current recommendations say patient medical information should be written at a fourth- to sixth-grade reading level; the more complex the language, the harder it is for this population to understand.

Offering the survey in different languages should be a no-brainer. If this is expensive, then offer participants an online option or a call-in translation service. Surveys cost money, but if you want accurate results then do your best to ensure equity.

On the subject of accessibility

This survey does not address accessibility in any way. There should be some mechanism to ask about it - even a simple "if applicable, did you encounter any accessibility issues during your visit?" Ideally, the survey would separate questions exploring physical issues, staff issues, and issues related to the physician. Another approach might be to ask participants if they would be willing to take a follow-up survey on accessibility. This would allow a more detailed set of questions to capture specific areas the physician or physician office needs to work on to improve the experience of patients with various disabilities.

The onus is on the patient

Hidden away in the very bottom right-hand corner of the final page are two questions:

  1. Did anyone help you with this survey?
  2. If so, how?

This last question includes options like "someone read me the survey" and "someone translated the survey for me," so the survey authors know some respondents might not be able to complete the survey as is. However, they put the onus on the patient to overcome these difficulties. Assuming these patients will have someone to help them is presumptuous at best, especially during a pandemic that restricts contact. Many of those likely to have trouble completing the survey on their own are also at the highest risk of bad outcomes from Covid: they are elderly, disabled, members of ethnic and racial minorities, patients with co-morbid conditions, and so on. These patients often live alone and remain under greater restrictions than the general population. Having someone come into their home to help them with a paper survey (one that can't be handled remotely) is often not feasible.

Collecting data from patients about their experiences within the healthcare system should be applauded. But until we do so in a way that is easy for everyone and encourages participation by underserved populations, the efforts and the data are flawed. We must adjust our processes to account for those most likely to encounter difficulties - the very people who most need the status quo to change.
