Survey Administration Best Practices
Through survey administration, the survey designer has an opportunity to set the stage for honest, thoughtful responses. Careful consideration of the survey audience, delivery, and format plays a large role in inviting quality data.
Audience
Administration procedures should vary by audience. When planning a survey, it is important to assess the respondents' motivation and their role in the test. Motivation is critical: respondents must have the cognitive energy and the desire to respond in order to provide honest answers. A good practice is to introduce the survey in a way that encourages respondents to feel invested in the outcome of the test. They must believe their answers are meaningful and important, and that participation is worth their time; conveying this information increases motivation. The introduction can also emphasize that responses are confidential, which encourages honest answers. After a test event, respondents' motivation is often low. To discourage respondents from quickly and thoughtlessly completing the survey so that they can leave, it is best to hold the entire group for a set period before dismissing everyone at once.
Timing
There are generally three options for the timing of a survey during a test: at the end of the test, during natural breaks (e.g., mission/run completion, end of day), or during the test in response to critical events.
Posttest or Exit Surveys: These can be the longest of the surveys, as there is no constraint on getting the participant back into the test. Appropriate questions address thoughts and feelings that will not change across conditions or over time, such as overall satisfaction or preference and satisfaction with or preference for specific user interface components.
End of Mission/Task or End of Day Surveys: These surveys are best for thoughts or feelings that change over time or in response to different conditions or tasks, such as workload and usability. They are especially useful in multi-day test events, particularly when the tasks differ from day to day, and in tests comparing different systems. Because these surveys are administered more than once during the test, the shorter they are, the more likely the respondent will take care in responding each time.
Event-Driven Surveys: These surveys may or may not be administered during a test; they are given in response to specific critical events such as an accident or near miss, a bug report, or an uncommon task. They must be very brief. If they are not, respondents become more likely to withhold reports of critical events in order to avoid the survey. Long event-driven surveys can also reduce motivation to complete any other surveys administered during the test.
Some other common timing issues that degrade response quality include:
- Administering the survey too long after an event or test (memories and impressions fade)
- Administering the survey too quickly after a demanding task (motivation is reduced due to frustration or fatigue)
- Administering the survey too frequently (motivation is reduced due to nuisance surveys)
Format & Environment
In addition to timing and delivery, the context and appearance of the survey play a key role in eliciting quality responses.
Introduction: As mentioned, introductions give respondents context and inform them of the purpose of the survey. It is also helpful to tell respondents what to expect, both from the content and throughout the process of taking the survey. In doing so, it is important not to mention one's own thoughts or opinions about the system, or the desired results of the survey. An introduction can explain why responses are important, that the information is confidential, and how the data will be used, all of which affect motivation to answer honestly and thoughtfully.
Length & Perceived Effort: Respondents' motivation to complete a survey is affected by how much effort the survey appears to require. Length is a key indicator of required effort and thus should be kept to a minimum. Survey length must also fit the time constraints of the test; respondents are easily fatigued when surveys are not brief.
Formatting: A respondent's motivation and ability to complete the survey are also influenced by its appearance. Uncluttered, consistently formatted surveys with clear directions receive the highest quality responses. Grouping similar questions and response sets into distinct sections, minimizing the number of questions per page, and using an easy-to-read font style and size are key elements of professional-looking surveys. Ordering questions and sections logically, and from general to specific, is another formatting choice that can easily improve responses.
Delivery Method: There are three primary forms of survey administration: paper, electronic, and verbal. Paper or electronic surveys are recommended over verbal administration, and paper is generally the most common method. Electronic surveys are more convenient and less prone to error when full-featured, flexible survey design software is used. Paper surveys produce data that must be manually entered into a database, but they are also generally easy to check and modify during a test. Verbal surveys can be beneficial when follow-up questions are necessary, but they provide the least confidentiality, so respondents are more prone to censoring their answers.
Environment: The survey should be administered in the same environment in which the test was conducted; this helps preserve respondents' memories and feelings concerning the system. At the same time, the setting should be as free from distractions as possible. The environment should also limit interaction between respondents and the test team as much as possible, as such interaction can influence responses.
Consistency: These administration procedures should be performed consistently to minimize noise and enable comparisons among multiple surveys, particularly when comparing surveys concerning different systems (e.g., legacy vs. new).
