Regardless of the quality and “autonomy” of a system, an investigation of that system is incomplete without considering how it interfaces with humans. Whether the system requires human input to operate, demands maintenance, or provides information for human use, the properties of these interactions are essential to fully understanding it. Surveys are a research measurement tool designed to collect such information from the people who interact with the system.
Purpose of Surveys
Surveys are objective measures of subjective constructs such as the thoughts, perspectives, intentions, and mental processes of people. They do so by recording responses to prompts about characteristics of the respondents (e.g., demographic information) or their thoughts about and reactions to the system. As such, when used properly to collect information from system users, surveys complement system data and provide a more holistic view of operation and performance. Within the context of design of experiments (DOE), surveys can serve as measures of factors, control variables, and even response variables. For example, a subject matter expert’s rating of a system’s effectiveness can be a primary response.
More often, surveys are administered to test participants such as operators and maintenance personnel to assess factors such as operator workload, system usability, and utility. When using surveys in this way, it is important to include a variety of operators who represent the population, not just, for example, those with the most operating experience or the best performance. Surveys can also serve as diagnostic measures, providing insights that inform future modifications to the system as well as the development of Concepts of Operations (CONOPS) and Tactics, Techniques, and Procedures (TTPs).
Best Use of Surveys
As mentioned, surveys are best suited to measuring people’s subjective experiences, attitudes, perspectives, and mental processes. Often, these are matters of system suitability. Surveys are not, however, well suited to measuring outcomes such as system accuracy, mission completeness, or timeliness. Matters of system performance are better captured with measures of physical, objective truth (e.g., actual time to completion) than with subjective perceptions of that truth (e.g., operator ratings of timeliness).
When collected together, objective and subjective measures can complement one another, each providing context for interpreting the other and allowing insight beyond either in isolation. For example, operators may perceive that their training was adequate, yet objective measures may show excessive errors. In conjunction, these measures highlight an area deserving attention that might otherwise have been overlooked. Surveys are likewise poor measures of situational awareness: they capture how aware respondents believed they were, not how aware of environmental stimuli they actually were.
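As a small sketch of how objective and subjective measures complement one another, the snippet below compares operators' timeliness ratings against measured completion times. All data, variable names, and the 300-second requirement are hypothetical, chosen only to show the kind of mismatch the text describes:

```python
# Hypothetical example: compare operators' subjective timeliness ratings
# (1 = far too slow ... 5 = very timely) against measured task times.
# The data and the 300-second requirement are illustrative assumptions.
from statistics import mean

subjective_ratings = [4, 5, 4, 4, 5]            # operators rated the task as timely
measured_times_s = [410, 395, 388, 402, 371]    # measured completion times (seconds)
requirement_s = 300                             # assumed timeliness requirement

avg_rating = mean(subjective_ratings)
avg_time = mean(measured_times_s)

# A favorable subjective result paired with an unmet objective requirement
# flags an area that either measure alone could have missed.
if avg_rating >= 4 and avg_time > requirement_s:
    print(f"Mismatch: mean rating {avg_rating:.1f}/5, "
          f"but mean time {avg_time:.0f}s exceeds {requirement_s}s")
```

Here the subjective data alone would suggest no problem, and the objective data alone would not reveal that operators failed to notice the shortfall; together they point to a specific issue worth investigating.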
Commonly Used Surveys
Many questions about human-system interactions arise repeatedly in operational test and evaluation. As a result, rigorous surveys have already been developed for these situations; they are empirically validated, reliable instruments whose properties are documented in the scientific literature. Whenever possible, it is best to use these preexisting surveys, as doing so both saves test resources and provides defensible, quality data. Several of the most commonly used surveys in operational test and evaluation are listed below.
| Construct | Survey |
| --- | --- |
| Workload | Crew Status Survey |
| | Modified Cooper-Harper |
| | Multiple Resource Questionnaire |
| Usability | System Usability Scale |
| System Trust | System Trust Scale |
| | Human Computer Trust Measure |
| Fatigue | Crew Status Survey |
| | Profile of Mood States |
| Stress | Short Stress State Questionnaire |
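To illustrate how responses from one of these instruments become an analyzable measure, the System Usability Scale has a published scoring rule: ten items answered on a 1-to-5 scale, where each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is multiplied by 2.5 to yield a 0-to-100 score. A minimal sketch, with made-up responses:

```python
def sus_score(responses):
    """Score a single System Usability Scale (SUS) questionnaire.

    `responses` is a list of ten integers from 1 to 5, in item order.
    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the summed contributions are scaled by
    2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based: even index = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondent who agrees with the positively worded (odd)
# items and disagrees with the negatively worded (even) items:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 1]))  # → 82.5
```

Using the published scoring rule, rather than an ad hoc one, is part of what makes these instruments' results defensible and comparable across tests.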
Best Practices for Constructing Surveys
The decision of whether to use an existing survey or create your own, like any design choice, must be responsive to the purpose, context, and constraints of a given test. Often, evaluators must assess a particular feature of a system or measure specific requirements for which existing surveys are inappropriate. In these cases, evaluators must write their own surveys, which, when done correctly, is a time-intensive task.
Survey development should be a collaborative effort involving the survey commissioners, respondents, survey designers, and analysts. By understanding the perspectives of these stakeholders, designers can tailor the survey to address the appropriate goals and elicit quality responses. One of the most effective things a survey designer can do to improve data quality is to produce a professional, easy-to-follow, and well-thought-out survey.
Another key to quality survey design is to put yourself in the mindset of a respondent and write survey items that person would readily understand and answer. The language should be conversational rather than abbreviated or technical, and it should require little interpretation. As a survey writer, it is your job to make participation as simple and painless as possible. These practices reduce the work it takes to complete the survey and increase respondents’ motivation to answer honestly and thoughtfully.
Survey design has been carefully studied, and several best practices exist to help evaluators gather trustworthy survey data. These cover the writing of question stems (e.g., “How strongly do you disagree or agree with this statement?”), response options (e.g., “Strongly Disagree” to “Strongly Agree”), and the organization and approach of the survey as a whole. Double-checking that your survey items follow these principles will greatly increase the quality of your results.
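As a small illustration of the stem-and-response guidance above, the sketch below represents one survey item as a data structure, pairing a conversational stem with a balanced five-point scale and the numeric coding an analyst might apply. The item wording and scale labels are examples only, not a prescribed standard:

```python
# Illustrative only: one survey item with a balanced response scale
# (symmetric options around a neutral midpoint) and a numeric coding.
item = {
    "stem": ("How strongly do you disagree or agree with this statement: "
             "'The display made it easy to find the information I needed.'"),
    "options": [
        "Strongly Disagree",
        "Disagree",
        "Neither Disagree nor Agree",
        "Agree",
        "Strongly Agree",
    ],
}

# Map each response label to the value an analyst would record (1-5).
coding = {label: value for value, label in enumerate(item["options"], start=1)}

print(coding["Strongly Agree"])  # → 5
```

Keeping the response options symmetric, with an explicit neutral midpoint, avoids nudging respondents toward one end of the scale; the numeric coding is what later feeds the analysis.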
Several general questions that adhere to these principles have been compiled and can serve as a starting point for writing your own survey. These questions are organized by multiple categories, including system type and suitability criteria, and many can be easily adapted to the specific goals and context of your own test.
In addition to the content of the survey, data quality is also influenced by administration practices. Survey administrators must be careful not to let their own opinions or desires bias respondents, and they must also carefully plan aspects of survey delivery to encourage accurate responses.