This case study describes a planned test to characterize the performance of a new jamming system. The new jammer was required to demonstrate measurable improvement over the legacy jamming system. The test was also designed to screen for important factors to include in future testing.
Scenario and Test Goals
The stakeholders decided to record two response variables: reduction in lethality and miss distance of missile shots. They also decided to vary the following factors and levels:
Because complete randomization was not possible, the evaluators used design of experiments (DOE) principles to generate the following split-plot design:
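The design table itself is not reproduced here, but the split-plot idea it embodies can be sketched in a few lines of code: a hard-to-change factor is fixed for an entire sortie (the whole plot), while the easy-to-change runs are randomized within each sortie (the subplots). The factor names and level counts below are hypothetical placeholders, not the actual factors from this test.

```python
import random

random.seed(7)  # fixed seed so the illustrative plan is reproducible

# Hypothetical factors for illustration only; the actual factors and
# levels used in this case study are not reproduced in the text.
whole_plot_levels = ["one aircraft", "two aircraft"]   # hard to change: set per sortie
subplot_levels = ["CM-A", "CM-B", "CM-C", "CM-D"]      # easy to change: runs in a sortie

n_sorties = 4
plan = []
for sortie in range(1, n_sorties + 1):
    # The whole-plot factor is set once per sortie; it cannot be reset mid-flight,
    # which is exactly why complete randomization is not possible.
    wp = whole_plot_levels[(sortie - 1) % len(whole_plot_levels)]
    # Subplot runs are randomized independently within each sortie.
    runs = random.sample(subplot_levels, k=len(subplot_levels))
    plan.append((sortie, wp, runs))

for sortie, wp, runs in plan:
    print(f"Sortie {sortie} ({wp}): " + ", ".join(runs))
```

Because each sortie gets its own independent within-sortie randomization, the run order differs across sorties rather than repeating a single systematic sequence.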
Potential Complications and Design Modifications
Before proceeding to execute the test, the program manager, who was familiar with the performance of the additional systems involved in the test, posed several thoughtful questions to the design team in anticipation of complications. These questions along with possible design modifications to accommodate such complications are detailed below.
- What if a threat (i.e., missile simulator) goes down within a given mission?
- What if I can’t execute all of the countermeasures in this exact order?
»All of the planned runs should be executed during a sortie.
»The order should not be exactly the same across multiple sorties.
- What if I can’t execute the missions in this order?
- What if I can’t accomplish all of the missions in the design?
»We could eliminate missions 3 and 6, which would eliminate our ability to determine whether the operation of two aircraft affects the jammer's performance.
»We could eliminate missions 1 and 5, making the design a blocked design, but we would lose the ability to test for differences between the two aircraft variants.
Takeaways
Two primary concerns came up concerning test execution:
- Changing from the run order laid out in the test plan
- Deviations from the test plan that reduce the amount of data collected

There is a rationale behind the generated test order, so it is best to stick to it when possible. The run order can be changed, but the replacement order must not be systematic. Thus, if complications are anticipated, the evaluators should come up with an executable run order before conducting the test rather than making "on the fly" modifications. Deviations from test plans nearly always occur, but it is possible to minimize disruptions and information loss by anticipating common problems and knowing which aspects of the design are flexible. Becoming familiar with actual scenarios such as this case study can help you think through the execution of a successful test.
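One way to prepare an executable run order in advance, as recommended above, is to pre-generate a few randomized fallback orders before the test so that operators are never improvising on the fly. The sketch below is a minimal illustration under assumed inputs: the run labels are hypothetical, and the `backup_orders` helper is not part of the case study.

```python
import random

random.seed(11)  # fixed seed so the fallback orders are reproducible

# Hypothetical countermeasure runs for one sortie; the actual run list
# from the test plan is not reproduced in the text.
planned_runs = ["CM-A", "CM-B", "CM-C", "CM-D"]

def backup_orders(runs, k):
    """Pre-generate k distinct randomized fallback orders.

    None of them repeats the planned order, and none repeats another,
    so falling back never reintroduces a systematic sequence."""
    orders, seen = [], {tuple(runs)}
    while len(orders) < k:
        cand = random.sample(runs, k=len(runs))
        if tuple(cand) not in seen:
            seen.add(tuple(cand))
            orders.append(cand)
    return orders

for i, order in enumerate(backup_orders(planned_runs, 3), start=1):
    print(f"Backup order {i}: " + ", ".join(order))
```

If the planned order cannot be flown, the crew simply moves to the next pre-approved order, keeping the deviation documented and non-systematic.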

