Themes by Year

1: Improving the Quality of Test & Evaluation

Talks under this theme may include (but are not limited to) the following topics: Design of Experiments, Integrated Testing, Sequential Testing, Model-Based Systems Engineering, Digital Transformation

2023


2: Sharing Analysis Tools, Methods, and Collaboration Strategies

Talks under this theme may include (but are not limited to) the following topics: Bayesian Methods, Statistical Modeling, Data Visualization, Communication Skills, Statistical Engineering

2023


3: Advancing Test & Evaluation of Emerging and Prevalent Technologies

Talks under this theme may include (but are not limited to) the following topics: Cybersecurity, Autonomy & Artificial Intelligence-Enabled Systems, Software Systems, Space Systems, Hypersonics, Electromagnetic Spectrum Operations

2023


4: Solving Program Evaluation Challenges

Talks under this theme may include (but are not limited to) the following topics: Sustainment, Data Management, Reproducible Research, Modeling & Simulation and Uncertainty Quantification, Reliability, Human-System Interaction, DoD’s Five Data Decrees, Evaluating Mission-Level Effects

2023


1: Design of Experiments

The complexity of modern defense and aerospace systems, and the environment in which they operate, means that “live” testing is often expensive and time-consuming. Thus, test efficiency is key. Design of Experiments (DOE) techniques allow test planners to maximize information gain for a given set of resources. Modeling & Simulation (M&S) and Integrated Test & Evaluation (IT&E) are core practices that the T&E community can implement to aid in test efficiency and increase understanding of system performance, but they also require detailed and deliberate planning using DOE best practices. Sessions with this theme will focus on DOE methodologies and applications across all T&E domains. These can include test design techniques for verifying, validating, and accrediting M&S; quantifying the uncertainty in live or M&S outcomes; and methods for integrating, to the extent possible, all available information, including previous test outcomes, to sequentially update and refine future testing. Other methodological topics covered in these sessions may include reliability and Bayesian test design methods. A brief, illustrative design-efficiency sketch follows this entry.

2022
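
As a toy illustration of maximizing information gain under resource constraints, the sketch below builds a two-level full factorial design in three factors and compares its D-efficiency (a standard design-quality measure) against an arbitrary four-run subset. This is a minimal sketch assuming only numpy; the factors, run counts, and function names are invented for the example.

```python
import itertools
import numpy as np

def model_matrix(runs):
    """Main-effects model matrix: intercept plus one column per factor."""
    return np.column_stack([np.ones(len(runs)), runs])

def d_efficiency(X):
    """D-efficiency: |X'X|^(1/p) / n; higher is better, 1.0 is ideal."""
    n, p = X.shape
    det = max(np.linalg.det(X.T @ X), 0.0)  # guard against tiny negative round-off
    return det ** (1 / p) / n

# Two-level full factorial in three factors (coded -1/+1).
full = np.array(list(itertools.product([-1, 1], repeat=3)))

# An arbitrary 4-run subset, standing in for an ad hoc test plan.
rng = np.random.default_rng(1)
subset = full[rng.choice(len(full), size=4, replace=False)]

print("full factorial D-efficiency:", d_efficiency(model_matrix(full)))
print("arbitrary 4-run D-efficiency:", d_efficiency(model_matrix(subset)))
```

The orthogonal full factorial achieves the ideal efficiency of 1.0, while an arbitrary subset generally does worse; that gap is what a carefully optimized design recovers under a fixed test budget.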


2: Test and Evaluation Methods for Emerging Technology

The drive to develop new technologies is expanding to more actors with lower barriers to entry, and moving at accelerating speed. New threats to our systems will continue to emerge, especially in cyberspace, putting pressure on our operational T&E community to adopt faster and more broadly scoped strategies. As technology advances, researchers, including those in the DoD, are becoming more interested in how machines can perform without the aid of humans. Autonomy research includes concepts such as environment perception, decision making, operation, and ethics. Sessions with this theme will: 1) investigate why and how to improve user engagement with systems that now include artificial intelligence, robotic teammates, and augmented reality; 2) explore Test & Evaluation methods and challenges for emerging technologies and domains, including cybersecurity of software-enabled systems, hypersonics, directed energy weapons, electronic warfare, autonomous weapon systems, artificial intelligence, and additive manufacturing; and 3) discuss methods used to model how the quality of human-system interaction and human-machine teaming impacts operational performance, survey development, test design, scale validation, and network analysis.

2022


3: Data Management and Reproducible Research

The goal of data management is to ensure you get what you need from your data. Achieving this goal requires organizations and test programs to have a plan for acquiring, cleaning, analyzing, documenting, organizing, and archiving data. Reproducibility is a key feature of successful data management. Reproducible analyses are transparent and easy for reviewers to verify because results and figures can be traced directly to the data and methods that produced them. Poor reproducibility habits result in analyses that are difficult or impossible to review, prone to compounded mistakes, and inefficient to re-run in the future. They can lead to duplication of effort or even loss of accumulated knowledge when a researcher leaves your organization. These sessions will 1) share best practices, lessons learned, and open challenges for implementing data management strategies within T&E; and 2) discuss the benefits of reproducible research and demonstrate ways that analysts can introduce reproducible or automated research practices during each phase of the analysis workflow: preparing for an analysis, performing the analysis, and presenting results. A minimal reproducible-analysis sketch follows this entry.

2022
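
One minimal sketch of “results traceable to the data and methods that produced them”: a single analysis script that fixes its random seed, records a checksum of its input file, and writes every summary it produces to an output directory. The file paths and the bootstrap summary are hypothetical stand-ins for a real T&E analysis.

```python
import hashlib
import json
from pathlib import Path

import numpy as np

# Hypothetical input; in practice this would be a versioned, archived file.
DATA_FILE = Path("data/test_results.csv")
OUT_DIR = Path("output")

def file_hash(path: Path) -> str:
    """Checksum so reviewers can confirm exactly which data produced the results."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def main() -> None:
    rng = np.random.default_rng(2023)          # fixed seed: re-runs are identical
    data = np.loadtxt(DATA_FILE, delimiter=",")
    summary = {
        "input_sha256": file_hash(DATA_FILE),  # provenance of the raw data
        "n_records": int(data.shape[0]),
        "mean": float(data.mean()),
        "bootstrap_se": float(
            np.std([rng.choice(data.ravel(), data.size).mean() for _ in range(1000)])
        ),
    }
    OUT_DIR.mkdir(exist_ok=True)
    (OUT_DIR / "summary.json").write_text(json.dumps(summary, indent=2))

if __name__ == "__main__":
    main()
```

Because the script is deterministic and self-documenting, a reviewer can re-run it against the archived input and trace every reported number back to its source.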


4: Analysis Tools and Techniques

Testers should use robust analytical techniques to ensure that conclusions are defensible and to glean maximum information from the data collected under the test design. Statistical analysis techniques allow system evaluators to make rigorous statements about system characteristics by objectively summarizing the data, determining which factors are significant, discovering trends in the data, and quantifying uncertainty in results. Oftentimes these methodologies are complex or not widely understood; user-friendly software tools and case-study demonstrations of techniques can help make these methods more accessible to the larger T&E community. Sessions with this theme will introduce statistical methodologies for T&E, discuss case-study applications of analytical techniques, and showcase software or interactive tools for conducting rigorous analyses. Specific topics may include statistical modeling, reliability analysis, or M&S analysis. A short model-fitting sketch follows this entry.

2022
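
As a small illustration of “determining which factors are significant” and “quantifying uncertainty in results,” the sketch below fits an ordinary least squares model with statsmodels and reports coefficient estimates, 95% confidence intervals, and p-values. The data are simulated, and the factor names are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated test data: two coded factors, only the first truly matters.
n = 40
speed = rng.uniform(-1, 1, n)       # coded factor 1
altitude = rng.uniform(-1, 1, n)    # coded factor 2
miss_distance = 5 + 2.0 * speed + 0.0 * altitude + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([speed, altitude]))
fit = sm.OLS(miss_distance, X).fit()

print(fit.params)      # point estimates for intercept, speed, altitude
print(fit.conf_int())  # 95% confidence intervals
print(fit.pvalues)     # which factors are statistically significant
```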


5: Special Topics

Special topics sessions will include topics such as sustainment analyses and data visualization. Sustainment analyses: Quantifying the benefits of specific “what if” scenarios is critical to making real-world evaluations and to weapons system understanding and development. To do this, researchers use end-to-end simulations that model all aspects of weapons system sustainment (spares, manpower, operations, and maintenance); these are powerful tools for understanding how investments in different parts of the sustainment system affect readiness. Data visualization: The best way to effectively communicate one’s ideas is often to provide visual representations of conclusions and data that are easily understood by the audience. Visual perception is fast and efficient, allowing us to detect patterns and quickly distill large amounts of information. Effective data visualization takes advantage of this fact, aiding comprehension of complex topics and key takeaways from our analyses. A toy sustainment simulation follows this entry.

2022
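
A toy version of the end-to-end sustainment idea: a Monte Carlo simulation of a small fleet in which parts fail at random and repairs draw on a finite spares pool, so readiness can be compared across spares-investment levels. All rates, quantities, and names are invented for illustration; real sustainment models are vastly richer.

```python
import numpy as np

def simulate_readiness(n_aircraft=10, spares=3, days=365,
                       fail_prob=0.02, repair_days=7, seed=0):
    """Fraction of aircraft-days mission capable for a given spares pool."""
    rng = np.random.default_rng(seed)
    down_until = np.zeros(n_aircraft, dtype=int)  # day each aircraft returns
    pool = spares                                  # spares are not replenished in this toy
    up_days = 0
    for day in range(days):
        for i in range(n_aircraft):
            if down_until[i] > day:
                continue                           # still in repair
            up_days += 1
            if rng.random() < fail_prob:           # random part failure
                if pool > 0:
                    pool -= 1                      # swap a spare: negligible downtime
                    down_until[i] = day + 1
                else:
                    down_until[i] = day + repair_days  # wait for depot repair
    return up_days / (n_aircraft * days)

for spares in (0, 3, 10):
    print(spares, "spares ->", round(simulate_readiness(spares=spares), 3))
```

Comparing readiness across spares levels is exactly the kind of “what if” question the full end-to-end sustainment models answer with far more fidelity.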


1: Modeling & Simulation

The complexity of modern defense and aerospace systems, and the environment in which they operate, means that “live” testing is often expensive or even impossible. Thus, modeling and simulation (M&S) capabilities are critical to fully understanding a system’s capabilities and limitations. However, if operators or acquisition officials are going to depend on M&S results to inform decisions, testers should thoroughly investigate the degree to which a model or simulation provides an accurate representation of the real world. Sessions with this theme will explore techniques for M&S test design and analysis, uncertainty quantification, and methods for verification and validation (V&V). Topics may cover what testers can do when there is no live data available for V&V, as well as Certification by Analysis. A simple validation-test sketch follows this entry.

2021
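
One simple, hedged illustration of statistical model validation: compare a sample of simulation outputs against live test measurements with a two-sample Kolmogorov–Smirnov test from scipy. The data here are simulated stand-ins, and a real validation effort would use domain-appropriate acceptance criteria rather than a single p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-ins: live measurements and M&S predictions of the same quantity.
live = rng.normal(loc=100.0, scale=5.0, size=25)        # e.g., 25 live-test runs
simulated = rng.normal(loc=102.0, scale=5.0, size=500)  # cheap M&S replications

statistic, p_value = stats.ks_2samp(live, simulated)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# A small p-value flags a distributional mismatch worth investigating
# before the simulation is accredited for this intended use.
```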


2: Integrated Testing

Integrated test and evaluation (IT&E) is a core practice the Test & Evaluation community can implement to aid the DoD in shifting the acquisition process left. However, its use across the community is uneven. Integrated testing focuses on the early and efficient collection of information to support independent evaluations. Sessions with this theme will focus on integrating, to the extent possible, all available information, including previous test outcomes, to sequentially update and refine future testing. Methodological topics covered in these sessions may include design of experiments, reliability, and Bayesian analysis methods. A small Bayesian updating sketch follows this entry.

2021
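
A minimal sketch of sequentially updating with previous test outcomes, using a Beta-Binomial model: developmental-test results form a prior on the system’s success probability, and a later operational-test phase updates it. The counts are invented for illustration, and the example assumes scipy.

```python
from scipy import stats

# Invented counts: 18 successes in 20 developmental-test trials.
prior_a, prior_b = 1 + 18, 1 + 2   # uniform Beta(1, 1) prior updated with DT data

# New integrated/operational-test phase: 7 successes in 8 trials.
post_a, post_b = prior_a + 7, prior_b + 1

posterior = stats.beta(post_a, post_b)
print("posterior mean success prob:", round(posterior.mean(), 3))
print("95% credible interval:",
      [round(q, 3) for q in posterior.ppf([0.025, 0.975])])
```

Because the Beta prior is conjugate to the binomial likelihood, each new test phase updates the posterior by simple addition, which is what makes this a natural template for sequentially refining future testing.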


3: Test & Evaluation Methods for Emerging Technology and Domains

The drive to develop new technologies is relentless, expanding to more actors with lower barriers to entry, and moving at accelerating speed. New threats to our systems will continue to emerge, putting pressure on our operational T&E community to include these in realistic scenarios. Our warfighters also need emerging technology at the speed of relevance, which calls for faster and more broadly scoped T&E. Sessions with this theme will explore the Test & Evaluation methods and challenges for emerging technologies and domains, including cybersecurity of software-enabled systems, hypersonics, directed energy weapons, electronic warfare, autonomous weapon systems, artificial intelligence, and additive manufacturing.

2021


4: Modeling Human-System Interaction

Mission outcomes are heavily influenced by how effectively operators interact with systems under operational conditions, as reflected in both their objective performance and their perceived performance. To assess these systems effectively, we need to observe team dynamics in order to predict how changing operating conditions will affect those dynamics, and therefore productivity. Sessions with this theme will discuss methods used to collect and model the quality of human-system integration and its impact on operational performance, survey development and test design, scale validation, and network analysis. A short scale-validation sketch follows this entry.

2021
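
As one concrete example of scale validation, the sketch below computes Cronbach’s alpha, a common internal-consistency measure for multi-item survey scales. The survey responses are fabricated, and the implementation is the standard textbook formula rather than any particular team’s tool.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Fabricated 5-point Likert responses: 8 operators, 4 usability items.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # ~0.7+ is often deemed acceptable
```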


5: Special Topics

Special topics sessions will include topics such as sustainment analyses, modeling public health data, and Earth science problems.

2021


1: Progress and Challenges in Program Evaluation

The Department of Defense (DoD) and NASA develop and acquire some of the world’s most sophisticated technological systems. In cooperation with these organizations, we face challenges in ensuring that these systems undergo testing and evaluation (T&E) that is both adequate and efficient prior to their use in the field. Sessions with this theme will feature case study examples highlighting advancements in T&E methodologies that we have made as a community. Topics covered may include modeling and simulation validation, uncertainty quantification, experimental design usage, and Bayesian analysis, among others. Sessions will also discuss the current challenges and areas in the field that require more research.

2020


2: Machine Learning and Artificial Intelligence

With the future of automation trending toward the use of machine learning (ML) and artificial intelligence (AI), we aim to provide resources, set precedent, and inspire innovation in these fields. Sessions with this theme will discuss how we can quantify confidence that ML and/or AI algorithms function as intended. To do this, we explore whether the functions are free of vulnerabilities that are either intentionally or unintentionally integrated as part of the data or algorithm. Topics covered may include testing and evaluation (T&E) metrics and methods for AI algorithms, T&E metrics and methods for cyber-physical systems with AI algorithms, threat portrayal for ML/AI algorithms, and robust ML/AI algorithm design. A small robustness-probe sketch follows this entry.

2020
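
One narrow, illustrative slice of “T&E metrics and methods for AI algorithms”: measure how a classifier’s accuracy degrades as Gaussian noise is added to its inputs, a crude robustness probe. The tiny nearest-centroid classifier and the data are both fabricated so the example stays self-contained; it is not a substitute for a real adversarial evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated 2-class data: two Gaussian blobs in 2-D.
X0 = rng.normal([-2, -2], 1.0, size=(200, 2))
X1 = rng.normal([2, 2], 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Toy model: classify by nearest class centroid.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Robustness probe: accuracy as input perturbation grows.
for sigma in (0.0, 0.5, 1.0, 2.0, 4.0):
    noisy = X + rng.normal(0, sigma, X.shape)
    acc = (predict(noisy) == y).mean()
    print(f"noise sigma={sigma:.1f} -> accuracy={acc:.3f}")
```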


3: Cybersecurity and Software Test & Evaluation

The number of known cyber-attacks has been steadily increasing over the past ten years, exposing millions of data records. With the help of leading minds in the cyber field, we work not only to discover new ways of identifying these attacks, but also ways to counter future attacks. Sessions with this theme will discuss metrics and test methodologies for testing cyber-physical systems with an emphasis on characterizing cyber resiliency. Topics covered may include identifying cybersecurity threats (IP and non-IP) across critical operational missions, developing and testing secure architectures, identifying critical system components that enable attack vectors, developing and testing countermeasures to cyber-attacks, and testing methods for cyber-physical systems that include machine learning algorithms.

2020


4: Modeling the Human-System Interaction

Mission outcomes are heavily influenced by how effectively operators interact with systems under operational conditions, as reflected in both their objective performance and their perceived performance. To assess these systems effectively, we analytically observe team dynamics to predict how changing operating conditions will affect those dynamics, and therefore productivity. Sessions with this theme will discuss methods used to collect and model the quality of human-system interaction and its impact on operational performance. Topics covered will include survey development/test design, statistical methods for surveys, scale validation, and network analysis.

2020


5: Science Applications of Test and Evaluation

Scientists use a variety of tools to perform analyses within their areas of expertise. Our goal is to demonstrate how to improve analyses they already perform and to introduce new tools that increase analysis efficiency. Sessions with this theme will cover simulation, prediction, uncertainty quantification, and inference for physical and physical-statistical modeling of geophysical processes. Topics covered may include Earth science problems (e.g., ice sheet evolution models, atmospheric and ocean processes, the carbon cycle, land surface processes, natural hazards), astronomy and cosmology (e.g., exoplanet detection, galactic formation, the cosmic microwave background), and planetary science (e.g., planetary atmospheres and formation). A brief uncertainty-propagation sketch follows this entry.

2020
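
As a generic illustration of uncertainty quantification for a physical model, the sketch below propagates uncertain inputs through a simple projectile-range formula by Monte Carlo sampling and summarizes the output distribution. The physics and the input distributions are deliberately simple stand-ins for the far richer geophysical and astrophysical models these sessions address.

```python
import numpy as np

rng = np.random.default_rng(11)
g = 9.81  # m/s^2

def projectile_range(v0, theta_deg):
    """Idealized range of a projectile on flat ground (no drag)."""
    theta = np.radians(theta_deg)
    return v0**2 * np.sin(2 * theta) / g

# Uncertain inputs: launch speed and angle with measurement error.
n = 100_000
v0 = rng.normal(150.0, 5.0, n)     # m/s
theta = rng.normal(45.0, 2.0, n)   # degrees

ranges = projectile_range(v0, theta)
lo, hi = np.percentile(ranges, [2.5, 97.5])
print(f"mean range = {ranges.mean():.0f} m")
print(f"95% interval = [{lo:.0f}, {hi:.0f}] m")
```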


1: Verification, Validation, and Uncertainty Quantification

These sessions will focus on model verification and validation, including case study examples, as well as the use of physics-based and computational modeling and simulation to benefit system evaluation.

2019


2: Data Integration and Sequential Testing

These sessions will focus on integrating, to the extent possible, all available information, including previous test outcomes, to sequentially update and refine future testing.

2019


3: Autonomy, Artificial Intelligence, Machine Learning

These sessions will focus on introducing new tools for evaluating how operators interact with autonomous systems and on demonstrating how members of the defense and aerospace communities have applied statistical approaches to evaluate autonomous system performance.

2019


4: Data Curation and Engineering

These sessions will focus on best practices for storing, curating, cleaning, and making data ready for analysis.

2019


5: Improving System Cybersecurity

These sessions will focus on methods for evaluating cyber survivability, specifically cyber-attack resiliency for weapon systems.

2019


6: Human-System Integration

These sessions demonstrate the importance of human-centered design in defense and aerospace systems and introduce the range of test design and statistical methods available for evaluating human-system integration.

2019


1: Statistical Engineering for Defense and Aerospace

2018


2: Statistical Engineering for Defense and Aerospace

2018


3: Humans are Part of the System: Implications for Test Planning and Analysis

2018


4: Rigorous Use of Modeling and Simulation

2018


5: Statistical Engineering for Autonomous Systems

2018


6: Data Science

2018


1: Improving Computational Modeling: CFD Vision 2030

This session will focus on the role of statistics in improving existing computational modeling capabilities and developing new ones.

2017


2: Experimental Design Methods for System Design and Characterization

This session will focus on the value of a structured experimental design process in both the design and evaluation of systems.

2017


3: Statistical Engineering for Defense and Aerospace

This session will include case studies that illustrate how straightforward statistical approaches, linked together in the right way and targeted toward answering the right question, can be useful in addressing the complex data and design challenges in defense and aerospace.

2017


4: Rigorous Use of Modeling and Simulation

This session will focus on the use of physics and computational based modeling and simulation to benefit system evaluation and ground/flight/live testing.

2017


5: Improving System Reliability and Reliability Test Planning

This session will focus on advanced techniques for testing long-term system reliability.

2017


6: Humans are Part of the System: Implications for Test Planning and Analysis

This session will focus on the design and test methods for assessing the coupled interaction of the human and the system.

2017