Session Title | Speaker | Type | Recording | Materials | Year
---|---|---|---|---|---
Breakout: Reliability Growth Modeling (Abstract)
Several optimization models are described for allocating resources to different testing activities in a system’s reliability growth program. These models assume availability of an underlying reliability growth model for the system, and they capture the tradeoffs associated with focusing testing resources at various levels (e.g., system, subsystem, component) and with dividing resources within a given level. To demonstrate the insights generated by solving the models, we apply them to an example series-parallel system in which reliability growth is assumed to follow the Crow/AMSAA reliability growth model. We then demonstrate how the optimization models can be extended to incorporate uncertainty in the Crow/AMSAA parameters.
Kelly Sullivan, University of Arkansas
Breakout | Materials | 2017
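For reference, the Crow/AMSAA model assumed in this abstract is the power-law NHPP with intensity λ(t) = λβt^(β−1) and E[N(t)] = λt^β. The sketch below is a minimal illustration of fitting it by maximum likelihood to time-truncated test data; it is not the authors' optimization code, and the failure times are invented.

```python
import numpy as np

def fit_crow_amsaa(failure_times, T):
    """MLE for the Crow/AMSAA (power-law NHPP) model, time-truncated at T.

    Intensity: lam(t) = lam * beta * t**(beta - 1); E[N(t)] = lam * t**beta.
    """
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))   # shape (growth) parameter
    lam = n / T**beta                  # scale parameter
    return lam, beta

def instantaneous_mtbf(lam, beta, t):
    """Instantaneous MTBF at test time t (reciprocal of the intensity)."""
    return 1.0 / (lam * beta * t**(beta - 1))

# Invented cumulative failure times (hours) from a 1000-hour growth test.
times = [42, 110, 203, 350, 560, 820]
lam, beta = fit_crow_amsaa(times, T=1000)
print(f"lambda = {lam:.4f}, beta = {beta:.3f}")  # beta < 1 indicates growth
print(f"MTBF at 1000 h: {instantaneous_mtbf(lam, beta, 1000):.1f} h")
```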
ASA SDNS Student Poster Awards
Student Winners to be Announced | 2023
Webinar: The Role of Uncertainty Quantification in Machine Learning (Abstract)
Uncertainty is an inherent, yet often under-appreciated, component of machine learning and statistical modeling. Data-driven modeling often begins with noisy data from error-prone sensors collected under conditions for which no ground-truth can be ascertained. Analysis then continues with modeling techniques that rely on a myriad of design decisions and tunable parameters. The resulting models often provide demonstrably good performance, yet they illustrate just one of many plausible representations of the data – each of which may make somewhat different predictions on new data. This talk provides an overview of recent, application-driven research at Sandia Labs that considers methods for (1) estimating the uncertainty in the predictions made by machine learning and statistical models, and (2) using the uncertainty information to improve both the model and downstream decision making. We begin by clarifying the data-driven uncertainty estimation task and identifying sources of uncertainty in machine learning. We then present results from applications in both supervised and unsupervised settings. Finally, we conclude with a summary of lessons learned and critical directions for future work.
David Stracuzzi, Research Scientist, Sandia National Laboratories
Webinar | 2020
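One widely used way to expose the "many plausible representations of the data" the abstract mentions is a bootstrap ensemble, whose spread yields a prediction interval. A minimal sketch on invented data (not Sandia's methods or applications):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noisy data; y = sin(x) + noise stands in for error-prone sensor readings.
x = np.sort(rng.uniform(0, 6, 80))
y = np.sin(x) + rng.normal(0, 0.3, x.size)

# Bootstrap ensemble of simple polynomial fits: each resample yields one of
# many plausible models; their spread estimates predictive uncertainty.
grid = np.linspace(0, 6, 200)
preds = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)      # resample with replacement
    coef = np.polyfit(x[idx], y[idx], deg=5)
    preds.append(np.polyval(coef, grid))
preds = np.array(preds)

mean = preds.mean(axis=0)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)  # pointwise 95% interval
print(f"average interval width: {np.mean(hi - lo):.3f}")
```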
Contributed: Test Planning for Observational Studies using Poisson Process Modeling (Abstract)
Operational Test (OT) is occasionally conducted after a system is already fielded. Unlike a traditional test based on Design of Experiments (DOE) principles, it is often not possible to vary the levels of the factors of interest; instead, the test is observational in nature. Test planning for observational studies involves choosing where, when, and how long to evaluate a system in order to observe the possible combinations of factor levels that define the battlespace. This presentation discusses a test-planning method that uses Poisson process modeling to estimate the length of time required to observe factor-level combinations in the operational environment.
Brian Stone, AFOTEC
Contributed | Materials | 2018
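Under the abstract's Poisson-process framing, if each factor-level combination i arrives independently at rate λ_i, then P(all combinations observed by time T) = Π_i (1 − e^(−λ_i T)). A minimal sketch with hypothetical rates (not the presenter's actual model):

```python
import numpy as np

# Hypothetical encounter rates (per test day) for each factor-level
# combination in the battlespace; rare combinations drive the test length.
rates = np.array([0.9, 0.6, 0.4, 0.25, 0.1, 0.05])

def prob_all_observed(T, rates):
    """P(every combination observed at least once by time T) under
    independent Poisson arrival processes."""
    return np.prod(1.0 - np.exp(-rates * T))

# Smallest T giving 80% confidence of covering the battlespace.
for T in range(1, 365):
    if prob_all_observed(T, rates) >= 0.80:
        print(f"~{T} test days for 80% coverage confidence")
        break
```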
Panel: Featured Panel: AI Assurance
John Stogoski, Senior Systems Engineer, Software Engineering Institute, Carnegie Mellon University (bio)
John Stogoski has been at Carnegie Mellon University’s Software Engineering Institute for 10 years, including roles in the CERT and AI Divisions. He is currently a senior systems engineer working with DoD sponsors to research how artificial intelligence can be applied to increase capabilities and to build the AI engineering discipline. In his previous role, he oversaw a prototyping lab focused on evaluating emerging technologies and design patterns for addressing cybersecurity operations at scale. John spent a significant portion of his career at a major telecommunications company, where he served in director roles responsible for the security operations center and then established a homeland security office after the 9/11 attack. He worked with government and industry counterparts to advance policy and enhance coordinated, operational capabilities to lessen the impacts of future attacks or natural disasters. Applying lessons from the maturing of the security field, along with the unique aspects of artificial intelligence, can help us enhance the system development lifecycle and realize opportunities that increase our strategic advantage.
Panel | Session Recording | 2023
Short Course: Multivariate Data Analysis (Abstract)
In this one-day workshop, we will explore five techniques that are commonly used to model human behavior: principal component analysis, factor analysis, cluster analysis, mixture modeling, and multidimensional scaling. Brief discussions of the theory of each method will be provided, along with examples showing how the techniques work and how the results are interpreted in practice. Accompanying R code will be provided so attendees are able to implement these methods on their own.
Doug Steinley, University of Missouri
Short Course | Materials | 2019
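The course distributes R code; as a flavor of the first listed technique, here is a comparable Python sketch of principal component analysis on synthetic "behavioral" data (an illustration, not course material):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Toy behavioral data: 100 respondents, 6 correlated survey scales driven
# by 2 latent traits.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + rng.normal(0, 0.5, (100, 6))

# Standardize, then extract components; the explained-variance ratio guides
# how many components to retain (scree criterion).
Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
print(np.round(pca.explained_variance_ratio_, 3))
scores = pca.transform(Xs)[:, :2]   # respondent scores on the first two PCs
```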
Webinar: KC-46A Adaptive Relevant Testing Strategies to Enable Incremental Evaluation (Abstract)
The DoD’s challenge to provide capability at the “Speed of Relevance” has generated many new strategies to adapt to rapid development and acquisition. As a result, Operational Test Agencies (OTA) have had to adjust their test processes to accommodate rapid but incremental delivery of capability to the warfighter. The Air Force Operational Test and Evaluation Center (AFOTEC) developed the Adaptive Relevant Testing (ART) concept to answer the challenge. In this session, AFOTEC test analysts will brief examples and lessons learned from implementing the ART principles on the KC-46A acquisition program to identify problems early and promote the delivery of individual capabilities as they are available to test. The AFOTEC goal is to accomplish these incremental tests while maintaining a rigorous statistical evaluation in a relevant and timely manner. This discussion will explain in detail how the KC-46A Initial Operational Test and Evaluation (IOT&E) was accomplished in a unique way that allowed the test team to discover, report on, and correct major system deficiencies much earlier than traditional methods.
J. Quinn Stank, Lead KC-46 Analyst, AFOTEC (bio)
First Lieutenant J. Quinn Stank is the Lead Analyst for the Air Force Operational Test and Evaluation Center Detachment 5 at Everett, Washington. The lieutenant serves as the advisor to the Operational Test and Evaluation team for the KC-46A. Lieutenant Stank, originally from Knoxville, Tennessee, received his commission as a second lieutenant upon graduation from the United States Air Force Academy in 2016.
Webinar | Session Recording | 2020
Tutorial: Introduction to Survey Design (Abstract)
Surveys are a common tool for assessing user experiences with systems in various stages of development. This mini-tutorial introduces the social and cognitive processes involved in survey measurement and addresses best practices in survey design. Clarity of question wording, appropriate scale use, and methods for reducing survey fatigue are emphasized. Attendees will learn practical tips to maximize the information gained from user surveys and should bring paper and pencils to practice writing and evaluating questions.
Jonathan Snavely, IDA
Tutorial | Materials | 2016
Short Course: Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics.
Ralph Smith, North Carolina State University
Short Course | Materials | 2019
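A minimal sketch of the forward-propagation step described above: draw inputs from their distributions, push them through the model, and summarize the response. The model here is a cheap stand-in; real simulation codes that take hours to days per run would be replaced by a surrogate first.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "simulation code": a cheap function of two uncertain inputs.
def model(k, q):
    return q * np.exp(-k) + k**2

# Input distributions (e.g., posteriors from a Bayesian calibration).
k = rng.normal(1.0, 0.15, 10_000)
q = rng.uniform(0.8, 1.2, 10_000)

y = model(k, q)                        # Monte Carlo forward propagation
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"95% prediction interval: [{lo:.3f}, {hi:.3f}]")

# Crude sensitivity screen: correlation of each input with the response.
print("corr(k, y):", np.corrcoef(k, y)[0, 1].round(3))
print("corr(q, y):", np.corrcoef(q, y)[0, 1].round(3))
```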
Short Course: Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics.
Ralph Smith, North Carolina State University
Short Course | Materials | 2018
Poster Presentation: The Calculus of Mixed Meal Tolerance Test Trajectories (Abstract)
BACKGROUND: Post-prandial glucose response resulting from a mixed meal tolerance test is evaluated from trajectory data of measured glucose, insulin, C-peptide, GLP-1, and other measurements of insulin sensitivity and β-cell function. To compare responses between populations or different compositions of mixed meals, the trajectories are collapsed into the area under the curve (AUC) or incremental area under the curve (iAUC) for statistical analysis. Both AUC and iAUC are coarse distillations of the post-prandial curves, and important properties of the curve structure are lost.
METHODS: Visual Basic for Applications (VBA) code was written to automatically extract seven key calculus-based curve-shape properties of post-prandial trajectories (glucose, insulin, C-peptide, GLP-1) beyond AUC. Through two-sample t-tests, the calculus-based markers were compared between outcomes (reactive hypoglycemia vs. healthy) and against demographic information.
RESULTS: Statistically significant differences (p < .01) in multiple curve properties, in addition to AUC, were found between health outcomes for each molecule studied. A model was created that predicts reactive hypoglycemia based on the individual curve properties most associated with outcomes.
CONCLUSIONS: Response curve properties carry predictive power that is not present using AUC alone. In future studies, the calculus-based response-curve properties will be used to predict diabetes and other health outcomes. In this sense, response-curve properties can predict an individual's susceptibility to illness prior to its onset using mixed meal tolerance test results alone.
Skyler Chauff, Cadet, United States Military Academy (bio)
Skyler Chauff is a third-year student at the United States Military Academy at West Point. He is studying Operations Research and hopes to further pursue a career in data science in the Army. His hobbies include scuba diving, traveling, and tutoring. Skyler is the head of the West Point tutoring program and helps lead the Army Smart nonprofit in providing free tutoring services to enlisted soldiers pursuing higher-level education. Skyler specializes in bioinformatics given his pre-medical background interwoven with his passion for data science.
Poster Presentation | 2023
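The study's extraction code was written in VBA; purely as an illustration, the sketch below computes AUC, iAUC, and a few simple curve-shape properties in Python on an invented glucose trajectory:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoid-rule area under y(x)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Invented post-prandial glucose trajectory: minutes after the meal, mg/dL.
t = np.array([0, 15, 30, 45, 60, 90, 120], dtype=float)
g = np.array([88, 120, 145, 132, 110, 95, 84], dtype=float)

auc = trapezoid(g, t)                              # total area under curve
iauc = trapezoid(np.clip(g - g[0], 0.0, None), t)  # area above baseline only

# A few of the simpler curve-shape properties beyond AUC.
peak, t_peak = g.max(), t[g.argmax()]
max_rise = np.max(np.diff(g) / np.diff(t))         # steepest upward slope

print(f"AUC={auc:.0f}, iAUC={iauc:.0f}, peak={peak:.0f} mg/dL at {t_peak:.0f} min, "
      f"max slope={max_rise:.2f} mg/dL/min")
```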
Breakout: Probabilistic Data Synthesis to Provide a Defensible Risk Assessment for Army Munition (Abstract)
Military-grade energetics are, by design, required to operate under extreme conditions. As such, warheads in a munition must demonstrate a high level of structural integrity in order to ensure safe and reliable operation by the Warfighter. In this example, which involved an artillery munition, a systematic analytics-driven approach was executed that synthesized physical test results with probabilistic analysis, non-destructive evaluation, modeling and simulation, and comprehensive risk analysis tools in order to determine the probability of a catastrophic event. Once the severity, probability of detection, and occurrence were synthesized, a model was built to determine the risk of a catastrophic event during firing, accounting for defect growth occurring as a result of rough handling. This comprehensive analysis provided a defensible, credible, and dynamic snapshot of risk while allowing for a transparent assessment of each input's contribution to risk through sensitivity analyses. This paper illustrates the intersection of product safety, reliability, system-safety policy, and analytics, and highlights the impact of a holistic multidisciplinary approach. The benefits of this rigorous assessment included quantifying risk to the user, supporting effective decision-making, improving the resultant safety and reliability of the munition, and supporting triage and prioritization of future Non-Destructive Evaluation (NDE) screening efforts by identifying at-risk subpopulations.
Kevin Singer
Breakout | 2019
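The abstract does not give the model's actual form; as a hypothetical illustration of synthesizing occurrence, probability of detection, and growth-to-failure into a risk estimate, a Monte Carlo sketch with invented distributions:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000   # Monte Carlo samples over the uncertain parameters

# Illustrative (not program) values: defect occurrence rate, NDE escape
# probability (1 - probability of detection), and conditional probability
# that an escaped defect grows to critical size under rough handling.
p_defect = rng.beta(2, 998, N)     # uncertainty about occurrence rate
p_escape = 0.05                    # assumed fixed NDE escape probability
p_critical = rng.beta(1, 199, N)   # growth-to-failure given an escape

p_event = p_defect * p_escape * p_critical
print(f"mean P(catastrophic event) = {p_event.mean():.2e}")
print(f"95th percentile            = {np.percentile(p_event, 95):.2e}")
```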
Keynote: Retooling Design and Development
Chris Singer, NASA Deputy Chief Engineer, NASA (bio)
Christopher (Chris) E. Singer is the NASA Deputy Chief Engineer, responsible for integrating engineering across the Agency’s 10 field centers. Prior to this appointment in April 2016, he served as the Engineering Director at NASA's Marshall Space Flight Center in Huntsville, Alabama. Appointed in 2011, Mr. Singer led an organization of 1,400 civil service and 1,200 support contractor employees responsible for the design, testing, evaluation, and operation of hardware and software associated with space transportation, spacecraft systems, science instruments, and payloads under development at the Marshall Center. The Engineering Directorate also manages NASA's Payload Operations Center at Marshall, which is the command post for scientific research activities on board the International Space Station. Mr. Singer began his NASA career in 1983 as a rocket engine specialist. In 1992, he served a one-year assignment at NASA Headquarters in Washington, DC, as senior manager for the space shuttle main engine and external tank in the Space Shuttle Support Office. In 1994, Mr. Singer supervised the development and implementation of safety improvements and upgrades to shuttle propulsion components. In 2000, he was appointed chief engineer in the Space Transportation Directorate, then was selected as deputy director of Marshall's Engineering Directorate from 2004 to 2011. Mr. Singer is an AIAA Associate Fellow. In 2006, he received the Presidential Rank Award for Meritorious Executives — the highest honor for career federal employees. He was awarded the NASA Outstanding Leadership Medal in 2001 and 2008 for his leadership. In 1989, he received the prestigious Silver Snoopy Award from the Astronaut Corps for his contributions to the success of human spaceflight missions. A native of Nashville, Tennessee, Mr. Singer earned a bachelor's degree in mechanical engineering in 1983 from Christian Brothers University in Memphis, Tennessee. Chris enjoys woodworking, fishing, and hang gliding. Chris is married to the former Jody Adams of Hartselle, Alabama. They have three children and live in Huntsville, Alabama.
Keynote | Materials | 2017
Breakout: Improving Sensitivity Experiments (Abstract)
This presentation will provide a brief overview of sensitivity testing and emphasize applications to several products and systems of importance to defense as well as private industry, including insensitive energetics, ballistic testing of protective armor, testing of munition fuzes and Microelectromechanical Systems (MEMS) components, safety testing of high-pressure test ammunition, and packaging for high-value materials.
Kevin Singer, US Army
Breakout | Materials | 2017
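A classic procedure in the family surveyed here is the up-and-down (Bruceton) method. A toy simulation under an assumed normal latent-threshold model (illustrative only, with a deliberately crude estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent-threshold model: an item responds if the stimulus exceeds its
# normally distributed threshold. True parameters are unknown to the tester.
mu_true, sd_true = 50.0, 5.0

def respond(level):
    return rng.normal(mu_true, sd_true) < level

# Classic up-and-down rule: step down after a response, step up after none.
level, step = 55.0, 2.5
levels = []
for _ in range(40):
    y = respond(level)
    levels.append(level)
    level += -step if y else step

# Crude estimate of the 50% response level: average of the test levels
# (the Dixon-Mood estimator refines this; omitted for brevity).
print(f"estimated 50% level ~ {np.mean(levels):.1f} (true {mu_true})")
```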
Tutorial: Power Analysis Concepts
Jim Simpson, JK Analytics
Tutorial | Materials | 2016
Short Course: Split-Plot and Restricted Randomization Designs (Abstract)
Have you ever built what you considered to be the ideal designed experiment, then passed it along to be run, only to learn later that your recommended run order was ignored? Or perhaps you were part of a test execution team and learned too late that one or more of your experimental factors were difficult or time-consuming to change. We all recognize that the best possible guard against lurking background noise is complete randomization, but often we find that a randomized run order is extremely impractical or even infeasible. Split-plot design and analysis methods have been around for over 80 years, but only in the last several years have the methods fully matured and been made available in commercial software. This class will introduce you to the world of practical split-plot design and analysis methods. We’ll provide you the skills to effectively build designs appropriate to your specific needs and demonstrate proper analysis techniques using general linear models available in standard statistical software. Topics include split-plots for 2-level and mixed-level factor sets, for first- and second-order models, as well as split-split-plot designs.
Jim Simpson, JK Analytics
Short Course | Materials | 2017
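A minimal sketch of the analysis idea: whole plots enter a general linear mixed model as a random effect, so the hard-to-change factor A is tested against whole-plot error while B uses the smaller subplot error. The design and effect sizes below are invented (statsmodels assumed available):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Split-plot structure: hard-to-change factor A is set once per whole plot;
# easy-to-change factor B is randomized within each whole plot.
rows = []
for wp in range(8):                  # 8 whole plots
    a = (-1) ** wp                   # A alternates at -1/+1 across plots
    wp_err = rng.normal(0, 1.0)      # whole-plot error (cost of restriction)
    for b in (-1, 1):                # B varied within each plot
        y = 10 + 2*a + 1.5*b + 0.5*a*b + wp_err + rng.normal(0, 0.5)
        rows.append({"wp": wp, "A": a, "B": b, "y": y})
df = pd.DataFrame(rows)

# Mixed model: whole plots as a random effect gives the two error strata
# a completely randomized analysis would wrongly pool together.
fit = smf.mixedlm("y ~ A * B", df, groups=df["wp"]).fit()
print(fit.summary())
```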
Breakout: Automated Software Testing Best Practices and Framework: A STAT COE Project (Abstract)
The process for testing military systems that are largely software-intensive involves techniques and procedures often different from those for hardware-based systems. Much of the testing can be performed in laboratories at many of the acquisition stages, up to operational testing. Testing software systems is no different from testing hardware-based systems in that testing earlier and more intensively benefits the acquisition program in the long run. Automated testing of software systems enables more frequent and more extensive testing, allowing for earlier discovery of errors and faults in the code. Automated testing is beneficial for unit, integration, functional, and performance testing, but there are costs associated with automation tool license fees, specialized manpower, and the time to prepare and maintain the automation scripts. This presentation discusses some of the features unique to automated software testing and offers a framework organizations can implement to make the business case for, to organize for, and to execute and benefit from automating the right aspects of their testing needs. Automation has many benefits in saving time and money, but it is most valuable in freeing test resources to perform higher-value tasks.
Jim Simpson, JK Analytics
Breakout | Materials | 2017
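As a toy illustration of the unit-level automation discussed above, a self-contained test module that a CI job could run on every build (the conversion function is hypothetical, not from the STAT COE framework):

```python
import unittest

def ft_to_m(feet):
    """Unit conversion used by a hypothetical mission-planning module."""
    return feet * 0.3048

class TestConversions(unittest.TestCase):
    # Each automated case runs identically on every build, enabling the
    # frequent, early regression testing the framework calls for.
    def test_nominal(self):
        self.assertAlmostEqual(ft_to_m(1000), 304.8)

    def test_zero_and_negative(self):
        self.assertEqual(ft_to_m(0), 0)
        self.assertAlmostEqual(ft_to_m(-10), -3.048)

if __name__ == "__main__":
    unittest.main()   # a CI job would invoke this on every commit
```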
Breakout: DOE and Test Automation for System of Systems TE (Abstract)
Rigorous, efficient, and effective test science techniques are individually taking hold in many software-centric DoD acquisition programs, in both developmental and operational test regimes. These techniques include agile software development, cybersecurity test and evaluation (T&E), design and analysis of experiments, and automated software testing. Many software-centric programs must also be tested together with other systems to demonstrate they can be successfully integrated into a more complex system of systems. This presentation focuses on the two test science disciplines of designed experiments (DOE) and automated software testing (AST) and describes how they can be used effectively and leverage one another in planning for and executing a system-of-systems test strategy. We use the Navy’s Distributed Common Ground System as an example.
Jim Simpson, JK Analytics
Breakout | Materials | 2018
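A minimal sketch of how DOE and AST can mesh: enumerate a factorial design and hand each run to an automated harness. The factors and the harness stub are placeholders, not DCGS specifics:

```python
from itertools import product

# Hypothetical system-of-systems test factors and levels.
factors = {
    "data_link":  ["LINK16", "SATCOM"],
    "msg_load":   ["low", "high"],
    "node_count": [2, 8],
}

def run_automated_case(config):
    """Stand-in for an automated test-harness call; a real harness would
    drive the system under test and return a measured response."""
    return sum(ord(c) for c in str(config)) % 100  # deterministic placeholder

# Full 2^3 factorial: every combination becomes one scripted, repeatable run.
for combo in product(*factors.values()):
    config = dict(zip(factors.keys(), combo))
    print(config, "->", run_automated_case(config))
```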
Webinar: Sequential Testing and Simulation Validation for Autonomous Systems (Abstract)
Autonomous systems are expected to play a significant role in the next generation of DoD acquisition programs. New methods need to be developed and vetted, particularly for two groups we know well that will be facing the complexities of autonomy: a) test and evaluation, and b) modeling and simulation. For test and evaluation, statistical methods that are routinely and successfully applied throughout the DoD need to be adapted to be most effective for autonomy, and some of our practices need to be stressed. One such practice is sequential testing and analysis, which we illustrate to allow testers to learn and improve incrementally. The other group needing to rethink its practices for autonomy is modeling and simulation. We propose some statistical methods appropriate for modeling and simulation validation for autonomous systems. We look forward to your comments and suggestions.
Jim Simpson, Principal, JK Analytics
Webinar | Session Recording | 2020
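One standard sequential method of the kind advocated here is Wald's sequential probability ratio test (SPRT). A minimal sketch with assumed hypotheses and simulated trials (not the webinar's actual example):

```python
import numpy as np

rng = np.random.default_rng(6)

# SPRT on task-success probability: H0 p = 0.80 vs H1 p = 0.95,
# alpha = beta = 0.05, using Wald's approximate boundaries.
p0, p1, alpha, beta = 0.80, 0.95, 0.05, 0.05
A = np.log((1 - beta) / alpha)   # accept H1 when the LLR crosses A
B = np.log(beta / (1 - alpha))   # accept H0 when the LLR crosses B

llr, n = 0.0, 0
while B < llr < A:
    x = rng.random() < 0.93      # one simulated trial of the autonomous system
    n += 1
    llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))

print(f"decision after {n} trials:",
      "accept H1 (meets 0.95)" if llr >= A else "accept H0 (only 0.80)")
```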
Tutorial: Statistical Approaches to V&V and Adaptive Sampling in M&S - Part 2
Jim Simpson, Principal, JK Analytics (bio)
Jim Simpson is the Principal of JK Analytics, where he currently coaches and trains across various industries and organizations. He has blended practical application and industrial statistics leadership with academic experience focused on researching new methods, teaching excellence, and the development and delivery of statistics courseware for graduate and professional education. Previously, he led the Air Force’s largest test wing as Chief Operations Analyst. He has served as full-time faculty at the Air Force Academy and Florida State University, and is now an Adjunct Professor at the Air Force Institute of Technology (AFIT) and the University of Florida. He received his PhD in Industrial Engineering from Arizona State University.
Tutorial | 2021
Breakout: Sequential Experimentation for a Binary Response - The Break Separation Method (Abstract)
Binary response experiments are common in epidemiology and biostatistics, as well as in military applications. The Up-and-Down method, Langlie’s method, Neyer’s method, the K-in-a-Row method, and 3-Phase Optimal Design are methods used for sequential experimental design when there is a single continuous variable and a binary response. During this talk, we will discuss a new sequential experimental design approach called the Break Separation Method (BSM). BSM provides an algorithm for determining sequential experimental trials that will be used to find a median quantile and fit a logistic regression model using maximum likelihood estimation. BSM results in a small sample size and is designed to efficiently compute the median quantile.
Rachel Silvestrini, RIT
Breakout | Materials | 2017
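BSM itself is new, so its trial-selection rule is not reproduced here; the end product it feeds is familiar, though: a maximum-likelihood logistic fit and the median quantile. A minimal sketch on invented binary-response data:

```python
import numpy as np
from scipy.optimize import minimize

# Invented data from a sequential sensitivity test: stimulus level x,
# binary response y (1 = response observed).
x = np.array([2.0, 3.0, 4.0, 4.5, 5.0, 5.5, 6.0, 7.0, 8.0])
y = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1])

def neg_log_lik(theta):
    """Negative log-likelihood of the logistic model P(y=1) = expit(b0+b1*x)."""
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x
# Median quantile: the level where the response probability equals 0.5.
print(f"median quantile (50% response level): {-b0 / b1:.2f}")
```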
Short Course: Applied Bayesian Methods for Test Planning and Evaluation (Abstract)
Bayesian methods have been promoted as a promising way for test and evaluation analysts to leverage previous information across a continuum-of-testing approach to system evaluation. This short course will cover how to identify when Bayesian methods might be useful within a test and evaluation context, the components required to accomplish a Bayesian analysis, and how to interpret the results of that analysis. The course will apply these concepts to two hands-on examples (code and applications provided): one focusing on system reliability and one focusing on system effectiveness. Furthermore, individuals will gain an understanding of the sequential nature of a Bayesian approach to test and evaluation and its limitations, and will gain a broad understanding of questions to ask to ensure a Bayesian analysis is appropriately accomplished.
Victoria Sieck, Deputy Director, STAT COE/AFIT (bio)
Dr. Victoria R. C. Sieck is the Deputy Director of the Scientific Test & Analysis Techniques Center of Excellence (STAT COE), where she works with major acquisition programs within the Department of Defense (DoD) to apply rigor and efficiency to current and emerging test and evaluation methodologies through the application of the STAT process. Additionally, she is an Assistant Professor of Statistics at the Air Force Institute of Technology (AFIT), where her research interests include design of experiments and developing innovative Bayesian approaches to DoD testing. As an Operations Research Analyst in the US Air Force (USAF), her experiences in the USAF testing community include being a weapons and tactics analyst and an operational test analyst. Dr. Sieck has an M.S. in Statistics from Texas A&M University and a Ph.D. in Statistics from the University of New Mexico.
Short Course | 2023
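A minimal sketch of the course's reliability thread: conjugate Beta-Binomial updating, where prior test knowledge and new trial results combine into a posterior. The prior and trial counts below are invented:

```python
import numpy as np
from scipy import stats

# Hypothetical prior from earlier developmental testing: roughly 90% reliable.
prior_a, prior_b = 9.0, 1.0              # Beta(9, 1) prior on reliability

# New operational-test results: 28 successes in 30 trials.
s, n = 28, 30
post = stats.beta(prior_a + s, prior_b + (n - s))

print(f"posterior mean reliability: {post.mean():.3f}")
print(f"80% credible interval: {post.ppf([0.10, 0.90]).round(3)}")

# The sequential nature of Bayes: today's posterior is tomorrow's prior.
s2, n2 = 9, 10                           # follow-on test results
post2 = stats.beta(prior_a + s + s2, prior_b + (n - s) + (n2 - s2))
print(f"updated mean after follow-on test: {post2.mean():.3f}")
```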
Breakout: Cybersecurity Metrics and Quantification: Problems, Some Results, and Research Directions (Abstract)
Cybersecurity metrics and quantification is a fundamental but notoriously hard problem. It is one of the pillars underlying the emerging Science of Cybersecurity. In this talk, I will describe a number of cybersecurity metrics and quantification research problems that are encountered in evaluating the effectiveness of a range of cyber defense tools. I will review the research results we have obtained over the past years. I will also discuss future research directions, including those undertaken in my research group.
Shouhuai Xu, Professor, University of Colorado Colorado Springs (bio)
Shouhuai Xu is the Gallogly Chair Professor in the Department of Computer Science, University of Colorado Colorado Springs (UCCS). Prior to joining UCCS, he was with the Department of Computer Science, University of Texas at San Antonio. He pioneered a systematic approach, dubbed Cybersecurity Dynamics, to modeling and quantifying cybersecurity from a holistic perspective. This approach has three orthogonal research thrusts: metrics (for quantifying security, resilience, and trustworthiness/uncertainty, to which this talk belongs), cybersecurity data analytics, and cybersecurity first-principle modeling (for seeking cybersecurity laws). His research has won a number of awards, including the 2019 worldwide adversarial malware classification challenge organized by the MIT Lincoln Lab. His research has been funded by AFOSR, AFRL, ARL, ARO, DOE, NSF, and ONR. He co-initiated the International Conference on Science of Cyber Security (SciSec) and is serving as its Steering Committee Chair. He has served as Program Committee co-chair and member for numerous international conferences. He is or was an Associate Editor of IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE T-IFS), and IEEE Transactions on Network Science and Engineering (IEEE TNSE). More information about his research can be found at https://xu-lab.org.
Breakout | Materials | 2021
Presentation: Test and Evaluation of AI Cyber Defense Systems (Abstract)
Adoption of Artificial Intelligence and Machine Learning powered cybersecurity defenses (henceforth, AI defenses) has outpaced testing and evaluation (T&E) capabilities. Industrial and governmental organizations around the United States are employing AI defenses to protect their networks in ever-increasing numbers, with the commercial market for AI defenses currently estimated at $15 billion and expected to grow to $130 billion by 2030. This adoption of AI defenses is powered by a shortage of over 500,000 cybersecurity staff in the United States, by a need to expeditiously handle routine cybersecurity incidents with minimal human intervention and at machine speed, and by a need to protect against highly sophisticated attacks. It is paramount to establish, through empirical testing, trust in and understanding of the capabilities and risks associated with employing AI defenses. While some academic work exists on performing T&E of individual machine learning models trained using cybersecurity data, we are unaware of any principled method for assessing the capabilities of a given AI defense within an actual network environment. The ability of AI defenses to learn over time poses a significant T&E challenge, above and beyond those faced when considering traditional static cybersecurity defenses. For example, an AI defense may become more (or less) effective at defending against a given cyberattack as it learns over time. Additionally, a sophisticated adversary may attempt to evade the capabilities of an AI defense by obfuscating attacks to maneuver them into its blind spots, by poisoning the training data utilized by the AI defense, or both. Our work provides an initial methodology for performing T&E of on-premises, network-based AI defenses in an actual network environment, including the use of a network environment with generated user network behavior, automated cyberattack tools to test the capabilities of AI cyber defenses to detect attacks on that network, and tools for modifying attacks to include obfuscation or data poisoning. Discussion will also center on some of the difficulties of performing T&E on an entire system, instead of just an individual model.
Shing-hon Lau, Senior Cybersecurity Engineer, Software Engineering Institute, Carnegie Mellon University (bio)
Shing-hon Lau is a Senior Cybersecurity Engineer at the CERT Division of the Software Engineering Institute at Carnegie Mellon University, where he investigates the intersection between cybersecurity, artificial intelligence, and machine learning. His research interests include rigorous testing of artificial intelligence systems, building secure and trustworthy machine learning systems, and understanding the linkage between cybersecurity and adversarial machine learning threats. One research effort concerns the development of a methodology to evaluate the capabilities of AI-powered cybersecurity defensive tools. Prior to joining the CERT Division, Lau obtained his PhD in Machine Learning in 2018 from Carnegie Mellon. His doctoral work focused on the application of keystroke dynamics, or the study of keyboard typing rhythms, for authentication, insider-threat detection, and healthcare applications.
Presentation | Materials | 2023
Breakout: Open Architecture Tradeoffs (OAT): A simple, computational game engine for rapidly exploring hypotheses in Battle Management Command and Control (BMC2) (Abstract)
We created the Open Architecture Tradeoffs (OAT) tool, a simple, computational game engine for rapidly exploring hypotheses about mission effectiveness in Battle Management Command and Control (BMC2). Each run of an OAT game simulates a military mission in contested airspace. Game objects represent U.S., adversary, and allied assets, each of which moves through the simulated airspace. Each U.S. asset has a Command and Control (C2) package that controls its actions—currently, neural networks form the basis of each U.S. asset’s C2 package. The weights of the neural network are randomized at the beginning of each game and are updated over the course of the game as the U.S. asset learns which of its actions lead to rewards, e.g., intercepting an adversary. Weights are updated via a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) altered to accommodate a Reinforcement Learning paradigm. OAT allows a user to winnow down the trade space that should be considered when setting up more expensive and time-consuming campaign models. OAT could be used to weed out bad ideas for “fast failure”, thus avoiding waste of campaign modeling resources. Questions can be explored via OAT such as: Which combination of system capabilities is likely to be more or less effective in a particular military mission? For example, in an early analysis, OAT was used to test the hypothesis that increases in U.S. assets’ sensor range always lead to increases in mission effectiveness, quantified as the percent of adversaries intercepted. We ran over 2500 OAT games, each time varying the sensor range of U.S. assets and the density of adversary assets. Results show that increasing sensor range did lead to an increase in mission effectiveness—but only up to a certain point. Once the sensor range surpassed approximately 10-15% of the simulated airspace size, no further gains were made in the percent of adversaries intercepted. Thus, campaign modelers should hesitate to devote resources to exploring sensor range in isolation. More recent OAT analyses are exploring more complex hypotheses regarding the trade space between sensor range and communications range.
Shelley Cazares
Breakout | 2019
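Not the OAT engine (no neural-network C2 or Dec-POMDP learning here), but a toy geometric stand-in showing the shape of the sensor-range sweep described above; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def play_game(sensor_range, n_adversaries=20, field=100.0):
    """Toy stand-in for one OAT game: a single defender at the center
    intercepts any adversary that appears within its sensor range."""
    defender = np.array([field / 2, field / 2])
    spawns = rng.uniform(0, field, size=(n_adversaries, 2))
    dists = np.linalg.norm(spawns - defender, axis=1)
    return np.mean(dists < sensor_range)   # fraction intercepted

# Sweep sensor range as a fraction of the airspace, 100 games per setting;
# effectiveness should rise and then saturate, echoing the reported result.
for frac in (0.05, 0.10, 0.15, 0.25, 0.50):
    r = frac * 100.0
    eff = np.mean([play_game(r) for _ in range(100)])
    print(f"sensor range {frac:4.0%} of field -> {eff:5.1%} intercepted")
```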