Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Tutorial Statistical Approaches to V&V and Adaptive Sampling in M&S – Part 2 |
Jim Simpson Principal JK Analytics (bio)
Jim Simpson is the Principal of JK Analytics, where he currently coaches and trains across various industries and organizations. He has blended practical application and industrial statistics leadership with academic experience focused on researching new methods, teaching excellence, and the development and delivery of statistics courseware for graduate and professional education. Previously, he led the Air Force’s largest test wing as Chief Operations Analyst. He has served as full-time faculty at the Air Force Academy and Florida State University, and is now an Adjunct Professor at the Air Force Institute of Technology (AFIT) and the University of Florida. He received his PhD in Industrial Engineering from Arizona State University. |
Tutorial |
| 2021 |
|
Breakout Entropy-Based Adaptive Design for Contour Finding and Estimating Reliability (Abstract)
In reliability, methods used to estimate failure probability are often limited by the costs associated with model evaluations. Many of these methods, such as multi-fidelity importance sampling (MFIS), rely upon a cheap surrogate model, like a Gaussian process (GP), to quickly generate predictions. The quality of the GP fit, at least in the vicinity of the failure region(s), is instrumental in propping up such estimation strategies. We introduce an entropy-based GP adaptive design that, when paired with MFIS, provides more accurate failure probability estimates with higher confidence. We show that our greedy data acquisition scheme better identifies multiple failure regions compared to existing contour-finding schemes. We then extend the method to batch selection. Illustrative examples are provided on benchmark data as well as an application to the impact damage simulator of a NASA spacesuit design. An illustrative code sketch follows this entry. |
Austin Cole PhD Candidate Virginia Tech (bio)
Austin Cole is a statistics PhD candidate at Virginia Tech. He previously taught high school math and statistics courses, and holds a Bachelor’s in Mathematics and a Master’s in Secondary Education from the College of William and Mary. Austin has worked with dozens of researchers as a lead collaborator in Virginia Tech’s Statistical Applications and Innovations Group (SAIG). Under the supervision of Dr. Robert Gramacy, Austin has conducted research in the area of computer experiments with focuses on Bayesian optimization, sparse covariance matrices, and importance sampling. He is currently collaborating with researchers at NASA Langley to evaluate the safety of the next generation of spacesuits. |
Breakout |
| 2021 |
|
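The acquisition idea in this talk can be illustrated compactly. Below is a minimal sketch, not the authors’ exact criterion and without the MFIS pairing, of entropy-based contour finding with a Gaussian process: candidates are scored by the entropy of the predicted fail/no-fail classification at a threshold, so sampling concentrates where the failure boundary is least certain. The toy `simulator` and threshold `T` are hypothetical.

```python
# Hedged sketch: entropy-based acquisition for GP contour finding.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # Toy stand-in for an expensive model evaluation.
    return np.sin(3 * x) + 0.5 * x

T = 0.8                                      # hypothetical failure threshold
X = np.linspace(0.0, 3.0, 6).reshape(-1, 1)  # small initial design
y = simulator(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
cand = np.linspace(0.0, 3.0, 400).reshape(-1, 1)

for _ in range(10):                          # greedy, one point per iteration
    gp.fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    p = norm.cdf((T - mu) / np.maximum(sd, 1e-12))  # P(response below threshold)
    eps = 1e-12
    # Entropy of the predicted fail/no-fail classification: largest where the
    # GP is least sure which side of the contour a candidate sits on.
    H = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    x_new = cand[np.argmax(H)].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new).ravel())

print("design points cluster near the contour f(x) = T:")
print(np.round(np.sort(X.ravel()), 2))
```

A batch version of the same idea would take the top-k entropy candidates per iteration instead of a single argmax.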
Panel Army’s Open Experimentation Test Range for Internet of Battlefield Things: MSA-DPG (Abstract)
One key feature of future Multi-Domain Operations (MDO) is expected to be the ubiquity of devices providing information connected in an Internet of Battlefield Things (IoBT). To this end, the U.S. Army aims to advance the underlying science of pervasive and heterogeneous IoBT sensing, networking, and actuation. In this effort, an IoBT experimentation testbed is an integral part of capability development, evaluating and validating the scientific theories, algorithms, and technologies integrated with C2 systems under military scenarios. Originally conceived for this purpose, the Multi-Purpose Sensing Area Distributed Proving Ground (MSA-DPG) is an open-range test bed developed by the Army Research Laboratory (ARL). We discuss the vision and development of MSA-DPG and its fundamental role in research serving the communities of the military sciences. |
Jade Freeman Research Scientist U.S. Army DEVCOM Army Research Laboratory (bio)
Dr. Jade Freeman currently serves as the Associate Branch Chief and a Team Lead at the Battlefield Information Systems Branch. In this capacity, Dr. Freeman oversees information systems and engineering research projects and analyses. Prior to joining ARL, Dr. Freeman served as the Senior Statistician at the Office of Cybersecurity and Communications, Department of Homeland Security. Throughout her career, her work in operations and research has included cyber threat analyses, large survey design and analyses, experimental design, survival analysis, and missing data imputation methods. Dr. Freeman is also a PMP-certified project manager, experienced in leading and managing IT development projects. Dr. Freeman obtained a Ph.D. in Statistics from the George Washington University. |
Panel |
| 2021 |
|
Roundtable Identifying Challenges and Solutions to T&E of Non-IP Networks (Abstract)
Many systems within the Department of Defense (DoD) contain networks that use both Internet Protocol (IP) and non-IP forms of information exchange. While IP communication is widely understood among the cybersecurity community, expertise and available test tools for non-IP protocols such as Controller Area Network (CAN), MIL-STD-1553, and SCADA are not as commonplace. Over the past decade, the DoD has repeatedly identified gaps in data collection and analysis when assessing the cybersecurity of non-IP buses. This roundtable is intended to open a discussion among testers and evaluators on the existing measurement and analysis tools for non-IP buses used across the community and also propose solutions to recurring roadblocks experienced when performing operational testing on non-IP components. Specific topics of discussion will include: (1) What tools do you or your supporting teams use during cybersecurity events to attack, scan, and monitor non-IP communications? (2) What raw quantitative data do you collect that captures the adversarial activity and/or system response from cyber aggression to non-IP components? Please provide examples of test instrumentation and data collection methods. (3) What data analysis tools do you use to draw conclusions from measured data? (4) What types of non-IP buses, including components on those buses, have you personally been able to test? What components were you not able to test? Why were you not able to test them? Was it due to safety concerns, lack of permission, lack of available tools and expertise, or other? (5) Had you been given authority to test those components, do you think it would have improved the quality of test and comprehensiveness of the assessment? |
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Roundtable | 2021 |
||
Breakout Characterizing Human-Machine Teaming Metrics for Test and Evaluation (Abstract)
As advanced technologies and capabilities are enabling machines to engage in tasks that only humans have done previously, new challenges have emerged for the rigorous testing and evaluation (T&E) of human-machine teaming (HMT) concepts. We draw the distinction between an HMT and a human using a tool, and enumerate the new challenges: agents’ mental models are opaque, machine-to-human communications need to be evaluated, and self-tasking and autonomy need to be evaluated. We argue that a focus on mission outcomes cannot fully characterize team performance, due to the increased problem space evaluated, and that the T&E community needs to develop and refine new metrics for the agents of teams and for teammate interactions. Our IDA HMT framework outlines major categories for HMT evaluation, emphasizing team metrics and parallelizing agent metrics across humans and machines. Major categories are tied to the literature and proposed as a starting point for additional T&E metric specification for robust evaluation. |
Brian Vickers Research Staff Member Institute for Defense Analyses (bio)
Brian is a Research Staff Member at the Institute for Defense Analyses where he applies rigorous statistics and study design to evaluate, test, and report on various programs. Dr. Vickers holds a Ph.D. from the University of Michigan, Ann Arbor where he researched various factors that influence decision making, with a focus on how people allocate their money, time, and other resources. |
Breakout |
| 2021 |
|
Roundtable The Role of the Statistics Profession in the DoD’s Current AI Initiative (Abstract)
In 2019, the DoD unveiled comprehensive strategies related to Artificial Intelligence, Digital Modernization, and Enterprise Data Analytics. Recognizing that data science and analytics are fundamental to these strategies, in October 2020 the DoD issued a comprehensive Data Strategy for national security and defense. For over a hundred years, the statistical sciences have played a pivotal role in our national defense, from quality assurance and reliability analysis of munitions fielded in WWII, to operational analyses defining battlefield force structure and tactics, to helping optimize the engineering design of complex products, to rigorous testing and evaluation of Warfighter systems. The American Statistical Association (ASA) in 2015 recognized in its statement on The Role of Statistics in Data Science that “statistics is foundational to data science… and its use in this emerging field empowers researchers to extract knowledge and obtain better results from Big Data and other analytics projects.” It is clearly recognized that data as information is a key asset to the DoD. The challenge we face is how to transform existing talent to add value where it counts. |
Laura Freeman Research Associate Professor of Statistics and Director of the Intelligent Systems Lab Virginia Tech (bio)
Dr. Laura Freeman is a Research Associate Professor of Statistics and the Director of the Intelligent Systems Lab at the Virginia Tech Hume Center. Her research leverages experimental methods for conducting research that brings together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. She develops new methods for test and evaluation focusing on emerging system technology. She is also the Assistant Dean for Research in the National Capital Region; in that capacity she works to shape research directions and collaborations across the College of Science in the National Capital Region. Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She reviewed test strategies, plans, and reports for all systems under DOT&E oversight. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on design and analysis of experiments for reliability data. |
Roundtable | 2021 |
||
Breakout Statistical Engineering in Practice |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. After eight years in private industry, Dr. Parker joined Langley Research Center in 1997. He holds a B.S. in Mechanical Engineering from Old Dominion University, an M.S. in Applied Physics and Computer Science from Christopher Newport University, and an M.S. and Ph.D. in Statistics from Virginia Tech. He is a licensed Professional Engineer in the Commonwealth of Virginia. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, American Society for Quality, and American Statistical Association. He currently serves as Chair-Elect of the International Statistical Engineering Association. Dr. Parker is the past-Chair of the American Society for Quality’s Publication Management Board and Editor Emeritus of the journal Quality Engineering. |
Breakout | Session Recording |
Recording | 2021 |
Tutorial Introduction to Qualitative Methods – Part 2 |
Daniel Hellmann Research Staff Member Institute for Defense Analyses (bio)
Dr. Daniel Hellmann is a Research Staff Member in the Operational Evaluation Division at the Institute for Defense Analyses. He is also a prior-service U.S. Marine with multiple combat tours. Currently, Dr. Hellmann specializes in mixed methods research on topics related to distributed cognition, institutions and organizations, and Computer Supported Cooperative Work (CSCW). |
Tutorial | 2021 |
||
Breakout Cognitive Work Analysis – From System Requirements to Validation and Verification (Abstract)
Human-system interaction is a critical yet often neglected aspect of the system development process. It is most commonly incorporated into system performance assessments late in the design process, leaving little opportunity for any substantive changes to ensure satisfactory system performance is achieved. As a result, workarounds and compromises become a patchwork of “corrections” that end up in the final fielded system. But what if mission outcomes, the work context, and performance expectations could be articulated earlier in the process, thereby influencing the development process throughout? This presentation will discuss how a formative method from the field of cognitive systems engineering, cognitive work analysis, can be leveraged to derive design requirements compatible with traditional systems engineering processes. This method establishes not only requirements from which system designs can be constructed, but also system performance expectations that can be more acutely defined a priori to guide the validation and verification process. Cognitive work analysis methods will be described to highlight how ‘cognitive work’ and ‘information relationship’ requirements can be derived, and will be showcased in a case-study application of building a decision support system for future human spaceflight operations. Specifically, a description of the testing campaign employed to verify and validate the fielded system will be provided. In summary, this presentation will cover how system requirements can be established early in the design phase, guide the development of design solutions, and subsequently be used to assess the operational performance of those solutions within the context of the work domain they are intended to support. |
Matthew Miller Exploration Research Engineer Jacobs/NASA Johnson Space Center (bio)
Matthew J. Miller is an Exploration Research Engineer within the Astromaterials Research and Exploration Sciences (ARES) division at NASA Johnson Space Center. His work focuses on advancing present-day tools, technologies and techniques to improve future EVA operations by applying cognitive systems engineering principles. He has over seven years of EVA flight operations and NASA analog experience where he has developed and deployed various EVA support systems and concept of operations. He received a B.S. (2012), M.S. (2014) and Ph.D. (2017) in aerospace engineering from the Georgia Institute of Technology. |
Breakout |
| 2021 |
|
Panel Multi-Agent Adaptive Coordinated Autonomy in Contested Battlefields (Abstract)
Autonomous multi-robot systems have the potential to augment the future force with enhanced capability while reducing the risk to human personnel in multi-domain operations (MDO). Mobile robots can constitute nodes in a heterogeneous Internet of Battlefield Things (IoBT); they can offer additional capability in the form of mobility to effectively make observations useful for planning and executing military operations against adversaries. In this talk, I will present the results of a series of field experiments where robots are tasked to perform military-relevant missions in realistic environments, in addition to describing the integration of mobile robot assets into the Multi-Purpose Sensing Array Distributed Proving Ground (MSA-DPG) for the purpose of augmenting IoBT systems. |
John Rogers Senior Research Scientist U.S. Army DEVCOM Army Research Laboratory (bio)
John Rogers is a research scientist specializing in autonomous mobile robotics at the Army Research Laboratory’s Intelligent Robotics Branch of the Computational and Information Sciences Directorate (CISD). John’s research has focused on autonomy for multi-robot teams as well as distributed multi-robot state estimation. John is currently leading the Tactical Behaviors group in the AI for mobility and maneuver essential research program, which is focused on developing deep learning and game-theoretic maneuver for ground robots against adversaries. Prior to this, John led the multi-robot mapping research on the Autonomy Research Pilot Initiative (ARPI) on “autonomous collective defeat of hard and deeply buried targets” in collaboration with his colleagues at the Air Force Research Laboratory. John has also partnered with DCIST, MAST, and Robotics CTA partners to extend funded research programs with in-house collaborative projects. John completed his Ph.D. degree at the Georgia Institute of Technology in 2012 with his advisor, Prof. Henrik Christensen, from the Robotics and Intelligent Machines center. While at Georgia Tech, John participated in a variety of sponsored research projects, including the Micro Autonomous Systems and Technology (MAST) project from the Army Research Laboratory and a counter-terrorism “red team” project for the Naval Research Laboratory, in addition to his thesis research on semantic mapping and reasoning, culminating in his thesis “Life-long mapping of objects and places for domestic robots”. Prior to attending Georgia Tech, John completed an M.S. degree in Computer Science at Stanford (2006) while working with Prof. Sebastian Thrun and Prof. Andrew Ng on the DARPA Learning Applied to Ground Robots (LAGR) project. John also holds M.S. and B.S. degrees in Electrical and Computer Engineering from Carnegie Mellon University (2002). John has authored or co-authored over 50 scientific publications in robotics and computer vision journals and conferences. John’s current research interests are automatic exploration and mapping of large-scale indoor, outdoor, and subterranean environments, place recognition in austere locations, semantic scene understanding, and probabilistic reasoning for autonomous mobile robots. Google Scholar: https://scholar.google.com/citations?user=uH_LDocAAAAJ&hl=en |
Panel |
| 2021 |
|
Roundtable Overcoming Challenges and Applying Sequential Procedures to T&E (Abstract)
Most statistical analyses involve observing a fixed set of data and analyzing those data after the final observation has been collected to draw some inference about the population from which they came. Unlike these traditional methods, sequential analysis is concerned with situations for which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends upon the information acquired throughout the course of the investigation. Expanding the use of sequential analysis in DoD testing has the potential to save substantial test dollars and decrease test time. However, switching from traditional to sequential planning will likely induce unique challenges. The goal of this roundtable is to provide an open forum for topics related to sequential analyses. We aim to discuss potential challenges, identify ways to overcome them, and share success stories of implementing sequential analyses along with lessons learned. Specific questions for discussion will be provided to participants prior to the event. An illustrative code sketch follows this entry. |
Rebecca Medlin Research Staff Member Institute for Defense Analyses (bio)
Dr. Rebecca Medlin is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and has designed tests and conducted statistical analyses for several major defense programs including tactical vehicles, mobility aircraft, radars, and electronic warfare systems. Her areas of expertise include design of experiments, statistical modeling, and reliability. She has a Ph.D. in Statistics from Virginia Tech. |
Roundtable | 2021 |
||
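As a concrete flavor of the sequential procedures discussed here, below is a minimal sketch of Wald’s sequential probability ratio test (SPRT) for a pass/fail reliability test. All parameter values (p0, p1, alpha, beta) are hypothetical and chosen only for illustration.

```python
# Hedged sketch of Wald's SPRT: H0: p = p0 vs H1: p = p1 (p1 < p0).
import numpy as np

p0, p1 = 0.90, 0.75          # acceptable vs unacceptable success probability
alpha, beta = 0.05, 0.10     # type I and type II error rates

A = np.log((1 - beta) / alpha)    # cross above: reject H0
B = np.log(beta / (1 - alpha))    # cross below: accept H0

def sprt(outcomes):
    """Process 0/1 trial outcomes one at a time; stop when a boundary is crossed."""
    llr = 0.0
    for n, x in enumerate(outcomes, start=1):
        # Log-likelihood-ratio increment for a Bernoulli observation.
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
        if llr >= A:
            return n, "reject H0 (evidence of low reliability)"
        if llr <= B:
            return n, "accept H0 (evidence of high reliability)"
    return len(outcomes), "no decision yet; continue testing"

rng = np.random.default_rng(1)
trials = rng.random(200) < 0.90       # simulate a truly reliable system
print(sprt(trials))                   # typically stops well before 200 trials
```

The potential savings come from early stopping: a clearly good (or clearly bad) system crosses a boundary long before a fixed-sample test of comparable error rates would end.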
Breakout Verification and Validation of Elastodynamic Simulation Software for Aerospace Research (Abstract)
Physics-based simulation of nondestructive evaluation (NDE) inspection can help to advance the inspectability and reliability of mechanical systems. However, NDE simulations applicable to non-idealized mechanical components often require large compute domains and long run times. This has prompted development of custom NDE simulation software tailored to high performance computing (HPC) hardware. Verification and validation (V&V) is an integral part of developing this software to ensure implementations are robust and applicable to inspection problems, producing tools and simulations suitable for computational NDE research. This presentation addresses factors common to V&V of several elastodynamic simulation codes applicable to ultrasonic NDE. Examples are drawn from in-house simulation software at NASA Langley Research Center, ranging from ensuring reliability in a 1D heterogeneous media wave equation solver to the V&V needs of 3D cluster-parallel elastodynamic software. Factors specific to a research environment are addressed, where individual simulation results can be as relevant as the software product itself. Distinct facets of V&V are discussed, including testing to establish software reliability, employing systematic approaches for consistency with fundamental conservation laws, establishing the numerical stability of algorithms, and demonstrating concurrence with empirical data. This talk also addresses V&V practices for small groups of researchers. This includes establishing resources (e.g., time and personnel) for V&V during project planning to mitigate and control the risk of setbacks. Similarly, we identify ways for individual researchers to use V&V during simulation software development itself to both speed up the development process and reduce incurred technical debt. A minimal verification sketch follows this entry. |
Erik Frankforter Research Engineer NASA Langley Research Center (bio)
Erik Frankforter is a research engineer in the Nondestructive Evaluation Sciences branch at NASA Langley Research Center. His research areas include the development of high performance computing nondestructive evaluation simulation software, advancement of inspection mechanics in advanced material systems, and application of physics-based simulation for inspection guidance. In 2017, he obtained a Ph.D. in mechanical engineering from the University of South Carolina, and served as a postdoctoral research scholar at the National Institute of Aerospace. |
Breakout |
| 2021 |
|
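One V&V practice mentioned above, establishing a solver’s numerical reliability, can be illustrated with a grid-convergence check. The sketch below is a generic 1-D wave equation example, not the NASA Langley in-house codes: a second-order finite-difference solution is compared against an exact standing wave, and the error should shrink roughly fourfold each time the mesh spacing is halved.

```python
# Hedged sketch: order-of-accuracy verification for a 1-D wave equation
# (u_tt = c^2 u_xx) leapfrog solver with fixed endpoints, checked against
# the exact standing-wave solution u = sin(pi x) cos(pi c t).
import numpy as np

def solve_wave(nx, c=1.0, t_end=0.5):
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    dt = 0.5 * dx / c                    # CFL number 0.5 keeps the scheme stable
    nt = int(round(t_end / dt))
    r2 = (c * dt / dx) ** 2
    u_prev = np.sin(np.pi * x)           # u(x,0) = sin(pi x), u_t(x,0) = 0
    u = u_prev.copy()
    # Special first step from the zero initial velocity (Taylor expansion).
    u[1:-1] = u_prev[1:-1] + 0.5 * r2 * (u_prev[2:] - 2*u_prev[1:-1] + u_prev[:-2])
    for _ in range(nt - 1):              # standard leapfrog updates
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2*u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    exact = np.sin(np.pi * x) * np.cos(np.pi * c * nt * dt)
    return np.max(np.abs(u - exact))

# Error should drop ~4x per mesh halving for this second-order scheme.
for nx in (50, 100, 200, 400):
    print(nx, solve_wave(nx))
```

The same pattern, manufactured or exact solutions plus a measured convergence rate, scales up to the 3D cluster-parallel codes the talk describes.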
Roundtable Test Design and Analysis for Modeling & Simulation Validation (Abstract)
System evaluations increasingly rely on modeling and simulation (M&S) to supplement live testing. It is thus crucial to thoroughly validate these M&S tools using rigorous data collection and analysis strategies. At this roundtable, we will identify and discuss some of the core challenges currently associated with implementing M&S validation for T&E. First, appropriate design of experiments (DOE) for M&S is not universally adopted across the T&E community. This arises in part due to limited knowledge of gold-standard techniques from academic research (e.g., space-filling designs; Gaussian process emulators) as well as lack of expertise with the requisite software tools. Second, T&E poses unique demands in testing, such as extreme constraints on live testing conditions and reliance on binary outcomes. There is no consensus on how to incorporate these needs into the existing academic framework for M&S. Finally, some practical considerations lack clear solutions yet have direct consequences on design choice. In particular, we may discuss the following: (1) sample size determination when calculating power and confidence is not applicable, and (2) non-deterministic M&S output with high levels of noise, which may benefit from replication samples as in classical DOE. An illustrative code sketch follows this entry. |
Kelly Avery Research Staff Member Institute for Defense Analyses (bio)
Kelly M. Avery is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and modeling & simulation, and has designed tests and conducted statistical analyses for several major defense programs including tactical aircraft, missile systems, radars, satellite systems, and computer-based intelligence systems. Her areas of expertise include statistical modeling, design of experiments, modeling & simulation validation, and statistical process control. Dr. Avery has a B.S. in Statistics, an M.S. in Applied Statistics, and a Ph.D. in Statistics, all from Florida State University. |
Roundtable | 2021 |
||
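The two gold-standard tools the abstract names, space-filling designs and Gaussian process emulators, fit together in a few lines. Below is a sketch using a hypothetical two-input stand-in for an M&S run: a Latin hypercube design chooses the simulation inputs, and a GP emulator fit to the outputs supplies predictions with uncertainty at held-out points.

```python
# Hedged sketch: Latin hypercube design + Gaussian process emulator.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulation(X):
    # Toy stand-in for an expensive M&S run with two inputs in [0, 1].
    return np.sin(2 * np.pi * X[:, 0]) * np.exp(-X[:, 1])

sampler = qmc.LatinHypercube(d=2, seed=7)
X_train = sampler.random(n=40)        # 40 space-filling design points in [0,1]^2
y_train = simulation(X_train)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Emulator predictions (with uncertainty) at held-out validation points.
X_test = sampler.random(n=10)
mean, sd = gp.predict(X_test, return_std=True)
print("max |emulator - simulation| on hold-out:",
      np.max(np.abs(mean - simulation(X_test))))
```

The predictive standard deviation `sd` is what distinguishes the emulator from a plain response surface; it feeds directly into validation questions like where the M&S tool is trustworthy enough to replace live trials.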
Breakout Statistical Engineering in Practice |
Alex Varbanov Principal Scientist Procter and Gamble (bio)
Dr. Alex Varbanov is a Principal Scientist at Procter & Gamble. He was born in Bulgaria. He graduated with a Ph.D. from the School of Statistics, University of Minnesota in 1999. Dr. Varbanov has been with P&G R&D for 21 years and provides statistical support for a variety of company business units (e.g., Fabric & Home Care) and brands (e.g., Tide, Swiffer). He performs experimental design, statistical analysis, and advanced modeling for many different areas including genomics, consumer research, and product claims. He is currently developing scent character end-to-end models for perfume optimization in P&G consumer products. |
Breakout | Session Recording |
Recording | 2021
Tutorial Introduction to Qualitative Methods – Part 3 |
Emily Fedele Research Staff Member Institute for Defense Analyses (bio)
Emily Fedele is a Research Staff Member at the Institute for Defense Analyses in the Science and Technology Division. She joined IDA in 2018 and her work focuses on conducting and evaluating behavioral science research on a variety of defense related topics. She has expertise in research design, experimental methods, and statistical analysis. |
Tutorial | 2021 |
||
Breakout Spatio-Temporal Modeling of Pandemics (Abstract)
The spread of COVID-19 across the United States provides an interesting case study in the modeling of spatio-temporal data. In this breakout session we will provide an overview of commonly used spatio-temporal models and demonstrate how Bayesian inference can be performed using both exact and approximate inferential techniques. Using COVID data, we will demonstrate visualization techniques in R and introduce “off-the-shelf” spatio-temporal models. We will introduce participants to the Integrated Nested Laplace Approximation (INLA) methodology and show how results from this technique compare to using Markov chain Monte Carlo (MCMC) techniques. Finally, we will demonstrate the shortfalls in using “off-the-shelf” models, show how epidemiologically motivated partial differential equations can be used to generate spatio-temporal models, and discuss inferential issues when we move away from common models. An illustrative code sketch follows this entry. |
Nicholas Clark Assistant Professor West Point (bio)
LTC Nicholas Clark is an Academy Professor at the United States Military Academy where he heads the Center for Data Analysis and Statistics. Nick received his PhD in Statistics from Iowa State University and his research interests include spatio-temporal statistics and Bayesian methodology. |
Breakout |
| 2021 |
|
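The session itself works in R with INLA and MCMC; as a language-neutral illustration of the kind of “off-the-shelf” separable spatio-temporal model it discusses, the sketch below simulates a Gaussian field with exponential spatial covariance and AR(1) temporal dependence driving Poisson case counts. All settings (locations, range, correlation) are hypothetical.

```python
# Hedged sketch: separable spatio-temporal random effects -> Poisson counts.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_times = 30, 20
coords = rng.random((n_sites, 2))                  # site locations in [0,1]^2

# Spatial covariance: exponential decay with inter-site distance.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 0.3)
L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(n_sites))

rho, sigma = 0.8, 0.5                              # AR(1) coefficient, marginal sd
eta = np.zeros((n_times, n_sites))
eta[0] = sigma * (L @ rng.standard_normal(n_sites))
for t in range(1, n_times):                        # AR(1) in time, correlated in space
    innov = L @ rng.standard_normal(n_sites)
    eta[t] = rho * eta[t - 1] + sigma * np.sqrt(1 - rho**2) * innov

# Log-link Poisson observations, i.e., county-by-week style case counts.
counts = rng.poisson(np.exp(1.0 + eta))
print(counts.shape, counts.mean())                 # (20, 30) array of counts
```

Fitting the reverse problem, recovering the latent field from the counts, is exactly where INLA or MCMC enters in the R workflow the session demonstrates.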
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Sarah Burke STAT Expert STAT Center of Excellence (bio)
Dr. Sarah Burke is a scientific test and analysis techniques (STAT) Expert for the STAT Center of Excellence. She works with acquisition programs in the Air Force, Army, and Navy to improve test efficiency, plan tests effectively, and analyze the resulting test data to inform decisions on system development. She received her M.S. in Statistics and Ph.D. in Industrial Engineering from Arizona State University. |
Panel | Session Recording |
Recording | 2021
Panel Finding the Human in the Loop: Considerations for AI in Decision Making |
Joe Lyons Lead for the Collaborative Interfaces and Teaming Core Research Area 711 Human Performance Wing at Wright-Patterson AFB (bio)
Joseph B. Lyons is the Lead for the Collaborative Interfaces and Teaming Core Research Area within the 711 Human Performance Wing at Wright-Patterson AFB, OH. Dr. Lyons received his PhD in Industrial/Organizational Psychology from Wright State University in Dayton, OH, in 2005. Some of Dr. Lyons’ research interests include human-machine trust, interpersonal trust, human factors, and influence. Dr. Lyons has worked for the Air Force Research Laboratory as a civilian researcher since 2005, and between 2011 and 2013 he served as the Program Officer at the Air Force Office of Scientific Research, where he created a basic research portfolio to study both interpersonal and human-machine trust as well as social influence. Dr. Lyons has published in a variety of peer-reviewed journals and is an Associate Editor for the journal Military Psychology. Dr. Lyons is a Fellow of the American Psychological Association and the Society for Military Psychologists. |
Panel | Session Recording |
Recording | 2021
Breakout Fast, Unbiased Uncertainty Propagation with Multi-model Monte Carlo (Abstract)
With the rise of machine learning and artificial intelligence, there has been a huge surge in data-driven approaches to solve computational science and engineering problems. In the context of uncertainty propagation, machine learning is often employed for the construction of efficient surrogate models (i.e., response surfaces) to replace expensive, physics-based simulations. However, relying solely on surrogate models without any recourse to the original high-fidelity simulation will produce biased estimators and can yield unreliable or non-physical results. This talk discusses multi-model Monte Carlo methods that combine predictions from fast, low-fidelity models with reliable, high-fidelity simulations to enable efficient and accurate uncertainty propagation. For instance, the low-fidelity models could arise from coarsened discretizations in space/time (e.g., Multilevel Monte Carlo – MLMC) or from general data-driven or reduced-order models (e.g., Multifidelity Monte Carlo – MFMC; Approximate Control Variates – ACV). Given a fixed computational budget and a collection of models of varying cost/accuracy, the goal of these methods is to optimally allocate and combine samples across the models. The talk will also present a NASA-developed open-source Python library that acts as a general multi-model uncertainty propagation capability. The effectiveness of the discussed methods and Python library is demonstrated on a trajectory simulation application. Here, orders-of-magnitude gains in computational speed and accuracy are obtained for predicting the landing location of an umbrella heat shield under significant uncertainties in initial state, atmospheric conditions, etc. An illustrative code sketch follows this entry. |
Geoffrey Bomarito Materials Research Engineer NASA Langley Research Center (bio)
Dr. Geoffrey Bomarito is a Materials Research Engineer at NASA Langley Research Center. Before joining NASA in 2014, he earned a PhD in Computational Solid Mechanics from Cornell University. He also holds an MEng from the Massachusetts Institute of Technology and a BS from Cornell University, both in Civil and Environmental Engineering. Dr. Bomarito’s work centers around machine learning and uncertainty quantification as applied to aerospace materials and structures. His current topics of interest are physics informed machine learning, symbolic regression, additive manufacturing, and trajectory simulation. |
Breakout |
| 2021 |
|
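The core mechanism behind these multi-model estimators can be shown with a two-model control variate. The sketch below is illustrative only, with hypothetical models and sample counts (it is not the NASA library mentioned above, and the real methods optimize the sample allocation for a cost budget): many cheap low-fidelity evaluations correct the mean estimate from a few high-fidelity runs, rather than trusting the surrogate alone.

```python
# Hedged sketch: two-model control-variate Monte Carlo estimator.
import numpy as np

rng = np.random.default_rng(42)

def f_hi(x):
    # "Expensive" high-fidelity model (hypothetical stand-in).
    return np.sin(x) + 0.05 * x**2

def f_lo(x):
    # Cheap, correlated low-fidelity surrogate (hypothetical stand-in).
    return np.sin(x)

N_hi, N_lo = 100, 10_000
x_hi = rng.normal(size=N_hi)      # inputs where both models are run (paired)
x_lo = rng.normal(size=N_lo)      # extra inputs for the cheap model only

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)

# Control-variate weight estimated from the paired runs.
C = np.cov(y_hi, y_lo_paired)
a = C[0, 1] / C[1, 1]

# For a fixed weight, the correction term has mean zero, so the surrogate's
# bias does not leak into the estimate; it only reduces variance.
mu_cv = y_hi.mean() + a * (f_lo(x_lo).mean() - y_lo_paired.mean())

print("plain MC with 100 HF runs:   ", y_hi.mean())
print("two-model control variate:   ", mu_cv)
```

MLMC, MFMC, and ACV generalize this idea to whole hierarchies of models, choosing how many samples each model gets so that total variance is minimized at fixed cost.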
Roundtable Organizing and Sharing Data within the T&E Community (Abstract)
Effective data sharing requires alignment of personnel, systems, and policies. Data are costly and precious, and to get the most value out of the data we collect, it is important that we share and reuse it whenever possible and appropriate. Data are typically collected and organized with a single specific use or goal in mind, and after that goal has been achieved (e.g., the report is published), the data are no longer viewed as important or useful. This process is self-fulfilling, as future analysts who might want to use these data will not be able to find them or will be unable to understand the data sufficiently due to lack of documentation and metadata. Implementing data standards and facilitating sharing are challenging in the national security environment. There are many data repositories within the DoD, but most of them are specific to certain organizations and are accessible by a limited number of people. Valid concerns about security make the process of sharing particular data sets challenging, and the opacity of data ownership often complicates the issue. This roundtable will facilitate discussion of these issues. Participants will have opportunities to share their experiences trying to share data and make use of data from previous testing. We hope to identify useful lessons learned and find ways to encourage data sharing within the community. |
Matthew Avery Research Staff Member Institute for Defense Analyses (bio)
Dr. Matthew Avery is a Research Staff Member at the Institute for Defense Analyses in the Operational Evaluation Division. As the Tactical Aerial Reconnaissance Systems Project Leader, he provides input to the DoD on test plans and reports for Army, Marine Corps, and Navy tactical unmanned aircraft systems. Dr. Avery co-chairs IDA’s Data Governance Committee and leads the Data Management Group within IDA’s Test Science team, where he helps develop internal policies and trainings on data usage, access and storage. From 2017 to 2018, Dr. Avery worked as an embedded analyst at the Department of Defense Office of Cost Analysis and Program Evaluation (CAPE), focusing on Space Control and Mobilization. Dr. Avery received his PhD in Statistics from North Carolina State University in 2012. |
Roundtable | 2021 |
||
Breakout Statistical Engineering in Practice (Abstract)
Problems faced in defense and aerospace often go well beyond textbook problems presented in academic settings. Textbook problems are typically very well defined, and can be solved through application of one “correct” tool. Conversely, many defense and aerospace problems are ill-defined, at least initially, and require an overall strategy to attack, one involving multiple tools and often multiple disciplines. Statistical engineering is an approach recently developed for addressing large, complex, unstructured problems, particularly those for which data can be effectively utilized. This session will present a brief overview of statistical engineering, and how it can be applied to engineer solutions to complex problems. Following this introduction, two case studies of statistical engineering will be presented, to illustrate the concepts. |
Roger Hoerl Associate Professor of Statistics Union College (bio)
Dr. Roger W. Hoerl is the Brate-Peschel Associate Professor of Statistics at Union College in Schenectady, NY. Previously, he led the Applied Statistics Lab at GE Global Research. While at GE, Dr. Hoerl led a team of statisticians, applied mathematicians, and computational financial analysts who worked on some of GE’s most challenging research problems, such as developing personalized medicine protocols, enhancing the reliability of aircraft engines, and management of risk for a half-trillion dollar portfolio. Dr. Hoerl has been named a Fellow of the American Statistical Association and the American Society for Quality, and has been elected to the International Statistical Institute and the International Academy for Quality. He has received the Brumbaugh and Hunter Awards, as well as the Shewhart Medal, from the American Society for Quality, and the Founders Award and Deming Lectureship Award from the American Statistical Association. While at GE Global Research, he received the Coolidge Fellowship, honoring one scientist a year from among the four global GE Research and Development sites for lifetime technical achievement. His book with Ron Snee, Statistical Thinking: Improving Business Performance, now in its 3rd edition, was called “the most practical introductory statistics textbook ever published in a business context” by the journal Technometrics. |
Breakout | Session Recording |
Recording | 2021
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process in which important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data like failure date, corrective action, and part number. The dashboard is an R Shiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end user to filter to the EFRs they care about and visualize metadata, such as the geographic region where the failure occurred, over time, revealing previously unknown trends. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and look at time- and region-based trends in these common equipment failures. An illustrative code sketch follows this entry. |
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
| 2021 |
|
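The dashboard’s topic-modeling step is easy to sketch. The app itself is R/Shiny (tm, topicmodels); the Python sketch below shows the same idea with scikit-learn’s LDA on a few fabricated, placeholder EFR-style snippets, surfacing recurring failure themes as topics.

```python
# Hedged sketch: LDA topic modeling on free-text failure descriptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Fabricated placeholder snippets standing in for real EFR narratives.
efr_texts = [
    "hydraulic pump seal leak during operation",
    "pump pressure loss traced to worn seal",
    "radio antenna connector corrosion found",
    "corroded antenna cable caused intermittent comms",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(efr_texts)             # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):  # top words per topic
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```

In the dashboard, each topic’s top words act as a label for a failure theme, which analysts can then cross-filter against the structured fields (date, region, part number).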
Breakout Empirical Analysis of COVID-19 in U.S. States and Counties (Abstract)
The zoonotic emergence of the coronavirus SARS-CoV-2 at the beginning of 2020 and the subsequent global pandemic of COVID-19 have caused massive disruptions to economies and health care systems, particularly in the United States. Using the results of serology testing, we have developed true prevalence estimates for COVID-19 case counts in the U.S. over time, which allows for clearer estimates of infection and case fatality rates throughout the course of the pandemic. In order to elucidate policy, demographic, weather, and behavioral factors that contribute to or inhibit the spread of COVID-19, IDA compiled panel data sets of empirically derived, publicly available COVID-19 data and analyzed which factors were most highly correlated with increased and decreased spread within U.S. states and counties. These analyses led to several recommendations for future pandemic response preparedness. An illustrative code sketch follows this entry. |
Emily Heuring Research Staff Member Institute for Defense Analyses (bio)
Dr. Emily Heuring received her PhD in Biochemistry, Cellular, and Molecular Biology from the Johns Hopkins University School of Medicine in 2004 on the topic of human immunodeficiency virus and its impact on the central nervous system. Since that time, she has been a Research Staff Member at the Institute for Defense Analyses, supporting operational testing of chemical and biological defense programs. More recently, Dr. Heuring has supported OSD-CAPE on Army and Marine Corps programs and the impact of COVID-19 on the general population and DOD. |
Breakout |
| 2021 |
|
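A standard building block for turning raw serology positivity into a "true prevalence" estimate is the Rogan-Gladen correction for imperfect test sensitivity and specificity. The sketch below is the generic textbook estimator with hypothetical numbers, offered for intuition rather than as IDA’s exact procedure.

```python
# Hedged sketch: Rogan-Gladen correction for imperfect serology tests.
def rogan_gladen(p_apparent, sensitivity, specificity):
    """True prevalence implied by the apparent (test-positive) prevalence."""
    return (p_apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Example: 8% of serology samples test positive on a 90%-sensitive,
# 98%-specific assay (all values hypothetical).
p_true = rogan_gladen(0.08, sensitivity=0.90, specificity=0.98)
print(f"estimated true prevalence: {p_true:.3f}")   # ~0.068
```

Intuitively, the numerator removes the false positives expected from an imperfect test, and the denominator rescales for the positives the test misses; applied over time, corrections like this underpin the infection and case fatality rate estimates described above.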
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
John Haman Research Staff Member Institute for Defense Analyses (bio)
Dr. John Haman is a statistician at the Institute for Defense Analyses, where he develops methods and tools for analyzing test data. He has worked with a variety of Army, Navy, and Air Force systems, including counter-UAS and electronic warfare systems. Currently, John is supporting the Joint Artificial Intelligence Center. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Chad Bieber Director, Test and Evaluation; Senior Research Engineer, Johns Hopkins University Applied Physics Laboratory (bio)
Chad Bieber is a Senior Research Engineer at the Johns Hopkins University Applied Physics Lab, is currently working as the Test and Evaluation Director for Project Maven, and was previously a Research Staff Member at IDA. A former pilot in the US Air Force, he received his Ph.D. in Aerospace Engineering from North Carolina State University. Chad is interested in how humans interact with complex, and increasingly autonomous, systems. |
Panel | Session Recording |
Recording | 2021 |