Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout Physics-Informed Deep Learning for Modeling and Simulation under Uncertainty (Abstract)
Certification by analysis (CBA) involves the supplementation of expensive physical testing with modeling and simulation. In high-risk fields such as defense and aerospace, it is critical that these models accurately represent the real world; they must therefore be verified and validated, and must provide measures of uncertainty. While machine learning (ML) algorithms such as deep neural networks have seen significant success in low-risk sectors, they are typically opaque, difficult to interpret, and often fail to meet these stringent requirements. A Department of Energy (DOE) report on the concept of scientific machine learning (SML) [1] was recently released with the aim of improving confidence in ML and enabling its broader use in the scientific and engineering communities. The report identified three critical attributes that ML algorithms should possess: domain awareness, interpretability, and robustness. Recent advances in physics-informed neural networks (PINNs) are promising in that they can provide both domain awareness and a degree of interpretability [2, 3, 4] by using governing partial differential equations as constraints during training. In this way, PINNs output physically admissible, albeit deterministic, solutions. Another noteworthy deep learning algorithm is the generative adversarial network (GAN), which can learn probability distributions [5] and provide robustness through uncertainty quantification. A limited number of works have recently demonstrated success in combining these two methods into what is referred to as a physics-informed GAN, or PIGAN [6, 7]. The PIGAN can produce physically admissible, non-deterministic predictions and can solve non-deterministic inverse problems, potentially meeting the goals of domain awareness, interpretability, and robustness. This talk will present an introduction to PIGANs as well as an example of current NASA research implementing these networks.
REFERENCES
[1] Nathan Baker, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop report on basic research needs for scientific machine learning: Core technologies for artificial intelligence. Technical report, USDOE Office of Science (SC), Washington, DC (United States), 2019.
[2] Maziar Raissi, Paris Perdikaris, and George Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
[3] Alexandre Tartakovsky, Carlos Ortiz Marrero, Paris Perdikaris, Guzel Tartakovsky, and David Barajas-Solano. Learning parameters and constitutive relationships with physics informed deep neural networks. arXiv preprint arXiv:1808.03398, 2018.
[4] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 807:155-166, 2016.
[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[6] Liu Yang, Dongkun Zhang, and George Karniadakis. Physics-informed generative adversarial networks for stochastic differential equations. arXiv preprint arXiv:1811.02033, 2018.
[7] Yibo Yang and Paris Perdikaris. Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics, 394:136-152, 2019. |
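The core mechanism the abstract describes, using the governing equation as a constraint during training, fits in a few lines. The sketch below illustrates the general PINN idea of [2] (it is not the speaker's implementation) using PyTorch on the toy equation du/dx + u = 0 with u(0) = 1, whose exact solution is exp(-x): the equation residual, computed by automatic differentiation, is simply added to the training loss.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the governing equation is enforced.
x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    u = net(x)
    # du/dx via automatic differentiation -- the "physics-informed" part.
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    pde_residual = du + u                        # governing equation du/dx + u = 0
    ic_residual = net(torch.zeros(1, 1)) - 1.0   # initial condition u(0) = 1
    loss = (pde_residual ** 2).mean() + (ic_residual ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(x) approximates exp(-x) on [0, 2].
```

Physical admissibility comes from the residual term; the same pattern extends to PDEs by differentiating the network output with respect to each coordinate.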
Patrick Leser Aerospace Technologist NASA Langley Research Center (bio)
Dr. Patrick Leser is a researcher in the Durability, Damage Tolerance, and Reliability Branch (DDTRB) at NASA Langley Research Center (LaRC) in Hampton, VA. After receiving a B.S. in Aerospace Engineering from North Carolina State University (NCSU), he became a NASA civil servant under the Pathways Intern Employment Program in 2014. In 2017, he received his PhD in Aerospace Engineering, also from NCSU. Dr. Leser’s research focuses primarily on uncertainty quantification (UQ), model calibration, and fatigue crack growth in metallic materials. Dr. Leser has applied these interests to various topics including structural health management, digital twin, additive manufacturing, battery health management, and various NASA Engineering and Safety Center (NESC) assessments, primarily focusing on the fatigue life of composite overwrapped pressure vessels (COPVs). A primary focus of his work has been the development of computationally efficient UQ methods that draw on fields such as high-performance computing and machine learning. |
Breakout |
| 2021 |
|
Breakout Statistical Engineering in Practice (Abstract)
Problems faced in defense and aerospace often go well beyond textbook problems presented in academic settings. Textbook problems are typically very well defined and can be solved through application of one “correct” tool. Conversely, many defense and aerospace problems are ill-defined, at least initially, and require an overall strategy of attack, one involving multiple tools and often multiple disciplines. Statistical engineering is an approach recently developed for addressing large, complex, unstructured problems, particularly those for which data can be effectively utilized. This session will present a brief overview of statistical engineering and how it can be applied to engineer solutions to complex problems. Following this introduction, two case studies of statistical engineering will be presented to illustrate the concepts. |
Angie Patterson Chief Consulting Engineer GE Aviation |
Breakout | Session Recording |
Recording | 2021 |
Breakout Method for Evaluating Bayesian Reliability Models for Developmental Testing (Abstract)
For analysis of military Developmental Test (DT) data, frequentist statistical models are increasingly challenged to meet the needs of analysts and decision-makers. Bayesian models have the potential to address this challenge. Although there is a substantial body of research on Bayesian reliability estimation, there appears to be a paucity of Bayesian applications to issues of direct interest to DT decision-makers. To address this deficiency, this research accomplishes two tasks. First, this work provides a motivating example that analyzes reliability for a notional but representative system. Second, to enable the motivated analyst to apply Bayesian methods, it provides a foundation and best practices for Bayesian reliability analysis in DT. The first task is accomplished by applying Bayesian reliability assessment methods to notional DT lifetime data generated using a Bayesian reliability growth planning methodology (Wayne 2018). The tested system is assumed to be a generic complex system with a large number of failure modes. Starting from the Bayesian assessment methodology of Wayne and Modarres (2015), this work explores the sensitivity of the Bayesian results to the choice of the prior distribution and compares the Bayesian results for the reliability point estimate and uncertainty interval with analogous results from traditional reliability assessment methods. The second task is accomplished by establishing a generic structure for systematically evaluating relevant Bayesian statistical models. First, it identifies reliability issues that have been implicit in DT programs, using a structured poll of stakeholders combined with interviews of a selected set of Subject Matter Experts. Second, candidate solutions are identified in the literature. Third, solutions are matched to issues using criteria designed to evaluate the capability of a solution to improve support for decision-makers at critical points in DT programs. The matching process uses a model taxonomy structured according to decisions at each DT phase, plus criteria for model applicability and data availability. The end result is a generic structure that allows an analyst to identify and evaluate a specific model for use with a program and issue of interest. Wayne, Martin. 2018. “Modeling Uncertainty in Reliability Growth Plans.” 2018 Annual Reliability and Maintainability Symposium (RAMS), 1-6. Wayne, Martin, and Mohammad Modarres. 2015. “A Bayesian Model for Complex System Reliability.” IEEE Transactions on Reliability 64: 206-220. |
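As a hedged illustration of the prior-sensitivity comparison the abstract describes (the exponential lifetime model, notional data, and priors below are assumptions for illustration, not the authors' models): with exponential lifetimes, a Gamma(a, b) prior on the failure rate is conjugate, so the posterior after n failures in total test time T is Gamma(a + n, b + T), and the effect of the prior on the point estimate and uncertainty interval can be read off directly.

```python
from scipy.stats import gamma

n_fail, total_time = 7, 1200.0  # notional DT data: 7 failures in 1200 hours
for name, a, b in [("diffuse", 0.5, 0.01), ("informative", 5.0, 1000.0)]:
    # Posterior for the failure rate: Gamma(a + n) with rate (b + T).
    post = gamma(a + n_fail, scale=1.0 / (b + total_time))
    lam = post.mean()
    lo, hi = post.ppf([0.05, 0.95])
    # MTBF = 1 / rate, so the interval endpoints swap when inverted.
    print(f"{name} prior: MTBF estimate {1/lam:.0f} h, "
          f"90% interval ({1/hi:.0f}, {1/lo:.0f}) h")
```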
Paul Fanto and David Spalding Research Staff Members, System Evaluation Division IDA (bio)
Dr. Paul Fanto is a Research Staff Member at the Institute for Defense Analyses. He received a Ph.D. in Physics from Yale University, where he worked on the application of Monte Carlo methods and high-performance computing to the modeling of atomic nuclei. His current work involves the study of space systems and the application of Bayesian statistical methods to defense system testing. Dr. Spalding is a Research Staff Member at the Institute for Defense Analyses. He has a Ph.D. from the University of Rochester in experimental particle physics and a master’s degree from George Washington University in Computer Science. At the Institute for Defense Analyses, he has analyzed aircraft and missile system issues. For the past decade he has addressed programmatic and statistical problems in developmental testing. |
Breakout | Session Recording |
Recording | 2022 |
Keynote NASA AERONAUTICS |
Bob Pearce Deputy Associate Administrator for Strategy, Aeronautics Research Mission Directorate, NASA (bio)
Mr. Pearce is responsible for leading aeronautics research mission strategic planning to guide the conduct of the agency’s aeronautics research and technology programs, as well as leading ARMD portfolio planning and assessments, mission directorate budget development and approval processes, and review and evaluation of all of NASA’s aeronautics research mission programs for strategic progress and relevance. Pearce is also currently acting director for ARMD’s Airspace Operations and Safety Program, responsible for the overall planning, management, and evaluation of foundational air traffic management and operational safety research. Previously he was director for strategy, architecture and analysis for ARMD, responsible for establishing a strategic systems analysis capability focused on understanding the system-level impacts of NASA’s programs, the potential for integrated solutions, and the development of high-leverage options for new investment and partnership. From 2003 until July 2010, Pearce was the deputy director of the FAA-led Next Generation Air Transportation System (NextGen) Joint Planning and Development Office (JPDO). The JPDO was an interagency office tasked with developing and facilitating the implementation of a national plan to transform the air transportation system to meet the long-term transportation needs of the nation. Prior to the JPDO, Pearce held various strategic and program management positions within NASA. In the mid-1990s he led the development of key national policy documents including the National Science and Technology Council’s “Goals for a National Partnership in Aeronautics Research and Technology” and the “Transportation Science and Technology Strategy.” These two documents provided a substantial basis for NASA’s expanded investment in aviation safety and airspace systems. He began his career as a design engineer at the Grumman Corporation, working on such projects as the Navy’s F-14 Tomcat fighter and DARPA’s X-29 Forward Swept Wing Demonstrator. Pearce also has experience from the Department of Transportation’s Volpe National Transportation Systems Center, where he made contributions in the area of advanced concepts for intercity transportation systems. Pearce has received NASA’s Exceptional Service Medal for sustained excellence in planning and advocating innovative aeronautics programs in conjunction with the White House and other federal agencies. He received NASA’s Exceptional Achievement Medal for outstanding leadership of the JPDO in support of the transformation of the nation’s air transportation system. Pearce has also received NASA’s Cooperative External Achievement Award and several Exceptional Performance and Group Achievement Awards. He earned a bachelor of science degree in mechanical and aerospace engineering from Syracuse University, and a master of science degree in technology and policy from the Massachusetts Institute of Technology. |
Keynote | Materials | 2018 |
|
Breakout Technical Leadership Panel-Tuesday Afternoon |
Frank Peri Deputy Director Langley Engineering Directorate |
Breakout | 2016 |
||
Breakout Data Visualization (Abstract)
Teams of people with many different talents and skills work together at NASA to improve our understanding of our planet Earth, our Sun and solar system, and the Universe. The Earth system is made up of complex interactions and dependencies among the solar, oceanic, terrestrial, atmospheric, and living components. Solar storms have been recognized as a cause of technological problems on Earth since the invention of the telegraph in the 19th century. Solar flares, coronal holes, and coronal mass ejections (CMEs) can emit large bursts of radiation, high-speed electrons and protons, and other highly energetic particles that are released from the Sun and are sometimes directed at Earth. These particles and radiation can damage satellites in space, shut down power grids on Earth, cause GPS outages, and pose serious health risks to humans flying at high altitudes on Earth, as well as astronauts in space. NASA builds and operates a fleet of satellites to study the Sun and a fleet of satellites and aircraft to observe the Earth system. NASA combines these observations with numerical models to understand how these systems work. Using satellite observations alongside computer models, we can combine many pieces of information to form a coherent view of Earth and the Sun. NASA research helps us understand how processes combine to affect life on Earth: this includes severe weather, health, changes in climate, and space weather. The Scientific Visualization Studio (SVS) wants you to learn about NASA programs through visualization. The SVS works closely with scientists in the creation of data visualizations, animations, and images in order to promote a greater understanding of Earth and Space Science research activities at NASA and within the academic research community supported by NASA. |
Lori Perkins NASA |
Breakout | 2017 |
||
Breakout Operational Cybersecurity Test and Evaluation of Non-IP and Wireless Networks (Abstract)
Nearly all land, air, and sea maneuver systems (e.g., vehicles, ships, aircraft, and missiles) are becoming more software-reliant and blending internal communication across both Internet Protocol (IP) and non-IP buses. IP communication is widely understood among the cybersecurity community, whereas expertise and available test tools for non-IP protocols such as Controller Area Network (CAN) and MIL-STD-1553 are not as commonplace. However, a core tenet of operational cybersecurity testing is to assess all potential pathways of information exchange present on the system, including both IP and non-IP. In this presentation, we will introduce a few non-IP protocols (e.g., CAN, MIL-STD-1553) and provide a live demonstration of how to attack a CAN network using malicious message injection. We will also discuss how potential cyber effects on non-IP buses can lead to catastrophic mission effects on the target system. |
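To give a flavor of why such demonstrations are easy to stage, here is a minimal sketch (illustrative channel name and arbitration ID on a software-only virtual bus; this is not the presenters' tooling) using the open-source python-can library. Classic CAN frames carry no sender authentication, so any node on the bus can transmit any arbitration ID, which is the property that malicious message injection exploits.

```python
import can

# Two "nodes" on the same software-only virtual channel; no hardware needed.
bus = can.Bus(interface="virtual", channel="demo")
listener = can.Bus(interface="virtual", channel="demo")

# Any node may emit a frame with any arbitration ID it chooses.
frame = can.Message(arbitration_id=0x123, data=[0x01, 0x02],
                    is_extended_id=False)
bus.send(frame)

received = listener.recv(timeout=1.0)
print(received)  # the listener has no way to tell which node sent this frame

bus.shutdown()
listener.shutdown()
```

On a real vehicle bus, the same send call aimed at an ID the target trusts is what turns this into an attack, which is why non-IP pathways must be in scope for operational cyber testing.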
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Breakout | Session Recording |
Recording | 2021 |
Breakout Statistical Engineering in Practice |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. After eight years in private industry, Dr. Parker joined Langley Research Center in 1997. He holds a B.S. in Mechanical Engineering from Old Dominion University, an M.S. in Applied Physics and Computer Science from Christopher Newport University, and an M.S. and Ph.D. in Statistics from Virginia Tech. He is a licensed Professional Engineer in the Commonwealth of Virginia. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, American Society for Quality, and American Statistical Association. He currently serves as Chair-Elect of the International Statistical Engineering Association. Dr. Parker is the past-Chair of the American Society for Quality’s Publication Management Board and Editor Emeritus of the journal Quality Engineering. |
Breakout | Session Recording |
Recording | 2021 |
Breakout Engineering first, Statistics second: Deploying Statistical Test Optimization (STO) for Cyber (Abstract)
Due to the immense number of potential use cases, configurations, and threat behaviors, thorough and efficient cyber testing is a significant challenge for the defense community. In this presentation, Phadke will present case studies where STO was successfully deployed for cyber testing, resulting in higher assurance, reduced schedule, and reduced testing cost. Phadke will also discuss the importance of focusing first on the engineering and science analysis and, only after that is complete, implementing statistical methods. |
Kedar Phadke | Breakout | 2019 |
||
Breakout System Level Uncertainty Quantification for Low-Boom Supersonic Flight Vehicles (Abstract)
Under current FAA regulations, civilian aircraft may not operate at supersonic speeds over land. However, over the past few decades, there have been renewed efforts to invest in technologies to mitigate sonic boom from supersonic aircraft through advances in both vehicle design and sonic boom prediction. NASA has heavily invested in tools and technologies to enable commercial supersonic flight and currently has several technical challenges related to sonic boom reduction. One specific technical challenge relates to the development of tools and methods to predict, under uncertainty, the noise on the ground generated by an aircraft flying at supersonic speeds. In attempting to predict ground noise, many factors from multiple disciplines must be considered. Further, classification and treatment of uncertainties in coupled systems, multifidelity simulations, experimental data, and community responses are all concerns in system-level analysis of sonic boom prediction. This presentation will introduce the various methodologies and techniques utilized for uncertainty quantification with a focus on the build-up to system-level analysis. An overview of recent research activities and case studies investigating the impact of various disciplines and factors on variance in ground noise will be discussed. |
Ben Phillips | Breakout | 2018 |
||
Tutorial Presenting Complex Statistical Methodologies to Military Leadership (Abstract)
More often than not, the data we analyze for the military is plagued with statistical issues. Multicollinearity, small sample sizes, quasi-experimental designs, and convenience samples are some examples of what we commonly see in military data. Many of these complications can be resolved either in the design or analysis stage with appropriate statistical procedures. But, to keep our work useful, usable, and transparent to the military leadership who sponsors it, we must strike the elusive balance between explaining and justifying our design and analysis techniques and not inundating our audience with unnecessary details. It can be even more difficult to get military leadership to understand the statistical problems and solutions so well that they are enthused and supportive of our approaches. Using literature written on the subject as well as a variety of experiences, we will showcase several examples, as well as present ideas for keeping our clients actively engaged in statistical methodology discussions. |
Jane Pinelis Johns Hopkins University, Applied Physics Lab |
Tutorial | Materials | 2016 |
|
Breakout Communicating Complex Statistical Methodologies to Leadership (Abstract)
More often than not, the data we analyze for the military is plagued with statistical issues. Multicollinearity, small sample sizes, quasi-experimental designs, and convenience samples are some examples of what we commonly see in military data. Many of these complications can be resolved either in the design or analysis stage with appropriate statistical procedures. But, to keep our work useful, usable, and transparent to the military leadership who sponsors it, we must strike the elusive balance between explaining and justifying our design and analysis techniques and not inundating our audience with unnecessary details. It can be even more difficult to get military leadership to understand the statistical problems and solutions so well that they are enthused and supportive of our approaches. Using literature written on the subject as well as a variety of experiences, we will showcase several examples, as well as present ideas for keeping our clients actively engaged in statistical methodology discussions. |
Jane Pinelis Johns Hopkins University Applied Physics Lab (JHU/APL) |
Breakout | Materials | 2017 |
|
Breakout Do Asymmetries in Nuclear Arsenals Matter? (Abstract)
The importance of the nuclear balance vis-a-vis our principal adversary has been the subject of intense but unresolved debate in the international security community for almost seven decades. Perspectives on this question underlie national security policies regarding potential unilateral reductions in strategic nuclear forces, the imbalance of nonstrategic nuclear weapons in Europe, nuclear crisis management, nuclear proliferation, and nuclear doctrine. The overwhelming majority of past studies of the role of the nuclear balance in nuclear crisis evolution and outcome have been qualitative and focused on the relative importance of the nuclear balance and national resolve. Some recent analyses have invoked statistical methods; however, these quantitative studies have generated intense controversy because of concerns with analytic rigor. We apply a multi-disciplinary approach that combines historical case study, international relations theory, and appropriate statistical analysis. This approach results in defensible findings on the causal mechanisms that regulate nuclear crisis resolution. Such findings should inform the national security policy choices facing the Trump administration. |
Jane Pinelis Johns Hopkins University Applied Physics Lab (JHU/APL) |
Breakout | Materials | 2017 |
|
Tutorial Quality Control and Statistical Process Control (Abstract)
The need to draw causal inference about factors not under the researchers’ control calls for a specialized set of techniques developed for observational studies. The persuasiveness and adequacy of such an analysis depends in part on the ability to recover metrics from the data that would approximate those of an experiment. This tutorial will provide a brief overview of the common problems encountered with lack of randomization, as well as suggested approaches for rigorous analysis of observational studies. |
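One of the standard techniques alluded to above can be sketched compactly. The example below is a hypothetical illustration (not material from the tutorial) of inverse-propensity weighting: a logistic model of treatment assignment reweights outcomes so that the treated and control comparison approximates the balance a randomized experiment would have provided.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treated, y):
    """X: covariate matrix; treated: 0/1 assignment array; y: outcome array.

    Returns an inverse-propensity-weighted estimate of the average
    treatment effect.
    """
    # Estimate each unit's probability of receiving treatment.
    p = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    p = np.clip(p, 0.01, 0.99)  # trim extreme propensities for stability
    w_treat = treated / p
    w_ctrl = (1 - treated) / (1 - p)
    return np.average(y, weights=w_treat) - np.average(y, weights=w_ctrl)
```

Trimming the estimated propensities guards against the extreme weights that arise when some units are almost never (or almost always) treated.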
Jane Pinelis Research Staff Member IDA |
Tutorial |
| 2018 |
|
Panel Finding the Human in the Loop: Evaluating HSI with AI-Enabled Systems: What should you consider in a TEMP? |
Jane Pinelis Chief of the Test, Evaluation, and Assessment branch Department of Defense Joint Artificial Intelligence Center (JAIC) (bio)
Dr. Jane Pinelis is the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition, and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and also at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Panel | Session Recording |
Recording | 2021 |
Panelist 3 |
Jane Pinelis Joint Artificial Intelligence Center (bio)
Dr. Jane Pinelis is the Chief of AI Assurance at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) and Responsible AI (RAI) implementation for JAIC capabilities, as well as in the development of AI Assurance products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition, and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and also at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Session Recording |
Recording | 2022 |
|
Breakout Bayesian Adaptive Design for Conformance Testing with Bernoulli Trials (Abstract)
Co-authors: Adam L. Pintar, Blaza Toman, and Dennis Leber. A task of the Domestic Nuclear Detection Office (DNDO) is the evaluation of radiation and nuclear (rad/nuc) detection systems used to detect and identify illicit rad/nuc materials. To obtain estimated system performance measures, such as probability of detection, and to determine system acceptability, the DNDO sometimes conducts large-scale field tests of these systems at great cost. Typically, non-adaptive designs are employed, where each rad/nuc test source is presented to each system under test a predetermined and fixed number of times. This approach can lead to unnecessary cost if the system is clearly acceptable or unacceptable. In this presentation, an adaptive design with Bayesian decision-theoretic foundations is discussed as an alternative to, and contrasted with, the more common single-stage design. Although the basis of the method is Bayesian decision theory, designs may be tuned to have desirable type I and II error rates. While the focus of the presentation is a specific DNDO example, the method is applicable widely. Further, since constructing the designs is somewhat compute-intensive, software in the form of an R package will be shown and is available upon request. |
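A stripped-down version of the adaptive idea can be written with a conjugate Beta-Bernoulli update. This is a simplified posterior-probability stopping rule, not the authors' decision-theoretic design or their R package, and the requirement and thresholds are hypothetical.

```python
from scipy.stats import beta

def adaptive_conformance(trials, req=0.90, accept=0.95, reject=0.95,
                         a=1.0, b=1.0):
    """trials: iterable of 0/1 detection outcomes, observed one at a time.

    Starts from a Beta(a, b) prior on the detection probability p and stops
    as soon as the posterior settles conformance against p >= req.
    """
    n = 0
    for y in trials:
        n += 1
        a, b = a + y, b + (1 - y)            # Beta posterior update
        p_meets = 1.0 - beta.cdf(req, a, b)  # P(p >= req | data so far)
        if p_meets >= accept:
            return "accept", n               # clearly acceptable: stop early
        if (1.0 - p_meets) >= reject:
            return "reject", n               # clearly unacceptable: stop early
    return "continue testing", n
```

For example, `adaptive_conformance(iter([1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))` processes outcomes one presentation at a time and stops as soon as either posterior criterion is met, which is how an adaptive design avoids unnecessary trials on clearly acceptable or unacceptable systems.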
Adam Pintar NIST |
Breakout | Materials | 2016 |
|
Breakout Estimating the Distribution of an Extremum using a Peaks-Over-Threshold Model and Monte Carlo Simulation (Abstract)
Estimating the probability distribution of an extremum (maximum or minimum), for some fixed amount of time, using a single time series typically recorded for a shorter amount of time, is important in many application areas, e.g., structural design, reliability, quality, and insurance. When designing structural members, engineers are concerned with maximum wind effects, which are functions of wind speed. With respect to reliability and quality, extremes experienced during storage or transport, e.g., extreme temperatures, may substantially impact product quality, lifetime, or both. Insurance companies are of course concerned about very large claims. In this presentation, a method to estimate the distribution of an extremum using a well-known peaks-over-threshold (POT) model and Monte Carlo simulation is presented. Since extreme values have long been a subject of study, some brief history is first discussed. The POT model that underlies the approach is then laid out. A description of the algorithm follows; it leverages pressure data collected on scale models of buildings in a wind tunnel for context. Essentially, the POT model is fitted to the observed data and then used to simulate many time series of the desired length. The empirical distribution of the extrema is obtained from the simulated series. Uncertainty in the estimated distribution is quantified by a bootstrap algorithm. Finally, an R package implementing the computations is discussed. |
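The algorithm described above can be sketched as follows (a minimal illustration; the Poisson model for exceedance counts is my assumption rather than a detail stated in the abstract): fit a generalized Pareto distribution to the exceedances over the threshold, simulate many series of the target length, and keep the largest value from each.

```python
import numpy as np
from scipy.stats import genpareto

def max_distribution(x, threshold, target_len, n_sim=10_000, seed=None):
    """Empirical distribution of the maximum over a series of length target_len."""
    rng = np.random.default_rng(seed)
    exceed = x[x > threshold] - threshold
    rate = len(exceed) / len(x)                  # exceedances per observation
    c, _, scale = genpareto.fit(exceed, floc=0)  # GPD shape and scale
    maxima = np.empty(n_sim)
    for i in range(n_sim):
        k = rng.poisson(rate * target_len)       # exceedance count in new series
        if k == 0:
            maxima[i] = threshold                # crude floor: no exceedances
        else:
            peaks = genpareto.rvs(c, scale=scale, size=k, random_state=rng)
            maxima[i] = threshold + peaks.max()
    return maxima
```

Repeating the whole fit-and-simulate step on resampled data would give the bootstrap uncertainty the abstract mentions.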
Adam Pintar NIST |
Breakout | Materials | 2017 |
|
Webinar Statistical Engineering for Service Life Prediction of Polymers (Abstract)
Economically efficient selection of materials depends on knowledge of not just the immediate properties, but the durability of those properties. For example, when selecting building joint sealant, the initial properties are critical to successful design. These properties change over time and can result in failure in the application (buildings leak, glass falls). A NIST-led industry consortium has a research focus on developing new measurement science to determine how the properties of the sealant change with environmental exposure. In this talk, the two-decade history of the NIST-led effort will be examined through the lens of Statistical Engineering, specifically its six phases: (1) Identify the problem. (2) Provide structure. (3) Understand the context. (4) Develop a strategy. (5) Develop and execute tactics. (6) Identify and deploy a solution. Phases 5 and 6 will be the primary focus of this talk, but all of the phases will be discussed. The tactics of phase 5 were often themselves multi-month or multi-year research problems. Our approach to predicting outdoor degradation based only on accelerated weathering in the laboratory has been revised and improved many times over several years. In phase 6, because of NIST’s unique mission of promoting U.S. innovation and industrial competitiveness, the focus has been outward on technology transfer and the advancement of test standards. This may differ from industry and other government agencies, where the focus may be improvement of processes inside of the organization. |
Adam Pintar Mathematical Statistician National Institute of Standards and Technology (bio)
Adam Pintar is a Mathematical Statistician at the National Institute of Standards and Technology. He applies statistical methods and thinking to diverse application areas including Physics, Chemistry, Biology, Engineering, and more recently Social Science. He received a PhD in Statistics from Iowa State University. |
Webinar | Session Recording |
Recording | 2020 |
Breakout Sources of Error and Bias in Experiments with Human Subjects (Abstract)
No set of experimental data is perfect, and researchers are aware that data from experimental studies invariably contain some margin of error. This is particularly true of studies with human subjects, since human behavior is vulnerable to a range of intrinsic and extrinsic influences beyond the variables being manipulated in a controlled experimental setting. Potential sources of error may lead to wide variations in the interpretation of results and the formulation of subsequent implications. This talk will discuss specific sources of error and bias in the design of experiments and present systematic ways to overcome these effects. First, some of the basic errors in general experimental design will be discussed, including human errors, systematic errors, and random errors. Second, we will explore specific types of experimental error that appear in human subjects research. Lastly, we will discuss the role of bias in experiments with human subjects. Bias is a type of systematic error that is introduced into the sampling or testing phase and encourages one outcome over another. Often, bias is the result of the intentional or unintentional influence that an experimenter may exert on the outcomes of a study. We will discuss some common sources of bias in research with human subjects, including biases in sampling, selection, response, performance execution, and measurement. The talk will conclude with a discussion of how errors and bias influence the validity of human subjects research and will explore some strategies for controlling these errors and biases. |
Poornima Madhavan | Breakout | 2019 |
||
Breakout Demystifying the Black Box: A Test Strategy for Autonomy (Abstract)
Systems with autonomy are beginning to permeate civilian, industrial, and military sectors. Though these technologies have the potential to revolutionize our world, they also bring a host of new challenges in evaluating whether these tools are safe, effective, and reliable. The Institute for Defense Analyses is developing methodologies to enable testing systems that can, to some extent, think for themselves. In this talk, we share how we think about this problem and how this framing can help you develop a test strategy for your own domain. |
Dan Porter | Breakout |
| 2019 |
|
Breakout A Multi-method, Triangulation Approach to Operational Testing (Abstract)
Humans are not produced in quality-controlled assembly lines, and we typically are much more variable than the mechanical systems we employ. This mismatch means that when characterizing the effectiveness of a system, the system must be considered in the context of its users. Accurate measurement is critical to this endeavor, yet while human variability is large, effort to reduce measurement error of those humans is relatively small. The following talk discusses the importance of using multiple measurement methods—triangulation—to reduce error and increase confidence when characterizing the quality of HSI. A case study from an operational test of an attack helicopter demonstrates how triangulation enables more actionable recommendations. |
Daniel Porter Research Staff Member IDA |
Breakout | Materials | 2018 |
|
Webinar A HellerVVA Problem: The Catch-22 for Simulated Testing of Fully Autonomous Systems (Abstract)
In order to verify, validate, and accredit (VV&A) a simulation environment for testing the performance of an autonomous system, testers must examine more than just sensor physics—they must also provide evidence that the environmental features which drive system decision-making are represented at all. When systems are black boxes, though, these features are fundamentally unknown, necessitating that we first test to discover them. An umbrella of techniques known as “model induction” provides approaches for demystifying black boxes and obtaining models of their decision-making, but the current state of the art assumes testers can input large quantities of operationally relevant data. When systems only make passive perceptual decisions or operate in purely virtual environments, these assumptions are typically met. However, this will not be the case for black-box, fully autonomous systems. These systems can make decisions about the information they acquire—which cannot be changed in pre-recorded passive inputs—and a major reason to obtain a decision model is to VV&A the simulation environment—preventing the valid use of a virtual environment to obtain a model. Furthermore, the current consensus is that simulation will be used to get limited safety releases for live testing. This creates a catch-22: we need data to obtain the decision model, but we need the decision model to validly obtain the data. In this talk, we provide a brief overview of this challenge and possible solutions. |
Daniel Porter Research Staff Member IDA |
Webinar | Session Recording |
Recording | 2020 |
Panel Finding the Human in the Loop: Evaluating Warfighters’ Ability to Employ AI Capabilities (Abstract)
Although artificial intelligence may take over tasks traditionally performed by humans or power systems that act autonomously, humans will still interact with these systems in some way. The need to ensure these interactions are fluid and effective does not disappear—if anything, it only grows with AI-enabled capabilities. These technologies introduce multiple new hazards for achieving high-quality human-system integration. Testers will need to evaluate both traditional HSI issues and these novel concerns in order to establish the trustworthiness of a system for activity in the field, and we will need to develop new T&E methods to do this. In this session, we will hear how three national security organizations are preparing for these HSI challenges, followed by a broader panel discussion on which of these problems is most pressing and which is most promising for DoD research investments. |
Dan Porter Research Staff Member Institute for Defense Analyses |
Panel | Session Recording |
Recording | 2021 |
Keynote Consensus Building |
Antonio Possolo NIST Fellow, Chief Statistician National Institute of Standards and Technology (bio)
Antonio Possolo holds a Ph.D. in statistics from Yale University, and has been practicing the statistical arts for more than 35 years, in industry (General Electric, Boeing), academia (Princeton University, University of Washington in Seattle, Classical University of Lisboa), and government. He is committed to the development and application of probabilistic and statistical methods that contribute to advances in science and technology, and in particular to measurement science. |
Keynote | Materials | 2018 |