Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout Bayesian Adaptive Design for Conformance Testing with Bernoulli Trials (Abstract)
Co-authors: Adam L. Pintar, Blaza Toman, and Dennis Leber. A task of the Domestic Nuclear Detection Office (DNDO) is the evaluation of radiation and nuclear (rad/nuc) detection systems used to detect and identify illicit rad/nuc materials. To obtain estimated system performance measures, such as probability of detection, and to determine system acceptability, the DNDO sometimes conducts large-scale field tests of these systems at great cost. Typically, non-adaptive designs are employed in which each rad/nuc test source is presented to each system under test a predetermined and fixed number of times. This approach can lead to unnecessary cost if the system is clearly acceptable or unacceptable. In this presentation, an adaptive design with Bayesian decision-theoretic foundations is discussed as an alternative to, and contrasted with, the more common single-stage design. Although the basis of the method is Bayesian decision theory, designs may be tuned to have desirable type I and II error rates. While the focus of the presentation is a specific DNDO example, the method is widely applicable. Further, since constructing the designs is somewhat compute-intensive, software in the form of an R package will be shown and is available upon request. |
Adam Pintar NIST |
Breakout | Materials | 2016 |
|
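The adaptive design above combines Bernoulli-trial posterior updates with a decision rule applied after each trial. The authors' R package is not reproduced here; the following is a minimal illustrative sketch in R, assuming a Beta(1, 1) prior, a required detection probability of 0.85, and 0.95 decision thresholds, all of which are placeholders.

```r
# Illustrative sketch (not the authors' package): a Beta-Binomial adaptive
# stopping rule for Bernoulli trials. Prior and thresholds are assumptions.
adaptive_bernoulli_test <- function(outcomes, p_req = 0.85,
                                    accept_prob = 0.95, reject_prob = 0.95,
                                    prior_a = 1, prior_b = 1) {
  a <- prior_a
  b <- prior_b
  for (i in seq_along(outcomes)) {
    a <- a + outcomes[i]               # successes (detections)
    b <- b + (1 - outcomes[i])         # failures (missed detections)
    p_above <- 1 - pbeta(p_req, a, b)  # posterior P(p > p_req)
    if (p_above >= accept_prob) {
      return(list(decision = "accept", trials = i, posterior = c(a, b)))
    }
    if (1 - p_above >= reject_prob) {
      return(list(decision = "reject", trials = i, posterior = c(a, b)))
    }
  }
  list(decision = "continue testing", trials = length(outcomes), posterior = c(a, b))
}

set.seed(1)
adaptive_bernoulli_test(rbinom(200, 1, 0.95))
```

In practice the thresholds would be tuned by simulating many such sequences under the acceptable and unacceptable hypotheses, which is how a design of this kind can be given approximate type I and II error rates.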
Short Course Bayesian Analysis (Abstract)
This course will cover the basics of the Bayesian approach to practical and coherent statistical inference. Particular attention will be paid to computational aspects, including MCMC. Examples and practical hands-on exercises will run the gamut from toy illustrations to real-world data analyses from all areas of science, with R implementations and coaching provided. The course closely follows P.D. Hoff’s “A First Course in Bayesian Statistical Methods” (Springer, 2009). Some examples are borrowed from two other texts, which are nice references to have: J. Albert’s “Bayesian Computation with R” (Springer, 2nd ed., 2009) and A. Gelman, J.B. Carlin, H.S. Stern, D. Dunson, A. Vehtari, and D.B. Rubin’s “Bayesian Data Analysis” (3rd ed., 2013). |
Robert Gramacy Virginia Tech |
Short Course | Materials | 2019 |
|
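Since the course emphasizes computation and MCMC in R, a small self-contained example may help set expectations. The sketch below is illustrative only, not course material: a random-walk Metropolis sampler for a normal mean with an assumed Normal(0, 10²) prior and simulated data.

```r
# Illustrative random-walk Metropolis sampler for the mean of normal data
# with known sigma; prior, data, and proposal scale are assumptions.
set.seed(42)
y <- rnorm(30, mean = 2, sd = 1)               # simulated data
sigma <- 1                                     # assumed known
log_post <- function(mu) {
  sum(dnorm(y, mu, sigma, log = TRUE)) + dnorm(mu, 0, 10, log = TRUE)
}

n_iter <- 5000
mu_draws <- numeric(n_iter)
mu_cur <- 0
for (i in seq_len(n_iter)) {
  mu_prop <- mu_cur + rnorm(1, 0, 0.5)         # random-walk proposal
  if (log(runif(1)) < log_post(mu_prop) - log_post(mu_cur)) {
    mu_cur <- mu_prop                          # accept the proposal
  }
  mu_draws[i] <- mu_cur
}
mean(mu_draws[-(1:1000)])                      # posterior mean after burn-in
quantile(mu_draws[-(1:1000)], c(0.025, 0.975)) # 95% credible interval
```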
Contributed Bayesian Calibration and Uncertainty Analysis: A Case Study Using a 2-D CFD Turbulence Model (Abstract)
The growing use of simulations in the engineering design process promises to reduce the need for extensive physical testing, decreasing both development time and cost. However, as mathematician and statistician George E. P. Box said, “Essentially, all models are wrong, but some are useful.” There are many factors that determine simulation or, more broadly, model accuracy. These factors can be condensed into noise, bias, parameter uncertainty, and model form uncertainty. To counter these effects and ensure that models faithfully match reality to the extent required, simulation models must be calibrated to physical measurements. Further, the models must be validated, and their accuracy must be quantified before they can be relied on in lieu of physical testing. Bayesian calibration provides a solution for both requirements: it optimizes tuning of model parameters to improve simulation accuracy, and estimates any remaining discrepancy, which is useful for model diagnosis and validation. Also, because model discrepancy is assumed to exist in this framework, it enables robust calibration even for inaccurate models. In this paper, we present a case study to investigate the potential benefits of using Bayesian calibration, sensitivity analyses, and Monte Carlo analyses for model improvement and validation. We will calibrate a 7-parameter k-𝜎 CFD turbulence model simulated in COMSOL Multiphysics®. The model predicts the coefficients of lift and drag for an airfoil defined using a 6049-series airfoil parameterization from the National Advisory Committee for Aeronautics (NACA). We will calibrate model predictions using publicly available wind tunnel data from the University of Illinois Urbana-Champaign (UIUC) database. Bayesian model calibration requires intensive sampling of the simulation model to determine the most likely distribution of calibration parameters, which can be a large computational burden. We greatly reduce this burden by following a surrogate modeling approach, using Gaussian process emulators to mimic the CFD simulation. We train the emulator by sampling the simulation space using a Latin Hypercube Design (LHD) as the Design of Experiments (DOE), and assess the accuracy of the emulator using leave-one-out Cross-Validation (CV) error. The Bayesian calibration framework involves calculating the discrepancy between simulation results and physical test results. We also use Gaussian process emulators to model this discrepancy. The discrepancy emulator will be used as a tool for model validation; characteristic trends in residual errors after calibration can indicate underlying model form errors which were not addressed via tuning the model calibration parameters. In this way, we will separate and quantify model form uncertainty and parameter uncertainty. The results of a Bayesian calibration include a posterior distribution of calibration parameter values. These distributions will be sampled using Monte Carlo methods to generate model predictions, whereby new predictions have a distribution of values that reflects the uncertainty in the tuned calibration parameters. The resulting output distributions will be compared against physical data and the uncalibrated model to assess the effects of the calibration and discrepancy model. We will also perform global, variance-based sensitivity analysis on the uncalibrated model and the calibrated models, and investigate any changes in the sensitivity indices from uncalibrated to calibrated. |
Peter Chien | Contributed | 2018 |
||
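The surrogate-modeling workflow described in this abstract (space-filling design, Gaussian process emulator, leave-one-out cross-validation) can be sketched in a few lines of base R. The toy simulator, fixed kernel hyperparameters, and design size below are assumptions for illustration, not the authors' COMSOL setup.

```r
# Minimal Gaussian-process emulator sketch: fit to a toy "simulator" on a
# space-filling sample and check accuracy with leave-one-out CV.
# Kernel hyperparameters (len, sf, nugget) are fixed by assumption here;
# a real calibration would estimate them (e.g., by maximum likelihood).
set.seed(7)
simulator <- function(x) sin(2 * pi * x[1]) + 0.5 * x[2]^2   # stand-in for CFD

n <- 25
X <- cbind(runif(n), runif(n))   # crude space-filling sample (an LHD in practice)
y <- apply(X, 1, simulator)

sq_exp_kernel <- function(A, B, len = 0.3, sf = 1) {
  D <- as.matrix(dist(rbind(A, B)))[seq_len(nrow(A)), nrow(A) + seq_len(nrow(B))]
  sf^2 * exp(-0.5 * (D / len)^2)
}

gp_predict <- function(X, y, Xnew, nugget = 1e-6) {
  K  <- sq_exp_kernel(X, X) + diag(nugget, nrow(X))
  Ks <- sq_exp_kernel(Xnew, X)
  as.vector(Ks %*% solve(K, y))
}

# Leave-one-out cross-validation error of the emulator
loo_pred <- sapply(seq_len(n), function(i) {
  gp_predict(X[-i, , drop = FALSE], y[-i], X[i, , drop = FALSE])
})
sqrt(mean((loo_pred - y)^2))     # LOO RMSE as an emulator accuracy check
```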
Breakout Bayesian Component Reliability Estimation: F-35 Case Study (Abstract)
A challenging aspect of a system reliability assessment is integrating multiple sources of information, including component, subsystem, and full-system data, previous test data, or subject matter expert opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability where testing is limited and only a small number of failures (or none at all) are observed. The F-35 is DoD’s largest program; approximately one-third of the operations and sustainment cost is attributed to the cost of spare parts and the removal, replacement, and repair of components. The failure rate of those components is the driving parameter for a significant portion of the sustainment cost, and yet for many of these components, poor estimates of the failure rate exist. For many programs, the contractor produces estimates of component failure rates, based on engineering analysis and legacy systems with similar parts. While these are useful, the actual removal rates can provide a more accurate estimate of the removal and replacement rates the program expects to experience in future years. In this presentation, we show how we applied a Bayesian analysis to combine the engineering reliability estimates with the actual failure data to overcome the problems of cases where few data exist. Our technique is broadly applicable to any program where multiple sources of reliability information need to be combined for the best estimation of component failure rates and ultimately sustainment costs. |
V. Bram Lillard & Rebecca Medlin | Breakout |
| 2019 |
|
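One common way to combine an engineering failure-rate estimate with observed field data is a conjugate Gamma-Poisson update, sketched below. The prior parameters, flight hours, and failure counts are hypothetical; the presentation's actual models and data are not reproduced here.

```r
# Hypothetical sketch: encode a contractor's engineering estimate of a
# component failure rate as a Gamma prior, then update with observed
# failures over accumulated flight hours (Poisson likelihood).
eng_rate <- 1 / 2000             # assumed engineering estimate: 1 failure per 2000 hours
prior_n  <- 500                  # assumed "effective hours" of prior credibility
prior_a  <- eng_rate * prior_n   # Gamma shape
prior_b  <- prior_n              # Gamma rate (per hour)

obs_failures <- 9                # hypothetical observed removals
obs_hours    <- 12000            # hypothetical accumulated flight hours

post_a <- prior_a + obs_failures
post_b <- prior_b + obs_hours

post_mean <- post_a / post_b                             # posterior mean failure rate
post_ci   <- qgamma(c(0.025, 0.975), post_a, rate = post_b)
c(mean = post_mean, lower = post_ci[1], upper = post_ci[2])
```

The "effective hours" of the prior controls how strongly the engineering estimate is weighted relative to the observed data, which is the key judgment in this kind of combination.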
Tutorial Bayesian Data Analysis in R/STAN (Abstract)
In an era of reduced budgets and limited testing, verifying that requirements have been met in a single test period can be challenging, particularly using traditional analysis methods that ignore all available information. The Bayesian paradigm is tailor made for these situations, allowing for the combination of multiple sources of data and resulting in more robust inference and uncertainty quantification. Consequently, Bayesian analyses are becoming increasingly popular in T&E. This tutorial briefly introduces the basic concepts of Bayesian Statistics, with implementation details illustrated in R through two case studies: reliability for the Core Mission functional area of the Littoral Combat Ship (LCS) and performance curves for a chemical detector in the Common Analytical Laboratory System (CALS) with different agents and matrices. Examples are also presented using RStan, a high-performance open-source software for Bayesian inference on multi-level models. |
Kassandra Fronczyk IDA |
Tutorial | Materials | 2016 |
|
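As a flavor of the RStan workflow the tutorial uses, the snippet below fits a simple binomial reliability model with a Beta(1, 1) prior. The data values are placeholders, not the LCS or CALS case-study data, and running it requires the rstan package.

```r
# Minimal RStan sketch (placeholder data, not the LCS/CALS case studies):
# estimate a success probability from binomial trials.
library(rstan)

stan_code <- "
data {
  int<lower=0> n;            // number of trials
  int<lower=0> y;            // number of successes
}
parameters {
  real<lower=0, upper=1> p;  // success probability
}
model {
  p ~ beta(1, 1);
  y ~ binomial(n, p);
}
"

fit <- stan(model_code = stan_code,
            data = list(n = 40, y = 34),   # hypothetical test results
            iter = 2000, chains = 4, refresh = 0)
print(fit, pars = "p")
```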
Tutorial Bayesian Data Analysis in R/STAN (Abstract)
In an era of reduced budgets and limited testing, verifying that requirements have been met in a single test period can be challenging, particularly using traditional analysis methods that ignore all available information. The Bayesian paradigm is tailor made for these situations, allowing for the combination of multiple sources of data and resulting in more robust inference and uncertainty quantification. Consequently, Bayesian analyses are becoming increasingly popular in T&E. This tutorial briefly introduces the basic concepts of Bayesian Statistics, with implementation details illustrated in R through two case studies: reliability for the Core Mission functional area of the Littoral Combat Ship (LCS) and performance curves for a chemical detector in the Common Analytical Laboratory System (CALS) with different agents and matrices. Examples are also presented using RStan, a high-performance open-source software for Bayesian inference on multi-level models. |
James Brownlow U.S. Air Force 812TSS/ENT |
Tutorial | Materials | 2016 |
|
Bayesian Estimation for Covariate Defect Detection Model Based on Discrete Cox Proportiona (Abstract)
Traditional methods to assess software characterize the defect detection process as a function of testing time or effort to quantify failure intensity and reliability. More recent innovations include models incorporating covariates that explain defect detection in terms of underlying test activities. These covariate models are elegant and only introduce a single additional parameter per testing activity. However, the model forms typically exhibit a high degree of non-linearity. Hence, stable and efficient model fitting methods are needed to enable widespread use by the software community, which often lacks mathematical expertise. To overcome this limitation, this poster presents Bayesian estimation methods for covariate models, including the specification of informed priors as well as confidence intervals for the mean value function and failure intensity, which often serves as a metric of software stability. The proposed approach is compared to traditional alternative such as maximum likelihood estimation. Our results indicate that Bayesian methods with informed priors converge most quickly and achieve the best model fits. Incorporating these methods into tools should therefore encourage widespread use of the models to quantitatively assess software. |
Priscila Silva Graduate Student University of Massachusetts Dartmouth (bio)
Priscila Silva is a MS student in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth (UMassD). She received her BS (2017) in Electrical Engineering from the Federal University of Ouro Preto, Brazil. |
Session Recording |
Recording | 2022 |
|
Breakout Bayesian Estimation of Reliability Growth |
Jim Brownlow U.S. Air Force 812TSS/ENT |
Breakout | Materials | 2016 |
|
Breakout Behavioral Analytics: Paradigms and Performance Tools of Engagement in System Cybersecurity (Abstract)
The application opportunities for behavioral analytics in the cybersecurity space are based upon simple realities. 1. The great majority of breaches across all cybersecurity venues are due to human choices and human error. 2. With communication and information technologies making for rapid availability of data, as well as behavioral strategies of bad actors getting cleverer, there is a need for expanded perspectives in cybersecurity prevention. 3. Internally-focused paradigms must now be explored that place endogenous protection from security threats as an important focus and integral dimension of cybersecurity prevention. The development of cybersecurity monitoring metrics and tools as well as the creation of intrusion prevention standards and policies should always include an understanding of the underlying drivers of human behavior. As temptation follows available paths, cyber-attacks follow technology, business models, and behavioral habits. The human element will always be the most significant part in the anatomy of any final decision. Choice options – from input, to judgement, to prediction, to action – need to be better understood for their relevance to cybersecurity work. Behavioral Performance Indexes harness data about aggregate human participation in an active system, helping to capture some of the detail and nuances of this critically important dimension of cybersecurity. |
Robert Gough | Breakout |
| 2019 |
|
Breakout Big Data, Big Think (Abstract)
The NASA Big Data, Big Think team jump-starts coordination, strategy, and progress for NASA applications of Big Data Analytics techniques, fosters collaboration and teamwork among centers and improves agency-wide understanding of Big Data research techniques & technologies and their application to NASA mission domains. The effort brings the Agency’s Big Data community together and helps define near term projects and leverages expertise throughout the agency. This presentation will share examples of Big Data activities from the Agency and discuss knowledge areas and experiences, including data management, data analytics and visualization. |
Robert Beil NASA |
Breakout | Materials | 2017 |
|
Breakout Blast Noise Event Classification from a Spectrogram (Abstract)
Spectrograms (i.e., squared magnitude of short-time Fourier transform) are commonly used as features to classify audio signals in the same way that social media companies (e.g., Google, Facebook, Yahoo) use images to classify or automatically tag people in photos. However, a serious problem arises when using spectrograms to classify acoustic signals, in that the user must choose the input parameters (hyperparameters), and such choices can have a drastic effect on the accuracy of the resulting classifier. Further, considering all possible combinations of the hyperparameters is a computationally intractable problem. In this study, we simplify the problem to make it computationally tractable, explore the utility of response surface methods for sampling the hyperparameter space, and find that response surface methods are a computationally efficient means of identifying the hyperparameter combinations that are likely to give the best classification results. |
Edward Nykaza Army Engineering Research and Development Center, Construction Engineering Research Laboratory |
Breakout | Materials | 2017 |
|
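The abstract's central idea, treating spectrogram hyperparameters as design factors and fitting a response surface to classifier accuracy, can be sketched as follows. The accuracy function here is simulated for illustration; in the study it would be the cross-validated accuracy of a classifier trained on spectrograms computed with the given settings.

```r
# Sketch of a response-surface search over spectrogram hyperparameters.
# The "accuracy" function is simulated; in practice it would come from
# training and scoring a classifier at each hyperparameter setting.
set.seed(3)
accuracy <- function(win_ms, overlap) {
  0.9 - 0.002 * (win_ms - 60)^2 / 100 - 0.3 * (overlap - 0.6)^2 + rnorm(1, 0, 0.01)
}

# Small factorial design over two hyperparameters: window length and overlap
design <- expand.grid(win_ms = c(20, 60, 100), overlap = c(0.25, 0.5, 0.75))
design$acc <- mapply(accuracy, design$win_ms, design$overlap)

# Second-order (quadratic) response surface fit
rs_fit <- lm(acc ~ win_ms + overlap + I(win_ms^2) + I(overlap^2) + win_ms:overlap,
             data = design)

# Predict over a fine grid and report the most promising setting
grid <- expand.grid(win_ms = seq(20, 100, by = 5), overlap = seq(0.25, 0.75, by = 0.05))
grid$pred <- predict(rs_fit, newdata = grid)
grid[which.max(grid$pred), ]
```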
Breakout Building A Universal Helicopter Noise Model Using Machine Learning (Abstract)
Helicopters serve a number of useful roles within the community; however, community acceptance of helicopter operations is often limited by the resulting noise. Because the noise characteristics of helicopters depend strongly on the operating condition of the vehicle, effective noise abatement procedures can be developed for a particular helicopter type, but only when the noisy regions of the operating envelope are identified. NASA Langley Research Center—often in collaboration with other US Government agencies, industry, and academia—has conducted noise measurements for a wide variety of helicopter types, from light commercial helicopters to heavy military utility helicopters. While this database is expansive, it covers only a fraction of helicopter types in current commercial and military service and was measured under a limited set of ambient conditions and vehicle configurations. This talk will describe a new “universal” helicopter noise model suitable for planning helicopter noise abatement procedures. Modern machine learning techniques will be combined with the principle of nondimensionalization and applied to NASA’s helicopter noise data in order to develop a model capable of estimating the noisy operating states of any conventional helicopter under any specific ambient conditions and vehicle configurations. |
Eric Greenwood Aeroacoustics Branch |
Breakout | Materials | 2018 |
|
Building Bridges: a Case Study of Assisting a Program from the Outside (Abstract)
STAT practitioners often find themselves outsiders to the programs they assist. This session presents a case study that demonstrates some of the obstacles in communication of capabilities, purpose, and expectations that may arise due to approaching the project externally. Incremental value may open the door to greater collaboration in the future, and this presentation discusses potential solutions to provide greater benefit to testing programs in the face of obstacles that arise from coming from outside the program team. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. CLEARED on 5 Jan 2022. Case Number: 88ABW-2022-0002 |
Anthony Sgambellone Huntington Ingalls Industries (bio)
Dr. Tony Sgambellone is a STAT Expert (Huntington Ingalls Industries contractor) at the Scientific Test and Analysis Techniques (STAT) Center of Excellence (COE) at the Air Force Institute of Technology (AFIT). The STAT COE provides independent STAT consultation to designated acquisition programs and special projects to improve Test & Evaluation (T&E) rigor, effectiveness, and efficiency. Dr. Sgambellone holds a Ph.D. in Statistics, a graduate minor in College and University Teaching, and has a decade of experience spanning the fields of finance, software, and test and development. His current interests include artificial neural networks and the application of machine learning. |
Session Recording |
Recording | 2022 |
|
Webinar Can AI Predict Human Behavior? (Abstract)
Given the rapid increase of novel machine learning applications in cybersecurity and people analytics, there is significant evidence that these tools can give meaningful and actionable insights. Even so, great care must be taken to ensure that automated decision-making tools are deployed in such a way as to mitigate bias in predictions and promote security of user data. In this talk, Dr. Burns will take a deep dive into an open-source data set in the area of people analytics, demonstrating the application of basic machine learning techniques, while discussing limitations and potential pitfalls in using an algorithm to predict human behavior. In the end, he will draw a comparison between predicting a person's behavioral propensity for things such as becoming an insider threat and the way assisted-diagnosis tools are used in medicine to predict the development or recurrence of illnesses. |
Dustin Burns Senior Scientist Exponent (bio)
Dr. Dustin Burns is a Senior Scientist in the Statistical and Data Sciences practice at Exponent, a multidisciplinary scientific and engineering consulting firm dedicated to responding to the world’s most impactful business problems. Combining his background in laboratory experiments with his expertise in data analytics and machine learning, Dr. Burns works across many industries, including security, consumer electronics, utilities, and health sciences. He supports clients’ goals to modernize data collection and analytics strategies, extract information from unused data such as images and text, and test and validate existing systems. |
Webinar | Session Recording |
Recording | 2020 |
Breakout Carrier Reliability Model Validation (Abstract)
Model Validation for Simulations of CVN-78 Sortie Generation. As part of the test planning process, IDA is examining flight operations on the Navy’s newest carrier, CVN-78. The analysis uses a model, the IDA Virtual Carrier Model (IVCM), to examine sortie generation rates and whether aircraft can complete missions on time. Before using IVCM, it must be validated. However, CVN-78 has not been delivered to the Navy, and data from actual operations are not available to validate the model. Consequently, we will validate IVCM by comparing it to another model. This is a reasonable approach when a model is used in general analyses such as test planning, but is not acceptable when a model is used in the assessment of system effectiveness and suitability. The presentation examines the use of various statistical tools – the Wilcoxon Rank Sum Test, the Kolmogorov-Smirnov Test, and lognormal regression – to assess whether the two models provide similar results and to quantify the magnitude of any differences. From the analysis, IDA concluded that locations and distribution shapes are consistent, and that the differences between the models are less than 15 percent, which is acceptable for test planning. |
Dean Thomas IDA |
Breakout | 2017 |
||
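A minimal sketch of comparing two models' outputs with the tests named in the abstract; the simulated sortie-time data below are placeholders, not IVCM output.

```r
# Placeholder sketch: compare distributions of a performance measure
# (e.g., sortie turnaround time) produced by two simulation models,
# using the tests mentioned in the abstract. Data are simulated here.
set.seed(11)
model_a <- rlnorm(200, meanlog = log(45), sdlog = 0.25)  # hypothetical model A times
model_b <- rlnorm(200, meanlog = log(48), sdlog = 0.27)  # hypothetical model B times

wilcox.test(model_a, model_b)     # do the central locations differ?
ks.test(model_a, model_b)         # do the distribution shapes differ?

# Lognormal regression framing: a model indicator as the only covariate;
# exponentiating the coefficient estimates the multiplicative difference.
times <- c(model_a, model_b)
model <- factor(rep(c("A", "B"), each = 200))
fit <- lm(log(times) ~ model)
exp(coef(fit))                    # ratio-scale comparison of the two models
```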
Breakout Case Studies for Statistical Engineering Applied to Powered Rotorcraft Wind-Tunnel Tests (Abstract)
Co-Authors: Sean A. Commo, Ph.D., P.E. and Peter A. Parker, Ph.D., P.E. NASA Langley Research Center, Hampton, Virginia, USA Austin D. Overmeyer, Philip E. Tanner, and Preston B. Martin, Ph.D. U.S. Army Research, Development, and Engineering Command, Hampton, Virginia, USA. The application of statistical engineering to helicopter wind-tunnel testing was explored during two powered rotor entries. The U.S. Army Aviation Development Directorate Joint Research Program Office and the NASA Revolutionary Vertical Lift Project performed these tests jointly at the NASA Langley Research Center. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small segment of the overall tests devoted to developing case studies of a statistical engineering approach. Data collected during each entry were used to estimate response surface models characterizing vehicle performance, a novel contribution of statistical engineering applied to powered rotor-wing testing. Additionally, a 16- to 47-times reduction in the number of data points required was estimated when comparing a statistically-engineered approach to a conventional one-factor-at-a-time approach. |
Sean Commo NASA |
Breakout | 2016 |
||
Tutorial Case Study on Applying Sequential Methods in Operational Testing (Abstract)
Sequential methods concern statistical evaluation in which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends on the information acquired during the investigation. Although sequential methods originated in ballistics testing for the Department of Defense (DoD), they are underutilized in the DoD. Expanding the use of sequential methods may save money and reduce test time. In this presentation, we introduce sequential methods, describe their potential uses in operational test and evaluation (OT&E), and present a method for applying them to the test and evaluation of defense systems. We evaluate the proposed method by performing simulation studies and applying the method to a case study. Additionally, we discuss some of the challenges we might encounter when using sequential analysis in OT&E. |
Keyla Pagán-Rivera Research Staff Member IDA (bio)
Dr. Keyla Pagán-Rivera has a Ph.D. in Biostatistics from The University of Iowa and serves as a Research Staff Member in the Operational Evaluation Division at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on training, research and applications of statistical methods. |
Tutorial | Session Recording |
Recording | 2022 |
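One classical sequential procedure that fits the setting described above is Wald's sequential probability ratio test (SPRT) for a success probability; the tutorial's specific method is not reproduced here. The hypotheses and error rates below are assumptions for illustration.

```r
# Illustrative Wald SPRT for Bernoulli outcomes (not the tutorial's exact method).
# H0: p = p0 versus H1: p = p1, with assumed alpha and beta error rates.
sprt_bernoulli <- function(outcomes, p0 = 0.7, p1 = 0.9,
                           alpha = 0.1, beta = 0.1) {
  upper <- log((1 - beta) / alpha)     # cross this boundary: accept H1
  lower <- log(beta / (1 - alpha))     # cross this boundary: accept H0
  llr <- 0
  for (i in seq_along(outcomes)) {
    x <- outcomes[i]
    llr <- llr + x * log(p1 / p0) + (1 - x) * log((1 - p1) / (1 - p0))
    if (llr >= upper) return(list(decision = "accept H1", trials = i))
    if (llr <= lower) return(list(decision = "accept H0", trials = i))
  }
  list(decision = "continue testing", trials = length(outcomes))
}

set.seed(5)
sprt_bernoulli(rbinom(100, 1, 0.88))
```

The appeal in operational testing is that clearly good or clearly bad systems cross a boundary early, so the expected number of trials is smaller than in a fixed-sample test with the same error rates.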
Breakout Cases of Second-Order Split-Plot Designs (Abstract)
The fundamental principles of experiment design are factorization, replication, randomization, and local control of error. In many industries, however, departure from these principles is commonplace. Often in our experiments complete randomization is not feasible because the factor level settings are hard, impractical, or inconvenient to change, or the resources available to execute under homogeneous conditions are limited. These restrictions in randomization lead to split-plot experiments. We are also often interested in fitting second-order models, leading to second-order split-plot experiments. Although response surface methodology has grown tremendously since 1951, alternatives for second-order split-plot designs remain largely unexplored. The literature and textbooks offer limited examples and provide guidelines that are often too general. This deficit of information leaves practitioners ill prepared to face the many roadblocks associated with these types of designs. This presentation provides practical strategies to help practitioners deal with second-order split-plot and, by extension, split-split-plot experiments, including an innovative approach for the construction of a response surface design referred to as the second-order sub-array Cartesian product split-plot design. This new type of design, which is an alternative to other classes of split-plot designs currently in use in defense and industrial applications, is economical, has low prediction variance of the regression coefficients, and has low aliasing between model terms. Based on an assessment using well-accepted key design evaluation criteria, second-order sub-array Cartesian product split-plot designs perform as well as historical designs that have been considered standards up to this point. |
Luis Cortes MITRE |
Breakout | Materials | 2018 |
|
Short Course Categorical Data Analysis (Abstract)
Categorical data is abundant in the 21st century, and its analysis is vital to advance research across many domains. Thus, data-analytic techniques that are tailored for categorical data are an essential part of the practitioner’s toolset. The purpose of this short course is to help attendees develop and sharpen their abilities with these tools. Topics covered in this short course will include logistic regression, ordinal regression, and classification, and methods to assess predictive accuracy of these approaches will be discussed. Data will be analyzed using the R software package, and course content will loosely follow Alan Agresti’s excellent textbook An Introduction to Categorical Data Analysis, Third Edition. |
Christopher Franck Virginia Tech |
Short Course | Materials | 2019 |
|
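A short example of the kind of analysis the course covers: a binary logistic regression fit in R with a simple holdout assessment of predictive accuracy. The simulated data are placeholders, not an example from the course or from Agresti's text.

```r
# Illustrative logistic regression with a holdout accuracy check (simulated data).
set.seed(21)
n <- 400
x1 <- rnorm(n)
x2 <- rbinom(n, 1, 0.5)
p  <- plogis(-0.5 + 1.2 * x1 + 0.8 * x2)   # true success probabilities
y  <- rbinom(n, 1, p)
dat <- data.frame(y, x1, x2)

train <- dat[1:300, ]
test  <- dat[301:400, ]

fit <- glm(y ~ x1 + x2, family = binomial, data = train)
summary(fit)$coefficients

pred_prob  <- predict(fit, newdata = test, type = "response")
pred_class <- as.integer(pred_prob > 0.5)
mean(pred_class == test$y)          # holdout classification accuracy
```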
Short Course Categorical Data Analysis (Abstract)
Categorical data is abundant in the 21st century, and its analysis is vital to advance research across many domains. Thus, data-analytic techniques that are tailored for categorical data are an essential part of the practitioner’s toolset. The purpose of this short course is to help attendees develop and sharpen their abilities with these tools. Topics covered in this short course will include binary and multi-category logistic regression, ordinal regression, and classification, and methods to assess predictive accuracy of these approaches will be discussed. Data will be analyzed using the R software package, and course content will loosely follow Alan Agresti’s excellent textbook “An Introduction to Categorical Data Analysis, Third Edition.” |
Chris Franck Assistant Professor Virginia Tech (bio)
Chris Franck is an Assistant Professor in the Department of Statistics at Virginia Tech. |
Short Course | Materials | 2022 |
|
Breakout Censored Data Analysis for Performance Data (Abstract)
Binomial metrics like probability-to-detect or probability-to-hit typically provide operationally meaningful and easy to interpret test outcomes. However, they are information-poor metrics and extremely expensive to test. The standard power calculations to size a test employ hypothesis tests, which typically result in many tens to hundreds of runs. In addition to being expensive, the test is most likely inadequate for characterizing performance over a variety of conditions due to the inherently large statistical uncertainties associated with binomial metrics. A solution is to convert to a continuous variable, such as miss distance or time-to-detect. The common objection to switching to a continuous variable is that the hit/miss or detect/non-detect binomial information is lost, when the fraction of misses/no-detects is often the most important aspect of characterizing system performance. Furthermore, the new continuous metric appears to no longer be connected to the requirements document, which was stated in terms of a probability. These difficulties can be overcome with the use of censored data analysis. This presentation will explain the concepts and benefits of this approach and will illustrate a simple analysis with data, including power calculations to show the cost savings from employing the methodology. |
Bram Lillard IDA |
Breakout | Materials | 2017 |
|
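The switch from a binary detect/non-detect metric to a censored continuous metric can be illustrated with a lognormal time-to-detect model: runs with no detection are treated as right-censored at the end of the trial window. This sketch uses the survival package with simulated data and an assumed 60-second requirement, neither of which comes from the presentation.

```r
# Sketch of censored-data analysis for time-to-detect (simulated data).
# Non-detects are right-censored at the 60-second end of the trial window;
# the probability of detection within 60 s is recovered from the fitted
# lognormal model rather than estimated as a raw binomial proportion.
library(survival)

set.seed(8)
true_time <- rlnorm(60, meanlog = log(30), sdlog = 0.6)  # latent detect times
window    <- 60                                          # assumed trial length (s)
time      <- pmin(true_time, window)
status    <- as.integer(true_time <= window)             # 1 = detected, 0 = censored

fit <- survreg(Surv(time, status) ~ 1, dist = "lognormal")

meanlog <- coef(fit)[1]
sdlog   <- fit$scale
p_detect_60 <- plnorm(window, meanlog, sdlog)   # P(detect within 60 s)
p_detect_60
```

Because the continuous model uses the miss distances or detection times themselves, the same precision on the derived probability typically requires far fewer runs than a direct binomial estimate, which is the cost argument the abstract makes.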
Breakout Certification by Analysis: A 20-year Vision for Virtual Flight and Engine Testing (Abstract)
Analysis-based means of compliance for airplane and engine certification, commonly known as “Certification by Analysis” (CbA), provides a strong motivation for the development and maturation of current and future flight and engine modeling technology. The most obvious benefit of CbA is streamlined product certification testing programs at lower cost while maintaining equivalent levels of safety. The current state of technologies and processes for analysis is not sufficient to adequately address most aspects of CbA today, and concerted efforts to drastically improve analysis capability are required to fully bring the benefits of CbA to fruition. While the short-term cost and schedule benefits of reduced flight and engine testing are clearly visible, the fidelity of analysis capability required to realize CbA across a much larger percentage of product certification is not yet sufficient. Higher-fidelity analysis can help reduce the product development cycle and avoid costly and unpredictable performance and operability surprises that sometimes happen late in the development cycle. Perhaps the greatest long-term value afforded by CbA is the potential to accelerate the introduction of more aerodynamically and environmentally efficient products to market, benefitting not just manufacturers, but also airlines, passengers, and the environment. A far-reaching vision for CbA has been constructed to offer guidance in developing lofty yet realizable expectations regarding technology development and maturity through stakeholder involvement. This vision is composed of the following four elements: The ability to numerically simulate the integrated system performance and response of full-scale airplane and engine configurations in an accurate, robust, and computationally efficient manner. The development of quantified flight and engine modeling uncertainties to establish appropriate confidence in the use of numerical analysis for certification. The rigorous validation of flight and engine modeling capabilities against full-scale data from critical airplane and engine testing. The use of flight and engine modeling to enable Certification by Simulation. Key technical challenges include the ability to accurately predict airplane and engine performance for a single discipline, the robust and efficient integration of multiple disciplines, and the appropriate modeling of system-level assessment. Current modeling methods lack the capability to adequately model conditions that exist at the edges of the operating envelope where the majority of certification testing generally takes place. Additionally, large-scale engine or airplane multidisciplinary integration has not matured to the level where it can be reliably used to efficiently model the intricate interactions that exist in current or future aerospace products. Logistical concerns center primarily on the future High Performance Computing capability needed to perform the large number of computationally intensive simulations needed for CbA. Complex, time-dependent, multidisciplinary analyses will require a computing capacity increase several orders of magnitude greater than is currently available. Developing methods to ensure credible simulation results is critically important for regulatory acceptance of CbA. Confidence in analysis methodology and solutions is examined so that application validation cases can be properly identified. 
Other means of measuring confidence such as uncertainty quantification and “validation-domain” approaches may increase the credibility and trust in the predictions. Certification by Analysis is a challenging long-term endeavor that will motivate many areas of simulation technology development, while driving the potential to decrease cost, improve safety, and improve airplane and engine efficiency. Requirements to satisfy certification regulations provide a measurable definition for the types of analytical capabilities required for success. There is general optimism that CbA is a goal that can be achieved, and that a significant amount of flight testing can be reduced in the next few decades. |
Timothy Mauery Boeing (bio)
For the past 20 years, Timothy Mauery has been involved in the development of low-speed CFD design processes. In this capacity, he has had the opportunity to interact with users and provide CFD support and training throughout the product development cycle. Prior to moving to the Commercial Airplanes division of The Boeing Company, he worked at the Lockheed Martin Aircraft Center, providing aerodynamic liaison support on a variety of military modification and upgrade programs. At Boeing, he has had the opportunity to support both future products as well as existing programs with CFD analysis and wind tunnel testing. Over the past ten years, he has been closely involved in the development and evaluation of analysis-based certification processes for commercial transport vehicles, for both derivative programs as well as new airplanes. Most recently he was the principal investigator on a NASA research announcement for developing requirements for airplane certification by analysis. Timothy received his bachelor’s degree from Brigham Young University, and his master’s degree from The George Washington University, where he was also a research assistant at NASA-Langley. |
Breakout |
| 2021 |
|
Breakout Challenger Challenge: Pass-Fail Thinking Increases Risk Measurably (Abstract)
Binomial (pass-fail) response metrics are far more commonly used in test, requirements, quality, and engineering than they need to be. In fact, there is even an engineering school of thought that they’re superior to continuous-variable metrics. This is a serious, even dangerous problem in aerospace and other industries: think of the Space Shuttle Challenger accident. There are better ways. This talk will cover some examples of methods available to engineers and statisticians in common statistical software. It will not dig far into the mathematics of the methods, but will walk through where each method might be most useful and some of the pitfalls inherent in their use – including potential sources of misinterpretation and suspicion by your teammates and customers. The talk is geared toward engineers, managers, and professionals in the –ilities who run into frustrations dealing with pass-fail data and thinking. |
Ken Johnson Applied Statistician NASA Engineering and Safety Center |
Breakout | Materials | 2018 |
|
Breakout Challenges in Test and Evaluation of AI: DoD’s Project Maven (Abstract)
The Algorithmic Warfare Cross Functional Team (AWCFT or Project Maven) organizes DoD stakeholders to enhance intelligence support to the warfighter through the use of automation and artificial intelligence. The AWCFT’s objective is to turn the enormous volume of data available to DoD into actionable intelligence and insights at speed. This requires consolidating and adapting existing algorithm-based technologies as well as overseeing the development of new solutions. This brief will describe some of the methodological challenges in test and evaluation that the Maven team is working through to facilitate speedy and agile acquisition of reliable and effective AI / ML capabilities. |
Jane Pinelis | Breakout | 2019 |
||
Breakout Challenges in Verification and Validation of CFD for Industrial Aerospace Applications (Abstract)
Verification and validation represent important steps for appropriate use of CFD codes and it is presently considered the user’s responsibility to ensure that these steps are completed. Inconsistent definitions and use of these terms in aerospace complicate the effort. For industrial-use CFD codes, there are a number of challenges that can further confound these efforts including varying grid topology, non-linearities in the solution, challenges in isolating individual components, and difficulties in finding validation experiments. In this presentation, a number of these challenges will be reviewed with some specific examples that demonstrate why verification is much more involved and challenging than typically implied in numerical method courses, but remains an important exercise. Some of the challenges associated with validation will also be highlighted using a range of different cases, from canonical flow elements to complete aircraft models. Benchmarking is often used to develop confidence in CFD solutions for engineering purposes, but falls short of validation in the absence of being able to predict bounds on the simulation error. The key considerations in performing benchmarking and validation will be highlighted and some current shortcomings in practice will be presented, leading to recommendations for conducting validation exercises. CFD workshops have considerably improved in their application of these practices, but there continues to be need for additional steps. |
Andrew Cary Technical Fellow Boeing Research and Technology (bio)
Andrew Cary is a technical fellow of the Boeing Company in CFD and is the focal for the BCFD solver. In this capacity, he has a strong focus on supporting users of the code across the Boeing enterprise as well as leading the development team. These responsibilities align with his interests in verification, validation, and uncertainty quantification as an approach to ensure reliable results as well as in algorithm development, CFD-based shape optimization, and unsteady fluid dynamics. Since hiring into the CFD team in 1996, he has led CFD application efforts across a full range of Boeing products as well as working in grid generation methods, flow solver algorithms, post-processing approaches, and process automation. These assignments have given him the opportunity to work with teams around the world, both inside and outside Boeing. Andrew has been an active member of the American Institute of Aeronautics and Astronautics, serving in multiple technical committees, including his present role on the CFD Vision 2030 Integration Committee. Andrew has also been an adjunct professor at Washington University since 1999, teaching graduate classes in CFD and fluid dynamics. Andrew received a Ph.D. (97) in Aerospace Engineering from the University of Michigan and a B.S. (92) and M.S. (97) in Aeronautical and Astronautical Engineering from the University of Illinois Urbana-Champaign. |
Breakout |
| 2021 |