Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Webinar Connecting Software Reliability Growth Models to Software Defect Tracking (Abstract)
Co-Author: Melanie Luperon. Most software reliability growth models track only defect discovery. A practical concern, however, is the removal of high-severity defects, yet defect removal is often assumed to occur instantaneously. More recently, several defect removal models have been formulated as differential equations in terms of the number of defects discovered but not yet resolved and the rate of resolution. The limitation of this approach is that it does not take into consideration data contained in a defect tracking database. This talk describes our recent efforts to analyze data from a NASA program. Two methods to model defect resolution are developed, namely (i) distributional and (ii) Markovian approaches. The distributional approach employs the times between defect discovery and resolution to characterize the mean resolution time and derives a software defect resolution model from the corresponding software reliability growth model that tracks defect discovery. The Markovian approach develops a state model from the stages of the software defect lifecycle as well as a transition probability matrix and the distributions for each transition, providing a semi-Markov model. Both the distributional and Markovian approaches employ a censored estimation technique to identify the maximum likelihood estimates, in order to handle the case where some but not all of the defects discovered have been resolved. Furthermore, we apply a hypothesis test to determine whether a first- or second-order Markov chain best characterizes the defect lifecycle. Our results indicate that a first-order Markov chain was sufficient to describe the data considered and that the Markovian approach achieves only modest improvements in predictive accuracy, suggesting that the simpler distributional approach may be sufficient to characterize the software defect resolution process during test. The practical inferences of such models include an estimate of the time required to discover and remove all defects. A short illustrative sketch of the censored-estimation idea follows this entry. |
Lance Fiondella Associate Professor University of Massachusetts (bio)
Lance Fiondella is an associate professor of Electrical and Computer Engineering at the University of Massachusetts Dartmouth. He received his PhD (2012) in Computer Science and Engineering from the University of Connecticut. Dr. Fiondella’s papers have received eleven conference paper awards, including six with his students. His software and system reliability and security research has been funded by the DHS, NASA, Army Research Laboratory, Naval Air Warfare Center, and National Science Foundation, including a CAREER Award. |
Webinar |
Recording | 2020 |
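The censored-estimation idea above can be illustrated compactly. The following is a minimal sketch, not the authors' model: it assumes exponentially distributed resolution times and uses hypothetical data, with open defects entering the likelihood through the survival function.

```r
# Censored maximum likelihood for defect resolution times (illustrative only).
# 'time' = elapsed time since discovery; 'resolved' = 1 if fixed, 0 if still open.
time     <- c(5, 12, 3, 30, 8, 21, 15, 40)
resolved <- c(1,  1, 1,  0, 1,  0,  1,  0)

# Resolved defects contribute the density; open (right-censored) defects
# contribute the survival function.
negloglik <- function(log_rate) {
  rate <- exp(log_rate)
  -sum(resolved * dexp(time, rate, log = TRUE) +
       (1 - resolved) * pexp(time, rate, lower.tail = FALSE, log.p = TRUE))
}

fit <- optim(log(1 / mean(time)), negloglik, method = "BFGS")
exp(fit$par)  # estimated resolution rate; mean resolution time is 1/rate
```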
Breakout Insights, Predictions, and Actions: Descriptive Definitions of Data Science, Machine Learning, and Artificial Intelligence (Abstract)
The terms “Data Science”, “Machine Learning”, and “Artificial Intelligence” have become increasingly common in popular media, professional publications, and even in the language used by DoD leadership. But these terms are often not well understood, and may be used incorrectly and interchangeably. Even a textbook definition of these fields is unlikely to help with the distinction, as many definitions tend to lump everything under the umbrella of computer science or introduce unnecessary buzzwords. Leveraging a framework first proposed by David Robinson, Chief Data Scientist at DataCamp, we forgo the textbook definitions and instead focus on practical distinctions between the work of practitioners in each field, using examples relevant to the test and evaluation community where applicable. |
Andrew Flack Research Staff Member IDA |
Breakout | Materials | 2018 |
Short Course Categorical Data Analysis (Abstract)
Categorical data is abundant in the 21st century, and its analysis is vital to advancing research across many domains. Thus, data-analytic techniques tailored for categorical data are an essential part of the practitioner’s toolset. The purpose of this short course is to help attendees develop and sharpen their abilities with these tools. Topics covered in this short course will include logistic regression, ordinal regression, and classification, and methods to assess the predictive accuracy of these approaches will be discussed. Data will be analyzed using the R software package, and course content will loosely follow Alan Agresti’s excellent textbook An Introduction to Categorical Data Analysis, Third Edition. A short illustrative sketch follows this entry. |
Christopher Franck Virginia Tech |
Short Course | Materials | 2019 |
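As a flavor of the course material, here is a minimal logistic-regression sketch in R with simulated data; the variables are hypothetical and the accuracy check is deliberately simple.

```r
# Logistic regression on simulated binary data, with a crude accuracy check.
set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))  # true model: logit(p) = -0.5 + 1.2x

fit <- glm(y ~ x, family = binomial)
summary(fit)

pred <- as.numeric(fitted(fit) > 0.5)      # classify at a 0.5 threshold
mean(pred == y)                            # in-sample classification accuracy
```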
Breakout Updating R and Reliability Training with Bill Meeker (Abstract)
Since its publication, Statistical Methods for Reliability Data by W. Q. Meeker and L. A. Escobar has been recognized as a foundational resource for analyzing failure-time and survival data. Along with the text, the authors provided an S-Plus software package, called SPLIDA, to help readers apply the methods presented in the text. Today, R is the most popular statistical computing language in the world, having largely supplanted S-Plus. The SMRD package is the result of a multi-year effort to completely rebuild SPLIDA to take advantage of the improved graphics and workflow capabilities available in R. This presentation introduces the SMRD package, outlines the improvements, and shows how the package works seamlessly with the rmarkdown and shiny packages to dramatically speed up your workflow. The presentation concludes with a discussion of the improvements still needed before publishing the package on CRAN. A hedged stand-in example follows this entry. |
Jason Freels AFIT |
Breakout | Materials | 2017 |
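SMRD's own interface is not reproduced here. As a hedged stand-in for the kind of analysis the package supports, the sketch below fits a Weibull model to simulated right-censored failure data using the widely available survival package.

```r
library(survival)

set.seed(1)
t_fail <- rweibull(50, shape = 1.5, scale = 1000)  # latent failure times (hours)
t_obs  <- pmin(t_fail, 800)                        # test truncated at 800 hours
status <- as.numeric(t_fail <= 800)                # 1 = failure observed, 0 = censored

# Parametric Weibull fit in survreg's location-scale parameterization
fit <- survreg(Surv(t_obs, status) ~ 1, dist = "weibull")
summary(fit)
```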
Keynote Wednesday Keynote Speaker II |
Laura Freeman Associate Director, ISL Hume Center for National Security and Technology, Virginia Tech (bio)
Dr. Laura Freeman is an Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. Her focus areas include test design, statistical data analysis, modeling and simulation validation, human-system interactions, reliability analysis, software testing, and cybersecurity testing. Dr. Freeman currently leads a research task for the Chief Management Officer (CMO) aimed at reforming DoD testing. She guides an interdisciplinary team in recommending changes and developing best practices. Reform initiatives include incorporating mission context early in the acquisition lifecycle, integrating all test activities, and improving data management processes. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She served as a liaison with Service technical advisors, General Officers, and members of the Senior Executive Service on key technical issues. She reviewed test strategies, plans, and reports for all systems under DOT&E oversight. During her tenure at IDA, Dr. Freeman has designed tests and conducted statistical analyses for programs of national importance, including weapon systems, missile defense, undersea warfare systems, command and control systems, and most recently the F-35. She prioritizes supporting the analytical community in the DoD workforce. She developed and taught numerous courses on advanced test design and statistical analysis, including two new Defense Acquisition University (DAU) courses on statistical methods. She is a founding organizer of DATAWorks (Defense and Aerospace Test and Analysis Workshop), a workshop designed to share new methods, provide training, and share best practices among NASA, the DoD, and the National Labs. Dr. Freeman is the recipient of the 2017 IDA Goodpaster Award for Excellence in Research and the 2013 International Test and Evaluation Association (ITEA) Junior Achiever Award. She is a member of the American Statistical Association, the American Society for Quality, the International Statistical Engineering Association, and ITEA. She serves on the editorial boards for Quality Engineering, Quality and Reliability Engineering International, and the ITEA Journal. Her areas of statistical expertise include designed experiments, reliability analysis, and industrial statistics. Prior to joining IDA in 2010, Dr. Freeman worked at SAIC providing statistical guidance to the Director, Operational Test and Evaluation. She also consulted with NASA on various projects. In 2008, Dr. Freeman established the Laboratory for Interdisciplinary Statistical Analysis at Virginia Tech and served as its inaugural Director. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on the design and analysis of experiments for reliability data. |
Keynote |
| 2019 |
Keynote Wednesday Lunchtime Keynote Speaker |
Jared Freeman Chief Scientist of Aptima and Chair of the Human Systems Division National Defense Industrial Association (bio)
Jared Freeman, Ph.D., is Chief Scientist of Aptima and Chair of the Human Systems Division of the National Defense Industrial Association. His research and publications address measurement, assessment, and enhancement of human learning, cognition, and performance in technologically complex military environments. |
Keynote |
| 2019 |
Roundtable The Role of the Statistics Profession in the DoD’s Current AI Initiative (Abstract)
In 2019, the DoD unveiled comprehensive strategies related to Artificial Intelligence, Digital Modernization, and Enterprise Data Analytics. Recognizing that data science and analytics are fundamental to these strategies, in October 2020 the DoD issued a comprehensive Data Strategy for national security and defense. For over a hundred years, the statistical sciences have played a pivotal role in our national defense, from quality assurance and reliability analysis of munitions fielded in WWII, to operational analyses defining battlefield force structure and tactics, to helping optimize the engineering design of complex products, to rigorous testing and evaluation of Warfighter systems. In 2015, the American Statistical Association (ASA) recognized in its statement on The Role of Statistics in Data Science that “statistics is foundational to data science… and its use in this emerging field empowers researchers to extract knowledge and obtain better results from Big Data and other analytics projects.” It is clearly recognized that data as information is a key asset to the DoD. The challenge we face is how to transform existing talent to add value where it counts. |
Laura Freeman Research Associate Professor of Statistics and Director of the Intelligent Systems Lab Virginia Tech (bio)
Dr. Laura Freeman is a Research Associate Professor of Statistics and the Director of the Intelligent Systems Lab at the Virginia Tech Hume Center. Her research leverages experimental methods for conducting research that brings together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. She develops new methods for test and evaluation focusing on emerging system technology. She is also the Assistant Dean for Research in the National Capital Region; in that capacity she works to shape research directions and collaborations across the College of Science in the National Capital Region. Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She reviewed test strategies, plans, and reports for all systems under DOT&E oversight. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on the design and analysis of experiments for reliability data. |
Roundtable | 2021 |
|
Breakout Reliability Growth in T&E – Summary of National Research Council’s Committee on National Statistics Report Findings – Tuesday Morning |
Art Fries Research Staff Member IDA |
Breakout | Materials | 2016 |
Breakout Validation and Uncertainty Quantification of Complex Models (Abstract)
Advances in high-performance computing have enabled detailed simulations of real-world physical processes, and these simulations produce large datasets. Even as detailed as they are, these simulations are only approximations of imperfect mathematical models, and furthermore, their outputs depend on inputs that are themselves uncertain. The main goal of a validation and uncertainty quantification methodology is to determine the uncertainty, that is, the relationship between the true value of a quantity of interest and its prediction by the simulation. The value of the computational results is limited unless the uncertainty can be quantified or bounded. Bayesian calibration is a common method for estimating model parameters and quantifying their associated uncertainties; however, calibration becomes more complicated when the data arise from different types of experiments. Using an example from materials science, we employ two types of data and demonstrate how one can obtain a set of material strength models that agree with both data sources. |
Kassie Fronczyk | Breakout |
| 2019 |
Tutorial Bayesian Data Analysis in R/STAN (Abstract)
In an era of reduced budgets and limited testing, verifying that requirements have been met in a single test period can be challenging, particularly using traditional analysis methods that ignore all available information. The Bayesian paradigm is tailor-made for these situations, allowing for the combination of multiple sources of data and resulting in more robust inference and uncertainty quantification. Consequently, Bayesian analyses are becoming increasingly popular in T&E. This tutorial briefly introduces the basic concepts of Bayesian statistics, with implementation details illustrated in R through two case studies: reliability for the Core Mission functional area of the Littoral Combat Ship (LCS) and performance curves for a chemical detector in the Common Analytical Laboratory System (CALS) with different agents and matrices. Examples are also presented using RStan, a high-performance open-source software package for Bayesian inference on multi-level models. A minimal RStan sketch follows this entry. |
Kassandra Fronczyk IDA |
Tutorial | Materials | 2016 |
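For a taste of the RStan workflow the tutorial demonstrates, here is a minimal sketch, assuming a simple binomial reliability model with illustrative numbers; it is not one of the case-study models.

```r
library(rstan)

model_code <- "
data {
  int<lower=0> n;             // number of trials
  int<lower=0> y;             // number of successes
}
parameters {
  real<lower=0, upper=1> p;   // system reliability
}
model {
  p ~ beta(1, 1);             // flat prior
  y ~ binomial(n, p);
}
"

fit <- stan(model_code = model_code, data = list(n = 50, y = 46),
            iter = 2000, chains = 4)
print(fit)  # posterior summary for p
```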
Breakout Collaborative Human AI Red Teaming (Abstract)
The Collaborative Human AI Red Teaming (CHART) project is an effort to develop an AI collaborator that can help human test engineers quickly develop test plans for AI systems. CHART was built around processes developed for cybersecurity red teaming: a goal-focused approach based upon iteratively testing and attacking a system, then updating the tester’s model, to discover novel failure modes not found by traditional T&E processes. Red teaming is traditionally a time-intensive process that requires subject matter experts to study the system they are testing for months in order to develop attack strategies. CHART will accelerate this process by guiding the user through diagramming the AI system under test and drawing upon a pre-established body of knowledge to identify the most probable vulnerabilities. CHART was provided internal seedling funds during FY20 to perform a feasibility study of the technology. During this period the team developed a taxonomy of AI vulnerabilities and an ontology of AI irruptions, irruptions being events (either caused by a malicious actor or arising as unintended consequences) that trigger a vulnerability and lead to an undesirable result. Using this taxonomy we built a threat modeling tool that allows users to diagram their AI system and identifies all the possible irruptions that could occur. This initial demonstration was based around two scenarios: a smartphone-based ECG system for telemedicine and a UAV trained via reinforcement learning to avoid mid-air collisions. In this talk we will first discuss how red teaming differs from adversarial machine learning and traditional test and evaluation. Next, we will provide an overview of how industry is approaching the problem of AI red teaming and how our approach differs. Finally, we will discuss how we developed our taxonomy of AI vulnerabilities, how to apply goal-focused testing to AI systems, and our strategy for automatically generating test plans. |
Galen Mullins Senior AI Researcher Johns Hopkins University Applied Physics Laboratory (bio)
Dr. Galen Mullins is a senior staff scientist in the Robotics Group of the Intelligent Systems branch at the Johns Hopkins Applied Physics Laboratory. His research is focused on developing intelligent testing techniques and adversarial tools for finding the vulnerabilities of AI systems. His recent project work has included the development of new imitation learning frameworks for modeling the behavior of autonomous vehicles, creating algorithms for generating adversarial environments, and developing red teaming procedures for AI systems. He is the secretary for the IEEE/RAS working group on Guidelines for Verification of Autonomous Systems and teaches the Introduction to Robotics course in the Johns Hopkins Engineering for Professionals program. Dr. Mullins received his B.S. degrees in Mechanical Engineering and Mathematics from Carnegie Mellon University in 2007 and joined APL the same year. He earned his M.S. in Applied Physics from Johns Hopkins University in 2010 and his Ph.D. in Mechanical Engineering from the University of Maryland in 2018. His doctoral research focused on developing active learning algorithms for generating adversarial scenarios for autonomous vehicles. |
Breakout |
| 2021 |
Breakout Sample Size and Considerations for Statistical Power (Abstract)
Sample size drives the resources for, and supports the conclusions of, operational test. Power analysis is a common statistical methodology used in planning efforts to justify the number of samples. Power analysis is sensitive to extreme performance (e.g., 0.1% correct responses or 99.999% correct responses) relative to a threshold value, extremes in response-variable variability, numbers of factors and levels, system complexity, and a myriad of other design- and system-specific criteria. This discussion will describe considerations (correlation/aliasing, operational significance, thresholds, etc.) and relationships (design, difference to detect, noise, etc.) associated with power. The contribution of power to design selection or adequacy must often be tempered when significant uncertainty or test resource constraints exist. In these situations, other measures of merit and alternative analytical approaches become at least as important as power in the development of designs that achieve the desired technical adequacy. In conclusion, one must understand what power is, what factors influence the calculation, and when to leverage alternative measures of merit. A short power-calculation sketch follows this entry. |
Nick Garcia AFOTEC |
Breakout | Materials | 2017 |
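The kind of calculation described above can be run directly in base R. A minimal sketch with illustrative numbers, not drawn from any particular program:

```r
# Sample size to detect a drop from a 90% requirement to an assumed 85% rate
power.prop.test(p1 = 0.85, p2 = 0.90, sig.level = 0.05, power = 0.80)

# Sensitivity of power to the assumed difference, holding n = 500 fixed
sapply(c(0.86, 0.88, 0.90, 0.92),
       function(p2) power.prop.test(n = 500, p1 = 0.85, p2 = p2)$power)
```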
Contributed Leveraging Anomaly Detection for Aircraft System Health Data Stability Reporting (Abstract)
Detecting and diagnosing aircraft system health poses a unique challenge as system complexity increases and software is further integrated. Anomaly detection algorithms systematically highlight unusual patterns in large datasets and are a promising methodology for assessing aircraft system health. The F-35A fighter aircraft is driven by complex, integrated subsystems with both software and hardware components. The F-35A operational flight program is the software that manages each subsystem within the aircraft and the flow of required information and support between subsystems. This information and support are critical to the successful operation of many subsystems. For example, the radar system supplies information to the fusion engine, without which the fusion engine would fail. ACC operational testing can be thought of as equivalent to beta testing for operational flight programs. As in other software, many faults result in minimal loss of functionality and often go unnoticed by the user. However, there are times when a software fault can result in catastrophic functionality loss (i.e., subsystem shutdown). It is critical to catch software problems that would result in catastrophic functionality loss before the flight software is fielded to the combat air forces. Subsystem failures and degradations can be categorized and quantified using simple system health data codes (e.g., degrade, fail, healthy). However, because of the integrated nature of the F-35A, a degradation in one subsystem may be caused by another subsystem. The 59th Test and Evaluation Squadron collects autonomous system data, pilot questionnaires, and health report codes for F-35A subsystems. Originally, this information was analyzed using spreadsheet tools (i.e., Microsoft Excel). Using this method, analysts were unable to examine all subsystems or attribute cause for subsystem faults. The 59 TES is developing a new process that leverages anomaly detection algorithms to isolate flights with unusual patterns of subsystem failures and, within those flights, highlight which subsystem faults are correlated with increased subsystem failures. This presentation will compare the performance of several anomaly detection algorithms (e.g., K-means, K-nearest neighbors, support vector machines) using simulated F-35A data. A small K-means sketch follows this entry. |
Kyle Gartrell | Contributed | Materials | 2018 |
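One of the algorithms named in the abstract can be sketched in a few lines. The following is illustrative only, with simulated stand-in features rather than F-35A data: flights far from their nearest K-means cluster center are flagged as anomalous.

```r
set.seed(1)
flights <- matrix(rnorm(200 * 5), ncol = 5)  # 200 flights x 5 fault-count features
flights[1:5, ] <- flights[1:5, ] + 4         # plant a few anomalous flights

km <- kmeans(flights, centers = 3, nstart = 20)

# Anomaly score: distance from each flight to its assigned cluster center
score <- sqrt(rowSums((flights - km$centers[km$cluster, ])^2))
head(order(score, decreasing = TRUE))        # indices of the most unusual flights
```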
Breakout The Future of Engineering at NASA Langley (Abstract)
In May 2016, the NASA Langley Research Center’s Engineering Director stood up a group consisting of employees within the directorate to assess the current state of engineering being done by the organization. The group was chartered to develop ideas, through investigation and benchmarking of other organizations within and outside of NASA, for how engineering should look in the future. This effort would include brainstorming, development of recommendations, and some detailed implementation plans which could be acted upon by the directorate leadership as part of an enduring activity. The group made slow and sporadic progress in several specific, self-selected areas including: training and development; incorporation of non-traditional engineering disciplines; capturing and leveraging historical data and knowledge; revolutionizing project documentation; and more effective use of design reviews. The design review investigations have made significant progress by leveraging lessons learned and techniques gained by collaboration with operations research analysts within the local Lockheed Martin Center for Innovation (the “Lighthouse”) and pairing those techniques with advanced data analysis tools available through the IBM Watson Content Analytics environment. Trials with these new techniques are underway but show promising results for the future of providing objective, quantifiable data from the design review environment – an environment which to this point has remained essentially unchanged for the past 50 years. |
Joe Gasbarre NASA |
Breakout | Materials | 2017 |
Breakout Fast, Unbiased Uncertainty Propagation with Multi-model Monte Carlo (Abstract)
With the rise of machine learning and artificial intelligence, there has been a huge surge in data-driven approaches to solving computational science and engineering problems. In the context of uncertainty propagation, machine learning is often employed to construct efficient surrogate models (i.e., response surfaces) to replace expensive, physics-based simulations. However, relying solely on surrogate models without any recourse to the original high-fidelity simulation produces biased estimators and can yield unreliable or non-physical results. This talk discusses multi-model Monte Carlo methods that combine predictions from fast, low-fidelity models with reliable, high-fidelity simulations to enable efficient and accurate uncertainty propagation. For instance, the low-fidelity models could arise from coarsened discretizations in space/time (e.g., Multilevel Monte Carlo – MLMC) or from general data-driven or reduced-order models (e.g., Multifidelity Monte Carlo – MFMC; Approximate Control Variates – ACV). Given a fixed computational budget and a collection of models of varying cost/accuracy, the goal of these methods is to optimally allocate and combine samples across the models. The talk will also present a NASA-developed open-source Python library that acts as a general multi-model uncertainty propagation capability. The effectiveness of the discussed methods and Python library is demonstrated on a trajectory simulation application. Here, orders-of-magnitude gains in computational speed and accuracy are obtained for predicting the landing location of an umbrella heat shield under significant uncertainties in initial state, atmospheric conditions, etc. A two-model control-variate sketch follows this entry. |
Geoffrey Bomarito Materials Research Engineer NASA Langley Research Center (bio)
Dr. Geoffrey Bomarito is a Materials Research Engineer at NASA Langley Research Center. Before joining NASA in 2014, he earned a PhD in Computational Solid Mechanics from Cornell University. He also holds an MEng from the Massachusetts Institute of Technology and a BS from Cornell University, both in Civil and Environmental Engineering. Dr. Bomarito’s work centers around machine learning and uncertainty quantification as applied to aerospace materials and structures. His current topics of interest are physics informed machine learning, symbolic regression, additive manufacturing, and trajectory simulation. |
Breakout |
| 2021 |
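The control-variate idea behind MFMC/ACV can be sketched with toy models; the functions below are stand-ins for real simulations, and because the weight is estimated from the same paired samples, this simple form is only approximately unbiased.

```r
set.seed(1)
hi <- function(x) sin(x) + 0.1 * x^2   # "expensive" high-fidelity model
lo <- function(x) sin(x)               # "cheap" low-fidelity approximation

x_hi <- runif(50,   0, pi)             # few paired high-fidelity samples
x_lo <- runif(5000, 0, pi)             # many extra low-fidelity samples

alpha <- cov(hi(x_hi), lo(x_hi)) / var(lo(x_hi))  # control-variate weight
est <- mean(hi(x_hi)) + alpha * (mean(lo(x_lo)) - mean(lo(x_hi)))
est  # variance-reduced estimate of E[hi(X)]
```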
Keynote Welcoming & Opening Keynote-Tuesday AM |
Mike Gilmore Director DOT&E |
Keynote | Materials | 2016 |
Contributed Application of Statistical Methods and Designed Experiments to Development of Technical Requirements (Abstract)
The Army relies heavily on the voice of the customer to develop and refine technical requirements for developmental systems, but too often the approach is reactive. The ARDEC (Armament Research, Development & Engineering Center) Statistics Group at Picatinny Arsenal, NJ, working closely with subject matter experts, has been implementing market research and web development techniques and Design of Experiments (DOE) best practices to design and analyze surveys that provide insight into the customer’s perception of utility for various developmental commodities. Quality organizations tend to focus on ensuring products meet technical requirements, with far less emphasis placed on whether or not the specification actually captures customer needs. The employment of techniques and best practices spanning the fields of market research, design of experiments, and web development (choice design, conjoint analysis, contingency analysis, psychometric response scales, stratified random sampling) converges toward a more proactive and risk-mitigating approach to the development of technical and training requirements, and encourages strategic decision-making when faced with the inarticulate nature of human preference. Establishing a hierarchy of customer preference for objective and threshold values of key performance parameters enriches the development process of emerging systems by making the process simultaneously more effective and more efficient. |
Eli Golden U.S. ARMY ARMAMENT RESEARCH, DEVELOPMENT & ENGINEERING CENTER |
Contributed | Materials | 2018 |
Breakout Behavioral Analytics: Paradigms and Performance Tools of Engagement in System Cybersecurity (Abstract)
The application opportunities for behavioral analytics in the cybersecurity space are based upon simple realities: (1) the great majority of breaches across all cybersecurity venues are due to human choices and human error; (2) with communication and information technologies making for rapid availability of data, and the behavioral strategies of bad actors getting cleverer, there is a need for expanded perspectives in cybersecurity prevention; (3) internally focused paradigms must now be explored that make endogenous protection from security threats an important focus and integral dimension of cybersecurity prevention. The development of cybersecurity monitoring metrics and tools, as well as the creation of intrusion prevention standards and policies, should always include an understanding of the underlying drivers of human behavior. As temptation follows available paths, cyber-attacks follow technology, business models, and behavioral habits. The human element will always be the most significant part in the anatomy of any final decision. Choice options – from input, to judgment, to prediction, to action – need to be better understood for their relevance to cybersecurity work. Behavioral Performance Indexes harness data about aggregate human participation in an active system, helping to capture some of the detail and nuances of this critically important dimension of cybersecurity. |
Robert Gough | Breakout |
| 2019 |
Short Course Bayesian Analysis (Abstract)
This course will cover the basics of the Bayesian approach to practical and coherent statistical inference. Particular attention will be paid to computational aspects, including MCMC. Examples and practical hands-on exercises will run the gamut from toy illustrations to real-world data analyses from all areas of science, with R implementations and coaching provided. The course closely follows P. D. Hoff’s “A First Course in Bayesian Statistical Methods” (Springer, 2009). Some examples are borrowed from two other texts that are nice references to have: J. Albert’s “Bayesian Computation with R” (Springer, 2nd ed., 2009) and A. Gelman, J. B. Carlin, H. S. Stern, D. Dunson, A. Vehtari, and D. B. Rubin’s “Bayesian Data Analysis” (3rd ed., 2013). A bare-bones MCMC sketch follows this entry. |
Robert Gramacy Virginia Tech |
Short Course | Materials | 2019 |
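As a flavor of the computational side of the course, here is a bare-bones random-walk Metropolis sampler for a normal mean with toy data; it is a teaching sketch, not material from the course itself.

```r
set.seed(1)
y <- rnorm(20, mean = 1.5, sd = 1)  # toy data

# Log posterior: vague normal prior times normal likelihood (known sd = 1)
log_post <- function(mu) {
  dnorm(mu, 0, 10, log = TRUE) + sum(dnorm(y, mu, 1, log = TRUE))
}

n_iter <- 5000
mu <- numeric(n_iter)
for (i in 2:n_iter) {
  prop <- mu[i - 1] + rnorm(1, 0, 0.5)  # random-walk proposal
  if (log(runif(1)) < log_post(prop) - log_post(mu[i - 1])) {
    mu[i] <- prop                       # accept
  } else {
    mu[i] <- mu[i - 1]                  # reject: stay put
  }
}
mean(mu[-(1:1000)])  # posterior mean after discarding burn-in
```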
Short Course Introduction to Bayesian (Abstract)
This course will cover the basics of the Bayesian approach to practical and coherent statistical inference. Particular attention will be paid to computational aspects, including MCMC. Examples will run the gamut from toy illustrations to real-world data analyses from all areas of science, with R implementations provided. |
Robert Gramacy Associate Professor University of Chicago (bio)
Professor Gramacy is an Associate Professor of Econometrics and Statistics in the Booth School of Business, and a fellow of the Computation Institute at The University of Chicago. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. He specializes in areas of real-data analysis where the ideal modeling apparatus is impractical, or where the current solutions are inefficient and thus skimp on fidelity. |
Short Course | 2016 |
|
Short Course Modern Response Surface Methods & Computer Experiments (Abstract)
This course details statistical techniques at the interface between mathematical modeling via computer simulation, computer model meta-modeling (i.e., emulation/surrogate modeling), calibration of computer models to data from field experiments, and model-based sequential design and optimization under uncertainty (a.k.a. Bayesian optimization). The treatment will include some of the historical methodology in the literature, and canonical examples, but will primarily concentrate on modern statistical methods, computation and implementation, as well as modern applications and data types and sizes. The course will return at several junctures to real-world experiments coming from the physical and engineering sciences, such as studying the aeronautical dynamics of a rocket booster re-entering the atmosphere; modeling the drag on satellites in orbit; designing a hydrological remediation scheme for water sources threatened by underground contaminants; and studying the formation of supernovae via radiative shock hydrodynamics. The course material will emphasize deriving and implementing methods over proving theoretical properties. |
Robert Gramacy Virginia Polytechnic Institute and State University |
Short Course | Materials | 2018 |
Webinar A Practical Introduction To Gaussian Process Regression (Abstract)
Gaussian process regression is ubiquitous in spatial statistics, machine learning, and the surrogate modeling of computer simulation experiments. Fortunately, its prowess as an accurate predictor, along with an appropriate quantification of uncertainty, does not derive from difficult-to-understand methodology and cumbersome implementation. We will cover the basics, and provide a practical tool-set ready to be put to work in diverse applications. The presentation will involve accessible slides authored in Rmarkdown, with reproducible examples spanning bespoke implementation to add-on packages. A compact from-scratch sketch follows this entry. |
Robert “Bobby” Gramacy Virginia Tech (bio)
Robert Gramacy is a Professor of Statistics in the College of Science at Virginia Polytechnic and State University (Virginia Tech). Previously he was an Associate Professor of Econometrics and Statistics at the Booth School of Business, and a fellow of the Computation Institute at The University of Chicago. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. Professor Gramacy is a computational statistician. He specializes in areas of real-data analysis where the ideal modeling apparatus is impractical, or where the current solutions are inefficient and thus skimp on fidelity. Such endeavors often require new models, new methods, and new algorithms. His goal is to be impactful in all three areas while remaining grounded in the needs of a motivating application. His aim is to release general purpose software for consumption by the scientific community at large, not only other statisticians. Professor Gramacy is the primary author on six R packages available on CRAN, two of which (tgp, and monomvn) have won awards from statistical and practitioner communities. |
Webinar |
| 2020 |
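In the spirit of the webinar's bespoke examples, here is a compact from-scratch GP regression sketch with a squared-exponential kernel and fixed, hand-picked hyperparameters (toy data; a real analysis would estimate them).

```r
set.seed(1)
X  <- matrix(seq(0, 2 * pi, length.out = 8))        # training inputs
y  <- sin(X) + rnorm(8, sd = 0.1)                   # noisy observations
XX <- matrix(seq(0, 2 * pi, length.out = 100))      # prediction grid

# Squared-exponential covariance with lengthscale ell
sqexp <- function(A, B, ell = 1)
  exp(-outer(A[, 1], B[, 1], function(a, b) (a - b)^2) / (2 * ell^2))

K  <- sqexp(X, X) + diag(0.1^2, nrow(X))            # add noise variance
Ks <- sqexp(XX, X)
Ki <- solve(K)

mu  <- Ks %*% Ki %*% y                              # predictive mean
Sig <- sqexp(XX, XX) - Ks %*% Ki %*% t(Ks)          # predictive covariance
plot(X, y); lines(XX, mu)                           # data with GP mean overlaid
```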
Breakout Building A Universal Helicopter Noise Model Using Machine Learning (Abstract)
Helicopters serve a number of useful roles within the community; however, community acceptance of helicopter operations is often limited by the resulting noise. Because the noise characteristics of helicopters depend strongly on the operating condition of the vehicle, effective noise abatement procedures can be developed for a particular helicopter type, but only when the noisy regions of the operating envelope are identified. NASA Langley Research Center—often in collaboration with other US Government agencies, industry, and academia—has conducted noise measurements for a wide variety of helicopter types, from light commercial helicopters to heavy military utility helicopters. While this database is expansive, it covers only a fraction of helicopter types in current commercial and military service and was measured under a limited set of ambient conditions and vehicle configurations. This talk will describe a new “universal” helicopter noise model suitable for planning helicopter noise abatement procedures. Modern machine learning techniques will be combined with the principle of nondimensionalization and applied to NASA’s helicopter noise data in order to develop a model capable of estimating the noisy operating states of any conventional helicopter under any specific ambient conditions and vehicle configurations. |
Eric Greenwood Aeroacoustics Branch |
Breakout | Materials | 2018 |
Breakout An Adaptive Approach to Shock Train Detection (Abstract)
Development of new technology always incorporates model testing. This is certainly true for hypersonics, where flight tests are expensive and testing of component- and system-level models has significantly advanced the field. Unfortunately, model tests are often limited in scope, being only approximations of reality and typically only partially covering the range of potential realistic conditions. In this talk, we focus on the problem of real-time detection of the shock train leading edge in high-speed air-breathing engines, such as dual-mode scramjets. Detecting and controlling the shock train leading edge is important to the performance and stability of such engines, and it is a problem that has seen significant model testing on the ground and some flight testing. Often, methods developed for shock train detection are specific to the model used. Thus, they may not generalize well when tested in another facility or in flight, as they typically require a significant amount of prior characterization of the model and flow regime. A successful method for shock train detection needs to be robust to changes in features like isolator geometry, inlet and combustor states, flow regimes, and available sensors. Such data can be difficult or impossible to obtain if the isolator operating regime is large. To this end, we propose an approach for real-time detection of the isolator shock train. Our approach uses real-time pressure measurements to adaptively estimate the shock train position in a data-driven manner. We show that the method works well across different isolator models, placements of pressure transducers, and flow regimes. We believe that a data-driven approach is the way forward for bridging the gap between testing and reality, saving development time and money. A simple illustrative sketch follows this entry. |
Greg Hunt Assistant Professor William & Mary (bio)
Greg is an interdisciplinary researcher who builds scientific tools. He is trained as a statistician, mathematician, and computer scientist. Currently he works on a diverse set of problems in biology, physics, and engineering. |
Breakout |
| 2021 |
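The authors' estimator is not reproduced here; as a loose illustration of detecting a pressure rise from sensor data, the sketch below applies a simple CUSUM rule to a simulated streamwise pressure trace.

```r
set.seed(1)
x <- seq(0, 1, length.out = 200)                 # normalized axial position
p <- ifelse(x < 0.6, 1, 1 + 3 * (x - 0.6)) +     # pressure rise beyond x = 0.6
     rnorm(200, sd = 0.05)

base  <- mean(p[x < 0.2])                        # upstream baseline pressure
cusum <- cumsum(pmax(p - base - 0.15, 0))        # 0.15 = allowance for noise
edge  <- x[which(cusum > 0.5)[1]]                # first sustained rise
edge                                             # estimated leading-edge location
```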
Tutorial Creating Shiny Apps in R for Sharing Automated Statistical Products (Abstract)
Interactive web apps can be built straight from R with the R package Shiny. Shiny apps are becoming more prevalent as a way to automate statistical products and share them with others who do not know R. This tutorial will cover Shiny app syntax and how to create basic Shiny apps. Participants will create basic apps by working through several examples and explore how to change and improve these apps. Participants will leave the session with the tools to create their own complicated applications. Participants will need a computer with R, RStudio, and the shiny R package installed. A minimal app follows this entry. |
Randy Griffiths U.S. Army Evaluation Center |
Tutorial | Materials | 2018 |
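A complete Shiny app of the sort built in the tutorial fits in a dozen lines; this minimal example wires a slider to a histogram.

```r
library(shiny)

ui <- fluidPage(
  titlePanel("Histogram of random draws"),
  sliderInput("n", "Sample size", min = 10, max = 1000, value = 100),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n), main = NULL))
}

shinyApp(ui, server)  # launches the app in the browser
```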