Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout Computing Statistical Tolerance Regions Using the R Package ‘tolerance’ (Abstract)
Statistical tolerance intervals of the form (1−α, P) provide bounds to capture at least a specified proportion P of the sampled population with a given confidence level 1−α. The quantity P is called the content of the tolerance interval and the confidence level 1−α reflects the sampling variability. Statistical tolerance intervals are ubiquitous in regulatory documents, especially regarding design verification and process validation. Examples of such regulations are those published by the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), the International Atomic Energy Agency (IAEA), and the standard 16269-6 of the International Organization for Standardization (ISO). Research and development in the area of statistical tolerance intervals has undoubtedly been guided by the needs and demands of industry experts. Some of the broad applications of tolerance intervals include their use in quality control of drug products, setting process validation acceptance criteria, establishing sample sizes for process validation, assessing biosimilarity, and establishing statistically-based design limits. While tolerance intervals are available for numerous parametric distributions, procedures are also available for regression models, mixed-effects models, and multivariate settings (i.e., tolerance regions). Alternatively, nonparametric procedures can be employed when assumptions of a particular parametric model are not met. Tools for computing such tolerance intervals and regions are a necessity for researchers and practitioners alike. This was the motivation for designing the R package ‘tolerance,’ which not only has the capability of computing a wide range of tolerance intervals and regions for both standard and non-standard settings, but also includes some supplementary visualization tools. This session will provide a high-level introduction to the ‘tolerance’ package and its many features. Relevant data examples will be integrated with the computing demonstration, and specifically designed to engage researchers and practitioners from industry and government. A recently-launched Shiny app corresponding to the package will also be highlighted. |
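As a minimal illustration of the quantity the package computes, the Python sketch below estimates a two-sided (1−α, P) normal tolerance interval using Howe's k-factor approximation on made-up data. The package itself is R (its normtol.int function covers the normal case) and offers exact methods and many more distributions; this is only a language-neutral sketch of the underlying calculation.

```python
# Two-sided (1 - alpha, P) normal tolerance interval via Howe's (1969)
# k-factor approximation; the data below are made up for illustration.
import numpy as np
from scipy import stats

def normal_tol_interval(x, alpha=0.05, P=0.99):
    n = len(x)
    df = n - 1
    z = stats.norm.ppf((1 + P) / 2)
    chi2 = stats.chi2.ppf(alpha, df)              # lower-alpha chi-square quantile
    k = z * np.sqrt(df * (1 + 1 / n) / chi2)      # approximate k-factor
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(0)
x = rng.normal(100, 5, size=50)
print(normal_tol_interval(x))   # bounds capturing 99% of the population
                                # with 95% confidence
```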
Derek Young Associate Professor of Statistics University of Kentucky (bio)
Derek Young received his PhD in Statistics from Penn State University in 2007, where his research focused on computational aspects of novel finite mixture models. He subsequently worked as a Senior Statistician for the Naval Nuclear Propulsion Program (Bettis Lab) for 3.5 years and then as a Research Mathematical Statistician for the US Census Bureau for 3 years. He then joined the faculty of the Department of Statistics at the University of Kentucky in the fall of 2014, where he is currently a tenured Associate Professor. While at the Bettis Lab, he engaged with engineers and nuclear regulators, often regarding the calculation of tolerance regions. While at the Census Bureau, he wrote several methodological and computational papers for applied survey data analysis, many as the sole author. Since arriving at the University of Kentucky, he has further progressed his research agenda in finite mixture modeling, zero-inflated modeling, and tolerance regions. He also has extensive teaching experience spanning numerous undergraduate and graduate Statistics courses, as well as professional development presentations in Statistics. |
Breakout | Session Recording |
2022 |
Convolutional Neural Networks and Semantic Segmentation for Cloud and Ice Detection (Abstract)
Recent research shows the effectiveness of machine learning on image classification and segmentation. Artificial neural networks (ANNs) perform well on image datasets such as the MNIST dataset of handwritten digits; however, when presented with more complex images, ANNs and other simple computer vision algorithms tend to fail. This research uses Convolutional Neural Networks (CNNs) to differentiate between ice and clouds in imagery of the Arctic. Whereas ANNs analyze the problem in one dimension, CNNs identify features using the spatial relationships between the pixels in an image, and exploiting these spatial features yields higher accuracy. Using a CNN named the Cloud-Net Model, we analyze how a CNN performs on satellite images. First, we examine recent research on the Cloud-Net Model’s effectiveness on satellite imagery, specifically Landsat data with four channels: red, green, blue, and infrared. We extend and modify this model to analyze data from the three channels most commonly used by satellites: red, green, and blue. By training on different combinations of these three channels and testing on an entirely different data set, GOES imagery, we gauge the impact of each individual channel on image classification. Selecting GOES images that cover the same geographic locations as the Landsat images and contain both ice and clouds, we test the CNN’s generalizability. Finally, we compare the CNN’s ability to accurately identify clouds and ice in the GOES data versus the Landsat data. |
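As a rough sketch of the channel-subsetting idea (not the actual Cloud-Net architecture or data pipeline), the toy PyTorch snippet below trains a small fully convolutional segmenter on an RGB subset of 4-channel patches; all tensors, shapes, and the tiny network are stand-ins.

```python
# Minimal sketch: train a toy segmenter on a channel subset of 4-channel
# (R, G, B, IR) patches so it can later be applied to 3-channel imagery.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A toy fully convolutional segmentation net (not the Cloud-Net model)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel logit for cloud/ice
        )
    def forward(self, x):
        return self.net(x)

channels = [0, 1, 2]                  # train on RGB only (drop IR)
model = TinySegNet(in_channels=len(channels))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 4, 64, 64)         # stand-in for 4-channel Landsat patches
y = (torch.rand(8, 1, 64, 64) > 0.5).float()  # stand-in binary masks
for _ in range(10):                   # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(x[:, channels]), y)
    loss.backward()
    opt.step()
```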
Prarabdha Ojwaswee Yonzon Cadet United States Military Academy (West Point) (bio)
CDT Prarabdha “Osho” Yonzon is a first-generation Nepalese American raised in Brooklyn Park, Minnesota. He initially enlisted in the Minnesota National Guard in 2015 as an Aviation Operations Specialist and was later accepted into USMAPS in 2017. He is an Applied Statistics and Data Science major at the United States Military Academy. Osho is passionate about his research. He first worked with the West Point Department of Physics to examine impacts on GPS solutions. Later, with the Math Department, he published several articles on modeling groundwater flow and presented them at the AWRA annual conference. Currently, he is working with the West Point Department of Mathematics and Lockheed Martin to create machine learning algorithms to detect objects in images. He plans to attend graduate school for data science and serve as a cyber officer. |
Session Recording |
2022 |
Tutorial Data Integrity For Deep Learning Models (Abstract)
Deep learning models are built from algorithm frameworks that fit parameters over a large set of structured historical examples. Model robustness relies heavily on the accuracy and quality of the input training datasets. This mini-tutorial seeks to explore the practical implications of data quality issues when attempting to build reliable and accurate deep learning models. The tutorial will review the basics of neural networks, model building, and then dive deep into examples and data quality considerations using practical examples. An understanding of data integrity and data quality is pivotal for verification and validation of deep learning models, and this tutorial will provide students with a foundation of this topic. |
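A minimal sketch of the kind of pre-training integrity checks the tutorial motivates, assuming an image dataset indexed by a pandas DataFrame with "path" and "label" columns; the column names and thresholds are illustrative, not from the tutorial.

```python
# Minimal data-integrity audit before training: missing values, duplicate
# examples, invalid labels, and rare classes that threaten model robustness.
import pandas as pd

def audit(df: pd.DataFrame, valid_labels: set) -> pd.DataFrame:
    report = {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_examples": int(df.duplicated(subset=["path"]).sum()),
        "invalid_labels": int((~df["label"].isin(valid_labels)).sum()),
    }
    # Class balance matters for robustness: flag classes under 1% prevalence.
    counts = df["label"].value_counts(normalize=True)
    report["rare_classes"] = counts[counts < 0.01].index.tolist()
    return pd.DataFrame([report])

df = pd.DataFrame({"path": ["a.png", "b.png", "a.png"], "label": [0, 1, 7]})
print(audit(df, valid_labels={0, 1}))
```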
Victoria Gerardi and John Cilli US Army, CCDC Armaments Center |
Tutorial | Materials | 2022 |
Tutorial Data Integrity For Deep Learning Models (Abstract)
Deep learning models are built from algorithm frameworks that fit parameters over a large set of structured historical examples. Model robustness relies heavily on the accuracy and quality of the input training datasets. This mini-tutorial seeks to explore the practical implications of data quality issues when attempting to build reliable and accurate deep learning models. The tutorial will review the basics of neural networks, model building, and then dive deep into examples and data quality considerations using practical examples. An understanding of data integrity and data quality is pivotal for verification and validation of deep learning models, and this tutorial will provide students with a foundation of this topic. |
Roshan Patel Systems Engineer/Data Scientist US Army (bio)
Mr. Roshan Patel is a systems engineer and data scientist working at CCDC Armaments Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master of Science in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. Mr. Patel is the current AI lead for the Systems Engineering Directorate at CCDC Armaments Center. |
Tutorial | 2022 |
Data Science & ML-Enabled Terminal Effects Optimization (Abstract)
Warhead design and performance optimization against a range of targets is a foundational aspect of the Department of the Army’s mission on behalf of the warfighter. The existing procedures used to perform this basic design task do not fully leverage the exponential growth in data science, machine learning, distributed computing, and computational optimization. Although sound in practice and methodology, existing implementations are laborious and computationally expensive, thus limiting the ability to fully explore the trade space of all potentially viable solutions. An additional complicating factor is the fast-paced nature of many Research and Development programs, which require equally fast-paced conceptualization and assessment of warhead designs. By utilizing methods that take advantage of data analytics, the workflow to develop and assess modern warheads will enable earlier insights, discovery through advanced visualization, and optimal integration of multiple engineering domains. Additionally, a framework built on machine learning would allow for the exploitation of past studies and designs to better inform future developments. Combining these approaches will allow for rapid conceptualization and assessment of new and novel warhead designs. US overmatch capability is quickly eroding across many tactical and operational weapon platforms. Traditional incremental improvement approaches are no longer generating appreciable performance improvements to warrant investment. Novel next-generation techniques are required to find efficiencies in designs and leap-forward technologies to maintain US superiority. The proposed approach seeks to shift the existing design mentality to meet this challenge. |
John Cilli Computer Scientist Picatinny Arsenal (bio)
My name is John Cilli. I am a recent graduate of East Stroudsburg University with a bachelor’s in Computer Science, and I have been working at Picatinny within the Systems Analysis Division as a computer scientist for a little over a year now. |
Session Recording |
2022 |
Breakout Deep learning aided inspection of additively manufactured metals (Abstract)
The performance and reliability of additively manufactured (AM) metals is limited by the ubiquitous presence of void- and crack-like defects that form during processing. Many applications require non-destructive evaluation of AM metals to detect potentially critical flaws. To this end, we propose a deep learning approach that can help with the interpretation of inspection reports. Convolutional neural networks (CNNs) are developed to predict the elastic stress fields in images of defect-containing metal microstructures, and therefore directly identify critical defects. A large dataset consisting of the stress response of 100,000 random microstructure images is generated using high-resolution Fast Fourier Transform-based finite element (FFT-FE) calculations, which is then used to train a modified U-Net style CNN model. The trained U-Net model predicted the stress response more accurately than previous CNN architectures, exceeded the accuracy of low-resolution FFT-FE calculations, and evaluated more than 100 times faster than conventional FE techniques. The model was applied to images of real AM microstructures with severe lack-of-fusion defects, and predicted a strong linear increase of maximum stress as a function of pore fraction. This work shows that CNNs can aid the rapid and accurate inspection of defect-containing AM material. |
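To make the image-to-stress idea concrete, here is a heavily simplified PyTorch sketch of training an encoder-decoder to regress stress fields from microstructure images; the toy network stands in for the paper's modified U-Net, and the random tensors stand in for the FFT-FE training data.

```python
# Minimal sketch of image-to-image stress prediction: microstructure in,
# stress field out, trained with a pixelwise regression loss.
import torch
import torch.nn as nn

model = nn.Sequential(                       # toy encoder-decoder, not a real U-Net
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),  # predicted stress field
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 1, 64, 64)                 # stand-in microstructure images
y = torch.rand(4, 1, 64, 64)                 # stand-in FFT-FE stress fields
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)   # pixelwise regression loss
    loss.backward()
    opt.step()
# The per-image maximum of model(x) then flags the critical defect location.
```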
Brendan Croom Postdoctoral Fellow JHU Applied Physics Laboratory (bio)
Dr. Croom joined the Applied Physics Laboratory in 2020 as a Postdoctoral Researcher within the Multifunctional Materials and Nanostructures group. At APL, his work has focused on developing quantitative inspection, analysis, and testing tools to ensure the reliability of additively manufactured metals, which commonly fail due to defects created during processing. This work involves pushing the capabilities of X-ray Computed Tomography imaging techniques in terms of speed and resolution to better resolve defects, and using machine learning to improve defect detection and measurement interpretation. Before joining APL, Dr. Croom was an NRC Postdoctoral Research Fellow in the Materials and Manufacturing Directorate at the Air Force Research Laboratory, where he studied the fiber alignment, defect formation, and fracture behavior of additively manufactured composites. He completed his Ph.D. at the University of Virginia in 2019, where he developed several in situ X-ray Computed Tomography mechanical testing techniques. |
Breakout | Session Recording |
2022 |
Breakout Enabling Enhanced Validation of NDE Computational Models and Simulations (Abstract)
Computer simulations of physical processes are increasingly used in the development, design, deployment, and life-cycle maintenance of many engineering systems [1] [2]. Non-Destructive Evaluation (NDE) and Structural Health Monitoring (SHM) must employ effective methods to inspect the increasingly complex structural and material systems developed for new aerospace systems. Reliably and comprehensively interrogating this multidimensional [3] problem domain from a purely experimental perspective can become cost and time prohibitive. The emerging way to confront these new complexities in a timely and cost-effective manner is to utilize computer simulations. These simulations must be Verified and Validated [4] [5] to assure reliable use for NDE/SHM applications. Beyond the classical use of models in the design of equipment or systems, NDE/SHM is necessarily applied to as-built and as-used equipment. While most structural or CFD models are applied to ascertain performance of as-designed systems, the performance of an NDE/SHM system is necessarily tied to the indications of damage/defects/deviations (collectively, flaws) within as-built and as-used structures and components. Therefore, the models must have sufficient fidelity to determine the influence of these aberrations on the measurements collected during interrogation. To assess the accuracy of these models, the Validation data sets must adequately encompass these flaw states. Because of the extensive parametric spaces that this coverage would entail, this talk proposes an NDE Benchmark Validation Data Repository containing inspection data that covers representative structures and flaws. This data can be reused from project to project, amortizing the cost of performing high-quality Validation testing.
Works Cited
[1] Director, Modeling and Simulation Coordination Office, “Department of Defense Standard Practice: Documentation of Verification, Validation, and Accreditation (VV&A) for Models and Simulations,” Department of Defense, 2008.
[2] Under Secretary of Defense (Acquisition, Technology and Logistics), “DoD Modeling and Simulation (M&S) Verification, Validation, and Accreditation (VV&A),” Department of Defense, 2003.
[3] R. C. Martin, Clean Architecture: A Craftsman’s Guide to Software Structure and Design, Boston: Prentice Hall, 2018.
[4] C. J. Roy and W. L. Oberkampf, “A Complete Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing (Invited),” in 48th AIAA Aerospace Sciences Meeting, Orlando, 2010.
[5] ASME Performance Test Code Committee 60, “Guide for Verification and Validation in Computational Solid Mechanics,” ASME International, New York, 2016. |
William C. Schneck, III Research AST NASA LaRC |
Breakout | Session Recording |
2022 |
Poster Estimating the time of sudden shift in the location or scale of an ergodic-stationary process (Abstract)
Autocorrelated sequences arise in many modern-day industrial applications. In this paper, our focus is on estimating the time of sudden shift in the location or scale of a continuous ergodic-stationary sequence following a genuine signal from a statistical process control chart. Our general approach involves “clipping” the continuous sequence at the median or interquartile range (IQR) to produce a binary sequence, and then modeling the joint mass function for the binary sequence using a Bahadur approximation. We then derive a maximum likelihood estimator for the time of sudden shift in the mean of the binary sequence. Performance comparisons are made between our proposed change point estimator and two other viable alternatives. Although the literature contains existing methods for estimating the time of sudden shift in the mean and/or variance of a continuous process, most are derived under strict independence and distributional assumptions. Such assumptions are often too restrictive, particularly when applications involve Industry 4.0 processes where autocorrelation is prevalent and the distribution of the data is likely unknown. The change point estimation strategy proposed in this work easily incorporates autocorrelation and is distribution-free. Consequently, it is widely applicable to modern-day industrial processes. |
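A minimal sketch of the clip-then-estimate idea on an observed continuous series; for simplicity it treats the clipped binary sequence as independent Bernoulli trials, omitting the Bahadur-approximation adjustment for autocorrelation that the paper develops.

```python
# Clip the series at its median to get a binary sequence, then maximize the
# Bernoulli log-likelihood over candidate change points.
import numpy as np

def shift_mle(x: np.ndarray) -> int:
    b = (x > np.median(x)).astype(float)     # clip at the median -> binary
    n = len(b)
    best_tau, best_ll = 1, -np.inf
    for tau in range(1, n - 1):              # candidate change points
        p1, p2 = b[:tau].mean(), b[tau:].mean()
        ll = 0.0
        for p, seg in ((p1, b[:tau]), (p2, b[tau:])):
            k, m = seg.sum(), len(seg)
            if 0 < p < 1:
                ll += k * np.log(p) + (m - k) * np.log(1 - p)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 50)])
print(shift_mle(x))   # estimated time of the location shift (true value: 100)
```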
Zhi Wang Data Scientist Contractor Bayer Crop Science (bio)
Zhi Wang is currently a data scientist contractor at Bayer Crop Science focused on advancing field operations using modern data analytic tools and methods. Dr. Wang’s research interests include changepoint detection and estimation, statistical process monitoring, business analytics, and geospatial environmental modeling. |
Poster | Session Recording |
2022 |
Everyday Reproducibility (Abstract)
Modern data analysis is typically quite computational. Correspondingly, sharing scientific and statistical work now often means sharing code and data in addition to writing papers and giving talks. This type of code sharing faces several challenges. For example, it is often difficult to take code from one computer and run it on another due to software configuration, version, and dependency issues. Even if the code runs, writing code that is easy to understand or interact with can be difficult. This makes it hard to assess third-party code and its findings, for example, in a review process. In this talk we describe a combination of two computing technologies that help make analyses shareable, interactive, and completely reproducible. These technologies are (1) analysis containerization, which leverages virtualization to fully encapsulate analysis, data, code, and dependencies into an interactive and shareable format, and (2) code notebooks, a literate programming format for interacting with analyses. This talk reviews the problems at a high level and also provides concrete solutions to the challenges faced. In addition to discussing reproducibility and data/code sharing generally, we will touch upon several such issues that arise specifically in the defense and aerospace communities. |
Gregory J. Hunt Assistant Professor William & Mary (bio)
Greg is an Assistant Professor of Mathematics at the College of William & Mary. He is an interdisciplinary researcher who builds scientific tools and is trained as a statistician, mathematician, and computer scientist. Currently he works on a diverse set of problems in high-throughput micro-biology, research reproducibility, hypersonics, and spectroscopy. |
Session Recording |
2022 |
Breakout Experiment Design and Visualization Techniques for an X-59 Low-boom Variability Study (Abstract)
This presentation outlines the design of experiments approach and data visualization techniques for a simulation study of sonic booms from NASA’s X-59 supersonic aircraft. The X-59 will soon be flown over communities across the contiguous USA as it produces a low-loudness sonic boom, or low-boom. Survey data on human perception of low-booms will be collected to support development of potential future commercial supersonic aircraft noise regulatory standards. The macroscopic atmosphere plays a critical role in the loudness of sonic booms. The extensive sonic boom simulation study presented herein was completed to assess climatological, geographical, and seasonal effects on the variability of the X-59’s low-boom loudness and noise exposure region size in order to inform X-59 community test planning. The loudness and extent of the noise exposure region make up the “sonic boom carpet.” Two spatial and temporal resolutions of atmospheric input data to the simulation were investigated. A Fast Flexible Space-Filling Design was used to select the locations across the USA for the two spatial resolutions. Analysis of simulated X-59 low-boom loudness data within a regional subset of the northeast USA was completed using a bootstrap forest to determine the final spatial and temporal resolution of the countrywide simulation study. Atmospheric profiles from NOAA’s Climate Forecast System Version 2 database were used to generate over one million simulated X-59 carpets at the final selected 138 locations across the USA. Effects of aircraft heading, season, geography, and climate zone on low-boom levels and noise exposure region size were analyzed. Models were developed to estimate loudness metrics throughout the USA for X-59 supersonic cruise overflight, and results were visualized on maps to show geographical and seasonal trends. These results inform regulators and mission planners on expected variations in boom levels and carpet extent from atmospheric variations. Understanding potential carpet variability is important when planning community noise surveys using the X-59. |
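For readers unfamiliar with space-filling designs, the snippet below draws a generic Latin hypercube over illustrative latitude/longitude bounds for the contiguous USA. It is not the Fast Flexible Space-Filling Design used in the study (a JMP feature), only a sketch of the flavor of space-filling site selection.

```python
# Generic space-filling site selection: a Latin hypercube scaled to rough
# lat/lon bounds for the contiguous USA (bounds are illustrative).
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=7)
unit = sampler.random(n=138)                   # 138 sites in the unit square
sites = qmc.scale(unit, l_bounds=[25.0, -125.0], u_bounds=[49.0, -67.0])
print(sites[:5])                               # (latitude, longitude) pairs
```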
William J Doebler Research Aerospace Engineer NASA Langley Research Center (bio)
Will Doebler is a research engineer in NASA Langley’s Structural Acoustics Branch. He supports NASA’s Commercial Supersonic Technology project as a member of the Community Test Planning and Execution team for the X-59 low-boom supersonic aircraft. He has an M.S. in Acoustics from Penn State, and a B.A. in Physics from Gustavus Adolphus College in MN. |
Breakout | Session Recording |
2022 |
Exploring the behavior of Bayesian adaptive design of experiments (Abstract)
Physical experiments in the national security arena, including nuclear deterrence, are often expensive and time-consuming, resulting in small sample sizes that make it difficult to achieve desired statistical properties. Bayesian adaptive design of experiments (BADE) is a sequential design-of-experiments approach that updates the test design in real time in order to collect data optimally. BADE recommends ending an experiment early when, with sufficiently high probability, the experiment would have ended in efficacy or futility had the testing run to completion. This is done by updating the Bayesian posterior distribution in near real time using the data already collected and marginalizing over the remaining uncollected data. BADE has seen successes in clinical trials, resulting in quicker and more effective assessments of drug trials while also reducing ethical concerns. In clinical trials, BADE has typically been used only in futility studies rather than efficacy studies, although there has been little debate about this paradigm. BADE has been proposed for testing in the national security space for similar reasons: quicker and cheaper test series. Given the high-consequence nature of the tests performed in this space, a strong understanding of new methods is required before they are deployed. The main contribution of this research was to reproduce results seen in previous studies for different aspects of model performance. A large simulation inspired by a real testing problem at Sandia National Laboratories was performed to understand the behavior of BADE under various scenarios, including shifts to the mean, standard deviation, and distributional family, in addition to the presence of outliers. The results help explain the behavior of BADE under various assumption violations. Using the results of this simulation, combined with previous work related to BADE in this field, it is argued this approach could be used as part of an “evidence package” for deciding to stop testing early due to futility or, with stronger evidence, efficacy. The combination of expert knowledge with statistical quantification provides the stronger evidence necessary for a method in its infancy in a high-consequence, new application area such as national security. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. |
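A minimal sketch of the BADE stopping logic for a binary-outcome test, assuming a Beta(1, 1) prior and illustrative stopping thresholds; the posterior predictive distribution marginalizes over the not-yet-collected trials, mirroring the marginalization described above.

```python
# Posterior predictive probability that a fixed-size test would end in
# "success", computed partway through the test; all numbers are notional.
import numpy as np

def prob_success_if_finished(successes, trials, n_total, n_required,
                             n_draws=100_000, seed=0):
    """P(final successes >= n_required) given the data collected so far."""
    rng = np.random.default_rng(seed)
    remaining = n_total - trials
    p = rng.beta(1 + successes, 1 + trials - successes, size=n_draws)
    future = rng.binomial(remaining, p)          # posterior predictive draws
    return np.mean(successes + future >= n_required)

pp = prob_success_if_finished(successes=14, trials=20, n_total=40, n_required=32)
if pp > 0.95:
    print("stop early: efficacy", pp)
elif pp < 0.05:
    print("stop early: futility", pp)
else:
    print("continue testing", pp)
```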
Daniel Ries Sandia National Laboratories (bio)
Daniel Ries is a Senior Member of the Technical Staff at Sandia National Laboratories in the Statistics and Data Analytics Department. As an applied research statistician, Daniel collaborates with scientists and engineers in fields including nuclear deterrence, nuclear forensics, nuclear non-proliferation, global security, and climate science. His statistical work spans the topics of experimental design, inverse modeling, uncertainty quantification for machine learning and deep learning, spatio-temporal data analysis, and Bayesian methodology. Daniel completed his PhD in statistics at Iowa State University. |
Session Recording |
2022 |
Breakout Forecasting with Machine Learning (Abstract)
The Department of Defense (DoD) has a considerable interest in forecasting key quantities of interest including demand signals, personnel flows, and equipment failure. Many forecasting tools exist to aid in predicting future outcomes, and there are many methods to evaluate the quality and uncertainty in those forecasts. When used appropriately, these methods can facilitate planning and lead to dramatic reductions in costs. This talk explores the application of machine learning algorithms, specifically gradient-boosted tree models, to forecasting and presents some of the various advantages and pitfalls of this approach. We conclude with an example where we use gradient-boosted trees to forecast Air National Guard personnel retention. |
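A minimal sketch of tree-based forecasting with lagged features on a synthetic seasonal series; scikit-learn's gradient boosting stands in for whichever implementation a project actually uses, and the 12-lag window is an arbitrary choice.

```python
# Forecast a monthly series by regressing each value on its previous 12
# values with a gradient-boosted tree model, holding out the final year.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
y = np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 0.2, 120)

lags = 12
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
target = y[lags:]

train = len(target) - 12                     # hold out the final year
model = GradientBoostingRegressor().fit(X[:train], target[:train])
pred = model.predict(X[train:])
print(np.sqrt(np.mean((pred - target[train:]) ** 2)))   # holdout RMSE
```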
Akshay Jain Data Science Fellow IDA (bio)
Akshay earned his Bachelor of Arts in Math, Political Science, and Mathematical Methods in the Social Sciences (MMSS) from Northwestern University. He is currently a Data Science Fellow in the Strategy, Forces, and Resources Division at the Institute for Defense Analyses. |
Breakout | Session Recording |
2022 |
Breakout From Gripe to Flight: Building an End-to-End Picture of DOD Sustainment (Abstract)
The DOD has to maintain readiness across a staggeringly diverse array of modern weapon systems, yet no single person or organization in the DOD has an end-to-end picture of the sustainment system that supports them. This shortcoming can lead to bad decisions when it comes to allocating resources in a funding-constrained environment. The underlying problem is driven by stovepiped databases, a reluctance to share data even internally, and a reliance on tribal knowledge of often cryptic data sources. Notwithstanding these difficulties, we need to create a comprehensive picture of the sustainment system to be able to answer pressing questions from DOD leaders. To that end, we have created a documented and reproducible workflow that shepherds raw data from DOD databases through cleaning and curation steps, and then applies logical rules, filters, and assumptions to transform the raw data into concrete values and useful metrics. This process gives us accurate, up-to-date data that we use to support quick-turn studies, and to rapidly build (and efficiently maintain) a suite of readiness models for a wide range of complex weapon systems. |
Benjamin Ashwell Research Staff Member IDA (bio)
Dr. Benjamin Ashwell has been a Research Staff Member at the Institute for Defense Analyses (IDA) since 2015. A founding member of IDA’s Sustainment Analysis Group, he leads the NAVAIR Sustainment analysis task. This work combines deep data analysis with end-to-end stochastic simulations to tie resource investments to flight line readiness outcomes. Before moving to the Sustainment Group in 2019, Dr. Ashwell spent three years supporting the Director of Operational Test and Evaluation’s (DOT&E’s) analysis of the Navy’s Littoral Combat Ship, specializing in surface warfare and system reliability. Dr. Ashwell received his PhD in Chemistry from Northwestern University in 2015. |
Breakout | Session Recording |
2022 |
Tutorial Introducing git for reproducible research (Abstract)
Version control software manages different versions of files, providing an archive of files, a means to manage multiple versions of a file, and perhaps distribution. The most popular version control program in the computer science community is arguably git, which serves as the backbone for websites such as GitHub, Bitbucket, and others. In this mini-tutorial we introduce the basics of version control in general and git in particular, and explain the role git plays in a reproducible research context. The goal of the course is to get participants started using git. We will create and clone repositories, add and track files in a repository, and manage git branches. We also discuss a few git best practices. |
Curtis Miller Research Staff Member IDA (bio)
Curtis Miller is a Research Staff Member at the Institute for Defense Analyses in the Operational Evaluation Division, where he is a member of the Test Science and Naval Warfare groups. He obtained a PhD in Mathematics at the University of Utah in 2020, where he studied mathematical statistics. He provides statistical expertise to the rest of OED and works primarily on design of experiments and analysis of modeling and simulation data. |
Tutorial | Session Recording |
2022 |
Breakout Kernel Regression, Bernoulli Trial Responses, and Designed Experiments (Abstract)
Boolean responses are common in both tangible and simulation experiments. Well-known approaches for fitting models to Boolean responses include ordinary regression with normal approximations or variance-stabilizing transforms, and logistic regression. Less well known is kernel regression. This session will present properties of kernel regression, its application to Bernoulli trial experiments, and other lessons learned from using kernel regression in the wild. Kernel regression is a non-parametric method, which requires modifications to many analyses, such as the required sample size. Unlike ordinary regression, the experiment design and model solution interact with each other. Consequently, the number of experiment samples needed for a desired modeling accuracy depends on the true state of nature. There has been a trend toward increasingly large simulation sample sizes as computing horsepower has grown. With kernel regression there is a point of diminishing returns on sample size: once a sufficient sample size is reached, an experiment is better off with more data sites than with more samples. Confidence interval accuracy is also dependent on the true state of nature. Parsimonious model tuning is required for accurate confidence intervals. Kernel tuning to build a parsimonious model using cross-validation methods will be illustrated. |
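A minimal sketch of Nadaraya-Watson kernel regression on Bernoulli responses, with a scalar input and a Gaussian kernel; the bandwidth h is the tuning parameter that, per the session, should be chosen parsimoniously (e.g., by cross validation).

```python
# Estimate P(success | x) by kernel-weighted averaging of pass/fail outcomes.
import numpy as np

def kernel_smooth(x_train, y_train, x_eval, h):
    # Gaussian kernel weights between each eval point and each training point
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)     # weighted success probability

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
p_true = 1 / (1 + np.exp(-10 * (x - 0.5)))   # hidden probability curve
y = rng.binomial(1, p_true).astype(float)    # Bernoulli trial responses

grid = np.linspace(0, 1, 5)
print(kernel_smooth(x, y, grid, h=0.1))      # estimated P(success) on grid
```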
John Lipp LM Fellow Lockheed Martin, Systems Engineering (bio)
Dr. John Lipp received his Electrical Engineering PhD from Michigan Technological University in the area of stochastic signal processing. He currently is employed at Lockheed Martin where he holds the position of Fellow. He teaches statistics and probability for engineers, Kalman filtering, design of experiments, and statistical verification and validation. |
Breakout | Session Recording |
2022 |
Breakout Legal, Moral, and Ethical Implications of Machine Learning (Abstract)
Machine learning algorithms can help to distill vast quantities of information to support decision making. However, machine learning also presents unique legal, moral, and ethical concerns – ranging from potential discrimination in personnel applications to misclassifying targets on the battlefield. Building on foundational principles in ethical philosophy, this presentation summarizes key legal, moral, and ethical criteria applicable to machine learning and provides pragmatic considerations and recommendations. |
Alan B. Gelder Research Staff Member IDA (bio)
Alan earned his PhD in Economics from the University of Iowa in 2014 and currently leads the Human Capital Group in the Strategy, Forces, and Resources Division at the Institute for Defense Analyses. He specializes in microeconomics, game theory, experimental and behavioral economics, and machine learning, and his research focuses on personnel attrition and related questions for the DOD. |
Breakout | Session Recording |
2022 |
Breakout Let’s stop talking about “transparency” with regard to AI (Abstract)
For AI-enabled and autonomous systems, issues of safety, security, and mission effectiveness are not separable—the same underlying data and software give rise to interrelated risks in all of these dimensions. Treating them separately leads to considerable unnecessary duplication (and sometimes mutual interference) among the efforts needed to satisfy commanders, operators, and certification authorities of the systems’ dependability. Assurance cases, pioneered within the safety and cybersecurity communities, provide a structured approach to simultaneously verifying all dimensions of system dependability with minimal redundancy of effort. In doing so, they also provide a more concrete and useful framework for system development and explanation of behavior than is generally seen in discussions of “transparency” and “trust” in AI and autonomy. Importantly, trust generally cannot be “built in” to systems, because the nature of the assurance arguments needed for various stakeholders requires iterative identification of evidence structures that cannot be anticipated by developers. |
David Sparrow and David Tate Senior Analyst IDA (bio)
David Tate joined the research staff of IDA’s Cost Analysis and Research Division in 2000. In his 20 years with CARD, he has worked on a wide variety of topics. |
Breakout | Session Recording |
2022 |
Breakout Leveraging Data Science and Cloud Tools to Enable Continuous Reporting (Abstract)
The DoD’s challenge to provide test results at the “Speed of Relevance” has generated many new strategies to accelerate data collection, adjudication, and analysis. As a result, the Air Force Operational Test and Evaluation Center (AFOTEC), in conjunction with the Air Force Chief Data Office’s Visible, Accessible, Understandable, Linked and Trusted Data Platform (VAULT), is developing a Survey Application. This new cloud-based application will be deployable on any AFNET-connected computer or tablet and merges a variety of tools for collection, storage, analytics, and decision-making into one easy-to-use platform. By placing cloud-computing power in the hands of operators and testers, authorized users can view report-quality visuals and statistical analyses the moment a survey is submitted. Because the data is stored in the cloud, demanding computations such as machine learning are run at the data source to provide even more insight into both quantitative and qualitative metrics. The T-7A Red Hawk will be the first operational test (OT) program to utilize the Survey Application. Over 1000 flying and simulator test points have been loaded into the application, with many more coming from developmental test partners. The Survey app development will continue as USAF testing commences. Future efforts will focus on making the Survey Application configurable to other research and test programs to enhance their analytic and reporting capabilities. |
Timothy Dawson Lead Mobility Test Operations Analyst AFOTEC Detachment 5 (bio)
First Lieutenant Timothy Dawson is an operational test analyst assigned to the Air Force Operational Test and Evaluation Center, Detachment 5, at Edwards AFB, Ca. The lieutenant serves as the lead AFOTEC Mobility Test Operations analyst, splitting his work between the T-7A Red Hawk high performance trainer, KC-46A Pegasus tanker, and VC-25B presidential transport. Lieutenant Dawson also serves alongside the 416th Flight Test Squadron as a flight test engineer on the T-38C Talon. Lieutenant Dawson, originally from Olympia, Wa., received his commission as a second lieutenant upon completing ROTC at the University of California, Berkeley in 2019. He served as a student pilot at Vance AFB, Ok., leading data analysis and software development projects before arriving at his current duty location at Edwards. |
Breakout | Session Recording |
2022 |
Breakout M&S approach for quantifying readiness impact of sustainment investment scenarios (Abstract)
Sustainment for weapon systems involves multiple components that influence readiness outcomes through a complex array of interactions. While military leadership can use simple analytical approaches to yield insights into current metrics (e.g., dashboard for top downtime drivers) or historical trends of a given sustainment structure (e.g., correlative studies between stock sizes and backorders), they are inadequate tools for guiding decision-making due to their inability to quantify the impact on readiness. In this talk, we discuss the power of IDA’s end-to-end modeling and simulation (M&S) approach that estimates time-varying readiness outcomes based on real-world data on operations, supply, and maintenance. These models are designed to faithfully emulate fleet operations at the level of individual components and operational units, as well as to incorporate the multi-echelon inventory system used in military sustainment. We showcase a notional example in which our M&S approach produces a set of recommended component-level investments and divestments in wholesale supply that would improve the readiness of a weapon system. We argue for the urgency of increased end-to-end M&S efforts across the Department of Defense to guide the senior leadership in its data-driven decision-making for readiness initiatives. |
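To give a flavor of the end-to-end questions such models answer, here is a deliberately crude Monte Carlo toy relating spare-part stock to fleet availability; every rate and structural choice below is notional and far simpler than the multi-echelon models described above.

```python
# Toy availability simulation: aircraft fly until failure, then wait on a
# spare (fast) or the supply chain (slow). Spares are not replenished in
# this toy; all rates are made-up hours.
import numpy as np

def fleet_availability(n_aircraft=20, spares=3, mtbf=300.0, repair=50.0,
                       resupply=200.0, hours=10_000, seed=3):
    rng = np.random.default_rng(seed)
    up_hours = 0.0
    for _ in range(n_aircraft):
        t, stock = 0.0, spares
        while t < hours:
            up = rng.exponential(mtbf)           # fly until failure
            up_hours += min(up, hours - t)
            t += up
            if t >= hours:
                break
            if stock > 0:
                stock -= 1
                t += rng.exponential(repair)     # swap in a spare
            else:
                t += rng.exponential(resupply)   # wait on the supply chain
    return up_hours / (n_aircraft * hours)

for s in (0, 2, 5):
    print(s, round(fleet_availability(spares=s), 3))  # availability vs stock
```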
Andrew C. Flack, Han G. Yi Research Staff Member IDA (OED) (bio)
Han Yi is a Research Staff Member in the Operational Evaluation division at IDA. His work focuses on weapons system sustainment and readiness modeling. Prior to joining IDA in 2020, he completed his PhD in Communication Sciences and Disorders at The University of Texas at Austin and served as a Postdoctoral Scholar at the University of California, San Francisco. Andrew Flack is a Research Staff Member in the Operational Evaluation division at IDA. His work focuses on weapons system sustainment and readiness modeling. Prior to joining IDA in 2016, Andrew was an analyst at the Defense Threat Reduction Agency (DTRA) studying M&S tools for chemical and biological defense. |
Breakout | Session Recording |
2022 |
Machine Learning for Efficient Fuzzing (Abstract)
A high level of security in software is a necessity in today’s world; the best way to achieve confidence in security is through comprehensive testing. This paper covers the development of a fuzzer that explores the vast input space of a program using machine learning to find the inputs most associated with errors. A formal methods model of the software in question is used to generate and evaluate test sets. Using those test sets, a two-part algorithm is applied: inputs are modified according to their Hamming distance from error-causing inputs, and then a tree-based model learns the relative importance of each variable in causing errors. This architecture was tested against a model of an aircraft’s thrust reverser, with predefined model properties offering a starting test set. From there, the Hamming-distance algorithm and importance model expand upon the original set to offer a more informed set of test cases. This system has great potential for producing efficient and effective test sets and has further applications in verifying the security of software programs and cyber-physical systems, contributing to national security in the cyber domain. |
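A minimal sketch of the two-part loop under invented stand-ins: bit-vector inputs, a toy fault oracle in place of the formal-methods model, one round of Hamming-distance-guided mutation, and a random forest to rank variable importance.

```python
# Part 1: mutate inputs toward nearby error-causing inputs (Hamming distance).
# Part 2: learn which input bits matter most for triggering errors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_bits = 16
has_error = lambda v: v[3] == 1 and v[7] == 0    # toy fault condition

pop = rng.integers(0, 2, (200, n_bits))          # random bit-vector inputs
labels = np.array([has_error(v) for v in pop])

errors = pop[labels]                             # known error-causing inputs
for i, v in enumerate(pop):
    if not labels[i] and len(errors):
        dists = (errors != v).sum(axis=1)        # Hamming distances
        target = errors[dists.argmin()]          # nearest error input
        flip = rng.integers(0, n_bits)
        v[flip] = target[flip]                   # copy one bit from target

new_labels = np.array([has_error(v) for v in pop])
forest = RandomForestClassifier(random_state=0).fit(pop, new_labels)
print(np.argsort(forest.feature_importances_)[::-1][:4])  # top-ranked bits
```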
John Richie Cadet USAFA |
Session Recording | 2022 |
Breakout Machine Learning for Uncertainty Quantification: Trusting the Black Box (Abstract)
Adopting uncertainty quantification (UQ) has become a prerequisite for providing credibility in modeling and simulation (M&S) applications. It is well known, however, that UQ can be computationally prohibitive for problems involving expensive high-fidelity models, since a large number of model evaluations is typically required. A common approach for improving efficiency is to replace the original model with an approximate surrogate model (i.e., metamodel, response surface, etc.) using machine learning that makes predictions in a fraction of the time. While surrogate modeling has been commonplace in the UQ field for over a decade, many practitioners still remain hesitant to rely on “black box” machine learning models over trusted physics-based models (e.g., FEA) for their analyses. This talk discusses the role of machine learning in enabling computational speedup for UQ, including traditional limitations and modern efforts to overcome them. An overview of surrogate modeling and its best practices for effective use is first provided. Then, some emerging methods that aim to unify physics-based and data-based approaches for UQ are introduced, including multi-model Monte Carlo simulation and physics-informed machine learning. The use of both traditional surrogate modeling and these more advanced machine learning methods for UQ are highlighted in the context of applications at NASA, including trajectory simulation and spacesuit certification. |
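A minimal sketch of the surrogate idea, assuming a toy function stands in for an expensive physics model: a handful of affordable runs train a Gaussian process surrogate, which then powers a Monte Carlo sweep that would be prohibitive with the original model.

```python
# Surrogate-based UQ: fit a GP to a few "expensive" runs, then Monte Carlo
# over the cheap surrogate to estimate output statistics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

expensive_model = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(5)
X_train = rng.uniform(-1, 1, (40, 2))            # 40 affordable model runs
y_train = expensive_model(X_train)

gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

X_mc = rng.uniform(-1, 1, (100_000, 2))          # cheap surrogate sweep
y_mc = gp.predict(X_mc)
print("mean %.3f  99th pct %.3f" % (y_mc.mean(), np.quantile(y_mc, 0.99)))
```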
James Warner Computational Scientist NASA Langley Research Center (bio)
Dr. James (Jim) Warner joined NASA Langley Research Center (LaRC) in 2014 as a Research Computer Engineer after receiving his PhD in Computational Solid Mechanics from Cornell University. Previously, he received his B.S. in Mechanical Engineering from SUNY Binghamton University and held temporary research positions at the National Institute of Standards and Technology and Duke University. Dr. Warner is a member of the Durability, Damage Tolerance, and Reliability Branch (DDTRB) at LaRC, where he focuses on developing computationally-efficient approaches for uncertainty quantification for a range of applications including structural health management, additive manufacturing, and trajectory simulation. Additionally, he works to bridge the gap between UQ research and NASA mission impact, helping to transition state-of-the-art methods to solve practical engineering problems. To that end, he is currently involved in efforts to certify the xEMU spacesuit and develop guidance systems for entry, descent, and landing for Mars landing. His other research interests include machine learning, high performance computing, and topology optimization. |
Breakout | Session Recording |
2022 |
Breakout Measuring training efficacy: Structural validation of the Operational Assessment of Training (Abstract)
Effective training of the broad set of users/operators of systems has downstream impacts on usability, workload, and ultimate system performance that are related to mission success. In order to measure training effectiveness, we designed a survey called the Operational Assessment of Training Scale (OATS) in partnership with the Army Test and Evaluation Center (ATEC). Two subscales were designed to assess the degrees to which training covered relevant content for real operations (Relevance subscale) and enabled self-rated ability to interact with systems effectively after training (Efficacy subscale). The full list of 15 items were given to over 700 users/operators across a range of military systems and test events (comprising both developmental and operational testing phases). Systems included vehicles, aircraft, C3 systems, and dismounted squad equipment, among other types. We evaluated reliability of the factor structure across these military samples using confirmatory factor analysis. We confirmed that OATS exhibited a two-factor structure for training relevance and training efficacy. Additionally, a shortened, six-item measure of the OATS with three items per subscale continues to fit observed data well, allowing for quicker assessments of training. We discuss various ways that the OATS can be applied to one-off, multi-day, multi-event, and other types of training events. Additional OATS details and information about other scales for test and evaluation are available at the Institute for Defense Analyses’ web site, https://testscience.org/validated-scales-repository/. |
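The study used confirmatory factor analysis; as a rough stand-in, the sketch below fits an exploratory two-factor model to simulated item responses to show the kind of structure being tested. The item construction and all numbers are invented, and scikit-learn's exploratory FactorAnalysis is not a substitute for the CFA actually performed.

```python
# Simulate six survey items driven by two latent traits (relevance, efficacy)
# and check that a two-factor fit recovers the block structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
n = 700
relevance, efficacy = rng.normal(size=(2, n))    # latent traits
items = np.column_stack(
    [relevance + rng.normal(0, 0.5, n) for _ in range(3)]
    + [efficacy + rng.normal(0, 0.5, n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))   # loadings: two clean blocks expected
```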
Brian Vickers Research Staff Member IDA (bio)
Dr. Brian Vickers received his PhD in Cognition and Cognitive Neuroscience from the University of Michigan in 2015, studying how decision architectures and human factors influence people’s decisions about time, money, and material options. Since that time, he has worked in data and decision science, and is currently a Research Staff Member at the Institute for Defense Analyses supporting topics including operational testing, test science, and artificial intelligence. |
Breakout | Session Recording |
2022 |
Breakout Method for Evaluating Bayesian Reliability Models for Developmental Testing (Abstract)
For analysis of military Developmental Test (DT) data, frequentist statistical models are increasingly challenged to meet the needs of analysts and decision-makers. Bayesian models have the potential to address this challenge. Although there is a substantial body of research on Bayesian reliability estimation, there appears to be a paucity of Bayesian applications to issues of direct interest to DT decision makers. To address this deficiency, this research accomplishes two tasks. First, it provides a motivating example that analyzes reliability for a notional but representative system. Second, to enable the motivated analyst to apply Bayesian methods, it provides a foundation and best practices for Bayesian reliability analysis in DT. The first task is accomplished by applying Bayesian reliability assessment methods to notional DT lifetime data generated using a Bayesian reliability growth planning methodology (Wayne 2018). The tested system is assumed to be a generic complex system with a large number of failure modes. Starting from the Bayesian assessment methodology of Wayne and Modarres (2015), this work explores the sensitivity of the Bayesian results to the choice of the prior distribution and compares the Bayesian point estimate and uncertainty interval for reliability with analogous results from traditional reliability assessment methods. The second task is accomplished by establishing a generic structure for systematically evaluating relevant Bayesian statistical models. First, reliability issues that have been implicit in DT programs are identified using a structured poll of stakeholders combined with interviews of selected Subject Matter Experts. Second, candidate solutions are identified in the literature. Third, solutions are matched to issues using criteria designed to evaluate the capability of a solution to improve support for decision-makers at critical points in DT programs. The matching process uses a model taxonomy structured according to decisions at each DT phase, plus criteria for model applicability and data availability. The end result is a generic structure that allows an analyst to identify and evaluate a specific model for use with a program and issue of interest. References: Wayne, Martin. 2018. “Modeling Uncertainty in Reliability Growth Plans.” 2018 Annual Reliability and Maintainability Symposium (RAMS), 1-6. Wayne, Martin, and Mohammad Modarres. 2015. “A Bayesian Model for Complex System Reliability.” IEEE Transactions on Reliability 64: 206-220. |
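A minimal sketch of the prior-sensitivity flavor of such an assessment for pass/fail DT data, using a conjugate Beta-Binomial model with notional counts and two candidate priors; the actual methodology (Wayne and Modarres 2015) handles lifetime data and systems with many failure modes.

```python
# Beta-Binomial reliability estimate: posterior mean and 90% credible
# interval under two priors, to show prior sensitivity. Counts are notional.
from scipy import stats

trials, failures = 40, 2
for name, (a, b) in {"flat Beta(1,1)": (1, 1),
                     "optimistic Beta(8,2)": (8, 2)}.items():
    post = stats.beta(a + trials - failures, b + failures)
    lo, hi = post.ppf([0.05, 0.95])
    print(f"{name}: reliability mean {post.mean():.3f}, "
          f"90% interval ({lo:.3f}, {hi:.3f})")
```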
Paul Fanto and David Spalding Research Staff Member, System Evaluation Division IDA (bio)
Dr. Paul Fanto is a Research Staff Member at the Institute for Defense Analyses. He received a Ph.D. in Physics from Yale University, where he worked on the application of Monte Carlo methods and high-performance computing to the modeling of atomic nuclei. His current work involves the study of space systems and the application of Bayesian statistical methods to defense system testing. Dr. Spalding is a Research Staff Member at the Institute for Defense Analyses. He has a Ph.D. from the University of Rochester in experimental particle physics and a master’s degree in Computer Science from George Washington University. At the Institute for Defense Analyses, he has analyzed aircraft and missile system issues. For the past decade he has addressed programmatic and statistical problems in developmental testing. |
Breakout | Session Recording |
2022 |
Tutorial Mixed Models: A Critical Tool for Dependent Observations (Abstract)
The use of fixed and random effects have a rich history. They often go by other names, including blocking models, variance component models, nested and split-plot designs, hierarchical linear models, multilevel models, empirical Bayes, repeated measures, covariance structure models, and random coefficient models. Mixed models are one of the most powerful and practical ways to analyze experimental data, and investing time to become skilled with them is well worth the effort. Many, if not most, real-life data sets do not satisfy the standard statistical assumption of independent observations. Failure to appropriately model design structure can easily result in biased inferences. With an appropriate mixed model we can estimate primary effects of interest as well as compare sources of variability using common forms of dependence among sets of observations. Mixed Models can readily become the most handy method in your analytical toolbox and provide a foundational framework for understanding statistical modeling in general. In this course we will cover many types of mixed models, including blocking, split-plot, and random coefficients. |
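A minimal sketch of a random-intercept mixed model on simulated blocked data; statsmodels stands in here purely for illustration of the model class the course covers, and all parameters are invented.

```python
# Random-intercept mixed model: fixed effect for x, random intercept per
# block, accounting for dependence among observations within a block.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
blocks = np.repeat(np.arange(10), 8)                  # 10 blocks, 8 obs each
block_effect = rng.normal(0, 2, 10)[blocks]           # shared within block
x = rng.uniform(0, 1, 80)
y = 1.0 + 3.0 * x + block_effect + rng.normal(0, 1, 80)
df = pd.DataFrame({"y": y, "x": x, "block": blocks})

fit = smf.mixedlm("y ~ x", df, groups=df["block"]).fit()
print(fit.summary())   # fixed-effect estimates plus block variance component
```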
Elizabeth Claassen Research Statistician Developer JMP Statistical Discovery (bio)
Elizabeth A. Claassen, PhD, is a Research Statistician Developer at JMP Statistical Discovery. Dr. Claassen has over a decade of experience in statistical modeling in a variety of software packages. Her chief interest is generalized linear mixed models. Dr. Claassen earned an MS and PhD in statistics from the University of Nebraska–Lincoln, where she received the Holling Family Award for Teaching Excellence from the College of Agricultural Sciences and Natural Resources. She is an author of the third edition of “SAS® for Mixed Models: An Introduction and Basic Applications” (2018) and “JMP® for Mixed Models” (2021). |
Tutorial | Session Recording | 2022 |
Moderator (Abstract)
For organizations to make data-driven decisions, they must be able to understand and organize their mission-critical data. Recently, the DoD, NASA, and other federal agencies have declared their intention to become “data-centric” organizations, but transitioning from an existing mode of operation and architecture can be challenging. Moreover, the DoD is pushing for artificial intelligence enabled systems (AIES) and wide-scale digital transformation. These concepts seem straightforward in the abstract, but because they can only evolve when people, processes, and technology change together, they have proven challenging in execution. Since the structure and quality of an organization’s data limit what it can do with that data, it is imperative to get data processes right before embarking on other initiatives that depend on quality data. Despite the importance of data quality, many organizations treat data architecture as an emergent phenomenon rather than something to be planned or thought through holistically. In this discussion, panelists will explore what it means to be data-centric, what a data-centric architecture is, how it differs from other data architectures, why an organization might prefer a data-centric approach, and the challenges associated with becoming data-centric. |
Matthew Avery Assistant Director, Operational Evaluation IDA (bio)
Matthew Avery is an Assistant Director in the Operational Evaluation Division (OED) at the Institute for Defense Analyses (IDA) and part of OED’s Sustainment group. He represents OED on IDA’s Data Governance Council and acts as the Deputy to IDA’s Director of Data Strategy and Chief Data Officer, helping craft data-related strategy and policy. Matthew leads IDA’s sustainment modeling efforts for the V-22 fleet, developing end-to-end multi-echelon models to evaluate options for improving mission-capable rates for the CV-22 and MV-22 fleets. Prior to this, Matthew was on the Test Science team, where he helped develop analytical methods and tools for operational test and evaluation. As the Test Science Data Management lead, he was responsible for delivering an annual summary of major activity undertaken by the Office of the Director, Operational Test and Evaluation. Additionally, Matthew wrote and implemented OED policy on data management and reproducible research. In addition to working with the Test Science team, Matthew also led operational test and evaluation efforts of Army and Marine Corps unmanned aircraft systems. In 2018-19 Matthew served as an embedded analyst in the Pentagon’s Office of Cost Assessment and Program Evaluation, where he built state-space models in support of the Space Control Strategic Portfolio Review. Matthew earned his PhD in Statistics from North Carolina State University in 2012, his MS in Statistics from North Carolina State in 2009, and a BA from New College of Florida in 2006. He is a member of the American Statistical Association. |
2022 |