Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Short Course Introduction to R (Abstract)
This course is designed to introduce participants to the R programming language and the RStudio editor. R is free, open-source software for summarizing data, creating visuals of data, and conducting statistical analyses. R can offer many advantages over programs such as Excel, including faster computation, customized analyses, access to the latest statistical techniques, automation of tasks, and the ability to easily reproduce research. After completing this course, a new user should be able to: • Import/export data from/to external files. • Create and manipulate new variables. • Conduct basic statistical analyses (such as t-tests and linear regression). • Create basic graphs. • Install and use R packages. Participants should bring a laptop for the interactive components of the course. (An illustrative R sketch of these tasks follows this entry.) |
Justin Post North Carolina State University |
Short Course | Materials | 2018 |
|
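The course abstract above lists the core tasks (import/export, new variables, t-tests and regression, graphs, packages). The base-R sketch below walks through them on simulated stand-in data; the file name, variables, and numbers are hypothetical, not course materials.

```r
# Minimal base-R walk-through of the course topics on simulated stand-in data.
# In practice the first step would be something like: dat <- read.csv("myfile.csv")
set.seed(1)
dat <- data.frame(group = rep(c("A", "B"), each = 15),
                  pre   = rnorm(30, mean = 70, sd = 8))
dat$post <- dat$pre + ifelse(dat$group == "B", 4, 1) + rnorm(30, sd = 3)

dat$improvement <- dat$post - dat$pre            # create a new variable

t.test(improvement ~ group, data = dat)          # basic t-test
fit <- lm(post ~ pre + group, data = dat)        # linear regression
summary(fit)

hist(dat$improvement)                            # basic graphs
boxplot(improvement ~ group, data = dat)

# install.packages("ggplot2")                    # install a package (run once)
library(ggplot2)                                 # then load it for use

write.csv(dat, "course_data_out.csv", row.names = FALSE)   # export to an external file
```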
Short Course Using R Markdown & the Tidyverse to Create Reproducible Research (Abstract)
R is one of the major platforms for doing statistical analysis and research. This course introduces the powerful and popular R software through the RStudio IDE and covers the use of the tidyverse suite of packages to import raw data (readr), perform common data manipulations (dplyr and tidyr), and summarize data numerically (dplyr) and graphically (ggplot2). To promote reproducibility of analyses, we will discuss how to code using R Markdown – a method of R coding that allows one to easily create PDF and HTML documents that interweave narrative, R code, and results. List of packages to install: tidyverse, GGally, Lahman, tinytex. (An illustrative tidyverse sketch follows this entry.) |
Justin Post Teaching Associate Professor NCSU (bio)
Justin Post is a Teaching Associate Professor and the Director of Online Education in the Department of Statistics at North Carolina State University. Teaching has always been his passion and that is his main role at NCSU. He teaches undergraduate and graduate courses in both face-to-face and distance settings. Justin is an R enthusiast and has taught many short courses on R, the tidyverse, R shiny, and more. |
Short Course | Materials | 2022 |
|
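As a companion to the abstract above, here is a minimal tidyverse pipeline of the kind the course describes; the tibble and column names are invented stand-ins, not course data.

```r
# A minimal tidyverse pipeline; the tibble is a simulated stand-in for a real file,
# which would instead be imported with raw <- read_csv("myfile.csv").
library(tidyverse)
set.seed(1)

raw <- tibble(condition = rep(c("low", "high"), each = 20),
              baseline  = runif(40, 5, 10),
              response  = baseline * ifelse(condition == "high", 1.4, 1.1) + rnorm(40, sd = 0.5))

raw %>%
  filter(!is.na(response)) %>%                  # dplyr: drop incomplete rows
  mutate(ratio = response / baseline) %>%       # dplyr: derive a new column
  group_by(condition) %>%
  summarize(mean_ratio = mean(ratio), n = n())  # dplyr: numerical summary

ggplot(raw, aes(x = condition, y = response)) + # ggplot2: graphical summary
  geom_boxplot()
```

In an R Markdown document, a chunk like this would be knit together with narrative text into a PDF or HTML report (tinytex supplies the LaTeX toolchain for PDF output).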
Breakout TRMC Big Data Analytics Investments & Technology Review (Abstract)
To properly test and evaluate today’s advanced military systems, the T&E community must utilize big data analytics (BDA) tools and techniques to quickly process, visualize, understand, and report on massive amounts of data. This presentation will inform the audience how to transform the current T&E data infrastructure and analysis techniques to one employing enterprise BDA and Knowledge Management (BDKM) that supports the current warfighter T&E needs and the developmental and operational testing of future weapon platforms. The TRMC enterprise BDKM will improve acquisition efficiency, keep up with the rapid pace of acquisition technological advancement, and ensure that effective weapon systems are delivered to warfighters at the speed of relevance – all while enabling T&E analysts across the acquisition lifecycle to make better and faster decisions using data previously inaccessible or unusable. This capability encompasses a big data architecture framework – its supporting resources, methodologies, and guidance – to properly address the current and future data needs of systems testing and analysis, as well as an implementation framework, the Cloud Hybrid Edge-to-Enterprise Evaluation and Test Analysis Suite (CHEETAS). In combination with the TRMC’s Joint Mission Environment Test Capability (JMETC), which provides readily available connectivity to the Services’ distributed test capabilities and simulations, the TRMC has demonstrated that applying enterprise-distributed BDA tools and techniques to distributed T&E leads to faster and more informed decision-making – resulting in reduced overall program cost and risk. |
Edward Powell Lead Architect and Systems Engineer Test Resource Management Center (bio)
Dr. Edward T. Powell is a lead architect and systems engineer for the Test Resource Management Center. He has worked in the military simulation, intelligence, and Test and Evaluation fields during his thirty-year career, specializing in systems and simulation architecture and engineering. His current focus is on integrating various OSD and Service big data analysis initiatives into a single seamless cloud-based system-of-systems. He holds a PhD in Astrophysics from Princeton University and is the principal of his own consulting company based in Northern Virginia. |
Breakout | 2022 |
||
Convolutional Neural Networks and Semantic Segmentation for Cloud and Ice Detection (Abstract)
Recent research shows the effectiveness of machine learning on image classification and segmentation. The use of artificial neural networks (ANNs) on image datasets such as the MNIST dataset of handwritten digits is highly effective. However, when presented with a more complex image, ANNs and other simple computer vision algorithms tend to fail. This research uses Convolutional Neural Networks (CNNs) to determine how we can differentiate between ice and clouds in imagery of the Arctic. Unlike ANNs, which analyze the problem in one dimension, CNNs identify features using the spatial relationships between the pixels in an image. This technique allows us to extract spatial features, yielding higher accuracy. Using a CNN named the Cloud-Net Model, we analyze how a CNN performs on satellite images. First, we examine recent research on the Cloud-Net Model’s effectiveness on satellite imagery, specifically from Landsat data, with four channels: red, green, blue, and infrared. We extend and modify this model, allowing us to analyze data from the most common channels used by satellites: red, green, and blue. By training on different combinations of these three channels, we extend this analysis by testing on an entirely different data set: GOES imagery. Selecting GOES images that cover the same geographic locations as the Landsat images and contain both ice and clouds lets us test the CNN’s generalizability. Finally, we present the CNN’s ability to accurately identify the clouds and ice in the GOES data versus the Landsat data. (A toy segmentation-network sketch follows this entry.) |
Prarabdha Ojwaswee Yonzon Cadet United States Military Academy (West Point) (bio)
CDT Prarabdha “Osho” Yonzon is a first-generation Nepalese American raised in Brooklyn Park, Minnesota. He initially enlisted in the Minnesota National Guard in 2015 as an Aviation Operation Specialist, and he was later accepted into USMAPS in 2017. He is an Applied Statistics and Data Science major at the United States Military Academy. Osho is passionate about his research. He first worked with the West Point Department of Physics to examine impacts on GPS solutions. Later, he published several articles on modeling groundwater flow with the Math department and presented them at the AWRA annual conference. Currently, he is working with the West Point Department of Mathematics and Lockheed Martin to create machine learning algorithms to detect objects in images. He plans to attend graduate school for data science and serve as a cyber officer. |
Session Recording |
Recording | 2022 |
|
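The research above uses the Cloud-Net Model; the snippet below is not that architecture. It is only a toy encoder-decoder segmenter written with the R interface to Keras (assuming the keras package and a TensorFlow backend), meant to show what a per-pixel cloud/ice mask model looks like in code.

```r
# Toy encoder-decoder producing a per-pixel cloud/ice probability mask.
# This is NOT the Cloud-Net architecture, just a minimal stand-in; it assumes the
# keras R package with a TensorFlow backend is installed.
library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(16, c(3, 3), activation = "relu", padding = "same",
                input_shape = c(128, 128, 3)) %>%          # three channels: red, green, blue
  layer_max_pooling_2d(c(2, 2)) %>%                        # encoder: downsample
  layer_conv_2d(32, c(3, 3), activation = "relu", padding = "same") %>%
  layer_conv_2d_transpose(16, c(3, 3), strides = c(2, 2),
                          activation = "relu", padding = "same") %>%  # decoder: upsample
  layer_conv_2d(1, c(1, 1), activation = "sigmoid")        # per-pixel class probability

model %>% compile(optimizer = "adam", loss = "binary_crossentropy", metrics = "accuracy")
summary(model)
# model %>% fit(x_train, y_train, epochs = 10)  # x: images, y: 128 x 128 x 1 binary masks
```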
Bayesian Estimation for Covariate Defect Detection Model Based on Discrete Cox Proportiona (Abstract)
Traditional methods to assess software characterize the defect detection process as a function of testing time or effort to quantify failure intensity and reliability. More recent innovations include models incorporating covariates that explain defect detection in terms of underlying test activities. These covariate models are elegant and only introduce a single additional parameter per testing activity. However, the model forms typically exhibit a high degree of non-linearity. Hence, stable and efficient model fitting methods are needed to enable widespread use by the software community, which often lacks mathematical expertise. To overcome this limitation, this poster presents Bayesian estimation methods for covariate models, including the specification of informed priors as well as confidence intervals for the mean value function and failure intensity, which often serves as a metric of software stability. The proposed approach is compared to traditional alternatives such as maximum likelihood estimation. Our results indicate that Bayesian methods with informed priors converge most quickly and achieve the best model fits. Incorporating these methods into tools should therefore encourage widespread use of the models to quantitatively assess software. (An illustrative covariate model form is given after this entry.) |
Priscila Silva Graduate Student University of Massachusetts Dartmouth (bio)
Priscila Silva is a MS student in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth (UMassD). She received her BS (2017) in Electrical Engineering from the Federal University of Ouro Preto, Brazil. |
Session Recording |
Recording | 2022 |
|
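The abstract does not spell out the model form. For orientation only, a commonly cited discrete proportional-hazards covariate form from the software reliability literature is sketched below; treat it as an illustrative assumption rather than the authors' exact specification.

```latex
% Per-interval detection probability with testing-activity covariates x_{i1},...,x_{im}
% (an assumed, illustrative discrete proportional-hazards form; not necessarily the authors'):
p_i = 1 - (1 - b)^{\exp\left(\beta_1 x_{i1} + \cdots + \beta_m x_{im}\right)}, \qquad 0 < b < 1.

% Implied mean value function (expected cumulative defects detected through interval t),
% with \omega the expected total number of defects:
m(t) = \omega \left[ 1 - \prod_{i=1}^{t} (1 - b)^{\exp\left(\beta_1 x_{i1} + \cdots + \beta_m x_{im}\right)} \right].

% Bayesian estimation then places (possibly informed) priors on \omega, b, and the \beta_j
% and summarizes their joint posterior, e.g., via MCMC.
```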
Breakout Uncertainty Quantification: What is it and Why it is Important to Test, Evaluation, and Modeling and Simulation in Defense and Aerospace (Abstract)
Uncertainty appears in many aspects of systems design, including stochastic design parameters, simulation inputs, and forcing functions. Uncertainty Quantification (UQ) has emerged as the science of quantitative characterization and reduction of uncertainties in both simulation and test results. UQ is a multidisciplinary field with a broad base of methods including sensitivity analysis, statistical calibration, uncertainty propagation, and inverse analysis. Because of their ability to bring greater degrees of confidence to decisions, uncertainty quantification methods are playing a greater role in test, evaluation, and modeling and simulation in defense and aerospace. The value of UQ comes from a better understanding of risk gained by assessing the uncertainty in test and modeling and simulation results. The presentation will provide an overview of UQ and then discuss the use of some advanced statistical methods, including DOEs and emulation for multiple simulation solvers and statistical calibration, for efficiently quantifying uncertainties. These statistical methods effectively link test, evaluation and modeling and simulation by coordinating the evaluation of uncertainties and simplifying verification and validation activities. (A minimal uncertainty-propagation sketch follows this entry.) |
Peter Qian University of Wisconsin and SmartUQ |
Breakout | Materials | 2017 |
|
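As a concrete (if deliberately simple) illustration of the uncertainty propagation named in the abstract above, here is a Monte Carlo sketch; the model function and input distributions are hypothetical placeholders, not anything from the presentation.

```r
# Minimal Monte Carlo uncertainty propagation through a notional model.
# The model function and input distributions are hypothetical placeholders,
# standing in for an expensive simulation.
set.seed(1)
n <- 10000

thrust  <- rnorm(n, mean = 100, sd = 5)       # stochastic design parameter
drag_cf <- runif(n, min = 0.28, max = 0.32)   # poorly known input treated as uniform

model <- function(thrust, drag_cf) thrust * (1 - drag_cf)   # stand-in for the solver

y <- model(thrust, drag_cf)

quantile(y, c(0.025, 0.5, 0.975))   # propagated uncertainty in the output of interest
hist(y, main = "Propagated output uncertainty")

# Crude sensitivity check: rank correlation of each input with the output
cor(cbind(thrust, drag_cf, y), method = "spearman")[, "y"]
```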
Breakout Surrogate Models and Sampling Plans for Multi-fidelity Aerodynamic Performance Databases (Abstract)
Generating aerodynamic coefficients can be computationally expensive, especially for the viscous CFD solvers in which multiple complex models are iteratively solved. When filling large design spaces, utilizing only a high-accuracy viscous CFD solver can be infeasible. We apply state-of-the-art methods for design and analysis of computer experiments to efficiently develop an emulator for high-fidelity simulations. First, we apply a cokriging model to leverage information from fast low-fidelity simulations to improve predictions with more expensive high-fidelity simulations. Combining space-filling designs with a Gaussian process model-based sequential sampling criterion allows us to efficiently generate sample points and limit the number of costly simulations needed to achieve the desired model accuracy. We demonstrate the effectiveness of these methods with an aerodynamic simulation study using a conic shape geometry. (A design-and-emulation sketch follows this entry.) This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Release Number: LLNL-ABS-818163 |
Kevin Quinlan Applied Statistician Lawrence Livermore National Laboratory |
Breakout |
| 2021 |
|
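Following the abstract above, a compact sketch of the workflow: a space-filling initial design, a Gaussian process emulator, and a predictive-variance criterion for choosing the next run. It assumes the lhs and DiceKriging packages and uses a cheap analytic function in place of a CFD solver; the cokriging step, which would fold in low-fidelity runs, is noted but not shown.

```r
# Space-filling design + Gaussian process emulator + variance-based sequential criterion.
# Assumes the 'lhs' and 'DiceKriging' packages; a cheap analytic function stands in for
# the high-fidelity CFD solver. The cokriging step (folding in low-fidelity runs) is omitted.
library(lhs)
library(DiceKriging)
set.seed(1)

f <- function(x) sin(6 * x[, 1]) + 0.5 * x[, 2]^2    # stand-in "expensive" simulation

X <- maximinLHS(n = 12, k = 2)                       # space-filling initial design
y <- f(X)

fit <- km(design = data.frame(X), response = y, covtype = "matern5_2")

# Score a candidate set and pick the point with the largest predictive standard deviation
cand <- data.frame(maximinLHS(n = 200, k = 2))
pred <- predict(fit, newdata = cand, type = "UK")
next_pt <- cand[which.max(pred$sd), ]
next_pt   # run the expensive solver here, append to (X, y), refit, and repeat
```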
Poster T&E of Responsible AI (Abstract)
Getting Responsible AI (RAI) right is difficult and demands expertise. All AI-relevant skill sets, including ethics, are in high demand and short supply, especially regarding AI’s intersection with test and evaluation (T&E). Frameworks, guidance, and tools are needed to empower working-level personnel across DOD to generate RAI assurance cases with support from RAI SMEs. At a high level, a framework should address the following points: 1. T&E is a necessary piece of the RAI puzzle: testing provides a feedback mechanism for system improvement and builds public and warfighter confidence in our systems, and RAI should be treated just like performance, reliability, and safety requirements. 2. We must intertwine T&E and RAI across the cradle-to-grave product life cycle. Programs must embrace T&E and RAI from inception; as development proceeds, these two streams must be integrated in tight feedback loops to ensure effective RAI implementation. Furthermore, many AI systems, along with their operating environments and use cases, will continue to update and evolve and thus will require continued evaluation after fielding. 3. The five DOD RAI principles are a necessary north star, but alone they are not enough to implement or ensure RAI. Programs will have to integrate multiple methodologies and sources of evidence to construct holistic arguments for how much the programs have reduced RAI risks. 4. RAI must be developed, tested, and evaluated in context: T&E without operationally relevant context will fail to ensure that fielded tools achieve RAI. Mission success depends on technology that must interact with warfighters and other systems in complex environments, while constrained by processes and regulation. AI systems will be especially sensitive to operational context and will force T&E to expand what it considers. |
Rachel Haga Research Associate IDA |
Poster | Session Recording |
Recording | 2022 |
Breakout Dose-Response Model of Recent Sonic Boom Community Annoyance Data (Abstract)
To enable quiet supersonic passenger flight overland, NASA is providing national and international noise regulators with a low-noise sonic boom database. The database will consist of dose-response curves, which quantify the relationship between low-noise sonic boom exposure and community annoyance. The recently updated international standard for environmental noise assessment, ISO 1996-1:2016, references multiple fitting methods for dose-response analysis. One of these fitting methods, Fidell’s community tolerance level method, is based on theoretical assumptions that fix the slope of the curve, allowing only the intercept to vary. This fitting method is applied to an existing pilot sonic boom community annoyance data set from 2011 with a small sample size. The purpose of this exercise is to develop data collection and analysis recommendations for future sonic boom community annoyance surveys. (A fixed-slope fitting sketch follows this entry.) |
Jonathan Rathsam NASA |
Breakout | 2017 |
||
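To make the fixed-slope idea in the abstract above concrete, here is a generic sketch: a logistic dose-response curve whose slope is pinned at an assumed value via an offset, so only the intercept is estimated. The survey counts, dose levels, and slope value are invented, and this is not the exact ISO 1996-1 community tolerance level functional form.

```r
# Fixed-slope dose-response fit in the spirit of the community tolerance level idea:
# the slope is held at an assumed value via an offset, so only the intercept is estimated.
# The survey counts, dose levels, and slope value are invented; this is not the exact
# ISO 1996-1 functional form.
boom <- data.frame(dose    = c(65, 70, 75, 80, 85),   # sonic boom exposure level (dB)
                   annoyed = c(2, 5, 11, 19, 27),     # respondents highly annoyed
                   n       = c(40, 40, 40, 40, 40))   # respondents surveyed per level

fixed_slope <- 0.3   # assumed slope on the logit scale

fit <- glm(cbind(annoyed, n - annoyed) ~ 1 + offset(fixed_slope * dose),
           family = binomial, data = boom)
coef(fit)            # the intercept is the only free parameter

newdose <- seq(60, 90, by = 5)
cbind(dose = newdose, p_hat = plogis(coef(fit)[1] + fixed_slope * newdose))
```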
Breakout Improving Sensitivity Experiments (Abstract)
This presentation will provide a brief overview of sensitivity testing and emphasize applications to several products and systems of importance to defense as well as private industry, including insensitive energetics, ballistic testing of protective armor, testing of munition fuzes and Microelectromechanical Systems (MEMS) components, safety testing of high-pressure test ammunition, and packaging for high-value materials. |
Douglas Ray US Army RDECOM ARDEC |
Breakout | Materials | 2017 |
|
Breakout VV&UQ – Uncertainty Quantification for Model-Based Engineering of DoD Systems (Abstract)
The US Army ARDEC has recently established an initiative to integrate statistical and probabilistic techniques into engineering modeling and simulation (M&S) analytics typically used early in the design lifecycle to guide technology development. DOE-driven Uncertainty Quantification techniques, including statistically rigorous model verification and validation (V&V) approaches, enable engineering teams to identify, quantify, and account for sources of variation and uncertainties in design parameters, and identify opportunities to make technologies more robust, reliable, and resilient earlier in the product’s lifecycle. Several recent armament engineering case studies – each with unique considerations and challenges – will be discussed. |
Douglas Ray US Army RDECOM ARDEC |
Breakout | Materials | 2017 |
|
Breakout Bayesian Component Reliability Estimation: F-35 Case Study (Abstract)
A challenging aspect of a system reliability assessment is integrating multiple sources of information, including component, subsystem, and full-system data, previous test data, or subject matter expert opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability where testing is limited and only a small number of failures (or none at all) are observed. The F-35 is DoD’s largest program; approximately one-third of the operations and sustainment cost is attributed to the cost of spare parts and the removal, replacement, and repair of components. The failure rate of those components is the driving parameter for a significant portion of the sustainment cost, and yet for many of these components, poor estimates of the failure rate exist. For many programs, the contractor produces estimates of component failure rates based on engineering analysis and legacy systems with similar parts. While these are useful, the actual removal rates can provide a more accurate estimate of the removal and replacement rates the program can expect to experience in future years. In this presentation, we show how we applied a Bayesian analysis to combine the engineering reliability estimates with the actual failure data to overcome the problems of cases where few data exist. Our technique is broadly applicable to any program where multiple sources of reliability information need to be combined for the best estimation of component failure rates and ultimately sustainment costs. (A conjugate-prior sketch of this kind of combination follows this entry.) |
V. Bram Lillard & Rebecca Medlin | Breakout |
| 2019 |
|
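The abstract above is about combining engineering estimates with field failures. A standard, simple way to illustrate that combination is a conjugate gamma-Poisson update on a failure rate; the sketch below uses made-up numbers and is far simpler than the actual F-35 analysis.

```r
# Conjugate gamma-Poisson sketch of combining an engineering failure-rate estimate
# with observed removals. All numbers are hypothetical; the actual F-35 analysis is
# more involved than this illustration.

prior_rate  <- 0.002   # engineering estimate: failures per flight hour
prior_hours <- 500     # how much "pseudo-data" the estimate is judged to be worth
a0 <- prior_rate * prior_hours   # gamma prior shape
b0 <- prior_hours                # gamma prior rate (exposure)

x <- 3      # observed failures/removals in the field
t <- 2500   # accumulated flight hours

a1 <- a0 + x   # posterior shape
b1 <- b0 + t   # posterior rate

c(post_mean = a1 / b1,
  lower95   = qgamma(0.025, shape = a1, rate = b1),
  upper95   = qgamma(0.975, shape = a1, rate = b1))
```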
Roundtable Overcoming Challenges and Applying Sequential Procedures to T&E (Abstract)
The majority of statistical analyses involve observing a fixed set of data and analyzing those data after the final observation has been collected to draw some inference about the population from which they came. Unlike these traditional methods, sequential analysis is concerned with situations for which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends upon the information acquired throughout the course of the investigation. Expanding the use of sequential analysis in DoD testing has the potential to save substantial test dollars and decrease test time. However, switching from traditional to sequential planning will likely induce unique challenges. The goal of this round table is to provide an open forum for topics related to sequential analyses. We aim to discuss potential challenges, identify potential ways to overcome them, and talk about success stories of sequential analysis implementation and lessons learned. Specific questions for discussion will be provided to participants prior to the event. (A simple sequential-test sketch follows this entry.) |
Rebecca Medlin Research Staff Member Institute for Defense Analyses (bio)
Dr. Rebecca Medlin is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and has designed tests and conducted statistical analyses for several major defense programs including tactical vehicles, mobility aircraft, radars, and electronic warfare systems. Her areas of expertise include design of experiments, statistical modeling, and reliability. She has a Ph.D. in Statistics from Virginia Tech. |
Roundtable | 2021 |
||
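One classic example of the sequential procedures discussed above is Wald's sequential probability ratio test; the sketch below is generic, with illustrative hypotheses, error rates, and simulated pass/fail data rather than anything from a real test program.

```r
# Wald's sequential probability ratio test (SPRT) for a pass/fail measure.
# The hypothesized success probabilities, error rates, and simulated data are illustrative only.
p0 <- 0.80; p1 <- 0.90            # H0 and H1 success probabilities
alpha <- 0.05; beta <- 0.10       # allowed error rates
logA <- log((1 - beta) / alpha)   # upper boundary: stop and accept H1
logB <- log(beta / (1 - alpha))   # lower boundary: stop and accept H0

set.seed(1)
trials <- rbinom(200, 1, 0.88)    # simulated sequential pass/fail outcomes

llr <- 0
for (i in seq_along(trials)) {
  x <- trials[i]
  llr <- llr + x * log(p1 / p0) + (1 - x) * log((1 - p1) / (1 - p0))
  if (llr >= logA) { cat("Accept H1 after", i, "trials\n"); break }
  if (llr <= logB) { cat("Accept H0 after", i, "trials\n"); break }
}
```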
Breakout Aerospace Measurement and Experimental System Development Characterization (Abstract)
Co-Authors: Sean A. Commo, Ph.D., P.E., and Peter A. Parker, Ph.D., P.E. (NASA Langley Research Center, Hampton, Virginia, USA); Austin D. Overmeyer, Philip E. Tanner, and Preston B. Martin, Ph.D. (U.S. Army Research, Development, and Engineering Command, Hampton, Virginia, USA). The application of statistical engineering to helicopter wind-tunnel testing was explored during two powered rotor entries. The U.S. Army Aviation Development Directorate Joint Research Program Office and the NASA Revolutionary Vertical Lift Project performed these tests jointly at the NASA Langley Research Center. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small segment of the overall tests devoted to developing case studies of a statistical engineering approach. Data collected during each entry were used to estimate response surface models characterizing vehicle performance, a novel contribution of statistical engineering applied to powered rotor-wing testing. Additionally, a 16- to 47-times reduction in the number of data points required was estimated when comparing a statistically-engineered approach to a conventional one-factor-at-a-time approach. |
Ray Rhew NASA |
Breakout | Materials | 2016 |
|
Breakout Application of Design of Experiments to a Calibration of the National Transonic Facility (Abstract)
Recent work at the National Transonic Facility (NTF) at the NASA Langley Research Center has shown that a substantial reduction in freestream pressure fluctuations can be achieved by positioning the moveable model support walls and plenum re-entry flaps to choke the flow just downstream of the test section. This choked condition reduces the upstream propagation of disturbances from the diffuser into the test section, resulting in improved Mach number control and reduced freestream variability. The choked conditions also affect the Mach number gradient and distribution in the test section, so a calibration experiment was undertaken to quantify the effects of the model support wall and re-entry flap movements on the facility freestream flow using a centerline static pipe. A design of experiments (DOE) approach was used to develop restricted-randomization experiments to determine the effects of total pressure, reference Mach number, model support wall angle, re-entry flap gap height, and test section longitudinal location on the centerline static pressure and local Mach number distributions for a reference Mach number range from 0.7 to 0.9. Tests were conducted using air as the test medium at a total temperature of 120 °F as well as for gaseous nitrogen at cryogenic total temperatures of -50, -150, and -250 °F. The resulting data were used to construct quadratic polynomial regression models for these factors using a Restricted Maximum Likelihood (REML) estimator approach. Independent validation data were acquired at off-design conditions to check the accuracy of the regression models. Additional experiments were designed and executed over the full Mach number range of the facility (0.2 ≤ Mref ≤ 1.1) at each of the four total temperature conditions, but with the model support walls and re-entry flaps set to their nominal positions, in order to provide calibration regression models for operational experiments where a choked condition downstream of the test section is either not feasible or not required. This presentation focuses on the design, execution, analysis, and results for the two experiments performed using air at a total temperature of 120 °F. Comparisons are made between the regression model output and validation data, as well as the legacy NTF calibration results, and future work is discussed. (A generic REML fitting sketch follows this entry.) |
Matt Rhode NASA |
Breakout | Materials | 2018 |
|
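The facility's actual models were fit in its own workflow; purely as a generic illustration of the approach named in the abstract (a quadratic-style polynomial fit by REML with a random effect for the restricted-randomization structure), here is a sketch on simulated data. It assumes the lme4 package, uses invented variable names, and deliberately keeps a reduced set of interaction and quadratic terms.

```r
# Generic sketch only: a quadratic-style response model fit by REML with a random effect
# for the whole plots created by restricted randomization. Assumes the lme4 package;
# variable names and the simulated data are invented, and the model is deliberately reduced.
library(lme4)
set.seed(1)

wp <- rep(1:12, each = 5)                         # 12 whole plots of 5 points each
ntf <- data.frame(
  whole_plot  = factor(wp),
  ref_mach    = runif(60, 0.7, 0.9),
  total_press = runif(60, 15, 30),
  wall_angle  = rep(runif(12, -1, 1), each = 5),  # hard-to-change: constant within a whole plot
  flap_gap    = rep(runif(12, 0, 2), each = 5),   # hard-to-change: constant within a whole plot
  station     = runif(60, -5, 5)
)
ntf$local_mach <- with(ntf, ref_mach + 0.02 * wall_angle - 0.01 * flap_gap +
                         0.001 * station - 0.05 * (ref_mach - 0.8)^2) +
  rep(rnorm(12, 0, 0.002), each = 5) +            # whole-plot error
  rnorm(60, 0, 0.001)                             # within-plot error

fit <- lmer(local_mach ~ ref_mach + total_press + wall_angle + flap_gap + station +
              I(ref_mach^2) + ref_mach:wall_angle + ref_mach:flap_gap +
              (1 | whole_plot),
            data = ntf, REML = TRUE)
summary(fit)
```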
Breakout Using Bayesian Neural Networks for Uncertainty Quantification of Hyperspectral Image Target Detection (Abstract)
Target detection in hyperspectral images (HSI) has broad value in defense applications, and neural networks have recently begun to be applied for this problem. A common criticism of neural networks is that they give a point estimate with no uncertainty quantification (UQ). In defense applications, UQ is imperative because the cost of a false positive or negative is high. Users desire high confidence in either “target” or “not target” predictions, and if high confidence cannot be achieved, more inspection is warranted. One possible solution is Bayesian neural networks (BNN). Compared to traditional neural networks, which are constructed by choosing a loss function, BNNs take a probabilistic approach and place a likelihood function on the data and prior distributions for all parameters (weights and biases), which in turn implies a loss function. Training results in posterior predictive distributions, from which prediction intervals can be computed, rather than only point estimates. Heatmaps show where and how much uncertainty there is at any location and give insight into the physical area being imaged as well as possible improvements to the model. Using pytorch and pyro software, we test a BNN on a simulated HSI scene produced using the Rochester Institute of Technology (RIT) Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The scene geometry used was also developed by RIT and is a detailed representation of a suburban neighborhood near Rochester, NY, named “MegaScene.” Target panels were inserted for this effort, using paint reflectance and bi-directional reflectance distribution function (BRDF) data acquired from the Nonconventional Exploitation Factors Database System (NEFDS). The target panels range in size from large to subpixel, with some targets only partially visible. Multiple renderings of this scene are created under different times of day and with different atmospheric conditions to assess model generalization. We explore the uncertainty heatmap for different times and environments on MegaScene as well as individual target predictive distributions to gain insight into the power of BNNs. |
Daniel Ries | Breakout |
| 2019 |
|
Breakout A 2nd-Order Uncertainty Quantification Framework Applied to a Turbulence Model Validation Effort (Abstract)
Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a “black box”. Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow. |
Robert Baurle | Breakout |
![]() | 2019 |
|
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process where important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data like failure date, corrective action, and part number. The dashboard is an RShiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end-user to filter to the EFRs that they care about and visualize metadata, such as the geographic region where the failure occurred, over time, allowing previously unknown trends to be seen. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and look at time- and region-based trends in these common equipment failures. (A small topic-modeling sketch follows this entry.) |
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
| 2021 |
|
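To illustrate the topic-modeling step mentioned in the abstract above, here is a minimal sketch with the tm and topicmodels packages on a few made-up failure narratives; a real EFR corpus would be far larger and messier.

```r
# Minimal sketch of the topic-modeling piece on made-up failure narratives;
# a real EFR corpus would be far larger and messier.
library(tm)
library(topicmodels)

narratives <- c("hydraulic pump seal leak caused pressure loss",
                "corrosion found on connector pins, harness replaced",
                "pump bearing wear led to vibration and seal leak",
                "connector corrosion after water intrusion, pins cleaned")

corpus <- VCorpus(VectorSource(narratives))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

dtm <- DocumentTermMatrix(corpus)
lda <- LDA(dtm, k = 2, control = list(seed = 1))   # two notional failure themes
terms(lda, 5)                                      # top terms per theme
topics(lda)                                        # most likely theme per report
```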
Breakout Technical Leadership Panel - Tuesday Afternoon |
Paul Roberts Chief Engineer Engineering and Safety Center |
Breakout | 2016 |
||
Panelist 4 |
Calvin Robinson NASA (bio)
Calvin Robinson is a Data Architect within the Information and Applications Division at NASA Glenn Research Center. He has over 10 years of experience supporting data analysis and simulation development for research, and currently supports several key data management efforts to make data more discoverable and aligned with FAIR principles. Calvin oversees the Center’s Information Management Program and supports individuals leading strategic AIML efforts within the Agency. Calvin holds a BS in Computer Science and Engineering from the University of Toledo. |
Session Recording |
Recording | 2022 |
|
Contributed Infrastructure Lifetimes (Abstract)
Infrastructure refers to the structures, utilities, and interconnected roadways that support the work carried out at a given facility. In the case of the Lawrence Livermore National Laboratory, infrastructure is considered exclusive of scientific apparatus and safety and security systems. LLNL inherited its infrastructure management policy from the University of California, which managed the site during LLNL’s first five decades. This policy is quite different from that used in commercial property management. Commercial practice weighs reliability over cost by replacing infrastructure at industry-standard lifetimes. LLNL practice weighs overall lifecycle cost, seeking to mitigate reliability issues through inspection. To formalize this risk management policy, a careful statistical study was undertaken using 20 years of infrastructure replacement data. In this study, care was taken to adjust for left truncation as well as right censoring. 57 distinct infrastructure class data sets were fitted using MLE to the Generalized Gamma distribution. This distribution is useful because it produces a weighted blending of discrete failure (Weibull model) and complex system failure (Lognormal model). These parametric fittings then yielded median lifetimes and conditional probabilities of failure. From the conditional probabilities, bounds on budget costs could be computed as expected values. This has provided a scientific basis for rational budget management as well as aided operations by prioritizing inspection, repair, and replacement activities. (An illustrative fitting sketch follows this entry.) |
William Romine Lawrence Livermore National Laboratory |
Contributed | Materials | 2018 |
|
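As an illustration of the kind of fit described above (generalized gamma, with right censoring and left truncation), here is a sketch assuming the flexsurv package; the simulated ages and the 20-year observation window are invented and only mimic the structure of the study.

```r
# Generalized gamma lifetime fit with right censoring and left truncation, assuming the
# 'flexsurv' package. The simulated ages and 20-year observation window are invented;
# they only mimic the structure described in the abstract.
library(survival)
library(flexsurv)
set.seed(1)

n <- 200
true_age  <- rweibull(n, shape = 2, scale = 40)    # notional lifetimes (years)
entry_age <- runif(n, 0, 20)                       # age when record-keeping began
keep      <- true_age > entry_age                  # left truncation: must survive to entry

entry <- entry_age[keep]
exit  <- pmin(true_age[keep], entry + 20)          # right censoring at a 20-year window
event <- as.numeric(true_age[keep] <= entry + 20)

fit <- flexsurvreg(Surv(entry, exit, event) ~ 1, dist = "gengamma")
fit$res                                            # mu, sigma, Q estimates

pars <- fit$res[, "est"]
qgengamma(0.5, mu = pars["mu"], sigma = pars["sigma"], Q = pars["Q"])   # fitted median lifetime
```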
Contributed Workforce Analytics (Abstract)
Several statistical methods have been used effectively to model workforce behavior, specifically attrition due to retirement and voluntary separation[1]. Additionally, various authors have introduced career development[2] as a meaningful aspect of workforce planning. While both general and more specific attrition modeling techniques yield useful results, only limited success has followed attempts to quantify career stage transition probabilities. A complete workforce model would include quantifiable flows both vertically and horizontally in the network described pictorially here at a single time point in Figure 1. The horizontal labels in Figure 1 convey one possible meaning assignable to career stage transition – in this case, competency. More formal examples might include rank within a hierarchy such as in a military organization or grade in a civil service workforce. In the case of the Nuclear Weapons labs, knowing that the specialized, classified knowledge needed to deal with Stockpile Stewardship is being preserved, as evidenced by the production of Masters (individuals capable of independent technical work), is also of interest to governmental oversight. In this paper we examine the allocation of labor involved in a specific Life Extension program at LLNL. This growing workforce is described by discipline and career stage to determine how well the Norden-Rayleigh development cost model[3] fits the data. Since this model underlies much budget estimation within both DOD and NNSA, the results should be of general interest. Data are also examined as a possible basis for quantifying horizontal flows in Figure 1. (The standard form of the Norden-Rayleigh model is sketched after this entry.) |
William Romine Lawrence Livermore National Laboratory |
Contributed | 2018 |
||
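For orientation on the Norden-Rayleigh development cost model referenced above, the standard textbook form is sketched below; the LLNL study's exact parameterization may differ.

```latex
% Standard textbook form of the Norden-Rayleigh development-effort model (shown for
% orientation; the study's exact parameterization may differ):
% cumulative effort expended by time t, with K the total effort and a a shape parameter
E(t) = K\left(1 - e^{-a t^{2}}\right),
% staffing profile (effort rate), which peaks at t^{*} = 1/\sqrt{2a}
\frac{dE}{dt} = 2 a K\, t\, e^{-a t^{2}}.
```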
Tutorial Data Integrity For Deep Learning Models (Abstract)
Deep learning models are built from algorithm frameworks that fit parameters over a large set of structured historical examples. Model robustness relies heavily on the accuracy and quality of the input training datasets. This mini-tutorial explores the practical implications of data quality issues when attempting to build reliable and accurate deep learning models. The tutorial will review the basics of neural networks and model building, and then dive into data quality considerations using practical examples. An understanding of data integrity and data quality is pivotal for verification and validation of deep learning models, and this tutorial will provide students with a foundation in this topic. |
Roshan Patel Systems Engineer/Data Scientist US Army ![]() (bio)
Mr. Roshan Patel is a systems engineer and data scientist working at CCDC Armament Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master’s degree in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. Mr. Patel is the current AI lead for the Systems Engineering Directorate at CCDC Armaments Center. |
Tutorial | 2022 |
||
Breakout The Use of DOE vs OFAT in the Calibration of AEDC Wind Tunnels (Abstract)
The use of statistically rigorous methods to support testing at Arnold Engineering Development Complex (AEDC) has been an area of focus in recent years. As part of this effort, the use of Design of Experiments (DOE) has been introduced for calibration of AEDC wind tunnels. Historical calibration efforts used One-Factor-at-a-Time (OFAT) test matrices, with a concentration on conditions of interest to test customers. With the introduction of DOE, the number of test points collected during the calibration decreased, and the points were not necessarily located at historical calibration points. To validate the use of DOE for calibration purposes, the 4-ft Aerodynamic Wind Tunnel 4T was calibrated using both DOE and OFAT methods. The results from the OFAT calibration were compared to the model developed from the DOE data points, and it was determined that the DOE model sufficiently captured the tunnel behavior within the desired levels of uncertainty. DOE analysis also showed that within Tunnel 4T, systematic errors are insignificant, as indicated by agreement noted between the two methods. Based on the results of this calibration, a decision was made to apply DOE methods to future tunnel calibrations, as appropriate. The development of the DOE matrix in Tunnel 4T required the consideration of operational limitations, measurement uncertainties, and differing tunnel behavior over the performance map. Traditional OFAT methods allowed tunnel operators to set conditions efficiently while minimizing time-consuming plant configuration changes. DOE methods, however, require the use of randomization, which had the potential to add significant operation time to the calibration. Additionally, certain tunnel parameters, such as variable porosity, are only of interest in a specific region of the performance map. In addition to operational concerns, measurement uncertainty was an important consideration for the DOE matrix. At low tunnel total pressures, the uncertainty in the Mach number measurements increases significantly. Aside from introducing non-constant variance into the calibration model, the large uncertainties at low pressures can increase overall uncertainty in the calibration in high pressure regions where the uncertainty would otherwise be lower. At high pressures and transonic Mach numbers, low Mach number uncertainties are required to meet drag count uncertainty requirements. To satisfy both the operational and calibration requirements, the DOE matrix was divided into multiple independent models over the tunnel performance map. Following the Tunnel 4T calibration, AEDC calibrated the Propulsion Wind Tunnel 16T, Hypersonic Wind Tunnels B and C, and the National Full-Scale Aerodynamics Complex (NFAC). DOE techniques were successfully applied to the calibration of Tunnel B and NFAC, while a combination of DOE and OFAT test methods was used in Tunnel 16T because of operational and uncertainty requirements over a portion of the performance map. Tunnel C was calibrated using OFAT because of operational constraints. The cost of calibrating these tunnels has not been significantly reduced through the use of DOE, but the characterization of test condition uncertainties is firmly based in statistical methods. |
Rebecca Rought AEDC/TSTA |
Breakout | Materials | 2018 |
|
Breakout “High Velocity Analytics for NASA JPL Mars Rover Experimental Design” (Abstract)
Rigorous characterization of system capabilities is essential for defensible decisions in test and evaluation (T&E). Analysis of designed experiments is not usually associated with “big” data analytics, as there are typically a modest number of runs, factors, and responses. The Mars Rover program has recently conducted several disciplined DOEs on prototype coring drill performance with approximately 10 factors along with scores of responses and hundreds of recorded covariates. The goal is to characterize the ‘at-this-time’ capability to confirm what the scientists and engineers already know about the system, answer specific performance and quality questions across multiple environments, and inform future tests to optimize performance. A ‘rigorous’ characterization required that not just one analytical path be taken, but a combination of interactive data visualization, classic DOE analysis screening methods, and newer methods from predictive analytics such as decision trees. With hundreds of response surface models across many test series and qualitative factors, the methods used had to efficiently find the signals hidden in the noise. Participants will be guided through an end-to-end analysis workflow with actual data from many tests (often Definitive Screening Designs) of the Rover prototype coring drill. We will show data assembly, data cleaning (e.g., missing values and outliers), data exploration with interactive graphical designs, variable screening, response partitioning, data tabulation, model building with stepwise and other methods, and model diagnostics. Software packages such as R and JMP will be used. (A small screening-workflow sketch follows this entry.) |
Heath Rushing Co-founder/Principal Adsurgo (bio)
Heath Rushing is the cofounder of Adsurgo and author of the book Design and Analysis of Experiments by Douglas Montgomery: A Supplement for using JMP. Previously, he was the JMP Training Manager at SAS, a quality engineer at Amgen, an assistant professor at the Air Force Academy, and a scientific analyst for OT&E in the Air Force. In addition, over the last six years, he has taught Science of Tests (SOT) courses to T&E organizations throughout the DoD. |
Breakout | Materials | 2016 |
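To give a flavor of the screening-and-modeling workflow described above, here is a small R sketch on a simulated stand-in for a definitive-screening-design response; the factor names, effects, and data are hypothetical, not Mars Rover test data.

```r
# Sketch of the screening-and-modeling workflow on a small simulated stand-in for a
# definitive-screening-design response; factor names and effects are hypothetical.
library(rpart)
set.seed(1)

n <- 60
dsd <- data.frame(wob  = sample(c(-1, 0, 1), n, replace = TRUE),   # weight on bit
                  rpm  = sample(c(-1, 0, 1), n, replace = TRUE),
                  temp = sample(c(-1, 0, 1), n, replace = TRUE),
                  rock = factor(sample(c("soft", "hard"), n, replace = TRUE)))
dsd$rate <- with(dsd, 3 * wob + 1.5 * rpm - 2 * wob * rpm +
                   ifelse(rock == "hard", -2, 0)) + rnorm(n)

# Stepwise screening over main effects, two-factor interactions, and quadratics
full <- lm(rate ~ (wob + rpm + temp + rock)^2 + I(wob^2) + I(rpm^2) + I(temp^2), data = dsd)
sel  <- step(full, direction = "both", trace = FALSE)
summary(sel)

# Partition (decision-tree) view of the same response
tree <- rpart(rate ~ wob + rpm + temp + rock, data = dsd)
printcp(tree)
```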