Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Short Course Data Farming (Abstract)
This tutorial is designed for newcomers to simulation-based experiments. Data farming is the process of using computational experiments to “grow” data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. The focus of the tutorial will be on gaining practical experience with setting up and running simulation experiments, leveraging recent advances in large-scale simulation experimentation pioneered by the Simulation Experiments & Efficient Designs (SEED) Center for Data Farming at the Naval Postgraduate School (http://harvest.nps.edu). Participants will be introduced to fundamental concepts, and jointly explore simulation models in an interactive setting. Demonstrations and written materials will supplement guided, hands-on activities through the setup, design, data collection, and analysis phases of an experiment-driven simulation study. |
Susan Sanchez Naval Postgraduate School |
Short Course | Materials | 2017 |
Tutorial Pseudo-Exhaustive Testing – Part 1 (Abstract)
Exhaustive testing is infeasible when testing complex engineered systems. Fortunately, a combinatorial testing approach can be almost as effective as exhaustive testing but at dramatically lower cost. The effectiveness of this approach is due to the mathematical construct on which it is based, known as a covering array. This tutorial is divided into two sections. Section 1 introduces covering arrays and a few covering array metrics, then shows how covering arrays are used in combinatorial testing methodologies. Section 2 focuses on practical applications of combinatorial testing, including a commercial aviation example, an example that focuses on a widely used machine learning library, and other examples that illustrate how common testing challenges can be addressed. In the process of working through these examples, an easy-to-use tool for generating covering arrays will be demonstrated. |
Ryan Lekivetz Research Statistician Developer SAS Institute (bio)
Ryan Lekivetz is a Principal Research Statistician Developer for the JMP Division of SAS where he implements features for the Design of Experiments platforms in JMP software. |
Tutorial |
Recording | 2021 |
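To make the covering-array idea in the entry above concrete, here is a minimal base-R sketch (the toy array and helper function are illustrative, not material from the tutorial): a four-run, strength-2 covering array for three two-level factors, with a check that every pair of factors sees all four level combinations.

```r
# Toy 2-level, strength-2 covering array for three factors (A, B, C):
# four runs cover every level combination for every pair of factors.
ca <- matrix(c(0, 0, 0,
               0, 1, 1,
               1, 0, 1,
               1, 1, 0),
             ncol = 3, byrow = TRUE,
             dimnames = list(NULL, c("A", "B", "C")))

# Check strength-2 coverage: every column pair must show all 2x2 level combos.
covers_pairs <- function(design) {
  pairs <- combn(ncol(design), 2)
  all(apply(pairs, 2, function(p) {
    sub    <- design[, p, drop = FALSE]
    combos <- unique(sub)
    nrow(combos) == prod(apply(sub, 2, function(x) length(unique(x))))
  }))
}

covers_pairs(ca)  # TRUE
```

Exhaustive testing of three two-level factors would need 2^3 = 8 runs; the covering array achieves full pairwise coverage in 4, and the savings grow rapidly as the number of factors increases.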
Breakout Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks (Abstract)
Collaborative autonomous sensor networks have recently been used in many applications including inspection, law enforcement, search and rescue, and national security. They offer scalable, low cost solutions which are robust to the loss of multiple sensors in hostile or dangerous environments. While often comprised of less capable sensors, the performance of a large network can approach the performance of far more capable and expensive platforms if nodes are effectively coordinating their sensing actions and data processing. This talk will summarize work to date at LLNL on distributed signal processing and decentralized optimization algorithms for collaborative autonomous sensor networks, focusing on ADMM-based solutions for detection/estimation problems and sequential greedy optimization solutions which maximize submodular functions, e.g. mutual information. |
Ryan Goldhahn | Breakout | 2019 |
|
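As a companion to the entry above, the sketch below illustrates one of the named ingredients, sequential greedy selection of sensors under a submodular information-style objective, in base R with a made-up spatial covariance (illustrative only, not the LLNL algorithms).

```r
set.seed(12)
# Candidate sensor locations and a spatial covariance for the field they observe
locs  <- as.matrix(expand.grid(x = seq(0, 1, length.out = 6),
                               y = seq(0, 1, length.out = 6)))
Sigma <- exp(-as.matrix(dist(locs)) / 0.3) + diag(1e-6, nrow(locs))

# Log-determinant of the selected covariance block: a standard submodular
# surrogate for joint information in sensor-placement problems
obj <- function(S) {
  if (length(S) == 0) return(0)
  as.numeric(determinant(Sigma[S, S, drop = FALSE])$modulus)
}

# Sequential greedy selection: at each step add the sensor with the largest gain
greedy_select <- function(k) {
  chosen <- integer(0)
  for (j in seq_len(k)) {
    candidates <- setdiff(seq_len(nrow(locs)), chosen)
    gains      <- sapply(candidates, function(i) obj(c(chosen, i)) - obj(chosen))
    chosen     <- c(chosen, candidates[which.max(gains)])
  }
  chosen
}

sel <- greedy_select(5)
locs[sel, ]   # selected sensors spread out to maximize joint information
```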
Breakout Resampling Methods (Abstract)
This tutorial presents widely used resampling methods, including bootstrapping, cross-validation, and permutation tests. Underlying theories will be presented briefly, but the primary focus will be on applications. A new graph-theoretic approach to change detection will be discussed as a specific application of permutation testing. Examples will be demonstrated in R; participants are encouraged to bring their own portable computers to follow along using datasets provided by the instructor. |
David Ruth United States Naval Academy |
Breakout | Materials | 2017 |
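A minimal base-R sketch of two of the resampling methods listed in the entry above, using simulated data rather than the instructor's datasets: a percentile bootstrap interval for a mean and a two-sample permutation test.

```r
set.seed(1)
x <- rnorm(30, mean = 5)     # simulated sample
y <- rnorm(30, mean = 5.5)   # second sample for the permutation test

# Nonparametric bootstrap: percentile interval for the mean of x
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))

# Permutation test for a difference in means between x and y
obs_diff  <- mean(x) - mean(y)
pooled    <- c(x, y)
perm_diff <- replicate(5000, {
  idx <- sample(length(pooled), length(x))
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm_diff) >= abs(obs_diff))   # two-sided permutation p-value
```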
Breakout High Velocity Analytics for NASA JPL Mars Rover Experimental Design (Abstract)
Rigorous characterization of system capabilities is essential for defensible decisions in test and evaluation (T&E). Analysis of designed experiments is not usually associated with “big” data analytics, as there are typically a modest number of runs, factors, and responses. The Mars Rover program has recently conducted several disciplined DOEs on prototype coring drill performance with approximately 10 factors along with scores of responses and hundreds of recorded covariates. The goal is to characterize the ‘at-this-time’ capability to confirm what the scientists and engineers already know about the system, answer specific performance and quality questions across multiple environments, and inform future tests to optimize performance. A ‘rigorous’ characterization required that not just one analytical path be taken, but a combination of interactive data visualization, classic DOE analysis screening methods, and newer methods from predictive analytics such as decision trees. With hundreds of response surface models across many test series and qualitative factors, these methods had to efficiently find the signals hidden in the noise. Participants will be guided through an end-to-end analysis workflow with actual data from many tests (often Definitive Screening Designs) of the Rover prototype coring drill. We will show data assembly, data cleaning (e.g., missing values and outliers), data exploration with interactive graphical displays, variable screening, response partitioning, data tabulation, model building with stepwise and other methods, and model diagnostics. Software packages such as R and JMP will be used. |
Heath Rushing Co-founder/Principal Adsurgo (bio)
Heath Rushing is the cofounder of Adsurgo and author of the book Design and Analysis of Experiments by Douglas Montgomery: A Supplement for using JMP. Previously, he was the JMP Training Manager at SAS, a quality engineer at Amgen, an assistant professor at the Air Force Academy, and a scientific analyst for OT&E in the Air Force. In addition, over the last six years, he has taught Science of Tests (SOT) courses to T&E organizations throughout the DoD. |
Breakout | Materials | 2016 |
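The screening-and-modeling portion of the workflow described above can be imitated in miniature. The sketch below uses a simulated three-factor experiment (a stand-in, not the Rover coring-drill data or an actual Definitive Screening Design) and applies stepwise selection to a full quadratic model, one of the model-building paths the abstract mentions.

```r
set.seed(2)
# Simulated stand-in for a screening design: three factors in coded units
d <- expand.grid(x1 = c(-1, 0, 1), x2 = c(-1, 0, 1), x3 = c(-1, 0, 1))
d$y <- with(d, 3 + 2 * x1 - 1.5 * x3 + 1.2 * x1 * x2 + rnorm(nrow(d), sd = 0.5))

# Full quadratic response surface model, then stepwise (AIC) screening of terms
full     <- lm(y ~ (x1 + x2 + x3)^2 + I(x1^2) + I(x2^2) + I(x3^2), data = d)
screened <- step(full, direction = "both", trace = FALSE)
summary(screened)
```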
Breakout The Use of DOE vs OFAT in the Calibration of AEDC Wind Tunnels (Abstract)
The use of statistically rigorous methods to support testing at Arnold Engineering Development Complex (AEDC) has been an area of focus in recent years. As part of this effort, the use of Design of Experiments (DOE) has been introduced for calibration of AEDC wind tunnels. Historical calibration efforts used One-Factor-at-a-Time (OFAT) test matrices, with a concentration on conditions of interest to test customers. With the introduction of DOE, the number of test points collected during the calibration decreased, and the points were not necessarily located at historical calibration points. To validate the use of DOE for calibration purposes, the 4-ft Aerodynamic Wind Tunnel 4T was calibrated using both DOE and OFAT methods. The results from the OFAT calibration were compared to a model developed from the DOE data points, and it was determined that the DOE model sufficiently captured the tunnel behavior within the desired levels of uncertainty. DOE analysis also showed that within Tunnel 4T, systematic errors are insignificant, as indicated by the agreement noted between the two methods. Based on the results of this calibration, a decision was made to apply DOE methods to future tunnel calibrations, as appropriate. The development of the DOE matrix in Tunnel 4T required the consideration of operational limitations, measurement uncertainties, and differing tunnel behavior over the performance map. Traditional OFAT methods allowed tunnel operators to set conditions efficiently while minimizing time-consuming plant configuration changes. DOE methods, however, require the use of randomization, which had the potential to add significant operation time to the calibration. Additionally, certain tunnel parameters, such as variable porosity, are only of interest in a specific region of the performance map. In addition to operational concerns, measurement uncertainty was an important consideration for the DOE matrix. At low tunnel total pressures, the uncertainty in the Mach number measurements increases significantly. Aside from introducing non-constant variance into the calibration model, the large uncertainties at low pressures can increase overall uncertainty in the calibration in high-pressure regions where the uncertainty would otherwise be lower. At high pressures and transonic Mach numbers, low Mach number uncertainties are required to meet drag count uncertainty requirements. To satisfy both the operational and calibration requirements, the DOE matrix was divided into multiple independent models over the tunnel performance map. Following the Tunnel 4T calibration, AEDC calibrated the Propulsion Wind Tunnel 16T, Hypersonic Wind Tunnels B and C, and the National Full-Scale Aerodynamics Complex (NFAC). DOE techniques were successfully applied to the calibration of Tunnel B and NFAC, while a combination of DOE and OFAT test methods was used in Tunnel 16T because of operational and uncertainty requirements over a portion of the performance map. Tunnel C was calibrated using OFAT because of operational constraints. The cost of calibrating these tunnels has not been significantly reduced through the use of DOE, but the characterization of test condition uncertainties is now firmly based in statistical methods. |
Rebecca Rought AEDC/TSTA |
Breakout | Materials | 2018 |
Contributed Infrastructure Lifetimes (Abstract)
Infrastructure refers to the structures, utilities, and interconnected roadways that support the work carried out at a given facility. In the case of the Lawrence Livermore National Laboratory, infrastructure is considered exclusive of scientific apparatus and safety and security systems. LLNL inherited its infrastructure management policy from the University of California, which managed the site during LLNL’s first five decades. This policy is quite different from that used in commercial property management. Commercial practice weighs reliability over cost by replacing infrastructure at industry-standard lifetimes. LLNL practice weighs overall lifecycle cost, seeking to mitigate reliability issues through inspection. To formalize this risk management policy, a careful statistical study was undertaken using 20 years of infrastructure replacement data. In this study, care was taken to adjust for left truncation as well as right censoring. 57 distinct infrastructure class data sets were fitted using maximum likelihood estimation (MLE) to the Generalized Gamma distribution. This distribution is useful because it produces a weighted blending of discrete failure (Weibull model) and complex system failure (Lognormal model). These parametric fits then yielded median lifetimes and conditional probabilities of failure. From the conditional probabilities, bounds on budget costs could be computed as expected values. This has provided a scientific basis for rational budget management as well as aided operations by prioritizing inspection, repair, and replacement activities. |
William Romine Lawrence Livermore National Laboratory |
Contributed | Materials | 2018 |
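A minimal sketch of the kind of fit described above, assuming the flexsurv package and its (mu, sigma, Q) generalized gamma parameterization, with simulated lifetimes rather than LLNL records: maximum likelihood estimation under left truncation and right censoring, followed by a median-lifetime estimate.

```r
library(flexsurv)    # generalized gamma distribution and flexsurvreg()
library(survival)    # Surv() objects

set.seed(3)
n      <- 200
life   <- rweibull(n, shape = 1.8, scale = 40)   # true (unknown) lifetimes
enter  <- runif(n, 0, 15)                        # age when record-keeping began
seen   <- life > enter                           # only items surviving to 'enter' are observed
stop_t <- pmin(life[seen], enter[seen] + 20)     # administrative right censoring at +20 years
status <- as.numeric(life[seen] <= enter[seen] + 20)

dat <- data.frame(start = enter[seen], stop = stop_t, status = status)

# MLE fit of the generalized gamma with left truncation (start) and right censoring
fit <- flexsurvreg(Surv(start, stop, status) ~ 1, data = dat, dist = "gengamma")
fit

# Median lifetime implied by the fitted parameters
pars <- fit$res[, "est"]
qgengamma(0.5, mu = pars["mu"], sigma = pars["sigma"], Q = pars["Q"])
```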
Contributed Workforce Analytics (Abstract)
Several statistical methods have been used effectively to model workforce behavior, specifically attrition due to retirement and voluntary separation [1]. Additionally, various authors have introduced career development [2] as a meaningful aspect of workforce planning. While both general and more specific attrition modeling techniques yield useful results, only limited success has followed attempts to quantify career stage transition probabilities. A complete workforce model would include quantifiable flows both vertically and horizontally in the network described pictorially, at a single time point, in Figure 1. The horizontal labels in Figure 1 convey one possible meaning assignable to career stage transition – in this case, competency. More formal examples might include rank within a hierarchy, such as in a military organization, or grade in a civil service workforce. In the case of the Nuclear Weapons labs, knowing that the specialized, classified knowledge needed to deal with Stockpile Stewardship is being preserved, as evidenced by the production of Masters (individuals capable of independent technical work), is also of interest to governmental oversight. In this paper we examine the allocation of labor involved in a specific Life Extension Program at LLNL. This growing workforce is described by discipline and career stage to determine how well the Norden-Rayleigh development cost model [3] fits the data. Since this model underlies much budget estimation within both DOD and NNSA, the results should be of general interest. The data are also examined as a possible basis for quantifying the horizontal flows in Figure 1. |
William Romine Lawrence Livermore National Laboratory |
Contributed | 2018 |
|
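The Norden-Rayleigh fit mentioned above can be sketched with nonlinear least squares, assuming the usual cumulative-effort form E(t) = K(1 - exp(-a t^2)); the data below are simulated, not the LLNL Life Extension Program labor data.

```r
set.seed(4)
# Norden-Rayleigh cumulative effort: E(t) = K * (1 - exp(-a * t^2))
t_obs  <- seq(0.5, 8, by = 0.5)                  # program time (e.g., years)
K_true <- 500; a_true <- 0.12                    # total effort and shape
effort <- K_true * (1 - exp(-a_true * t_obs^2)) + rnorm(length(t_obs), sd = 10)

# Nonlinear least-squares fit of the two Norden-Rayleigh parameters
fit <- nls(effort ~ K * (1 - exp(-a * t_obs^2)),
           start = list(K = max(effort), a = 0.1))
coef(fit)

# Staffing profile implied by the fit: dE/dt = 2 * K * a * t * exp(-a * t^2)
K_hat <- coef(fit)["K"]; a_hat <- coef(fit)["a"]
plot(t_obs, 2 * K_hat * a_hat * t_obs * exp(-a_hat * t_obs^2),
     type = "l", xlab = "time", ylab = "staffing rate")
```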
Breakout Technical Leadership Panel-Tuesday Afternoon |
Paul Roberts Chief Engineer Engineering and Safety Center |
Breakout | 2016 |
|
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process where important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data like failure date, corrective action, and part number. The dashboard is an RShiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end user to filter to the EFRs that they care about and visualize metadata, such as the geographic region where the failure occurred, over time, allowing previously unknown trends to be seen. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and look at time- and region-based trends in these common equipment failures. |
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects, including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
| 2021 |
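A minimal sketch of the topic-modeling step described above, using the tm and topicmodels packages named in the abstract but tiny made-up failure narratives rather than actual EFR text: build a document-term matrix and fit a two-topic LDA model.

```r
library(tm)            # corpus handling and document-term matrices
library(topicmodels)   # latent Dirichlet allocation (LDA)

# Tiny made-up failure narratives standing in for EFR free-text fields
reports <- c("pump seal leak caused hydraulic pressure loss",
             "corrosion found on connector pins during inspection",
             "hydraulic pump failed after pressure spike",
             "connector corrosion led to intermittent signal loss")

corpus <- VCorpus(VectorSource(reports))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("en"))
dtm    <- DocumentTermMatrix(corpus)

# Fit a two-topic LDA model and inspect the top terms per topic
lda <- LDA(dtm, k = 2, control = list(seed = 5))
terms(lda, 4)
```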
Breakout A 2nd-Order Uncertainty Quantification Framework Applied to a Turbulence Model Validation Effort (Abstract)
Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a “black box”. Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow. |
Robert Baurle | Breakout |
| 2019 |
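The mixed aleatory/epistemic treatment described above is commonly implemented as a nested ("second-order") sampling loop around a black-box solver. The sketch below uses a trivial stand-in function, not the actual CFD code or the authors' framework: the outer loop samples an interval-valued epistemic parameter, the inner loop propagates aleatory variability, and the result is a family of CDFs whose spread reflects the epistemic component.

```r
set.seed(6)
black_box <- function(x, theta) exp(-theta * x) + 0.1 * x   # stand-in for the flow solver

n_epistemic <- 20     # outer loop: epistemic (lack-of-knowledge) parameter
n_aleatory  <- 500    # inner loop: aleatory (random) input

# Epistemic parameter known only to lie in an interval; sample it uniformly
theta_samples <- runif(n_epistemic, min = 0.5, max = 1.5)

# For each epistemic draw, propagate the aleatory input through the solver
qoi <- sapply(theta_samples, function(theta) {
  x <- rnorm(n_aleatory, mean = 2, sd = 0.3)   # aleatory input distribution
  black_box(x, theta)
})

# Family of empirical CDFs: the spread across curves reflects epistemic uncertainty
grid <- seq(min(qoi), max(qoi), length.out = 200)
cdfs <- apply(qoi, 2, function(col) ecdf(col)(grid))
matplot(grid, cdfs, type = "l", lty = 1, col = "grey40",
        xlab = "quantity of interest", ylab = "CDF")
```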
Breakout Using Bayesian Neural Networks for Uncertainty Quantification of Hyperspectral Image Target Detection (Abstract)
Target detection in hyperspectral images (HSI) has broad value in defense applications, and neural networks have recently begun to be applied for this problem. A common criticism of neural networks is that they give a point estimate with no uncertainty quantification (UQ). In defense applications, UQ is imperative because the cost of a false positive or negative is high. Users desire high confidence in either “target” or “not target” predictions, and if high confidence cannot be achieved, more inspection is warranted. One possible solution is Bayesian neural networks (BNNs). Compared to traditional neural networks, which are constructed by choosing a loss function, BNNs take a probabilistic approach and place a likelihood function on the data and prior distributions on all parameters (weights and biases), which in turn implies a loss function. Training results in posterior predictive distributions, from which prediction intervals can be computed, rather than only point estimates. Heatmaps show where and how much uncertainty there is at any location and give insight into the physical area being imaged as well as possible improvements to the model. Using the PyTorch and Pyro software packages, we test BNNs on a simulated HSI scene produced using the Rochester Institute of Technology (RIT) Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The scene geometry used is also developed by RIT and is a detailed representation of a suburban neighborhood near Rochester, NY, named “MegaScene.” Target panels were inserted for this effort, using paint reflectance and bi-directional reflectance distribution function (BRDF) data acquired from the Nonconventional Exploitation Factors Database System (NEFDS). The target panels range in size from large to subpixel, with some targets only partially visible. Multiple renderings of this scene are created under different times of day and with different atmospheric conditions to assess model generalization. We explore the uncertainty heatmap for different times and environments on MegaScene, as well as individual target predictive distributions, to gain insight into the power of BNNs. |
Daniel Ries | Breakout |
| 2019 |
Breakout Application of Design of Experiments to a Calibration of the National Transonic Facility (Abstract)
Recent work at the National Transonic Facility (NTF) at the NASA Langley Research Center has shown that a substantial reduction in freestream pressure fluctuations can be achieved by positioning the moveable model support walls and plenum re-entry flaps to choke the flow just downstream of the test section. This choked condition reduces the upstream propagation of disturbances from the diffuser into the test section, resulting in improved Mach number control and reduced freestream variability. The choked conditions also affect the Mach number gradient and distribution in the test section, so a calibration experiment was undertaken to quantify the effects of the model support wall and re-entry flap movements on the facility freestream flow using a centerline static pipe. A design of experiments (DOE) approach was used to develop restricted-randomization experiments to determine the effects of total pressure, reference Mach number, model support wall angle, re-entry flap gap height, and test section longitudinal location on the centerline static pressure and local Mach number distributions for a reference Mach number range from 0.7 to 0.9. Tests were conducted using air as the test medium at a total temperature of 120 °F as well as for gaseous nitrogen at cryogenic total temperatures of -50, -150, and -250 °F. The resulting data were used to construct quadratic polynomial regression models for these factors using a Restricted Maximum Likelihood (REML) estimator approach. Independent validation data were acquired at off-design conditions to check the accuracy of the regression models. Additional experiments were designed and executed over the full Mach number range of the facility (0.2 ≤ Mref ≤ 1.1) at each of the four total temperature conditions, but with the model support walls and re-entry flaps set to their nominal positions, in order to provide calibration regression models for operational experiments where a choked condition downstream of the test section is either not feasible or not required. This presentation focuses on the design, execution, analysis, and results for the two experiments performed using air at a total temperature of 120 °F. Comparisons are made between the regression model output and validation data, as well as the legacy NTF calibration results, and future work is discussed. |
Matt Rhode NASA |
Breakout | Materials | 2018 |
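A minimal sketch of the kind of REML fit described above, assuming the lme4 package and using simulated data with a simple randomized-block grouping as a stand-in for the facility's restricted-randomization structure (variable names are illustrative, not the NTF factors): quadratic polynomial fixed effects with a random effect for the run grouping.

```r
library(lme4)   # lmer() fits mixed models by REML

set.seed(7)
# Blocks stand in for groups of runs that share a hard-to-change setup
dat <- expand.grid(block = factor(1:8), mach = c(-1, 0, 1), wall = c(-1, 0, 1))
dat$y <- with(dat, 0.02 * mach + 0.01 * wall - 0.005 * mach^2 + 0.004 * mach * wall) +
         rnorm(8, sd = 0.002)[dat$block] + rnorm(nrow(dat), sd = 0.001)

# Quadratic polynomial fixed effects; the grouping error enters as a random effect
fit <- lmer(y ~ mach + wall + I(mach^2) + I(wall^2) + mach:wall + (1 | block),
            data = dat, REML = TRUE)
summary(fit)
```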
Breakout Aerospace Measurement and Experimental System Development Characterization (Abstract)
Co-Authors: Sean A. Commo, Ph.D., P.E. and Peter A. Parker, Ph.D., P.E. NASA Langley Research Center, Hampton, Virginia, USA Austin D. Overmeyer, Philip E. Tanner, and Preston B. Martin, Ph.D. U.S. Army Research, Development, and Engineering Command, Hampton, Virginia, USA. The application of statistical engineering to helicopter wind-tunnel testing was explored during two powered rotor entries. The U.S. Army Aviation Development Directorate Joint Research Program Office and the NASA Revolutionary Vertical Lift Project performed these tests jointly at the NASA Langley Research Center. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small segment of the overall tests devoted to developing case studies of a statistical engineering approach. Data collected during each entry were used to estimate response surface models characterizing vehicle performance, a novel contribution of statistical engineering applied to powered rotor-wing testing. Additionally, a 16- to 47-times reduction in the number of data points required was estimated when comparing a statistically-engineered approach to a conventional one-factor-at-a-time approach. |
Ray Rhew NASA |
Breakout | Materials | 2016 |
Breakout Bayesian Component Reliability Estimation: F-35 Case Study (Abstract)
A challenging aspect of a system reliability assessment is integrating multiple sources of information, including component, subsystem, and full-system data, previous test data, or subject matter expert opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability where testing is limited and only a small number of failures (or no failures at all) are observed. The F-35 is DoD’s largest program; approximately one-third of the operations and sustainment cost is attributed to the cost of spare parts and the removal, replacement, and repair of components. The failure rate of those components is the driving parameter for a significant portion of the sustainment cost, and yet for many of these components, poor estimates of the failure rate exist. For many programs, the contractor produces estimates of component failure rates, based on engineering analysis and legacy systems with similar parts. While these are useful, the actual removal rates can provide a more accurate estimate of the removal and replacement rates the program anticipates experiencing in future years. In this presentation, we show how we applied a Bayesian analysis to combine the engineering reliability estimates with the actual failure data to overcome the problem of cases where few data exist. Our technique is broadly applicable to any program where multiple sources of reliability information need to be combined for the best estimation of component failure rates and ultimately sustainment costs. |
V. Bram Lillard & Rebecca Medlin | Breakout |
| 2019 |
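The combine-the-sources idea described above has a simple conjugate illustration. The sketch below uses made-up numbers, not F-35 data or the authors' model: an engineering estimate of a removal rate is encoded as a gamma prior and updated with observed exposure and removal counts, giving a gamma posterior and credible interval.

```r
# Engineering estimate: ~0.8 removals per 1,000 flight hours, with moderate confidence.
# Encode as a gamma prior on the rate (per 1,000 hours): prior mean = a0 / b0.
a0 <- 4;  b0 <- 5        # prior mean 0.8, prior "effective exposure" of 5,000 hours

# Field data: observed removals and accumulated exposure (in 1,000-hour units)
removals <- 2
exposure <- 7.5

# Gamma-Poisson conjugate update
a1 <- a0 + removals
b1 <- b0 + exposure

c(post_mean = a1 / b1,
  lower95   = qgamma(0.025, a1, rate = b1),
  upper95   = qgamma(0.975, a1, rate = b1))
```

The prior parameters act like pseudo-data (a0 removals in b0 thousand hours), so the posterior naturally down-weights the engineering estimate as field exposure accumulates.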
Roundtable Overcoming Challenges and Applying Sequential Procedures to T&E (Abstract)
The majority of statistical analyses involves observing a fixed set of data and analyzing those data after the final observation has been collected to draw some inference about the population from which they came. Unlike these traditional methods, sequential analysis is concerned with situations for which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends upon the information acquired throughout the course of the investigation. Expanding the use of sequential analysis in DoD testing has the potential to save substantial test dollars and decrease test time. However, switching from traditional to sequential planning will likely induce unique challenges. The goal of this roundtable is to provide an open forum for topics related to sequential analyses. We aim to discuss potential challenges, identify potential ways to overcome them, and talk about success stories of sequential analysis implementation and lessons learned. Specific questions for discussion will be provided to participants prior to the event. |
Rebecca Medlin Research Staff Member Institute for Defense Analyses (bio)
Dr. Rebecca Medlin is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and has designed tests and conducted statistical analyses for several major defense programs including tactical vehicles, mobility aircraft, radars, and electronic warfare systems. Her areas of expertise include design of experiments, statistical modeling, and reliability. She has a Ph.D. in Statistics from Virginia Tech. |
Roundtable | 2021 |
|
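As background for the discussion topic above, here is a minimal base-R sketch of one classical sequential procedure, Wald's sequential probability ratio test for a success probability (illustrative numbers, not a DoD test plan): outcomes are scored one at a time and testing stops as soon as a log-likelihood-ratio boundary is crossed.

```r
# Wald SPRT for H0: p = p0 versus H1: p = p1, with error rates alpha and beta
p0 <- 0.80; p1 <- 0.95; alpha <- 0.05; beta <- 0.10
A <- log((1 - beta) / alpha)     # upper boundary (decide for H1)
B <- log(beta / (1 - alpha))     # lower boundary (decide for H0)

sprt <- function(outcomes) {
  llr <- 0
  for (i in seq_along(outcomes)) {
    x   <- outcomes[i]
    llr <- llr + x * log(p1 / p0) + (1 - x) * log((1 - p1) / (1 - p0))
    if (llr >= A) return(list(decision = "accept H1", n = i))
    if (llr <= B) return(list(decision = "accept H0", n = i))
  }
  list(decision = "continue testing", n = length(outcomes))
}

set.seed(8)
sprt(rbinom(50, 1, 0.95))   # simulated pass/fail outcomes from a reliable system
```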
Breakout Improving Sensitivity Experiments (Abstract)
This presentation will provide a brief overview of sensitivity testing and emphasize applications to several products and systems of importance to defense as well as private industry, including insensitive energetics, ballistic testing of protective armor, testing of munition fuzes and Microelectromechanical Systems (MEMS) components, and safety testing of high-pressure test ammunition and packaging for high-value materials. |
Douglas Ray US Army RDECOM ARDEC |
Breakout | Materials | 2017 |
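A minimal sketch of the analysis that typically accompanies sensitivity experiments like those described above, with simulated go/no-go data rather than data from any of the listed programs: a probit fit of binary response versus stimulus level, from which the median (V50-style) stimulus is read off.

```r
set.seed(9)
# Simulated go/no-go data: response probability increases with stimulus level
level    <- rep(seq(10, 30, by = 2), each = 5)
response <- rbinom(length(level), 1, pnorm((level - 20) / 3))

# Probit dose-response fit
fit <- glm(response ~ level, family = binomial(link = "probit"))

# Median stimulus (50% response point), analogous to a V50 estimate
b <- coef(fit)
unname(-b[1] / b[2])
```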
Breakout VV&UQ – Uncertainty Quantification for Model-Based Engineering of DoD Systems (Abstract)
The US Army ARDEC has recently established an initiative to integrate statistical and probabilistic techniques into engineering modeling and simulation (M&S) analytics typically used early in the design lifecycle to guide technology development. DOE-driven Uncertainty Quantification techniques, including statistically rigorous model verification and validation (V&V) approaches, enable engineering teams to identify, quantify, and account for sources of variation and uncertainties in design parameters, and identify opportunities to make technologies more robust, reliable, and resilient earlier in the product’s lifecycle. Several recent armament engineering case studies – each with unique considerations and challenges – will be discussed. |
Douglas Ray US Army RDECOM ARDEC |
Breakout | Materials | 2017 |
Breakout Dose-Response Model of Recent Sonic Boom Community Annoyance Data (Abstract)
To enable quiet supersonic passenger flight overland, NASA is providing national and international noise regulators with a low-noise sonic boom database. The database will consist of dose-response curves, which quantify the relationship between low-noise sonic boom exposure and community annoyance. The recently-updated international standard for environmental noise assessment, ISO 1996-1:2016, references multiple fitting methods for dose-response analysis. One of these fitting methods, Fidell’s community tolerance level method, is based on theoretical assumptions that fix the slope of the curve, allowing only the intercept to vary. This fitting method is applied to an existing pilot sonic boom community annoyance data set from 2011 with a small sample size. The purpose of this exercise is to develop data collection and analysis recommendations for future sonic boom community annoyance surveys. |
Jonathan Rathsam NASA |
Breakout | 2017 |
|
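The fixed-slope idea in the abstract above can be illustrated with a simplified stand-in (this is not Fidell's exact community tolerance level formulation, just a generic fixed-slope logistic fit): the slope enters as an offset, so only the intercept is estimated from the simulated survey data.

```r
set.seed(10)
# Simulated annoyance survey: noise dose (dB-like scale) and high-annoyance indicator
dose    <- runif(200, 70, 90)
annoyed <- rbinom(200, 1, plogis(-25 + 0.3 * dose))

fixed_slope <- 0.3   # slope assumed known from theory, as in fixed-slope fitting

# Only the intercept is free; the assumed slope enters through an offset term
fit <- glm(annoyed ~ 1 + offset(fixed_slope * dose), family = binomial)
coef(fit)                         # estimated intercept

# Dose at which 50% are highly annoyed under the assumed slope
unname(-coef(fit) / fixed_slope)
```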
Breakout Surrogate Models and Sampling Plans for Multi-fidelity Aerodynamic Performance Databases (Abstract)
Generating aerodynamic coefficients can be computationally expensive, especially for the viscous CFD solvers in which multiple complex models are iteratively solved. When filling large design spaces, utilizing only a high accuracy viscous CFD solver can be infeasible. We apply state-of-the-art methods for design and analysis of computer experiments to efficiently develop an emulator for high-fidelity simulations. First, we apply a cokriging model to leverage information from fast low-fidelity simulations to improve predictions with more expensive high-fidelity simulations. Combining space-filling designs with a Gaussian process model-based sequential sampling criterion allows us to efficiently generate sample points and limit the number of costly simulations needed to achieve the desired model accuracy. We demonstrate the effectiveness of these methods with an aerodynamic simulation study using a conic shape geometry. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Release Number: LLNL-ABS-818163 |
Kevin Quinlan Applied Statistician Lawrence Livermore National Laboratory
Breakout |
| 2021 |
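A compact illustration of the multi-fidelity idea in the entry above, with a hand-rolled Gaussian-process interpolator and a toy one-dimensional problem (fixed hyperparameters, no uncertainty estimates, and not the LLNL implementation): a low-fidelity emulator is built from many cheap runs, then a scale factor and a discrepancy model are fit from a few expensive runs, in the spirit of cokriging.

```r
set.seed(11)
# Toy 1-D multi-fidelity problem (stand-ins for CFD solvers of different fidelity)
f_hi <- function(x) (6 * x - 2)^2 * sin(12 * x - 4)        # "expensive" solver
f_lo <- function(x) 0.5 * f_hi(x) + 10 * (x - 0.5) - 5     # "cheap" solver

x_lo <- seq(0, 1, length.out = 11); y_lo <- f_lo(x_lo)     # many cheap runs
x_hi <- c(0, 0.4, 0.6, 1);          y_hi <- f_hi(x_hi)     # few expensive runs

# Simple squared-exponential GP interpolator (zero mean, fixed length scale, small nugget)
gp_fit <- function(x, y, ell = 0.15, nug = 1e-8) {
  K <- outer(x, x, function(a, b) exp(-(a - b)^2 / (2 * ell^2))) + diag(nug, length(x))
  list(x = x, alpha = solve(K, y), ell = ell)
}
gp_predict <- function(fit, xnew) {
  k <- outer(xnew, fit$x, function(a, b) exp(-(a - b)^2 / (2 * fit$ell^2)))
  drop(k %*% fit$alpha)
}

# Stage 1: emulate the low-fidelity solver from its (plentiful) runs
lo_fit   <- gp_fit(x_lo, y_lo)
lo_at_hi <- gp_predict(lo_fit, x_hi)

# Stage 2: scale factor plus a GP on the discrepancy at the high-fidelity points
rho      <- coef(lm(y_hi ~ lo_at_hi - 1))
disc_fit <- gp_fit(x_hi, y_hi - rho * lo_at_hi)

# Multi-fidelity prediction on a dense grid, compared with the true expensive function
xg   <- seq(0, 1, length.out = 101)
pred <- rho * gp_predict(lo_fit, xg) + gp_predict(disc_fit, xg)
plot(xg, f_hi(xg), type = "l", ylab = "response")
lines(xg, pred, col = 2, lty = 2); points(x_hi, y_hi, pch = 16)
```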
Breakout Uncertainty Quantification: What is it and Why it is Important to Test, Evaluation, and Modeling and Simulation in Defense and Aerospace (Abstract)
Uncertainty appears in many aspects of systems design including stochastic design parameters, simulation inputs, and forcing functions. Uncertainty Quantification (UQ) has emerged as the science of quantitative characterization and reduction of uncertainties in both simulation and test results. UQ is a multidisciplinary field with a broad base of methods including sensitivity analysis, statistical calibration, uncertainty propagation, and inverse analysis. Because of their ability to bring greater degrees of confidence to decisions, uncertainty quantification methods are playing a greater role in test, evaluation, and modeling and simulation in defense and aerospace. The value of UQ comes with better understanding of risk from assessing the uncertainty in test and modeling and simulation results. The presentation will provide an overview of UQ and then discuss the use of some advanced statistical methods, including DOEs and emulation for multiple simulation solvers and statistical calibration, for efficiently quantifying uncertainties. These statistical methods effectively link test, evaluation and modeling and simulation by coordinating the valuation of uncertainties, simplifying verification and validation activities. |
Peter Qian University of Wisconsin and SmartUQ |
Breakout | Materials | 2017 |
Short Course Introduction to R (Abstract)
This course is designed to introduce participants to the R programming language and the RStudio editor. R is a free and open-source software for summarizing data, creating visuals of data, and conducting statistical analyses. R can offer many advantages over programs such as Excel, including faster computation, customized analyses, access to the latest statistical techniques, automation of tasks, and the ability to easily reproduce research. After completing this course, a new user should be able to:
• Import/export data from/to external files.
• Create and manipulate new variables.
• Conduct basic statistical analyses (such as t-tests and linear regression).
• Create basic graphs.
• Install and use R packages.
Participants should bring a laptop for the interactive components of the course. |
Justin Post North Carolina State University |
Short Course | Materials | 2018 |
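A few lines of base R touching each learning objective listed above (the data set, file names, and the ggplot2 package are illustrative choices, not course materials):

```r
# Make a small example dataset, export it, and read it back in
dat <- data.frame(group    = rep(c("A", "B"), each = 10),
                  dose     = rep(1:5, 4),
                  response = rnorm(20, mean = 10))
write.csv(dat, "example_data.csv", row.names = FALSE)   # export to an external file
dat <- read.csv("example_data.csv")                     # import from an external file

# Create and manipulate a new variable
dat$log_response <- log(dat$response)

# Basic statistical analyses: a t-test and a linear regression
t.test(response ~ group, data = dat)
summary(lm(response ~ dose + group, data = dat))

# A basic graph
plot(dat$dose, dat$response, xlab = "Dose", ylab = "Response")

# Install (once) and load a contributed package
# install.packages("ggplot2")
library(ggplot2)
```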
Keynote Consensus Building |
Antonio Possolo NIST Fellow, Chief Statistician National Institute of Standards and Technology (bio)
Antonio Possolo holds a Ph.D. in statistics from Yale University, and has been practicing the statistical arts for more than 35 years, in industry (General Electric, Boeing), academia (Princeton University, University of Washington in Seattle, Classical University of Lisboa), and government. He is committed to the development and application of probabilistic and statistical methods that contribute to advances in science and technology, and in particular to measurement science. |
Keynote | Materials | 2018 |
Breakout Demystifying the Black Box: A Test Strategy for Autonomy (Abstract)
Systems with autonomy are beginning to permeate civilian, industrial, and military sectors. Though these technologies have the potential to revolutionize our world, they also bring a host of new challenges in evaluating whether these tools are safe, effective, and reliable. The Institute for Defense Analyses is developing methodologies to enable testing systems that can, to some extent, think for themselves. In this talk, we share how we think about this problem and how this framing can help you develop a test strategy for your own domain. |
Dan Porter | Breakout |
![]() | 2019 |
Breakout A Multi-method, Triangulation Approach to Operational Testing (Abstract)
Humans are not produced in quality-controlled assembly lines, and we typically are much more variable than the mechanical systems we employ. This mismatch means that when characterizing the effectiveness of a system, the system must be considered in the context of its users. Accurate measurement is critical to this endeavor, yet while human variability is large, the effort devoted to reducing measurement error for those humans is relatively small. The following talk discusses the importance of using multiple measurement methods—triangulation—to reduce error and increase confidence when characterizing the quality of human-systems integration (HSI). A case study from an operational test of an attack helicopter demonstrates how triangulation enables more actionable recommendations. |
Daniel Porter Research Staff Member IDA |
Breakout | Materials | 2018 |