Session Title | Speaker | Type | Year

A Statistical Approach for Uncertainty Quantification with Missing Data
Mark Andrews | Breakout | 2019
Abstract: Uncertainty quantification (UQ) has emerged as the science of quantitative characterization and reduction of uncertainties in simulation and testing. Stretching across applied mathematics, statistics, and engineering, UQ is a multidisciplinary field with broad applications. A popular UQ method to analyze the effects of input variability and uncertainty on system responses is generalized Polynomial Chaos Expansion (gPCE). This method was developed using applied mathematics and does not require knowledge of a simulation's physics; thus, gPCE may be used across disparate industries and is applicable to both individual-component and system-level simulations. The gPCE method can encounter problems when any of the input configurations fail to produce valid simulation results. gPCE requires that results be collected on a sparse-grid Design of Experiments (DOE), which is generated based on the probability distributions of the input variables, and a failure to run the simulation at any one input configuration can result in a large decrease in the accuracy of a gPCE. In practice, simulation data sets with missing values are common because simulations regularly yield invalid results due to physical restrictions or numerical instability. We propose a statistical approach to mitigating the cost of missing values. This approach yields accurate UQ results even when simulation failures would otherwise make gPCE methods unreliable. The proposed approach addresses the missing data problem by introducing an iterative machine learning algorithm that allows gPCE modeling to handle missing values in the sparse-grid DOE. Using a series of simulations and numerical results, the study demonstrates how the methodology converges to steady-state values for the missing points. Remarks about the convergence rate and the advantages and feasibility of the proposed methodology will be provided. Several examples are used to demonstrate the proposed framework and its utility, including a secondary air system example from the jet engine industry and several non-linear test functions. This is based on joint work with Dr. Mark Andrews at SmartUQ.

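The sketch below illustrates, in R, the general idea of iteratively imputing a failed design point with a polynomial chaos surrogate. It is a notional one-dimensional example, not the SmartUQ algorithm: the Hermite basis order, the nine-point design, the toy response, and the convergence tolerance are all assumptions made for illustration.

```r
## Minimal sketch (notional, not the SmartUQ algorithm): fit a 1-D Hermite
## polynomial chaos surrogate by least squares when one design point failed,
## imputing the missing response iteratively until it reaches a steady value.
set.seed(9)
hermite <- function(x) cbind(1, x, x^2 - 1, x^3 - 3 * x)  # probabilists' Hermite, order 3

x <- seq(-2, 2, length.out = 9)           # notional sparse design on a standard-normal input
y <- exp(0.5 * x) + rnorm(9, 0, 0.02)     # notional simulation responses
miss <- 5                                 # pretend the middle run failed
y_obs <- y; y_obs[miss] <- NA

y_fill <- mean(y_obs, na.rm = TRUE)       # initial guess for the failed run
for (it in 1:20) {
  y_complete <- y_obs; y_complete[miss] <- y_fill
  coefs <- qr.solve(hermite(x), y_complete)   # least-squares PCE coefficients
  y_new <- drop(hermite(x[miss]) %*% coefs)   # surrogate prediction at the failed point
  if (abs(y_new - y_fill) < 1e-8) break
  y_fill <- y_new
}
c(iterations = it, imputed = y_fill, truth = y[miss])
```

The imputed value settles to a fixed point after a handful of passes; in a real sparse-grid study the same loop would run over every failed configuration and a multivariate basis.
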
Exploring Problems in Shipboard Air Defense with Modeling
Ralph Donnelly & Benjamin Ashwell | Breakout | 2019
Abstract: One of the primary roles of navy surface combatants is defending high-value units against attack by anti-ship cruise missiles (ASCMs). They accomplish this either by launching interceptor missiles and shooting the ASCMs down with rapid-firing guns (hard kill), or by using deceptive jamming, decoys, or other non-kinetic means (soft kill) to defeat the threat. The wide range of hostile ASCM capabilities and the different properties of friendly defenses, combined with the short time scale for defeating these ASCMs, make this a difficult problem to study. IDA recently completed a study focusing on the extent to which friendly forces were vulnerable to massed ASCM attacks, and on possible avenues for improvement. To do this we created a pair of complementary models with the combined flexibility to explore a wide range of questions: the first model employed a set of closed-form equations, and the second a time-dependent Monte Carlo simulation. This presentation discusses the thought processes behind the models and their relative strengths and weaknesses.

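As a companion to the abstract, here is a minimal R sketch contrasting a closed-form leaker estimate with a simple Monte Carlo raid simulation. It is not either of the IDA models; the raid size, shot doctrine, and single-shot probability of kill are notional.

```r
## Minimal sketch (not the IDA models): compare a closed-form leaker estimate
## with a simple Monte Carlo raid simulation. All numbers are notional.
set.seed(1)

raid_size  <- 12     # incoming ASCMs (notional)
shots_each <- 2      # interceptors allocated per threat (notional)
pk_shot    <- 0.7    # single-shot probability of kill (notional)

## Closed form: each threat must survive all of its engagements independently
p_leak_closed <- (1 - pk_shot)^shots_each
expected_leakers_closed <- raid_size * p_leak_closed

## Monte Carlo: simulate many raids and count leakers in each
n_reps <- 10000
leakers <- replicate(n_reps, {
  survived <- runif(raid_size * shots_each) > pk_shot
  sum(colSums(matrix(survived, nrow = shots_each)) == shots_each)
})

c(closed_form = expected_leakers_closed, monte_carlo = mean(leakers))
```

The two estimates agree here because the Monte Carlo uses the same independence assumptions as the closed form; the value of a time-dependent simulation comes when engagement timelines, channel limits, and soft-kill effects break those assumptions.
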
Sequential Testing for Fast Jet Life Support Systems
Steven Thorsen, Sarah Burke & Darryl Ahner | Breakout | 2019
Abstract: The concept of sequential testing has many disparate meanings. For statisticians it often takes on a purely mathematical context, while for some practitioners it may mean multiple disconnected test events. Here we present a pedagogical approach to creating test designs involving constrained factors using JMP software. Recent experience testing one of the U.S. military's fast jet life support systems (LSS) serves as a case study and backdrop to support the presentation. The case study discusses several lessons learned during LSS testing, applicable to all practitioners of scientific test and analysis techniques (STAT) and design of experiments (DOE). We conduct a short analysis to specifically determine a test region with a set of factors pertinent to modeling human breathing and the use of breathing machines as part of the laboratory setup. A comparison of several government and industry laboratory test points and regions with governing documentation is made, along with our proposal for determining a necessary and sufficient test region for tests involving human breathing as a factor.

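The notion of a constrained test region can be illustrated with a few lines of R: build a candidate grid over two breathing factors and keep only the points that satisfy a constraint on minute ventilation. The factor ranges and ventilation bounds below are assumptions for illustration, not the LSS requirement values, and the filtered grid would still need to be passed to a design tool (for example, an optimal-design routine) to produce the actual test plan.

```r
## Minimal sketch (notional limits, not the LSS requirement values): build a
## candidate grid over two breathing factors and keep only points satisfying a
## constraint on minute ventilation, defining a restricted region for a
## constrained design.
tidal_volume <- seq(0.5, 3.0, by = 0.25)   # liters per breath (notional range)
breath_rate  <- seq(10, 40, by = 5)        # breaths per minute (notional range)

candidates <- expand.grid(tidal_volume = tidal_volume,
                          breath_rate  = breath_rate)
candidates$minute_vent <- candidates$tidal_volume * candidates$breath_rate

## Constraint: physically plausible minute ventilation (notional bounds)
region <- subset(candidates, minute_vent >= 10 & minute_vent <= 60)

nrow(candidates); nrow(region)   # how much of the grid the constraint removes
```
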
Bayesian Component Reliability Estimation: F-35 Case Study
V. Bram Lillard & Rebecca Medlin | Breakout | 2019
Abstract: A challenging aspect of a system reliability assessment is integrating multiple sources of information, including component, subsystem, and full-system data, previous test data, or subject matter expert opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability when testing is limited and only a small number of failures (or none at all) are observed. The F-35 is DoD's largest program; approximately one-third of the operations and sustainment cost is attributed to the cost of spare parts and the removal, replacement, and repair of components. The failure rate of those components is the driving parameter for a significant portion of the sustainment cost, and yet for many of these components only poor estimates of the failure rate exist. For many programs, the contractor produces estimates of component failure rates based on engineering analysis and legacy systems with similar parts. While these are useful, the actual removal rates can provide a more accurate estimate of the removal and replacement rates the program can anticipate in future years. In this presentation, we show how we applied a Bayesian analysis to combine the engineering reliability estimates with the actual failure data to overcome the problem of cases where few data exist. Our technique is broadly applicable to any program where multiple sources of reliability information need to be combined for the best estimation of component failure rates and, ultimately, sustainment costs.

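One common way to combine an engineering failure-rate estimate with observed field data is a conjugate gamma-Poisson update; the R sketch below shows the mechanics. It is an illustration only, not the F-35 analysis, and the prior strength, failure count, and flight hours are notional.

```r
## Minimal sketch (not the F-35 analysis): combine an engineering estimate of a
## component failure rate with observed removals via a conjugate gamma-Poisson
## Bayesian update. All numbers are notional.
engineering_rate <- 1 / 2000   # failures per flight hour from legacy analysis (notional)
prior_strength   <- 500        # "pseudo flight hours" of weight given to that estimate (notional)

## Gamma(a, b) prior on the failure rate, centered on the engineering estimate
a0 <- engineering_rate * prior_strength
b0 <- prior_strength

## Observed fleet data (notional): failures over accumulated flight hours
failures <- 3
hours    <- 12000

## Conjugate update: posterior is Gamma(a0 + failures, b0 + hours)
a1 <- a0 + failures
b1 <- b0 + hours

c(prior_mean = a0 / b0, posterior_mean = a1 / b1)
qgamma(c(0.025, 0.975), shape = a1, rate = b1)   # 95% credible interval for the rate
```

With few observed failures the posterior stays close to the engineering estimate; as flight hours accumulate, the data dominate, which is the behavior the abstract describes.
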
Applying Functional Data Analysis throughout Aerospace Testing
David Harrison | Breakout | 2019
Abstract: Sensors abound in aerospace testing, and while many scientists look at the data from a physics perspective, it is the comparative statistical information that drives decisions. A multi-company project compared launch data from the 1980s to a current set of data that included 30 sensors. Each sensor was designed to gather 3000 data points during the 3-second launch event. The data included temperature, acceleration, and pressure information. This talk will compare the data analysis methods developed for this project and discuss the use of the new Functional Data Analysis tool within JMP and its ability to discern in-family launch performances.

Adopting Optimized Software Test Design Methods at Scale
Justin Hunter | Breakout | 2019
Abstract: Using Combinatorial Test Design methods to select software test scenarios has repeatedly delivered large efficiency and thoroughness gains, which raises the questions:

- Why are these proven methods not used everywhere?
- Why do some efforts to promote adoption of new approaches stagnate?
- What steps can leaders take to successfully introduce and spread new test design methods?

For more than a decade, Justin Hunter has helped large global organizations across six continents adopt new test design techniques at scale. Working in some environments, he has felt like Sisyphus, forever condemned to roll a boulder uphill only to watch it roll back down again. In other situations, things clicked: teams smoothly adopted new tools and techniques, and impressive results were quickly achieved. In this presentation, Justin will discuss several common challenges faced by large organizations, explain why adopting test design tools is more challenging than adopting other types of development and testing tools, and share actionable recommendations to consider when you roll out new test design approaches.

A Quantitative Assessment of the Science Robustness of the Europa Clipper Mission
Kelli McCoy | Breakout | 2019
Abstract: Existing characterization of Europa's environment is enabled by the Europa Clipper mission's successful predecessors: Pioneer, Voyager, Galileo, and most recently, Juno. These missions reveal high-intensity energetic particle fluxes at Europa's orbit, posing a multidimensional design challenge in ensuring mission success (i.e., meeting Level 1 science requirements). Risk-averse JPL Design Principles and the Europa Environment Requirement Document (ERD) dictate practices and policy which, if masterfully followed, are designed to protect Clipper from failure or degradation due to radiation. However, even if workmanship is flawless and no waivers are assessed, modeling errors, shielding uncertainty, and natural variation in the Europa environment are cause for residual concern. While failure and part degradation are of paramount concern, the occurrence of temporary outages, causing loss or degradation of science observations, is also a critical mission risk, left largely unmanaged by documents like the ERD. This risk is monitored and assessed through a Project Systems Engineering-led mission robustness effort, which attempts to balance the risk of science data loss with the potential design cost and increased mission complexity required to mitigate such risk. The Science Sensitivity Model (SSM) was developed to assess mission and science robustness, with its primary goal being to ensure a high probability of achieving Level 1 (L1) science objectives by informing the design of a robust spacecraft, instruments, and mission design. This discussion will provide an overview of the problem, the model, and solution strategies. Subsequent presentations discuss the experimental design used to understand the problem space and the graphics and visualization used to reveal important conclusions.

Identifying and Contextualizing Maximum Instrument Fault Rates and Minimum Instrument Recovery Times for Europa Clipper Science through Applied Statistics and Strategic Visualizations
Thomas Youmans | Breakout | 2019
Abstract: Using the right visualizations as part of broad system and statistical Monte Carlo analysis supports interpretation of key drivers and relationships between variables, provides context about the full system, and communicates to non-statistician stakeholders. An experimental design was used to understand how instrument and spacecraft fault rates and recovery times relate to the probability of achieving Europa Clipper science objectives during the Europa Clipper tour. Given spacecraft and instrument outages, requirement achievement checks were performed to determine the probability of meeting scientific objectives. Visualizations of the experimental design output enabled analysis of the full parameter set. Correlation between individual instruments and specific scientific objectives is not straightforward; some scientific objectives require a single instrument to be on at certain times and during varying conditions across the trajectory, while other science objectives require multiple instruments to function concurrently. By examining the input conditions that meet scientific objectives with the highest probability, and comparing those to trials with the lowest probability of meeting scientific objectives, key relationships could be visualized, enabling valuable mission and engineering design insights. Key system drivers of scientific success were identified, such as the fault rate tolerance and recovery time required for each instrument and the spacecraft. Key steps, methodologies, difficulties, and result highlights are presented, along with a discussion of next steps and options for refinement and future analysis.

Design and Analysis of Experiments for Europa Clipper's Science Sensitivity Model
Amy Braverman | Breakout | 2019
Abstract: The Europa Clipper Science Sensitivity Model (SSM) can be thought of as a graph in which the nodes are mission requirements at ten levels of a hierarchy, and edges represent how requirements at one level of the hierarchy depend on those at lower levels. At the top of the hierarchy, there are ten nodes representing the ten Level 1 science requirements for the mission. At the bottom of the hierarchy, there are 100 or so nodes representing instrument-specific science requirements. In between, nodes represent intermediate science requirements with complex interdependencies. Meeting, or failing to meet, bottom-level requirements depends on the frequency of faults and the lengths of recovery times on the nine Europa Clipper instruments and the spacecraft. Our task was to design and analyze the results of a Monte Carlo experiment to estimate the probabilities of meeting the Level 1 science requirements based on parameters of the distributions of time between failures and of recovery times. We simulated an ensemble of synthetic missions in which failures and recoveries were random realizations from those distributions. The pass-fail status of the bottom-level instrument-specific requirements was propagated up the graph for each of the synthetic missions, and aggregating over the collection of synthetic missions produced estimates of the pass-fail probabilities for the Level 1 requirements. We constructed a definitive screening design and supplemented it with additional space-filling runs, using JMP 14 software. Finally, we used the vectors of failure and recovery parameters as predictors and the pass-fail probabilities of the high-level requirements as responses, and built statistical models to predict the latter from the former. In this talk, we will describe the design considerations and review the fitted models and their implications for mission success.

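A stripped-down version of the simulation idea can be sketched in R: draw fault and recovery times for one notional instrument, score a single assumed bottom-level requirement (total downtime under 30 days), and estimate the pass probability over a small grid of fault-rate and recovery parameters. The distributions, requirement, and grid are assumptions for illustration; the actual study used a definitive screening design with space-filling runs in JMP 14 and propagated pass-fail status through the full requirement graph.

```r
## Minimal sketch (not the SSM itself): estimate the probability that one
## notional requirement is met, as a function of fault rate and mean recovery
## time, by simulating synthetic missions. Assumed requirement: total downtime
## during the mission is under 30 days.
set.seed(42)
mission_days <- 3.5 * 365

sim_downtime <- function(mtbf_days, mean_recovery_days) {
  t <- 0; down <- 0
  repeat {
    t <- t + rexp(1, rate = 1 / mtbf_days)        # time to next fault
    if (t >= mission_days) return(down)
    rec  <- rlnorm(1, meanlog = log(mean_recovery_days), sdlog = 0.5)
    down <- down + rec
    t    <- t + rec
  }
}

pass_prob <- function(mtbf, recovery, n = 2000) {
  mean(replicate(n, sim_downtime(mtbf, recovery)) < 30)
}

## Small grid over the two parameters (a stand-in for the DSD plus
## space-filling runs used in the talk)
grid <- expand.grid(mtbf = c(60, 120, 240), recovery = c(1, 5, 10))
grid$p_pass <- mapply(pass_prob, grid$mtbf, grid$recovery)
grid
```

A surrogate model fit to `p_pass` as a function of the design factors would then play the role of the statistical models described in the abstract.
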
SLS Structural Dynamics Sensor Optimization Study
Ken Toro & Jon Stallrich | Breakout | 2019
Abstract: A crucial step in the design and development of a flight vehicle, such as NASA's Space Launch System (SLS), is understanding its vibration behavior while in flight. Vehicle designers rely on low-cost finite element analysis (FEA) to predict the vibration behavior of the vehicle. During ground and flight tests, sensors are strategically placed at predefined locations that contribute the most vibration information under the assumption that the FEA is accurate, producing points with which to validate the FEA models. This collaborative work focused on developing optimal sensor placement algorithms to validate FEA models against test data and to characterize the vehicle's vibration characteristics.

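Optimal sensor placement is often introduced through the Effective Independence method, which greedily removes candidate locations that contribute least to the identifiability of the target mode shapes. The R sketch below implements that textbook algorithm on a random mode-shape matrix; it is not necessarily the algorithm developed in this work, and the candidate count and number of modes are arbitrary.

```r
## Minimal sketch (a standard Effective Independence algorithm, not necessarily
## the method developed in this study): greedily remove candidate sensor
## locations that contribute least to identifying the target mode shapes, given
## a mode-shape matrix Phi from an FEA model.
effective_independence <- function(Phi, n_sensors) {
  keep <- seq_len(nrow(Phi))                  # start with all candidate locations
  while (length(keep) > n_sensors) {
    P    <- Phi[keep, , drop = FALSE]
    A    <- P %*% solve(t(P) %*% P) %*% t(P)  # projection onto the modal subspace
    efi  <- diag(A)                           # each location's contribution
    keep <- keep[-which.min(efi)]             # drop the least informative location
  }
  keep
}

## Notional example: 50 candidate locations, 4 target modes
set.seed(1)
Phi <- matrix(rnorm(50 * 4), nrow = 50)
effective_independence(Phi, n_sensors = 8)
```
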
The 80/20 rule, can and should we break it using efficient data management tools?
Ghaleb Abdulla | Breakout | 2019
Abstract: Data scientists spend approximately 80% of their time preparing, cleaning, and feature engineering data sets. In this talk I will share use cases that show why this is important and why we need to do it. I will also describe the Earth System Grid Federation (ESGF), an open source effort providing a robust, distributed data and computation platform that enables worldwide access to peta/exa-scale scientific data. ESGF will help reduce the amount of effort needed for climate data preprocessing by integrating the necessary analysis and data sharing tools.

Time Machine Learning: Getting Navy Maintenance Duration Right
Tim Kao | Breakout | 2019
Abstract: In support of the Navy's effort to obtain improved outcomes through data-driven decision making, the Center for Naval Analyses' Data Science Program (CNA/DSP) supports the Performance-to-Plan (P2P) forum, which is co-chaired by the Vice Chief of Naval Operations and the Assistant Secretary of the Navy (RD&A). The P2P forum provides senior Navy leadership with forward-looking performance forecasts, which are foundational to articulating Navy progress toward readiness and capability goals. While providing analytical support for this forum, CNA/DSP leveraged machine learning techniques, including random forests and artificial neural networks, to develop improved estimates of future maintenance durations for the Navy. When maintenance durations exceed their estimated timelines, the delays can affect training, manning, and deployments in support of operational commanders. Currently, the Navy creates maintenance estimates during numerous timeframes, including the program objective memorandum (POM) process, the Presidential Budget (PB), and at contract award, leading to evolving estimates over time. The limited historical accuracy of these estimates, especially the POM and PB estimates, has persisted over the last decade. These errors have led to a gap between planned funding and actual costs, in addition to changes in the assets available to operational commanders each year. The CNA/DSP prediction model reduces the average error in forecasted maintenance duration from 128 days to 31 days for POM estimates; improvements in duration accuracy for the PB and contract-award timeframes were also achieved using similar ML processes. The data curation for these models involved numerous data sources of varying quality and required significant feature engineering to provide usable model inputs that allow forecasts over the Future Years Defense Program (FYDP), supporting improved resource allocation and scheduling in support of the optimized fleet response training plan (OFRTP).

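The modeling step can be illustrated with a small R sketch: fit a random forest to predict maintenance duration from a few planning features and compare its held-out error to a planned-duration baseline. The features, data, and error sizes are simulated and notional; they are not the CNA/DSP data or model.

```r
## Minimal sketch (not the CNA model or data): random forest regression for
## maintenance duration with notional planning features, compared to a naive
## baseline on a held-out set.
library(randomForest)

set.seed(7)
n <- 500
avails <- data.frame(
  ship_class   = factor(sample(c("DDG", "CG", "LHD"), n, replace = TRUE)),
  work_items   = rpois(n, 40),
  planned_days = round(runif(n, 60, 300)),
  yard_backlog = runif(n, 0, 1)
)
## Notional "actual" duration: planned days inflated by workload and backlog
avails$actual_days <- with(avails, planned_days * (1 + 0.4 * yard_backlog) +
                             2 * work_items + rnorm(n, 0, 25))

train <- avails[1:400, ]; test <- avails[401:500, ]
rf <- randomForest(actual_days ~ ., data = train, ntree = 500)

pred <- predict(rf, test)
c(rf_mae       = mean(abs(pred - test$actual_days)),
  baseline_mae = mean(abs(test$planned_days - test$actual_days)))
```
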
Behavioral Analytics: Paradigms and Performance Tools of Engagement in System Cybersecurity
Robert Gough | Breakout | 2019
Abstract: The application opportunities for behavioral analytics in the cybersecurity space are based upon simple realities:

1. The great majority of breaches across all cybersecurity venues are due to human choices and human error.
2. With communication and information technologies making data rapidly available, and the behavioral strategies of bad actors getting cleverer, there is a need for expanded perspectives in cybersecurity prevention.
3. Internally focused paradigms must now be explored that place endogenous protection from security threats as an important focus and integral dimension of cybersecurity prevention.

The development of cybersecurity monitoring metrics and tools, as well as the creation of intrusion prevention standards and policies, should always include an understanding of the underlying drivers of human behavior. As temptation follows available paths, cyber-attacks follow technology, business models, and behavioral habits. The human element will always be the most significant part in the anatomy of any final decision. Choice options – from input, to judgement, to prediction, to action – need to be better understood for their relevance to cybersecurity work. Behavioral Performance Indexes harness data about aggregate human participation in an active system, helping to capture some of the detail and nuances of this critically important dimension of cybersecurity.

Breakout 3D Mapping, Plotting, and Printing in R with Rayshader (Abstract)
Is there ever a place for the third dimension in visualizing data? Is the use of 3D inherently bad, or can a 3D visualization be used as an effective tool to communicate results? In this talk, I will show you how you can create beautiful 2D and 3D maps and visualizations in R using the rayshader package. Additionally, I will talk about the value of 3D plotting and how good aesthetic choices can more clearly communicate results to stakeholders. Rayshader is a free and open source package for transforming geospatial data into engaging visualizations using a simple, scriptable workflow. It provides utilities to interactively map, plot, and 3D print data from within R. It was nominated by Hadley Wickham to be one of 2018’s Data Visualizations of the Year for the online magazine Quartz. |
Tyler Morgan-Wall | Breakout | 2019 |
|
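A minimal rayshader workflow, using the package's built-in montereybay elevation matrix, looks roughly like the R sketch below; the texture, z-scale, and shadow settings are arbitrary choices made for illustration.

```r
## Minimal sketch using rayshader's built-in montereybay elevation matrix:
## compute a hillshade, add water and shadows, and view it in 2D or 3D.
library(rayshader)

elev <- montereybay                      # built-in elevation/bathymetry matrix

hillshade <- sphere_shade(elev, texture = "imhof1")
hillshade <- add_water(hillshade, detect_water(elev), color = "imhof1")
hillshade <- add_shadow(hillshade, ray_shade(elev, zscale = 50), max_darken = 0.5)

plot_map(hillshade)                      # 2D map
plot_3d(hillshade, elev, zscale = 50)    # interactive 3D scene (opens an rgl window)
render_snapshot()                        # capture the current 3D view
```
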
Functional Data Analysis for Design of Experiments
Tom Donnelly | Breakout | 2019
Abstract: With nearly continuous recording of sensor values now common, a new type of data called "functional data" has emerged. Rather than modeling the individual readings, one models the shape of the stream of data over time. As an example, one might model many historical vibration-over-time streams of a machine at start-up to identify functional data shapes associated with the onset of system failure. Functional Principal Components (FPC) analysis is a new and increasingly popular method for reducing the dimensionality of functional data so that only a few FPCs are needed to closely approximate any of a set of unique data streams. When combined with Design of Experiments (DOE) methods, the response to be modeled, in as few tests as possible, is the shape of a stream of data over time. Example analyses will be shown where the form of the curve is modeled as a function of several input variables, allowing one to determine the input settings associated with shapes indicative of good or poor system performance. This allows the analyst to predict the shape of the curve as a function of the input variables.

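The FPC-plus-DOE idea can be sketched in base R without any special packages: reduce a set of simulated response curves to principal component scores via an SVD, then model the leading score as a function of two design factors. The curves, factor effects, and noise level are notional.

```r
## Minimal sketch (notional data, not the talk's example): reduce response
## curves to functional principal component (FPC) scores via an SVD, then model
## the leading score as a function of two DOE factors.
set.seed(3)
n_runs <- 12; n_time <- 100
time   <- seq(0, 1, length.out = n_time)

## Two DOE factors and notional curves whose shape depends on them
doe <- expand.grid(x1 = c(-1, 0, 1), x2 = c(-1, 1))
doe <- doe[rep(1:6, 2), ]                            # 12 runs, replicated design
curves <- t(sapply(1:n_runs, function(i)
  doe$x1[i] * sin(2 * pi * time) + doe$x2[i] * time + rnorm(n_time, 0, 0.1)))

## FPCA via SVD of the mean-centered curves
centered <- sweep(curves, 2, colMeans(curves))
s        <- svd(centered)
scores   <- s$u %*% diag(s$d)                        # FPC scores for each run
var_expl <- s$d^2 / sum(s$d^2)                       # variance explained per FPC

## Model the first FPC score as a function of the DOE factors
fit <- lm(scores[, 1] ~ x1 + x2, data = doe)
summary(fit)$coefficients
round(var_expl[1:3], 3)
```

Predicting the scores at new factor settings and multiplying back through the FPC basis reconstructs a predicted curve shape, which is the essence of the approach described above.
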
Test and Evaluation of Emerging Technologies
Dr. Greg Zacharias, Chief Scientist, Operational Test and Evaluation | Breakout | 2019

Challenges in Test and Evaluation of AI: DoD's Project Maven
Jane Pinelis | Breakout | 2019
Abstract: The Algorithmic Warfare Cross Functional Team (AWCFT, or Project Maven) organizes DoD stakeholders to enhance intelligence support to the warfighter through the use of automation and artificial intelligence. The AWCFT's objective is to turn the enormous volume of data available to DoD into actionable intelligence and insights at speed. This requires consolidating and adapting existing algorithm-based technologies as well as overseeing the development of new solutions. This brief will describe some of the methodological challenges in test and evaluation that the Maven team is working through to facilitate speedy and agile acquisition of reliable and effective AI/ML capabilities.

Demystifying the Black Box: A Test Strategy for Autonomy
Dan Porter | Breakout | 2019
Abstract: Systems with autonomy are beginning to permeate civilian, industrial, and military sectors. Though these technologies have the potential to revolutionize our world, they also bring a host of new challenges in evaluating whether these tools are safe, effective, and reliable. The Institute for Defense Analyses is developing methodologies to enable testing systems that can, to some extent, think for themselves. In this talk, we share how we think about this problem and how this framing can help you develop a test strategy for your own domain.

Satellite Affordability in LEO (SAL)
Matthew Avery | Breakout | 2019
Abstract: The Satellite Affordability in LEO (SAL) model identifies the cheapest constellation capable of providing a desired level of performance within certain constraints. SAL achieves this using a combination of analytical models, statistical emulators, and geometric relationships. SAL is flexible and modular, allowing users to customize certain components while retaining default behavior in other cases. This is desirable if users wish to consider an alternative cost formulation or different types of payload. Uses for SAL include examining cost tradeoffs with respect to factors like constellation size and desired performance level, evaluating the sensitivity of constellation costs to different assumptions about cost behavior, and providing a first-pass look at what proliferated smallsats might be capable of. At this point, SAL is limited to Walker constellations with sun-synchronous, polar orbits.

Statistical Process Control and Capability Study on the Water Content Measurements in NASA Glenn's Icing Research Tunnel
Emily Timko | Breakout | 2019
Abstract: The Icing Research Tunnel (IRT) at NASA Glenn Research Center follows the recommended practice for icing tunnel calibration outlined in SAE's ARP5905 document. The calibration team has followed a schedule of a full calibration every five years with a check calibration every six months thereafter. The liquid water content of the IRT has maintained stability within the specification presented to customers, namely that the variation is within +/-10% of the calibrated target measurement. With recent measurement and instrumentation errors, a more thorough assessment of error sources was desired. By constructing statistical process control charts, the team gained the ability to determine how the instrument varies in the short, mid, and long term. The control charts offer a view of instrument error, facility error, or installation changes. A shift between the target and the baseline mean was discovered, leading to a study of the overall capability indices of the liquid water content measuring instrument to perform within the specifications defined for the IRT. This presentation describes data processing procedures for the Multi-Element Sensor in the IRT, including collision efficiency corrections, canonical correlation analysis, Chauvenet's criterion for rejection of data, distribution checks of the data, and the mean, median, and mode used in constructing control charts. Further data are presented to describe the repeatability of the IRT with the Multi-Element Sensor and the ability to maintain a stable process over the defined calibration schedule.

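The chart and capability calculations described above follow standard formulas, sketched here in base R with simulated readings (not IRT data): an individuals chart with moving-range control limits, plus Cp and Cpk against +/-10% specification limits around a notional liquid water content target.

```r
## Minimal sketch (simulated values, not IRT data): individuals (X) control
## chart limits from the moving range, plus capability indices against +/-10%
## specification limits around a target liquid water content.
set.seed(11)
target <- 1.0                                  # notional LWC target (g/m^3)
x <- rnorm(50, mean = 1.02, sd = 0.02)         # notional check-calibration readings

mr    <- abs(diff(x))                          # moving ranges
sigma <- mean(mr) / 1.128                      # short-term sigma estimate (d2 for n = 2)
ucl   <- mean(x) + 3 * sigma
lcl   <- mean(x) - 3 * sigma

lsl <- 0.9 * target; usl <- 1.1 * target       # +/-10% specification limits
cp  <- (usl - lsl) / (6 * sigma)
cpk <- min(usl - mean(x), mean(x) - lsl) / (3 * sigma)

plot(x, type = "b", ylim = range(c(x, lcl, ucl, lsl, usl)),
     xlab = "Check calibration", ylab = "LWC reading")
abline(h = c(mean(x), lcl, ucl), lty = c(1, 2, 2))
abline(h = c(lsl, usl), col = "red")
round(c(cp = cp, cpk = cpk), 2)
```

A Cpk noticeably smaller than Cp is the numeric signature of the kind of shift between target and baseline mean that the abstract describes.
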
Reasoning about Uncertainty with the Stan Modeling Language
John Haman | Breakout | 2019
Abstract: This briefing discusses the practical advantages of using the probabilistic programming language (PPL) Stan to answer statistical questions, especially those related to the quantification of uncertainty. Stan is a relatively new statistical tool that allows users to specify probability models and reason about the processes that generate the data they encounter. Stan has quickly become a popular language for writing statistical models because it allows one to specify rich (or sparse) Bayesian models using a high-level language. Further, Stan is fast, memory efficient, and robust. Stan requires users to be explicit about the model they wish to evaluate, which makes the process of statistical modeling more transparent to users and decision makers. This is valuable because it forces practitioners to consider assumptions at the beginning of the model-building procedure, rather than at the end (or not at all). In this sense, Stan is the opposite of a "black box" modeling approach. This approach may be tedious and labor intensive at first, but the pay-offs are large. For example, once a model is set up, inferential tasks are all essentially automatic, as changing the model does not change how one analyzes the data. This is a generic approach to inference. To illustrate these points, we use Stan to study a ballistic miss distance problem. In ballistic missile testing, the p-content circular error probable (CEP) is the circle that contains p percent of future shots fired, on average. Statistically, CEP is a bivariate prediction region, constrained by the model to be circular. In Frequentist statistics, the determination of CEP is highly dependent on the model fit, and a different calculation of CEP must be produced for each plausible model. However, with Stan, we can approach the CEP calculation in a way that is invariant to the model we use to fit the data. We show how to use Stan to calculate CEP and uncertainty intervals for the parameters using posterior summary statistics. Statistical practitioners can access Stan from several programming languages, including R and Python.

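A minimal version of the miss-distance analysis can be written with rstan, embedding a small Stan program as a string: fit independent normal models to downrange and crossrange misses and read CEP off the posterior predictive distribution of the radial miss. The data, priors, and sampler settings below are notional assumptions, not the briefing's actual model.

```r
## Minimal sketch (notional data, not the briefing's model): bivariate normal
## miss distances fit in Stan via rstan, with CEP taken from the posterior
## predictive distribution of the radial miss.
library(rstan)

stan_code <- "
data {
  int<lower=1> N;
  vector[N] y1;   // downrange miss distances
  vector[N] y2;   // crossrange miss distances
}
parameters {
  real mu1;
  real mu2;
  real<lower=0> sigma1;
  real<lower=0> sigma2;
}
model {
  mu1 ~ normal(0, 50);
  mu2 ~ normal(0, 50);
  sigma1 ~ normal(0, 50);
  sigma2 ~ normal(0, 50);
  y1 ~ normal(mu1, sigma1);
  y2 ~ normal(mu2, sigma2);
}
generated quantities {
  // radial miss of one simulated future shot per posterior draw
  real r_rep = sqrt(square(normal_rng(mu1, sigma1)) + square(normal_rng(mu2, sigma2)));
}
"

set.seed(5)
n  <- 30
y1 <- rnorm(n, 2, 10)    # notional downrange misses (meters)
y2 <- rnorm(n, -1, 12)   # notional crossrange misses (meters)

fit <- stan(model_code = stan_code, data = list(N = n, y1 = y1, y2 = y2),
            chains = 2, iter = 2000, refresh = 0)

r_rep <- rstan::extract(fit, pars = "r_rep")$r_rep
c(CEP50 = unname(quantile(r_rep, 0.5)),   # radius containing 50% of future shots
  CEP90 = unname(quantile(r_rep, 0.9)))
```

Because CEP is computed directly from simulated future shots, swapping in a different likelihood changes only the Stan program, not the downstream CEP calculation, which is the model-invariance point made in the abstract.
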
Target Location Error Estimation Using Parametric Models
James Brownlow | Breakout | 2019

Anatomy of a Cyberattack: Standardizing Data Collection for Adversarial and Defensive Analyses
Jason Schlup | Breakout | 2019
Abstract: Hardly a week goes by without news of a cybersecurity breach or an attack by cyber adversaries against a nation's infrastructure. These incidents have wide-ranging effects, including reputational damage and lawsuits against corporations with poor data handling practices. Further, these attacks do not require the direction, support, or funding of technologically advanced nations; instead, significant damage can be – and has been – done with small teams, limited budgets, modest hardware, and open source software. Due to the significance of these threats, it is critical to analyze past events to predict trends and emerging threats. In this document, we present an implementation of a cybersecurity taxonomy and a methodology to characterize and analyze all stages of a cyberattack. The chosen taxonomy, MITRE ATT&CK™, allows for detailed definitions of aggressor actions which can be communicated, referenced, and shared uniformly throughout the cybersecurity community. We translate several open source cyberattack descriptions into the analysis framework, thereby constructing cyberattack data sets. These data sets (supplemented with notional defensive actions) illustrate example Red Team activities. The data collection procedure, when used during penetration testing and Red Teaming, provides valuable insights about the security posture of an organization, as well as the strengths and shortcomings of the network defenders. Further, these records can support past trends and future outlooks of the changing defensive capabilities of organizations. From these data, we are able to gather statistics on the timing of actions, detection rates, and cyberattack tool usage. Through analysis, we are able to identify trends in the results and compare the findings to prior events, different organizations, and various adversaries.

A Survey of Statistical Methods in Aeronautical Ground Testing
Drew Landman | Breakout | 2019

Your Mean May Not Mean What You Mean It to Mean
Ken Johnson | Breakout | 2019
Abstract: The average and standard deviation of, say, strength or dimensional test data are basic engineering math, simple to calculate. What those resulting values actually mean, however, may not be simple, and can be surprisingly different from what a researcher wants to calculate and communicate. Mistakes can lead to overlarge estimates of spread, structures that are over- or under-designed, and other challenges to understanding or communicating what your data are really telling you. This talk will discuss some common errors and missed opportunities seen in engineering and scientific analyses, along with mitigations that can be applied through smart and efficient test planning and analysis. It will cover when, and when not, to report a simple mean of a dataset based on the way the data were taken; why ignoring this often either hides or overstates risk; and a standard method for planning tests and analyses to avoid this problem. It will also cover what investigators can correctly (or incorrectly) say about means and standard deviations of data, including how and why to describe uncertainty and assumptions depending on what a value will be used for. The presentation is geared toward the engineer, scientist, or project manager charged with test planning, data analysis, or understanding findings from tests and other analyses. A basic understanding of quantitative data analysis is recommended; more experienced participants will grasp correspondingly more nuance from the talk. Some knowledge of statistics is helpful, but not required. Participants will be challenged to think about an average as not just "the average", but a valuable number that can and must relate to the engineering problem to be solved and must be firmly based in the data. Attendees will leave the talk with a more sophisticated understanding of this basic, ubiquitous, but surprisingly nuanced statistic and a greater appreciation of its power as an engineering tool.

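The central point, that "the" standard deviation depends on how the data were taken, can be shown in a few lines of R with simulated lot-structured strength data; the lot counts and variance components below are notional.

```r
## Minimal sketch (simulated data): why the reported spread depends on how the
## data were taken. Ten strength specimens from each of five material lots,
## with lot-to-lot variation larger than within-lot variation.
set.seed(2)
lots     <- 5; per_lot <- 10
lot_mean <- rnorm(lots, mean = 100, sd = 5)                        # lot-to-lot spread
strength <- rnorm(lots * per_lot,
                  mean = rep(lot_mean, each = per_lot), sd = 1)    # within-lot spread
lot      <- rep(1:lots, each = per_lot)

sd(strength)                      # naive pooled spread: mixes both sources
sd(tapply(strength, lot, mean))   # spread of lot means: lot-to-lot variation
mean(tapply(strength, lot, sd))   # average within-lot spread: repeatability
```

Which of these three numbers is "the" standard deviation depends entirely on the engineering question, which is the point of the talk.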