Uncertainty Quantification: Combining Large Scale Computational Models with Physical Data for Inference
Dave Higdon | Breakout | 2019

Combining physical measurements with computational models is key to many investigations involving validation and uncertainty quantification (UQ). This talk surveys some of the many approaches taken for validation and UQ with large-scale computational models. Experience with such applications suggests classifications of different problem types with common features (e.g., data size, amount of empiricism in the model, computational demands, availability of data, extent of extrapolation required). More recently, social and socio-technical systems are being considered for similar analyses, bringing new challenges to this area. This talk will discuss approaches for such problems and highlight what might be new research directions for application and methodological development in UQ.

Target Location Error Estimation Using Parametric Models
James Brownlow | Breakout | 2019

Tutorial: Cyber Attack Resilient Weapon Systems
Barry Horowitz (Professor, Systems Engineering, University of Virginia) | Tutorial | 2019

This tutorial is an abbreviated version of a 36-hour short course recently provided by UVA to a class of engineers working at the Defense Intelligence Agency. The tutorial provides a definition of cyber attack resilience that extends earlier definitions of system resilience, which were not focused on cyber attacks. Based upon research results derived by the University of Virginia over an eight-year period through DoD/Army/Air Force/industry funding, the tutorial will illuminate the following topics: 1) a resilience design requirements methodology and the need for supporting analysis tools, 2) a system architecture approach for achieving resilience, 3) example resilience design patterns and example prototype implementations, 4) experimental results regarding resilience-related roles and readiness of system operators, and 5) test and evaluation issues. The tutorial will be presented by UVA Munster Professor Barry Horowitz.

Bayesian Component Reliability Estimation: F-35 Case Study
V. Bram Lillard & Rebecca Medlin | Breakout | 2019

A challenging aspect of a system reliability assessment is integrating multiple sources of information, including component, subsystem, and full-system data, previous test data, and subject matter expert opinion. A powerful feature of Bayesian analyses is the ability to combine these multiple sources of data and variability in an informed way to perform statistical inference. This feature is particularly valuable in assessing system reliability when testing is limited and only a small number of failures (or none at all) are observed. The F-35 is DoD's largest program; approximately one-third of its operations and sustainment cost is attributed to spare parts and the removal, replacement, and repair of components. The failure rate of those components is the driving parameter for a significant portion of the sustainment cost, and yet for many of these components only poor estimates of the failure rate exist. For many programs, the contractor produces estimates of component failure rates based on engineering analysis and legacy systems with similar parts. While these are useful, the actual removal rates can provide a more accurate estimate of the removal and replacement rates the program should anticipate in future years. In this presentation, we show how we applied a Bayesian analysis to combine the engineering reliability estimates with the actual failure data to overcome the problem of sparse data. Our technique is broadly applicable to any program where multiple sources of reliability information need to be combined for the best estimation of component failure rates and, ultimately, sustainment costs.

Machine Learning Prediction with Streamed Sensor Data: Fitting Neural Networks Using Functional Principal Components
Chris Gotwalt | Breakout | 2019

Sensors that record sequences of measurements are now embedded in many products, from wearable exercise watches to chemical and semiconductor manufacturing equipment. The shapes of the sensor stream curves carry information that is highly predictive of a variety of outcomes, such as the likelihood of a product failure event or batch yield. Although such data are now common and readily available, they are often used inefficiently or not at all, due to a lack of knowledge and tools for properly leveraging them. In this presentation, we propose fitting splines to sensor streams and extracting features called functional principal component scores, which offer a highly efficient, low-dimensional compression of the signal data. We then use these features as inputs to machine learning models such as neural networks and LASSO regression. Once one sees sensor data in this light, answering a wide variety of applied questions becomes a straightforward two-stage process: data cleanup and functional feature extraction, followed by modeling using those features as inputs.

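As a rough base-R sketch of this two-stage workflow, with simulated streams, and with PCA on B-spline coefficients standing in for a full functional principal components analysis:

```r
library(splines)
set.seed(1)

# Simulated sensor streams: 60 runs, 200 readings each (stand-in for real data)
t <- seq(0, 1, length.out = 200)
freq <- runif(60, 1, 3)
streams <- t(sapply(freq, function(f) sin(2 * pi * f * t) + rnorm(200, sd = 0.1)))

# Stage 1: compress each stream to B-spline coefficients, then take principal
# components of the coefficients (a simple surrogate for FPC scores)
B <- bs(t, df = 15)
coefs <- t(apply(streams, 1, function(y) lm.fit(B, y)$coefficients))
pc <- prcomp(coefs)
scores <- pc$x[, 1:3]   # low-dimensional features describing curve shape

# Stage 2: use the scores as inputs to an ordinary predictive model
outcome <- freq + rnorm(60, sd = 0.05)   # hypothetical response tied to curve shape
fit <- lm(outcome ~ scores)
summary(fit)$r.squared
```
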
Behavioral Analytics: Paradigms and Performance Tools of Engagement in System Cybersecurity
Robert Gough | Breakout | 2019

The application opportunities for behavioral analytics in the cybersecurity space rest on simple realities: (1) the great majority of breaches across all cybersecurity venues are due to human choices and human error; (2) with communication and information technologies making data rapidly available, and the behavioral strategies of bad actors growing cleverer, there is a need for expanded perspectives in cybersecurity prevention; and (3) internally focused paradigms must now be explored that treat endogenous protection from security threats as an important focus and integral dimension of cybersecurity prevention. The development of cybersecurity monitoring metrics and tools, as well as the creation of intrusion prevention standards and policies, should always include an understanding of the underlying drivers of human behavior. As temptation follows available paths, cyber-attacks follow technology, business models, and behavioral habits. The human element will always be the most significant part of the anatomy of any final decision. Choice options, from input to judgment to prediction to action, need to be better understood for their relevance to cybersecurity work. Behavioral performance indexes harness data about aggregate human participation in an active system, helping to capture some of the detail and nuance of this critically important dimension of cybersecurity.

Valuing Human Systems Integration: A Test and Data Perspective
Jeffrey Thomas | Breakout | 2019

Technology advances are accelerating at a rapid pace, with the potential to deliver greater capability and power to the warfighter. However, if human capabilities and limitations are not central to concepts, requirements, design, and development, then new or upgraded weapons and systems will be difficult to train, operate, and maintain; may not yield the projected skills, job, grade, and manpower mix; and may result in serious human error, injury, or Soldier loss. The Army Human Systems Integration (HSI) program seeks to overcome these challenges by ensuring appropriate consideration and integration of seven technical domains: human factors engineering (e.g., usability), manpower, personnel, training, safety and occupational health, habitability, and force protection and survivability. Managing the tradeoffs, constraints, and limitations among these technical domains allows HSI to execute a coordinated, systematic process that puts the warfighter at the center of the design process: equipping the warfighter rather than manning the equipment. To that end, the Army HSI Headquarters, currently a directorate within the Army Headquarters Deputy Chief of Staff (DCS), G-1, develops strategies and ensures human systems factors are early key drivers in concepts, strategy, and requirements, and are fully integrated throughout system design, development, testing and evaluation, and sustainment.

The need to consider HSI factors early in the development cycle is critical. Too often, man-machine interface issues are not addressed until late in the development cycle (i.e., the production and deployment phase), after the configuration of a particular weapon or system has been set. The result is degraded combat capability, suboptimal system and system-of-systems integration, increased training and sustainment requirements, or fielded systems that go unused. Acquisition test data are also good sources from which to glean HSI return on investment (ROI) metrics. Defense acquisition reports, such as test and evaluation operational assessments, identify HSI factors as root causes when Army programs experience increased cost, schedule overruns, or low performance. This is identifiable in the number and type of systems that require follow-on test and evaluation (FOT&E), over-reliance on field service representatives (FSRs), costly and time-consuming engineering change requests (ECRs), or failures to achieve reliability, availability, and maintainability (RAM) key performance parameters (KPPs) and key system attributes (KSAs). In this presentation, we will present these data and propose several ROI metrics, closely aligned to the defense acquisition process, to emphasize and illustrate the value of HSI. Optimizing warfighter-system performance and reducing human error, minimizing the risk of Soldier loss or injury, and reducing personnel and materiel life cycle costs produce data that are inextricably linked to early, iterative, and measurable HSI processes within the defense acquisition system.

Tutorial: Reproducible Research
Andrew Flack, Kevin Kirshenbaum, and John Haman (IDA) | Tutorial | 2019

Analyses are "reproducible" if the same methods applied to the same data produce identical results when run again by another researcher (or you in the future). Reproducible analyses are transparent and easy for reviewers to verify, as results and figures can be traced directly to the data and methods that produced them. There are also direct benefits to the researcher. Real-world analysis workflows inevitably require changes to incorporate new or additional data, or to address feedback from collaborators, reviewers, or sponsors. These changes are easier to make when reproducible research best practices have been considered from the start. Poor reproducibility habits result in analyses that are difficult or impossible to review, are prone to compounded mistakes, and are inefficient to re-run in the future. They can lead to duplication of effort or even loss of accumulated knowledge when a researcher leaves your organization. With larger and more complex datasets, along with more complex analysis techniques, reproducibility is more important than ever.

Although reproducibility is critical, it is often not prioritized, whether due to a lack of time or an incomplete understanding of end-to-end opportunities to improve it. This tutorial will discuss the benefits of reproducible research and demonstrate ways that analysts can introduce reproducible research practices during each phase of the analysis workflow: preparing for an analysis, performing the analysis, and presenting results. A motivating example will be carried throughout to demonstrate specific techniques, useful tools, and other tips and tricks where appropriate. The discussion of specific techniques and tools is non-exhaustive; we focus on things that are accessible and immediately useful for someone new to reproducible research. The methods focus mainly on work performed in R, but the general concepts underlying reproducible research can be implemented in other analysis environments, such as JMP and Excel, which are briefly discussed. By implementing the approaches and concepts discussed during this tutorial, analysts in defense and aerospace will be equipped to produce more credible and defensible analyses of T&E data.

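The tutorial's own motivating example is not reproduced here, but a few of the baseline habits it alludes to look something like the following in R (the file paths are hypothetical):

```r
# A few baseline reproducibility habits in an R workflow (illustrative only)
set.seed(20190423)                            # fix randomness so results regenerate

raw <- read.csv("data/raw/test_events.csv")   # read from a versioned, read-only raw file
# ... cleaning and analysis steps, each scripted rather than done by hand ...

saveRDS(raw, "data/derived/test_events_clean.rds")  # derived data traceable to this script

# Knit narrative, code, and figures into one regenerable report
rmarkdown::render("reports/analysis.Rmd")

# Record the exact package versions behind the results
writeLines(capture.output(sessionInfo()), "reports/session_info.txt")
```
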
Anatomy of a Cyberattack: Standardizing Data Collection for Adversarial and Defensive Analyses
Jason Schlup | Breakout | 2019

Hardly a week goes by without news of a cybersecurity breach or an attack by cyber adversaries against a nation's infrastructure. These incidents have wide-ranging effects, including reputational damage and lawsuits against corporations with poor data handling practices. Further, these attacks do not require the direction, support, or funding of technologically advanced nations; instead, significant damage can be, and has been, done with small teams, limited budgets, modest hardware, and open source software. Given the significance of these threats, it is critical to analyze past events to predict trends and emerging threats. In this document, we present an implementation of a cybersecurity taxonomy and a methodology to characterize and analyze all stages of a cyberattack. The chosen taxonomy, MITRE ATT&CK™, allows for detailed definitions of aggressor actions that can be communicated, referenced, and shared uniformly throughout the cybersecurity community. We translate several open source cyberattack descriptions into the analysis framework, thereby constructing cyberattack data sets. These data sets (supplemented with notional defensive actions) illustrate example Red Team activities. The data collection procedure, when used during penetration testing and Red Teaming, provides valuable insights into the security posture of an organization, as well as the strengths and shortcomings of the network defenders. Further, these records can support analyses of past trends and future outlooks of the changing defensive capabilities of organizations. From these data, we are able to gather statistics on the timing of actions, detection rates, and cyberattack tool usage. Through analysis, we are able to identify trends in the results and compare the findings to prior events, different organizations, and various adversaries.

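The presentation's data sets are not reproduced here; the toy log below merely suggests what coding Red Team actions against the ATT&CK taxonomy might look like. The events are invented, though the technique IDs are real ATT&CK identifiers.

```r
# Hypothetical Red Team action log coded with MITRE ATT&CK technique IDs
events <- data.frame(
  time      = as.POSIXct("2019-04-01 09:00:00", tz = "UTC") +
              c(0, 420, 1800, 7200),
  tactic    = c("initial-access", "execution", "discovery", "exfiltration"),
  technique = c("T1566", "T1059", "T1083", "T1041"),
  detected  = c(FALSE, FALSE, TRUE, TRUE)
)

mean(events$detected)                   # detection rate across actions
diff(as.numeric(events$time)) / 60      # minutes between successive actions
table(events$tactic, events$detected)   # which stages the defenders caught
```

Because every record uses the same taxonomy, logs from different exercises, organizations, and adversaries can be pooled and compared directly.
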
Uncertainty Quantification
Ralph Smith (North Carolina State University) | Short Course | Materials | 2019

We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify the uncertainties associated with the models, the inputs to the models, and the data used to calibrate the models. The synthesis of statistical and mathematical techniques used to quantify input and response uncertainties for simulation codes that can take hours to days to run comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest, such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs, and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics.

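To give a flavor of the propagation step, here is a minimal Monte Carlo sketch: sample the inputs from assumed distributions, push them through a stand-in model, and read off a prediction interval. A real simulation code would replace the toy `model` and would typically require a surrogate to make the sampling affordable.

```r
set.seed(42)
model <- function(k, q) q / k        # stand-in for an expensive simulation code

# Input distributions (assumed; in practice often calibrated via Bayes)
k <- rlnorm(1e5, meanlog = log(2), sdlog = 0.1)
q <- rnorm(1e5, mean = 5, sd = 0.5)

out <- model(k, q)
quantile(out, c(0.025, 0.975))       # 95% interval for the predicted quantity
hist(out, breaks = 60, main = "Propagated output uncertainty")
```
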
Applying Functional Data Analysis throughout Aerospace Testing
David Harrison | Breakout | 2019

Sensors abound in aerospace testing, and while many scientists look at the data from a physics perspective, comparative statistical information is what drives decisions. A multi-company project compared launch data from the 1980s to a current data set that included 30 sensors, each designed to gather 3,000 data points during the 3-second launch event. The data included temperature, acceleration, and pressure information. This talk will compare the data analysis methods developed for this project and discuss the use of the new Functional Data Analysis tool within JMP for its ability to discern in-family launch performances.

Screening Designs for Resource Constrained Deterministic M&S Experiments: A Munitions Case Study
Christopher Drake | Breakout | 2019

In applications where modeling and simulation (M&S) runs are quick and cheap, space-filling designs will give the tester all the information needed to make decisions about the system. In some applications, however, this luxury does not exist, and each M&S run can be time consuming and expensive. In these scenarios, a sequential test approach provides an efficient solution: an initial screening is conducted, followed by an augmentation to fit specified models of interest. Until now, no dedicated screening designs existed for UQ applications in resource-constrained situations. Because the Army frequently faces this type of situation, the need sparked a collaboration between Picatinny's Statistical Methods and Analysis group and Professor V. Roshan Joseph of Georgia Tech, in which a new type of UQ screening design was created. This paper provides a brief introduction to the design, its intended use, and a case study in which the new methodology was applied.

3D Mapping, Plotting, and Printing in R with Rayshader
Tyler Morgan-Wall | Breakout | 2019

Is there ever a place for the third dimension in visualizing data? Is the use of 3D inherently bad, or can a 3D visualization be an effective tool to communicate results? In this talk, I will show you how to create beautiful 2D and 3D maps and visualizations in R using the rayshader package. Additionally, I will talk about the value of 3D plotting and how good aesthetic choices can more clearly communicate results to stakeholders. Rayshader is a free and open source package for transforming geospatial data into engaging visualizations using a simple, scriptable workflow. It provides utilities to interactively map, plot, and 3D print data from within R. It was nominated by Hadley Wickham as one of 2018's Data Visualizations of the Year for the online magazine Quartz.

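For readers who want to try it, a minimal example in the spirit of the talk might look like the following, using R's built-in `volcano` elevation matrix. The function calls reflect the rayshader API as commonly documented; check the package documentation for your version.

```r
library(rayshader)   # assumed installed from CRAN
library(magrittr)    # for the %>% pipe used in rayshader examples

elev <- volcano      # small elevation matrix shipped with base R

elev %>%
  sphere_shade(texture = "desert") %>%                    # map elevation to color
  add_shadow(ray_shade(elev), max_darken = 0.5) %>%       # add raytraced shadows
  plot_3d(elev, zscale = 3, theta = 135, phi = 35)        # open interactive 3D view

render_snapshot()    # capture the current 3D view as a static image
```
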
Air Force Human Systems Integration Program
Anthony Thomas | Breakout | 2019

The Air Force (AF) Human Systems Integration (HSI) program is led by the 711th Human Performance Wing's Human Systems Integration Directorate (711 HPW/HP). 711 HPW/HP provides direct support to system program offices and AF Major Commands (MAJCOMs) across the acquisition lifecycle, from requirements development to fielding and sustainment, in addition to providing home office support. With an ever-increasing demand signal for support, HSI practitioners within 711 HPW/HP assess HSI domain areas for human-centered risks and strive to ensure systems are designed and developed to safely, effectively, and affordably integrate with human capabilities and limitations. In addition to system program offices and MAJCOMs, 711 HPW/HP provides HSI support to AF Centers (e.g., AF Sustainment Center, AF Test Center), the AF Medical Service, and special cases as needed. AF Global Strike Command (AFGSC) is the largest MAJCOM, with several Programs of Record (POR), such as the B-1, B-2, and B-52 bombers, Intercontinental Ballistic Missiles (ICBM), the Ground-Based Strategic Deterrent (GBSD), the Airborne Launch Control System (ALCS), and other support programs and vehicles like the UH-1N. Mr. Anthony Thomas (711 HPW/HP), the AFGSC HSI representative, will discuss how 711 HPW/HP supports these programs at the MAJCOM headquarters level and in the system program offices.

A User-Centered Design Approach to Military Software Development
Pam Savage-Knepshield | Breakout | 2019

This case study highlights activities performed during the front-end process of a software development effort undertaken by the Fire Support Command and Control Program Office. This program office provides U.S. Army, Joint, and coalition commanders with the capability to plan, execute, and deliver both lethal and non-lethal fires. Recently, the program office undertook modernization of its primary field artillery command and control system, which has been in use for over 30 years. The focus of this case study is the user-centered design process and the activities performed immediately before and after contract award. A modified waterfall model comprising three cyclic, overlapping phases (observation, visualization, and evaluation) provided structure for the iterative, user-centered design process. Gathering and analyzing data collected during focus groups, observational studies, and workflow process mapping enabled the design team to identify:

1. design patterns across the role/duty, unit, and echelon matrix (a hierarchical organization structure);
2. opportunities to automate manual processes;
3. opportunities to increase efficiencies for fire mission processing;
4. bottlenecks and workarounds to be eliminated through design of the modernized system;
5. shortcuts that can be leveraged in design;
6. relevant and irrelevant content for each user population, for streamlining access to functionality; and
7. a usability baseline for later comparison (e.g., the number of steps and time taken to perform a task, as captured in workflows, for comparison to the same task in the modernized system).

This analysis also provided the basis for creating visualizations using wireframes. Heuristic evaluations were conducted early to obtain initial feedback from users. In the next few months, usability studies will enable users to provide feedback based on actual interaction with the newly designed software. Included in this case study are descriptions of the methods used to collect user-centered design data, how results were visualized and documented for use by the development team, and lessons learned from applying user-centered design techniques during software development of a military field artillery command and control system.

A Survey of Statistical Methods in Aeronautical Ground Testing
Drew Landman | Breakout | 2019

Design of Experiments
Dr. Doug Montgomery and Dr. Caleb King (Arizona State University, JMP) | Short Course | 2019

Overview/Course Outcomes: Well-designed experiments are a powerful tool for developing and validating cause-and-effect relationships when evaluating and improving product and process performance and for operational testing of complex systems. Designed experiments are the only efficient way to verify the impact of changes in product or process factors on actual performance. The course outcomes are:

- the ability to plan and execute experiments;
- the ability to collect, analyze, and interpret data to provide the knowledge required for business success; and
- knowledge of a wide range of modern experimental tools that enable practitioners to customize their experiment to meet practical resource constraints.

The topics covered during the course are:

- fundamentals of DOX: randomization, replication, and blocking;
- planning for a designed experiment: type and size of design, factor selection, levels and ranges, response measurement, sample sizes;
- graphical and statistical approaches to DOX analysis;
- blocking to eliminate the impact of nuisance factors on experimental results;
- factorial experiments and interactions;
- fractional factorials: efficient and effective use of experimental resources;
- optimal designs;
- response surface methods; and
- a demonstration illustrating and comparing the effectiveness of different experimental design strategies.

This course is focused on helping you and your organization make the most effective use of DOX. Software usage is fully integrated into the course.

Who Should Attend: The course is suitable for participants from an engineering or technical background. Participants will need some previous experience and background in statistical methods.

Reference Materials: The course is based on the textbook Design and Analysis of Experiments, 9th Edition, by Douglas C. Montgomery. JMP software will be discussed and illustrated.

Adopting Optimized Software Test Design Methods at Scale
Justin Hunter | Breakout | 2019

Using combinatorial test design methods to select software test scenarios has repeatedly delivered large gains in efficiency and thoroughness, which raises the questions (illustrated in the sketch after this list):

- Why are these proven methods not used everywhere?
- Why do some efforts to promote adoption of new approaches stagnate?
- What steps can leaders take to successfully introduce and spread new test design methods?

For more than a decade, Justin Hunter has helped large global organizations across six continents adopt new test design techniques at scale. In some environments, he has felt like Sisyphus, forever condemned to roll a boulder uphill only to watch it roll back down again. In other situations, things clicked: teams smoothly adopted new tools and techniques, and impressive results were quickly achieved. In this presentation, Justin will discuss several common challenges faced by large organizations, explain why adopting test design tools is more challenging than adopting other types of development and testing tools, and share actionable recommendations to consider when rolling out new test design approaches.

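The talk is about adoption rather than mechanics, but the payoff it refers to is easy to see in miniature. The naive greedy sketch below builds a pairwise-covering test set for five three-level parameters; it illustrates the idea, and is not a production combinatorial test design tool.

```r
# Naive greedy pairwise test selection: 5 parameters, 3 levels each.
# Exhaustive testing needs 3^5 = 243 runs; pairwise coverage needs far fewer.
params <- 5; levels <- 3
full <- as.matrix(expand.grid(rep(list(1:levels), params)))

# Encode every (parameter pair, level pair) combination a test row covers
pair_keys <- function(row) {
  idx <- combn(params, 2)
  apply(idx, 2, function(ij) paste(ij[1], ij[2], row[ij[1]], row[ij[2]], sep = ":"))
}

uncovered <- unique(unlist(apply(full, 1, pair_keys)))
suite <- NULL
while (length(uncovered) > 0) {
  gain <- apply(full, 1, function(r) sum(pair_keys(r) %in% uncovered))
  best <- which.max(gain)                      # pick the row covering the most new pairs
  suite <- rbind(suite, full[best, ])
  uncovered <- setdiff(uncovered, pair_keys(full[best, ]))
}
nrow(suite)   # typically on the order of a dozen tests instead of 243
```
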
The Isle of Misfit Designs: A Guided Tour of Optimal Designs That Break the Mold
Caleb King | Breakout | 2019

Whether in a Design of Experiments course or through your own work, you've no doubt seen and become well acquainted with the standard experimental designs. You know the features: they're "orthogonal" (no messy correlations to deal with), their correlation matrices are nice, pretty diagonals, and they come only in run sizes of 4, 8, 12, 16, and so on. Well, what if I told you that there exist optimal designs that defy convention? What if I told you that, yes, you can run an optimal design with, say, 5 factors in 9 runs. Or 10. Or even 11 runs! Join me as I show you a strange new world of optimal designs that are the best at what they do, even though they might not look very nice.

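For a concrete taste of the "misfit" idea, the snippet below asks the AlgDesign package (one assumed option; the speaker's own examples are in JMP) for a D-optimal two-level design with 5 factors in 9 runs, a run size no regular fractional factorial offers.

```r
library(AlgDesign)   # assumed available; JMP's custom designer does the same job

# Candidate set: the full 2^5 factorial (32 runs)
cand <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1),
                    D = c(-1, 1), E = c(-1, 1))

# D-optimal subset of 9 runs for the main-effects model
des <- optFederov(~ A + B + C + D + E, data = cand, nTrials = 9, criterion = "D")
des$design
round(cor(des$design), 2)   # nonzero off-diagonals: not orthogonal, still optimal
```
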
Functional Data Analysis for Design of Experiments
Tom Donnelly | Breakout | 2019

With nearly continuous recording of sensor values now common, a new type of data called "functional data" has emerged. Rather than modeling individual readings, one models the shape of the stream of data over time. As an example, one might model many historical vibration-over-time streams recorded at machine start-up to identify functional data shapes associated with the onset of system failure. Functional principal components (FPC) analysis is a new and increasingly popular method for reducing the dimensionality of functional data, so that only a few FPCs are needed to closely approximate any of a set of unique data streams. When combined with design of experiments (DoE) methods, the response to be modeled, in as few tests as possible, is the shape of a stream of data over time. Example analyses will be shown in which the form of the curve is modeled as a function of several input variables, allowing the analyst to predict the shape of the curve at any input settings and to determine which settings are associated with shapes indicative of good or poor system performance.

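A simplified base-R sketch of the workflow the abstract describes, using simulated curves: extract FPC scores via an SVD, model each score as a function of the design factors, and reconstruct the predicted curve at new settings.

```r
set.seed(2)
# Simulated experiment: 2 factors shift the shape of a response curve
n <- 24
x1 <- runif(n, -1, 1); x2 <- runif(n, -1, 1)
t <- seq(0, 1, length.out = 100)
curves <- t(sapply(1:n, function(i)
  (1 + 0.5 * x1[i]) * sin(2 * pi * t) + 0.3 * x2[i] * t + rnorm(100, sd = 0.05)))

# Functional principal components via SVD of the centered curves
mu <- colMeans(curves)
sv <- svd(scale(curves, center = mu, scale = FALSE))
scores <- sv$u[, 1:2] %*% diag(sv$d[1:2])   # FPC scores, one pair per test run
efun   <- sv$v[, 1:2]                       # eigenfunctions (principal curve shapes)

# Model each score as a function of the DoE factors, then predict a curve shape
fit1 <- lm(scores[, 1] ~ x1 * x2)
fit2 <- lm(scores[, 2] ~ x1 * x2)
new <- data.frame(x1 = 0.8, x2 = -0.5)
pred_curve <- mu + predict(fit1, new) * efun[, 1] + predict(fit2, new) * efun[, 2]
plot(t, pred_curve, type = "l", main = "Predicted curve at new factor settings")
```
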
Toward Real-Time Decision Making in Experimental Settings
Devin Francom | Breakout | 2019

Materials scientists, computer scientists, and statisticians at LANL have teamed up to investigate how to make near-real-time decisions during fast-paced experiments. For instance, a materials scientist at a beamline typically has a short window in which to perform a number of experiments, after which they analyze the experimental data, determine interesting new experiments, and repeat. In typical circumstances, that cycle could take a year. The goal of this research and development project is to accelerate that cycle so that interesting leads are followed during the short window for experiments, rather than in years to come. We detail some of our UQ work in materials science, including emulation, sensitivity analysis, and solving inverse problems, with an eye toward real-time decision making in experimental settings.

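Of the ingredients listed, sensitivity analysis is the easiest to show in a few lines. Below is a crude Monte Carlo estimate of first-order (main-effect) sensitivity indices, using a toy function in place of an emulator; in practice one would use proper Sobol' estimators (e.g., those in R's sensitivity package).

```r
set.seed(3)
f <- function(x1, x2, x3) x1^2 + 0.5 * x2 + 0.1 * x3   # toy stand-in for an emulator
n <- 1e5
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n)
y <- f(x1, x2, x3)

# First-order index ~ Var(E[Y | Xi]) / Var(Y), estimated by binning each input
S1 <- function(x) var(tapply(y, cut(x, 50), mean)) / var(y)
sapply(list(x1 = x1, x2 = x2, x3 = x3), S1)   # x1 should dominate
```
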
Engineering First, Statistics Second: Deploying Statistical Test Optimization (STO) for Cyber
Kedar Phadke | Breakout | 2019

Due to the immense number of potential use cases, configurations, and threat behaviors, thorough and efficient cyber testing is a significant challenge for the defense community. In this presentation, Phadke will present case studies where STO was successfully deployed for cyber testing, resulting in higher assurance, reduced schedule, and reduced testing cost. Phadke will also discuss the importance of focusing first on the engineering and science analysis and, only after that is complete, implementing statistical methods.

Your Mean May Not Mean What You Mean It to Mean
Ken Johnson | Breakout | 2019

The average and standard deviation of, say, strength or dimensional test data are basic engineering math, simple to calculate. What those resulting values actually mean, however, may not be simple, and can be surprisingly different from what a researcher wants to calculate and communicate. Mistakes can lead to overlarge estimates of spread, structures that are over- or under-designed, and other challenges to understanding or communicating what your data are really telling you. This talk will discuss some common errors and missed opportunities seen in engineering and scientific analyses, along with mitigations that can be applied through smart and efficient test planning and analysis. It will cover when, and when not, to report a simple mean of a dataset, based on the way the data were taken; why ignoring this often either hides or overstates risk; and a standard method for planning tests and analyses to avoid this problem. It will also cover what investigators can correctly (or incorrectly) say about means and standard deviations of data, including how and why to describe uncertainty and assumptions depending on what a value will be used for. The presentation is geared toward the engineer, scientist, or project manager charged with test planning, data analysis, or understanding findings from tests and other analyses. A basic understanding of quantitative data analysis is recommended; more experienced participants will grasp correspondingly more nuance from the talk. Some knowledge of statistics is helpful but not required. Participants will be challenged to think about an average as not just "the average," but a valuable number that can and must relate to the engineering problem to be solved and must be firmly based in the data. Attendees will leave the talk with a more sophisticated understanding of this basic, ubiquitous, but surprisingly nuanced statistic and a greater appreciation of its power as an engineering tool.

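One classic pitfall in this vein is easy to demonstrate: when repeat measurements are nested within units, the standard deviation of all readings pooled together answers a different question than the spread of the unit means. The numbers below are made up.

```r
set.seed(7)
# Hypothetical: 5 parts, 10 repeat strength measurements each
part_means <- rnorm(5, mean = 100, sd = 2)   # true part-to-part variation
readings <- unlist(lapply(part_means, function(m) rnorm(10, m, sd = 5)))  # gauge noise
part <- rep(1:5, each = 10)

sd(readings)                       # ~5+: mixes part-to-part and measurement variation
sd(tapply(readings, part, mean))   # spread of part averages: a different, smaller number
```

Which of the two is "the" standard deviation depends entirely on the engineering question being asked, which is the point of the talk.
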
A 2nd-Order Uncertainty Quantification Framework Applied to a Turbulence Model Validation Effort
Robert Baurle | Breakout | 2019

Computational fluid dynamics is now considered an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert who generated them. This practice must be replaced with a formal uncertainty quantification process if computational fluid dynamics is to play an expanded role in system design, development, and flight certification. Given the limitations of current hypersonic ground test facilities, some in the hypersonics community believe this expanded role is a requirement if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low-cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatory (random) and epistemic (lack-of-knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box." Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the goal is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for, allowing the analyst to separate turbulence model-form errors from other sources of uncertainty associated with simulating the facility flow.

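The "second-order" framing, epistemic uncertainty wrapped around aleatory uncertainty, is often visualized as a family of output CDFs, one per epistemic realization. Here is a toy version of that double loop, with a simple function standing in for the flow solver:

```r
set.seed(9)
solver <- function(a, theta) theta * a^2     # toy stand-in for the CFD "black box"

# Epistemic: an interval-valued model-form parameter, sampled on a grid
theta_grid <- seq(0.8, 1.2, length.out = 15)

plot(NULL, xlim = c(0, 8), ylim = c(0, 1),
     xlab = "output quantity", ylab = "CDF", main = "Family of CDFs (2nd-order UQ)")
for (theta in theta_grid) {                  # outer loop: epistemic realizations
  a <- rnorm(2000, mean = 1.5, sd = 0.3)     # inner loop: aleatory input samples
  out <- sort(solver(a, theta))
  lines(out, seq_along(out) / length(out), col = "grey40")
}
```

The horizontal spread of the curve family reflects epistemic uncertainty; the slope of each curve reflects aleatory uncertainty.
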
Wednesday Keynote Speaker III
Timothy Dare (Deputy Director, Developmental Test, Evaluation, and Prototyping, SES, OUSD(R&E)) | Keynote | 2019

Bio: Mr. Timothy S. Dare is the Deputy Director for Developmental Test, Evaluation and Prototyping (DD(DTEP)). As the DD(DTEP), he serves as the principal advisor on developmental test and evaluation (DT&E) to the Secretary of Defense, Under Secretary of Defense for Research and Engineering, and Director of Defense Research and Engineering for Advanced Capabilities. Mr. Dare is responsible for DT&E policy and guidance in support of the acquisition of major Department of Defense (DoD) systems, and for providing advocacy, oversight, and guidance to the DT&E acquisition workforce. He informs policy and advances leading-edge technologies through the development of advanced technology concepts and developmental and operational prototypes. By working closely with interagency partners, academia, industry, and government labs, he identifies, develops, and demonstrates multi-domain technologies and concepts that address high-priority DoD, multi-Service, and Combatant Command warfighting needs.

Prior to his appointment in December 2018, Mr. Dare was a Senior Program Manager for program management and capture at Lockheed Martin (LM) Space. In this role he was responsible for the capture and execution phases of multiple Intercontinental Ballistic Missile programs for Minuteman III, including a new airborne Nuclear Command and Control (NC2) development program. His major responsibilities included establishing program working environments at multiple locations, policies, processes, staffing, budgets, and technical baselines.

Mr. Dare has extensive T&E and prototyping experience. As the Engineering Program Manager for the $1.8B Integrated Space C2 programs for NORAD/NORTHCOM systems at Cheyenne Mountain, he was the integration and test lead focusing on planning, executing, and evaluating the integration and test phases (developmental and operational T&E) for Missile Warning and Space Situational Awareness (SSA) systems. Mr. Dare has also been the engineering lead or integration and test lead on other systems, including the Hubble Space Telescope; international border control systems; artificial intelligence (AI) development systems (knowledge-based reasoning); service-based networking systems for the UK Ministry of Defence; Army C2 systems; Space Fence C2; and foreign intelligence, surveillance, and reconnaissance systems. As part of the Department's strategic defense portfolio, Mr. Dare led the development of advanced prototypes in SSA C2 (Space Fence), Information Assurance (single sign-on), and AI systems, and was the sponsoring program manager for NC2 capability development.

Mr. Dare is a graduate of Purdue University and a member of both the Association for Computing Machinery and the Program Management Institute. He has been recognized by the U.S. Air Force for his contributions supporting NORAD/NORTHCOM's strategic defense missions, and by the National Aeronautics and Space Administration for his contributions to the original Hubble Space Telescope program. Mr. Dare holds a U.S. patent for single sign-on architectures.
