Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Keynote Closing Remarks |
Robert Behler Director DOT&E (bio)
Robert F. Behler was sworn in as Director of Operational Test and Evaluation on December 11, 2017. A Presidential appointee confirmed by the United States Senate, he serves as the senior advisor to the Secretary of Defense on operational and live fire test and evaluation of Department of Defense weapon systems. Prior to his appointment, he was the Chief Operating Officer and Deputy Director of the Carnegie Mellon University Software Engineering Institute (SEI), a Federally Funded Research and Development Center. SEI is a global leader in advancing software development and cybersecurity to solve the nation’s toughest problems through focused research, development, and transition to the broader software engineering community. Before joining the SEI, Mr. Behler was the President and CEO of SRC, Inc. (formerly the Syracuse Research Corporation). SRC is a not-for-profit research and development corporation with a for-profit manufacturing subsidiary that focuses on radar, electronic warfare and cybersecurity technologies. Prior to working at SRC, Mr. Behler was the General Manager and Senior Vice President of the MITRE Corp., where he provided leadership to more than 2,500 technical staff in 65 worldwide locations. He joined MITRE from the Johns Hopkins University Applied Physics Laboratory, where he was a General Manager for more than 350 scientists and engineers as they made significant contributions to critical Department of Defense (DOD) precision engagement challenges. General Behler served 31 years in the United States Air Force, retiring as a Major General in 2003. During his military career, he was the Principal Adviser for Command and Control, Intelligence, Surveillance and Reconnaissance (C2ISR) to the Secretary and Chief of Staff of the U.S. Air Force (USAF). International assignments as a general officer included the Deputy Commander for NATO’s Joint Headquarters North in Stavanger, Norway. He was the Director of the Senate Liaison Office for the USAF during the 104th Congress. Mr. Behler also served as the assistant for strategic systems to the Director of Operational Test and Evaluation. As an experimental test pilot, he flew more than 65 aircraft types. Operationally, he flew worldwide reconnaissance missions in the fastest aircraft in the world, the SR-71 Blackbird. Mr. Behler is a Fellow of the Society of Experimental Test Pilots and an Associate Fellow of the American Institute of Aeronautics and Astronautics. He is a graduate of the University of Oklahoma, where he received a B.S. and M.S. in aerospace engineering, has an MBA from Marymount University, and was a National Security Fellow at the JFK School of Government at Harvard University. Mr. Behler has recently been on several National Research Council studies for the National Academy of Sciences, including “Critical Code: Software Producibility,” “Achieving Effective Acquisition of Information Technology in the Department of Defense,” and “Development Planning: A Strategic Approach to Future Air Force Capabilities.” |
Keynote | 2018 |
||
Breakout B-52 Radar Modernization Test Design Considerations (Abstract)
Inherent system processes, restrictions on collection, or cost may impact the practical execution of an operational test. This study presents the use of blocking and split-plot designs when complete randomization is not feasible in operational test. Specifically, the USAF B-52 Radar Modernization Program test design is used to present tradeoffs of different design choices and the impacts of those choices on cost, operational relevance, and analytical rigor. |
Joseph Maloney AFOTEC |
Breakout | Materials | 2018 |
|
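A minimal sketch of the split-plot structure the abstract describes, using invented factors and levels (altitude band as a hard-to-change whole-plot factor; target type and look angle as sub-plot factors), not the actual B-52 test variables:

```python
# Minimal split-plot test matrix for when full randomization is impractical.
# Factor names and levels are hypothetical, not the actual B-52 radar test factors.
import itertools
import random

random.seed(1)

# Hard-to-change factor: fixed for an entire sortie (whole plot)
altitude_band = ["low", "high"]
# Easy-to-change factors: randomized within each sortie (sub plots)
target_type = ["maritime", "overland"]
look_angle = ["nose", "beam"]

runs = []
sortie = 0
for alt in altitude_band:
    for replicate in range(2):              # two sorties per altitude band
        sortie += 1
        subplots = list(itertools.product(target_type, look_angle))
        random.shuffle(subplots)            # randomize only within the sortie
        for tgt, ang in subplots:
            runs.append({"sortie": sortie, "altitude": alt,
                         "target": tgt, "look_angle": ang})

for r in runs:
    print(r)
```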
Contributed Assessing Human Visual Inspection for Acceptance Testing: An Attribute Agreement Analysis Case Study (Abstract)
In today’s manufacturing, inspection, and testing world, understanding the capability of the measurement system being used via the use of Measurement Systems Analyses (MSA) is a crucial activity that provides the foundation for the use of Design of Experiments (DOE) and Statistical Process Control (SPC). Although undesirable, there are times when human observation is the only measurement system available. In these types of situations, traditional MSA tools are often ineffectual due to the nature of the data collected. When there are no other alternatives, we need some method for assessing the adequacy and effectiveness of the human observations. When multiple observers are involved, Attribute Agreement Analyses are a powerful tool for quantifying the Agreement and Effectiveness of a visual inspection system. This talk will outline best practices and rules of thumb for Attribute Agreement Analyses, and will highlight a recent Army case study to further demonstrate the tool’s use and potential. |
Christopher Drake Lead Statistician QE&SA Statistical Methods & Analysis Group |
Contributed | Materials | 2018 |
|
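A small illustration of the kind of agreement statistics an attribute agreement analysis reports, computed on invented pass/fail ratings from three hypothetical inspectors (the statsmodels Fleiss' kappa routine is one convenient implementation):

```python
# Attribute agreement sketch: percent agreement and Fleiss' kappa for three
# hypothetical inspectors rating the same 10 items pass/fail.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = items, columns = inspectors, entries = 0 (fail) or 1 (pass)
ratings = np.array([
    [1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0],
    [1, 1, 1], [0, 0, 0], [1, 0, 1], [1, 1, 1], [0, 0, 0],
])

# Overall agreement: fraction of items on which all inspectors agree
all_agree = np.mean([len(set(row)) == 1 for row in ratings])
print(f"Complete agreement on {all_agree:.0%} of items")

# Fleiss' kappa corrects the observed agreement for chance agreement
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```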
Breakout Building A Universal Helicopter Noise Model Using Machine Learning (Abstract)
Helicopters serve a number of useful roles within the community; however, community acceptance of helicopter operations is often limited by the resulting noise. Because the noise characteristics of helicopters depend strongly on the operating condition of the vehicle, effective noise abatement procedures can be developed for a particular helicopter type, but only when the noisy regions of the operating envelope are identified. NASA Langley Research Center—often in collaboration with other US Government agencies, industry, and academia—has conducted noise measurements for a wide variety of helicopter types, from light commercial helicopters to heavy military utility helicopters. While this database is expansive, it covers only a fraction of helicopter types in current commercial and military service and was measured under a limited set of ambient conditions and vehicle configurations. This talk will describe a new “universal” helicopter noise model suitable for planning helicopter noise abatement procedures. Modern machine learning techniques will be combined with the principle of nondimensionalization and applied to NASA’s helicopter noise data in order to develop a model capable of estimating the noisy operating states of any conventional helicopter under any specific ambient conditions and vehicle configurations. |
Eric Greenwood Aeroacoustics Branch |
Breakout | Materials | 2018 |
|
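A rough sketch of the general modeling idea, pairing nondimensional operating-state features with an off-the-shelf regressor; the features, data, and model choice are illustrative and not NASA's:

```python
# Learn a noise level from nondimensionalized operating state. Data, features,
# and model choice here are invented stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
advance_ratio = rng.uniform(0.05, 0.4, n)          # flight speed / tip speed
thrust_coeff = rng.uniform(0.004, 0.012, n)        # nondimensional thrust
flight_path_deg = rng.uniform(-12, 12, n)          # descent/climb angle
# Hypothetical "noise" response with interaction structure plus scatter
noise_db = (95 + 40 * advance_ratio * np.abs(flight_path_deg) / 12
            + 500 * thrust_coeff + rng.normal(0, 1.5, n))

X = np.column_stack([advance_ratio, thrust_coeff, flight_path_deg])
model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, noise_db, cv=5,
                         scoring="neg_root_mean_squared_error")
print(f"CV RMSE ~ {-scores.mean():.2f} dB")
```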
Breakout The Effect of Extremes in Small Sample Size on Simple Mixed Models: A Comparison of Level-1 and Level-2 Size (Abstract)
Mixed models are ideally suited to analyzing nested data from within-persons designs, designs that are advantageous in applied research. Mixed models have the advantage of enabling modeling of random effects, facilitating an accounting of the intra-person variation captured by multiple observations of the same participants and suggesting further lines of control to the researcher. However, the sampling requirements for mixed models are prohibitive to other areas which could greatly benefit from them. This simulation study examines the impact of small sample sizes in both levels of the model on the fixed effect bias, type I error, and power of a simple mixed model analysis. Despite the need for adjustments to control for type I error inflation, findings indicate that smaller samples than previously recognized can be used for mixed models under certain conditions prevalent in applied research. Examination of the marginal benefit of increases in sample subject and observation size provides applied researchers with guidance for developing mixed-model repeated measures designs that maximize power. |
Kristina Carter Research Staff Member IDA |
Breakout | Materials | 2018 |
|
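A toy version of the simulation setup described above: nested data with deliberately small level-1 and level-2 sizes, fit as a random-intercept model. Sizes and effect values are arbitrary:

```python
# Generate nested data with few subjects (level 2) and few observations per
# subject (level 1), then fit a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_subjects, n_obs = 8, 5            # deliberately small at both levels
rows = []
for subj in range(n_subjects):
    u = rng.normal(0, 1.0)          # random intercept for this subject
    for _ in range(n_obs):
        x = rng.normal()
        y = 2.0 + 0.5 * x + u + rng.normal(0, 1.0)   # fixed effect of x = 0.5
        rows.append({"subject": subj, "x": x, "y": y})
df = pd.DataFrame(rows)

model = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit(reml=True)
print(model.summary())
```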
Contributed Sound level recommendations for quiet sonic boom dose-response community surveys (Abstract)
The current ban on commercial overland supersonic flight may be replaced by a noise limit on sonic boom sound level. NASA is establishing a quiet sonic boom database to guide the new regulation. The database will consist of multiple community surveys used to model the dose-response relationship between sonic boom sound levels and human annoyance. There are multiple candidate dose-response modeling techniques, such as classical logistic regression and multilevel modeling. To plan for these community surveys, recommendations for data collection will be developed from pilot community test data. Two important aspects are selecting sample size and sound level range. Selection of sample size must be strategic as large sample sizes are costly whereas small sample sizes may result in more uncertainty in the estimates. Likewise, there are trade-offs associated with selection of the sound level range. If the sound level range includes excessively high sound levels, the public may misunderstand the potential impact of quiet sonic booms, resulting in a negative backlash against a promising technological advancement. Conversely, a narrow range that includes only low sound levels might exclude the eventual noise limit. This presentation will focus on recommendations for sound level range given the expected shape of the dose-response curve. |
Jasme Lee North Carolina State University |
Contributed | Materials | 2018 |
|
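A minimal dose-response sketch, assuming a simple logistic relationship between boom level and annoyance on simulated survey data (levels, sample size, and curve parameters are invented):

```python
# Probability of "highly annoyed" vs. sonic boom level via logistic regression
# on simulated survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
level_db = rng.uniform(65, 90, n)                   # boom sound level (notional)
true_prob = 1 / (1 + np.exp(-(level_db - 82) / 3))  # assumed true dose-response
annoyed = rng.binomial(1, true_prob)
df = pd.DataFrame({"level": level_db, "annoyed": annoyed})

fit = smf.logit("annoyed ~ level", data=df).fit(disp=False)
print(fit.params)
# Wider level ranges and larger samples narrow the confidence band on this curve
pred = fit.predict(pd.DataFrame({"level": [70, 75, 80, 85]}))
print(pred.round(2))
```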
Breakout SAGE III SEU Statistical Analysis Model (Abstract)
The Stratospheric Aerosol and Gas Experiment (SAGE III) aboard the International Space Station (ISS) was experiencing a series of anomalies called Single Event Upsets (SEUs). Booz Allen Hamilton was tasked with conducting a statistical analysis to model the incidence of SEUs in the SAGE III equipment aboard the ISS. The team identified factors correlated with SEU incidences, set up a model to track degradation of SAGE III, and showed current and past probabilities as a function of the space environment. The space environment of SAGE III was studied to identify possible causes of SEUs. The analysis revealed variables most correlated with the anomalies, including solar wind strength, solar and geomagnetic field behavior, and location/orientation of the ISS, sun, and moon. The data was gathered from a variety of sources including US government agencies, foreign and domestic academic centers, and state-of-the-art simulation algorithms and programs. Logistic regression was used to analyze SEUs and gain preliminary results. The data was divided into small time intervals to approximate independence and allow logistic regression. Due to the rarity of events, the initial model results were based on few SEUs. The team set up a Graphical User Interface (GUI) program to automatically analyze new data as it became available to the SAGE III team. A GUI was built to allow the addition of more data over the life of the SAGE III mission. As more SEU incidents occur and are entered into the model, its predictive power will grow significantly. The GUI enables the user to easily rerun the regression analysis and visualize its results to inform operational decision making. |
Ray McCollum Booz Allen Hamilton |
Breakout | Materials | 2018 |
|
Tutorial Robust Parameter Design (Abstract)
The Japanese industrial engineer Taguchi introduced the concept of robust parameter design in the 1950s. Since then, it has seen widespread, successful use in automotive and aerospace applications. Engineers have applied this methodology both to physical and computer experimentation. This tutorial provides a basic introduction to these concepts, with an emphasis on how robust parameter design provides a proper basis for the evaluation and confirmation of system performance. The goal is to show how to modify basic robust parameter designs to meet the specific needs of the weapons testing community. This tutorial targets systems engineers, analysts, and program managers who must evaluate and confirm complex system performance. The tutorial illustrates new ideas that are useful for the evaluation and the confirmation of the performance for such systems. What students will learn: • The basic concepts underlying robust parameter design • The importance of the statistical concept of interaction to robust parameter design • How statistical interaction is the key concept underlying much of the evaluation and confirmation of system performance, particularly of weapon systems |
Geoff Vining | Tutorial | Materials | 2018 |
|
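A compact crossed-array sketch of the robust parameter design idea: control factors crossed with noise factors, each control setting scored by the mean and spread of the response over the noise space. The factors and response function are hypothetical:

```python
# Crossed-array sketch in the spirit of robust parameter design.
import itertools
import numpy as np

control_levels = {"a": [-1, 1], "b": [-1, 1]}      # designable (control) factors
noise_levels = {"n1": [-1, 1], "n2": [-1, 1]}      # uncontrollable noise factors

def response(a, b, n1, n2):
    # Toy model with a control-by-noise interaction (b*n1): choosing b well
    # dampens the effect of the noise factor, which is the essence of RPD.
    return 10 + 2 * a + 1.5 * b + 2.0 * n1 - 1.8 * b * n1 + 0.5 * n2

for a, b in itertools.product(*control_levels.values()):
    ys = [response(a, b, n1, n2)
          for n1, n2 in itertools.product(*noise_levels.values())]
    print(f"a={a:+d} b={b:+d}  mean={np.mean(ys):6.2f}  sd={np.std(ys, ddof=1):5.2f}")
```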
Keynote Testing and Analytical Challenges on the Path to Hypersonic Flight |
Mark Lewis Director IDA-STPI (bio)
Dr. Mark J. Lewis is the Director of IDA’s Science and Technology Policy Institute, a federally funded research and development center. He leads an organization that provides analysis of national and international science and technology issues to the Office of Science and Technology Policy in the White House, as well as other Federal agencies including the National Institutes of Health, the National Science Foundation, NASA, the Department of Energy, Homeland Security, and the Federal Aviation Administration, among others. Prior to taking charge of STPI, Dr. Lewis served as the Willis Young, Jr. Professor and Chair of the Department of Aerospace Engineering at the University of Maryland. A faculty member at Maryland for 24 years, Dr. Lewis taught and conducted basic and applied research. From 2004 to 2008, Dr. Lewis was the Chief Scientist of the U.S. Air Force. From 2010 to 2011, he was President of the American Institute of Aeronautics and Astronautics (AIAA). Dr. Lewis attended the Massachusetts Institute of Technology, where he received a Bachelor of Science degree in aeronautics and astronautics, Bachelor of Science degree in earth and planetary science (1984), Master of Science (1985), and Doctor of Science (1988) in aeronautics and astronautics. Dr. Lewis is the author of more than 300 technical publications and has been an adviser to more than 70 graduate students. Dr. Lewis has also served on various advisory boards for NASA, the Air Force, and DoD, including two terms on the Air Force Scientific Advisory Board, the NASA Advisory Council, and the Aeronautics and Space Engineering Board of the National Academies. Dr. Lewis’s awards include the Meritorious Civilian Service Award and Exceptional Civilian Service Award; he was also recognized as the 1994 AIAA National Capital Young Scientist/Engineer of the Year, IECEC/AIAA Lifetime Achievement Award, and is an Aviation Week and Space Technology Laureate (2007). |
Keynote |
| 2018 |
|
Breakout Experimental Design of a Unique Force Measurement System Calibration (Abstract)
Aerodynamic databases for space flight vehicles rely on wind-tunnel tests utilizing precision force measurement systems (FMS). Recently, NASA’s Space Launch System (SLS) program has conducted numerous wind-tunnel tests. This presentation will focus on the calibration of a unique booster FMS through the use of design of experiments (DoE) and regression modeling. Utilization of DoE resulted in a sparse, time-efficient design with results exceeding researchers’ expectations. |
Ken Toro | Breakout | 2018 |
||
Contributed Leveraging Anomaly Detection for Aircraft System Health Data Stability Reporting (Abstract)
Detecting and diagnosing aircraft system health poses a unique challenge as system complexity increases and software is further integrated. Anomaly detection algorithms systematically highlight unusual patterns in large datasets and are a promising methodology for assessing aircraft system health. The F-35A fighter aircraft is driven by complex, integrated subsystems with both software and hardware components. The F-35A operational flight program is the software that manages each subsystem within the aircraft and the flow of required information and support between subsystems. This information and support are critical to the successful operation of many subsystems. For example, the radar system supplies information to the fusion engine, without which the fusion engine would fail. ACC operational testing can be thought of as equivalent to beta testing for operational flight programs. As in other software, many faults result in minimal loss of functionality and are often unnoticed by the user. However, there are times when a software fault might result in catastrophic functionality loss (i.e., subsystem shutdown). It is critical to catch software problems that will result in catastrophic functionality loss before the flight software is fielded to the combat air forces. Subsystem failures and degradations can be categorized and quantified using simple system health data codes (e.g., degrade, fail, healthy). However, because of the integrated nature of the F-35A, a subsystem degradation may be caused by another subsystem. The 59th Test and Evaluation Squadron collects autonomous system data, pilot questionnaires, and health report codes for F-35A subsystems. Originally, this information was analyzed using spreadsheet tools (i.e., Microsoft Excel). Using this method, analysts were unable to examine all subsystems or attribute cause for subsystem faults. The 59 TES is developing a new process that leverages anomaly detection algorithms to isolate flights with unusual patterns of subsystem failures and, within those flights, highlight which subsystem faults are correlated with increased subsystem failures. This presentation will compare the performance of several anomaly detection algorithms (e.g., K-means, K-nearest neighbors, support vector machines) using simulated F-35A data. |
Kyle Gartrell | Contributed | Materials | 2018 |
|
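A bare-bones example of flight-level anomaly screening with one of the algorithm families mentioned (a one-class SVM); the subsystem fault counts are simulated stand-ins, not F-35A data:

```python
# Flag flights whose pattern of subsystem fault counts looks unusual.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
# Each row is one flight; columns are fault counts for three notional subsystems.
normal_flights = rng.poisson(lam=[2, 1, 3], size=(200, 3))
odd_flights = rng.poisson(lam=[10, 1, 12], size=(5, 3))   # unusual fault pattern
X = np.vstack([normal_flights, odd_flights]).astype(float)

Xs = StandardScaler().fit_transform(X)
detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(Xs)
flags = detector.predict(Xs)                  # -1 = flagged as anomalous
print("Flagged flight indices:", np.where(flags == -1)[0])
```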
Breakout Machine Learning to Assess Pilots’ Cognitive State (Abstract)
The goal of the Crew State Monitoring (CSM) project is to use machine learning models trained with physiological data to predict unsafe cognitive states in pilots such as Channelized Attention (CA) and Startle/Surprise (SS). These models will be used in a real-time system that predicts a pilot’s mental state every second, a tool that can be used to help pilots recognize and recover from these mental states. Pilots wore different sensors that collected physiological data such as 20-channel electroencephalography (EEG), respiration, and galvanic skin response (GSR). Pilots performed non-flight benchmark tasks designed to induce these states, and a flight simulation with “surprising” or “channelizing” events. The team created a pipeline to generate pilot-dependent models that train on benchmark data, are tuned on a portion of a flight task, and are deployed on the remaining flight task. The model is a series of anomaly-detection based ensembles, where each ensemble focuses on predicting a single state. Ensembles were composed of several anomaly detectors such as One Class SVMs, each focusing on a different subset of sensor data. We will discuss the performance of these models, as well as ongoing research on generalizing models across pilots and improving accuracy. |
Tina Heinich Computer Engineer, OCIO Data Science Team AST, Data Systems |
Breakout | Materials | 2018 |
|
Breakout Evaluating Deterministic Models of Time Series by Comparison to Observations (Abstract)
A standard paradigm for assessing the quality of model simulations is to compare what these models produce to experimental or observational samples of what the models seek to predict. Often these comparisons are based on simple summary statistics, even when the objects of interest are time series. Here, we propose a method of evaluation through probabilities derived from tests of hypotheses that model-simulated and observed time sequences share common signals. The probabilities are based on the behavior of summary statistics of model output and observational data, over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise. We demonstrate with an example from climate model evaluation for which this methodology was developed. |
Amy Braverman Jet Propulsion Laboratory, California Institute of Technology |
Breakout | Materials | 2018 |
|
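A simplified sketch of the signal/noise partition and parametric bootstrap described above, with made-up observed and model-simulated series and a plain difference-in-means summary statistic:

```python
# Split the observed series into a smooth signal plus residual noise, generate
# pseudo-realizations of the noise parametrically, and locate the model-vs-
# observation discrepancy within the resulting null distribution.
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(240)
observed = 0.010 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)
simulated = 0.012 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

def moving_average(y, window=12):
    # crude estimate of the underlying signal
    return np.convolve(y, np.ones(window) / window, mode="same")

signal = moving_average(observed)
sigma = (observed - signal).std(ddof=1)

# Summary statistic: difference in means between simulated and observed series
stat = simulated.mean() - observed.mean()

# Under the null, both series are this common signal plus independent noise
boot = np.array([(signal + rng.normal(0, sigma, t.size)).mean()
                 - (signal + rng.normal(0, sigma, t.size)).mean()
                 for _ in range(2000)])
p_value = np.mean(np.abs(boot) >= abs(stat))
print(f"discrepancy = {stat:.3f}, bootstrap p ~ {p_value:.3f}")
```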
Contributed Development of a Locking Setback Mass for Cluster Munition Applications: A UQ Case Study (Abstract)
The Army is currently developing a cluster munition that is required to meet functional reliability requirements of 99%. This effort focuses on the design process for a setback lock within the safe and arm (S&A) device in the submunition fuze. This lock holds the arming rotor in place, thus preventing the fuze from beginning its arming sequence until the setback lock retracts during a launch event. Therefore, the setback lock is required not to arm (remain in place) during a drop event (safety) and to arm during a launch event (reliability). In order to meet these requirements, uncertainty quantification techniques were used to evaluate setback lock designs. We designed a simulation experiment, simulated the setback lock behavior in a drop event and in a launch event, fit a model to the results, and optimized the design for safety and reliability. Currently, 8 candidate designs that meet the requirements are being manufactured, and adaptive sensitivity testing is planned to inform the surrogate models and improve their predictive capability. A final optimized design will be chosen based on the improved models, and realistic drop safety and arm reliability predictions will be obtained using Monte Carlo simulations of the surrogate models. |
Melissa Jablonski U.S. ARMY ARMAMENT RESEARCH, DEVELOPMENT & ENGINEERING CENTER |
Contributed | Materials | 2018 |
|
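A sketch of the overall surrogate-based workflow (space-filling design, surrogate fit, Monte Carlo on the surrogate), with a stand-in "simulator", invented variable ranges, and an arbitrary margin requirement rather than the actual fuze models:

```python
# Space-filling design -> run the (stand-in) physics model -> fit a surrogate
# -> Monte Carlo the surrogate to estimate arming reliability.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    # Stand-in for the expensive drop/launch simulation: returns a lock margin.
    spring_k, mass = x[:, 0], x[:, 1]
    return 2.0 + 0.8 * mass - 1.2 * spring_k + 0.3 * spring_k * mass

# 1) Space-filling design over two notional design/uncertainty variables
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(30), [0.5, 0.8], [1.5, 1.2])
y_train = simulator(X_train)

# 2) Gaussian process surrogate of the margin
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# 3) Monte Carlo on the surrogate with assumed manufacturing variability
rng = np.random.default_rng(1)
X_mc = np.column_stack([rng.normal(1.0, 0.05, 100_000),
                        rng.normal(1.0, 0.03, 100_000)])
margin = gp.predict(X_mc)
print(f"Estimated arming reliability: {(margin > 1.8).mean():.4f}")  # 1.8 = arbitrary requirement
```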
Breakout Initial Validation of the Trust of Automated System Test (Abstract)
Automated systems are technologies that actively select data, transform information, make decisions, and control processes. The U.S. military uses automated systems to perform search and rescue and reconnaissance missions, and to assume control of aircraft to avoid ground collision. Facilitating appropriate trust in automated systems is essential to improving the safety and performance of human-system interactions. In two studies, we developed and validated an instrument to measure trust in automated systems. In study 1, we demonstrated that the scale has a 2-factor structure and demonstrates concurrent validity. We replicated these results using an independent sample in study 2. |
Heather Wojton Research Staff Member IDA |
Breakout | Materials | 2018 |
|
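A quick sketch of checking a two-factor structure on simulated Likert-style item responses; the items, loadings, and rotation choice are invented and not the published trust instrument:

```python
# Recover a two-factor structure from simulated item responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 300
trust = rng.normal(size=n)          # latent factor 1
distrust = rng.normal(size=n)       # latent factor 2
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],   # items 1-3 load on factor 1
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])  # items 4-6 on factor 2
items = np.column_stack([trust, distrust]) @ loadings.T + rng.normal(0, 0.4, (n, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))    # rows = items, columns = factors
```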
Tutorial Exploratory Data Analysis (Abstract)
After decades of seminal methodological research on the subject—accompanied by a myriad of applications—John Tukey formally created the statistical discipline known as EDA with the publication of his book “Exploratory Data Analysis” in 1977. The breadth and depth of this book was staggering, and its impact pervasive, running the gamut from today’s routine teaching of box plots in elementary schools, to the existent core philosophy of data exploration “in-and-for-itself” embedded in modern day statistics and AI/ML. As important as EDA was at its inception, it is even more essential now, with data sets increasing in both complexity and size. Given a science & engineering problem/question, and given an existing data set, we argue that the most important deliverable in the problem-solving process is data-driven insight; EDA visualization techniques lie at the core of extracting that insight. This talk has 3 parts: 1. Data Diamond: In light of the focus of DATAWorks to share essential methodologies for operational testing/evaluation, we first present a problem-solving framework (simple in form but rich in content) constructed and fine-tuned over 4 decades of scientific/engineering problem-solving: the data diamond. This data-centric structure has proved essential for systematically approaching a variety of research and operational problems, for determining if the data on hand has the capacity to answer the question at hand, and for identifying weaknesses in the total experimental effort that might compromise the rigor/correctness of derived solutions. 2. EDA Methods & Block Plot: We discuss those EDA graphical tools that have proved most important/insightful (for the presenter) in attacking the wide variety of physical/chemical/ biological/engineering/infotech problems existent in the NIST environment. Aside from some more commonly-known EDA tools in use, we discuss the virtues/applications of the block plot, which is a tool specifically designed for the “comparative” problem type–ascertaining as to whether the (yes/no) conclusion about the statistical significance of a single factor under study, is in fact robustly true over the variety of other factors (material/machine/method/operator/ environment, etc.) that co-exist in most systems. The testing of army bullet-proof vests is used as an example. 3. 10-Step DEX Sensitivity Analysis: Since the rigor/robustness of testing & evaluation conclusions are dictated not only by the choice of (post-data) analysis methodologies, but more importantly by the choice of (pre-data) experiment design methodologies, we demonstrate a recommended procedure for the important “sensitivity analysis” problem–determining what factors most affect the output of a multi-factor system. The deliverable is a ranked list (ordered by magnitude) of main effects (and interactions). Design-wise, we demonstrate the power and efficiency of orthogonal fractionated 2-level designs for this problem; analysis-wise, we present a structured 10-step graphical analysis which provides detailed data-driven insight into what “drives” the system, what optimal settings exist for the system, what prediction model exists for the system, and what direction future experiments should be to further optimize the system. The World Trade Center collapse analysis is used as an example. |
Jim Filliben | Tutorial | 2018 |
||
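A small example in the spirit of the tutorial's sensitivity-analysis segment: a 2^(4-1) fractional factorial (D = ABC) with main effects estimated and ranked by magnitude; the response function is a stand-in for a real system:

```python
# Build a 2^(4-1) fractional factorial, run a toy system, and rank main effects.
import itertools
import numpy as np

rng = np.random.default_rng(2)
full = np.array(list(itertools.product([-1, 1], repeat=3)))   # factors A, B, C
design = np.column_stack([full, full[:, 0] * full[:, 1] * full[:, 2]])  # D = ABC

def system(a, b, c, d):
    # toy system: A dominates, C matters, B and D are minor
    return 50 + 6 * a + 0.5 * b + 3 * c + 0.8 * d + rng.normal(0, 0.5)

y = np.array([system(*row) for row in design])

effects = {name: y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
           for j, name in enumerate("ABCD")}
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {eff:+.2f}")
```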
Keynote Journey to a Data Centric Approach for National Security |
Marcey Hoover Quality Assurance Director Sandia National Laboratories (bio)
As Quality Assurance Director, Dr. Marcey Hoover is responsible for designing and sustaining the Laboratories’ quality assurance system and the associated technical capabilities needed for flawless execution of safe, secure, and efficient work to deliver exceptional products and services to its customers. Marcey previously served as the Senior Manager responsible for developing the science and engineering underpinning efforts to predict and influence the behavior of complex, highly interacting systems critical to our nation’s security posture. In her role as Senior Manager and Chief of Operations for Sandia’s Energy and Climate program, Marcey was responsible for strategic planning, financial management, business development, and communications. In prior positions, she managed organizations responsible for (1) quality engineering on new product development programs, (2) research and development of advanced computational techniques in the engineering sciences, and (3) development and execution of nuclear weapon testing and evaluation programs. Marcey has also led several executive office functions, including corporate-level strategic planning. Active in both the American Statistical Association and the American Society for Quality (ASQ), Marcey served two terms as the elected ASQ Statistics Division Treasurer. She was recognized as the Outstanding Alumni of the Purdue University Statistics Department in 2009 and nominated in 2011 for the YWCA Middle Rio Grande Women on the Move award. She currently serves on both the Purdue Strategic Research Advisory Council and the Statistics Alumni Advisory Board, and as a mentor for Big Brothers Big Sisters of New Mexico. Marcey received her bachelor of science degree in mathematics from Michigan State University, and her master of science and doctor of philosophy degrees in mathematical statistics from Purdue University. |
Keynote |
| 2018 |
|
Breakout Application of Design of Experiments to a Calibration of the National Transonic Facility (Abstract)
Recent work at the National Transonic Facility (NTF) at the NASA Langley Research Center has shown that a substantial reduction in freestream pressure fluctuations can be achieved by positioning the moveable model support walls and plenum re-entry flaps to choke the flow just downstream of the test section. This choked condition reduces the upstream propagation of disturbances from the diffuser into the test section, resulting in improved Mach number control and reduced freestream variability. The choked conditions also affect the Mach number gradient and distribution in the test section, so a calibration experiment was undertaken to quantify the effects of the model support wall and re-entry flap movements on the facility freestream flow using a centerline static pipe. A design of experiments (DOE) approach was used to develop restricted-randomization experiments to determine the effects of total pressure, reference Mach number, model support wall angle, re-entry flap gap height, and test section longitudinal location on the centerline static pressure and local Mach number distributions for a reference Mach number range from 0.7 to 0.9. Tests were conducted using air as the test medium at a total temperature of 120 °F as well as for gaseous nitrogen at cryogenic total temperatures of -50, -150, and -250 °F. The resulting data were used to construct quadratic polynomial regression models for these factors using a Restricted Maximum Likelihood (REML) estimator approach. Independent validation data were acquired at off-design conditions to check the accuracy of the regression models. Additional experiments were designed and executed over the full Mach number range of the facility (0.2 ≤ Mref ≤ 1.1) at each of the four total temperature conditions, but with the model support walls and re-entry flaps set to their nominal positions, in order to provide calibration regression models for operational experiments where a choked condition downstream of the test section is either not feasible or not required. This presentation focuses on the design, execution, analysis, and results for the two experiments performed using air at a total temperature of 120 °F. Comparisons are made between the regression model output and validation data, as well as the legacy NTF calibration results, and future work is discussed. |
Matt Rhode NASA |
Breakout | Materials | 2018 |
|
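A schematic of the kind of restricted-randomization analysis the abstract mentions: a quadratic model with a random whole-plot (group) effect estimated by REML; the factors, groupings, and responses are simulated placeholders, not NTF data:

```python
# Quadratic response-surface terms plus a random whole-plot effect, fit by REML.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
rows = []
for group in range(10):                   # hard-to-change settings = whole plots
    wall = rng.uniform(-1, 1)             # wall-angle setting (coded), fixed per group
    g = rng.normal(0, 0.05)               # random whole-plot error
    for _ in range(6):
        mach = rng.uniform(-1, 1)         # reference Mach number (coded)
        y = (0.002 + 0.010 * mach + 0.004 * wall - 0.003 * mach * wall
             + 0.005 * mach**2 + g + rng.normal(0, 0.02))
        rows.append({"group": group, "mach": mach, "wall": wall, "dM": y})
df = pd.DataFrame(rows)

fit = smf.mixedlm("dM ~ mach + wall + mach:wall + I(mach**2) + I(wall**2)",
                  df, groups=df["group"]).fit(reml=True)
print(fit.summary())
```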
Contributed Application of Adaptive Sampling to Advance the Metamodeling and Uncertainty Quantification Process (Abstract)
Over the years the aerospace industry has continued to implement design of experiments and metamodeling (e.g., response surface methodology) in order to shift the knowledge curve forward in the systems design process. While the adoption of these methods is still incomplete across aerospace sub-disciplines, they comprise the state-of-the-art during systems design and for design evaluation using modeling and simulation or ground testing. In the context of modeling and simulation, as national high-performance computing infrastructure becomes more capable, the demands placed on those resources grow as well, in terms of simulation fidelity and number of researchers. Furthermore, with recent emphasis placed on the uncertainty quantification of aerospace system design performance, the number of simulation cases needed to properly characterize a system’s uncertainty across the entire design space increases by orders of magnitude, further stressing available resources. This leads to advanced development groups either sticking to ad hoc estimates of uncertainty (e.g., subject matter expert estimates based on experience) or neglecting uncertainty quantification altogether. Advancing the state-of-the-art of aerospace systems design and evaluation requires a practical adaptive sampling scheme that responds to the characteristics of the underlying design or uncertainty space. For example, when refining a system metamodel gradually, points should be chosen for design variable combinations that are located in high curvature regions or where metamodel uncertainty is the greatest. The latter method can be implemented by defining a functional form of the metamodel variance and using it to define the next best point to sample. For schemes that require n points to be sampled simultaneously, considerations can be made to ensure proper sample dispersion. The implementation of adaptive sampling schemes in the design and evaluation process will enable similar fidelity with fewer samples of the design space compared to fixed or ad hoc sampling methods (i.e., shorter time or human resources required). Alternatively, the uncertainty of the design space can be reduced to a greater extent for the same number of samples or with fewer samples using higher fidelity simulations. The purpose of this presentation will be to examine the benefits of adaptive sampling as applied to challenging design problems. Emphasis will be placed on methods that are accessible to engineering practitioners. |
Erik Axdahl Hypersonic Airbreathing Propulsion Branch NASA Langley Research Center |
Contributed | Materials | 2018 |
|
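A minimal adaptive-sampling loop of the type described: refit a Gaussian process surrogate and add the candidate point with the largest predictive standard deviation; the underlying function is a placeholder:

```python
# Sequentially sample where the surrogate is most uncertain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    return np.sin(3 * x) + 0.3 * x**2          # stand-in for a costly simulation

candidates = np.linspace(0, 3, 301).reshape(-1, 1)
X = np.array([[0.0], [1.5], [3.0]])            # small initial design
y = expensive_model(X).ravel()

for i in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]        # most uncertain location
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_model(x_next[0]))
    print(f"iter {i}: sampled x = {x_next[0]:.2f}, max sd = {std.max():.3f}")
```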
Breakout CYBER Penetration Testing and Statistical Analysis in DT&E (Abstract)
Reconnaissance, footprinting, and enumeration are critical steps in the CYBER penetration testing process because if these steps are not fully and extensively executed, the information available for finding a system’s vulnerabilities may be limited. During the CYBER testing process, penetration testers often find themselves doing the same initial enumeration scans over and over for each system under test. Because of this, automated scripts have been developed that take these mundane and repetitive manual steps and perform them automatically with little user input. Once automation is present in the penetration testing process, Scientific Test and Analysis Techniques (STAT) can be incorporated. By combining automation and STAT in the CYBER penetration testing process, Mr. Tim McLean at Marine Corps Tactical Systems Support Activity (MCTSSA) coined a new term called CYBERSTAT. CYBERSTAT is applying scientific test and analysis techniques to offensive CYBER penetration testing tools with an important realization that CYBERSTAT assumes the system under test is the offensive penetration test tool itself. By applying combinatorial testing techniques to the CYBER tool, the CYBER tool’s scope is expanded beyond “one at a time” uses as the combinations of the CYBER tool’s capabilities and options are explored and executed as test cases against the target system. In CYBERSTAT, the additional test cases produced by STAT can be run automatically using scripts. This talk will show how MCTSSA is preparing to use CYBERSTAT in the Developmental Test and Evaluation process of USMC Command and Control systems. |
Timothy McLean | Breakout | Materials | 2018 |
|
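One way to see the combinatorial-testing idea behind CYBERSTAT: greedily select a small set of tool-option combinations that covers every pairwise interaction, instead of varying one option at a time. The option names and levels below are invented:

```python
# Greedy pairwise (2-way) covering suite over hypothetical tool options.
import itertools

options = {
    "scan_type": ["syn", "connect", "udp"],
    "timing": ["slow", "normal", "aggressive"],
    "port_range": ["top100", "top1000", "all"],
    "os_detect": ["on", "off"],
}
names = list(options)
all_cases = list(itertools.product(*options.values()))

def covered_by(case):
    # all (factor pair, level pair) combinations that this case exercises
    return {(i, j, case[i], case[j])
            for i, j in itertools.combinations(range(len(names)), 2)}

# Every pairwise combination that must appear in at least one test case
needed = set().union(*(covered_by(c) for c in all_cases))

suite = []
while needed:
    best = max(all_cases, key=lambda c: len(covered_by(c) & needed))
    suite.append(best)
    needed -= covered_by(best)

print(f"{len(suite)} pairwise test cases instead of {len(all_cases)} full factorial")
for case in suite:
    print(dict(zip(names, case)))
```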
Breakout Challenger Challenge: Pass-Fail Thinking Increases Risk Measurably (Abstract)
Binomial (pass-fail) response metrics are far more commonly used in test, requirements, quality and engineering than they need to be. In fact, there is even an engineering school of thought that they’re superior to continuous-variable metrics. This is a serious, even dangerous problem in aerospace and other industries: think of the Space Shuttle Challenger accident. There are better ways. This talk will cover some examples of methods available to engineers and statisticians in common statistical software. It will not dig far into the mathematics of the methods, but will walk through where each method might be most useful and some of the pitfalls inherent in their use – including potential sources of misinterpretation and suspicion by your teammates and customers. The talk is geared toward engineers, managers and professionals in the –ilities who run into frustrations dealing with pass-fail data and thinking. |
Ken Johnson Applied Statistician NASA Engineering and Safety Center |
Breakout | Materials | 2018 |
|
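A small numerical illustration of the talk's point: with the same number of trials, a continuous margin measurement (plus a distributional assumption) bounds the failure probability far more tightly than a pass/fail count. The numbers and the normality assumption are invented:

```python
# Compare pass/fail and continuous-margin views of the same 30 trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
true_mean, true_sd, limit = 5.0, 1.5, 0.0      # failure if margin < limit
margins = rng.normal(true_mean, true_sd, n)

# Pass/fail view: likely 0 failures in 30 trials, so only a loose upper bound
failures = int((margins < limit).sum())
upper_pf = stats.beta.ppf(0.95, failures + 1, n - failures)   # one-sided 95% bound
print(f"pass/fail: {failures} failures; 95% upper bound on p(fail) ~ {upper_pf:.3f}")

# Continuous view: estimate p(fail) from the fitted margin distribution
mu, sd = margins.mean(), margins.std(ddof=1)
p_hat = stats.norm.cdf(limit, mu, sd)
print(f"continuous: estimated p(fail) ~ {p_hat:.2e}")
```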
Short Course Modern Response Surface Methods & Computer Experiments (Abstract)
This course details statistical techniques at the interface between mathematical modeling via computer simulation, computer model meta-modeling (i.e., emulation/surrogate modeling), calibration of computer models to data from field experiments, and model-based sequential design and optimization under uncertainty (a.k.a. Bayesian Optimization). The treatment will include some of the historical methodology in the literature, and canonical examples, but will primarily concentrate on modern statistical methods, computation and implementation, as well as modern application/data type and size. The course will return at several junctures to real-world experiments coming from the physical and engineering sciences, such as studying the aeronautical dynamics of a rocket booster re-entering the atmosphere; modeling the drag on satellites in orbit; designing a hydrological remediation scheme for water sources threatened by underground contaminants; studying the formation of supernovae via radiative shock hydrodynamics. The course material will emphasize deriving and implementing methods over proving theoretical properties. |
Robert Gramacy Virginia Polytechnic Institute and State University |
Short Course | Materials | 2018 |
|
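A toy version of two course themes, Gaussian process emulation and sequential design via expected improvement; the objective function is a placeholder, not one of the course case studies:

```python
# Emulate an "expensive" function with a GP and pick runs by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return (x - 0.6) ** 2 + 0.1 * np.sin(12 * x)   # pretend this is expensive

X = np.array([[0.05], [0.5], [0.95]])
y = objective(X).ravel()
grid = np.linspace(0, 1, 501).reshape(-1, 1)

for it in range(8):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))
    print(f"iter {it}: x = {x_next[0]:.3f}, f = {y[-1]:.4f}")
```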
Breakout Operational Evaluation of a Flight-deck Software Application (Abstract)
Traffic Aware Strategic Aircrew Requests (TASAR) is a NASA-developed operational concept for flight efficiency and route optimization for the near-term airline flight deck. TASAR provides the aircrew with a cockpit automation tool that leverages a growing number of information sources on the flight deck to make fuel- and time-saving route optimization recommendations while en route. In partnership with a commercial airline, research prototype software that implements TASAR has been installed on three aircraft to enable the evaluation of this software in operational use. During the flight trials, data are being collected to quantify operational performance, which will enable NASA to improve algorithms and enhance functionality in the software based on real-world user experience. This presentation highlights statistical challenges and discusses lessons learned during the initial stages of the operational evaluation. |
Sara Wilson NASA |
Breakout | Materials | 2018 |
|
Tutorial Quality Control and Statistical Process Control (Abstract)
The need to draw causal inference about factors not under the researchers’ control calls for a specialized set of techniques developed for observational studies. The persuasiveness and adequacy of such an analysis depends in part on the ability to recover metrics from the data that would approximate those of an experiment. This tutorial will provide a brief overview of the common problems encountered with lack of randomization, as well as suggested approaches for rigorous analysis of observational studies. |
Jane Pinelis Research Staff Member IDA |
Tutorial |
| 2018 |
|
Keynote NASA AERONAUTICS |
Bob Pearce Deputy Associate Administrator for Strategy, Aeronautics Research Mission Directorate, NASA (bio)
Mr. Pearce is responsible for leading aeronautics research mission strategic planning to guide the conduct of the agency’s aeronautics research and technology programs, as well as leading ARMD portfolio planning and assessments, mission directorate budget development and approval processes, and review and evaluation of all of NASA’s aeronautics research mission programs for strategic progress and relevance. Pearce is also currently acting director for ARMD’s Airspace Operations and Safety Program, and responsible for the overall planning, management and evaluation of foundational air traffic management and operational safety research. Previously he was director for strategy, architecture and analysis for ARMD, responsible for establishing a strategic systems analysis capability focused on understanding the system-level impacts of NASA’s programs, the potential for integrated solutions, and the development of high-leverage options for new investment and partnership. From 2003 until July 2010, Pearce was the deputy director of the FAA-led Next Generation Air Transportation System (NextGen) Joint Planning and Development Office (JPDO). The JPDO was an interagency office tasked with developing and facilitating the implementation of a national plan to transform the air transportation system to meet the long-term transportation needs of the nation. Prior to the JPDO, Pearce held various strategic and program management positions within NASA. In the mid-1990s he led the development of key national policy documents including the National Science and Technology Council’s “Goals for a National Partnership in Aeronautics Research and Technology” and the “Transportation Science and Technology Strategy.” These two documents provided a substantial basis for NASA’s expanded investment in aviation safety and airspace systems. He began his career as a design engineer at the Grumman Corporation, working on such projects as the Navy’s F-14 Tomcat fighter and DARPA’s X-29 Forward Swept Wing Demonstrator. Pearce also has experience from the Department of Transportation’s Volpe National Transportation Systems Center where he made contributions in the area of advanced concepts for intercity transportation systems. Pearce has received NASA’s Exceptional Service Medal for sustained excellence in planning and advocating innovative aeronautics programs in conjunction with the White House and other federal agencies. He received NASA’s Exceptional Achievement Medal for outstanding leadership of the JPDO in support of the transformation of the nation’s air transportation system. Pearce has also received NASA’s Cooperative External Achievement Award and several Exceptional Performance and Group Achievement Awards. He earned a bachelor of science degree in mechanical and aerospace engineering from Syracuse University, and a master of science degree in technology and policy from the Massachusetts Institute of Technology. |
Keynote | Materials | 2018 |