Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Short Course Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics. |
Ralph Smith North Carolina State University |
Short Course | Materials | 2019 |
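The propagation step described in this abstract, sampling calibrated input distributions and pushing them through a model to obtain a prediction interval, can be illustrated with a minimal Monte Carlo sketch. The model `f`, the input distributions, and the 95% interval below are the editor's assumptions for illustration, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):
    """Stand-in simulation model; a real code might take hours to run."""
    return x1 * np.exp(-0.5 * x2) + 0.1 * x1 * x2

# Hypothetical calibrated distributions for two model inputs.
x1 = rng.normal(loc=2.0, scale=0.2, size=10_000)
x2 = rng.uniform(low=0.5, high=1.5, size=10_000)

# Propagate the input uncertainty through the model.
y = f(x1, x2)

# 95% prediction interval for the quantity of interest.
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"mean = {y.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```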
Short Course Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics. |
Ralph Smith North Carolina State University |
Short Course | Materials | 2018 |
Tutorial Tutorial: Statistics Boot Camp (Abstract)
In the test community, we frequently use statistics to extract meaning from data. These inferences may be drawn with respect to topics ranging from system performance to human factors. In this mini-tutorial, we will begin by discussing the use of descriptive and inferential statistics, before exploring the basics of interval estimation and hypothesis testing. We will introduce common statistical techniques and when to apply them, and conclude with a brief discussion of how to present your statistical findings graphically for maximum impact. |
Kelly Avery IDA |
Tutorial |
| 2019 |
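As a concrete companion to the interval-estimation and hypothesis-testing topics listed in this abstract, here is a minimal sketch on synthetic data; the sample sizes, means, and 95% confidence level are illustrative assumptions, not material from the tutorial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, size=30)   # e.g., baseline system scores
group_b = rng.normal(11.0, 2.0, size=30)   # e.g., upgraded system scores

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# 95% confidence interval for the mean of group_b.
ci = stats.t.interval(0.95, df=len(group_b) - 1,
                      loc=group_b.mean(), scale=stats.sem(group_b))

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, 95% CI for mean B = {ci}")
```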
Tutorial Tutorial: Reproducible Research (Abstract)
Analyses are “reproducible” if the same methods applied to the same data produce identical results when run again by another researcher (or you in the future). Reproducible analyses are transparent and easy for reviewers to verify, as results and figures can be traced directly to the data and methods that produced them. There are also direct benefits to the researcher. Real-world analysis workflows inevitably require changes to incorporate new or additional data, or to address feedback from collaborators, reviewers, or sponsors. These changes are easier to make when reproducible research best practices have been considered from the start. Poor reproducibility habits result in analyses that are difficult or impossible to review, are prone to compounded mistakes, and are inefficient to re-run in the future. They can lead to duplication of effort or even loss of accumulated knowledge when a researcher leaves your organization. With larger and more complex datasets, along with more complex analysis techniques, reproducibility is more important than ever. Although reproducibility is critical, it is often not prioritized either due to a lack of time or an incomplete understanding of end-to-end opportunities to improve reproducibility. This tutorial will discuss the benefits of reproducible research and will demonstrate ways that analysts can introduce reproducible research practices during each phase of the analysis workflow: preparing for an analysis, performing the analysis, and presenting results. A motivating example will be carried throughout to demonstrate specific techniques, useful tools, and other tips and tricks where appropriate. The discussion of specific techniques and tools is non-exhaustive; we focus on things that are accessible and immediately useful for someone new to reproducible research. The methods will focus mainly on work performed using R, but the general concepts underlying reproducible research techniques can be implemented in other analysis environments, such as JMP and Excel, and are briefly discussed. By implementing the approaches and concepts discussed during this tutorial, analysts in defense and aerospace will be equipped to produce more credible and defensible analyses of T&E data. |
Andrew Flack, Kevin Kirshenbaum, and John Haman IDA |
Tutorial |
| 2019 |
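One of the practices this tutorial advocates, scripting the entire path from raw data to reported tables and figures, can be sketched in a few lines. The file names, seed, and plot below are the editor's illustrative assumptions; the tutorial itself works mainly in R.

```python
# analysis.py -- a single script regenerates every output from the raw data.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")          # render to files, not a screen, for batch re-runs
import matplotlib.pyplot as plt

SEED = 20190401                # fixed seed so any stochastic step repeats exactly
rng = np.random.default_rng(SEED)

df = pd.read_csv("data/raw_scores.csv")      # hypothetical raw data file
summary = df.groupby("system")["score"].agg(["mean", "std", "count"])
summary.to_csv("output/summary_table.csv")   # results traceable to this script

summary["mean"].plot(kind="bar", yerr=summary["std"])
plt.ylabel("score")
plt.tight_layout()
plt.savefig("output/score_by_system.png")
```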
Tutorial Tutorial: Learning Python and Julia (Abstract)
In recent years, the programming language Python with its supporting ecosystem has established itself as a significant capability to support the activities of the typical data scientist. Recently, version 1.0 of the programming language Julia has been released; from a software engineering perspective, it can be viewed as a modern alternative. This tutorial presents both Python and Julia from both a user and a developer point of view. From a user’s point of view, the basic syntax of each, along with fundamental prerequisite knowledge, is presented. From a developer’s point of view, the underlying infrastructure of the programming language / interpreter / compiler is discussed. |
Douglas Hodson Associate Professor Air Force Institute of Technology |
Tutorial | 2019 |
|
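For readers unfamiliar with the languages, a few lines of Python illustrate the kind of basic syntax the tutorial covers (functions, comprehensions, and dictionaries); the example itself is the editor's, not the presenter's.

```python
def mach_to_kts(mach, speed_of_sound_kts=661.5):
    """Convert Mach number to knots at standard sea-level conditions."""
    return mach * speed_of_sound_kts

# List comprehension: apply the function over a range of test points.
test_points = [0.6, 0.8, 0.95, 1.2]
speeds = [mach_to_kts(m) for m in test_points]

# Dictionary: pair each condition with its computed speed.
table = dict(zip(test_points, speeds))
for mach, kts in table.items():
    print(f"Mach {mach:4.2f} -> {kts:7.1f} kts")
```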
Tutorial Tutorial: Developing Valid and Reliable Scales (Abstract)
The DoD uses psychological measurement to aid in decision-making about a variety of issues including the mental health of military personnel before and after combat, and the quality of human-systems interactions. To develop quality survey instruments (scales) and interpret the data obtained from these instruments appropriately, analysts and decision-makers must understand the factors that affect the reliability and validity of psychological measurement. This tutorial covers the basics of scale development and validation and discusses current efforts by IDA, DOT&E, ATEC, and JITC to develop validated scales for use in operational test and evaluation. |
Heather Wojton & Shane Hall IDA / USARMY ATEC |
Tutorial |
| 2019 |
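A standard reliability index for scales of the kind discussed above is Cronbach's alpha; the sketch below computes it for a hypothetical 5-item survey. The data and item count are the editor's assumptions, not material from the tutorial.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 respondents x 5 Likert items (1-5).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 3, 4, 4],
    [3, 2, 3, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```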
Tutorial Tutorial: Cyber Attack Resilient Weapon Systems (Abstract)
This tutorial is an abbreviated version of a 36-hour short course recently provided by UVA to a class composed of engineers working at the Defense Intelligence Agency. The tutorial provides a definition for cyber attack resilience that is an extension of earlier definitions of system resilience that were not focused on cyber attacks. Based upon research results derived by the University of Virginia over an eight-year period through DoD/Army/AF/Industry funding, the tutorial will illuminate the following topics: 1) A Resilience Design Requirements methodology and the need for supporting analysis tools, 2) a System Architecture approach for achieving resilience, 3) Example resilience design patterns and example prototype implementations, 4) Experimental results regarding resilience-related roles and readiness of system operators, and 5) Test and Evaluation Issues. The tutorial will be presented by UVA Munster Professor Barry Horowitz. |
Barry Horowitz Professor, Systems Engineering University of Virginia |
Tutorial |
| 2019 |
Tutorial Tutorial: Combinatorial Methods for Testing and Analysis of Critical Software and Security Systems (Abstract)
Combinatorial methods have attracted attention as a means of providing strong assurance at reduced cost, but when are these methods practical and cost-effective? This tutorial includes two sections on the basis and application of combinatorial test methods: The first section explains the background, process, and tools available for combinatorial testing, with illustrations from industry experience with the method. The focus is on practical applications, including an industrial example of testing to meet FAA-required standards for life-critical software for commercial aviation. Other example applications include modeling and simulation, mobile devices, network configuration, and testing for a NASA spacecraft. The discussion will also include examples of measured resource and cost reduction in case studies from a variety of application domains. The second part explains combinatorial testing-based techniques for effective security testing of software components and large-scale software systems. It will develop quality assurance and effective re-verification for security testing of web applications and testing of operating systems. It will further address how combinatorial testing can be applied to ensure proper error-handling of network security protocols and provide the theoretical guarantees for detecting Trojans injected in cryptographic hardware. Procedures and techniques, as well as workarounds, will be presented and captured as guidelines for a broader audience. |
Rick Kuhn, Dimitris Simos, and Raghu Kacker National Institute of Standards & Technology |
Tutorial |
| 2019 |
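The reduction that pairwise (2-way) combinatorial testing buys can be seen in a small greedy sketch: it covers every pair of parameter values with far fewer tests than the full cross product. The parameters and the greedy construction below are the editor's illustrative assumptions, not the NIST tools discussed in the tutorial.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily build a 2-way covering test suite for small parameter spaces."""
    names = list(parameters)
    uncovered = {(f1, v1, f2, v2)
                 for f1, f2 in combinations(names, 2)
                 for v1 in parameters[f1] for v2 in parameters[f2]}
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]
    suite = []
    while uncovered:
        # Pick the candidate test that covers the most still-uncovered pairs.
        gain = lambda t: sum((f1, t[f1], f2, t[f2]) in uncovered
                             for f1, f2 in combinations(names, 2))
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break
        suite.append(best)
        for f1, f2 in combinations(names, 2):
            uncovered.discard((f1, best[f1], f2, best[f2]))
    return suite

params = {"os": ["linux", "windows"], "browser": ["chrome", "firefox", "edge"],
          "protocol": ["ipv4", "ipv6"], "encryption": ["on", "off"]}
tests = pairwise_suite(params)
print(f"{len(tests)} pairwise tests vs {2 * 3 * 2 * 2} exhaustive combinations")
```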
Keynote Tuesday Keynote |
David Chu President Institute for Defense Analyses (bio)
David Chu serves as President of the Institute for Defense Analyses. IDA is a non-profit corporation operating in the public interest. Its three federally funded research and development centers provide objective analyses of national security issues and related national challenges, particularly those requiring extraordinary scientific and technical expertise. As president, Dr. Chu directs the activities of more than 1,000 scientists and technologists. Together, they conduct and support research requested by federal agencies involved in advancing national security and advising on science and technology issues. Dr. Chu served in the Department of Defense as Under Secretary of Defense for Personnel and Readiness from 2001-2009, and earlier as Assistant Secretary of Defense and Director for Program Analysis and Evaluation from 1981-1993. From 1978-1981 he was the Assistant Director of the Congressional Budget Office for National Security and International Affairs. Dr. Chu served in the U.S. Army from 1968-1970. He was an economist with the RAND Corporation from 1970-1978, director of RAND’s Washington Office from 1994-1998, and vice president for its Army Research Division from 1998-2001. He earned a bachelor of arts in economics and mathematics, and his doctorate in economics, from Yale University. Dr. Chu is a member of the Defense Science Board and a Fellow of the National Academy of Public Administration. He is a recipient of the Department of Defense Medal for Distinguished Public Service with Gold Palm, the Department of Veterans Affairs Meritorious Service Award, the Department of the Army Distinguished Civilian Service Award, the Department of the Navy Distinguished Public Service Award, and the National Academy of Public Administration’s National Public Service Award. |
Keynote | 2019 |
|
Breakout Trust in Automation (Abstract)
This brief talk will focus on the process of human-machine trust in the context of automated intelligence tools. The trust process is multifaceted, and this talk will define concepts such as trust, trustworthiness, and trust behavior, and will examine how these constructs might be operationalized in user studies. The talk will walk through various aspects of what might make an automated intelligence tool more or less trustworthy. Further, the construct of transparency will be discussed as a mechanism to foster shared awareness and shared intent between humans and machines. |
Joseph Lyons Technical Advisor Air Force Research Laboratory |
Breakout | Materials | 2017 |
Breakout Toward Real-Time Decision Making in Experimental Settings (Abstract)
Materials scientists, computer scientists and statisticians at LANL have teamed up to investigate how to make near real time decisions during fast-paced experiments. For instance, a materials scientist at a beamline typically has a short window in which to perform a number of experiments, after which they analyze the experimental data, determine interesting new experiments and repeat. In typical circumstances, that cycle could take a year. The goal of this research and development project is to accelerate that cycle so that interesting leads are followed during the short window for experiments, rather than in years to come. We detail some of our UQ work in materials science, including emulation, sensitivity analysis, and solving inverse problems, with an eye toward real-time decision making in experimental settings. |
Devin Francom | Breakout | 2019 |
|
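The emulation step mentioned in the abstract above, replacing an expensive experiment or simulation with a fast statistical surrogate that can guide the next run, can be sketched with a Gaussian process; the toy function, kernel, and training points below are the editor's assumptions rather than the LANL workflow.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def slow_experiment(x):
    """Stand-in for an expensive measurement or simulation."""
    return np.sin(3 * x) + 0.1 * x**2

# A handful of completed experiments.
X_train = np.linspace(0, 3, 8).reshape(-1, 1)
y_train = slow_experiment(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
gp.fit(X_train, y_train)

# Fast predictions with uncertainty suggest which experiment to run next.
X_new = np.linspace(0, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
next_x = X_new[np.argmax(std)]            # most uncertain condition
print(f"suggested next experiment near x = {next_x[0]:.2f}")
```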
Breakout Time Machine Learning: Getting Navy Maintenance Duration Right (Abstract)
In support of the Navy’s effort to obtain improved outcomes through data-driven decision making, the Center for Naval Analyses’ Data Science Program (CNA/DSP) supports the Performance-to-Plan (P2P) forum, which is co-chaired by the Vice Chief of Naval Operations and the Assistant Secretary of the Navy (RD&A). The P2P forum provides senior Navy leadership forward-looking performance forecasts, which are foundational to articulating Navy progress toward readiness and capability goals. While providing analytical support for this forum, CNA/DSP leveraged machine learning techniques, including Random Forests and Artificial Neural Networks, to develop improved estimates of future maintenance durations for the Navy. When maintenance durations exceed their estimated timelines, these delays can affect training, manning, and deployments in support of operational commanders. Currently, the Navy creates maintenance estimates during numerous timeframes, including the program objective memorandum (POM) process, the Presidential Budget (PB), and at contract award, leading to evolving estimates over time. The limited historical accuracy of these estimates, especially the POM and PB estimates, has persisted over the last decade. These errors have led to a gap between planned funding and actual costs in addition to changes in the assets available for operational commanders each year. The CNA/DSP prediction model reduces the average error in forecasted maintenance duration days from 128 days to 31 days for POM estimates. Improvements in duration accuracy for the PB and contract award time frames were also achieved using similar ML processes. The data curation for these models involved numerous data sources of varying quality and required significant feature engineering to provide usable model inputs that could allow for forecasts over the Future Years Defense Program (FYDP) in order to support improved resource allocation and scheduling in support of the optimized fleet response training plan (OFRTP). |
Tim Kao | Breakout |
| 2019 |
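As a sketch of the modeling approach described in this abstract, the snippet below fits a random forest to synthetic maintenance-availability records and reports the mean absolute error in days. The feature names and data are invented for illustration and do not reflect CNA's actual model or results.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "ship_age_years": rng.uniform(2, 35, n),
    "planned_work_items": rng.integers(50, 400, n),
    "growth_work_pct": rng.uniform(0, 40, n),
    "shipyard_backlog": rng.uniform(0, 1, n),
})
# Synthetic "true" duration in days, with noise.
duration = (60 + 0.8 * df["planned_work_items"] + 2.5 * df["growth_work_pct"]
            + 90 * df["shipyard_backlog"] + rng.normal(0, 20, n))

X_train, X_test, y_train, y_test = train_test_split(df, duration, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print(f"MAE = {mean_absolute_error(y_test, model.predict(X_test)):.1f} days")
```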
Keynote Thursday Lunchtime Keynote Speaker |
T. Charles Clancy Bradley Professor of Electrical and Computer Engineering Virginia Tech (bio)
Charles Clancy is the Bradley Professor of Electrical and Computer Engineering at Virginia Tech where he serves as the Executive Director of the Hume Center for National Security and Technology. Clancy leads a range of strategic programs at Virginia Tech related to security, including the Commonwealth Cyber Initiative. Prior to joining VT in 2010, Clancy was an engineering leader in the National Security Agency, leading research programs in digital communications and signal processing. He received his PhD from the University of Maryland, MS from University of Illinois, and BS from the Rose-Hulman Institute of Technology. He is co-author of over 200 peer-reviewed academic publications, six books, and over twenty patents, and co-founder of five venture-backed startup companies. |
Keynote |
| 2019 |
Keynote Thursday Keynote Speaker II |
Michael Little Program Manager, Advanced Information Systems Technology Earth Science Technology Office, NASA Headquarters (bio)
Over the past 45 years, Mike’s primary focus has been on the management of research and development, focusing on making the results more useful in meeting the needs of the user community. Since 1984, he has specialized in communications, data and processing systems, including projects in NASA, the US Air Force, the FAA and the Census Bureau. Before that, he worked on Major System Acquisition Programs in the Department of Defense, including Marine Corps combat vehicles and US Navy submarines. Currently, Mike manages a comprehensive program to provide NASA’s Earth Science research efforts with the information technologies it will need in the 2020-2035 time-frame to characterize, model and understand the Earth. This Program addresses the full range of the data lifecycle, from generating data using instruments and models, through the management of the data, and including the ways in which information technology can help to exploit the data. Of particular interest today are the ways in which NASA can measure and understand transient and transitional phenomena and the impact of climate change. The AIST Program focuses on the application of applied math and statistics, artificial intelligence, case-based reasoning, machine learning and automation to improve our ability to use observational data and model output in understanding Earth’s physical processes and natural phenomena. Training and odd skills: application of cloud computing; US Government computer security; US Navy nuclear propulsion operations and maintenance on two submarines. |
Keynote |
| 2019 |
Keynote Thursday Keynote Speaker I |
Wendy Martinez Director, Mathematical Statistics Research Center, Bureau of Labor Statistics ASA President-Elect (2020) (bio)
Wendy Martinez has been serving as the Director of the Mathematical Statistics Research Center at the Bureau of Labor Statistics (BLS) for six years. Prior to this, she served in several research positions throughout the Department of Defense. She held the position of Science and Technology Program Officer at the Office of Naval Research, where she established a research portfolio comprised of academia and industry performers developing data science products for the future Navy and Marine Corps. Her areas of interest include computational statistics, exploratory data analysis, and text data mining. She is the lead author of three books on MATLAB and statistics. Dr. Martinez was elected as a Fellow of the American Statistical Association (ASA) in 2006 and is an elected member of the International Statistical Institute. She was honored by the American Statistical Association when she received the ASA Founders Award at the JSM 2017 conference. Wendy is also proud and grateful to have been elected as the 2020 ASA President. |
Keynote | Materials | 2019 |
Breakout Three Case Studies Comparing Traditional versus Modern Test Designs (Abstract)
There are many testing situations that historically involve a large number of runs. The use of experimental design methods can reduce the number of runs required to obtain the information desired. Example applications include wind tunnel test campaigns, computational experiments, and live fire tests. In this work we present three case studies conducted under the auspices of the Science of Test Research Consortium comparing the information obtained via a historical experimental approach with the information obtained via an experimental design approach. The first case study involves a large-scale wind tunnel experimental campaign. The second involves a computational fluid dynamics model of a missile through various speeds and angles of attack. The third case involves ongoing live fire testing involving hot surface testing. In each case, results suggest a tremendous opportunity to reduce experimental test efforts without losing test information. |
Ray Hill Air Force Institute of Technology |
Breakout | Materials | 2016 |
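The run-count savings this abstract refers to can be illustrated by comparing a full-factorial grid with a space-filling design over the same factors; the factor names, ranges, and design sizes below are the editor's assumptions, not the consortium's test plans.

```python
import numpy as np
from itertools import product
from scipy.stats import qmc

# Hypothetical wind tunnel factors: Mach, angle of attack (deg), total pressure (psf).
l_bounds, u_bounds = [0.6, -4.0, 500.0], [1.4, 12.0, 2000.0]

# Traditional approach: a full grid, e.g. 8 x 9 x 6 = 432 runs.
grid = list(product(np.linspace(0.6, 1.4, 8),
                    np.linspace(-4, 12, 9),
                    np.linspace(500, 2000, 6)))

# Designed experiment: a 40-run Latin hypercube covering the same space.
lhs = qmc.LatinHypercube(d=3, seed=0).random(n=40)
design = qmc.scale(lhs, l_bounds, u_bounds)

print(f"full grid: {len(grid)} runs, space-filling design: {len(design)} runs")
```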
Breakout The Use of DOE vs OFAT in the Calibration of AEDC Wind Tunnels (Abstract)
The use of statistically rigorous methods to support testing at Arnold Engineering Development Complex (AEDC) has been an area of focus in recent years. As part of this effort, the use of Design of Experiments (DOE) has been introduced for calibration of AEDC wind tunnels. Historical calibration efforts used One-Factor-at-a-Time (OFAT) test matrices, with a concentration on conditions of interest to test customers. With the introduction of DOE, the number of test points collected during the calibration decreased, and the points were not necessarily located at historical calibration points. To validate the use of DOE for calibration purposes, the 4-ft Aerodynamic Wind Tunnel 4T was calibrated using both DOE and OFAT methods. The results from the OFAT calibration were compared to the model developed from the DOE data points, and it was determined that the DOE model sufficiently captured the tunnel behavior within the desired levels of uncertainty. DOE analysis also showed that within Tunnel 4T, systematic errors are insignificant, as indicated by the agreement noted between the two methods. Based on the results of this calibration, a decision was made to apply DOE methods to future tunnel calibrations, as appropriate. The development of the DOE matrix in Tunnel 4T required the consideration of operational limitations, measurement uncertainties, and differing tunnel behavior over the performance map. Traditional OFAT methods allowed tunnel operators to set conditions efficiently while minimizing time-consuming plant configuration changes. DOE methods, however, require the use of randomization, which had the potential to add significant operation time to the calibration. Additionally, certain tunnel parameters, such as variable porosity, are only of interest in a specific region of the performance map. In addition to operational concerns, measurement uncertainty was an important consideration for the DOE matrix. At low tunnel total pressures, the uncertainty in the Mach number measurements increases significantly. Aside from introducing non-constant variance into the calibration model, the large uncertainties at low pressures can increase overall uncertainty in the calibration in high-pressure regions where the uncertainty would otherwise be lower. At high pressures and transonic Mach numbers, low Mach number uncertainties are required to meet drag count uncertainty requirements. To satisfy both the operational and calibration requirements, the DOE matrix was divided into multiple independent models over the tunnel performance map. Following the Tunnel 4T calibration, AEDC calibrated the Propulsion Wind Tunnel 16T, Hypersonic Wind Tunnels B and C, and the National Full-Scale Aerodynamics Complex (NFAC). DOE techniques were successfully applied to the calibration of Tunnel B and NFAC, while a combination of DOE and OFAT test methods was used in Tunnel 16T because of operational and uncertainty requirements over a portion of the performance map. Tunnel C was calibrated using OFAT because of operational constraints. The cost of calibrating these tunnels has not been significantly reduced through the use of DOE, but the characterization of test condition uncertainties is firmly based in statistical methods. |
Rebecca Rought AEDC/TSTA |
Breakout | Materials | 2018 |
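The calibration model this abstract refers to is, at its core, a regression of the measured condition on tunnel settings. A minimal response-surface sketch is shown below; the quadratic model form, synthetic data, and factor names are the editor's assumptions and do not represent the actual AEDC calibration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
set_mach = rng.uniform(0.2, 1.6, n)          # commanded Mach number
total_p = rng.uniform(500, 3500, n)          # total pressure, psf

# Synthetic "measured" test-section Mach with a small pressure effect and noise.
meas_mach = (0.01 + 0.99 * set_mach + 1e-5 * (total_p - 2000)
             + 0.02 * set_mach**2 + rng.normal(0, 0.003, n))

# Quadratic response-surface model fit by least squares.
X = np.column_stack([np.ones(n), set_mach, total_p, set_mach**2,
                     total_p**2, set_mach * total_p])
coef, *_ = np.linalg.lstsq(X, meas_mach, rcond=None)
resid = meas_mach - X @ coef
print(f"residual std = {resid.std(ddof=X.shape[1]):.4f} Mach")
```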
Breakout The System Usability Scale: A Measurement Instrument Should Suit the Measurement Needs (Abstract)
The System Usability Scale (SUS) was developed by John Brooke in 1986 “to take a quick measurement of how people perceived the usability of (office) computer systems on which they were working.” The SUS is a 10-item, generic usability scale that is assumed to be system agnostic, and it results in a numerical score that ranges from 0-100. It has been widely employed and researched with non-military systems. More recently, it has been strongly recommended for use with military systems in operational test and evaluation, in part because of its widespread commercial use, but largely because it produces a numerical score that makes it amenable to statistical operations. Recent lessons learned with the SUS in operational test and evaluation strongly question its use with military systems, most of which differ radically from non-military systems. More specifically, (1) usability measurement attributes need to be tailored to the specific system under test and meet the information needs of system users, and (2) a SUS numerical cutoff score of 70—a common benchmark with non-military systems—does not accurately reflect “system usability” from an operator or test team perspective. These findings will be discussed in a psychological and human factors measurement context, and an example of system-specific usability attributes will be provided as a viable way forward. In the event that the SUS is used in operational test and evaluation, some recommendations for interpreting the outcomes will be provided. |
Keith Kidder AFOTEC |
Breakout | Materials | 2017 |
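For reference, the 0-100 SUS score discussed above is computed from the ten 1-5 Likert items by rescaling odd items as (response - 1) and even items as (5 - response), summing, and multiplying by 2.5. A short sketch, with an invented response vector:

```python
def sus_score(responses):
    """SUS score (0-100) from ten 1-5 Likert responses, item 1 first."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd vs even item scoring
    return total * 2.5

# Hypothetical operator's responses to the ten SUS items.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))   # -> 77.5
```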
Breakout The Sixth Sense: Clarity through Statistical Engineering (Abstract)
Two responses to an expensive, time-consuming test on a final product will be referred to as “adhesion” and “strength”. A screening test was performed on compounds that comprise the final product. These screening tests are multivariate profile measurements. Previous models to predict the expensive, time-consuming test lacked accuracy and precision. Data visualization was used to guide a statistical engineering model that makes use of multiple statistical techniques. The modeling approach raised some interesting statistical questions for partial least squares models regarding over-fitting and cross validation. Ultimately, the model interpretation and the visualization both make engineering sense and led to interesting insights regarding the product development program and screening compounds. |
Jennifer Van-Mullekom DuPont |
Breakout | 2016 |
|
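The over-fitting and cross-validation questions raised in the abstract above typically come down to choosing the number of PLS components; a hedged sketch with synthetic profile data is shown below (the data, component grid, and scoring choice are the editor's assumptions, not the case study's).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n, p = 80, 200                      # 80 compounds, 200-point screening profiles
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, n)   # synthetic "adhesion"

# Cross-validate over the number of latent components to guard against over-fitting.
for k in (1, 2, 5, 10, 20):
    score = cross_val_score(PLSRegression(n_components=k), X, y,
                            cv=5, scoring="neg_mean_squared_error").mean()
    print(f"{k:2d} components: CV MSE = {-score:.3f}")
```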
Webinar The Science of Trust of Autonomous Unmanned Systems (Abstract)
The world today is witnessing a significant investment in autonomy and artificial intelligence that most certainly will result in ever-increasing capabilities of unmanned systems. Driverless vehicles are a great example of systems that can make decisions and perform very complex actions. The reality, though, is that while it is well understood what these systems are doing, it is not well understood at all how the intelligence engines generate decisions to accomplish those actions. Therein lies the underlying challenge of accomplishing formal test and evaluation of these systems and, relatedly, of how to engender trust in their performance. This presentation will outline and define the problem space, discuss those challenges, and offer solution constructs. |
Reed Young Program Manager for Robotics and Autonomy Johns Hopkins University Applied Physics Laboratory |
Webinar |
Recording | 2020 |
Webinar The Role of Uncertainty Quantification in Machine Learning (Abstract)
Uncertainty is an inherent, yet often under-appreciated, component of machine learning and statistical modeling. Data-driven modeling often begins with noisy data from error-prone sensors collected under conditions for which no ground-truth can be ascertained. Analysis then continues with modeling techniques that rely on a myriad of design decisions and tunable parameters. The resulting models often provide demonstrably good performance, yet they illustrate just one of many plausible representations of the data – each of which may make somewhat different predictions on new data. This talk provides an overview of recent, application-driven research at Sandia Labs that considers methods for (1) estimating the uncertainty in the predictions made by machine learning and statistical models, and (2) using the uncertainty information to improve both the model and downstream decision making. We begin by clarifying the data-driven uncertainty estimation task and identifying sources of uncertainty in machine learning. We then present results from applications in both supervised and unsupervised settings. Finally, we conclude with a summary of lessons learned and critical directions for future work. |
David Stracuzzi Research Scientist Sandia National Laboratories |
Webinar |
| 2020 |
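One simple way to obtain the prediction uncertainty this abstract describes is to examine the spread across an ensemble's members; the sketch below does so for a random forest on synthetic data. The data and the spread-as-uncertainty interpretation are illustrative assumptions, not Sandia's methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 300)      # noisy "sensor" data

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Per-tree predictions give a rough spread around the ensemble mean.
X_new = np.array([[0.0], [2.5], [4.0]])            # note: 4.0 lies outside the training range
per_tree = np.stack([tree.predict(X_new) for tree in forest.estimators_])
mean, spread = per_tree.mean(axis=0), per_tree.std(axis=0)
for x, m, s in zip(X_new.ravel(), mean, spread):
    print(f"x = {x:4.1f}: prediction = {m:+.2f} +/- {s:.2f}")
```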
Roundtable The Role of the Statistics Profession in the DoD’s Current AI Initiative (Abstract)
In 2019, the DoD unveiled comprehensive strategies related to Artificial Intelligence, Digital Modernization, and Enterprise Data Analytics. Recognizing that data science and analytics are fundamental to these strategies, in October 2020 the DoD issued a comprehensive Data Strategy for national security and defense. For over a hundred years, the statistical sciences have played a pivotal role in our national defense, from quality assurance and reliability analysis of munitions fielded in WWII, to operational analyses defining battlefield force structure and tactics, to helping optimize the engineering design of complex products, to rigorous testing and evaluation of Warfighter systems. The American Statistical Association (ASA) in 2015 recognized in its statement on The Role of Statistics in Data Science that “statistics is foundational to data science… and its use in this emerging field empowers researchers to extract knowledge and obtain better results from Big Data and other analytics projects.” It is clearly recognized that data as information is a key asset to the DoD. The challenge we face is how to transform existing talent to add value where it counts. |
Laura Freeman Research Associate Professor of Statistics and Director of the Intelligent Systems Lab Virginia Tech (bio)
Dr. Laura Freeman is a Research Associate Professor of Statistics and the Director of the Intelligent Systems Lab at the Virginia Tech Hume Center. Her research leverages experimental methods for conducting research that brings together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. She develops new methods for test and evaluation focusing on emerging system technology. She is also the Assistant Dean for Research in the National Capital Region; in that capacity she works to shape research directions and collaborations across the College of Science in the National Capital Region. Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She reviewed test strategies, plans, and reports for all systems on DOT&E oversight. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on design and analysis of experiments for reliability data. |
Roundtable | 2021 |
|
Webinar The Role of Statistical Engineering in Creating Solutions for Complex Opportunities (Abstract)
Statistical engineering is the art and science of addressing complex organizational opportunities with data. The span of statistical engineering ranges from the “problems that keep CEOs awake at night” to the analysts dealing with the results of the experimentation necessary for the success of their most current project. This talk introduces statistical engineering and its full spectrum of approaches to complex opportunities with data. The purpose of this talk is to set the stage for the two specific case studies that follow it. Too often, people lose sight of the big picture of statistical engineering by too narrow a focus on the specific case studies. Too many people walk away thinking “This is what I have been doing for years. It is simply good applied statistics.” These people fail to see what we can learn from each other through the sharing of our experiences to teach other people how to create solutions more efficiently and effectively. It is this big picture that is the focus of this talk. |
Geoff Vining Professor Virginia Tech (bio)
Geoff Vining is a Professor of Statistics at Virginia Tech, where from 1999 – 2006, he also was the department head. He holds an Honorary Doctor of Technology from Luleå University of Technology. He is an Honorary Member of the ASQ (the highest lifetime achievement award in the field of Quality), an Academician of the International Academy for Quality, a Fellow of the American Statistical Association (ASA), and an Elected Member of the International Statistical Institute. He is the Founding and Current Past-Chair of the International Statistical Engineering Association (ISEA). He is a founding member of the US DoD Science of Test Research Consortium. Dr. Vining won the 2010 Shewhart Medal, the ASQ career award given to the person who has demonstrated the most outstanding technical leadership in the field of modern quality control. He also received the 2015 Box Medal from the European Network for Business and Industrial Statistics (ENBIS). This medal recognizes a statistician who has remarkably contributed to the development and the application of statistical methods in European business and industry. In 2013, he received an Engineering Excellence Award from the NASA Engineering and Safety Center. He received the 2011 William G. Hunter Award from the ASQ Statistics Division for excellence in statistics as a communicator, consultant, educator, innovator, and integrator of statistics with other disciplines and an implementer who obtains meaningful results. Dr. Vining is the author of three textbooks. He is an internationally recognized expert in the use of experimental design for quality, productivity, and reliability improvement and in the application of statistical process control. He has extensive consulting experience, most recently with the U.S. Department of Defense through the Science of Test Research Consortium and with NASA. |
Webinar |
Recording | 2020 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Sarah Burke STAT Expert STAT Center of Excellence (bio)
Dr. Sarah Burke is a scientific test and analysis techniques (STAT) Expert for the STAT Center of Excellence. She works with acquisition programs in the Air Force, Army, and Navy to improve test efficiency, plan tests effectively, and analyze the resulting test data to inform decisions on system development. She received her M.S. in Statistics and Ph.D. in Industrial Engineering from Arizona State University. |
Panel |
Recording | 2021 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
John Haman RSM Institute for Defense Analyses (bio)
Dr. John Haman is a statistician at the Institute for Defense Analyses, where he develops methods and tools for analyzing test data. He has worked with a variety of Army, Navy, and Air Force systems, including counter-UAS and electronic warfare systems. Currently, John is supporting the Joint Artificial Intelligence Center. |
Panel |
Recording | 2021 |