Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process in which important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data such as failure date, corrective action, and part number. The dashboard is an RShiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundleR, and topicmodels. It allows the end user to filter to the EFRs they care about and to visualize metadata, such as the geographic region where a failure occurred, over time, revealing previously unknown trends. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and examine time- and region-based trends in these common equipment failures. |
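The dashboard's text mining is implemented in R (tm, topicmodels); as a deliberately crude Python stand-in for theme extraction from free-form failure narratives, here is a term-frequency ranking. The report texts and stopword list are invented for illustration.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "and", "in", "was", "on", "for"}

def top_terms(reports, n=3):
    """Rank the most frequent non-stopword terms across a set of
    free-form failure narratives -- a crude stand-in for topic modeling."""
    counts = Counter()
    for text in reports:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

# Invented narratives, purely illustrative
reports = [
    "Hydraulic pump failed after seal leak",
    "Seal leak caused pump shutdown",
    "Bearing wear led to pump vibration",
]
themes = top_terms(reports)  # "pump" appears in every narrative
```

A real deployment would use proper topic modeling (e.g., LDA, as the topicmodels package provides) rather than raw frequency, but the filtering-then-summarizing flow is the same.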
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
| 2021 |
Breakout Technical Leadership Panel – Tuesday Afternoon |
Paul Roberts Chief Engineer Engineering and Safety Center |
Breakout | 2016 |
Contributed Infrastructure Lifetimes (Abstract)
Infrastructure refers to the structures, utilities, and interconnected roadways that support the work carried out at a given facility. In the case of the Lawrence Livermore National Laboratory (LLNL), infrastructure is considered exclusive of scientific apparatus and safety and security systems. LLNL inherited its infrastructure management policy from the University of California, which managed the site during LLNL’s first five decades. This policy is quite different from that used in commercial property management. Commercial practice weighs reliability over cost by replacing infrastructure at industry-standard lifetimes; LLNL practice weighs overall lifecycle cost, seeking to mitigate reliability issues through inspection. To formalize this risk management policy, a careful statistical study was undertaken using 20 years of infrastructure replacement data. In this study, care was taken to adjust for left truncation as well as right censoring. Fifty-seven distinct infrastructure-class data sets were fitted to the generalized gamma distribution using maximum likelihood estimation. This distribution is useful because it produces a weighted blending of discrete failure (Weibull model) and complex-system failure (lognormal model). These parametric fits then yielded median lifetimes and conditional probabilities of failure. From the conditional probabilities, bounds on budget costs could be computed as expected values. This has provided a scientific basis for rational budget management and has aided operations by prioritizing inspection, repair, and replacement activities. |
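The study fitted the generalized gamma with truncation and censoring adjustments; as a much-simplified sketch, here is maximum likelihood for the Weibull special case on complete (uncensored) data, with the implied median lifetime. The synthetic lifetimes are invented, not LLNL data.

```python
import math

def weibull_mle(x, tol=1e-9):
    """MLE for the two-parameter Weibull (a special case of the
    generalized gamma) on complete, uncensored lifetimes."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)

    def g(k):  # profile-likelihood score equation for the shape k
        xk = [v ** k for v in x]
        return (sum(w * math.log(v) for w, v in zip(xk, x)) / sum(xk)
                - 1.0 / k - mean_log)

    lo, hi = 1e-3, 100.0
    while hi - lo > tol:            # bisection: g is increasing in k
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)  # scale
    median = lam * math.log(2.0) ** (1.0 / k)
    return k, lam, median

# Synthetic lifetimes on a Weibull(shape=2, scale=10) quantile grid
lifetimes = [10.0 * (-math.log(1 - (i + 0.5) / 20)) ** 0.5 for i in range(20)]
shape, scale, median = weibull_mle(lifetimes)
```

A production fit would use the full three-parameter generalized gamma likelihood with censored and truncated contributions, which has no closed-form score equation like the one above.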
William Romine Lawrence Livermore National Laboratory |
Contributed | Materials | 2018 |
Contributed Workforce Analytics (Abstract)
Several statistical methods have been used effectively to model workforce behavior, specifically attrition due to retirement and voluntary separation[1]. Additionally, various authors have introduced career development[2] as a meaningful aspect of workforce planning. While both general and more specific attrition modeling techniques yield useful results, only limited success has followed attempts to quantify career stage transition probabilities. A complete workforce model would include quantifiable flows both vertically and horizontally in the network described pictorially, at a single time point, in Figure 1. The horizontal labels in Figure 1 convey one possible meaning assignable to career stage transition – in this case, competency. More formal examples might include rank within a hierarchy, such as in a military organization, or grade in a civil service workforce. In the case of the nuclear weapons labs, knowing that the specialized, classified knowledge needed to deal with Stockpile Stewardship is being preserved – as evidenced by the production of Masters, individuals capable of independent technical work – is also of interest to governmental oversight. In this paper we examine the allocation of labor involved in a specific Life Extension Program at LLNL. This growing workforce is described by discipline and career stage to determine how well the Norden-Rayleigh development cost model[3] fits the data. Since this model underlies much budget estimation within both DOD and NNSA, the results should be of general interest. The data are also examined as a possible basis for quantifying the horizontal flows in Figure 1. |
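The Norden-Rayleigh model referenced above has a simple closed form: cumulative effort E(t) = K * (1 - exp(-a*t^2)), staffing rate dE/dt = 2*a*K*t * exp(-a*t^2), which peaks at t* = 1/sqrt(2a). A minimal sketch with illustrative (not LLNL) parameter values:

```python
import math

def nr_cumulative(t, K, a):
    """Cumulative effort expended by time t under Norden-Rayleigh."""
    return K * (1.0 - math.exp(-a * t * t))

def nr_staffing(t, K, a):
    """Instantaneous staffing level (dE/dt)."""
    return 2.0 * a * K * t * math.exp(-a * t * t)

K, a = 120.0, 0.08                   # invented total effort and shape
t_peak = 1.0 / math.sqrt(2.0 * a)    # staffing peaks at this time
```

Fitting the model to real program data amounts to estimating K and a (e.g., by least squares on observed cumulative effort), after which the staffing profile and peak are implied.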
William Romine Lawrence Livermore National Laboratory |
Contributed | 2018 |
Breakout The Use of DOE vs OFAT in the Calibration of AEDC Wind Tunnels (Abstract)
The use of statistically rigorous methods to support testing at Arnold Engineering Development Complex (AEDC) has been an area of focus in recent years. As part of this effort, the use of Design of Experiments (DOE) has been introduced for calibration of AEDC wind tunnels. Historical calibration efforts used One-Factor-at-a-Time (OFAT) test matrices, with a concentration on conditions of interest to test customers. With the introduction of DOE, the number of test points collected during the calibration decreased, and the points were not necessarily located at historical calibration conditions. To validate the use of DOE for calibration purposes, the 4-ft Aerodynamic Wind Tunnel 4T was calibrated using both DOE and OFAT methods. The results from the OFAT calibration were compared to a model developed from the DOE data points, and it was determined that the DOE model sufficiently captured the tunnel behavior within the desired levels of uncertainty. DOE analysis also showed that within Tunnel 4T, systematic errors are insignificant, as indicated by the agreement between the two methods. Based on the results of this calibration, a decision was made to apply DOE methods to future tunnel calibrations, as appropriate. The development of the DOE matrix in Tunnel 4T required consideration of operational limitations, measurement uncertainties, and differing tunnel behavior over the performance map. Traditional OFAT methods allowed tunnel operators to set conditions efficiently while minimizing time-consuming plant configuration changes. DOE methods, however, require randomization, which had the potential to add significant operating time to the calibration. Additionally, certain tunnel parameters, such as variable porosity, are only of interest in a specific region of the performance map. In addition to operational concerns, measurement uncertainty was an important consideration for the DOE matrix.
At low tunnel total pressures, the uncertainty in the Mach number measurement increases significantly. Aside from introducing non-constant variance into the calibration model, the large uncertainties at low pressures can increase the overall calibration uncertainty in high-pressure regions where it would otherwise be lower. At high pressures and transonic Mach numbers, low Mach number uncertainties are required to meet drag count uncertainty requirements. To satisfy both the operational and calibration requirements, the DOE matrix was divided into multiple independent models over the tunnel performance map. Following the Tunnel 4T calibration, AEDC calibrated the Propulsion Wind Tunnel 16T, Hypersonic Wind Tunnels B and C, and the National Full-Scale Aerodynamics Complex (NFAC). DOE techniques were successfully applied to the calibration of Tunnel B and NFAC, while a combination of DOE and OFAT test methods was used in Tunnel 16T because of operational and uncertainty requirements over a portion of the performance map. Tunnel C was calibrated using OFAT because of operational constraints. The cost of calibrating these tunnels has not been significantly reduced through the use of DOE, but the characterization of test condition uncertainties is now firmly based in statistical methods. |
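The randomized run order at the heart of the DOE approach can be sketched generically; the factor names and levels below are invented, not the Tunnel 4T matrix.

```python
import itertools
import random

def randomized_factorial(levels, seed=0):
    """Full-factorial test matrix in randomized run order.
    `levels` maps factor name -> list of settings."""
    names = list(levels)
    runs = [dict(zip(names, combo))
            for combo in itertools.product(*(levels[n] for n in names))]
    random.Random(seed).shuffle(runs)  # randomization guards against drift
    return runs

# Hypothetical calibration factors, not the actual AEDC settings
matrix = randomized_factorial({
    "mach": [0.6, 0.8, 1.0, 1.2],
    "total_pressure_psf": [800, 1400, 2000],
})
```

In practice, operational constraints like the ones described above lead to split-plot or blocked randomization rather than a fully randomized order.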
Rebecca Rought AEDC/TSTA |
Breakout | Materials | 2018 |
Breakout High Velocity Analytics for NASA JPL Mars Rover Experimental Design (Abstract)
Rigorous characterization of system capabilities is essential for defensible decisions in test and evaluation (T&E). Analysis of designed experiments is not usually associated with “big” data analytics, as there are typically a modest number of runs, factors, and responses. The Mars Rover program has recently conducted several disciplined DOEs on prototype coring drill performance, with approximately 10 factors along with scores of responses and hundreds of recorded covariates. The goal is to characterize the ‘at-this-time’ capability to confirm what the scientists and engineers already know about the system, answer specific performance and quality questions across multiple environments, and inform future tests to optimize performance. A ‘rigorous’ characterization required not just one analytical path but a combination of interactive data visualization, classic DOE screening methods, and newer methods from predictive analytics such as decision trees. With hundreds of response surface models across many test series and qualitative factors, the methods used had to efficiently find the signals hidden in the noise. Participants will be guided through an end-to-end analysis workflow with actual data from many tests (often Definitive Screening Designs) of the Rover prototype coring drill. We will show data assembly, data cleaning (e.g., missing values and outliers), data exploration with interactive graphics, variable screening, response partitioning, data tabulation, model building with stepwise and other methods, and model diagnostics. Software packages such as R and JMP will be used. |
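One of the simplest screening steps in such a workflow is estimating main effects from a two-level design. A generic sketch with synthetic data (not Rover data): each effect is the difference in mean response between a factor's high and low settings.

```python
def main_effects(design, y):
    """Estimate main effects from a two-level (-1/+1) design matrix:
    effect_j = mean(y | x_j = +1) - mean(y | x_j = -1)."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [yi for row, yi in zip(design, y) if row[j] == +1]
        lo = [yi for row, yi in zip(design, y) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# 2^2 full factorial with an active first factor (synthetic response)
design = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
y = [10.1, 14.0, 9.9, 14.2]   # factor 1 shifts the mean by about 4
effects = main_effects(design, y)
```

Definitive Screening Designs use three levels and need model-based analysis, but the "rank effects, keep the big ones" screening logic is the same.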
Heath Rushing Co-founder/Principal Adsurgo (bio)
Heath Rushing is the cofounder of Adsurgo and author of the book Design and Analysis of Experiments by Douglas Montgomery: A Supplement for using JMP. Previously, he was the JMP Training Manager at SAS, a quality engineer at Amgen, an assistant professor at the Air Force Academy, and a scientific analyst for OT&E in the Air Force. In addition, over the last six years, he has taught Science of Tests (SOT) courses to T&E organizations throughout the DoD. |
Breakout | Materials | 2016 |
Breakout Resampling Methods (Abstract)
Resampling Methods: This tutorial presents widely used resampling methods, including bootstrapping, cross-validation, and permutation tests. Underlying theories will be presented briefly, but the primary focus will be on applications. A new graph-theoretic approach to change detection will be discussed as a specific application of permutation testing. Examples will be demonstrated in R; participants are encouraged to bring their own portable computers to follow along using datasets provided by the instructor. |
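Two of the tutorial's methods in miniature (the tutorial itself uses R; this Python sketch is illustrative only): a percentile bootstrap confidence interval for a mean, and a two-sample permutation test.

```python
import random

def bootstrap_ci(x, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(x) for _ in x) / len(x) for _ in range(n_boot))
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def permutation_test(x, y, n_perm=2000, seed=1):
    """Two-sample permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # relabel under the null
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction

ci = bootstrap_ci(list(range(1, 11)))
p = permutation_test([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])
```

The permutation test is exact in principle (enumerate all relabelings); the Monte Carlo version above is the practical form for all but tiny samples.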
David Ruth United States Naval Academy |
Breakout | Materials | 2017 |
Breakout Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks (Abstract)
Collaborative autonomous sensor networks have recently been used in many applications including inspection, law enforcement, search and rescue, and national security. They offer scalable, low-cost solutions that are robust to the loss of multiple sensors in hostile or dangerous environments. While often comprised of less capable sensors, the performance of a large network can approach the performance of far more capable and expensive platforms if nodes effectively coordinate their sensing actions and data processing. This talk will summarize work to date at LLNL on distributed signal processing and decentralized optimization algorithms for collaborative autonomous sensor networks, focusing on ADMM-based solutions for detection/estimation problems and sequential greedy optimization solutions which maximize submodular functions, e.g., mutual information. |
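Sequential greedy optimization of a monotone submodular objective can be sketched generically. The sensor-to-cell coverage map below is invented, with coverage standing in for an information-gain objective like mutual information; for such objectives the greedy pick is within (1 - 1/e) of optimal.

```python
def greedy_submodular(candidates, coverage, budget):
    """Greedy maximization of a monotone submodular set function.
    `coverage` maps each candidate sensor to the set of cells it observes;
    each step adds the sensor with the largest marginal gain."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(coverage[c] - covered))
        if not coverage[best] - covered:
            break                     # no marginal gain left
        chosen.append(best)
        covered |= coverage[best]
        candidates = [c for c in candidates if c != best]
    return chosen, covered

# Hypothetical sensor placements and the grid cells each would observe
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
chosen, covered = greedy_submodular(list(coverage), coverage, budget=2)
```

In a decentralized setting, the same greedy step can be executed sequentially across nodes, each conditioning on its predecessors' choices.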
Ryan Goldhahn | Breakout | 2019 |
Tutorial Pseudo-Exhaustive Testing – Part 1 (Abstract)
Exhaustive testing is infeasible when testing complex engineered systems. Fortunately, a combinatorial testing approach can be almost as effective as exhaustive testing but at dramatically lower cost. The effectiveness of this approach is due to the mathematical construct on which it is based, known as a covering array. This tutorial is divided into two sections. Section 1 introduces covering arrays and a few covering array metrics, then shows how covering arrays are used in combinatorial testing methodologies. Section 2 focuses on practical applications of combinatorial testing, including a commercial aviation example, an example that focuses on a widely used machine learning library, plus other examples that illustrate how common testing challenges can be addressed. In the process of working through these examples, an easy-to-use tool for generating covering arrays will be demonstrated. |
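A covering array can be built greedily; the sketch below is a generic strength-2 (pairwise) construction, not the tool demonstrated in the tutorial. Every pair of factor values must appear together in at least one row, yet the array is far smaller than the full factorial.

```python
import itertools
import random

def pairwise_covering_array(levels, n_candidates=50, seed=0):
    """Greedy construction of a strength-2 (pairwise) covering array.
    `levels` lists the number of levels for each factor."""
    rng = random.Random(seed)
    k = len(levels)
    # every (factor pair, value pair) that must appear in some row
    uncovered = {(i, j, a, b)
                 for i, j in itertools.combinations(range(k), 2)
                 for a in range(levels[i]) for b in range(levels[j])}
    rows = []
    while uncovered:
        best_row, best_gain = None, -1
        for _ in range(n_candidates):      # sample rows, keep the best
            row = tuple(rng.randrange(levels[i]) for i in range(k))
            gain = sum(1 for (i, j, a, b) in uncovered
                       if row[i] == a and row[j] == b)
            if gain > best_gain:
                best_row, best_gain = row, gain
        rows.append(best_row)
        uncovered = {(i, j, a, b) for (i, j, a, b) in uncovered
                     if not (best_row[i] == a and best_row[j] == b)}
    return rows

rows = pairwise_covering_array([2, 2, 2, 2])  # 4 binary factors
```

For four binary factors, exhaustive testing needs 16 runs; a pairwise covering array needs as few as 5, and the gap widens rapidly with more factors.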
Ryan Lekivetz Research Statistician Developer SAS Institute (bio)
Ryan Lekivetz is a Principal Research Statistician Developer for the JMP Division of SAS where he implements features for the Design of Experiments platforms in JMP software. |
Tutorial |
Recording | 2021 |
Short Course Data Farming (Abstract)
This tutorial is designed for newcomers to simulation-based experiments. Data farming is the process of using computational experiments to “grow” data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. The focus of the tutorial will be on gaining practical experience with setting up and running simulation experiments, leveraging recent advances in large-scale simulation experimentation pioneered by the Simulation Experiments & Efficient Designs (SEED) Center for Data Farming at the Naval Postgraduate School (http://harvest.nps.edu). Participants will be introduced to fundamental concepts, and jointly explore simulation models in an interactive setting. Demonstrations and written materials will supplement guided, hands-on activities through the setup, design, data collection, and analysis phases of an experiment-driven simulation study. |
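Data farming relies on space-filling designs; the SEED Center is known for nearly orthogonal Latin hypercubes, and as a simplified stand-in, here is a basic Latin hypercube sampler. Each dimension is split into n strata and each stratum is used exactly once.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Space-filling Latin hypercube sample on [0, 1)^dims:
    each dimension is split into n strata, each stratum used once."""
    rng = random.Random(seed)
    strata = []
    for _ in range(dims):
        order = list(range(n))
        rng.shuffle(order)           # independent stratum permutation
        strata.append(order)
    return [[(strata[d][i] + rng.random()) / n for d in range(dims)]
            for i in range(n)]

points = latin_hypercube(10, 3)      # 10 design points in 3 factors
```

Each design point then becomes one simulation configuration; replications at each point "grow" the output data that the analysis phase mines.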
Susan Sanchez Naval Postgraduate School |
Short Course | Materials | 2017 |
Breakout A DOE Case Study: Multidisciplinary Approach to Design an Army Gun Propulsion Charge (Abstract)
This session will focus on the novel application of a design of experiments approach to optimize a propulsion charge configuration for a U.S. Army artillery round. The interdisciplinary design effort included contributions from subject matter experts in statistics, propulsion charge design, computational physics and experimentation. The process, which we will present in this session, consisted of an initial, low fidelity modeling and simulation study to reduce the parametric space by eliminating inactive variables and reducing the ranges of active variables for the final design. The final design used a multi-tiered approach that consolidated data from multiple sources including low fidelity modeling and simulation, high fidelity modeling and simulation and live test data from firings in a ballistic simulator. Specific challenges of the effort that will be addressed include: integrating data from multiple sources, a highly constrained design space, functional response data, multiple competing design objectives and real-world test constraints. The result of the effort is a final, optimized propulsion charge design that will be fabricated for live gun firing. |
Sarah Longo Data Scientist US Army CCDC Armaments Center (bio)
Sarah Longo is a data scientist in the US Army CCDC Armaments Center’s Systems Analysis Division. She has a background in Chemical and Mechanical Engineering and ten years of experience in gun propulsion and armament engineering. Ms. Longo’s gun-propulsion expertise has played a part in enabling the successful implementation of Design of Experiments, Empirical Modeling, Data Visualization and Data Mining for mission-critical artillery armament and weapon system design efforts. |
Breakout |
| 2021 |
Breakout A User-Centered Design Approach to Military Software Development (Abstract)
This case study highlights activities performed during the front-end process of a software development effort undertaken by the Fire Support Command and Control Program Office. This program office provides the U.S. Army, Joint and coalition commanders with the capability to plan, execute and deliver both lethal and non-lethal fires. Recently, the program office has undertaken modernization of its primary field artillery command and control system that has been in use for over 30 years. The focus of this case study is on the user-centered design process and activities taken prior to and immediately following contract award. A modified waterfall model comprised of three cyclic, yet overlapping phases (observation, visualization, and evaluation) provided structure for the iterative, user-centered design process. Gathering and analyzing data collected during focus groups, observational studies, and workflow process mapping, enabled the design team to identify 1) design patterns across the role/duty, unit and echelon matrix (a hierarchical organization structure), 2) opportunities to automate manual processes, 3) opportunities to increase efficiencies for fire mission processing, 4) bottlenecks and workarounds to be eliminated through design of the modernized system, 5) shortcuts that can be leveraged in design, 6) relevant and irrelevant content for each user population for streamlining access to functionality, 7) a usability baseline for later comparison (e.g., the number of steps and time taken to perform a task as captured in workflows for comparison to the same task in the modernized system), and provided the basis for creating visualizations using wireframes. Heuristic evaluations were conducted early to obtain initial feedback from users. In the next few months, usability studies will enable users to provide feedback based on actual interaction with the newly designed software. 
Included in this case study are descriptions of the methods used to collect user-centered design data, how results were visualized/documented for use by the development team, and lessons learned from applying user-centered design techniques during software development of a military field artillery command and control system. |
Pam Savage-Knepshield | Breakout |
| 2019 |
Breakout Uncertainty Quantification and Analysis at The Boeing Company (Abstract)
The Boeing Company is assessing uncertainty quantification methodologies across many phases of aircraft design in order to establish confidence in computational fluid dynamics-based simulations of aircraft performance. This presentation provides an overview of several of these efforts. First, the uncertainty in aerodynamic performance metrics of a commercial aircraft at transonic cruise due to turbulence model and flight condition variability is assessed using 3D CFD with non-intrusive polynomial chaos and second order probability. Second, a sample computation of uncertainty in increments is performed for an engineering trade study, leading to the development of a new method for propagating input-uncontrolled uncertainties as well as input-controlled uncertainties. This type of consideration is necessary to account for variability associated with grid convergence on different configurations, for example. Finally, progress toward applying the computed uncertainties in forces and moments into an aerodynamic database used for flight simulation will be discussed. This approach uses a combination of Gaussian processes and multiple-fidelity Kriging meta-modeling to synthesize the required data. |
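Second-order probability, as used above, nests two Monte Carlo loops: the outer loop samples the epistemic uncertainty (quantities that are unknown but fixed, like a turbulence-model bias), and the inner loop samples aleatory variability, producing a family of CDFs (a "horsetail" plot). The scalar response and bias term below are invented, not Boeing's CFD quantities.

```python
import random

def second_order_probability(n_outer=50, n_inner=400, seed=0):
    """Nested (second-order) Monte Carlo: the outer loop samples an
    epistemically uncertain bias term (purely illustrative); the inner
    loop samples aleatory scatter. Each outer draw yields one
    empirical CDF of the response."""
    rng = random.Random(seed)
    cdfs = []
    for _ in range(n_outer):
        bias = rng.uniform(-0.5, 0.5)   # epistemic: unknown but fixed
        draws = sorted(bias + rng.gauss(0.0, 1.0)
                       for _ in range(n_inner))
        cdfs.append(draws)              # sorted draws = empirical CDF
    return cdfs

cdfs = second_order_probability()
```

The spread across the family of CDFs visualizes epistemic uncertainty, while the slope of each individual CDF reflects aleatory variability; in practice the inner response would come from a surrogate (e.g., polynomial chaos) rather than direct CFD runs.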
John Schaefer Sandia National Laboratories |
Breakout | Materials | 2018 |
Breakout Anatomy of a Cyberattack: Standardizing Data Collection for Adversarial and Defensive Analyses (Abstract)
Hardly a week goes by without news of a cybersecurity breach or an attack by cyber adversaries against a nation’s infrastructure. These incidents have wide-ranging effects, including reputational damage and lawsuits against corporations with poor data handling practices. Further, these attacks do not require the direction, support, or funding of technologically advanced nations; instead, significant damage can be – and has been – done with small teams, limited budgets, modest hardware, and open source software. Due to the significance of these threats, it is critical to analyze past events to predict trends and emerging threats. In this document, we present an implementation of a cybersecurity taxonomy and a methodology to characterize and analyze all stages of a cyberattack. The chosen taxonomy, MITRE ATT&CK™, allows for detailed definitions of aggressor actions that can be communicated, referenced, and shared uniformly throughout the cybersecurity community. We translate several open source cyberattack descriptions into the analysis framework, thereby constructing cyberattack data sets. These data sets (supplemented with notional defensive actions) illustrate example Red Team activities. The data collection procedure, when used during penetration testing and Red Teaming, provides valuable insights about the security posture of an organization, as well as the strengths and shortcomings of the network defenders. Further, these records can support analyses of past trends and future outlooks of organizations’ changing defensive capabilities. From these data, we are able to gather statistics on the timing of actions, detection rates, and cyberattack tool usage. Through analysis, we are able to identify trends in the results and compare the findings to prior events, different organizations, and various adversaries. |
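Statistics like technique usage and detection rates fall out directly once events are recorded against a uniform taxonomy. The technique IDs and detection flags below are notional, not drawn from the paper's data sets.

```python
from collections import Counter

# Notional red-team log: (ATT&CK technique ID, detected-by-defenders flag)
events = [
    ("T1566", True), ("T1059", False), ("T1059", True),
    ("T1021", False), ("T1059", False), ("T1566", False),
]

# Tally how often each technique was used across the engagement
technique_counts = Counter(t for t, _ in events)

# Fraction of adversary actions the defenders detected
detection_rate = sum(d for _, d in events) / len(events)
```

With timestamps added to each record, the same structure supports the timing analyses the abstract describes.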
Jason Schlup | Breakout |
| 2019 |
Keynote Opening Remarks (Abstract)
Norton A. Schwartz serves as President of the Institute for Defense Analyses (IDA), a nonprofit corporation operating in the public interest. IDA manages three Federally Funded Research and Development Centers that answer the most challenging U.S. security and science policy questions with objective analysis leveraging extraordinary scientific, technical, and analytic expertise. At IDA, General Schwartz (U.S. Air Force, retired) directs the activities of more than 1,000 scientists and technologists employed by IDA. General Schwartz has a long and prestigious career of service and leadership that spans over 5 decades. He was most recently President and CEO of Business Executives for National Security (BENS). During his 6-year tenure at BENS, he was also a member of IDA’s Board of Trustees. Prior to retiring from the U.S. Air Force, General Schwartz served as the 19th Chief of Staff of the U.S. Air Force from 2008 to 2012. He previously held senior joint positions as Director of the Joint Staff and as the Commander of the U.S. Transportation Command. He began his service as a pilot with the airlift evacuation out of Vietnam in 1975. General Schwartz is a U.S. Air Force Academy graduate and holds a master’s degree in business administration from Central Michigan University. He is also an alumnus of the Armed Forces Staff College and the National War College. He is a member of the Council on Foreign Relations and a 1994 Fellow of Massachusetts Institute of Technology’s Seminar XXI. General Schwartz has been married to Suzie since 1981. |
Norton Schwartz President Institute for Defense Analyses |
Keynote |
Recording | 2021 |
Breakout Comparison of Methods for Testing Uniformity to Support the Validation of Simulation Models used for Live-Fire Testing (Abstract)
Goodness-of-fit (GOF) testing is used in many applications, including statistical hypothesis testing to determine whether a set of data comes from a hypothesized distribution. In addition, combined probability tests are extensively used in meta-analysis to combine results from several independent tests to assess an overall null hypothesis. This paper summarizes a study conducted to determine which GOF and/or combined probability test(s) can be used to determine whether a set of data with relatively small sample size comes from the standard uniform distribution, U(0,1). The power of several GOF tests and combined probability methods against different alternative hypotheses was examined. The GOF methods included Anderson-Darling, Chi-Square, Kolmogorov-Smirnov, Cramér-von Mises, Neyman-Barton, Dudewicz-van der Meulen, Sherman, Quesenberry-Miller, Frosini, and Hegazy-Green; the combined probability test methods included Fisher’s Combined Probability Test, Mean Z, Mean P, Maximum P, Minimum P, Logit P, and Sum Z. While no one method was found to provide the best power in all situations, several useful methods to support model validation were identified. |
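Two of the listed methods are compact enough to sketch directly: the Kolmogorov-Smirnov statistic against U(0,1), and Fisher's combined probability test, whose chi-square survival function has a closed form for even degrees of freedom.

```python
import math

def ks_uniform(sample):
    """Kolmogorov-Smirnov statistic against U(0,1)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n)
               for i, x in enumerate(xs))

def fisher_combined(pvalues):
    """Fisher's combined probability test: X = -2 * sum(ln p) follows a
    chi-square with 2k degrees of freedom under the overall null; for
    even d.o.f. the survival function has a closed form."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)   # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

d = ks_uniform([0.1, 0.5, 0.9])
p_combined = fisher_combined([0.5, 0.5])
```

In the model-validation setting the abstract describes, each simulation-versus-live comparison yields one p-value, and the combined test asks whether the collection as a whole is consistent with U(0,1).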
Shannon Shelburne | Breakout |
| 2019 |
Breakout How do the Framework and Design of Experiments Fundamentally Help? (Abstract)
The Military Global Positioning System (GPS) User Equipment (MGUE) program is the user segment of the GPS Enterprise—a program in the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation (DASD(DT&E)) Space and Missile Defense Systems portfolio. The MGUE program develops and tests GPS cards capable of using Military Code (M Code) and legacy signals. The program’s DT&E strategy is challenging: the GPS cards provide new, untested capabilities. Milestone A was approved in 2012, with sole-source contracts released to three vendors for Increment 1. An Acquisition Decision Memorandum directs the program to support a Congressional mandate to provide GPS M Code-capable equipment for use after FY17. Increment 1 provides GPS receiver form factors for the ground domain interface as well as for the aviation and maritime domain interfaces. When reviewing the DASD(DT&E) Milestone B (MS B) Assessment Report, Mr. Kendall expressed curiosity about how the Developmental Evaluation Framework (DEF) and Design of Experiments (DOE) help. This presentation describes how the DEF and DOE methods helped produce more informative and more economical developmental tests than what was originally under consideration by the test community—decision-quality information with a 60% reduction in test cycle time. It provides insight into how the integration of the DEF and DOE improved the overall effectiveness of the DT&E strategy, illustrates the role of modeling and simulation (M&S) in the test design process, provides examples of experiment designs for different functional and performance areas, and illustrates the logic involved in balancing risks and test resources. The DEF and DOE methods enable the DT&E strategy to fully exploit early discovery, to maximize verification and validation opportunities, and to characterize system behavior across the technical requirements space. |
Mike Sheeha MITRE |
Breakout | 2017 |
Tutorial Evolving Statistical Tools (Abstract)
In this session, researchers from the Institute for Defense Analyses (IDA) present a collection of statistical tools designed to meet ongoing and emerging needs for planning, designing, and evaluating operational tests. We first present a suite of interactive applications hosted on testscience.org that are designed to address common analytic needs in the operational test community. These freely available resources include tools for constructing confidence intervals, computing statistical power, comparing distributions, and computing Bayesian reliability. Next, we discuss four dedicated software tools: JEDIS, a JMP add-in for automating power calculations for designed experiments; skpr, an R package for generating optimal experimental designs and easily evaluating power for normal and non-normal response variables; ciTools, an R package for quickly and simply generating confidence intervals and quantifying uncertainty for simple and complex linear models; and nautilus, an R package for visualizing and analyzing aspects of sensor performance, such as detection range and track completeness. |
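As a hedged illustration of the kind of computation such power tools automate (this is a textbook normal approximation, not the JEDIS or skpr algorithm): the power of a two-sided one-sample z-test for a given standardized effect and sample size.

```python
import math

def z_power(effect, n, sigma=1.0):
    """Approximate power of a two-sided one-sample z-test at alpha = 0.05
    via the normal approximation (critical value hard-coded)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    z_crit = 1.959964                  # Phi^-1(0.975)
    shift = effect * math.sqrt(n) / sigma
    # probability the test statistic lands in either rejection region
    return phi(shift - z_crit) + phi(-shift - z_crit)
```

For a designed experiment the noncentrality term comes from the design matrix rather than sqrt(n), which is exactly the bookkeeping these packages handle.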
Jason Sheldon Research Staff Member IDA |
Tutorial | Materials | 2018 |
Breakout Open Architecture Tradeoffs (OAT): A simple, computational game engine for rapidly exploring hypotheses in Battle Management Command and Control (BMC2) (Abstract)
We created the Open Architecture Tradeoffs (OAT) tool, a simple, computational game engine for rapidly exploring hypotheses about mission effectiveness in Battle Management Command and Control (BMC2). Each run of an OAT game simulates a military mission in contested airspace. Game objects represent U.S., adversary, and allied assets, each of which moves through the simulated airspace. Each U.S. asset has a Command and Control (C2) package that controls its actions—currently, neural networks form the basis of each U.S. asset’s C2 package. The weights of the neural network are randomized at the beginning of each game and are updated over the course of the game as the U.S. asset learns which of its actions lead to rewards, e.g., intercepting an adversary. Weights are updated via a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) altered to accommodate a reinforcement learning paradigm. OAT allows a user to winnow down the trade space that should be considered when setting up more expensive and time-consuming campaign models. OAT could be used to weed out bad ideas for “fast failure”, thus avoiding waste of campaign modeling resources. Questions such as the following can be explored via OAT: Which combination of system capabilities is likely to be more or less effective in a particular military mission? For example, in an early analysis, OAT was used to test the hypothesis that increases in U.S. assets’ sensor range always lead to increases in mission effectiveness, quantified as the percent of adversaries intercepted. We ran over 2500 OAT games, each time varying the sensor range of U.S. assets and the density of adversary assets. Results show that increasing sensor range did lead to an increase in military effectiveness—but only up to a certain point. Once the sensor range surpassed approximately 10-15% of the simulated airspace size, no further gains were made in the percent of adversaries intercepted. 
Thus, campaign modelers should hesitate to devote resources to exploring sensor range in isolation. More recent OAT analyses are exploring more complex hypotheses regarding the trade space between sensor range and communications range. |
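The shape of such a sensor-range sweep can be reproduced with deliberately toy geometry; this is not the OAT engine, and the single centered asset, unit-square airspace, and distance-based intercept rule are invented for illustration only. Because one fixed point set is reused across radii, the intercepted fraction rises with range and then saturates once the whole airspace is covered.

```python
import random

def intercept_fraction(sensor_range, n_adversaries=500, seed=0):
    """Toy stand-in for a sensor-range sweep: a single asset at the
    center of a unit square 'intercepts' any adversary that falls
    within sensor range. Purely illustrative geometry."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_adversaries):
        dx, dy = rng.random() - 0.5, rng.random() - 0.5
        if (dx * dx + dy * dy) ** 0.5 <= sensor_range:
            hits += 1
    return hits / n_adversaries

# Same seed for every radius, so the sweep is over one fixed scenario
sweep = {r: intercept_fraction(r) for r in (0.1, 0.3, 0.5, 0.7, 0.9)}
```

Even this caricature exhibits the diminishing-returns knee the OAT study found, which is the sort of qualitative screening result the tool is meant to surface cheaply.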
Shelley Cazares | Breakout |
| 2019 |
Breakout Cybersecurity Metrics and Quantification: Problems, Some Results, and Research Directions (Abstract)
Cybersecurity Metrics and Quantification is a fundamental but notoriously hard problem. It is one of the pillars underlying the emerging Science of Cybersecurity. In this talk, I will describe a number of cybersecurity metrics quantification research problems that are encountered in evaluating the effectiveness of a range of cyber defense tools. I will review the research results we have obtained over the past years. I will also discuss future research directions, including the ones that are undertaken in my research group. |
Shouhuai Xu Professor University of Colorado Colorado Springs (bio)
Shouhuai Xu is the Gallogly Chair Professor in the Department of Computer Science, University of Colorado Colorado Springs (UCCS). Prior to joining UCCS, he was with the Department of Computer Science, University of Texas at San Antonio. He pioneered a systematic approach, dubbed Cybersecurity Dynamics, to modeling and quantifying cybersecurity from a holistic perspective. This approach has three orthogonal research thrusts: metrics (for quantifying security, resilience and trustworthiness/uncertainty, to which this talk belongs), cybersecurity data analytics, and cybersecurity first-principle modeling (for seeking cybersecurity laws). His research has won a number of awards, including the 2019 worldwide adversarial malware classification challenge organized by the MIT Lincoln Lab. His research has been funded by AFOSR, AFRL, ARL, ARO, DOE, NSF and ONR. He co-initiated the International Conference on Science of Cyber Security (SciSec) and is serving as its Steering Committee Chair. He has served as Program Committee co-chair for a number of international conferences and as Program Committee member for numerous international conferences. He is/was an Associate Editor of IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE T-IFS), and IEEE Transactions on Network Science and Engineering (IEEE TNSE). More information about his research can be found at https://xu-lab.org. |
Breakout | Materials | 2021 |
Breakout Sequential Experimentation for a Binary Response – The Break Separation Method (Abstract)
Binary response experiments are common in epidemiology and biostatistics, as well as in military applications. The Up and Down method, Langlie’s method, Neyer’s method, the K in a Row method, and 3-Phase Optimal Design are all used for sequential experimental design when there is a single continuous variable and a binary response. During this talk, we will discuss a new sequential experimental design approach called the Break Separation Method (BSM). BSM provides an algorithm for determining sequential experimental trials that are used to find a median quantile and fit a logistic regression model via maximum likelihood estimation. BSM requires a small sample size and is designed to compute the median quantile efficiently. |
Rachel Silvestrini RIT-S |
Breakout | Materials | 2017 |
Tutorial Power Analysis Concepts |
Jim Simpson JK Analytics |
Tutorial | Materials | 2016 |
Short Course Split-Plot and Restricted Randomization Designs (Abstract)
Have you ever built what you considered to be the ideal designed experiment, passed it along to be run, and learned later that your recommended run order was ignored? Or perhaps you were part of a test execution team and learned too late that one or more of your experimental factors were difficult or time-consuming to change. We all recognize that the best possible guard against lurking background noise is complete randomization, but often a randomized run order is extremely impractical or even infeasible. Split-plot design and analysis methods have been around for over 80 years, but only in the last several years have the methods fully matured and become available in commercial software. This class will introduce you to the world of practical split-plot design and analysis methods. We’ll give you the skills to build designs appropriate to your specific needs and demonstrate proper analysis techniques using general linear models available in statistical software. Topics include split-plots for 2-level and mixed-level factor sets, for first- and second-order models, as well as split-split-plot designs. |
Jim Simpson JK Analytics |
Short Course | Materials | 2017 |
Breakout Automated Software Testing Best Practices and Framework: A STAT COE Project (Abstract)
The process for testing military systems that are largely software-intensive involves techniques and procedures that often differ from those for hardware-based systems. Much of the testing can be performed in laboratories at many of the acquisition stages, up through operational testing. Testing software systems is no different from testing hardware-based systems in that testing earlier and more intensively benefits the acquisition program in the long run. Automated testing of software systems enables more frequent and more extensive testing, allowing for earlier discovery of errors and faults in the code. Automated testing is beneficial for unit, integration, functional, and performance testing, but there are costs associated with automation tool license fees, specialized manpower, and the time to prepare and maintain the automation scripts. This presentation discusses some of the features unique to automated software testing and offers a framework organizations can implement to make the business case for, organize for, and execute and benefit from automating the right aspects of their testing needs. Automation saves time and money, but it is most valuable in freeing test resources to perform higher-value tasks. |
Jim Simpson JK Analytics |
Breakout | Materials | 2017 |
Breakout DOE and Test Automation for System of Systems TE (Abstract)
Rigorous, efficient, and effective test science techniques are individually taking hold in many software-centric DoD acquisition programs, in both developmental and operational test regimes. These techniques include agile software development, cybersecurity test and evaluation (T&E), design and analysis of experiments, and automated software testing. Many software-centric programs must also be tested together with other systems to demonstrate that they can be successfully integrated into a more complex system of systems. This presentation focuses on two test science disciplines, design of experiments (DOE) and automated software testing (AST), and describes how they can be used effectively and leverage one another in planning for and executing a system-of-systems test strategy. We use the Navy’s Distributed Common Ground System as an example. |
Jim Simpson JK Analytics |
Breakout | Materials | 2018 |
Contributed Infrastructure Lifetimes |
William Romine Lawrence Livermore National Laboratory |
Contributed | Materials | 2018 |
Contributed Workforce Analytics |
William Romine Lawrence Livermore National Laboratory |
Contributed | 2018 |
|
Breakout The Use of DOE vs OFAT in the Calibration of AEDC Wind Tunnels |
Rebecca Rought AEDC/TSTA |
Breakout | Materials | 2018 |
Breakout High Velocity Analytics for NASA JPL Mars Rover Experimental Design |
Heath Rushing Co-founder/Principal Adsurgo |
Breakout | Materials | 2016 |
Breakout Resampling Methods |
David Ruth United States Naval Academy |
Breakout | Materials | 2017 |
Breakout Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks |
Ryan Goldhahn | Breakout | 2019 |
|
Tutorial Pseudo-Exhaustive Testing – Part 1 |
Ryan Lekivetz Research Statistician Developer SAS Institute |
Tutorial |
Recording | 2021 |
Short Course Data Farming |
Susan Sanchez Naval Postgraduate School |
Short Course | Materials | 2017 |
Breakout A DOE Case Study: Multidisciplinary Approach to Design an Army Gun Propulsion Charge |
Sarah Longo Data Scientist US Army CCDC Armaments Center |
Breakout |
| 2021 |
Breakout A User-Centered Design Approach to Military Software Development |
Pam Savage-Knepshield | Breakout |
| 2019 |
Breakout Uncertainty Quantification and Analysis at The Boeing Company |
John Schaefer Sandia National Laboratories |
Breakout | Materials | 2018 |
Breakout Anatomy of a Cyberattack: Standardizing Data Collection for Adversarial and Defensive Analyses |
Jason Schlup | Breakout |
| 2019 |
Keynote Opening Remarks |
Norton Schwartz President Institute for Defense Analyses |
Keynote |
Recording | 2021 |
Breakout Comparison of Methods for Testing Uniformity to Support the Validation of Simulation Models used for Live-Fire Testing |
Shannon Shelburne | Breakout |
| 2019 |
Breakout How do the Framework and Design of Experiments Fundamentally Help? |
Mike Sheeha MITRE |
Breakout | 2017 |
|
Tutorial Evolving Statistical Tools |
Jason Sheldon Research Staff Member IDA |
Tutorial | Materials | 2018 |