Session Title | Speaker | Type | Year
---|---|---|---
Open Architecture Tradeoffs (OAT): A simple, computational game engine for rapidly exploring hypotheses in Battle Management Command and Control (BMC2) | Shelley Cazares | Breakout | 2019
Abstract: We created the Open Architecture Tradeoffs (OAT) tool, a simple, computational game engine for rapidly exploring hypotheses about mission effectiveness in Battle Management Command and Control (BMC2). Each run of an OAT game simulates a military mission in contested airspace. Game objects represent U.S., adversary, and allied assets, each of which moves through the simulated airspace. Each U.S. asset has a Command and Control (C2) package that controls its actions; currently, neural networks form the basis of each U.S. asset's C2 package. The weights of the neural network are randomized at the beginning of each game and are updated over the course of the game as the U.S. asset learns which of its actions lead to rewards, e.g., intercepting an adversary. Weights are updated via a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) altered to accommodate a Reinforcement Learning paradigm. OAT allows a user to winnow down the trade space that should be considered when setting up more expensive and time-consuming campaign models. OAT could be used to weed out bad ideas for "fast failure," thus avoiding waste of campaign modeling resources. OAT can be used to explore questions such as: Which combination of system capabilities is likely to be more or less effective in a particular military mission? For example, in an early analysis, OAT was used to test the hypothesis that increases in U.S. assets' sensor range always lead to increases in mission effectiveness, quantified as the percent of adversaries intercepted. We ran over 2,500 OAT games, each time varying the sensor range of U.S. assets and the density of adversary assets. Results show that increasing sensor range did lead to an increase in mission effectiveness, but only up to a certain point. Once the sensor range surpassed approximately 10-15% of the simulated airspace size, no further gains were made in the percent of adversaries intercepted. Thus, campaign modelers should hesitate to devote resources to exploring sensor range in isolation. More recent OAT analyses are exploring more complex hypotheses regarding the trade space between sensor range and communications range.

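The learning mechanism above is a neural-network C2 package whose weights are nudged toward rewarded actions. As a hedged illustration only (this is not the OAT code; the observation size, action count, learning rate, and softmax policy below are assumptions), a REINFORCE-style update for a single asset might look like:

```python
# Minimal sketch (not the OAT implementation): one U.S. asset's C2 "package" as a tiny
# policy network whose weights are reinforced when its actions earn a reward.
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_act = 4, 3                               # observation and action dimensions (illustrative)
W = rng.normal(scale=0.1, size=(n_act, n_obs))    # weights randomized at game start

def act(obs):
    """Softmax policy over actions given a local (partial) observation."""
    logits = W @ obs
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(n_act, p=p), p

def update(obs, action, reward, p, lr=0.05):
    """REINFORCE-style step: reinforce the taken action in proportion to the reward."""
    global W
    grad_logits = -p
    grad_logits[action] += 1.0                    # d log pi(action | obs) / d logits
    W += lr * reward * np.outer(grad_logits, obs)

# One illustrative step: observe, act, receive a reward (e.g., +1 for an intercept), learn.
obs = rng.normal(size=n_obs)
a, p = act(obs)
update(obs, a, reward=1.0, p=p)
```
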
Comparison of Methods for Testing Uniformity to Support the Validation of Simulation Models used for Live-Fire Testing | Shannon Shelburne | Breakout | 2019
Abstract: Goodness-of-fit (GOF) testing is used in many applications, including statistical hypothesis testing to determine whether a set of data comes from a hypothesized distribution. In addition, combined probability tests are extensively used in meta-analysis to combine results from several independent tests to assess an overall null hypothesis. This paper summarizes a study conducted to determine which GOF and/or combined probability test(s) can be used to determine whether a set of data with relatively small sample size comes from the standard uniform distribution, U(0,1). The power against different alternative hypotheses of several GOF tests and combined probability methods was examined. The GOF methods included Anderson-Darling, Chi-Square, Kolmogorov-Smirnov, Cramér-von Mises, Neyman-Barton, Dudewicz-van der Meulen, Sherman, Quesenberry-Miller, Frosini, and Hegazy-Green, while the combined probability test methods included Fisher's Combined Probability Test, Mean Z, Mean P, Maximum P, Minimum P, Logit P, and Sum Z. While no one method was determined to provide the best power in all situations, several useful methods to support model validation were identified.

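For readers who want to try two of the named methods, a minimal sketch using SciPy follows; the sample size, number of runs, and seed are arbitrary assumptions, not values from the study.

```python
# A Kolmogorov-Smirnov GOF test against U(0,1) and Fisher's combined probability method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Goodness-of-fit: KS test of a small sample against the standard uniform distribution.
u = rng.uniform(size=15)
ks = stats.kstest(u, "uniform")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# Combined probability: Fisher's method applied to p-values from several independent tests.
p_values = [stats.kstest(rng.uniform(size=15), "uniform").pvalue for _ in range(5)]
fisher_stat, fisher_p = stats.combine_pvalues(p_values, method="fisher")
print(f"Fisher combined p-value = {fisher_p:.3f}")
```
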
Valuing Human Systems Integration: A Test and Data Perspective | Jeffrey Thomas | Breakout | 2019
Abstract: Technology advances are accelerating at a rapid pace, with the potential to enable greater capability and power to the Warfighter. However, if human capabilities and limitations are not central to concepts, requirements, design, and development, then new or upgraded weapons and systems will be difficult to train, operate, and maintain; may not result in the skills, job, grade, and manpower mix as projected; and may result in serious human error, injury, or Soldier loss. The Army Human Systems Integration (HSI) program seeks to overcome these challenges by ensuring appropriate consideration and integration of seven technical domains: Human Factors Engineering (e.g., usability), Manpower, Personnel, Training, Safety and Occupational Health, Habitability, and Force Protection and Survivability. The tradeoffs, constraints, and limitations occurring among and between these technical domains allow HSI to execute a coordinated, systematic process for putting the warfighter at the center of the design process: equipping the warfighter rather than manning equipment. To that end, the Army HSI Headquarters, currently a directorate within the Army Headquarters Deputy Chief of Staff (DCS), G-1, develops strategies and ensures human systems factors are early key drivers in concepts, strategy, and requirements, and are fully integrated throughout system design, development, testing and evaluation, and sustainment. The need to consider HSI factors early in the development cycle is critical. Too often, man-machine interface issues are not addressed until late in the development cycle (i.e., the production and deployment phase), after the configuration of a particular weapon or system has been set. What results is degraded combat capability, suboptimal system and system-of-systems integration, increased training and sustainment requirements, or fielded systems not in use. Acquisition test data are also good sources from which to glean HSI return on investment (ROI) metrics. Defense acquisition reports such as test and evaluation operational assessments identify HSI factors as root causes when Army programs experience increased cost, schedule overruns, or low performance. This is identifiable by the number and type of systems that require follow-on test and evaluation (FOT&E), over-reliance on field service representatives (FSRs), costly and time-consuming engineering change requests (ECRs), or failures in achieving reliability, availability, and maintainability (RAM) key performance parameters (KPPs) and key system attributes (KSAs). In this presentation, we will present these data and submit several ROI metrics, closely aligned to the defense acquisition process, to emphasize and illustrate the value of HSI. Optimizing Warfighter-system performance and reducing human errors, minimizing risk of Soldier loss or injury, and reducing personnel and materiel life cycle costs produce data that are inextricably linked to early, iterative, and measurable HSI processes within the defense acquisition system.

Air Force Human Systems Integration Program | Anthony Thomas | Breakout | 2019
Abstract: The Air Force (AF) Human Systems Integration (HSI) program is led by the 711th Human Performance Wing's Human Systems Integration Directorate (711 HPW/HP). 711 HPW/HP provides direct support to system program offices and AF Major Commands (MAJCOMs) across the acquisition lifecycle, from requirements development to fielding and sustainment, in addition to providing home office support. With an ever-increasing demand signal for support, HSI practitioners within 711 HPW/HP assess HSI domain areas for human-centered risks and strive to ensure systems are designed and developed to safely, effectively, and affordably integrate with human capabilities and limitations. In addition to system program offices and MAJCOMs, 711 HPW/HP provides HSI support to AF Centers (e.g., AF Sustainment Center, AF Test Center), the AF Medical Service, and special cases as needed. The AF Global Strike Command (AFGSC) is the largest MAJCOM with several Programs of Record (POR), such as the B-1, B-2, and B-52 bombers, Intercontinental Ballistic Missiles (ICBM), the Ground-Based Strategic Deterrent (GBSD), the Airborne Launch Control System (ALCS), and other support programs/vehicles like the UH-1N. Mr. Anthony Thomas (711 HPW/HP), the AFGSC HSI representative, will discuss how 711 HPW/HP supports these programs at the MAJCOM headquarters level and in the system program offices.

Toward Real-Time Decision Making in Experimental Settings | Devin Francom | Breakout | 2019
Abstract: Materials scientists, computer scientists, and statisticians at LANL have teamed up to investigate how to make near-real-time decisions during fast-paced experiments. For instance, a materials scientist at a beamline typically has a short window in which to perform a number of experiments, after which they analyze the experimental data, determine interesting new experiments, and repeat. In typical circumstances, that cycle could take a year. The goal of this research and development project is to accelerate that cycle so that interesting leads are followed during the short window for experiments, rather than in years to come. We detail some of our UQ work in materials science, including emulation, sensitivity analysis, and solving inverse problems, with an eye toward real-time decision making in experimental settings.

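A minimal sketch of the emulation idea, assuming scikit-learn and a stand-in "simulator" (both assumptions for illustration): a Gaussian-process surrogate trained on a handful of completed runs gives fast predictions with uncertainty, which is what makes on-the-spot decisions possible when the real simulator or experiment is too slow to run in the loop.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    """Stand-in for a slow physics code (assumption for illustration)."""
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 2, size=(12, 1))          # a handful of completed runs
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Fast predictions with uncertainty let the experimenter rank candidate settings on the spot.
X_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(np.c_[X_new.ravel(), mean, std])
```
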
Area Validation for Applications with Mixed Uncertainty | Laura White | Breakout | 2019
Abstract: Model validation is a process for determining how accurate a model is when compared to a true value. The methodology uses uncertainty analysis in order to assess the discrepancy between a measured and predicted value. In the literature, several area metrics have been introduced to handle these types of discrepancies. These area metrics were applied to problems that include aleatory uncertainty, epistemic uncertainty, and mixed uncertainty. However, these methodologies lack the ability to fully characterize the true differences between the experimental and prediction data when mixed uncertainty exists in the measurements and/or in the predictions. This work will introduce a new area metric validation approach which aims to compensate for the shortcomings in current techniques. The approach will be described in detail and comparisons with existing metrics will be shown. To demonstrate its applicability, the new area metric will be applied to a stagnation point calibration probe's surface predictions for low-enthalpy conditions. For this application, testing was performed in the Hypersonic Materials Environmental Test System (HYMETS) facility located at NASA Langley Research Center.

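For orientation, a minimal sketch of the classic area validation metric (the area between the experimental and predicted empirical CDFs) is shown below; it is not the new metric proposed in the talk, and the sample values are assumptions for illustration.

```python
import numpy as np

def area_metric(experiment, prediction, grid_size=2000):
    """Area between the empirical CDFs of the measurements and the predictions."""
    lo = min(experiment.min(), prediction.min())
    hi = max(experiment.max(), prediction.max())
    x = np.linspace(lo, hi, grid_size)
    F_exp = np.searchsorted(np.sort(experiment), x, side="right") / len(experiment)
    F_pred = np.searchsorted(np.sort(prediction), x, side="right") / len(prediction)
    return np.trapz(np.abs(F_exp - F_pred), x)

rng = np.random.default_rng(3)
measured = rng.normal(10.0, 0.5, size=30)       # e.g., probe surface measurements
predicted = rng.normal(10.3, 0.7, size=200)     # e.g., model predictions under input uncertainty
print(f"area metric = {area_metric(measured, predicted):.3f}")
```
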
A 2nd-Order Uncertainty Quantification Framework Applied to a Turbulence Model Validation Effort | Robert Baurle | Breakout | 2019
Abstract: Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a “black box”. Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.

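A minimal sketch of the second-order (nested) sampling idea, with a stand-in black-box model and assumed parameter ranges: epistemic values are drawn in an outer loop, aleatory values in an inner loop, and the envelope of the resulting CDFs forms a probability box on the quantity of interest.

```python
import numpy as np

def black_box(epistemic, aleatory):
    """Stand-in for a nonintrusive CFD run returning a scalar quantity of interest."""
    return epistemic * aleatory + 0.1 * aleatory**2

rng = np.random.default_rng(4)
quantiles = np.linspace(0.05, 0.95, 19)
cdf_curves = []

for _ in range(20):                                  # outer loop: epistemic (interval) sample
    eps = rng.uniform(0.8, 1.2)
    qoi = np.array([black_box(eps, rng.normal(1.0, 0.1)) for _ in range(500)])  # inner: aleatory
    cdf_curves.append(np.quantile(qoi, quantiles))

cdf_curves = np.array(cdf_curves)
p_box_lower = cdf_curves.min(axis=0)                 # bounds on each quantile across epistemic draws
p_box_upper = cdf_curves.max(axis=0)
print(np.c_[quantiles, p_box_lower, p_box_upper])
```
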
Sources of Error and Bias in Experiments with Human Subjects | Poornima Madhavan | Breakout | 2019
Abstract: No set of experimental data is perfect, and researchers are aware that data from experimental studies invariably contain some margin of error. This is particularly true of studies with human subjects, since human behavior is vulnerable to a range of intrinsic and extrinsic influences beyond the variables being manipulated in a controlled experimental setting. Potential sources of error may lead to wide variations in the interpretation of results and the formulation of subsequent implications. This talk will discuss specific sources of error and bias in the design of experiments and present systematic ways to overcome these effects. First, some of the basic errors in general experimental design will be discussed, including human errors, systematic errors, and random errors. Second, we will explore specific types of experimental error that appear in human subjects research. Lastly, we will discuss the role of bias in experiments with human subjects. Bias is a type of systematic error that is introduced into the sampling or testing phase and encourages one outcome over another. Often, bias is the result of the intentional or unintentional influence that an experimenter may exert on the outcomes of a study. We will discuss some common sources of bias in research with human subjects, including biases in sampling, selection, response, performance execution, and measurement. The talk will conclude with a discussion of how errors and bias influence the validity of human subjects research and will explore some strategies for controlling these errors and biases.

Deep Reinforcement Learning | Benjamin Bell | Breakout | 2019
Abstract: An overview of Deep Reinforcement Learning and its recent successes in creating high-performing agents, covering its application in “easy” environments up to massively complex multi-agent strategic environments. The talk will analyze the behaviors learned, discuss research challenges, and imagine future possibilities.

Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks | Ryan Goldhahn | Breakout | 2019
Abstract: Collaborative autonomous sensor networks have recently been used in many applications including inspection, law enforcement, search and rescue, and national security. They offer scalable, low cost solutions which are robust to the loss of multiple sensors in hostile or dangerous environments. While often comprised of less capable sensors, the performance of a large network can approach the performance of far more capable and expensive platforms if nodes are effectively coordinating their sensing actions and data processing. This talk will summarize work to date at LLNL on distributed signal processing and decentralized optimization algorithms for collaborative autonomous sensor networks, focusing on ADMM-based solutions for detection/estimation problems and sequential greedy optimization solutions which maximize submodular functions, e.g. mutual information.

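A minimal sketch of the sequential greedy step mentioned above, using a simple coverage-style submodular objective rather than mutual information; the sensor layout, targets, and radius are assumptions, not an LLNL algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
sensors = rng.uniform(0, 10, size=(30, 2))       # candidate sensor locations
targets = rng.uniform(0, 10, size=(200, 2))      # points we would like covered
radius = 2.0

covered_by = [set(np.flatnonzero(np.linalg.norm(targets - s, axis=1) < radius)) for s in sensors]

def greedy_select(k):
    """Pick k sensors one at a time; submodularity makes the greedy choice near-optimal."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = [len(covered_by[i] - covered) if i not in chosen else -1 for i in range(len(sensors))]
        best = int(np.argmax(gains))              # sensor with the largest marginal gain
        chosen.append(best)
        covered |= covered_by[best]
    return chosen, len(covered)

chosen, n_covered = greedy_select(k=5)
print(f"selected sensors {chosen}, covering {n_covered} of {len(targets)} targets")
```
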
An Overview of Uncertainty-Tolerant Decision Support Modeling for Cybersecurity | Samrat Chatterjee | Breakout | 2019
Abstract: Cyber system defenders face the challenging task of continually protecting critical assets and information from a variety of malicious attackers. Defenders typically function within resource constraints, while attackers operate at relatively low costs. As a result, design and development of resilient cyber systems that support mission goals under attack, while accounting for the dynamics between attackers and defenders, is an important research problem. This talk will highlight decision support modeling challenges under uncertainty within non-cooperative cybersecurity settings. Multiple attacker-defender game formulations under uncertainty are discussed with steps for further research.

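As a toy illustration of an attacker-defender game formulation (not any specific model from the talk), a small zero-sum matrix game can be solved for the defender's optimal mixed strategy with linear programming; the payoff matrix below is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

# Rows: defender actions (e.g., which asset to harden); columns: attacker actions.
# Entries: defender payoff (higher is better for the defender).
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
m, n = A.shape

# Variables z = [x_1..x_m, v]; maximize the game value v (linprog minimizes, so use -v).
c = np.r_[np.zeros(m), -1.0]
A_ub = np.c_[-A.T, np.ones(n)]            # for each attacker action j: v <= sum_i A[i, j] * x_i
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # mixed strategy sums to 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, game_value = res.x[:m], res.x[m]
print(f"defender mixed strategy = {np.round(x, 3)}, game value = {game_value:.3f}")
```
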
Software Reliability and Security Assessment: Automation and Frameworks | Lance Fiondella | Breakout | 2019
Abstract: Software reliability models enable several quantitative predictions such as the number of faults remaining, failure rate, and reliability (probability of failure-free operation for a specified period of time in a specified environment). This talk will describe recent efforts in collaboration with NASA, including (1) the development of an automated script for the SFRAT (Software Failure and Reliability Assessment Tool) to streamline application of software reliability methods to ongoing programs, (2) application to a NASA program, (3) lessons learned, and (4) future directions for model and tool development to support the practical needs of the software reliability and security assessment frameworks.

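A minimal sketch of the kind of quantitative prediction such models enable, fitting the classic Goel-Okumoto NHPP model by maximum likelihood; this is not the SFRAT tool, and the failure times below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

times = np.array([  9.,  21.,  32.,  36.,  43.,  45.,  50.,  58.,  63.,  70.,
                   71.,  77.,  78.,  87.,  91.,  92.,  95.,  98., 104., 105.])
T = 110.0                                        # end of the observation window

def neg_log_likelihood(params):
    a, b = np.exp(params)                        # log-parameterization keeps a, b > 0
    # NHPP log-likelihood: sum of log intensities at failure times minus expected count m(T).
    return -(np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T)))

fit = minimize(neg_log_likelihood, x0=np.log([30.0, 0.01]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
remaining = a_hat - a_hat * (1.0 - np.exp(-b_hat * T))   # estimated faults not yet found
print(f"estimated total faults = {a_hat:.1f}, estimated remaining faults = {remaining:.1f}")
```
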
AI & ML in Complex Environment | Tien Pham | Breakout | 2019
Abstract: The U.S. Army Research Laboratory’s (ARL) Essential Research Program (ERP) on Artificial Intelligence & Machine Learning (AI & ML) seeks to research, develop and employ a suite of AI-inspired and ML techniques and systems to assist teams of soldiers and autonomous agents in dynamic, uncertain, complex operational conditions. Systems will be robust, scalable, and capable of learning and acting with varying levels of autonomy, to become integral components of networked sensors, knowledge bases, autonomous agents, and human teams. Three specific research gaps will be examined: (i) Learning in Complex Data Environments, (ii) Resource-constrained AI Processing at the Point-of-Need and (iii) Generalizable & Predictable AI. The talk will highlight ARL’s internal research efforts over the next 3-5 years that are connected, cumulative and converging to produce tactically-sensible AI-enabled capabilities for decision making at the tactical edge, specifically addressing topics in: (1) adversarial distributed machine learning, (2) robust inference & machine learning over heterogeneous sources, (3) adversarial reasoning integrating learned information, (4) adaptive online learning and (5) resource-constrained adaptive computing. The talk will also highlight collaborative research opportunities in AI & ML via ARL’s Army AI Innovation Institute (A2I2) which will harness the distributed research enterprise via the ARL Open Campus & Regional Campus initiatives.

Waste Not, Want Not: A Methodological Illustration of Quantitative Text Analysis | Laura Castro-Schilo | Breakout | 2019
Abstract: “The wise use of one’s resources will keep one from poverty.” This is the definition of the proverbial saying “waste not, want not” according to www.dictionary.com. Indeed, one of the most common resources analysts encounter is text in free form. This text might come from survey comments, feedback, websites, transcriptions of interviews, videos, etcetera. Notably, researchers have wisely used the information conveyed in text for many years. However, in many instances, the qualitative methods employed require numerous hours of reading, training, coding, and validating, among others. As technology continues to evolve, simple access to text data is blooming. For example, analysts conducting online studies can have thousands of text entries from participants’ comments. Even without recent advances in technology, analysts have had access to text in books, letters, and other archival data for centuries. One important challenge, however, is figuring out how to make sense of text data without investing a large number of resources, time, and the effort involved in qualitative methodology or “old-school” quantitative approaches (such as reading a collection of 200 books and counting the occurrence of important terms in the text). This challenge has been solved in the information retrieval field (a branch of computer science) with the implementation of a technique called latent semantic analysis (LSA; Manning, Raghavan, & Schütze, 2008) and a closely related technique called topic analysis (TA; SAS Institute Inc., 2018). Undoubtedly, other quantitative methods for text analysis, such as latent Dirichlet analysis (Blei, Ng, & Jordan, 2003), are also apt for the task of unveiling knowledge from text data, but we restrict the discussion in this presentation to LSA and TA because these exclusively deal with the underlying structure of the text rather than identifying clusters. In this presentation, we aim to make quantitative text analysis (specifically LSA and TA) accessible to researchers and analysts from a variety of disciplines. We do this by leveraging understanding of a popular multivariate technique: principal components analysis (PCA). We start by describing LSA and TA by drawing comparisons and equivalencies to PCA. We make these comparisons in an intuitive, user-friendly manner and then through a technical description of mathematical statements, which rely on the singular value decomposition of a document-term matrix. Moreover, we explain the implementation of LSA and TA using statistical software to enable simple application of these techniques. Finally, we show a practical application of LSA and TA with empirical data on aircraft incidents.

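A minimal sketch of LSA as described above (the singular value decomposition of a document-term matrix), assuming scikit-learn and a tiny made-up corpus of incident narratives:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "engine failure during climb forced a return to the field",
    "bird strike damaged the engine on takeoff",
    "pilot reported smoke in the cockpit and declared an emergency",
    "smoke and fumes in the cabin led to an emergency descent",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)   # document-term matrix
lsa = TruncatedSVD(n_components=2, random_state=0)               # truncated SVD = LSA
doc_scores = lsa.fit_transform(X)                                # documents in the latent space

print(doc_scores.round(2))                     # similar incidents land near each other
print(lsa.explained_variance_ratio_)           # analogous to PCA variance explained
```
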
Machine Learning Prediction With Streamed Sensor Data: Fitting Neural Networks using Functional Principal Components | Chris Gotwalt | Breakout | 2019
Abstract: Sensors that record sequences of measurements are now embedded in many products from wearable exercise watches to chemical and semiconductor manufacturing equipment. There is information in the shapes of the sensor stream curves that is highly predictive of a variety of outcomes such as the likelihood of a product failure event or batch yield. Despite this data now being common and readily available, it is often being used either inefficiently or not at all due to lack of knowledge and tools for how to properly leverage it. In this presentation, we will propose fitting splines to sensor streams and extracting features called functional principal component scores that offer a highly efficient low dimensional compression of the signal data. Then, we use these features as inputs into machine learning models like neural networks and LASSO regression models. Once one sees sensor data in this light, answering a wide variety of applied questions becomes a straightforward two stage process of data cleanup/functional feature extraction followed by modeling using those features as inputs.

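A minimal sketch of the two-stage idea, with simulated streams, simple smoothing in place of spline fitting, and a ridge model standing in for a neural network (all assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n_runs, n_time = 60, 200
t = np.linspace(0, 1, n_time)

# Simulated sensor streams whose shape (peak location) drives the outcome of interest.
peaks = rng.uniform(0.3, 0.7, n_runs)
curves = np.exp(-((t - peaks[:, None]) ** 2) / 0.01) + rng.normal(0, 0.05, (n_runs, n_time))
y = 10 * peaks + rng.normal(0, 0.1, n_runs)              # outcome, e.g., batch yield

# Stage 1: smooth each stream, then extract functional principal component scores via SVD.
kernel = np.ones(9) / 9
smoothed = np.array([np.convolve(c, kernel, mode="same") for c in curves])
centered = smoothed - smoothed.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
fpc_scores = U[:, :3] * S[:3]                            # first 3 FPC scores per run

# Stage 2: model the outcome using the low-dimensional functional features.
model = Ridge(alpha=1.0).fit(fpc_scores, y)
print(f"R^2 on training data: {model.score(fpc_scores, y):.3f}")
```
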
Screening Designs for Resource Constrained Deterministic M&S Experiments: A Munitions Case Study | Christopher Drake | Breakout | 2019
Abstract: In applications where modeling and simulation runs are quick and cheap, space-filling designs will give the tester all the information they need to make decisions about their system. In some applications, however, this luxury does not exist, and each M&S run can be time consuming and expensive. In these scenarios, a sequential test approach provides an efficient solution, where an initial screening is conducted, followed by an augmentation to fit specified models of interest. Until this point, no dedicated screening designs for UQ applications in resource-constrained situations existed. Due to the Army's frequent exposure to this type of situation, the need sparked a collaboration between Picatinny's Statistical Methods and Analysis group and Professor V. Roshan Joseph of Georgia Tech, where a new type of UQ screening design was created. This paper provides a brief introduction to the design, its intended use, and a case study in which this new methodology was applied.

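For contrast with the dedicated screening design described in the talk (which is not reproduced here), a generic space-filling design for a deterministic M&S study can be generated as follows, assuming SciPy's QMC module; the run size, factor count, and bounds are assumptions.

```python
import numpy as np
from scipy.stats import qmc

n_runs, n_factors = 12, 5
sampler = qmc.LatinHypercube(d=n_factors, seed=0)
unit_design = sampler.random(n=n_runs)                  # points in the unit hypercube

# Scale to engineering ranges (illustrative bounds for five munition-model inputs).
lower = np.array([0.5, 10.0, 200.0, 0.0, 1.0])
upper = np.array([2.0, 45.0, 800.0, 15.0, 4.0])
design = qmc.scale(unit_design, lower, upper)
print(np.round(design, 2))
```
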
The Isle of Misfit Designs: A Guided Tour of Optimal Designs That Break the Mold | Caleb King | Breakout | 2019
Abstract: Whether it was in a Design of Experiments course or through your own work, you’ve no doubt seen and become well acquainted with the standard experimental design. You know the features: they’re “orthogonal” (no messy correlations to deal with), their correlation matrices are nice pretty diagonals, and they can only happen with run sizes of 4, 8, 12, 16, and so on. Well, what if I told you that there exist optimal designs that defy convention? What if I told you that, yes, you can run an optimal design with, say, 5 factors in 9 runs. Or 10. Or even 11 runs! Join me as I show you a strange new world of optimal designs that are the best at what they do, even though they might not look very nice.

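A minimal sketch of how such "misfit" run sizes can still be optimized: a toy point-exchange search for a D-optimal main-effects design with 5 two-level factors in 9 runs. This is not the algorithm behind any particular software package; the candidate set and stopping rule are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)
candidates = np.array(list(itertools.product([-1, 1], repeat=5)), dtype=float)  # 32 candidate runs
n_runs = 9

def log_det(design):
    """log det(X'X) for an intercept-plus-main-effects model; -inf if singular."""
    X = np.c_[np.ones(len(design)), design]
    sign, val = np.linalg.slogdet(X.T @ X)
    return val if sign > 0 else -np.inf

design = candidates[rng.choice(len(candidates), n_runs, replace=False)]
improved = True
while improved:
    improved = False
    for i in range(n_runs):                       # try swapping each run for each candidate
        for cand in candidates:
            trial = design.copy()
            trial[i] = cand
            if log_det(trial) > log_det(design) + 1e-9:
                design, improved = trial, True

print(np.corrcoef(design, rowvar=False).round(2))  # small, but not all zero: not "orthogonal"
```
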
Adapting Operational Test to Rapid-Acquisition Programs | Panel Discussion | Breakout | 2019
Abstract: During the past several years, the DoD has begun applying rapid prototyping and fielding authorities, granted by Congress in the FY2016-FY2018 National Defense Authorization Acts (NDAA), to many acquisition programs. Other programs have implemented an agile acquisition strategy where incremental capability is delivered in iterative cycles. As a result, Operational Test Agencies (OTA) have had to adjust their test processes to accommodate shorter test timelines and periodic delivery of capability to the warfighter. In this session, representatives from the Service OTAs will brief examples where they have implemented new practices and processes for conducting Operational Test on acquisition programs categorized as agile, DevOps, and/or Section 804 rapid-acquisition efforts. During the final 30 minutes of the session, a panel of OTA representatives will field questions from the audience concerning the challenges and opportunities related to test design, data collection, and analysis that rapid-acquisition programs present.

Sample Size Calculations for Quiet Sonic Boom Community Surveys | Jasme Lee | Breakout | 2019
Abstract: NASA is investigating the dose-response relationship between quiet sonic boom exposure and community noise perceptions. This relationship is the key to possible future regulations that would replace the ban on commercial supersonic flights with a noise limit. We have built several Bayesian statistical models using pilot community study data. Using goodness of fit measures, we downselected to a subset of models which are the most appropriate for the data. From this subset of models we demonstrate how to calculate sample size requirements for a simplified example without any missing data. We also suggest how to modify the sample size calculation to account for missing data.

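A minimal sketch of a simulation-based sample-size calculation for a dose-response survey, using an assumed logistic model rather than the NASA models; the coefficients, exposure range, and precision target are illustrative assumptions, not pilot-study values.

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(9)
beta0, beta1 = -6.0, 0.08                 # assumed "true" intercept and slope (per dB of exposure)

def neg_log_lik(b, dose, y):
    eta = b[0] + b[1] * dose
    return -np.sum(y * eta - np.logaddexp(0.0, eta))     # logistic log-likelihood (negated)

def precision_ok(n, half_width=0.03, n_sims=200):
    """Fraction of simulated surveys of size n whose slope CI half-width meets the target."""
    hits = 0
    for _ in range(n_sims):
        dose = rng.uniform(65, 90, size=n)                        # simulated boom levels (dB)
        y = rng.binomial(1, special.expit(beta0 + beta1 * dose))  # simulated annoyance responses
        fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], args=(dose, y), method="BFGS")
        se_slope = np.sqrt(fit.hess_inv[1, 1])                    # approximate standard error
        hits += (1.96 * se_slope) <= half_width
    return hits / n_sims

for n in (100, 200, 400):
    print(n, precision_ok(n))
```
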
Improved Surface Gunnery Analysis with Continuous Data | Benjamin Ashwell & V. Bram Lillard | Breakout | 2019
Abstract: Swarms of small, fast speedboats can challenge even the most capable modern warships, especially when they operate in or near crowded shipping lanes. As part of the Navy’s operational testing of new ships and systems, at-sea live-fire tests against remote-controlled targets allow us to test our capability against these threats. To ensure operational realism, these events are minimally scripted and allow the crew to respond in accordance with their training. This is a trade-off against designed experiments, which ensure statistically optimal sampling of data from across the factor space, but introduce many artificialities. A recent test provided data on the effectiveness of naval gunnery. However, standard binomial (hit/miss) analyses fell short, as the number of misses was much larger than the number of hits. This prevented us from fitting more than a few factors and resulted in error bars so large as to be almost useless. In short, binomial analysis taught us nothing we did not already know. Recasting gunfire data from binomial (hit/miss) to continuous (time-to-kill) allowed us to draw statistical conclusions with tactical implications from these free-play, live-fire surface gunnery events. Using a censored-data analysis approach enabled us to make this switch and avoid the shortcomings of other statistical methods. Ultimately, our analysis provided the Navy with suggestions for improvements to its tactics and the employment of its weapons.

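A minimal sketch of the censored-data recasting described above: a Weibull time-to-kill model fit by maximum likelihood, with engagements that ended before a hit treated as right-censored. The times and censoring flags are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# time = seconds until the target was defeated (or until the engagement ended);
# observed = 1 if the target was defeated, 0 if the engagement ended first (censored).
time = np.array([35., 52., 60., 75., 80., 90., 95., 110., 120., 120., 120., 120.])
observed = np.array([1,   1,   1,   1,   1,   1,   1,   1,    0,    0,    0,    0])

def neg_log_lik(params):
    shape, scale = np.exp(params)                    # keep parameters positive
    z = time / scale
    log_pdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape
    log_surv = -z**shape                             # log survival function for censored times
    return -np.sum(observed * log_pdf + (1 - observed) * log_surv)

fit = minimize(neg_log_lik, x0=np.log([1.0, 100.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
median_ttk = scale_hat * np.log(2) ** (1 / shape_hat)
print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.1f} s, median time-to-kill = {median_ttk:.1f} s")
```
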
Human in the Loop Experiment Series Evaluating Synthetic Vision Displays for Enhanced Airplane State Awareness | Kathryn Ballard | Breakout | 2019
Abstract: Recent data from Boeing’s Statistical Summary of Commercial Jet Airplane Accidents shows that Loss of Control – In Flight (LOC-I) is the leading cause of fatalities in commercial aviation accidents worldwide. The Commercial Aviation Safety Team (CAST), a joint government and industry effort tasked with reducing the rate of fatal accidents, requested that the National Aeronautics and Space Administration (NASA) conduct research on virtual day-visual meteorological conditions displays, such as synthetic vision, in order to combat LOC-I. NASA recently concluded a series of experiments using commercial pilots from various backgrounds to evaluate synthetic vision displays. This presentation will focus on the two most recent experiments: one conducted with the Navy’s Disorientation Research Device and one completed at NASA Langley Research Center that utilized the Microsoft HoloLens to display synthetic vision. Statistical analysis was done on aircraft performance data, pilot inputs, and a range of subjective questionnaires to assess the efficacy of the displays.

Statistical Engineering and M&S in the Design and Development of DoD Systems | Doug Ray & Melissa Jablonski | Breakout | 2019
Abstract: This presentation will use a notional armament system case study to illustrate the use of M&S DOE, surrogate modeling, sensitivity analysis, multi-objective optimization, and model calibration during early lifecycle development and design activities in the context of a new armament system. In addition to focusing on the statistician's, data scientist's, or analyst's role and the key statistical techniques in engineering DoD systems, this presentation will also emphasize the non-statistical, engineering domain-specific aspects of a multidisciplinary design and development process which makes use of these statistical approaches at the subcomponent and subsystem level as well as in end-to-end system modeling. A statistical engineering methodology which emphasizes the use of 'virtual' DOE-based model emulators developed at the subsystem level and integrated using a systems-engineering architecture framework can yield a more tractable engineering problem compared to traditional 'design-build-test-fix' cycles or direct simulation of computationally expensive models. This supports a more informed prototype design for physical experimentation while providing a greater variety of materiel solutions, thereby reducing development and testing cycles and the time to field complex systems.

Uncertainty Quantification: Combining Large Scale Computational Models with Physical Data for Inference | Dave Higdon | Breakout | 2019
Abstract: Combining physical measurements with computational models is key to many investigations involving validation and uncertainty quantification (UQ). This talk surveys some of the many approaches taken for validation and UQ with large-scale computational models. Experience with such applications suggests classifications of different types of problems with common features (e.g., data size, amount of empiricism in the model, computational demands, availability of data, extent of extrapolation required, etc.). More recently, social and socio-technical systems are being considered for similar analyses, bringing new challenges to this area. This talk will survey approaches for such problems and will highlight what might be new research directions for application and methodological development in UQ.

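A minimal sketch of combining a computational model with physical measurements: a short Metropolis sampler calibrating one model parameter against noisy data. The model, data, noise level, and prior are assumptions for illustration, not any specific application from the talk.

```python
import numpy as np

rng = np.random.default_rng(11)

def model(theta, x):
    """Stand-in for a (cheap) computational model or emulator."""
    return theta * np.sin(x)

x_obs = np.linspace(0, 3, 15)
y_obs = model(2.0, x_obs) + rng.normal(0, 0.2, size=x_obs.size)   # synthetic measurements
sigma = 0.2

def log_post(theta):
    if not (0.0 < theta < 10.0):                   # flat prior on (0, 10)
        return -np.inf
    resid = y_obs - model(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, theta = [], 1.0
for _ in range(5000):                              # Metropolis random-walk sampler
    prop = theta + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[1000:])                    # drop burn-in
print(f"posterior mean = {post.mean():.2f}, "
      f"95% interval = ({np.quantile(post, 0.025):.2f}, {np.quantile(post, 0.975):.2f})")
```
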
A User-Centered Design Approach to Military Software Development | Pam Savage-Knepshield | Breakout | 2019
Abstract: This case study highlights activities performed during the front-end process of a software development effort undertaken by the Fire Support Command and Control Program Office. This program office provides the U.S. Army, Joint and coalition commanders with the capability to plan, execute and deliver both lethal and non-lethal fires. Recently, the program office has undertaken modernization of its primary field artillery command and control system that has been in use for over 30 years. The focus of this case study is on the user-centered design process and activities taken prior to and immediately following contract award. A modified waterfall model comprised of three cyclic, yet overlapping phases (observation, visualization, and evaluation) provided structure for the iterative, user-centered design process. Gathering and analyzing data collected during focus groups, observational studies, and workflow process mapping enabled the design team to identify 1) design patterns across the role/duty, unit and echelon matrix (a hierarchical organization structure), 2) opportunities to automate manual processes, 3) opportunities to increase efficiencies for fire mission processing, 4) bottlenecks and workarounds to be eliminated through design of the modernized system, 5) shortcuts that can be leveraged in design, 6) relevant and irrelevant content for each user population for streamlining access to functionality, and 7) a usability baseline for later comparison (e.g., the number of steps and time taken to perform a task as captured in workflows for comparison to the same task in the modernized system), and provided the basis for creating visualizations using wireframes. Heuristic evaluations were conducted early to obtain initial feedback from users. In the next few months, usability studies will enable users to provide feedback based on actual interaction with the newly designed software. Included in this case study are descriptions of the methods used to collect user-centered design data, how results were visualized and documented for use by the development team, and lessons learned from applying user-centered design techniques during software development of a military field artillery command and control system.

Engineering first, Statistics second: Deploying Statistical Test Optimization (STO) for Cyber | Kedar Phadke | Breakout | 2019
Abstract: Due to the immense number of potential use cases, configurations, and threat behaviors, thorough and efficient cyber testing is a significant challenge for the defense community. In this presentation, Phadke will present case studies where STO was successfully deployed for cyber testing, resulting in higher assurance, reduced schedule, and reduced testing cost. Phadke will also discuss the importance of first focusing on the engineering and science analysis, and only after that is complete, implementing statistical methods.

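A minimal sketch of the kind of combinatorial test optimization often used for cyber configurations: a greedy search for a small test suite that covers every pair of factor levels. The factors and levels are assumptions, and this is not the STO tooling referenced in the talk.

```python
import itertools

factors = {
    "os": ["win10", "rhel8", "ubuntu20"],
    "browser": ["chrome", "firefox", "edge"],
    "privilege": ["user", "admin"],
    "network": ["lan", "vpn", "wifi"],
}
names = list(factors)
all_cases = list(itertools.product(*factors.values()))

def pairs(case):
    """All (factor, level) pairs exercised by one test case."""
    items = list(zip(names, case))
    return set(itertools.combinations(items, 2))

uncovered = set().union(*(pairs(c) for c in all_cases))
suite = []
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))   # greedy: most new pairs
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} test cases cover all pairs (vs {len(all_cases)} exhaustive):")
for case in suite:
    print(dict(zip(names, case)))
```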