Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Contributed A Metrics-based Software Tool to Guide Test Activity Allocation (Abstract)
Existing software reliability growth models are limited to parametric models that characterize the number of defects detected as a function of testing time or the number of vulnerabilities discovered with security testing. However, the amount and types of testing effort applied are rarely considered. This lack of detail regarding specific testing activities limits the application of software reliability growth models to general inferences such as the additional amount of testing required to achieve a desired failure intensity, mean time to failure, or reliability (period of failure free operation). This presentation provides an overview of an open source software reliability tool implementing covariate software reliability models [1] to aid DoD organizations and their contractors who desire to quantitatively measure and predict the reliability and security improvement of software. Unlike traditional software reliability growth models, the models implemented in the tool can accept multiple discrete time series corresponding to the amount of each type of test activity performed as well as dynamic metrics computed in each interval. When applied in the context of software failure or vulnerability discovery data, the parameters of each activity can be interpreted as the effectiveness of that activity to expose reliability defects or security vulnerabilities. Thus, these enhanced models provide the structure to assess existing and emerging techniques in an objective framework that promotes thorough testing and process improvement, motivating the collection of relevant metrics and precise measurements of the time spent performing various testing activities. References [1] Vidhyashree Nagaraju, Chathuri Jayasinghe, Lance Fiondella, Optimal test activity allocation for covariate software reliability and security models, Journal of Systems and Software, Volume 168, 2020, 110643. |
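For readers unfamiliar with the model class, the sketch below illustrates one common covariate formulation: a discrete hazard in which per-interval testing effort, weighted by an effectiveness parameter for each activity, scales the baseline detection probability. All parameter values, covariates, and the latent defect count are illustrative assumptions, not values from the tool or the referenced paper.

```python
import numpy as np

# Hypothetical per-interval testing-effort covariates (columns: hours of
# functional testing, hours of stress testing) and illustrative parameters.
X = np.array([[4.0, 1.0],
              [3.0, 2.0],
              [5.0, 0.5],
              [2.0, 3.0]])
beta = np.array([0.10, 0.25])  # "effectiveness" of each activity
b = 0.05                       # baseline per-interval detection probability
N = 100.0                      # assumed number of latent defects

# Discrete covariate hazard: h_i = 1 - (1 - b)^exp(x_i . beta), so more
# effort (weighted by effectiveness) raises the chance of detection.
h = 1.0 - (1.0 - b) ** np.exp(X @ beta)

# Expected defects found in interval i = survivors entering i times h_i.
survivors = np.concatenate(([1.0], np.cumprod(1.0 - h)[:-1]))
expected = N * survivors * h
print(expected.round(2))
```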
Jacob Aubertine, Graduate Research Assistant, University of Massachusetts Dartmouth (bio)
Jacob Aubertine is pursuing an MS degree in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth, where he also received his BS (2020) in Computer Engineering. His research interests include software reliability, performance engineering, and statistical modeling. |
Contributed |
| 2021
Breakout An Adaptive Approach to Shock Train Detection (Abstract)
Development of new technology always incorporates model testing. This is certainly true for hypersonics, where flight tests are expensive and testing of component- and system-level models has significantly advanced the field. Unfortunately, model tests are often limited in scope, being only approximations of reality and typically only partially covering the range of potential realistic conditions. In this talk, we focus on the problem of real-time detection of the shock train leading edge in high-speed air-breathing engines, such as dual-mode scramjets. Detecting and controlling the shock train leading edge is important to the performance and stability of such engines, and it is a problem that has seen significant model testing on the ground and some flight testing. Often, methods developed for shock train detection are specific to the model used. Thus, they may not generalize well when tested in another facility or in flight, as they typically require a significant amount of prior characterization of the model and flow regime; such characterization data can be difficult or impossible to obtain if the isolator operating regime is large. A successful method for shock train detection needs to be robust to changes in features like isolator geometry, inlet and combustor states, flow regimes, and available sensors. To this end, we propose an approach for real-time detection of the isolator shock train. Our approach uses real-time pressure measurements to adaptively estimate the shock train position in a data-driven manner. We show that the method works well across different isolator models, placement of pressure transducers, and flow regimes. We believe that a data-driven approach is the way forward for bridging the gap between testing and reality, saving development time and money. |
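As an illustration of the general idea (not necessarily the authors' algorithm), the sketch below estimates the shock-train leading edge from one frame of wall-pressure readings by finding where pressure first rises a set fraction above an upstream baseline; the function name, threshold, and interpolation scheme are all assumptions.

```python
import numpy as np

def shock_train_edge(p, x, rise_frac=0.15, n_base=3):
    """Estimate the leading-edge location from one frame of wall pressures.

    p: transducer readings ordered upstream to downstream
    x: axial positions of the transducers
    rise_frac, n_base: illustrative tuning constants
    """
    base = np.median(p[:n_base])             # upstream taps, assumed shock-free
    thresh = (1.0 + rise_frac) * base
    above = np.where(p > thresh)[0]
    if above.size == 0:
        return None                          # no shock train in the isolator
    i = above[0]
    if i == 0:
        return x[0]
    # Interpolate between the last quiet tap and the first elevated tap.
    w = (thresh - p[i - 1]) / (p[i] - p[i - 1])
    return x[i - 1] + w * (x[i] - x[i - 1])

frame = np.array([10.1, 10.0, 10.2, 10.3, 13.5, 18.0, 22.0])  # illustrative
taps = np.linspace(0.0, 0.6, 7)
print(shock_train_edge(frame, taps))
```

Feeding successive frames through such a detector yields a real-time position estimate without prior characterization of the specific isolator.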
Greg Hunt, Assistant Professor, William & Mary (bio)
Greg is an interdisciplinary researcher who builds scientific tools. He is trained as a statistician, mathematician, and computer scientist. Currently he works on a diverse set of problems in biology, physics, and engineering. |
Breakout |
| 2021
Breakout Intelligent Integration of Limited-Knowledge IoT Services in a Cross-Reality Environment (Abstract)
The recent emergence of affordable, high-quality augmented-, mixed-, and virtual-reality (AR, MR, VR) technologies presents an opportunity to dramatically change the way users consume and interact with information. It has been shown that these immersive systems can be leveraged to enhance comprehension and accelerate decision-making in situations where data can be linked to spatial information, such as maps or terrain models. Furthermore, when immersive technologies are networked together, they allow for decentralized collaboration and provide perspective-taking not possible with traditional displays. However, enabling this shared space requires novel techniques in intelligent information management and data exchange. In this experiment, we explored a framework for leveraging distributed AI/ML processing to enable clusters of low-power, limited-functionality devices to deliver complex capabilities in aggregate to users distributed across the country collaborating simultaneously in a shared virtual environment. We deployed a motion-detecting camera and triggered detection events to send information through a distributed request/reply worker framework to a remotely located YOLO image classification cluster. This work demonstrates that IoT and IoBT systems can invoke functionality without a priori knowledge of the specific endpoint that will execute it, by instead submitting a request based on a desired capability concept (e.g., image classification). Doing so requires only: 1) knowledge of the broker location, 2) a valid public/private key pair to authenticate with the broker, and 3) the capability concept UUID and knowledge of the request/reply formats used by that concept. |
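The sketch below shows what a capability-concept request of this kind might look like; the envelope fields and values are purely hypothetical, and key-pair authentication with the broker is assumed to happen at the transport layer rather than in the payload.

```python
import json
from uuid import uuid4

# Stand-in for a published capability-concept identifier; in practice the
# UUID (and the message schema) would be agreed upon out of band.
IMAGE_CLASSIFICATION_CONCEPT = str(uuid4())

request = {
    "concept": IMAGE_CLASSIFICATION_CONCEPT,   # desired capability, not an endpoint
    "reply_to": "client-042",                  # where a worker should send results
    "payload": {"image_ref": "detections/frame_001.jpg"},
}

# Key-pair authentication with the broker is assumed to occur at the
# transport layer; the payload itself names no specific worker.
print(json.dumps(request, indent=2))
```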
Mark Dennison, Research Psychologist, U.S. Army DEVCOM Army Research Laboratory (bio)
Mark Dennison is a research psychologist with DEVCOM U.S. Army Research Laboratory in the Computational and Information Sciences Directorate, Battlefield Information Systems Branch. He leads a team of government researchers and contractors focused on enabling cross-reality technologies to enhance lethality across domains through information management across echelons. Dr. Dennison earned his bachelor's, master's, and Ph.D. degrees from the University of California, Irvine, all in psychology with a specialization in cognitive neuroscience. He is stationed at ARL-West in Playa Vista, CA. |
Breakout |
| 2021
Panel The Keys to Successful Collaborations during Test and Evaluation: Moderator (Abstract)
The defense industry faces increasingly complex systems in test and evaluation (T&E) that require interdisciplinary teams to successfully plan testing. A critical aspect of test planning is successful collaboration between T&E experts, subject matter experts, program leadership, statisticians, and others. The panelists, drawing on their own experiences as consulting statisticians, will discuss elements that lead to successful collaborations, barriers encountered during collaboration, and recommendations for improving collaborations during T&E planning. |
Christine Anderson-Cook, Los Alamos National Lab |
Panel |
Recording | 2021 |
Breakout Prior Formulation in a Bayesian Analysis of Biomechanical Data (Abstract)
Biomechanical experiments investigating the failure modes of biological tissue require a significant investment of time and money due to the complexity of procuring, preparing, and testing tissue. Furthermore, the potentially destructive nature of these tests makes repeated testing infeasible. This leads to experiments with notably small sample sizes in light of the high variance common to biological material. When the goal is to estimate parameters for an analytic artifact such as an injury risk curve (IRC), which relates an input quantity to a probability of injury, small sample sizes result in undesirable uncertainty. One way to ameliorate this effect is through a Bayesian approach, incorporating expert opinion and previous experimental data into a prior distribution. This has the advantage of leveraging the information contained in expert opinion and related experimental data to obtain faster convergence to an appropriate parameter estimate with a desired certainty threshold. We explore several ways of implementing Bayesian methods in a biomechanical setting, including permutations on the use of expert knowledge and prior experimental data. Specifically, we begin with a set of experimental data from which we generate a reference IRC. We then elicit expert predictions of the 10th and 90th quantiles of injury, and use them to formulate both uniform and normal prior distributions. We also generate priors from qualitatively similar experimental data, both directly on the IRC parameters and on the injury quantiles, and explore the use of weighting schemes to assign more influence to better datasets. By adjusting the standard deviation and shifting the mean, we can create priors of variable quality. Using a subset of the experimental data in conjunction with our derived priors, we then re-fit the IRC and compare it to the reference curve. For all methods we will measure the certainty, speed of convergence, and accuracy relative to the reference IRC, with the aim of recommending a best-practices approach for the application of Bayesian methods in this setting. Ultimately, an optimized approach for handling small sample sizes with Bayesian methods has the potential to increase the information content of individual biomechanical experiments by integrating them into the context of expert knowledge and prior experimentation. |
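A minimal sketch of one of the described permutations, assuming a two-parameter logistic IRC: expert 10th/90th injury quantiles are converted into normal priors on the parameters, and the IRC is re-fit by maximum a posteriori estimation. All data, elicited values, and prior standard deviations are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical elicitation: 10% injury risk at 40 units of input, 90% at 70.
q10, q90 = 40.0, 70.0

# Logistic IRC: P(injury | x) = 1 / (1 + exp(-(a + b*x))).
# The elicited quantiles imply a + b*q10 = logit(0.1), a + b*q90 = logit(0.9).
logit = lambda p: np.log(p / (1 - p))
b0 = (logit(0.9) - logit(0.1)) / (q90 - q10)
a0 = logit(0.1) - b0 * q10

# Normal priors centered on the elicited values; sds reflect expert confidence.
log_prior = lambda th: norm.logpdf(th[0], a0, 2.0) + norm.logpdf(th[1], b0, 0.05)

def neg_log_post(th, x, y):
    p = 1 / (1 + np.exp(-(th[0] + th[1] * x)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) + log_prior(th))

# Small experimental dataset (input level, injury indicator) -- illustrative.
x = np.array([35, 45, 50, 55, 60, 65, 75.0])
y = np.array([0, 0, 1, 0, 1, 1, 1.0])
map_fit = minimize(neg_log_post, x0=[a0, b0], args=(x, y), method="Nelder-Mead")
print(map_fit.x)  # MAP estimate of (a, b)
```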
Amanda French, Data Scientist, Johns Hopkins University Applied Physics Laboratory (bio)
Amanda French is a data scientist at Johns Hopkins University Applied Physics Laboratory. She obtained her PhD in mathematics from UNC Chapel Hill and went on to perform data science for a variety of government agencies, including the Department of State, Military Health System, and Department of Defense. Her expertise includes statistics, machine learning, and experimental design. |
Breakout |
Recording | 2021
Tutorial Pseudo-Exhaustive Testing – Part 1 (Abstract)
Exhaustive testing is infeasible when testing complex engineered systems. Fortunately, a combinatorial testing approach can be almost as effective as exhaustive testing but at dramatically lower cost. The effectiveness of this approach is due to the underlying mathematical construct on which it is based, known as a covering array. This tutorial is divided into two sections. Section 1 introduces covering arrays, presents a few covering array metrics, and then shows how covering arrays are used in combinatorial testing methodologies. Section 2 focuses on practical applications of combinatorial testing, including a commercial aviation example, an example that focuses on a widely used machine learning library, and other examples that illustrate how common testing challenges can be addressed. In the process of working through these examples, an easy-to-use tool for generating covering arrays will be demonstrated. |
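To make the covering-array idea concrete, the snippet below checks a classic strength-2 covering array for four binary factors: five runs cover every pairwise combination of factor levels that exhaustive testing would need sixteen runs to reach.

```python
from itertools import combinations, product

# A strength-2 covering array for 4 binary factors in only 5 runs
# (exhaustive testing would need 2**4 = 16 runs).
CA = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (1, 0, 1, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 0),
]

# Verify: every pair of factors takes on all four value combinations.
for i, j in combinations(range(4), 2):
    seen = {(row[i], row[j]) for row in CA}
    assert seen == set(product((0, 1), repeat=2)), (i, j)
print("All 2-way interactions covered in", len(CA), "runs")
```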
Ryan Lekivetz, Research Statistician Developer, SAS Institute (bio)
Ryan Lekivetz is a Principal Research Statistician Developer for the JMP Division of SAS where he implements features for the Design of Experiments platforms in JMP software. |
Tutorial |
Recording | 2021
Keynote Assessing Human-Autonomy Interaction in Driving-Assist Settings (Abstract)
In order to determine how the perception, Autopilot, and driver monitoring systems of Tesla Model 3s interact with one another, and to determine the scale of between- and within-car variability, a series of four on-road tests was conducted. Three sets of tests were conducted on a closed track and one was conducted on a public highway. Results show wide variability across and within three Tesla Model 3s, with excellent performance in some cases but also likely catastrophic performance in others. This presentation will highlight not only how such interactions can be tested, but also how results can inform requirements and designs of future autonomous systems. |
Mary “Missy” Cummings, Professor, Duke University (bio)
Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval pilot from 1988-1999, she was one of the U.S. Navy’s first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department and the Director of the Humans and Autonomy Laboratory. She is an AIAA Fellow and a member of the Veoneer, Inc. Board of Directors |
Keynote |
Recording | 2021
Breakout Estimating Pure-Error from Near Replicates in Design of Experiments (Abstract)
In design of experiments, setting exact replicates of factor settings enables estimation of pure error: a model-independent estimate of experimental error useful in communicating inherent system noise and in testing for model lack-of-fit. Often in practice, the factor levels for replicates are precisely measured rather than precisely set, resulting in near-replicates. This can result in inflated estimates of pure error due to uncompensated set-point variation. In this article, we review previous strategies for estimating pure error from near-replicates and propose a simple alternative. We derive key analytical properties and investigate them via simulation. Finally, we illustrate the new approach with an application. |
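For reference, the classical pure-error computation from exact replicates is just the pooled within-group variability, as in this sketch (data illustrative); near-replicates inflate this quantity because set-point variation leaks into the within-group sums of squares.

```python
import numpy as np
import pandas as pd

# Illustrative data: two factors with exact replicates at three settings.
df = pd.DataFrame({
    "x1": [1, 1, 1, 2, 2, 3, 3, 3],
    "x2": [5, 5, 5, 6, 6, 7, 7, 7],
    "y":  [10.1, 9.8, 10.3, 12.0, 12.4, 15.2, 14.8, 15.1],
})

# Classical pure-error estimate: pooled within-group variance over groups
# of exact replicates (model-independent).
g = df.groupby(["x1", "x2"])["y"]
ss_pe = (df["y"] - g.transform("mean")).pow(2).sum()
df_pe = len(df) - g.ngroups
print("pure-error MSE:", ss_pe / df_pe)
```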
Caleb King, Research Statistician Developer, SAS Institute |
Breakout |
| 2021
Breakout Operational Cybersecurity Test and Evaluation of Non-IP and Wireless Networks (Abstract)
Nearly all land, air, and sea maneuver systems (e.g., vehicles, ships, aircraft, and missiles) are becoming more software-reliant and blending internal communication across both Internet Protocol (IP) and non-IP buses. IP communication is widely understood among the cybersecurity community, whereas expertise and available test tools for non-IP protocols such as Controller Area Network (CAN) and MIL-STD-1553 are not as commonplace. However, a core tenet of operational cybersecurity testing is to assess all potential pathways of information exchange present on the system, including both IP and non-IP. In this presentation, we will introduce a few non-IP protocols (e.g., CAN, MIL-STD-1553) and provide a live demonstration of how to attack a CAN network using malicious message injection. We will also discuss how potential cyber effects on non-IP buses can lead to catastrophic mission effects on the target system. |
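The following sketch conveys why injection attacks on CAN are straightforward: frames carry no sender authentication, so any node can transmit any arbitration ID. It uses the python-can library (4.x-style API, an assumption) against a virtual bus; the arbitration ID and payload are made up for illustration.

```python
import can  # python-can (4.x-style API); a virtual bus needs no hardware

# Minimal injection sketch. The arbitration ID and payload are made up;
# a real attack would target an ID the victim ECU actually listens to.
bus = can.Bus(interface="virtual", channel="demo")

spoofed = can.Message(
    arbitration_id=0x0C1,           # hypothetical target ID
    data=[0xFF, 0x00, 0x00, 0x00],  # forged payload bytes
    is_extended_id=False,
)
bus.send(spoofed)                   # CAN has no sender authentication,
bus.shutdown()                      # so the frame is accepted like any other
```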
Peter Mancini, Research Staff Member, Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Breakout |
Recording | 2021
Panel Finding the Human in the Loop: HSI | Trustworthy AI (Abstract)
Recent successes and shortcomings of AI implementations have highlighted the importance of understanding how to design and interpret trustworthiness. AI assurance is becoming a popular objective for some stakeholders; however, assurance and trustworthiness are context-sensitive concepts that rely not only on software performance and cybersecurity, but also on human-centered design. This talk summarizes Cognitive Engineering principles in the context of resilient AI engineering. It also introduces approaches for successful Human-Machine Teaming in high-risk work domains. |
Stoney Trent, Research Professor and Principal Advisor for Research and Innovation, Virginia Tech; Founder, The Bulls Run Group, LLC (bio)
Stoney is a Cognitive Engineer and a Military Intelligence and Cyber Warfare veteran who specializes in human-centered innovation. As an Army officer, Stoney designed and secured over $350M to stand up the Joint Artificial Intelligence Center (JAIC) for the Department of Defense. As the Chief of Missions in the JAIC, he established product lines to deliver human-centered AI to improve warfighting and business functions in the world’s largest bureaucracy. Previously, he established and directed U.S. Cyber Command’s $50M applied research lab, which develops and assesses products for the Cyber Mission Force. Stoney has served as a Strategic Policy Research Fellow with the RAND Arroyo Center and is a former Assistant Professor in the Department of Behavioral Science and Leadership at the United States Military Academy. He has served in combat and stability operations in Iraq, Kosovo, Germany, and Korea. Stoney is a graduate of the Army War College and a former Cyber Fellow at the National Security Agency. |
Panel |
Recording | 2021 |
Breakout A Great Test Requires a Great Plan (Abstract)
The Scientific Test and Analysis Techniques (STAT) process is designed to provide structure for a test team to progress from a requirement to decision-quality information. The four phases of the STAT process are Plan, Design, Execute, and Analyze. Within the Test and Evaluation (T&E) community, we tend to focus on the quantifiable metrics and the hard science of testing, which are the Design and Analyze phases. At the STAT Center of Excellence (COE), we have emphasized an increased focus on the planning phase, and in this presentation we focus on the elements necessary for a comprehensive planning session. In order to efficiently and effectively test a system, it is vital that the test team understand the requirements, the System Under Test (SUT), including any subsystems that will be tested, and the test facility. To accomplish this, the right team members with the necessary knowledge must be in the room, prepared to present their information and to have an educated discussion to arrive at a comprehensive agreement about the desired end state of the test. Our recommendations for the initial planning meeting are based on a thorough study of the STAT process and on lessons learned from actual planning meetings. |
Aaron Ramert, STAT Analyst, Scientific Test and Analysis Techniques Center of Excellence (STAT COE) (bio)
Mr. Ramert is a graduate of the US Naval Academy and the Naval Postgraduate School and a 20-year veteran of the Marine Corps. During his career in the Marines he served tours in operational air and ground units as well as academic assignments. He joined the Scientific Test and Analysis Techniques (STAT) Center of Excellence (COE) in 2016, where he works with major Department of Defense acquisition programs to apply rigor and efficiency to their test and evaluation methodology through the application of the STAT process. |
Breakout |
Recording | 2021
Tutorial Pseudo-Exhaustive Testing – Part 2 |
Joseph Morgan, Principal Research Statistician, SAS Institute (bio)
Joseph Morgan is a Principal Research Statistician/Developer in the JMP Division of SAS Institute Inc. where he implements features for the Design of Experiments platforms in JMP software. His research interests include combinatorial testing, empirical software engineering and algebraic design theory. |
Tutorial |
Recording | 2021
Keynote Opening Remarks (Abstract)
Norton A. Schwartz serves as President of the Institute for Defense Analyses (IDA), a nonprofit corporation operating in the public interest. IDA manages three Federally Funded Research and Development Centers that answer the most challenging U.S. security and science policy questions with objective analysis leveraging extraordinary scientific, technical, and analytic expertise. At IDA, General Schwartz (U.S. Air Force, retired) directs the activities of more than 1,000 scientists and technologists employed by IDA. General Schwartz has a long and prestigious career of service and leadership that spans over 5 decades. He was most recently President and CEO of Business Executives for National Security (BENS). During his 6-year tenure at BENS, he was also a member of IDA’s Board of Trustees. Prior to retiring from the U.S. Air Force, General Schwartz served as the 19th Chief of Staff of the U.S. Air Force from 2008 to 2012. He previously held senior joint positions as Director of the Joint Staff and as the Commander of the U.S. Transportation Command. He began his service as a pilot with the airlift evacuation out of Vietnam in 1975. General Schwartz is a U.S. Air Force Academy graduate and holds a master’s degree in business administration from Central Michigan University. He is also an alumnus of the Armed Forces Staff College and the National War College. He is a member of the Council on Foreign Relations and a 1994 Fellow of Massachusetts Institute of Technology’s Seminar XXI. General Schwartz has been married to Suzie since 1981. |
Norton Schwartz, President, Institute for Defense Analyses |
Keynote |
Recording | 2021 |
Breakout Uncertainty Quantification and Sensitivity Analysis Methodology for AJEM (Abstract)
The Advanced Joint Effectiveness Model (AJEM) is a joint forces model developed by the U.S. Army that is used in vulnerability and lethality (V/L) predictions for threat/target interactions. This complex model primarily generates a probability response for various components, scenarios, loss of capabilities, or summary conditions. Sensitivity analysis (SA) and uncertainty quantification (UQ), referred to jointly as SA/UQ, are disciplines that characterize how model estimates change with respect to changes in input variables. This presentation reviews and illustrates a comparative measure developed to characterize the effect of an input change on the predicted outcome. This measure provides a practical context that stakeholders can better understand and utilize. We show graphical and tabular results using this measure. |
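The abstract does not define the comparative measure itself, so the sketch below shows only a generic one-at-a-time perturbation of a probability-valued model, a common starting point for this kind of sensitivity study; the model, inputs, and perturbation size are illustrative, not AJEM's.

```python
import numpy as np

# Generic one-at-a-time (OAT) sensitivity sketch for a probability-valued
# model; this is an illustration, not AJEM's actual comparative measure.
def model(x):  # stand-in for a vulnerability model returning P(loss)
    return 1 / (1 + np.exp(-(0.8 * x[0] - 0.5 * x[1] + 0.1 * x[2])))

x0 = np.array([1.0, 2.0, 0.5])           # nominal input
for i in range(len(x0)):
    dx = np.zeros_like(x0)
    dx[i] = 0.1 * max(abs(x0[i]), 1.0)   # 10% perturbation (illustrative)
    effect = model(x0 + dx) - model(x0 - dx)
    print(f"input {i}: change in predicted probability = {effect:+.4f}")
```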
Craig Andres, Mathematical Statistician, U.S. Army CCDC Data & Analysis Center (bio)
Craig Andres is a Mathematical Statistician at the recently formed DEVCOM Data & Analysis Center in the Materiel M&S Branch, working primarily on the uncertainty quantification, as well as the verification and validation, of the AJEM vulnerability model. He is currently on a developmental assignment with the Capabilities Projection Team. He has a master's degree in Applied Statistics from Oakland University and a master's degree in Mathematics from Western Michigan University. |
Breakout |
| 2021
Breakout A DOE Case Study: Multidisciplinary Approach to Design an Army Gun Propulsion Charge (Abstract)
This session will focus on the novel application of a design of experiments approach to optimize a propulsion charge configuration for a U.S. Army artillery round. The interdisciplinary design effort included contributions from subject matter experts in statistics, propulsion charge design, computational physics and experimentation. The process, which we will present in this session, consisted of an initial, low fidelity modeling and simulation study to reduce the parametric space by eliminating inactive variables and reducing the ranges of active variables for the final design. The final design used a multi-tiered approach that consolidated data from multiple sources including low fidelity modeling and simulation, high fidelity modeling and simulation and live test data from firings in a ballistic simulator. Specific challenges of the effort that will be addressed include: integrating data from multiple sources, a highly constrained design space, functional response data, multiple competing design objectives and real-world test constraints. The result of the effort is a final, optimized propulsion charge design that will be fabricated for live gun firing. |
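A toy version of the screening step described above, assuming a first-order model fit to coded low-fidelity runs with inactive factors dropped by an effect-size cutoff; the factor names, data, and cutoff are illustrative.

```python
import numpy as np

# Screening sketch: fit a first-order model to low-fidelity M&S runs and
# drop factors with negligible effects (all data are synthetic).
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(16, 5))                    # 5 coded factors
y = 3.0 * X[:, 0] - 1.8 * X[:, 2] + rng.normal(0, 0.3, 16)   # truth: x0, x2 active

coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(16), X]), y, rcond=None)
effects = dict(zip(["x0", "x1", "x2", "x3", "x4"], np.abs(coef[1:])))
active = [name for name, e in effects.items() if e > 0.5]    # illustrative cutoff
print("carry forward to the high-fidelity design:", active)
```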
Sarah Longo, Data Scientist, US Army CCDC Armaments Center (bio)
Sarah Longo is a data scientist in the US Army CCDC Armaments Center's Systems Analysis Division. She has a background in Chemical and Mechanical Engineering and ten years' experience in gun propulsion and armament engineering. Ms. Longo's gun-propulsion expertise has played a part in enabling the successful implementation of Design of Experiments, Empirical Modeling, Data Visualization, and Data Mining for mission-critical artillery armament and weapon system design efforts. |
Breakout |
| 2021
Panel Finding the Human in the Loop: Evaluating Warfighters’ Ability to Employ AI Capabilities (Abstract)
Although artificial intelligence may take over tasks traditionally performed by humans or power systems that act autonomously, humans will still interact with these systems in some way. The need to ensure these interactions are fluid and effective does not disappear; if anything, this need only grows with AI-enabled capabilities. These technologies introduce multiple new hazards for achieving high-quality human-system integration. Testers will need to evaluate both traditional HSI issues and these novel concerns in order to establish the trustworthiness of a system for use in the field, and we will need to develop new T&E methods to do this. In this session, we will hear how three national security organizations are preparing for these HSI challenges, followed by a broader panel discussion on which of these problems is most pressing and which is most promising for DoD research investments. |
Dan Porter, Research Staff Member, Institute for Defense Analyses |
Panel |
Recording | 2021 |
Breakout Metrics for Assessing Underwater Demonstrations for Detection and Classification of UXO (Abstract)
Receiver Operating Characteristic (ROC) curves are often used to assess the performance of detection and classification systems, but they can have unexpected subtleties that make them difficult to interpret. For example, the Strategic Environmental Research and Development Program and the Environmental Security Technology Certification Program (SERDP/ESTCP) are sponsoring the development of novel systems for the detection and classification of Unexploded Ordnance (UXO) in underwater environments. SERDP is also sponsoring underwater testbeds to demonstrate the performance of these novel systems. The Institute for Defense Analyses (IDA) is currently designing and implementing a scoring process for these underwater demonstrations that addresses the subtleties of ROC curve interpretation. This presentation will provide an overview of the main considerations for ROC curve parameter selection when scoring underwater demonstrations for UXO detection and classification. |
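As background, a ROC curve is traced by sweeping the decision threshold over detector scores; the sketch below does this by hand for a handful of illustrative UXO scores so the Pd/FAR bookkeeping is explicit.

```python
import numpy as np

# Minimal ROC construction for a detection system: sweep the decision
# threshold over scores and trace (false-alarm rate, detection rate).
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])  # illustrative
truth  = np.array([1,   1,   0,   1,   0,    1,   0,   0  ])  # 1 = real UXO

order = np.argsort(-scores)
tp = np.cumsum(truth[order])       # detections as the threshold lowers
fp = np.cumsum(1 - truth[order])   # false alarms as the threshold lowers
pd_ = tp / truth.sum()             # probability of detection
far = fp / (1 - truth).sum()       # false-alarm rate
for t, f, d in zip(scores[order], far, pd_):
    print(f"threshold {t:.2f}: FAR={f:.2f}, Pd={d:.2f}")
```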
Jacob Bartel, Research Associate, Institute for Defense Analyses (bio)
Jacob Bartel is a Research Associate at the Institute for Defense Analyses (IDA). His research focuses on computational modeling and verification and validation (V&V), primarily in the field of nuclear engineering. Recently, he has worked with SERDP/ESTCP to develop and implement scoring processes for testing underwater UXO detection and classification systems. Prior to joining IDA, his graduate research focused on the development of novel algorithms to model fuel burnup in nuclear reactors. Jacob earned his master’s degree in Nuclear Engineering and his bachelor’s degree in Physics from Virginia Tech. |
Breakout |
| 2021
Tutorial Statistical Approaches to V&V and Adaptive Sampling in M&S – Part 1 (Abstract)
Leadership has placed a high premium on analytically defensible results for M&S Verification and Validation. This mini-tutorial will provide a quick overview of relevant standard methods to establish equivalency in mean, variance, and distribution shape such as Two One-Sided Tests (TOST), K-S tests, Fisher’s Exact, and Fisher’s Combined Probability. The focus will be on more advanced methods such as the equality between model parameters in statistical emulators versus live tests (Hotelling T2, loglinear variance), equivalence of output curves (functional data analysis), and bootstrap methods. Additionally, we introduce a new method for near real-time adaptive sampling that places the next set of M&S runs at boundary regions of high gradient in the responses to more efficiently characterize complex surfaces such as those seen in autonomous systems. |
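A minimal sketch of the TOST equivalence test mentioned above, assuming a +/-0.5 equivalence margin and a simple pooled degrees-of-freedom approximation (Welch-style adjustments are common in practice); the data are illustrative.

```python
import numpy as np
from scipy import stats

# Two One-Sided Tests (TOST) for mean equivalence between live and M&S
# outputs; the +/-0.5 equivalence margin is illustrative.
live = np.array([10.1, 9.7, 10.4, 10.0, 9.9, 10.2])
sim  = np.array([10.3, 10.0, 10.5, 10.1, 10.4, 10.2])
low, high = -0.5, 0.5

d = live.mean() - sim.mean()
se = np.sqrt(live.var(ddof=1) / len(live) + sim.var(ddof=1) / len(sim))
dof = len(live) + len(sim) - 2     # simple pooled approximation

p_lower = 1 - stats.t.cdf((d - low) / se, dof)   # H0: d <= low
p_upper = stats.t.cdf((d - high) / se, dof)      # H0: d >= high
print("TOST p-value:", max(p_lower, p_upper))    # reject both => equivalent
```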
Jim Wisnowski, Principal Consultant, Adsurgo LLC (bio)
Jim Wisnowski is Principal Consultant and Co-founder at Adsurgo, LLC. He currently provides applied statistics training and consulting services across numerous industries and government departments with particular emphasis on Design of Experiments and Test & Evaluation. Previously, he was a commander and engineer in the Air Force, statistics professor at the US Air Force Academy, and Joint Staff officer. He received his PhD in Industrial Engineering from Arizona State University. |
Tutorial |
Recording | 2021
Keynote Closing Remarks (Abstract)
Mr. William (Allen) Kilgore serves as Director, Research Directorate at NASA Langley Research Center. He previously served as Deputy Director of Aerosciences providing executive leadership and oversight for the Center’s Aerosciences fundamental and applied research and technology capabilities with the responsibility over Aeroscience experimental and computational research. After being appointed to the Senior Executive Service (SES) in 2013, Mr. Kilgore served as the Deputy Director, Facilities and Laboratory Operations in the Research Directorate. Prior to this position, Mr. Kilgore spent over twenty years in the operations of NASA Langley’s major aerospace research facilities including budget formulation and execution, maintenance, strategic investments, workforce planning and development, facility advocacy, and integration of facilities’ schedules. During his time at Langley, he has worked in nearly all of the major wind tunnels with a primary focus on process controls, operations and testing techniques supporting aerosciences research. For several years, Mr. Kilgore led the National Transonic Facility, the world’s largest cryogenic wind tunnel. Mr. Kilgore has been at NASA Langley Research Center since 1989, starting as a graduate student. Mr. Kilgore earned a B.S. and M.S. in Mechanical Engineering with concentration in dynamics and controls from Old Dominion University in 1984 and 1989, respectively. He is the recipient of NASA’s Exceptional Engineering Achievement Medal in 2008 and Exceptional Service Medal in 2012. |
William “Allen” Kilgore, Director, Research Directorate, NASA Langley Research Center |
Keynote |
Recording | 2021 |
Breakout Surrogate Models and Sampling Plans for Multi-fidelity Aerodynamic Performance Databases (Abstract)
Generating aerodynamic coefficients can be computationally expensive, especially for viscous CFD solvers in which multiple complex models are iteratively solved. When filling large design spaces, utilizing only a high-accuracy viscous CFD solver can be infeasible. We apply state-of-the-art methods for the design and analysis of computer experiments to efficiently develop an emulator for high-fidelity simulations. First, we apply a cokriging model to leverage information from fast low-fidelity simulations to improve predictions with more expensive high-fidelity simulations. Combining space-filling designs with a Gaussian process model-based sequential sampling criterion allows us to efficiently generate sample points and limit the number of costly simulations needed to achieve the desired model accuracy. We demonstrate the effectiveness of these methods with an aerodynamic simulation study using a conic shape geometry. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Release Number: LLNL-ABS-818163 |
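The sketch below shows the single-fidelity version of the sequential-sampling loop: fit a Gaussian process surrogate, then add the candidate point where the predictive standard deviation is largest. The solver stand-in, kernel choice, and budget are assumptions; cokriging additionally models the cross-covariance between fidelities.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_sim(x):                    # stand-in for a viscous CFD solve
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.0], [0.4], [1.0]])      # initial space-filling design
y = expensive_sim(X).ravel()
cand = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(5):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    _, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(sd)]          # sample where the surrogate is least sure
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_sim(x_new)[0])
print("final design points:", X.ravel().round(3))
```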
Kevin Quinlan, Applied Statistician, Lawrence Livermore National Laboratory |
Breakout |
| 2021
Webinar D-Optimally Based Sequential Test Method for Ballistic Limit Testing (Abstract)
Ballistic limit testing of armor is testing in which a kinetic energy threat is shot at armor at varying velocities. The striking velocity and whether the threat completely or partially penetrated the armor are recorded. The probability of penetration is modeled as a function of velocity using a generalized linear model. The parameters of the model serve as inputs to MUVES, a DoD software tool used to analyze weapon system vulnerability and munition lethality. Generally, the probability of penetration is assumed to be monotonically increasing with velocity. However, in cases in which there is a change in penetration mechanism, such as the shatter gap phenomenon, the probability of penetration can no longer be assumed to be monotonically increasing, and a more complex model is necessary. One such model was developed by Chang and Bodt to model the probability of penetration as a function of velocity over a velocity range in which there are two penetration mechanisms. This paper proposes a D-optimally based sequential shot selection method to efficiently select threat velocities during testing. Two cases are presented: the case in which the penetration mechanism for each shot is known (via high-speed or post-shot x-ray) and the case in which the penetration mechanism is not known. This method may be used to support an improved evaluation of armor performance for cases in which there is a change in penetration mechanism. |
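To illustrate the D-optimal idea (with a simple monotone logistic model rather than the two-mechanism Chang-Bodt model), the next shot velocity below is chosen to maximize the determinant of the accumulated Fisher information; all parameter values and velocities are illustrative.

```python
import numpy as np

# Sketch of D-optimal sequential shot selection for a logistic ballistic
# limit model P(pen | v) = 1 / (1 + exp(-(a + b*v))); values illustrative.
a, b = -30.0, 0.02                               # current parameter estimates
shots_v = np.array([1400, 1450, 1500, 1550.0])   # velocities fired so far

def info(v):                 # Fisher information of one shot at velocity v
    p = 1 / (1 + np.exp(-(a + b * v)))
    x = np.array([1.0, v])
    return p * (1 - p) * np.outer(x, x)

M = sum(info(v) for v in shots_v)
cand = np.linspace(1300, 1700, 81)
next_v = cand[np.argmax([np.linalg.det(M + info(v)) for v in cand])]
print("next shot velocity:", next_v)
```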
Leonard Lombardo, Mathematician, U.S. Army Aberdeen Test Center (bio)
Leonard currently serves as an analyst for the RAM/ILS Engineering and Analysis Division at the U.S. Army Aberdeen Test Center (ATC). At ATC, he is the lead analyst for both ballistic testing of helmets and fragmentation analysis. Previously, while on a developmental assignment at the U.S. Army Evaluation Center, he worked towards increasing the use of generalized linear models in ballistic limit testing. Since then, he has contributed to the implementation of generalized linear models within the test center through test design and analysis. |
Webinar |
Recording | 2020
Webinar Introduction to Uncertainty Quantification for Practitioners and Engineers (Abstract)
Uncertainty is an inescapable reality that can be found in nearly all types of engineering analyses. It arises from sources like measurement inaccuracies, material properties, boundary and initial conditions, and modeling approximations. Uncertainty Quantification (UQ) is a systematic process that puts error bands on results by incorporating real-world variability and probabilistic behavior into engineering and systems analysis. UQ answers the question: what is likely to happen when the system is subjected to uncertain and variable inputs? Answering this question facilitates significant risk reduction, robust design, and greater confidence in engineering decisions. Modern UQ techniques use powerful statistical models to map the input-output relationships of the system, significantly reducing the number of simulations or tests required to get accurate answers. This tutorial will present common UQ processes that operate within a probabilistic framework. These include statistical Design of Experiments, statistical emulation methods used to model the simulation input-to-response relationship, and statistical calibration for model validation and tuning to better represent test results. Examples from different industries will be presented to illustrate how the covered processes can be applied to engineering scenarios. This is purely an educational tutorial and will focus on the concepts, methods, and applications of probabilistic analysis and uncertainty quantification; SmartUQ software will only be used for illustration of the methods and examples presented. This is an introductory tutorial designed for practitioners and engineers with little to no formal statistical training; however, statisticians and data scientists may also benefit from seeing the material presented from a practical-use rather than a purely technical perspective. There are no prerequisites other than an interest in UQ. Attendees will gain an introductory understanding of Probabilistic Methods and Uncertainty Quantification, basic UQ processes used to quantify uncertainties, and the value UQ can provide in maximizing insight, improving design, and reducing time and resources. Instructor Bio: Gavin Jones, Sr. SmartUQ Application Engineer, is responsible for performing simulation and statistical work for clients in aerospace, defense, automotive, gas turbine, and other industries. He is also a key contributor in SmartUQ's Digital Twin/Digital Thread initiative. Mr. Jones received a B.S. in Engineering Mechanics and Astronautics and a B.S. in Mathematics from the University of Wisconsin-Madison. |
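A minimal Monte Carlo propagation example in the spirit of the tutorial: sample the uncertain inputs, push them through a model, and read error bands off the output distribution. The cantilever-beam model and all distributions are illustrative assumptions.

```python
import numpy as np

# Minimal Monte Carlo uncertainty propagation: push input distributions
# through a model and read error bands off the output sample.
rng = np.random.default_rng(7)
n = 100_000

E = rng.normal(200e9, 10e9, n)    # Young's modulus [Pa], measurement spread
L = rng.normal(2.0, 0.01, n)      # beam length [m]
F = rng.normal(1000.0, 50.0, n)   # applied load [N]
I = 8e-9                          # second moment of area [m^4], held fixed

tip_deflection = F * L**3 / (3 * E * I)   # cantilever-beam model

lo, med, hi = np.percentile(tip_deflection, [2.5, 50, 97.5])
print(f"median {med*1e3:.2f} mm, 95% band [{lo*1e3:.2f}, {hi*1e3:.2f}] mm")
```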
Gavin Jones, Sr. Application Engineer, SmartUQ |
Webinar |
| 2020
Webinar A Validation Case Study: The Environment Centric Weapons Analysis Facility (Abstract)
Reliable modeling and simulation (M&S) allows the undersea warfare community to understand torpedo performance in scenarios that could never be created in live testing, and to do so for a fraction of the cost of an in-water test. The Navy hopes to use the Environment Centric Weapons Analysis Facility (ECWAF), a hardware-in-the-loop simulation, to predict torpedo effectiveness and supplement live operational testing. In order to trust the model's results, the T&E community has applied rigorous statistical design of experiments techniques to both live and simulation testing. As part of ECWAF's two-phased validation approach, we ran the M&S experiment with the legacy torpedo and developed an empirical emulator of the ECWAF using logistic regression. Comparing the emulator's predictions to actual outcomes from live test events supported the test design for the upgraded torpedo. This talk overviews the ECWAF's validation strategy, decisions that have put the ECWAF on a promising path, and the metrics used to quantify uncertainty. |
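The sketch below mirrors the emulator-validation idea on synthetic data: fit a logistic-regression emulator to simulated hit/miss outcomes, then score its probability predictions against live outcomes (a Brier score is used here; the actual ECWAF metrics may differ).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# All data below are synthetic stand-ins for simulation and live events.
rng = np.random.default_rng(0)
X_sim = rng.uniform(-1, 1, (400, 3))          # coded test conditions
p_true = 1 / (1 + np.exp(-(1.5 * X_sim[:, 0] - X_sim[:, 2])))
y_sim = rng.binomial(1, p_true)               # hit/miss from the simulation

emu = LogisticRegression().fit(X_sim, y_sim)  # empirical emulator

X_live = rng.uniform(-1, 1, (20, 3))          # conditions of live events
y_live = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * X_live[:, 0] - X_live[:, 2]))))
p_hat = emu.predict_proba(X_live)[:, 1]
print("Brier score vs live outcomes:", np.mean((p_hat - y_live) ** 2))
```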
Elliot Bartis, Research Staff Member, IDA (bio)
Elliot Bartis is a research staff member at the Institute for Defense Analyses where he works on test and evaluation of undersea warfare systems such as torpedoes and torpedo countermeasures. Prior to coming to IDA, Elliot received his B.A. in physics from Carleton College and his Ph.D. in materials science and engineering from the University of Maryland in College Park. For his doctoral dissertation, he studied how cold plasma interacts with biomolecules and polymers. Elliot was introduced to model validation through his work on a torpedo simulation called the Environment Centric Weapons Analysis Facility. In 2019, Elliot and others involved in the MK 48 torpedo program received a Special Achievement Award from the International Test and Evaluation Association in part for their work on this simulation. Elliot lives in Falls Church, VA with his wife Jacqueline and their cat Lily. |
Webinar |
Recording | 2020
Webinar The Role of Uncertainty Quantification in Machine Learning (Abstract)
Uncertainty is an inherent, yet often under-appreciated, component of machine learning and statistical modeling. Data-driven modeling often begins with noisy data from error-prone sensors collected under conditions for which no ground-truth can be ascertained. Analysis then continues with modeling techniques that rely on a myriad of design decisions and tunable parameters. The resulting models often provide demonstrably good performance, yet they illustrate just one of many plausible representations of the data – each of which may make somewhat different predictions on new data. This talk provides an overview of recent, application-driven research at Sandia Labs that considers methods for (1) estimating the uncertainty in the predictions made by machine learning and statistical models, and (2) using the uncertainty information to improve both the model and downstream decision making. We begin by clarifying the data-driven uncertainty estimation task and identifying sources of uncertainty in machine learning. We then present results from applications in both supervised and unsupervised settings. Finally, we conclude with a summary of lessons learned and critical directions for future work. |
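One widely used way to attach uncertainty estimates to ML predictions is the spread across an ensemble's members, sketched below with a random forest on synthetic data; this is a generic illustration, not Sandia's specific method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Prediction uncertainty from the spread across a bootstrap ensemble.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(6 * X[:, 0]) + rng.normal(0, 0.2, 200)   # noisy sensor-like data

rf = RandomForestRegressor(n_estimators=200, random_state=3).fit(X, y)
X_new = np.array([[0.25], [0.9]])
per_tree = np.array([t.predict(X_new) for t in rf.estimators_])
print("mean:", per_tree.mean(axis=0), "spread (sd):", per_tree.std(axis=0))
```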
David Stracuzzi, Research Scientist, Sandia National Laboratories |
Webinar |
| 2020
Webinar A Practical Introduction To Gaussian Process Regression (Abstract)
Abstract: Gaussian process regression is ubiquitous in spatial statistics, machine learning, and the surrogate modeling of computer simulation experiments. Fortunately, the prowess of Gaussian processes as accurate predictors, along with an appropriate quantification of uncertainty, does not derive from difficult-to-understand methodology or cumbersome implementation. We will cover the basics and provide a practical tool-set ready to be put to work in diverse applications. The presentation will involve accessible slides authored in Rmarkdown, with reproducible examples spanning bespoke implementation to add-on packages. |
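As a taste of the "bespoke implementation" end of the tutorial, the sketch below computes the standard GP posterior mean and variance under a squared-exponential kernel (the tutorial's own examples are in R; this Python version keeps the same linear algebra). Inputs and hyperparameters are illustrative.

```python
import numpy as np

# Standard GP regression equations with a squared-exponential kernel and a
# small noise/jitter term; all inputs and hyperparameters are illustrative.
def k(A, B, ls=0.3, var=1.0):
    return var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

X = np.array([0.1, 0.4, 0.6, 0.9])    # training inputs
y = np.sin(2 * np.pi * X)             # toy responses
Xs = np.linspace(0, 1, 5)             # test inputs

K = k(X, X) + 1e-6 * np.eye(len(X))   # jitter keeps K well-conditioned
Ks = k(X, Xs)

mean = Ks.T @ np.linalg.solve(K, y)                # posterior mean
cov = k(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)    # posterior covariance
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))     # pointwise uncertainty
print(mean.round(3), sd.round(3))
```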
Robert “Bobby” Gramacy, Virginia Tech (bio)
Robert Gramacy is a Professor of Statistics in the College of Science at Virginia Polytechnic and State University (Virginia Tech). Previously he was an Associate Professor of Econometrics and Statistics at the Booth School of Business, and a fellow of the Computation Institute at The University of Chicago. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. Professor Gramacy is a computational statistician. He specializes in areas of real-data analysis where the ideal modeling apparatus is impractical, or where the current solutions are inefficient and thus skimp on fidelity. Such endeavors often require new models, new methods, and new algorithms. His goal is to be impactful in all three areas while remaining grounded in the needs of a motivating application. His aim is to release general purpose software for consumption by the scientific community at large, not only other statisticians. Professor Gramacy is the primary author on six R packages available on CRAN, two of which (tgp, and monomvn) have won awards from statistical and practitioner communities. |
Webinar |
| 2020