Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Breakout A DOE Case Study: Multidisciplinary Approach to Design an Army Gun Propulsion Charge (Abstract)
This session will focus on the novel application of a design of experiments approach to optimize a propulsion charge configuration for a U.S. Army artillery round. The interdisciplinary design effort included contributions from subject matter experts in statistics, propulsion charge design, computational physics, and experimentation. The process, which we will present in this session, consisted of an initial, low-fidelity modeling and simulation study to reduce the parametric space by eliminating inactive variables and reducing the ranges of active variables for the final design. The final design used a multi-tiered approach that consolidated data from multiple sources, including low-fidelity modeling and simulation, high-fidelity modeling and simulation, and live test data from firings in a ballistic simulator. Specific challenges of the effort that will be addressed include integrating data from multiple sources, a highly constrained design space, functional response data, multiple competing design objectives, and real-world test constraints. The result of the effort is a final, optimized propulsion charge design that will be fabricated for live gun firing. (An illustrative screening-design sketch follows this entry.) |
Sarah Longo Data Scientist US Army CCDC Armaments Center (bio)
Sarah Longo is a data scientist in the US Army CCDC Armaments Center’s Systems Analysis Division. She has a background in Chemical and Mechanical Engineering and ten years of experience in gun propulsion and armament engineering. Ms. Longo’s gun-propulsion expertise has played a part in enabling the successful implementation of Design of Experiments, Empirical Modeling, Data Visualization, and Data Mining for mission-critical artillery armament and weapon system design efforts. |
Breakout |
Materials | 2021 |
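To make the screening step above concrete, the sketch below works through a tiny two-level factorial with main-effect estimation, the kind of low-fidelity study used to drop inactive variables before the multi-tiered final design. It is a minimal illustration only: the factor names, the stand-in simulation, and the noise level are invented and are not the actual charge-design variables or models from this effort.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical screening factors (coded -1/+1); not the actual charge variables.
factors = ["propellant_mass", "grain_length", "ignition_delay", "case_thickness"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 2^4 runs

# Stand-in for the low-fidelity simulation: only two factors are truly active.
def low_fidelity_sim(x):
    return 850 + 40 * x[0] + 15 * x[1] + rng.normal(scale=3.0)  # e.g., muzzle velocity

y = np.array([low_fidelity_sim(x) for x in design])

# Main-effect estimates: average response at +1 minus average at -1 for each factor.
effects = {f: y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
           for j, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:>15s}: main effect = {e:+.1f}")
# Factors with effects near the noise level would be dropped (or have their ranges
# narrowed) before the higher-fidelity, multi-tiered design stage.
```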
Breakout A DOE Case Study: Multidisciplinary Approach to Design an Army Gun Propulsion Charge (Abstract)
This session will focus on the novel application of a design of experiments approach to optimize a propulsion charge configuration for a U.S. Army artillery round. The interdisciplinary design effort included contributions from subject matter experts in statistics, propulsion charge design, computational physics, and experimentation. The process, which we will present in this session, consisted of an initial, low-fidelity modeling and simulation study to reduce the parametric space by eliminating inactive variables and reducing the ranges of active variables for the final design. The final design used a multi-tiered approach that consolidated data from multiple sources, including low-fidelity modeling and simulation, high-fidelity modeling and simulation, and live test data from firings in a ballistic simulator. Specific challenges of the effort that will be addressed include integrating data from multiple sources, a highly constrained design space, functional response data, multiple competing design objectives, and real-world test constraints. The result of the effort is a final, optimized propulsion charge design that will be fabricated for live gun firing. |
Melissa Jablonski Statistician US Army CCDC Armaments Center (bio)
Melissa Jablonski is a statistician at the US Army Combat Capabilities Development Command Armaments Center. She graduated from Stevens Institute of Technology with Bachelor’s and Master’s degrees in Mechanical Engineering and started her career in the area of finite element analysis. She now works as a statistical consultant focusing on Design and Analysis of Computer Experiments (DACE) and Uncertainty Quantification (UQ). She also acts as a technical expert and consultant in Design of Experiments (DOE), Probabilistic System Optimization, Data Mining/Machine Learning, and other statistical analysis areas for munition and weapon systems. She is currently pursuing a Master’s degree in Applied Statistics from Pennsylvania State University. |
Breakout |
Materials | 2021 |
Breakout A Framework for Efficient Operational Testing through Bayesian Adaptive Design (Abstract)
When developing a system, it is important to consider system performance from a user perspective. This can be done through operational testing—assessing the ability of representative users to satisfactorily accomplish tasks or missions with the system in operationally representative environments. This process can be expensive and time-consuming, but it is critical for evaluating a system. We show how an existing design of experiments (DOE) process for operational testing can be leveraged to construct a Bayesian adaptive design. This method, nested within the larger design created by the DOE process, allows interim analyses using predictive probabilities to stop testing early for success or futility. Furthermore, operational environments with varying probabilities of encounter are used directly in product evaluation. Representative simulations demonstrate how these interim analyses can be used in an operational test setting, and reductions in necessary test events are shown. The method allows for using either weakly informative priors when data from previous testing are not available, or priors built from developmental testing data when they are. The proposed method for creating priors from developmental testing data allows more flexibility than the current process in which data can be incorporated into the analysis, and it demonstrates that more precise parameter estimates are possible. This method will allow future testing to be conducted in less time and at less expense, on average, without compromising the ability of the existing process to verify that the system meets the user’s needs. (A minimal predictive-probability sketch follows this entry.) |
Victoria Sieck Student / Operations Research Analyst University of New Mexico / Air Force Institute of Technology (bio)
Victoria R.C. Sieck is a PhD Candidate in Statistics at the University of New Mexico. She is also an Operations Research Analyst in the US Air Force (USAF), with experience in the USAF testing community as a weapons and tactics analyst and an operational test analyst. Her research interests include design of experiments and improving operational testing through the use of Bayesian methods. |
Breakout |
Materials | 2021 |
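As a hedged illustration of the interim analyses described above (not the authors' actual models, requirements, or thresholds), the sketch below uses a Beta-Binomial model for a binary mission-success measure and computes the posterior predictive probability that the test, if run to completion, will meet its success criterion; testing would stop early when that probability crosses a success or futility cutoff. The prior, sample sizes, and cutoffs are invented for illustration.

```python
import numpy as np
from scipy import stats

def predictive_prob_success(successes, failures, n_remaining,
                            required_successes, a_prior=1.0, b_prior=1.0):
    """Posterior predictive probability of meeting the end-of-test success
    criterion, given interim Beta-Binomial data."""
    a_post = a_prior + successes
    b_post = b_prior + failures
    needed = required_successes - successes          # successes still needed
    if needed <= 0:
        return 1.0
    if needed > n_remaining:
        return 0.0
    k = np.arange(needed, n_remaining + 1)
    return stats.betabinom.pmf(k, n_remaining, a_post, b_post).sum()

# Hypothetical interim look: 20 of 40 planned trials done, 17 successes,
# and the requirement is at least 32 successes out of 40.
p = predictive_prob_success(successes=17, failures=3, n_remaining=20,
                            required_successes=32)
print(f"Predictive probability of final success: {p:.3f}")
# Stop early for success if p > 0.95, or for futility if p < 0.05 (illustrative cutoffs).
```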
Breakout A Great Test Requires a Great Plan (Abstract)
The Scientific Test and Analysis Techniques (STAT) process is designed to provide structure for a test team to progress from a requirement to decision-quality information. The four phases of the STAT process are Plan, Design, Execute, and Analyze. Within the Test and Evaluation (T&E) community, we tend to focus on the quantifiable metrics and the hard science of testing, which are the Design and Analyze phases. At the STAT Center of Excellence (COE) we have emphasized an increased focus on the planning phase, and in this presentation we focus on the elements necessary for a comprehensive planning session. In order to efficiently and effectively test a system, it is vital that the test team understand the requirements, the System Under Test (SUT), including any subsystems that will be tested, and the test facility. To accomplish this, the right team members with the necessary knowledge must be in the room, prepared to present their information and to have an educated discussion that arrives at a comprehensive agreement about the desired end state of the test. Our recommendations for the initial planning meeting are based on a thorough study of the STAT process and on lessons learned from actual planning meetings. |
Aaron Ramert STAT Analyst Scientific Test and Analysis Techniques Center of Excellence (STAT COE) (bio)
Mr. Ramert is a graduate of the US Naval Academy and the Naval Postgraduate School and a 20-year veteran of the Marine Corps. During his career in the Marines he served tours in operational air and ground units as well as academic assignments. He joined the Scientific Test and Analysis Techniques (STAT) Center of Excellence (COE) in 2016, where he works with major acquisition programs across the Department of Defense to apply rigor and efficiency to their test and evaluation methodology through the application of the STAT process. |
Breakout |
Materials Recording | 2021 |
Contributed A Metrics-based Software Tool to Guide Test Activity Allocation (Abstract)
Existing software reliability growth models are limited to parametric models that characterize the number of defects detected as a function of testing time or the number of vulnerabilities discovered with security testing. However, the amount and types of testing effort applied are rarely considered. This lack of detail regarding specific testing activities limits the application of software reliability growth models to general inferences such as the additional amount of testing required to achieve a desired failure intensity, mean time to failure, or reliability (period of failure-free operation). This presentation provides an overview of an open-source software reliability tool implementing covariate software reliability models [1] to aid DoD organizations and their contractors who desire to quantitatively measure and predict the reliability and security improvement of software. Unlike traditional software reliability growth models, the models implemented in the tool can accept multiple discrete time series corresponding to the amount of each type of test activity performed, as well as dynamic metrics computed in each interval. When applied in the context of software failure or vulnerability discovery data, the parameters of each activity can be interpreted as the effectiveness of that activity at exposing reliability defects or security vulnerabilities. Thus, these enhanced models provide the structure to assess existing and emerging techniques in an objective framework that promotes thorough testing and process improvement, motivating the collection of relevant metrics and precise measurements of the time spent performing various testing activities. (A simplified covariate-model sketch follows this entry.) References: [1] Vidhyashree Nagaraju, Chathuri Jayasinghe, and Lance Fiondella, “Optimal test activity allocation for covariate software reliability and security models,” Journal of Systems and Software, Volume 168, 2020, 110643. |
Jacob Aubertine Graduate Research Assistant University of Massachusetts Dartmouth (bio)
Jacob Aubertine is pursuing an MS degree in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth, where he also received his BS (2020) in Computer Engineering. His research interests include software reliability, performance engineering, and statistical modeling. |
Contributed |
Materials | 2021 |
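The covariate idea above can be illustrated with a deliberately simplified stand-in: a Poisson regression that links the number of defects found in each interval to the amount of each testing activity performed, fit by maximum likelihood. This is not the specific covariate software reliability model of reference [1] or the tool's implementation, and the effort and defect counts below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: defects found per test interval and the amount of each
# testing activity performed in that interval (hours). Not real program data.
defects = np.array([12, 9, 11, 7, 5, 6, 3, 2])
effort = np.array([[10, 2], [8, 3], [9, 4], [6, 4],   # columns: functional test,
                   [5, 5], [4, 6], [3, 6], [2, 7]])   #          security/fuzz test
X = np.column_stack([np.ones(len(defects)), effort])  # intercept + covariates

def neg_log_lik(beta):
    lam = np.exp(X @ beta)                    # expected defects per interval
    return -(defects * np.log(lam) - lam).sum()

fit = minimize(neg_log_lik, x0=np.zeros(X.shape[1]), method="BFGS")
beta = fit.x
print("Coefficients (intercept, functional, security):", np.round(beta, 3))
# Larger positive coefficients suggest activities more effective at exposing
# defects; predictions for a planned allocation of future effort follow from
# lam_new = exp(x_new @ beta).
```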
Breakout Advancements in Characterizing Warhead Fragmentation Events (Abstract)
Fragmentation analysis is a critical piece of the live fire test and evaluation (LFT&E) of the lethality and vulnerability aspects of warheads, but the traditional methods for data collection are expensive and laborious. New optical tracking technology promises to increase the fidelity of fragmentation data and to decrease the time and costs associated with data collection. However, the new data will be complex, three-dimensional ‘fragmentation clouds’, possibly with a time component as well. This raises questions about how testers can effectively summarize spatial data to draw conclusions for sponsors. In this briefing, we will discuss Bayesian spatial models that are fast and effective for characterizing the patterns in fragmentation data, along with several exploratory data analysis techniques that help us make sense of the data. Our analytic goals are to (1) produce simple statistics and visuals that help the live fire analyst compare and contrast warhead fragmentations; (2) characterize important performance attributes or confirm design/spec compliance; and (3) provide data methods that ensure higher fidelity data collection translates to higher fidelity modeling and simulation down the line. This talk presents a first-step feasibility study at IDA; we hope for much more to come as we continue to work on this important topic. (A toy zone-count sketch follows this entry.) |
John Haman Research Staff Member Institute for Defense Analyses (bio)
Dr. John Haman is a statistician at the Institute for Defense Analyses, where he develops methods and tools for analyzing test data. He has worked with a variety of Army, Navy, and Air Force systems, including counter-UAS and electronic warfare systems. Currently, John is providing technical support on operational testing to the Joint Artificial Intelligence Center. |
Breakout |
Materials Recording | 2021 |
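As a toy illustration of the flavor of fast Bayesian summaries for fragmentation data (not the spatial models used in the briefing), the sketch below assumes the fragment cloud has already been reduced to counts per polar-angle zone and applies a conjugate Gamma-Poisson model to get posterior intervals for each zone's fragment intensity. The zone counts and prior are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical arena-test summary: fragment counts recovered in 10-degree
# polar-angle zones (0-90 degrees from the warhead axis). Invented numbers.
zone_edges = np.arange(0, 100, 10)
counts = np.array([3, 8, 21, 44, 60, 52, 30, 14, 6])

# Conjugate Gamma(a0, b0) prior on each zone's fragment intensity.
a0, b0 = 1.0, 0.1
a_post = a0 + counts
b_post = b0 + 1.0          # one observed shot per zone in this toy setup

lower, upper = stats.gamma.ppf([0.05, 0.95], a_post[:, None], scale=1 / b_post).T
mean = a_post / b_post
for i, (lo, m, hi) in enumerate(zip(lower, mean, upper)):
    print(f"{zone_edges[i]:2d}-{zone_edges[i+1]:2d} deg: "
          f"posterior mean {m:5.1f} fragments (90% interval {lo:5.1f}-{hi:5.1f})")
# Simple per-zone summaries like these support quick comparisons of warhead
# fragmentation patterns before fitting richer spatial models.
```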
Breakout An Adaptive Approach to Shock Train Detection (Abstract)
Development of new technology always incorporates model testing. This is certainly true for hypersonics, where flight tests are expensive and testing of component- and system-level models has significantly advanced the field. Unfortunately, model tests are often limited in scope, being only approximations of reality and typically only partially covering the range of potential realistic conditions. In this talk, we focus on the problem of real-time detection of the shock train leading edge in high-speed air-breathing engines, such as dual-mode scramjets. Detecting and controlling the shock train leading edge is important to the performance and stability of such engines, and a problem that has seen significant model testing on the ground and some flight testing. Often, methods developed for shock train detection are specific to the model used. Thus, they may not generalize well when tested in another facility or in flight, as they typically require a significant amount of prior characterization of the model and flow regime. A successful method for shock train detection needs to be robust to changes in features like isolator geometry, inlet and combustor states, flow regimes, and available sensors. Such data can be difficult or impossible to obtain if the isolator operating regime is large. To this end, we propose an approach for real-time detection of the isolator shock train. Our approach uses real-time pressure measurements to adaptively estimate the shock train position in a data-driven manner. We show that the method works well across different isolator models, placement of pressure transducers, and flow regimes. We believe that a data-driven approach is the way forward for bridging the gap between testing and reality, saving development time and money. (A simple threshold-detection sketch follows this entry.) |
Greg Hunt Assistant Professor William & Mary (bio)
Greg is an interdisciplinary researcher who builds scientific tools. He is trained as a statistician, mathematician, and computer scientist. Currently he works on a diverse set of problems in biology, physics, and engineering. |
Breakout |
Materials | 2021 |
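The sketch below is a simple, hypothetical illustration of detecting the shock-train leading edge from real-time wall-pressure measurements: it flags the first isolator tap whose pressure rises a set fraction above a baseline estimated from upstream taps and interpolates the location. It is not the adaptive, data-driven estimator proposed in the talk; the tap locations, pressures, and threshold are invented.

```python
import numpy as np

def leading_edge_location(tap_x, pressures, n_upstream=3, rise_frac=0.15):
    """Estimate the shock-train leading-edge location from wall-pressure taps.

    tap_x      : axial locations of pressure taps (m), increasing downstream
    pressures  : current wall-pressure readings at those taps (kPa)
    n_upstream : taps assumed upstream of the shock train (baseline region)
    rise_frac  : fractional pressure rise over baseline that flags the shock
    """
    baseline = np.median(pressures[:n_upstream])
    threshold = baseline * (1.0 + rise_frac)
    above = np.where(pressures > threshold)[0]
    if above.size == 0:
        return None                      # shock train not in the isolator
    i = above[0]
    if i == 0:
        return tap_x[0]
    # Linear interpolation between the last tap below and first tap above threshold.
    x0, x1 = tap_x[i - 1], tap_x[i]
    p0, p1 = pressures[i - 1], pressures[i]
    return x0 + (threshold - p0) / (p1 - p0) * (x1 - x0)

# Hypothetical snapshot of isolator wall pressures (kPa).
tap_x = np.linspace(0.0, 0.5, 11)
pressures = np.array([40, 41, 40, 42, 41, 55, 78, 95, 110, 120, 125.0])
print(f"Estimated leading edge near x = {leading_edge_location(tap_x, pressures):.3f} m")
```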
Panel Army’s Open Experimentation Test Range for Internet of Battlefield Things: MSA-DPG (Abstract)
One key feature of future Multi-Domain Operations (MDO) is expected to be the ubiquity of devices providing information connected in an Internet of Battlefield Things (IoBT). To this end, the U.S. Army aims to advance the underlying science of pervasive and heterogeneous IoBT sensing, networking, and actuation. In this effort, an IoBT experimentation testbed is an integral part of capability development, evaluating and validating the scientific theories, algorithms, and technologies integrated with C2 systems under military scenarios. Originally conceived for this purpose, the Multi-Purpose Sensing Area Distributed Proving Ground (MSA-DPG) is an open-range test bed developed by the Army Research Laboratory (ARL). We discuss the vision and development of MSA-DPG and its fundamental role in research serving the military sciences community. |
Jade Freeman Research Scientist U.S. Army DEVCOM Army Research Laboratory (bio)
Dr. Jade Freeman currently serves as the Associate Branch Chief and a Team Lead in the Battlefield Information Systems Branch. In this capacity, Dr. Freeman oversees information systems and engineering research projects and analyses. Prior to joining ARL, Dr. Freeman served as the Senior Statistician at the Office of Cybersecurity and Communications, Department of Homeland Security. Throughout her career, her work in operations and research has included cyber threat analyses, large survey design and analyses, experimental design, survival analysis, and missing data imputation methods. Dr. Freeman is also a PMP-certified project manager, experienced in leading and managing IT development projects. Dr. Freeman obtained a Ph.D. in Statistics from the George Washington University. |
Panel |
Materials | 2021 |
Keynote Assessing Human-Autonomy Interaction in Driving-Assist Settings (Abstract)
In order to determine how the perception, Autopilot, and driver monitoring systems of Tesla Model 3s interact with one another, and also to determine the scale of between- and within-car variability, a series of four on-road tests was conducted. Three sets of tests were conducted on a closed track and one was conducted on a public highway. Results show wide variability across and within three Tesla Model 3s, with excellent performance in some cases but also likely catastrophic performance in others. This presentation will not only highlight how such interactions can be tested, but also how results can inform requirements and designs of future autonomous systems. |
Mary “Missy” Cummings Professor Duke University (bio)
Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval pilot from 1988 to 1999, she was one of the U.S. Navy’s first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department and the Director of the Humans and Autonomy Laboratory. She is an AIAA Fellow and a member of the Veoneer, Inc. Board of Directors. |
Keynote |
Materials Recording | 2021 |
Breakout Assessing Next-Gen Spacesuit Reliability: A Probabilistic Analysis Case Study at NASA (Abstract)
Under the Artemis program, the Exploration Extravehicular Mobility Unit (xEMU) spacesuit will ensure the safety of NASA astronauts during the targeted 2024 return to the moon. Efforts are currently underway to finalize and certify the xEMU design. There is a delicate balance between producing a spacesuit that is robust enough to safely withstand potential fall events and satisfying stringent mass and mobility requirements. The traditional approach of considering worst-case loading and applying conservative factors of safety (FoS) to account for uncertainties in the analysis was unlikely to meet the narrow design margins. Thus, the xEMU design requirement was modified to include a probability of no impact failure (PnIF) threshold that must be verified through probabilistic analysis. As part of a broader one-year effort to help integrate modern uncertainty quantification (UQ) methodology into engineering practice at NASA, the certification of the xEMU spacesuit was selected as the primary case study. The project, led by NASA Langley Research Center (LaRC) under the Engineering Research & Analysis (R&A) Program in 2020, aimed to develop an end-to-end UQ workflow for engineering problems and to help facilitate reliability-based design at NASA. The main components of the UQ workflow included 1) sensitivity analysis to identify the most influential model parameters, 2) model calibration to quantify model parameter uncertainties using experimental data, and 3) uncertainty propagation for producing probabilistic model predictions and estimating reliability. Particular emphasis was placed on overcoming the common practical barrier of prohibitive computational expense associated with probabilistic analysis by leveraging state-of-the-art UQ methods and high performance computing (HPC). In lieu of mature computational models and test data for the xEMU at the time of the R&A Program, the UQ workflow for estimating PnIF was demonstrated using existing models and data from the previous generation of spacesuits (the Z-2). However, the lessons learned and capabilities developed in the course of the R&A are directly transferable to the ongoing xEMU certification effort and are being integrated in 2021. This talk provides an overview of the goals and findings of NASA’s UQ R&A project, focusing on the spacesuit certification case study. The steps of the UQ workflow applied to the Z-2 spacesuit using the available finite element method (FEM) models and impact test data will be detailed. The ability to quantify uncertainty in the most influential subset of FEM model input parameters and then propagate that uncertainty to estimates of PnIF is demonstrated. Since the FEM model of the full Z-2 assembly took nearly a day to execute just once, the advanced UQ methods and HPC utilization required to make the probabilistic analysis tractable are discussed. Finally, the lessons learned from conducting the case study are provided along with planned ongoing and future work for the xEMU certification in 2021. (A minimal uncertainty-propagation sketch follows this entry.) |
James Warner Computational Scientist NASA Langley Research Center (bio)
Dr. James Warner joined NASA Langley Research Center (LaRC) in 2014 as a Research Computer Engineer after receiving his PhD in Computational Solid Mechanics from Cornell University. Previously, he received his B.S. in Mechanical Engineering from SUNY Binghamton University and held temporary research positions at the National Institute of Standards and Technology and Duke University. Dr. Warner is a member of the Durability, Damage Tolerance, and Reliability Branch (DDTRB) at LaRC, where he focuses on developing computationally efficient approaches for uncertainty quantification for a range of applications, including structural health management and space radiation shielding design. His other research interests include high performance computing, inverse methods, and topology optimization. |
Breakout |
Materials | 2021 |
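A minimal sketch of the final uncertainty-propagation step follows: sample the calibrated input uncertainties, push them through a fast surrogate of the impact model, and estimate the probability of no impact failure (PnIF) as the fraction of samples that stay below a capability threshold. The surrogate function, input distributions, and threshold are invented placeholders for the FEM-based models and calibrated uncertainties described in the talk.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 100_000

# Hypothetical calibrated input uncertainties (stand-ins for the influential
# FEM parameters identified by the sensitivity analysis).
impact_velocity = rng.normal(3.0, 0.4, n_samples)               # m/s
bladder_thickness = rng.normal(1.2, 0.08, n_samples)            # mm
material_modulus = rng.lognormal(np.log(55.0), 0.1, n_samples)  # MPa

# Cheap surrogate for peak strain from the impact FEM (illustrative only).
def surrogate_peak_strain(v, t, e):
    return 0.015 * v**1.5 / (t * np.sqrt(e / 55.0))

strain = surrogate_peak_strain(impact_velocity, bladder_thickness, material_modulus)
strain_limit = 0.095                                  # hypothetical failure threshold

pnif = np.mean(strain < strain_limit)                 # probability of no impact failure
se = np.sqrt(pnif * (1 - pnif) / n_samples)           # Monte Carlo standard error
print(f"Estimated PnIF = {pnif:.4f} (MC std. err. {se:.4f})")
```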
Breakout Automated Test Case Generation for Human-Machine Interaction (Abstract)
The growing complexity of interactive systems requires increasing amounts of effort to ensure reliability and usability. Testing is an effective approach for finding and correcting problems with implemented systems. However, testing is often regarded as the most intellectually demanding, time-consuming, and expensive part of system development. Furthermore, it can be difficult (if not impossible) for testers to anticipate all of the conditions that need to be evaluated. This is especially true of human-machine systems, because the human operator (who is attempting to achieve his or her task goals) is an additional concurrent component of the system and one whose behavior is not strictly governed by the implementation of designed system elements. To address these issues, researchers have developed approaches for automatically generating test cases. Among these are formal methods: rigorous, mathematical languages, tools, and techniques for modeling, specifying, and verifying (proving properties about) systems. These support model-based approaches (almost exclusively used in computer engineering) for creating tests that are efficient and provide guarantees about their completeness (at least with respect to the model). In particular, model checking can be used for automated test case generation. In this, efficient and exhaustive algorithms search a system model to find traces (test cases) through that model that satisfy specified coverage criteria: descriptions of the conditions the tests should encounter during execution. This talk focuses on a formal automated test generation method developed in my lab for creating cases for human-system interaction. This approach makes use of task models. Task models are a standard human factors method for describing how humans normatively achieve goals when interacting with a system. When these models are given formal semantics, they can be paired with models of system behavior to account for human-system interaction. Formal, automated test case generation can then be performed for coverage criteria asserted over the system (for example, to cover the entire human interface) or human task (to ensure all human activities or actions are performed). Generated test cases, when manually executed with the system, can serve two purposes. First, testers can observe whether the human behavior in a test always produces the system behavior from the test. This can help analysts validate the models and, if no problems are found, be sure that any desirable properties exhibited by the model hold in the actual system. Second, testers will be able to use their insights about system usability and performance to subjectively evaluate the system under all of the conditions contained in the tests. Given the coverage guarantees provided by the process, this means that testers can be confident they have seen every system condition relevant to the coverage criteria. In this talk, I will describe this approach to automated test case generation and illustrate its utility with a simple example. I will then describe how this approach could be extended to account for different dimensions of human cognitive performance and emerging challenges in human-autonomy interaction. (A toy coverage-based generation sketch follows this entry.) |
Matthew Bolton Associate Professor University at Buffalo, the State University of New York (bio)
Dr. Bolton is an Associate Professor of Industrial and Systems Engineering at the University at Buffalo (UB). He obtained his Ph.D. in Systems Engineering from the University of Virginia, Charlottesville, in 2010. Before joining UB, he worked as a Senior Research Associate at NASA’s Ames Research Center and as an Assistant Professor of Industrial Engineering at the University of Illinois at Chicago. Dr. Bolton is an expert on the use of formal methods in human factors engineering and has published widely in this area. He has successfully applied his research to safety-critical applications in aerospace, medicine, defense, and cybersecurity. He has received funding on projects sponsored by the European Space Agency, NSF, NASA, AHRQ, and DoD. This includes a Young Investigator Program Award from the Army Research Office. He is an associate editor for the IEEE Transactions on Human Machine Systems and the former Chair of the Human Performance Modeling Technical Group for the Human Factors and Ergonomics Society. He was appointed as a Senior Member of IEEE in 2015 and received the Human Factors and Ergonomics Society’s William C. Howell Young Investigator award in 2018. |
Breakout |
Materials | 2021 |
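The toy sketch below shows the mechanics of coverage-driven test generation on a miniature, invented machine model: search the model for action sequences (test cases) that satisfy a coverage criterion, here "every machine state is visited." Real applications use model checkers and formal task-analytic models rather than this hand-written dictionary and breadth-first search.

```python
from collections import deque

# Toy human-machine model: states are machine modes, edges are human actions.
# (Invented example; real models come from task analysis and system specs.)
transitions = {
    "off":     {"press_power": "standby"},
    "standby": {"press_power": "off", "press_start": "running"},
    "running": {"press_stop": "standby", "press_power": "off"},
}

def generate_tests(start, transitions):
    """Breadth-first search for shortest action sequences reaching each state."""
    tests = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for action, nxt in transitions[state].items():
            if nxt not in tests:
                tests[nxt] = tests[state] + [action]
                queue.append(nxt)
    return tests

tests = generate_tests("off", transitions)
for state, actions in tests.items():
    print(f"to reach '{state}': perform {actions}")
# Coverage guarantee (with respect to the model): executing these sequences
# visits every machine state; analogous criteria can target every human action
# or task step instead.
```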
Breakout Certification by Analysis: A 20-year Vision for Virtual Flight and Engine Testing (Abstract)
Analysis-based means of compliance for airplane and engine certification, commonly known as “Certification by Analysis” (CbA), provides a strong motivation for the development and maturation of current and future flight and engine modeling technology. The most obvious benefit of CbA is streamlined product certification testing programs at lower cost while maintaining equivalent levels of safety. The current state of technologies and processes for analysis is not sufficient to adequately address most aspects of CbA today, and concerted efforts to drastically improve analysis capability are required to fully bring the benefits of CbA to fruition. While the short-term cost and schedule benefits of reduced flight and engine testing are clearly visible, the fidelity of analysis capability required to realize CbA across a much larger percentage of product certification is not yet sufficient. Higher-fidelity analysis can help reduce the product development cycle and avoid costly and unpredictable performance and operability surprises that sometimes happen late in the development cycle. Perhaps the greatest long-term value afforded by CbA is the potential to accelerate the introduction of more aerodynamically and environmentally efficient products to market, benefitting not just manufacturers, but also airlines, passengers, and the environment. A far-reaching vision for CbA has been constructed to offer guidance in developing lofty yet realizable expectations regarding technology development and maturity through stakeholder involvement. This vision is composed of the following four elements: (1) the ability to numerically simulate the integrated system performance and response of full-scale airplane and engine configurations in an accurate, robust, and computationally efficient manner; (2) the development of quantified flight and engine modeling uncertainties to establish appropriate confidence in the use of numerical analysis for certification; (3) the rigorous validation of flight and engine modeling capabilities against full-scale data from critical airplane and engine testing; and (4) the use of flight and engine modeling to enable Certification by Simulation. Key technical challenges include the ability to accurately predict airplane and engine performance for a single discipline, the robust and efficient integration of multiple disciplines, and the appropriate modeling of system-level assessment. Current modeling methods lack the capability to adequately model conditions that exist at the edges of the operating envelope, where the majority of certification testing generally takes place. Additionally, large-scale engine or airplane multidisciplinary integration has not matured to the level where it can be reliably used to efficiently model the intricate interactions that exist in current or future aerospace products. Logistical concerns center primarily on the future High Performance Computing capability needed to perform the large number of computationally intensive simulations needed for CbA. Complex, time-dependent, multidisciplinary analyses will require a computing capacity increase several orders of magnitude greater than is currently available. Developing methods to ensure credible simulation results is critically important for regulatory acceptance of CbA. Confidence in analysis methodology and solutions is examined so that application validation cases can be properly identified.
Other means of measuring confidence such as uncertainty quantification and “validation-domain” approaches may increase the credibility and trust in the predictions. Certification by Analysis is a challenging long-term endeavor that will motivate many areas of simulation technology development, while driving the potential to decrease cost, improve safety, and improve airplane and engine efficiency. Requirements to satisfy certification regulations provide a measurable definition for the types of analytical capabilities required for success. There is general optimism that CbA is a goal that can be achieved, and that a significant amount of flight testing can be reduced in the next few decades. |
Timothy Mauery Boeing (bio)
For the past 20 years, Timothy Mauery has been involved in the development of low-speed CFD design processes. In this capacity, he has had the opportunity to interact with users and provide CFD support and training throughout the product development cycle. Prior to moving to the Commercial Airplanes division of The Boeing Company, he worked at the Lockheed Martin Aircraft Center, providing aerodynamic liaison support on a variety of military modification and upgrade programs. At Boeing, he has had the opportunity to support both future products and existing programs with CFD analysis and wind tunnel testing. Over the past ten years, he has been closely involved in the development and evaluation of analysis-based certification processes for commercial transport vehicles, for both derivative programs and new airplanes. Most recently he was the principal investigator on a NASA research announcement for developing requirements for airplane certification by analysis. Timothy received his bachelor’s degree from Brigham Young University and his master’s degree from The George Washington University, where he was also a research assistant at NASA Langley. |
Breakout |
Materials | 2021 |
Breakout Challenges in Verification and Validation of CFD for Industrial Aerospace Applications (Abstract)
Verification and validation represent important steps for appropriate use of CFD codes, and it is presently considered the user’s responsibility to ensure that these steps are completed. Inconsistent definitions and use of these terms in aerospace complicate the effort. For industrial-use CFD codes, there are a number of challenges that can further confound these efforts, including varying grid topology, non-linearities in the solution, challenges in isolating individual components, and difficulties in finding validation experiments. In this presentation, a number of these challenges will be reviewed with some specific examples that demonstrate why verification is much more involved and challenging than typically implied in numerical methods courses, but remains an important exercise. Some of the challenges associated with validation will also be highlighted using a range of different cases, from canonical flow elements to complete aircraft models. Benchmarking is often used to develop confidence in CFD solutions for engineering purposes, but it falls short of validation in the absence of an ability to predict bounds on the simulation error. The key considerations in performing benchmarking and validation will be highlighted and some current shortcomings in practice will be presented, leading to recommendations for conducting validation exercises. CFD workshops have considerably improved in their application of these practices, but there continues to be a need for additional steps. (A grid-convergence sketch follows this entry.) |
Andrew Cary Technical Fellow Boeing Research and Technology (bio)
Andrew Cary is a technical fellow of the Boeing Company in CFD and is the focal point for the BCFD solver. In this capacity, he has a strong focus on supporting users of the code across the Boeing enterprise as well as leading the development team. These responsibilities align with his interests in verification, validation, and uncertainty quantification as an approach to ensure reliable results, as well as in algorithm development, CFD-based shape optimization, and unsteady fluid dynamics. Since hiring into the CFD team in 1996, he has led CFD application efforts across a full range of Boeing products as well as working in grid generation methods, flow solver algorithms, post-processing approaches, and process automation. These assignments have given him the opportunity to work with teams around the world, both inside and outside Boeing. Andrew has been an active member of the American Institute of Aeronautics and Astronautics, serving in multiple technical committees, including his present role on the CFD Vision 2030 Integration Committee. Andrew has also been an adjunct professor at Washington University since 1999, teaching graduate classes in CFD and fluid dynamics. Andrew received a Ph.D. (1997) in Aerospace Engineering from the University of Michigan and a B.S. (1992) and M.S. (1997) in Aeronautical and Astronautical Engineering from the University of Illinois Urbana-Champaign. |
Breakout |
Materials | 2021 |
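One concrete verification exercise behind the discussion above is checking the observed order of accuracy from solutions on three systematically refined grids (Richardson extrapolation). The sketch below uses invented drag values and a refinement ratio of 2; when the observed order disagrees with the scheme's formal order, the grids are likely outside the asymptotic range, which is one reason verification is more involved than it first appears.

```python
import numpy as np

# Drag coefficient from three systematically refined grids (invented values).
f_coarse, f_medium, f_fine = 0.02860, 0.02715, 0.02680
r = 2.0                                   # grid refinement ratio

# Observed order of accuracy from Richardson extrapolation.
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

# Extrapolated (grid-converged) estimate and discretization-error estimate
# on the fine grid.
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
err_fine = abs(f_fine - f_exact)

print(f"observed order of accuracy p = {p:.2f}")
print(f"extrapolated value          = {f_exact:.5f}")
print(f"estimated fine-grid error   = {err_fine:.5f}")
# If p differs substantially from the scheme's formal order, the grids may not
# be in the asymptotic range and the verification exercise must be revisited.
```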
Breakout Characterizing Human-Machine Teaming Metrics for Test and Evaluation (Abstract)
As advanced technologies and capabilities are enabling machines to engage in tasks that only humans have done previously, new challenges have emerged for the rigorous testing and evaluation (T&E) of human-machine teaming (HMT) concepts. We distinguish between an HMT and a human using a tool, and we enumerate new challenges: agents’ mental models are opaque, machine-to-human communications need to be evaluated, and self-tasking and autonomy must be assessed. We argue that a focus on mission outcomes cannot fully characterize team performance, given the increased problem space evaluated, and that the T&E community needs to develop and refine new metrics for agents of teams and teammate interactions. Our IDA HMT framework outlines major categories for HMT evaluation, emphasizing team metrics and parallelizing agent metrics across humans and machines. Major categories are tied to the literature and proposed as a starting point for additional T&E metric specification for robust evaluation. |
Brian Vickers Research Staff Member Institute for Defense Analyses (bio)
Brian is a Research Staff Member at the Institute for Defense Analyses, where he applies rigorous statistics and study design to evaluate, test, and report on various programs. Dr. Vickers holds a Ph.D. from the University of Michigan, Ann Arbor, where he researched various factors that influence decision making, with a focus on how people allocate their money, time, and other resources. |
Breakout |
Materials | 2021 |
Keynote Closing Remarks (Abstract)
Mr. William (Allen) Kilgore serves as Director, Research Directorate at NASA Langley Research Center. He previously served as Deputy Director of Aerosciences, providing executive leadership and oversight for the Center’s fundamental and applied aerosciences research and technology capabilities, with responsibility for experimental and computational aerosciences research. After being appointed to the Senior Executive Service (SES) in 2013, Mr. Kilgore served as the Deputy Director, Facilities and Laboratory Operations in the Research Directorate. Prior to this position, Mr. Kilgore spent over twenty years in the operations of NASA Langley’s major aerospace research facilities, including budget formulation and execution, maintenance, strategic investments, workforce planning and development, facility advocacy, and integration of facilities’ schedules. During his time at Langley, he has worked in nearly all of the major wind tunnels with a primary focus on process controls, operations, and testing techniques supporting aerosciences research. For several years, Mr. Kilgore led the National Transonic Facility, the world’s largest cryogenic wind tunnel. Mr. Kilgore has been at NASA Langley Research Center since 1989, starting as a graduate student. Mr. Kilgore earned a B.S. and M.S. in Mechanical Engineering with a concentration in dynamics and controls from Old Dominion University in 1984 and 1989, respectively. He is the recipient of NASA’s Exceptional Engineering Achievement Medal in 2008 and Exceptional Service Medal in 2012. |
William “Allen” Kilgore Director, Research Directorate NASA Langley Research Center |
Keynote |
Recording | 2021 |
Breakout Cognitive Work Analysis – From System Requirements to Validation and Verification (Abstract)
Human-system interaction is a critical yet often neglected aspect of the system development process. It is most commonly incorporated into system performance assessments late in the design process, leaving little opportunity for any substantive changes to be made to ensure satisfactory system performance is achieved. As a result, workarounds and compromises become a patchwork of “corrections” that end up in the final fielded system. But what if mission outcomes, the work context, and performance expectations can be articulated earlier in the process, thereby influencing the development process throughout? This presentation will discuss how a formative method from the field of cognitive systems engineering, cognitive work analysis, can be leveraged to derive design requirements compatible with traditional systems engineering processes. This method establishes not only requirements from which system designs can be constructed, but also how system performance expectations can be more acutely defined a priori to guide the validation and verification process. Cognitive work analysis methods will be described to highlight how ‘cognitive work’ and ‘information relationship’ requirements can be derived, and they will be showcased in a case-study application of building a decision support system for future human spaceflight operations. Specifically, a description of the testing campaign employed to verify and validate the fielded system will be provided. In summary, this presentation will cover how system requirements can be established early in the design phase, guide the development of design solutions, and subsequently be used to assess the operational performance of the solutions within the context of the work domain they are intended to support. |
Matthew Miller Exploration Research Engineer Jacobs/NASA Johnson Space Center (bio)
Matthew J. Miller is an Exploration Research Engineer within the Astromaterials Research and Exploration Sciences (ARES) division at NASA Johnson Space Center. His work focuses on advancing present-day tools, technologies and techniques to improve future EVA operations by applying cognitive systems engineering principles. He has over seven years of EVA flight operations and NASA analog experience where he has developed and deployed various EVA support systems and concept of operations. He received a B.S. (2012), M.S. (2014) and Ph.D. (2017) in aerospace engineering from the Georgia Institute of Technology. |
Breakout |
Materials | 2021 |
Breakout Collaborative Human AI Red Teaming (Abstract)
The Collaborative Human AI Red Teaming (CHART) project is an effort to develop an AI Collaborator that can help human test engineers quickly develop test plans for AI systems. CHART was built around processes developed for cybersecurity red teaming: a goal-focused approach based upon iteratively testing and attacking a system, then updating the tester’s model, in order to discover novel failure modes not found by traditional T&E processes. Red teaming is traditionally a time-intensive process that requires subject matter experts to study the system they are testing for months in order to develop attack strategies. CHART will accelerate this process by guiding the user through diagramming the AI system under test and drawing upon a pre-established body of knowledge to identify the most probable vulnerabilities. CHART was provided internal seedling funds during FY20 to perform a feasibility study of the technology. During this period, the team developed a taxonomy of AI vulnerabilities and an ontology of AI irruptions: events (caused either by a malicious actor or by unintended consequences) that trigger a vulnerability and lead to an undesirable result. Using this taxonomy, we built a threat modeling tool that allows users to diagram their AI system and identifies all the possible irruptions that could occur. This initial demonstration was based around two scenarios: a smartphone-based ECG system for telemedicine and a UAV trained with reinforcement learning to avoid mid-air collisions. In this talk we will first discuss how red teaming differs from adversarial machine learning and traditional testing and evaluation. Next, we will provide an overview of how industry is approaching the problem of AI red teaming and how our approach differs. Finally, we will discuss how we developed our taxonomy of AI vulnerabilities, how to apply goal-focused testing to AI systems, and our strategy for automatically generating test plans. |
Galen Mullins Senior AI Researcher Johns Hopkins University Applied Physics Laboratory (bio)
Dr. Galen Mullins is a senior staff scientist in the Robotics Group of the Intelligent Systems branch at the Johns Hopkins Applied Physics Laboratory. His research is focused on developing intelligent testing techniques and adversarial tools for finding the vulnerabilities of AI systems. His recent project work has included the development of new imitation learning frameworks for modeling the behavior of autonomous vehicles, creating algorithms for generating adversarial environments, and developing red teaming procedures for AI systems. He is the secretary for the IEEE/RAS working group on Guidelines for Verification of Autonomous Systems and teaches the Introduction to Robotics course at the Johns Hopkins Engineering for Professionals program. Dr. Mullins received his B.S. degrees in Mechanical Engineering and Mathematics from Carnegie Mellon University in 2007 and joined APL the same year. Since then he has earned his M.S. in Applied Physics from Johns Hopkins University in 2010 and his Ph.D. in Mechanical Engineering from the University of Maryland in 2018. His doctoral research was focused on developing active learning algorithms for generating adversarial scenarios for autonomous vehicles. |
Breakout |
Materials | 2021 |
Short Course Combinatorial Interaction Testing (Abstract)
This mini-tutorial provides an introduction to combinatorial interaction testing (CIT). The main idea behind CIT is to pseudo-exhaustively test software and hardware systems by covering combinations of components in order to detect faults. In 90 minutes, we provide an overview of this domain that includes the following topics: the role of CIT in software and hardware testing, how it complements and differs from design of experiments, considerations such as variable strength and constraints, the typical combinatorial arrays used for constructing test suites, and existing tools for test suite construction. Finally, defense systems are increasingly relying on software with embedded machine learning (ML), yet ML poses unique challenges to applying conventional software testing due to characteristics such as the large input space, the effort required for white-box testing, and emergent behaviors apparent only at integration or system levels. As a well-studied black-box approach to testing integrated systems with a pseudo-exhaustive strategy for handling large input spaces, CIT provides a good foundation for testing ML. In closing, we present recent research adapting concepts of combinatorial coverage to test design for ML. (A small pairwise-testing sketch follows this entry.) |
Erin Lanus Research Assistant Professor Virginia Tech (bio)
Erin Lanus is a Research Assistant Professor at the Hume Center for National Security and Technology at Virginia Tech. She has a Ph.D. in Computer Science with a concentration in cybersecurity from Arizona State University. Her experience includes work as a Research Fellow at University of Maryland Baltimore County and as a High Confidence Software and Systems Researcher with the Department of Defense. Her current interests are software and combinatorial testing, machine learning in cybersecurity, and artificial intelligence assurance. |
Short Course |
Materials
Recording | 2021 |
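As a small illustration of the central CIT object, the sketch below builds a pairwise (strength-2) test suite with a simple greedy heuristic, covering every pair of parameter values in far fewer tests than the exhaustive cross product. The parameters and values are invented, and dedicated covering-array tools of the kind surveyed in the course produce smaller, constraint-aware suites; this only shows the idea.

```python
import itertools

# Hypothetical configuration space: 4 parameters, 3 values each
# (81 exhaustive combinations).
params = {
    "os":      ["linux", "windows", "rtos"],
    "radio":   ["uhf", "vhf", "satcom"],
    "encrypt": ["aes128", "aes256", "none"],
    "logging": ["off", "basic", "verbose"],
}
names = list(params)

def uncovered_pairs(tests):
    """All value pairs (across parameter pairs) not yet covered by the suite."""
    needed = {(i, j, a, b)
              for i, j in itertools.combinations(range(len(names)), 2)
              for a in params[names[i]] for b in params[names[j]]}
    for t in tests:
        for i, j in itertools.combinations(range(len(names)), 2):
            needed.discard((i, j, t[i], t[j]))
    return needed

tests = []
while True:
    remaining = uncovered_pairs(tests)
    if not remaining:
        break
    # Greedy: pick the candidate test covering the most still-uncovered pairs.
    best = max(itertools.product(*params.values()),
               key=lambda t: sum((i, j, t[i], t[j]) in remaining
                                 for i, j in itertools.combinations(range(len(names)), 2)))
    tests.append(best)

print(f"{len(tests)} tests cover all pairs (vs. 81 exhaustive):")
for t in tests:
    print(dict(zip(names, t)))
```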
Breakout Cybersecurity Metrics and Quantification: Problems, Some Results, and Research Directions (Abstract)
Cybersecurity Metrics and Quantification is a fundamental but notoriously hard problem. It is one of the pillars underlying the emerging Science of Cybersecurity. In this talk, I will describe a number of cybersecurity metrics and quantification research problems that are encountered in evaluating the effectiveness of a range of cyber defense tools. I will review the research results we have obtained over the past years. I will also discuss future research directions, including those being undertaken in my research group. |
Shouhuai Xu Professor University of Colorado Colorado Springs (bio)
Shouhuai Xu is the Gallogly Chair Professor in the Department of Computer Science, University of Colorado Colorado Springs (UCCS). Prior to joining UCCS, he was with the Department of Computer Science, University of Texas at San Antonio. He pioneered a systematic approach, dubbed Cybersecurity Dynamics, to modeling and quantifying cybersecurity from a holistic perspective. This approach has three orthogonal research thrusts: metrics (for quantifying security, resilience and trustworthiness/uncertainty, to which this talk belongs), cybersecurity data analytics, and cybersecurity first-principle modeling (for seeking cybersecurity laws). His research has won a number of awards, including the 2019 worldwide adversarial malware classification challenge organized by the MIT Lincoln Lab. His research has been funded by AFOSR, AFRL, ARL, ARO, DOE, NSF and ONR. He co-initiated the International Conference on Science of Cyber Security (SciSec) and is serving as its Steering Committee Chair. He has served as Program Committee co-chair for a number of international conferences and as Program Committee member for numerous international conferences. He is/was an Associate Editor of IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE T-IFS), and IEEE Transactions on Network Science and Engineering (IEEE TNSE). More information about his research can be found at https://xu-lab.org. |
Breakout | Materials | 2021 |
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process in which important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data like failure date, corrective action, and part number. The dashboard is an RShiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end user to filter to the EFRs they care about and to visualize metadata, such as the geographic region where the failure occurred, over time, allowing previously unknown trends to be seen. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and look at time- and region-based trends in these common equipment failures. (A small topic-modeling sketch follows this entry.) |
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects, including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
Materials | 2021 |
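The dashboard itself is an RShiny application; as an illustration of its topic-modeling step on free-form failure text, the sketch below uses Python with scikit-learn's LDA (rather than the R tm/topicmodels stack named above) on a few invented EFR-style narratives. It demonstrates the technique, not the dashboard's actual code, packages, or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented EFR-style free-text snippets (stand-ins for real failure narratives).
reports = [
    "hydraulic pump seal leak caused pressure drop during operation",
    "pressure drop traced to cracked hydraulic line fitting",
    "radio transmitter overheated and shut down intermittently",
    "intermittent radio failure after transmitter overheating in hot weather",
    "pump seal replaced after hydraulic fluid leak was observed",
    "transmitter cooling fan failure led to overheating and shutdown",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reports)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
# In the dashboard, analogous topics are linked to structured fields (date,
# region, part number) so analysts can filter and spot failure-mode trends.
```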
Breakout Debunking Stress Rupture Theories Using Weibull Regression Plots (Abstract)
As statisticians, we are always working on new ways to explain statistical methodologies to non-statisticians. It is in this realm that we never underestimate the value of graphics and patience! In this presentation, we present a case study that involves stress rupture data, where a Weibull regression is needed to estimate the parameters. The case study comes from a multi-stage project supported by the NASA Engineering and Safety Center (NESC), where the objective was to assess the safety of composite overwrapped pressure vessels (COPVs). The analytical team was tasked with devising a test plan to model stress rupture failure risk in the carbon fiber strands that encase the COPVs, with the goal of understanding the reliability of the strands at use conditions for the expected mission life. While analyzing the data, we found that the proper analysis contradicts accepted theories about the stress rupture phenomenon. In this talk, we will introduce ways to graph the stress rupture data to better explain the proper analysis and to explore assumptions. (A minimal Weibull-regression sketch follows this entry.) |
Anne Driscoll Associate Collegiate Professor Virginia Tech (bio)
Anne Ryan Driscoll is an Associate Collegiate Professor in the Department of Statistics at Virginia Tech. She received her PhD in Statistics from Virginia Tech. Her research interests include statistical process control, design of experiments, and statistics education. She is a member of ASQ and ASA. |
Breakout |
Materials | 2021 |
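A minimal sketch of the kind of model at issue follows: a Weibull regression in which the log of the characteristic life is linear in log(stress), fit by maximum likelihood to data with right-censored survivors, as is typical of stress-rupture testing. The failure times, stress ratios, and censoring below are simulated stand-ins, not the NESC carbon-strand data, and the talk's graphical diagnostics would be layered on top of a fit like this.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated stress-rupture data: time-to-failure (hours) at three stress ratios,
# with some units surviving (right-censored) at the end of test.
stress = np.repeat([0.90, 0.85, 0.80], 8)
true_eta = np.exp(2.0 - 25.0 * np.log(stress))        # generative model for the fake data
time = true_eta * rng.weibull(1.8, size=stress.size)
censor_at = np.quantile(time, 0.8)
observed = time < censor_at                            # False = survived to test end
time = np.minimum(time, censor_at)

def neg_log_lik(theta):
    b0, b1, log_shape = theta
    k = np.exp(log_shape)                              # Weibull shape
    eta = np.exp(b0 + b1 * np.log(stress))             # characteristic life vs. stress
    z = (time / eta) ** k
    ll = np.where(observed, np.log(k / time) + k * np.log(time / eta) - z, -z)
    return -ll.sum()

fit = minimize(neg_log_lik, x0=[2.0, -20.0, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000})
b0, b1, log_shape = fit.x
print(f"shape = {np.exp(log_shape):.2f}, log-life slope vs. log(stress) = {b1:.1f}")
# Extrapolating eta = exp(b0 + b1*log(stress)) to the use stress gives
# reliability-at-mission-life estimates of the kind discussed in the talk.
```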
Breakout Empirical Analysis of COVID-19 in U.S. States and Counties (Abstract)
The zoonotic emergence of the coronavirus SARS-CoV-2 at the beginning of 2020 and the subsequent global pandemic of COVID-19 has caused massive disruptions to economies and health care systems, particularly in the United States. Using the results of serology testing, we have developed true-prevalence estimates for COVID-19 case counts in the U.S. over time, which allows for clearer estimates of infection and case fatality rates throughout the course of the pandemic. In order to elucidate policy, demographic, weather, and behavioral factors that contribute to or inhibit the spread of COVID-19, IDA compiled panel data sets of empirically derived, publicly available COVID-19 data and analyzed which factors were most highly correlated with increased and decreased spread within U.S. states and counties. These analyses led to several recommendations for future pandemic response preparedness. |
Emily Heuring Research Staff Member Institute for Defense Analyses (bio)
Dr. Emily Heuring received her PhD in Biochemistry, Cellular, and Molecular Biology from the Johns Hopkins University School of Medicine in 2004 on the topic of human immunodeficiency virus and its impact on the central nervous system. Since that time, she has been a Research Staff Member at the Institute for Defense Analyses, supporting operational testing of chemical and biological defense programs. More recently, Dr. Heuring has supported OSD-CAPE on Army and Marine Corps programs and the impact of COVID-19 on the general population and DOD. |
Breakout |
Materials | 2021 |
Breakout Entropy-Based Adaptive Design for Contour Finding and Estimating Reliability (Abstract)
In reliability, methods used to estimate failure probability are often limited by the costs associated with model evaluations. Many of these methods, such as multi-fidelity importance sampling (MFIS), rely upon a cheap surrogate model, like a Gaussian process (GP), to quickly generate predictions. The quality of the GP fit, at least in the vicinity of the failure region(s), is instrumental in propping up such estimation strategies. We introduce an entropy-based GP adaptive design that, when paired with MFIS, provides more accurate failure probability estimates with higher confidence. We show that our greedy data acquisition scheme better identifies multiple failure regions compared to existing contour-finding schemes. We then extend the method to batch selection. Illustrative examples are provided on benchmark data as well as an application to the impact damage simulator of a NASA spacesuit design. (A one-dimensional acquisition sketch follows this entry.) |
Austin Cole PhD Candidate Virginia Tech (bio)
Austin Cole is a statistics PhD candidate at Virginia Tech. He previously taught high school math and statistics courses, and holds a Bachelor’s in Mathematics and Master’s in Secondary Education from the College of William and Mary. Austin has worked with dozens of researchers as a lead collaborator in Virginia Tech’s Statistical Applications and Innovations Group (SAIG). Under the supervision of Dr. Robert Gramacy, Austin has conducted research in the area of computer experiments with focuses on Bayesian optimization, sparse covariance matrices, and importance sampling. He is currently collaborating with researchers at NASA Langley to evaluate the safety of the next generation of spacesuits. |
Breakout |
Materials | 2021 |
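A compact, one-dimensional sketch of the contour-targeting selection step follows: fit a GP to a few runs of a stand-in simulator, compute the probability that each candidate input exceeds the failure threshold, and pick the candidate whose failure classification is most uncertain (highest entropy). The entropy criterion in the talk is richer and is paired with MFIS and batch selection; the toy function, threshold, and design here are invented.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):                 # stand-in for the costly simulator
    return np.sin(3 * x) + 0.5 * x

threshold = 0.8                         # "failure" contour: expensive_model(x) > threshold

# Initial design and GP surrogate fit.
X = np.array([[0.1], [0.9], [1.7], [2.5]])
y = expensive_model(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)

# Candidate points: pick the one whose failure classification is most uncertain.
cand = np.linspace(0, 3, 301).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)
p_fail = norm.cdf((mu - threshold) / np.maximum(sd, 1e-12))
entropy = -(p_fail * np.log(p_fail + 1e-12) + (1 - p_fail) * np.log(1 - p_fail + 1e-12))
x_next = cand[np.argmax(entropy)]

print(f"next run at x = {x_next[0]:.2f} (P[fail] there = {p_fail[np.argmax(entropy)]:.2f})")
# Iterating this selection concentrates runs near the limit-state contour,
# sharpening the surrogate where it matters for failure-probability estimation.
```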
Breakout Estimating Pure-Error from Near Replicates in Design of Experiments (Abstract)
In design of experiments, setting exact replicates of factor settings enables estimation of pure error, a model-independent estimate of experimental error useful in communicating inherent system noise and in testing for model lack-of-fit. Often in practice, the factor levels for replicates are precisely measured rather than precisely set, resulting in near-replicates. This can result in inflated estimates of pure error due to uncompensated set-point variation. In this work, we review previous strategies for estimating pure error from near-replicates and propose a simple alternative. We derive key analytical properties and investigate them via simulation. Finally, we illustrate the new approach with an application. (A small replicate-based sketch follows this entry.) |
Caleb King Research Statistician Developer SAS Institute |
Breakout |
Materials | 2021 |
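The baseline quantity discussed above can be shown in a few lines: with exact replicate groups, pure error is the within-group sum of squares with its degrees of freedom. The data below are invented; with near-replicates, treating measured settings as if they were exact pushes set-point variation into this sum of squares, which is the inflation the proposed alternative is meant to avoid.

```python
import numpy as np

# Invented responses at replicated factor settings (groups of exact replicates).
groups = {
    (100, 5.0): [14.1, 14.6, 13.9],
    (120, 5.0): [17.8, 18.3],
    (100, 7.5): [15.2, 15.0, 15.5, 14.8],
    (120, 7.5): [19.1, 18.7],
}

ss_pe, df_pe = 0.0, 0
for setting, ys in groups.items():
    ys = np.asarray(ys)
    ss_pe += ((ys - ys.mean()) ** 2).sum()   # within-group sum of squares
    df_pe += len(ys) - 1

sigma2_hat = ss_pe / df_pe
print(f"pure-error SS = {ss_pe:.3f} on {df_pe} df; "
      f"model-independent sigma^2 estimate = {sigma2_hat:.3f}")
# With near-replicates, the measured settings differ slightly within each group,
# so part of this SS reflects set-point variation rather than response noise.
```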
Breakout Fast, Unbiased Uncertainty Propagation with Multi-model Monte Carlo (Abstract)
With the rise of machine learning and artificial intelligence, there has been a huge surge in data-driven approaches to solve computational science and engineering problems. In the context of uncertainty propagation, machine learning is often employed for the construction of efficient surrogate models (i.e., response surfaces) to replace expensive, physics-based simulations. However, relying solely on surrogate models without any recourse to the original high-fidelity simulation will produce biased estimators and can yield unreliable or non-physical results. This talk discusses multi-model Monte Carlo methods that combine predictions from fast, low-fidelity models with reliable, high-fidelity simulations to enable efficient and accurate uncertainty propagation. For instance, the low-fidelity models could arise from coarsened discretizations in space/time (e.g., Multilevel Monte Carlo – MLMC) or from general data-driven or reduced-order models (e.g., Multifidelity Monte Carlo – MFMC; Approximate Control Variates – ACV). Given a fixed computational budget and a collection of models of varying cost and accuracy, the goal of these methods is to optimally allocate and combine samples across the models. The talk will also present a NASA-developed open-source Python library that acts as a general multi-model uncertainty propagation capability. The effectiveness of the discussed methods and Python library is demonstrated on a trajectory simulation application. Here, orders-of-magnitude computational speedup is obtained, without loss of accuracy, for predicting the landing location of an umbrella heat shield under significant uncertainties in initial state, atmospheric conditions, etc. (A two-fidelity control-variate sketch follows this entry.) |
Geoffrey Bomarito Materials Research Engineer NASA Langley Research Center (bio)
Dr. Geoffrey Bomarito is a Materials Research Engineer at NASA Langley Research Center. Before joining NASA in 2014, he earned a PhD in Computational Solid Mechanics from Cornell University. He also holds an MEng from the Massachusetts Institute of Technology and a BS from Cornell University, both in Civil and Environmental Engineering. Dr. Bomarito’s work centers around machine learning and uncertainty quantification as applied to aerospace materials and structures. His current topics of interest are physics informed machine learning, symbolic regression, additive manufacturing, and trajectory simulation. |
Breakout |
Materials | 2021 |
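A minimal sketch of the core estimator family follows: one cheap low-fidelity model used as a control variate for a high-fidelity model, combined with the standard optimal weight so that (for a fixed weight) the mean estimate remains unbiased while its variance drops. The toy models, input distribution, and sample allocation are invented; the NASA library described above automates the optimal allocation across many models.

```python
import numpy as np

rng = np.random.default_rng(3)

def high_fidelity(x):                 # expensive model (toy stand-in)
    return np.sin(x) + 0.1 * x**2

def low_fidelity(x):                  # cheap, correlated approximation
    return x - x**3 / 6 + 0.1 * x**2

# Uncertain input and a fixed (illustrative) sample allocation.
n_hf, n_lf = 100, 10_000
x_hf = rng.normal(0.5, 0.3, n_hf)
x_lf = rng.normal(0.5, 0.3, n_lf)

y_hf = high_fidelity(x_hf)
y_lf_paired = low_fidelity(x_hf)                  # LF evaluated at the HF samples
y_lf_many = low_fidelity(x_lf)                    # LF evaluated at many extra samples

# Control-variate weight estimated from the paired samples.
alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)

estimate = y_hf.mean() + alpha * (y_lf_many.mean() - y_lf_paired.mean())
print(f"HF-only mean estimate    : {y_hf.mean():.4f}")
print(f"Control-variate estimate : {estimate:.4f}")
# The correction term has zero expectation (for a fixed weight), so the estimate
# stays unbiased while the strong HF/LF correlation removes much of the Monte
# Carlo variance.
```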