Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout A Decision-Theoretic Framework for Adaptive Simulation Experiments (Abstract)
We describe a model-based framework for increasing the effectiveness of simulation experiments in the presence of uncertainty. Unlike conventionally designed simulation experiments, it adaptively chooses where to sample, based on the value of the information obtained. A Bayesian perspective is taken to formulate and update the framework’s four models. A simulation experiment is conducted to answer some question. In order to define precisely how informative a run is for answering the question, the answer must be defined as a random variable. This random variable is called a query and has the general form p(theta | y), where theta is the query parameter and y is the available data. Examples of each of the four models employed in the framework are briefly described below:
1. The continuous correlated beta process model (CCBP) estimates the proportions of successes and failures using beta-distributed uncertainty at every point in the input space. It combines results using an exponentially decaying correlation function. The output of the CCBP is used to estimate the value of a candidate run.
2. The mutual information model quantifies the reduction in uncertainty of one random variable obtained by observing another. The model quantifies the mutual information between each candidate run and the query, thereby scoring the value of running each candidate.
3. The cost model estimates how long future runs will take, based upon past runs, using, e.g., a generalized linear model. A given simulation might have multiple fidelity options that require different run times. It may be desirable to balance information with the cost of a mixture of runs using these multi-fidelity options.
4. The grid state model, together with the mutual information model, is used to select the next collection of runs for optimal information per cost, accounting for current grid load.
The framework has been applied to several use cases, including model verification and validation with uncertainty quantification (VVUQ). Given a mathematically precise query, an 80 percent reduction in total runs has been observed. |
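The CCBP update described above can be sketched in miniature. The following is a hypothetical one-dimensional illustration, not the authors' implementation: each observed success or failure updates a Beta distribution at a query point, weighted by an exponentially decaying correlation function. Function names, the correlation rate `rho`, and the data are invented for the example.

```python
import math

def ccbp_posterior(x_query, observations, rho=1.0, a0=1.0, b0=1.0):
    """Posterior Beta(a, b) for the success probability at x_query.

    Each observation (x, y), y in {0, 1}, contributes fractionally to the
    Beta counts, weighted by the correlation exp(-rho * |x - x_query|),
    so nearby runs inform the estimate more than distant ones.
    """
    a, b = a0, b0  # Beta(1, 1) uniform prior
    for x, y in observations:
        w = math.exp(-rho * abs(x - x_query))
        a += w * y            # weighted success count
        b += w * (1 - y)      # weighted failure count
    return a, b

# Two nearby successes and one distant failure
obs = [(0.4, 1), (0.6, 1), (2.0, 0)]
a, b = ccbp_posterior(0.5, obs)
p_hat = a / (a + b)  # posterior mean success probability at x = 0.5
```

The posterior variance at each candidate point is what a value-of-information score (e.g., mutual information with the query) would then consume.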
Terril Hurst Senior Engineering Fellow Raytheon Technologies (bio)
Terril N Hurst is a Senior Engineering Fellow at Raytheon Technologies, where he works to ensure that model-based engineering is based upon credible models and protocols that allow uncertainty quantification. Prior to coming to Raytheon in 2005, Dr. Hurst worked at Hewlett-Packard Laboratories, including a post-doctoral appointment in Stanford University’s Logic-Based Artificial Intelligence Group under the leadership of Nils Nilsson. |
Breakout | Session Recording |
| 2022 |
A Framework for Using Priors in a Continuum of Testing (Abstract)
A strength of the Bayesian paradigm is that it allows for the explicit use of all available information—to include subject matter expert (SME) opinion and previous (possibly dissimilar) data. While frequentists are constrained to only including data in an analysis (that is to say, only including information that can be observed), Bayesians can easily consider both data and SME opinion, or any other related information that could be constructed. This can be accomplished through the development and use of priors. When prior development is done well, a Bayesian analysis will not only lead to more direct probabilistic statements about system performance, but can result in smaller standard errors around fitted values when compared to a frequentist approach. Furthermore, by quantifying the uncertainty surrounding a model parameter, through the construct of a prior, Bayesians are able to capture the uncertainty across a test space of consideration. This presentation develops a framework for thinking about how different priors can be used throughout the continuum of testing. In addition to types of priors, how priors can change or evolve across the continuum of testing—especially when a system changes (e.g., is modified or adjusted) during phases of testing—will be addressed. Priors that strive to provide no information (reference priors) will be discussed, and will build up to priors that contain available information (informative priors). Informative priors—both those based on institutional knowledge or summaries from databases, as well as those developed based on previous testing data—will be discussed, with a focus on how to consider previous data that is dissimilar in some way, relative to the current test event. What priors might be more common in various phases of testing, types of information that can be used in priors, and how priors evolve as information accumulates will all be discussed. |
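The claim that well-constructed informative priors shrink standard errors can be illustrated with a minimal conjugate beta-binomial sketch. The numbers and prior choices below are invented for illustration; they are not from the presentation.

```python
import math

def beta_posterior(successes, failures, a0, b0):
    """Conjugate update: Beta(a0, b0) prior + binomial data -> Beta posterior."""
    return a0 + successes, b0 + failures

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Current test event: 18 successes in 20 trials
s, f = 18, 2

# Near-noninformative reference prior vs. an informative prior built
# from (hypothetical) previous, similar test data
a_ref, b_ref = beta_posterior(s, f, 1.0, 1.0)
a_inf, b_inf = beta_posterior(s, f, 45.0, 5.0)  # prior mean 0.9, ~50 pseudo-trials

sd_ref = beta_sd(a_ref, b_ref)
sd_inf = beta_sd(a_inf, b_inf)
```

When previous data are dissimilar to the current event, a common mitigation (discussed in the talk's framing) is to down-weight the prior's pseudo-trial count rather than use it at full strength.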
Victoria Sieck Deputy Director / Assistant Professor of Statistics Scientific Test & Analysis Techniques Center of Excellence (STAT COE) / Air Force Institute of Technology (AFIT) ![]() (bio)
Dr. Victoria R. C. Sieck is the Deputy Director of the Scientific Test & Analysis Techniques (STAT) Center of Excellence (COE), where she works with major acquisition programs within the Department of Defense (DoD) to apply rigor and efficiency to current and emerging test and evaluation methodologies through the application of the STAT process. Additionally, she is an Assistant Professor of Statistics at the Air Force Institute of Technology (AFIT), where her research interests include design of experiments, and developing innovative Bayesian approaches to DoD testing. As an Operations Research Analyst in the US Air Force (USAF), her experiences in the USAF testing community include being a weapons and tactics analyst and an operational test analyst. Dr. Sieck has an M.S. in Statistics from Texas A&M University, and a Ph.D. in Statistics from the University of New Mexico. Her Ph.D. research was on improving operational testing through the use of Bayesian adaptive testing methods. |
Session Recording |
| 2022 |
|
Breakout A New Method for Planning Full-Up System-Level (FUSL) Live Fire Tests (Abstract)
Planning Full-Up System-Level (FUSL) Live Fire tests is a complex process that has historically relied solely on subject matter expertise. In particular, there is no established method to determine the appropriate number of FUSL tests necessary for a given program. We developed a novel method that is analogous to the Design of Experiments process that is used to determine the scope of Operational Test events. Our proposed methodology first requires subject matter experts (SMEs) to define all potential FUSL shots. For each potential shot, SMEs estimate the severity of that shot, the uncertainty of that severity estimate, and the similarity of that shot to all other potential shots. We developed a numerical optimization algorithm that uses the SME inputs to generate a prioritized list of FUSL events and a corresponding plot of the total information gained with each successive shot. Together, these outputs can help analysts determine the adequate number of FUSL tests for a given program. We illustrate this process with an example on a notional ground vehicle. Future work is necessary prior to implementation on a program of record. |
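The prioritization step described above can be sketched as a greedy heuristic. This is a hypothetical illustration, not the authors' optimization algorithm: each step selects the shot whose SME uncertainty score, discounted by its similarity to shots already selected, is largest. All names, scores, and the discount rule are invented.

```python
def prioritize_shots(uncertainty, similarity):
    """Greedy ordering of candidate FUSL shots by marginal information.

    uncertainty: dict shot -> SME uncertainty score (higher = more to learn)
    similarity:  dict (shot_a, shot_b) -> similarity in [0, 1]
    Returns (ordered shot list, marginal gain at each step).
    """
    remaining = set(uncertainty)
    order, gains = [], []
    while remaining:
        def gain(s):
            # Discount by the strongest overlap with any already-chosen shot
            overlap = max((similarity.get((s, t), similarity.get((t, s), 0.0))
                           for t in order), default=0.0)
            return uncertainty[s] * (1.0 - overlap)
        best = max(remaining, key=gain)
        gains.append(gain(best))
        order.append(best)
        remaining.remove(best)
    return order, gains

# Three notional shots: A and B are highly similar
u = {"A": 3.0, "B": 2.0, "C": 1.0}
sim = {("A", "B"): 0.9}
order, gains = prioritize_shots(u, sim)
```

Plotting the cumulative sum of `gains` gives the kind of diminishing-returns curve the abstract describes, from which an adequate number of shots can be read off.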
Lindsey Butler Research Staff Member IDA (bio)
Dr. Lindsey Butler holds a B.S. in Chemical Engineering from Virginia Tech and a Ph.D. in Biomedical Engineering from the University of South Carolina. She has worked at the Institute for Defense Analyses for 5 years where she supports the Director of Operational Test and Evaluation. Dr. Butler is the Deputy Live Fire lead at the Institute for Defense Analyses. Her primary projects focus on assessing the survivability of body armor and armored vehicles against operationally realistic threats. She also has expertise in evaluating casualty assessments of personnel after live fire tests. |
Breakout | Session Recording |
| 2022 |
Short Course A Practitioner’s Guide to Advanced Topics in DOE (Abstract)
Having completed a first course in DOE and begun to apply these concepts, engineers and scientists quickly learn that test and evaluation often demands knowledge beyond the use of classical designs. This one-day short course, taught by an engineer from a practitioner’s perspective, targets this problem. Three broad areas are covered:
The course format is to introduce relevant background material, discuss case studies, and provide software demonstrations. Case studies and demonstrations are derived from a variety of sources, including aerospace testing and DOD T&E. Learn design approaches, design comparison metrics, best practices, and lessons learned from the instructor’s experience. A first course in Design of Experiments is a prerequisite. |
Drew Landman Professor Old Dominion University (bio)
Drew Landman has 34 years of experience in engineering education as a professor at Old Dominion University. Dr. Landman’s career highlights include 13 years (1996-2009) as chief engineer at the NASA Langley Full-Scale Wind Tunnel in Hampton, VA. Landman was responsible for program management, test design, instrument design and calibration and served as the lead project engineer for many automotive, heavy truck, aircraft, and unmanned aerial vehicle wind tunnel tests including the Centennial Wright Flyer and the Boeing X-48B and C. His research interests and sponsored programs are focused on wind tunnel force measurement systems and statistically defensible experiment design primarily to support wind tunnel testing. Dr. Landman has served as a consultant and trainer in the area of statistical engineering to test and evaluation engineers and scientists at AIAA, NASA, Aerovironment, Airbus, Aerion, ATI, USAF, US Navy, US Marines and the Institute for Defense Analyses. Landman founded a graduate course sequence in statistical engineering within the ODU Department of Mechanical and Aerospace Engineering. He is currently co-authoring a text on wind tunnel test techniques. |
Short Course | Materials | 2022 |
|
Breakout A Systems Perspective on Bringing Reliability and Prognostics to Machine Learning (Abstract)
Machine learning is being deployed into the real world, yet the body of knowledge on testing, evaluating, and maintaining machine learning models is overwhelmingly centered on component-level analysis. But machine learning and engineered systems are tightly coupled. This is evidenced by the extreme sensitivity of ML to changes in system structure and behavior. Thus, reliability, prognostics, and other efforts related to test and evaluation for ML cannot be divorced from the system. That is, machine learning and its system go hand-in-hand. Any other way makes an unjustified assumption about the existence of an independent variable. This talk explores foundational reasons for this phenomenon, and the fundamental challenges it poses to existing practice. Cases in machine health monitoring and in cyber defense are used to motivate the position that machine learning is not independent of physical changes to the system with which it interacts, and ML is not independent of the adversaries it defends against. By acknowledging these couplings, systems and mission engineers can better align test and evaluation practices with the fundamental character of ML. |
Tyler Cody Research Assistant Professor Virginia Tech National Security Institute (bio)
Tyler Cody is an Assistant Research Professor at the Virginia Tech National Security Institute. His research interest is in developing principles and best practices for the systems engineering of machine learning and artificial intelligence. His research has been applied to machine learning for engineered systems broadly, including hydraulic actuators, industrial compressors, rotorcraft, telecommunication systems, and computer networks. He received his Ph.D. in systems engineering from the University of Virginia in May 2021 for his work on a systems theory of transfer learning. |
Breakout | Session Recording |
| 2022 |
Breakout Adversaries and Airwaves – Compromising Wireless and Radio Frequency Communications (Abstract)
Wireless and radio frequency (RF) technologies are ubiquitous in our daily lives, including laptops, key fobs, remote sensors, and antennas. These devices, while oftentimes portable and convenient, can potentially be susceptible to adversarial attack over the air. This breakout session will provide a short introduction into wireless hacking concepts such as passive scanning, active injection, and the use of software defined radios to flexibly sample the RF spectrum. We will also ground these concepts in live demonstrations of attacks against both wireless and wired systems. |
Mark Herrera and Jason Schlup Research Staff Members IDA (bio)
Dr. Mark Herrera is a physicist (PhD – University of Maryland) turned defense analyst who specializes in missiles systems, mine warfare, and cyber. As a Project Lead at the Institute for Defense Analyses (IDA), he leads a team of cyber subject matter experts in providing rigorous, responsive, and sound analytic input supporting independent assessments of a wide variety of US Navy platforms. Prior to IDA, Mark was a principal investigator with Heron Systems Inc, leading the development of machine learning algorithms to improve aircraft sensor performance. Dr. Jason Schlup received his Ph.D. in Aeronautics from the California Institute of Technology in 2018. He is now a Research Staff Member at the Institute for Defense Analyses and provides analytical support to the Director, Operational Test and Evaluation’s Cyber Assessment Program. Jason also contributes to IDA’s Cyber Lab capability. |
Breakout | Session Recording |
| 2022 |
Tutorial An Introduction to Data Visualization (Abstract)
Data visualization can be used to present findings, explore data, and use the human eye to find patterns that a computer would struggle to locate. Borrowing tools from art, storytelling, data analytics and software development, data visualization is an indispensable part of the analysis process. While data visualization usage spans across multiple disciplines and sectors, most never receive formal training in the subject. As such, this tutorial will introduce key data visualization building blocks and how to best use those building blocks for different scenarios and audiences. We will also go over tips on accessibility, design and interactive elements. While this will by no means be a complete overview of the data visualization field, by building a foundation and introducing some rules of thumb, attendees will be better equipped for communicating their findings to their audience. |
Christina Heinich AST, Data Systems NASA (bio)
Chris Heinich is a software engineer specializing in data visualization at NASA Langley Research Center. Working in a diverse set of fields like web development, software testing, and data analysis, Chris currently acts as the data visualization expert for the Office of Chief Information Officer (OCIO) Data Science Team. In the past they have developed and maintained dashboards for use cases such as COVID case/vaccine tracking or project management, as well as deploying open source data visualization tools like Dash/Plotly to traditional government servers. Currently they lead an agency-wide data visualization community of practice, which provides training and online collaboration space for NASA employees to learn and share experiences in data visualization. |
Tutorial | Session Recording |
| 2022 |
Breakout An Introduction to Sustainment: The Importance and Challenges of Analyzing System Readiness (Abstract)
The Department of Defense (DoD) spends the majority of its annual budget on making sure that systems are ready to perform when called to action. Even with large investments, though, maintaining adequate system readiness poses a major challenge for the DoD. Here, we discuss why readiness is so difficult to maintain and introduce the tools IDA has developed to aid readiness and supply chain analysis and decision-making. Particular emphasis is placed on “honeybee,” the tool developed to clean, assemble, and mine data across a variety of sources in a well-documented and reproducible way. Using a notional example, we demonstrate the utility of this tool and others like it in our suite; these tools lower the barrier to performing meaningful analysis, constructing and estimating input data for readiness models, and aiding the DoD’s ability to tie resources to readiness outcomes. |
V. Bram Lillard and Megan L. Gelsinger Deputy Director, OED (VBL) and Research Staff Member (MLG) IDA (bio)
V. Bram Lillard is the Deputy Director of the Operational Evaluation Division at IDA. In addition to managing the Division’s research staff, he leads IDA’s Sustainment and Readiness Modeling group, which supports multiple sponsors; he directs a growing team of researchers focused on analyzing raw maintenance and supply data, developing software tools to build end-to-end simulations, and identifying what investments are needed to improve weapon system readiness. Dr. Lillard has been at IDA for over 18 years, contributing to a variety of research areas, including readiness, operational testing, cost analyses, F-35, surface, submarine, and anti-submarine warfare systems. Prior to joining IDA, Dr. Lillard completed his PhD in physics at the University of Maryland. Megan Gelsinger is a Research Staff Member in the Operational Evaluation Division of IDA. Her work focuses on weapons system sustainment and readiness modeling. Prior to joining IDA in August 2021, she completed her PhD in statistics at Cornell University. Her graduate work focused on developing computationally efficient methods for implementing big data spatio-temporal statistical models. |
Breakout | Session Recording |
| 2022 |
Breakout An Overview of NASA’s Low Boom Flight Demonstration (Abstract)
NASA will soon begin a series of tests that will collect nationally representative data on how people perceive low noise supersonic overflights. For half a century, civilian aircraft have been required to fly slower than the speed of sound over land to prevent “creating an unacceptable situation” on the ground due to sonic booms. However, new aircraft shaping techniques have led to dramatic changes in how shockwaves from supersonic flight merge together as they travel to the ground. What used to sound like a boom on the ground will be transformed into a thump. NASA is now building a full-scale, piloted demonstration aircraft called the X-59 to demonstrate low noise supersonic flight. In 2024, the X-59 aircraft will commence a national series of community overflight tests to collect data on how people perceive “sonic thumps.” The community response data will be provided to national and international noise regulators as they consider creating new standards that allow supersonic flight over land at acceptably low noise levels. |
Jonathan Rathsam Technical Lead NASA Langley Research Center (bio)
Jonathan Rathsam is a Senior Research Engineer at NASA’s Langley Research Center in Hampton, Virginia. He conducts laboratory and field research on human perceptions of low noise supersonic overflights. He currently serves as Technical Lead of Survey Design and Analysis for Community Test Planning and Execution within NASA’s Commercial Supersonic Technology Project. Recently he served as co-chair for the annual Defense and Aerospace Test and Analysis Workshop (DATAWorks) and as chair for a NASA Source Evaluation Board. He holds a Ph.D. in Engineering from the University of Nebraska, a B.A. in Physics from Grinnell College in Iowa, and completed postdoctoral research in acoustics at Ben-Gurion University in Israel. |
Breakout | Session Recording |
| 2022 |
Analysis Apps for the Operational Tester (Abstract)
In the acquisition and testing world, data analysts repeatedly encounter certain categories of data, such as time or distance until an event (e.g., failure, alert, detection), binary outcomes (e.g., success/failure, hit/miss), and survey responses. Analysts need tools that enable them to produce quality and timely analyses of the data they acquire during testing. This poster presents four web-based apps that can analyze these types of data. The apps are designed to assist analysts and researchers with simple repeatable analysis tasks, such as building summary tables and plots for reports or briefings. Using software tools like these apps can increase reproducibility of results, timeliness of analysis and reporting, attractiveness and standardization of aesthetics in figures, and accuracy of results. The first app models reliability of a system or component by fitting parametric statistical distributions to time-to-failure data. The second app fits a logistic regression model to binary data with one or two independent continuous variables as predictors. The third calculates summary statistics and produces plots of groups of Likert-scale survey question responses. The fourth calculates the system usability scale (SUS) scores for SUS survey responses and enables the app user to plot scores versus an independent variable. These apps are available for public use on the Test Science Interactive Tools webpage https://new.testscience.org/interactive-tools/. |
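As one concrete illustration of what these apps automate, the fourth app's system usability scale (SUS) score follows the standard scoring rule. This is a minimal sketch of that rule, not the app's source code:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The total contribution (0-40) is scaled to the 0-100 SUS range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# All-neutral responses (all 3s) land exactly at the midpoint score of 50
neutral = sus_score([3] * 10)
```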
William Raymond Whitledge Research Staff Member IDA (bio)
Bill Whitledge is a research staff member at the Institute for Defense Analyses (IDA), where he has worked since 2010. He is currently the project leader for the operational test agent task for the Cybersecurity and Infrastructure Security Agency (CISA) under the Department of Homeland Security (DHS). This project aims to provide CISA with rigorous analysis of the operational effectiveness, usability, and reliability of the National Cybersecurity Protection System (NCPS) family of systems. In addition to leading the NCPS testing project, Bill works on the IDA data management committee updating IDA’s data management practices and procedures. He also co-leads the IDA Connects speaker series, an internal IDA series of lunchtime talks intended to help staff stay informed on current events, meet colleagues, and learn about research across IDA. Bill is passionate about helping people visualize and present information in more elegant and succinct ways. One of his main interests is writing tools in R and other programming languages to automate data collection, analysis, and visualization. He has developed four web applications hosted on the IDA Test Science website enabling easier analysis of system reliability, binary outcomes, system usability, and general survey responses. Bill is also an avid cyclist and golfer, and he is one of the coordinators of the IDA golf league. Bill received his Bachelor of Arts in physics with an economics minor from Colby College in 2008. He received his Master of Science in electrical engineering with a focus in optics and optical communication at Stanford University in 2010. |
Session Recording |
| 2022 |
|
Analysis of Target Location Error using Stochastic Differential Equations (Abstract)
This paper presents an analysis of target location error (TLE) based on the Cox-Ingersoll-Ross (CIR) model. In brief, this model characterizes TLE as a function of range based on the stochastic differential equation dX(r) = a(b - X(r))dr + sigma*sqrt(X(r))dW(r), where X(r) is the TLE at range r, b is the long-term (terminal) mean of the TLE, a is the rate of reversion of X(r) to b, sigma is the process volatility, and W(r) is the standard Wiener process. Multiple flight test runs under the same conditions exhibit different realizations of the TLE process. This approach to TLE analysis models each flight test run as a realization of the CIR process. Fitting a CIR model to multiple data runs then provides a characterization of the TLE system under test. This paper presents an example use of the CIR model. Maximum likelihood estimates of the parameters of the CIR model are found from a collection of TLE data runs. The resulting CIR model is then used to characterize overall system TLE performance as a function of range to the target, as well as the asymptotic estimate of long-term TLE. |
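The CIR dynamics above can be simulated with a simple Euler-Maruyama scheme. This is a minimal sketch, not the paper's estimation code; the step size and parameter values are illustrative. Setting sigma = 0 recovers the deterministic mean reversion of X(r) toward the long-term mean b.

```python
import math
import random

def simulate_cir(x0, a, b, sigma, r_max, dr=0.01, rng=None):
    """Euler-Maruyama path of dX(r) = a*(b - X(r))*dr + sigma*sqrt(X(r))*dW(r).

    Returns the list of X values from range 0 to r_max in steps of dr.
    """
    rng = rng or random.Random()
    x, path = x0, [x0]
    for _ in range(int(r_max / dr)):
        dw = rng.gauss(0.0, math.sqrt(dr))          # Wiener increment
        x += a * (b - x) * dr + sigma * math.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)                             # keep sqrt well-defined
        path.append(x)
    return path

# With sigma = 0 the path decays deterministically from x0 toward b
path = simulate_cir(x0=10.0, a=0.5, b=2.0, sigma=0.0, r_max=20.0)
```

Repeating the simulation with sigma > 0 produces the run-to-run variability that the paper models across flight test realizations.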
James Brownlow Mathematical Statistician USAF (bio)
Dr. James Brownlow is a tech expert in statistics with the USAF, Edwards AFB, CA. His PhD is in time series from UC Riverside. Dr. Brownlow has developed test and evaluation procedures using Bayesian techniques, and developed Python code to adapt parametric survival models to the analysis of target location error. He is a coauthor of a paper that used stochastic differential equations to characterize Kalman-filtered estimates of target track state vectors. |
Session Recording |
| 2022 |
|
Breakout Applications for Monte Carlo Analysis within Job Shop Planning (Abstract)
This presentation gives a summary overview of Discrete Event Simulation (DES) for optimizing scheduling operations in a high-mix, low-volume job shop environment. The DES model employs Monte Carlo simulation to minimize schedule conflicts and prioritize work while taking into account competition for limited resources. Iterative simulation balancing to dampen model results and arrive at a globally optimized schedule plan is contrasted with traditional deterministic scheduling methodologies. |
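The contrast with deterministic scheduling can be seen in a minimal Monte Carlo sketch: instead of a single duration per task, each task gets a distribution, and the schedule's completion time becomes a distribution too. Triangular three-point estimates are an assumption here (the talk does not specify distributions), and the serial task chain is a deliberate simplification of a job shop.

```python
import random

def simulate_makespan(tasks, n_runs=10_000, seed=0):
    """Monte Carlo distribution of total duration for a serial chain of tasks.

    tasks: list of (low, mode, high) triangular duration estimates.
    Returns (mean, p90) of the simulated makespan.
    """
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n_runs)
    )
    mean = sum(totals) / n_runs
    p90 = totals[int(0.9 * n_runs)]  # 90th-percentile completion time
    return mean, p90

# Three tasks with optimistic / most-likely / pessimistic estimates
mean, p90 = simulate_makespan([(2, 3, 6), (1, 2, 4), (4, 5, 9)])
```

A deterministic plan built from the most-likely values alone (3 + 2 + 5 = 10) understates both the mean and the 90th-percentile completion time, which is the gap Monte Carlo balancing is meant to expose.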
Dominik Alder Project Management & Planning Operations Rep Senior Staff Lockheed Martin, Program Management (bio)
Master of Science in Systems Engineering from the University of Denver, with emphasis in discrete event simulation and optimization. Bachelor of Science in Manufacturing Engineering from Brigham Young University, with emphasis in statistics and applied research. More than 13 years of experience with Lockheed Martin Corporation in statistical analysis, material science, and manufacturing engineering. |
Breakout | Session Recording |
| 2022 |
Breakout Applications of Equivalence Testing in T&E (Abstract)
Traditional hypothesis testing is used extensively in test and evaluation (T&E) to determine if there is a difference between two or more populations. For example, we can analyze a designed experiment using t-tests to determine if a factor affects the response or not. Rejecting the null hypothesis would provide evidence that the factor changes the response value. However, there are many situations in T&E where the goal is to actually show that things didn’t change; the response is actually the same (or nearly the same) after some change in the process or system. If we use traditional hypothesis testing to assess this scenario, we would want to “fail to reject” the null hypothesis; however, this doesn’t actually provide evidence that the null hypothesis is true. Instead, we can orient the analysis to the decision that will be made and use equivalence testing. Equivalence testing initially assumes the populations are different; the alternative hypothesis is that they are the same. Rejecting the null hypothesis provides evidence that the populations are the same, matching the objective of the test. This talk provides an overview of equivalence testing with examples demonstrating its applicability in T&E. We also discuss additional considerations for planning a test where equivalence testing will be used, including sample size and what “equivalent” really means. |
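A common way to operationalize equivalence testing is the two one-sided tests (TOST) procedure. The sketch below uses a large-sample normal approximation for simplicity (the talk does not prescribe an implementation); the equivalence margin delta and the data are invented for illustration.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two sample means.

    Null: |mean(x) - mean(y)| >= delta. Rejecting BOTH one-sided nulls
    (both p-values below alpha) supports equivalence within +/- delta.
    Large-sample normal approximation; a t-based version would use
    degrees of freedom instead.
    """
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    se = math.sqrt(stdev(x) ** 2 / nx + stdev(y) ** 2 / ny)
    z_lower = (diff + delta) / se   # tests H0: diff <= -delta
    z_upper = (diff - delta) / se   # tests H0: diff >= +delta
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper) < alpha

# Two samples with nearly identical means; equivalence margin delta = 1.0
x = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1, 9.9]
y = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.2, 9.8]
equivalent = tost_equivalence(x, y, delta=1.0)
```

Note how the roles are flipped relative to a traditional t-test: a significant result here is evidence of sameness within the margin, which is exactly the decision the test is meant to support.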
Sarah Burke Analyst LinQuest (bio)
Sarah Burke is an analyst for The Perduco Group, A Linquest Company. She earned her doctorate in industrial engineering at Arizona State University. Her research interests include design of experiments, response surface methodology, multi‐criteria decision making, and statistical engineering. |
Breakout | Session Recording |
| 2022 |
Breakout Applying Design of Experiments to Cyber Testing (Abstract)
We describe a potential framework for applying DOE to cyber testing and provide an example of its application to testing of a hypothetical command and control system. |
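One building block of applying DOE in this setting is enumerating a factorial design over test factors. The sketch below is generic; the cyber-test factors shown are hypothetical and not taken from the talk.

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full-factorial design).

    factors: dict mapping factor name -> list of levels.
    Returns a list of dicts, one run per design point.
    """
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

# Hypothetical factors for testing a command and control system
runs = full_factorial({
    "attack_vector": ["phishing", "remote_exploit"],
    "network_load": ["low", "high"],
    "defense_posture": ["baseline", "hardened"],
})
```

In practice a fractional design would often replace the full 2^k enumeration when run counts are constrained, which is where DOE trade-offs enter.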
J. Michael Gilmore Research Staff Member IDA (bio)
Dr. James M. “Mike” Gilmore, Research Staff Member, IDA SED. Fields of expertise: test and evaluation; cost analysis; cost-effectiveness analysis.
Education: Doctor of Philosophy in Nuclear Engineering, University of Wisconsin (1976-1980); Bachelor of Science in Physics, M.I.T. (1972-1976).
Employment:
2017-2018: Principal Physical Scientist, RAND Corporation. Performed various analyses for federal government clients.
2009-2017: Director, Operational Test and Evaluation, Department of Defense. Senate-confirmed Presidential appointee serving as the principal advisor to the Secretary of Defense regarding the operational effectiveness of all defense systems.
2001-2009: Assistant Director for National Security, Congressional Budget Office. Responsible for the CBO division performing analyses of a broad array of issues in national security for committees of the U.S. Congress.
1994-2001: Deputy Director for General Purpose Programs, OSD Program Analysis and Evaluation. Responsible for four divisions performing analyses and evaluations of all aspects of DoD’s conventional forces and associated programs.
1993-1994: Division Director, OSD Program Analysis and Evaluation. Responsible for divisions in the Cost Analysis Improvement Group performing independent cost analyses of major defense acquisition programs.
1990-1993: Analyst, OSD Program Analysis and Evaluation. Performed analysis of strategic defense systems and command, control, and communications systems.
1989-1990: Analyst, Falcon Associates. Performed various analyses for DoD clients.
1985-1989: Analyst, McDonnell Douglas. Performed analysis involving issues in command, control, communications, and intelligence.
1981-1985: Scientist, Lawrence Livermore National Laboratory. Modelled nuclear fusion experiments. |
Breakout | Session Recording |
| 2022 |
Keynote April 27 Opening Keynote II |
Jill Marlowe Digital Transformation Officer, NASA (bio)
Jill Marlowe is NASA’s first Digital Transformation Officer, leading the Agency to conceive, architect, and accelerate enterprise digital solutions that transform NASA’s work, workforce and workplace to achieve bolder missions faster and more affordably than ever before. Her responsibilities include refinement and integration of NASA’s digital transformation strategy, plans and policies and coordination of implementation activities across the NASA enterprise in six strategic thrusts: data, collaboration, modeling, artificial intelligence/machine learning, process transformation, and culture and workforce. Prior to this role, Ms. Marlowe was the Associate Center Director, Technical, at NASA’s Langley Research Center in Hampton, Virginia. Ms. Marlowe led strategy and transformation of the center’s technical capabilities to assure NASA’s future mission success. In this role, she focused on accelerating Langley’s internal and external collaborations as well as the infusion of digital technologies critical for the center to thrive as a modern federal laboratory in an ever more digitally-enabled, hyper-connected, fast-paced, and globally-competitive world. In 2008, Ms. Marlowe was selected to the Senior Executive Service as the Deputy Director for Engineering at NASA Langley, and went on to serve as the center’s Engineering Director and Research Director. With the increasing responsibility and scope of these roles, Ms. 
Marlowe has a broad range of leadership experiences that include: running large organizations of 500 – 1,000 people to deliver solutions to every one of NASA’s mission directorates; sustaining and morphing a diverse portfolio of technical capabilities spanning aerosciences, structures & materials, intelligent flight systems, space flight instruments, and entry descent & landing systems; assuring safe operation of over two-million square feet of laboratories and major facilities; architecting partnerships with universities, industry and other government agencies to leverage and advance NASA’s goals; project management of technology development and flight test experiments; and throughout all of this, incentivizing innovation in very different organizational cultures spanning foundational research, technology invention, flight design and development engineering, and operations. She began her NASA career in 1990 as a structural analyst supporting the development of numerous space flight instruments to characterize Earth’s atmosphere. Ms. Marlowe’s formal education includes a Bachelor of Science degree in Aerospace and Ocean Engineering from Virginia Tech in 1988, a Master of Science in Mechanical Engineering from Rensselaer Polytechnic Institute in 1990, and a Degree of Engineer in Civil and Environmental Engineering at George Washington University in 1997. She serves on advisory boards for Virginia Tech’s Aerospace & Ocean Engineering Department, Sandia National Laboratory’s Engineering Sciences Research Foundation, and Cox Communications’ Digitally Inclusive Communities (regional). She is the recipient of two NASA Outstanding Leadership Medals, was named the 2017 NASA Champion of Innovation, is an AIAA Associate Fellow, and was inducted in 2021 into the Virginia Tech Academy of Aerospace & Ocean Engineering Excellence. She lives in Yorktown, Virginia, with her husband, Kevin, along with the youngest of their three children and two energetic labradoodles. |
Keynote | Session Recording |
| 2022 |
Keynote April 27th Opening Keynote I |
Wendy Masiello President, Wendy Mas Consulting LLC
Wendy Masiello is an independent consultant, having retired from the United States Air Force as a Lieutenant General. She is president of Wendy Mas Consulting, LLC and serves as an independent director for KBR Inc., EURPAC Service, Inc., and StandardAero (owned by The Carlyle Group). She is also a Director on the Procurement Round Table and the National ReBuilding Together Board, President-elect of the National Contract Management Association (NCMA) Board, Chair of the Rawls Advisory Council for Texas Tech University’s College of Business, and a member of the Air Force Studies Board under the National Academies of Sciences, Engineering, and Medicine. Prior to her July 2017 retirement, she was Director of the Defense Contract Management Agency, where she oversaw a $1.4 billion budget and 12,000 people worldwide in oversight of 20,000 contractors performing 340,000 contracts with more than $2 trillion in contract value. During her 36-year career, General Masiello also served as Deputy Assistant Secretary (Contracting), Office of the Assistant Secretary of the Air Force for Acquisition, and as Program Executive Officer for the Air Force’s $65 billion Service Acquisition portfolio. She also commanded the 96th Air Base Wing at Edwards Air Force Base and deployed to Iraq and Afghanistan as Principal Assistant Responsible for Contracting for Forces. General Masiello’s medals and commendations include the Defense Superior Service Medal, Distinguished Service Medal, and the Bronze Star. She earned her Bachelor of Business Administration degree from Texas Tech University, a Master of Science degree in logistics management from the Air Force Institute of Technology, a Master of Science degree in national resource strategy from the Industrial College of the Armed Forces, Fort Lesley J. McNair, Washington, D.C., and is a graduate of Harvard Kennedy School’s Senior Managers in Government program. 
General Masiello is a 2017 Distinguished Alum of Texas Tech University, was twice (2015 and 2016) named among Executive Mosaic’s Wash 100, the 2014 Greater Washington Government Contractor “Public Sector Partner of the Year,” and recognized by Federal Computer Week as one of “The 2011 Federal 100”. She is an NCMA Certified Professional Contract Manager and an NCMA Fellow. |
Keynote | Session Recording |
| 2022 |
Keynote April 28 Opening Keynote |
Nickolas Guertin Director, Operational Test & Evaluation, OSD/DOT&E
Nickolas H. Guertin was sworn in as Director, Operational Test and Evaluation on December 20, 2021. A Presidential appointee confirmed by the United States Senate, he serves as the senior advisor to the Secretary of Defense on operational and live fire test and evaluation of Department of Defense weapon systems. Mr. Guertin has an extensive four-decade combined military and civilian career spanning submarine operations, ship construction and maintenance, and the development and testing of weapons, sensors, and combat management products, including the improvement of systems engineering and defense acquisition. Most recently, he performed applied research for government and academia in software-reliant and cyber-physical systems at Carnegie Mellon University’s Software Engineering Institute. Over his career, he has led organizational transformation, improved competition, applied modular open systems approaches, and advanced prototyping and experimentation. He has also researched and published extensively on software-reliant system design, testing, and acquisition. He received a BS in Mechanical Engineering from the University of Washington and an MBA from Bryant University. He is a retired Navy Reserve Engineering Duty Officer, was Defense Acquisition Workforce Improvement Act (DAWIA) certified in Program Management and Engineering, and is also a registered Professional Engineer (Mechanical). Mr. Guertin is involved with his community as an Assistant Scoutmaster and Merit Badge Counselor for two local Scouts BSA troops, as well as being an avid amateur musician. He is a native of Connecticut and now resides in Virginia with his wife and twin children. |
Keynote | Session Recording | 2022 |
|
Assurance Techniques for Learning Enabled Autonomous Systems which Aid Systems Engineering (Abstract)
It is widely recognized that the complexity and resulting capabilities of autonomous systems created using machine learning methods, which we refer to as learning enabled autonomous systems (LEAS), pose new challenges to systems engineering test, evaluation, verification, and validation (TEVV) compared to their traditional counterparts. This presentation provides a preliminary attempt to map recently developed technical approaches in the LEAS assurance and TEVV literature to a traditional systems engineering v-model. The mapping organizes such techniques by three top-level lifecycle phases: development, acquisition, and sustainment. It reviews the latest techniques to develop safe, reliable, and resilient learning enabled autonomous systems, without recommending radical and impractical changes to existing systems engineering processes. By performing this mapping, we seek to assist acquisition professionals by (i) informing comprehensive test and evaluation planning, and (ii) objectively communicating risk to leaders. The inability to translate qualitative assessments into quantitative metrics that measure system performance hinders adoption. Without understanding the capabilities and limitations of existing assurance techniques, defining safety and performance requirements that are both clear and testable remains out of reach. We complement recent literature reviews on autonomy assurance and TEVV by mapping such developments to distinct steps of a well-known systems engineering model chosen for its prevalence, namely the v-model. For each of the three top-level lifecycle phases, a section of the presentation outlines recent technical developments for autonomy assurance. 
This representation helps identify where the latest methods for TEVV fit in the broader systems engineering process, while also enabling systematic consideration of potential sources of defects, faults, and attacks. Note that we use the v-model only to assist in classifying where TEVV methods fit; this is not a recommendation to use one software development lifecycle over another. |
Christian Ellis Journeyman Fellow Army Research Laboratory / University of Mass. Dartmouth
Christian Ellis is a PhD student in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth, focused on building safe and robust autonomous ground systems for the United States Department of Defense. His research interests include unstructured wheeled ground autonomy, autonomous systems assurance, and safe autonomous navigation from human demonstrations. In 2020, Christian received a student paper award for the paper titled “Software and System Reliability Engineering for Autonomous Systems Incorporating Machine Learning”. |
Session Recording |
| 2022 |
|
Bayesian Estimation for Covariate Defect Detection Model Based on Discrete Cox Proportional Hazards (Abstract)
Traditional methods to assess software characterize the defect detection process as a function of testing time or effort to quantify failure intensity and reliability. More recent innovations include models incorporating covariates that explain defect detection in terms of underlying test activities. These covariate models are elegant and introduce only a single additional parameter per testing activity. However, the model forms typically exhibit a high degree of non-linearity. Hence, stable and efficient model-fitting methods are needed to enable widespread use by the software community, which often lacks mathematical expertise. To overcome this limitation, this poster presents Bayesian estimation methods for covariate models, including the specification of informed priors as well as confidence intervals for the mean value function and failure intensity, which often serves as a metric of software stability. The proposed approach is compared to traditional alternatives such as maximum likelihood estimation. Our results indicate that Bayesian methods with informed priors converge most quickly and achieve the best model fits. Incorporating these methods into tools should therefore encourage widespread use of the models to quantitatively assess software. |
Priscila Silva Graduate Student University of Massachusetts Dartmouth
Priscila Silva is a MS student in the Department of Electrical & Computer Engineering at the University of Massachusetts Dartmouth (UMassD). She received her BS (2017) in Electrical Engineering from the Federal University of Ouro Preto, Brazil. |
Session Recording |
| 2022 |
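The contrast the abstract draws between Bayesian estimation with informed priors and maximum likelihood can be illustrated with a far simpler model than the covariate one presented in the poster. The sketch below uses a conjugate gamma-Poisson update for a constant defect detection rate; the prior settings and defect counts are entirely hypothetical.

```python
# Hypothetical illustration: Bayesian vs. maximum-likelihood estimation of a
# constant defect detection rate from per-interval defect counts. This is a
# simple gamma-Poisson conjugate model, NOT the covariate hazard model from
# the poster; it only sketches how an informed prior regularizes an estimate.

defects = [4, 2, 3, 1, 2]          # defects found in 5 equal test intervals

# Maximum-likelihood estimate: total defects / number of intervals
mle_rate = sum(defects) / len(defects)

# Informed gamma(shape, rate) prior, e.g. elicited from a similar past
# project: prior mean = shape / rate = 2.0 defects per interval
prior_shape, prior_rate = 4.0, 2.0

# Conjugate update: posterior is gamma(shape + sum(y), rate + n)
post_shape = prior_shape + sum(defects)
post_rate = prior_rate + len(defects)
post_mean = post_shape / post_rate   # posterior mean detection rate

print(mle_rate)    # 2.4
print(post_mean)   # 16/7, roughly 2.286, pulled toward the prior mean of 2.0
```

The informed prior pulls the estimate toward historical experience. The covariate models in the poster have no closed-form update like this one, which is exactly why stable fitting procedures matter.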
|
Building Bridges: a Case Study of Assisting a Program from the Outside (Abstract)
STAT practitioners often find themselves outsiders to the programs they assist. This session presents a case study that demonstrates some of the obstacles in communicating capabilities, purpose, and expectations that may arise when approaching a project from the outside. Incremental value may open the door to greater collaboration in the future, and this presentation discusses potential solutions for providing greater benefit to testing programs despite the obstacles that come with working from outside the program team. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. CLEARED on 5 Jan 2022. Case Number: 88ABW-2022-0002 |
Anthony Sgambellone Huntington Ingalls Industries
Dr. Tony Sgambellone is a STAT Expert (Huntington Ingalls Industries contractor) at the Scientific Test and Analysis Techniques (STAT) Center of Excellence (COE) at the Air Force Institute of Technology (AFIT). The STAT COE provides independent STAT consultation to designated acquisition programs and special projects to improve Test & Evaluation (T&E) rigor, effectiveness, and efficiency. Dr. Sgambellone holds a Ph.D. in Statistics and a graduate minor in College and University Teaching, and has a decade of experience spanning the fields of finance, software, and test and development. His current interests include artificial neural networks and the application of machine learning. |
Session Recording |
| 2022 |
|
Tutorial Case Study on Applying Sequential Methods in Operational Testing (Abstract)
Sequential methods concern statistical evaluation in which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends on the information acquired during the investigation. Although sequential methods originated in ballistics testing for the Department of Defense (DoD), they remain underutilized in the DoD. Expanding the use of sequential methods may save money and reduce test time. In this presentation, we introduce sequential methods, describe their potential uses in operational test and evaluation (OT&E), and present a method for applying them to the test and evaluation of defense systems. We evaluate the proposed method through simulation studies and a case study. Additionally, we discuss some of the challenges we might encounter when using sequential analysis in OT&E. |
Keyla Pagán-Rivera Research Staff Member IDA
Dr. Keyla Pagán-Rivera has a Ph.D. in Biostatistics from The University of Iowa and serves as a Research Staff Member in the Operational Evaluation Division at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on training, research and applications of statistical methods. |
Tutorial | Session Recording |
| 2022 |
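A classical instance of the sequential approach described above is Wald's sequential probability ratio test (SPRT), sketched below for a Bernoulli reliability parameter. The hypotheses, error rates, and outcome data are hypothetical, not taken from the tutorial.

```python
# Hypothetical sketch of Wald's sequential probability ratio test (SPRT) for
# a Bernoulli success probability: H0: p = 0.9 versus H1: p = 0.7. Testing
# stops as soon as the accumulated evidence crosses either threshold, so the
# sample size is not fixed in advance, the defining feature of sequential
# methods.
import math

p0, p1 = 0.9, 0.7            # null and alternative success probabilities
alpha, beta = 0.05, 0.05     # target type I and type II error rates
upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
lower = math.log(beta / (1 - alpha))   # accept H0 at or below this

def sprt(outcomes):
    """Return ('accept H0' | 'accept H1' | 'continue testing', trials used)."""
    llr = 0.0  # accumulated log-likelihood ratio log[L1 / L0]
    for n, y in enumerate(outcomes, start=1):
        llr += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue testing", len(outcomes)

# A run with several early failures favors H1 (p = 0.7) after only 6 trials:
print(sprt([1, 0, 0, 1, 0, 0, 0, 1, 0, 0]))   # ('accept H1', 6)
```

The test here terminates after six trials instead of a pre-planned fixed sample, which is the source of the cost and schedule savings the presentation describes.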
Short Course Categorical Data Analysis (Abstract)
Categorical data is abundant in the 21st century, and its analysis is vital to advancing research across many domains. Thus, data-analytic techniques tailored for categorical data are an essential part of the practitioner’s toolset. The purpose of this short course is to help attendees develop and sharpen their abilities with these tools. Topics will include binary and multi-category logistic regression, ordinal regression, and classification, and methods to assess the predictive accuracy of these approaches will be discussed. Data will be analyzed using the R software package, and the course content will loosely follow Alan Agresti’s excellent textbook “An Introduction to Categorical Data Analysis, Third Edition.” |
Chris Franck Assistant Professor Virginia Tech
Chris Franck is an Assistant Professor in the Department of Statistics at Virginia Tech. |
Short Course | Materials | 2022 |
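The short course analyzes data in R; as a rough illustration of the first topic listed above (binary logistic regression), the standard-library Python sketch below fits the model by gradient ascent on toy, invented data.

```python
# Minimal sketch of binary logistic regression fit by gradient ascent on the
# log-likelihood, using only the Python standard library. The course itself
# uses R; this toy example with invented data just shows the model under
# discussion: P(y = 1 | x) = 1 / (1 + exp(-(b0 + b1 * x))).
import math

xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # predictor
ys = [0,   0,   1,   0,   1,   0,   1,   1]     # binary response

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(5000):                 # plain batch gradient ascent
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

# The fitted slope b1 is positive, so predicted probability rises with x:
print(round(sigmoid(b0 + b1 * 0.5), 3), round(sigmoid(b0 + b1 * 4.0), 3))
```

In R, the equivalent fit is a one-liner with `glm(y ~ x, family = binomial)`; the loop above just makes the underlying likelihood maximization visible.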
|
Closing Remarks |
Alyson Wilson NCSU
Dr. Alyson Wilson is the Associate Vice Chancellor for National Security and Special Research Initiatives at North Carolina State University. She is also a professor in the Department of Statistics and Principal Investigator for the Laboratory for Analytic Sciences. Her areas of expertise include statistical reliability, Bayesian methods, and the application of statistics to problems in defense and national security. Dr. Wilson is a leader in developing transformative models for rapid innovation in defense and intelligence. Prior to joining NC State, Dr. Wilson was a jointly appointed research staff member at the IDA Science and Technology Policy Institute and Systems and Analyses Center (2011-2013); associate professor in the Department of Statistics at Iowa State University (2008-2011); Scientist 5 and technical lead for Department of Defense Programs in the Statistical Sciences Group at Los Alamos National Laboratory (1999-2008); and senior statistician and operations research analyst with Cowboy Programming Resources (1995-1999). She is currently serving on the National Academy of Sciences Committee on Applied and Theoretical Statistics and on the Board of Trustees for the National Institute of Statistical Sciences. Dr. Wilson is a Fellow of the American Statistical Association, the American Association for the Advancement of Science, and an elected member of the International Statistics Institute. |
2022 |
|||
Cloud Computing for Computational Fluid Dynamics (CFD) in T&E (Abstract)
In this talk we’ll focus on exploring the motivation for using cloud computing for Computational Fluid Dynamics (CFD) for Federal Government Test & Evaluation. Using examples from automotive, aerospace, and manufacturing, we’ll look at benchmarks for a number of CFD codes using CPUs (x86 & Arm) and GPUs, and we’ll look at how the development of high-fidelity CFD methods (e.g., WMLES, HRLES) is accelerating the need for access to large-scale HPC. The onset of COVID-19 has also meant a large increase in the need for remote visualization, with greater numbers of researchers and engineers needing to work from home. This has also accelerated the adoption of the same approaches needed for the pre- and post-processing of peta/exa-scale CFD simulation, and we’ll look at how these are more easily accessed via a cloud infrastructure. Finally, we’ll explore perspectives on integrating ML/AI into CFD workflows using data lakes from a range of sources, and where the next decade may take us. |
Neil Ashton WW Principal CFD Specialist Solution Architect, HPC Amazon Web Services
Neil Ashton is the WW subject matter expert for CFD within AWS. He works with customers in enterprise, startup, and public-sector organizations across the globe to help them run their CFD (and often also FEA) workloads on AWS. In addition, he acts as a key advisor to the global product teams to deliver better hardware and software for CFD and broader CAE users. He also remains very active in academic research on deep learning and machine learning, future HPC approaches, and novel CFD approaches (GPUs, numerical methods, turbulence modelling). |
Session Recording |
| 2022 |
|
Poster Combining data from scanners to inform cadet physical performance (Abstract)
Digital anthropometry obtained from 3D body scanners has already revolutionized the clothing and fitness industries. Within seconds, these scanners collect hundreds of anthropometric measurements, which are used by tailors to customize an article of clothing or by fitness trainers to track their clients’ progress toward a goal. Three-dimensional body scanners have also been used in military applications, such as predicting injuries at Army basic training and checking a soldier’s compliance with body composition standards. In response to this increased demand, several 3D body scanners have become commercially available, each with a proprietary algorithm for measuring specific body parts. Individual scanners may suffice to collect measurements from a small population; however, they are not practical for creating the large data sets necessary to train artificial intelligence (AI) or machine learning algorithms. This study fills the gap between these two applications by correlating body circumferences taken from a small population (n = 109) on three different body scanners and creating a standard scale for pooling data from the different scanners into one large AI-ready data set. This data set is then leveraged in a separate application to understand the relationship between body shape and performance on the Army Combat Fitness Test (ACFT). |
Nicholas Ashby Student United States Military Academy
Nicholas (Nick) Ashby is a fourth-year cadet at the United States Military Academy. He was born on California’s Central Coast and grew up in Charlottesville, VA. At West Point he is pursuing a B.S. in Applied Statistics and Data Science and will commission as an Army Aviation officer in May. In his free time, Nick works as a student manager and data analyst for Army’s NCAA Division 1 baseball team, and he enjoys playing golf as well. His research, under advisor Dr. Diana M. Thomas, has focused on body shape, body composition, and performance on the U.S. Army Combat Fitness Test (ACFT). |
Poster | Session Recording |
| 2022 |
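The pooling step described in the abstract, placing measurements from scanners with different proprietary algorithms on a standard scale, can be sketched with a simple per-scanner z-score standardization. The scanner names and circumference values below are invented for illustration.

```python
# Hypothetical sketch of pooling a measurement (e.g., waist circumference in
# cm) from scanners with different proprietary algorithms onto one standard
# scale. Each scanner's readings are z-scored against that scanner's own
# sample statistics, so pooled values are comparable across devices. The
# scanner names and data are invented for illustration.
from statistics import mean, stdev

scans = {
    "scanner_A": [81.2, 94.5, 77.8, 102.3, 88.0],
    "scanner_B": [79.9, 93.1, 76.5, 100.8, 86.7],   # reads ~1.4 cm lower
}

def standardize(values):
    """Map a scanner's readings to z-scores (mean 0, stdev 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Pool the standardized measurements into one combined list:
pooled = []
for scanner, values in scans.items():
    pooled.extend(standardize(values))

# After standardization each scanner contributes mean ~0 and stdev ~1, so
# systematic offsets between devices no longer dominate the pooled data.
print(len(pooled))
```

A production pipeline for an AI-ready data set would estimate the cross-scanner correlations the study describes rather than standardize each device in isolation; this sketch only shows the basic rescaling idea.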
Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout A Decision-Theoretic Framework for Adaptive Simulation Experiments |
Terril Hurst Senior Engineering Fellow Raytheon Technologies |
Breakout | Session Recording |
| 2022 |
A Framework for Using Priors in a Continuum of Testing |
Victoria Sieck Deputy Director / Assistant Professor of Statistics Scientific Test & Analysis Techniques Center of Excellence (STAT COE) / Air Force Institute of Technology (AFIT) |
Session Recording |
| 2022 |
|
Breakout A New Method for Planning Full-Up System-Level (FUSL) Live Fire Tests |
Lindsey Butler Research Staff Member IDA |
Breakout | Session Recording |
| 2022 |
Short Course A Practitioner’s Guide to Advanced Topics in DOE |
Drew Landman Professor Old Dominion University |
Short Course | Materials | 2022 |
|
Breakout A Systems Perspective on Bringing Reliability and Prognostics to Machine Learning |
Tyler Cody Research Assistant Professor Virginia Tech National Security Institute |
Breakout | Session Recording |
| 2022 |
Breakout Adversaries and Airwaves – Compromising Wireless and Radio Frequency Communications |
Mark Herrera and Jason Schlup Research Staff Members IDA |
Breakout | Session Recording |
| 2022 |
Tutorial An Introduction to Data Visualization |
Christina Heinich AST, Data Systems NASA |
Tutorial | Session Recording |
| 2022 |
Breakout An Introduction to Sustainment: The Importance and Challenges of Analyzing System Readiness |
V. Bram Lillard and Megan L. Gelsinger Deputy Director, OED (VBL) and Research Staff Member (MLG) IDA |
Breakout | Session Recording |
| 2022 |
Breakout An Overview of NASA’s Low Boom Flight Demonstration |
Jonathan Rathsam Technical Lead NASA Langley Research Center |
Breakout | Session Recording |
| 2022 |
Analysis Apps for the Operational Tester |
William Raymond Whitledge Research Staff Member IDA |
Session Recording |
| 2022 |
|
Analysis of Target Location Error using Stochastic Differential Equations |
James Brownlow Mathematical Statistician USAF |
Session Recording |
| 2022 |
|
Breakout Applications for Monte Carlo Analysis within Job Shop Planning |
Dominik Alder Project Management & Planning Operations Rep Senior Staff Lockheed Martin, Program Management |
Breakout | Session Recording |
| 2022 |
Breakout Applications of Equivalence Testing in T&E |
Sarah Burke Analyst LinQuest |
Breakout | Session Recording |
| 2022 |
Breakout Applying Design of Experiments to Cyber Testing |
J. Michael Gilmore Research Staff Member IDA |
Breakout | Session Recording |
| 2022 |
Keynote April 27 Opening Keynote II |
Jill Marlowe Digital Transformation Officer, NASA |
Keynote | Session Recording |
| 2022 |
Keynote April 27th Opening Keynote I |
Wendy Masiello President, Wendy Mas Consulting LLC |
Keynote | Session Recording |
| 2022 |
Keynote April 28 Opening Keynote |
Nickolas Guertin Director, Operational Test & Evaluation, OSD/DOT&E |
Keynote | Session Recording | 2022 |
|
Assurance Techniques for Learning Enabled Autonomous Systems which Aid Systems Engineering |
Christian Ellis Journeyman Fellow Army Research Laboratory / University of Mass. Dartmouth |
Session Recording |
| 2022 |
|
Bayesian Estimation for Covariate Defect Detection Model Based on Discrete Cox Proportional Hazards |
Priscila Silva Graduate Student University of Massachusetts Dartmouth |
Session Recording |
| 2022 |
|
Building Bridges: a Case Study of Assisting a Program from the Outside |
Anthony Sgambellone Huntington Ingalls Industries |
Session Recording |
| 2022 |
|
Tutorial Case Study on Applying Sequential Methods in Operational Testing |
Keyla Pagán-Rivera Research Staff Member IDA |
Tutorial | Session Recording |
| 2022 |
Short Course Categorical Data Analysis |
Chris Franck Assistant Professor Virginia Tech |
Short Course | Materials | 2022 |
|
Closing Remarks |
Alyson Wilson NCSU |
2022 |
|||
Cloud Computing for Computational Fluid Dynamics (CFD) in T&E |
Neil Ashton WW Principal CFD Specialist Solution Architect, HPC Amazon Web Services |
Session Recording |
| 2022 |
|
Poster Combining data from scanners to inform cadet physical performance |
Nicholas Ashby Student United States Military Academy |
Poster | Session Recording |
| 2022 |