Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Poster Next Gen Breaching Technology: A Case Study in Deterministic Binary Response Emulation (Abstract)
Combat Capabilities Development Command Armaments Center (DEVCOM AC) is developing the next-generation breaching munition, a replacement for the M58 Mine Clearing Line Charge. A series of M&S experiments was conducted to aid in the design of mine-neutralizing submunitions, utilizing space-filling designs, support vector machines, and hyper-parameter optimization. A probabilistic meta-model of the FEA-simulated performance data was generated with Platt scaling to facilitate optimization, which was then used to generate several candidate designs for follow-up live testing. This paper details the procedure used to iteratively explore and extract information from a deterministic process with a binary response. (An illustrative code sketch follows this entry.) |
Eli Golden, Statistician, US Army DEVCOM Armaments Center (bio)
Eli Golden, GStat, is a statistician in the Systems Analysis Division of US Army Combat Capabilities Development Command Armaments Center (DEVCOM AC). He is an experienced practitioner of Design of Experiments, empirical model-building, and data visualization, focusing on the domains of conventional munition development, market research, advanced manufacturing, and modeling and simulation, and is an instructor/content curator for the Probability and Statistics courses at the Armament Center's Armament Graduate School (AGS). Mr. Golden has an M.S. in Applied Statistics from New Jersey Institute of Technology, an M.S. in Mechanical Engineering from Stevens Institute of Technology, and a B.S. in Mechanical Engineering with a minor in Mathematics from Lafayette College. |
Poster | Session Recording |
| 2022 |
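The sketch below illustrates the emulation step described in the abstract above: a support-vector classifier fit to deterministic pass/fail outcomes, with Platt scaling (scikit-learn's `probability=True` option) supplying calibrated success probabilities that a downstream optimizer can rank candidate designs by. The design factors, response rule, and parameter settings are illustrative assumptions, not the DEVCOM AC models or data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for FEA runs over a space-filling design (illustrative only):
# two design factors scaled to [0, 1] and a deterministic pass/fail response.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                   # design points (a real study would use a space-filling design)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = target neutralized, 0 = not

# probability=True wraps the SVM decision function with Platt scaling (a logistic
# sigmoid fit via internal cross-validation), yielding a probabilistic meta-model.
meta_model = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True, random_state=0)
meta_model.fit(X, y)

# The calibrated probability surface can now drive candidate-design selection.
candidates = rng.uniform(size=(1000, 2))
p_success = meta_model.predict_proba(candidates)[:, 1]
best = candidates[np.argmax(p_success)]
print("best candidate:", best, "estimated P(success):", round(p_success.max(), 3))
```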
Poster Nonparametric multivariate profile monitoring using regression trees (Abstract)
Monitoring noisy profiles for changes in behavior can be used to validate whether a process is operating under normal conditions over time. Change-point detection and estimation in sequences of multivariate functional observations is a common approach to monitoring such profiles. We propose a nonparametric method that uses Classification and Regression Trees (CART) to build a sequence of regression trees and applies the Kolmogorov-Smirnov statistic to monitor profile behavior. Our novel method compares favorably to existing methods in the literature. (An illustrative code sketch follows this entry.) |
Daniel A. Timme, PhD Candidate, Florida State University (bio)
Daniel A. Timme is currently pursuing his PhD in Statistics at Florida State University. Mr. Timme graduated with a BS in Mathematics from the University of Houston and a BS in Business Management from the University of Houston-Clear Lake. He earned an MS in Systems Engineering with a focus in Reliability and a second MS in Space Systems with focuses in Space Vehicle Design and Astrodynamics, both from the Air Force Institute of Technology. Mr. Timme's research interest is primarily focused in the areas of reliability engineering, applied mathematics and statistics, optimization, and regression. |
Poster | Session Recording |
| 2022 |
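A minimal sketch of the kind of monitoring scheme the abstract above describes, pairing a regression tree with a two-sample Kolmogorov-Smirnov comparison of residuals; it is illustrative only and is not the authors' exact procedure or control-limit calibration.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200).reshape(-1, 1)

def profile(shift=0.0):
    """One noisy functional observation; `shift` injects an out-of-control change."""
    return np.sin(2 * np.pi * x.ravel()) + shift + rng.normal(scale=0.1, size=x.shape[0])

# Phase I: fit a regression tree to an in-control profile and keep its residuals.
baseline = profile()
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(x, baseline)
ref_resid = baseline - tree.predict(x)

# Phase II: compare each new profile's residuals (from the same tree) against the
# reference residuals with the Kolmogorov-Smirnov statistic and flag large values.
for label, new in [("in-control", profile()), ("shifted", profile(shift=0.3))]:
    stat, pval = ks_2samp(ref_resid, new - tree.predict(x))
    print(f"{label}: KS statistic = {stat:.3f}, p-value = {pval:.2e}")
```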
Opening Remarks |
Bram Lillard, Director, OED, IDA (bio)
V. Bram Lillard assumed the role of director of the Operational Evaluation Division (OED) in early 2022. In this position, Bram provides strategic leadership, project oversight, and direction for the division’s research program, which primarily supports the Director, Operational Test and Evaluation (DOT&E) within the Office of the Secretary of Defense. He also oversees OED’s contributions to strategic studies, weapon system sustainment analyses, and cybersecurity evaluations for DOD and anti-terrorism technology evaluations for the Department of Homeland Security. Bram joined IDA in 2004 as a member of the research staff. In 2013-14, he was the acting science advisor to DOT&E. He then served as OED’s assistant director in 2014-21, ascending to deputy director in late 2021. Prior to his current position, Bram was embedded in the Pentagon where he led IDA’s analytical support to the Cost Assessment and Program Evaluation office within the Office of the Secretary of Defense. He previously led OED’s Naval Warfare Group in support of DOT&E. In his early years at IDA, Bram was the submarine warfare project lead for DOT&E programs. He is an expert in quantitative data analysis methods, test design, naval warfare systems and operations and sustainment analyses for Defense Department weapon systems. Bram has both a doctorate and a master’s degree in physics from the University of Maryland. He earned his bachelor’s degree in physics and mathematics from State University of New York at Geneseo. Bram is also a graduate of the Harvard Kennedy School’s Senior Executives in National and International Security program, and he was awarded IDA’s prestigious Goodpaster Award for Excellence in Research in 2017. |
2022 |
Opening Remarks |
Norton Schwartz, President, IDA (bio)
Norton A. Schwartz serves as President of the Institute for Defense Analyses (IDA), a nonprofit corporation operating in the public interest. IDA manages three Federally Funded Research and Development Centers that answer the most challenging U.S. security and science policy questions with objective analysis leveraging extraordinary scientific, technical, and analytic expertise. At IDA, General Schwartz (U.S. Air Force, retired) directs the activities of more than 1,000 scientists and technologists employed by IDA. General Schwartz has a long and prestigious career of service and leadership that spans over 5 decades. He was most recently President and CEO of Business Executives for National Security (BENS). During his 6-year tenure at BENS, he was also a member of IDA’s Board of Trustees. Prior to retiring from the U.S. Air Force, General Schwartz served as the 19th Chief of Staff of the U.S. Air Force from 2008 to 2012. He previously held senior joint positions as Director of the Joint Staff and as the Commander of the U.S. Transportation Command. He began his service as a pilot with the airlift evacuation out of Vietnam in 1975. General Schwartz is a U.S. Air Force Academy graduate and holds a master’s degree in business administration from Central Michigan University. He is also an alumnus of the Armed Forces Staff College and the National War College. He is a member of the Council on Foreign Relations and a 1994 Fellow of Massachusetts Institute of Technology’s Seminar XXI. General Schwartz has been married to Suzie since 1981. |
Session Recording | 2022 |
Short Course Operational Cyber Resilience in Engineering and Systems Test (Abstract)
Cyber resilience is the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources. As a property defined in terms of system behavior, cyber resilience presents special challenges from a test and evaluation perspective. Typically, system requirements are specified in terms of technology function and can be tested through manipulation of the systems operational environment, controls, or inputs. Resilience, however, is a high-level property relating to the capacity of the system to recover from unwanted loss of function. There are no commonly accepted definitions of how to measure this system property. Moreover, by design, resilience behaviors are exhibited only when the system has lost critical functions. The implication is that the test and evaluation of requirements for operational resilience will involve creating, emulating, or reasoning about the internal systems states that might result from successful attacks. This tutorial will introduce the Framework for Operational Resilience in Engineering and System Test (FOREST), a framework that supports the derivation of measures and metrics for developmental and operational test plans and activities for cyber resilience in cyber-physical systems. FOREST aims to provide insights to support the development of testable requirements for cyber resilience and the design of systems with immunity to new vulnerabilities and threat tactics. FOREST’s elements range from attack sensing to the existence and characterization of resilience modes of operation to operator decisions and forensic evaluation. The framework is meant to be a reusable, repeatable, and practical framework that calls for system designers to describe a system’s operational resilience design in a designated, partitioned manner that aligns with resilience requirements and directly relates to the development of associated test concepts and performance metrics. The tutorial introduces model-based systems engineering (MBSE) tools and associated engineering methods that complement FOREST and support the architecting, design, or engineering aspects of cyber resilience. Specifically, it features Mission Aware, a MBSE meta-model and associated requirements and architecture analysis process targeted to decomposition of loss scenarios into testable resilience features in a system design. FOREST, Mission Aware, and associated methodologies and digital engineering tools will be applied to two case studies for cyber resilience: (1) Silverfish, a hypothetical networked munition system and (2) an oil distribution pipeline. The case studies will lead to derivations of requirements for cyber resilience and survivability, along with associated measures and metrics. |
Peter Beling, Professor, Virginia Tech (bio)
Peter A. Beling is a professor in the Grado Department of Industrial and Systems Engineering and associate director of the Intelligent Systems Division in the Virginia Tech National Security Institute. Dr. Beling's research interests lie at the intersections of systems engineering and artificial intelligence (AI) and include AI adoption, reinforcement learning, transfer learning, and digital engineering. He has contributed extensively to the development of methodologies and tools in support of cyber resilience in military systems. He serves on the Research Council of the Systems Engineering Research Center (SERC), a University Affiliated Research Center for the Department of Defense.
Tom McDermott is the Deputy Director and Chief Technology Officer of the Systems Engineering Research Center at Stevens Institute of Technology in Hoboken, NJ. He leads research on Digital Engineering transformation, education, security, and artificial intelligence applications. Mr. McDermott also teaches system architecture concepts, systems thinking and decision making, and engineering leadership for universities, government, and industry. He serves on the INCOSE Board of Directors as Director of Strategic Integration.
Tim Sherburne is a research associate in the Intelligent Systems Division of the Virginia Tech National Security Institute. Sherburne was previously a member of the systems engineering staff at the University of Virginia, supporting Mission Aware research through rapid prototyping of cyber resilient solutions and model-based systems engineering (MBSE) specifications. Prior to joining the University of Virginia, he worked at Motorola Solutions in various software development and systems engineering roles, defining and building mission critical public safety communications systems. |
Short Course |
| 2022 |
Poster Optimal Designs for Multiple Response Distributions (Abstract)
Designed experiments can be a powerful tool for gaining fundamental understanding of systems and processes and for maintaining or optimizing them. An experiment usually has multiple performance and quality metrics of interest, and these multiple responses may include data from nonnormal distributions, such as binary or count data. A design that is optimal for a normal response can be very different from a design that is optimal for a nonnormal response. This work includes a two-phase method that helps experimenters identify a hybrid design for a multiple response problem. Mixture and optimal design methods are used with a weighted optimality criterion for a three-response problem that includes a normal, a binary, and a Poisson model, but could be generalized to an arbitrary number and combination of responses belonging to the exponential family. A mixture design is utilized to identify the optimal weights in the criterion presented. (An illustrative code sketch follows this entry.) |
Brittany Fischer, PhD Candidate, Arizona State University (bio)
Brittany Fischer is a PhD candidate in industrial engineering at Arizona State University. Prior to ASU, she received her bachelor’s and master’s degrees in statistics from Pennsylvania State University and worked as a statistical engineer for 5 years at Corning Incorporated. |
Poster | Session Recording |
| 2022 |
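As a companion to the abstract above, the sketch below evaluates a weighted design criterion that blends scaled log-determinants of Fisher information matrices for a normal, a binomial (logit), and a Poisson (log) response on a single factor. The assumed coefficients, weights, and candidate designs are invented for illustration and are not the criterion or algorithm from the poster.

```python
import numpy as np

def info_matrix(X, beta, family):
    """GLM Fisher information X'WX evaluated at assumed coefficients beta."""
    eta = X @ beta
    if family == "normal":
        w = np.ones_like(eta)              # identity link, constant variance
    elif family == "binomial":             # logit link
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)
    else:                                  # "poisson", log link
        w = np.exp(eta)
    return X.T @ (w[:, None] * X)

def weighted_criterion(design_pts, weights, betas):
    """Weighted sum of scaled log-determinants (one simple way to blend D-optimality)."""
    X = np.column_stack([np.ones_like(design_pts), design_pts])   # intercept + one factor
    total = 0.0
    for wgt, (family, beta) in zip(weights, betas.items()):
        total += wgt * np.log(np.linalg.det(info_matrix(X, beta, family))) / X.shape[1]
    return total

betas = {"normal": np.array([0.0, 1.0]),
         "binomial": np.array([0.0, 2.0]),
         "poisson": np.array([0.5, 1.0])}
weights = [0.4, 0.3, 0.3]   # e.g., one row of a mixture design over the weight simplex

# Compare two candidate 6-run designs on a coded factor in [-1, 1]; larger is better.
designs = {"endpoints only": np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]),
           "spread": np.linspace(-1.0, 1.0, 6)}
for name, d in designs.items():
    print(f"{name:15s} weighted criterion = {weighted_criterion(d, weights, betas):.3f}")
```

Sweeping the weight vector over a mixture (simplex) design and re-optimizing the points for each weight vector is one way to explore the trade-off the abstract describes.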
Breakout Orbital Debris Effects Prediction Tool for Satellite Constellations (Abstract)
Based on observations gathered from the IDA Forum on Orbital Debris (OD) Risks and Challenges (October 8-9, 2020), DOT&E needed first-order predictive tools to evaluate the effects of orbital debris on mission risk, catastrophic collision, and collateral damage to DOD spacecraft and other orbital assets – either from unintentional or intentional [Anti-Satellite (ASAT)] collisions. This lack of modeling capability hindered DOT&E's ability to evaluate the risk to operational effectiveness and survivability of individual satellites and large constellations, as well as risks to the overall use of space assets in the future. Part 1 of this presentation describes an IDA-derived Excel-based tool (SatPen) for determining the probability and mission effects of >1mm orbital debris impacts and penetration on individual satellites in low Earth orbit (LEO). IDA estimated the likelihood of satellite mission loss using a Starlink-like satellite as a case study and NASA's ORDEM 3.1 orbital debris environment as an input, supplemented with typical damage prediction equations to support mission loss predictions. Part 2 of this presentation describes an IDA-derived technique (DebProp) to evaluate the debris-propagating effects of large, trackable debris (>5 cm) or antisatellite weapons colliding with satellites within constellations. IDA researchers again used a Starlink-like satellite as a case study and worked with Stellingwerf Associates to modify the Smooth Particle Hydrodynamic Code (SPHC) in order to predict the number and direction of fragments following a collision by a tracked satellite fragment. The result is a file format that is readable as an input file for predicting orbital stability or debris re-entry for thousands of created particles. By pairing these techniques, IDA can predict additional short-term and long-term OD-induced losses to other satellites in the constellation and conduct long-term debris growth studies. (An illustrative code sketch follows this entry.) |
Joel Williamsen, Research Staff Member, IDA (bio)
FIELDS OF EXPERTISE: Air and space vehicle survivability, missile lethality, LFT&E, ballistic response, active protection systems, hypervelocity impact, space debris, crew and passenger casualty assessment.
EDUCATION HISTORY: 1993, Doctor of Philosophy in Systems Engineering, University of Alabama, Huntsville; 1989, Master of Science in Engineering Management, University of Alabama, Huntsville; 1983, Bachelor of Science in Mechanical Engineering, University of Nebraska.
EMPLOYMENT HISTORY: 2003 – Present, Research Staff Member, IDA, OED; 1998 – 2003, Director, Center for Space Systems Survivability, University of Denver; 1987 – 1998, Spacecraft Survivability Design, NASA-Marshall Space Flight Center, NASA; 1983 – 1987, Warhead Design, U.S. Army Missile Command, Research Development and Engineering Center, U.S. Army.
PROFESSIONAL ACTIVITIES: American Institute of Aeronautics and Astronautics (Chair, Survivability Technical Committee, 2001-2003); Tau Beta Pi Engineering Honorary Society; Pi Tau Sigma Mechanical Engineering Honorary Society.
HONORS: IDA Welch Award, 2020. National AIAA Survivability Award, 2012; citation reads, “For outstanding achievement in enhancing spacecraft, aircraft, and crew survivability through advanced meteoroid/orbital debris shield designs, on-orbit repair techniques, risk assessment tools, and live fire evaluation.” NASA Astronauts’ Personal Achievement Award (Silver Snoopy), 2001. NASA Exceptional Achievement Medal, Spacecraft Survivability Analysis, 1995. Army Research and Development Achievement Award, 1985. Patents and Statutory Invention Registrations: Enhanced Hypervelocity Impact Shield, 1997 (joint); Pressure Wall Patch, 1994 (joint); Advanced Anti-Tank Airframe Configuration Tandem Warhead Missile, 1991 (joint); Extendible Shoulder Fired Anti-tank Missile, 1990 (joint); Particulated Density Shaped Charge Liner, 1987; High Velocity Rotating Shaped Charge Warhead, 1986; Missile Canting Shaped Charge Warhead, 1985 (joint). NASA Group Achievement Awards (Space Station), 1992-1994. NASA Group Achievement Awards (Hubble System Review Team), 1989, 1990. Outstanding Performance Awards, 1984-1988, 1990, 1992-1997. First NASA-MSFC representative to International Space University, 1989. |
Breakout | Session Recording | 2022 |
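For context on the abstract above, the standard first-order debris risk calculation treats impacts as a Poisson process with expected count equal to flux times exposed area times time; tools like SatPen layer damage equations on top of this. The sketch below uses that textbook relationship with placeholder numbers, not SatPen's actual flux tables or damage equations.

```python
import numpy as np

# First-order orbital-debris impact risk: expected count N = flux * exposed area * time.
# Flux values here are placeholders; in practice they come from an environment model
# such as NASA ORDEM for the satellite's altitude and inclination.
flux_per_m2_yr = 1.0e-4      # illustrative flux of >1 mm debris at the satellite's orbit
area_m2 = 10.0               # exposed cross-sectional area of the satellite
mission_years = 5.0

expected_impacts = flux_per_m2_yr * area_m2 * mission_years
p_at_least_one = 1.0 - np.exp(-expected_impacts)

# If only a fraction of impacts penetrate and cause mission loss (damage prediction
# equations would supply this fraction), scale the expected count before the Poisson step.
p_penetration_given_impact = 0.2   # illustrative
p_mission_loss = 1.0 - np.exp(-expected_impacts * p_penetration_given_impact)

print(f"E[impacts]          = {expected_impacts:.4f}")
print(f"P(>=1 impact)       = {p_at_least_one:.4%}")
print(f"P(>=1 penetration)  = {p_mission_loss:.4%}")
```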
Breakout Panelist 1 |
Heather Wojton, Chief Data Officer, IDA (bio)
Heather Wojton is Director of Data Strategy and Chief Data Officer at IDA, a role she assumed in 2021. In this position, Heather provides strategic leadership, project management, and direction for the corporation’s data strategy. She is responsible for enhancing IDA’s ability to efficiently and effectively accomplish research and business operations by assessing and evolving data systems, data management infrastructure, and data-related practices. Heather joined IDA in 2015 as a researcher in the Operational Evaluation Division of IDA’s Systems and Analyses Center. She is an expert in quantitative research methods, including test design and program evaluation. She held numerous research and leadership roles before being named an assistant director in the Operational Evaluation Division. As a researcher at IDA, Heather led IDA’s test science research program that facilitates data-driven decision-making within the Department of Defense (DOD) by advancing statistical, behavioral, and data science methodologies and applying them to the evaluation of defense acquisition programs. Heather’s other accomplishments include advancing methods for test design, modeling and simulation validation, data management and curation, and artificial intelligence testing. In this role, she worked closely with academic and DOD partners to adapt existing test design and evaluation methods for DoD use and develop novel methods where gaps persist. Heather has a doctorate in experimental psychology from the University of Toledo and a bachelor’s degree in research psychology from Marietta College, where she was a member of the McDonough International Leadership Program. She is a graduate of the George Washington University National Security Studies Senior Management Program and the Maxwell School National Security Management Course at Syracuse University. |
Breakout | Session Recording | 2022 |
Breakout Panelist 2 |
Laura Freeman, Director, Intelligent Systems Division, Virginia Tech (bio)
Dr. Laura Freeman is a Research Associate Professor of Statistics and is dual-hatted as the Director of the Intelligent Systems Lab, Virginia Tech National Security Institute, and the Director of the Information Sciences and Analytics Division, Virginia Tech Applied Research Corporation (VT-ARC). Her research leverages experimental methods for conducting research that brings together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She develops new methods for test and evaluation focusing on emerging system technology. In her role with VT-ARC she focuses on transitioning emerging research in these areas to solve challenges in Defense and Homeland Security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. She is the Assistant Dean for Research for the College of Science; in that capacity she works to shape research directions and collaborations across the College of Science in the Greater Washington D.C. area. Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on design and analysis of experiments for reliability data. |
Breakout | Session Recording | 2022 |
Panelist 3 |
Jane Pinelis, Joint Artificial Intelligence Center (bio)
Dr. Jane Pinelis is the Chief of AI Assurance at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) as well as Responsible AI (RAI) implementation for JAIC capabilities, along with the development of AI Assurance products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI's Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition, and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of the Office of the Director, Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book, titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Session Recording | 2022 |
Panelist 4 |
Calvin Robinson, NASA (bio)
Calvin Robinson is a Data Architect within the Information and Applications Division at NASA Glenn Research Center. He has over 10 years of experience supporting data analysis and simulation development for research, and currently supports several key data management efforts to make data more discoverable and aligned with FAIR principles. Calvin oversees the Center's Information Management Program and supports individuals leading strategic AI/ML efforts within the Agency. Calvin holds a BS in Computer Science and Engineering from the University of Toledo. |
Session Recording | 2022 |
Predicting Trust in Automated Systems: Validation of the Trust of Automated Systems Test (Abstract)
The number of people using autonomous systems for everyday tasks has increased steadily since the 1960s and has grown dramatically with the invention of smart devices that can be controlled via smartphone. Within the defense community, automated systems are currently used to perform search and rescue missions and to assume control of aircraft to avoid ground collision. Until recently, researchers have only been able to gain insight into trust levels by observing a human's reliance on the system, so it was apparent that researchers needed a validated method of quantifying how much an individual trusts the automated system they are using. IDA researchers developed the Trust of Automated Systems Test (TOAST) scale to serve as a validated scale capable of measuring how much an individual trusts a system. This presentation will outline the nine-item TOAST scale's understanding and performance elements and how it can effectively be used in a defense setting. We believe that this scale should be used to evaluate the trust level of any human using any system, including predicting when operators will misuse or disuse complex, automated, and autonomous systems. |
Caitlan Fealing, Data Science Fellow, IDA (bio)
Caitlan Fealing is a Data Science Fellow within the Test Science group of OED. She has a Bachelor of Arts degree in Mathematics, Economics, and Psychology from Williams College. Caitlan uses her background and focus on data science to create data visualizations, support OED’s program management databases, and contribute to the development of the many resources available on IDA’s Test Science website. |
Session Recording |
| 2022 |
Profile Monitoring via Eigenvector Perturbation (Abstract)
Control charts are often used to monitor the quality characteristics of a process over time to ensure undesirable behavior is quickly detected. The escalating complexity of processes we wish to monitor spurs the need for more flexible control charts such as those used in profile monitoring. Additionally, designing a control chart that has an acceptable false alarm rate for a practitioner is a common challenge. Alarm fatigue can occur if the sampling rate is high (say, once a millisecond) and the control chart is calibrated to an average in-control run length (ARL0) of 200 or 370, as is often done in the literature. As alarm fatigue is not merely an annoyance but can degrade product quality, control chart designers should seek to minimize the false alarm rate. Unfortunately, reducing the false alarm rate typically comes at the cost of detection delay or average out-of-control run length (ARL1). Motivated by recent work on eigenvector perturbation theory, we develop a computationally fast control chart called the Eigenvector Perturbation Control Chart for nonparametric profile monitoring. The control chart monitors the l_2 perturbation of the leading eigenvector of a correlation matrix and requires only a sample of known in-control profiles to determine control limits. Through a simulation study we demonstrate that it is able to outperform its competition by achieving an ARL1 close to or equal to 1 even when the control limits result in a large ARL0 on the order of 10^6. Additionally, non-zero false alarm rates with a change point after 10^4 in-control observations were only observed in scenarios that are either pathological or truly difficult for a correlation-based monitoring scheme. (An illustrative code sketch follows this entry.) |
Takayuki Iguchi, PhD Student, Florida State University (bio)
Takayuki Iguchi is a Captain in the US Air Force and is currently a PhD student under the direction of Dr. Eric Chicken at Florida State University. |
Session Recording |
| 2022 |
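A toy version of the chart described above: the statistic is the l_2 distance between the leading eigenvector of the correlation matrix estimated from a window of new profiles and the leading eigenvector estimated from in-control profiles. The profile model, window size, and resampled control limit are illustrative assumptions rather than the authors' calibration.

```python
import numpy as np

def leading_eigvec(profiles):
    """Leading eigenvector of the point-wise correlation matrix of (profiles x points) data."""
    corr = np.corrcoef(profiles, rowvar=False)
    vecs = np.linalg.eigh(corr)[1]
    v = vecs[:, -1]
    return v if v[np.argmax(np.abs(v))] > 0 else -v   # fix sign so vectors are comparable

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)

def window(n, out_of_control=False):
    """A window of noisy profiles; out-of-control windows gain a second random mode."""
    prof = (1 + 0.2 * rng.normal(size=(n, 1))) * np.sin(2 * np.pi * x) + rng.normal(0, 0.05, (n, x.size))
    if out_of_control:
        prof += 0.5 * rng.normal(size=(n, 1)) * np.cos(2 * np.pi * x)
    return prof

# Phase I: in-control profiles define the reference eigenvector and a resampled control limit.
ref = window(200)
v0 = leading_eigvec(ref)
boot = [np.linalg.norm(leading_eigvec(ref[rng.integers(0, 200, 50)]) - v0) for _ in range(200)]
ucl = np.quantile(boot, 0.995)

# Phase II: chart the eigenvector perturbation of each incoming window.
for label, w in [("in-control", window(50)), ("out-of-control", window(50, True))]:
    stat = np.linalg.norm(leading_eigvec(w) - v0)
    print(f"{label:15s} ||v - v0||_2 = {stat:.3f}  (UCL = {ucl:.3f})  signal = {stat > ucl}")
```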
Quantifying the Impact of Staged Rollout Policies on Software Process and Product Metrics (Abstract)
Software processes define specific sequences of activities performed to effectively produce software, whereas tools provide concrete computational artifacts by which these processes are carried out. Tool independent modeling of processes and related practices enable quantitative assessment of software and competing approaches. This talk presents a framework to assess an approach employed in modern software development known as staged rollout, which releases new or updated software features to a fraction of the user base in order to accelerate defect discovery without imposing the possibility of failure on all users. The framework quantifies process metrics such as delivery time and product metrics, including reliability, availability, security, and safety, enabling tradeoff analysis to objectively assess the quality of software produced by vendors, establish baselines, and guide process and product improvement. Failure data collected during software testing is employed to emulate the approach as if the project were ongoing. The underlying problem is to identify a policy that decides when to perform various stages of rollout based on the software’s failure intensity. The illustrations examine how alternative policies impose tradeoffs between two or more of the process and product metrics. |
Lance Fiondella, Associate Professor, University of Massachusetts Dartmouth (bio)
Lance Fiondella is an associate professor of Electrical and Computer Engineering at the University of Massachusetts Dartmouth and the Director of the UMassD Cybersecurity Center, an NSA/DHS-designated Center of Academic Excellence in Cyber Research. |
Session Recording |
| 2022 |
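The sketch below emulates one very simple rollout policy of the kind the abstract describes: expand the exposed user fraction only after the empirically estimated failure intensity drops below a threshold. The failure times, stage fractions, estimator, and threshold are invented for illustration and do not represent the talk's framework or its process/product metrics.

```python
import numpy as np

# Illustrative failure times (in days) collected during testing of a release.
failure_times = np.array([1, 2, 3, 5, 6, 8, 11, 15, 20, 27, 35, 46, 60, 78, 100.0])

stages = [0.01, 0.10, 0.50, 1.00]    # fraction of the user base exposed at each stage
intensity_threshold = 0.15           # failures/day required before expanding the rollout
window = 5                           # trailing failures used to estimate intensity

stage_idx, rollout_schedule = 0, []
for i in range(window, len(failure_times)):
    # Simple empirical failure intensity: failures per day over the trailing window.
    span = failure_times[i] - failure_times[i - window]
    intensity = window / span
    if intensity < intensity_threshold and stage_idx < len(stages) - 1:
        stage_idx += 1
        rollout_schedule.append((failure_times[i], stages[stage_idx]))

print("day -> new exposed fraction")
for day, frac in rollout_schedule:
    print(f"{day:5.0f} -> {frac:.0%}")
```

Delivery time (how late the 100% stage arrives) and reliability exposure (how many users see early failures) can then be traded off by varying the threshold, which is the kind of policy comparison the abstract outlines.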
Risk Comparison and Planning for Bayesian Assurance Tests (Abstract)
Designing a Bayesian assurance test plan requires choosing a test plan that guarantees a product of interest is good enough to satisfy the consumer's criteria, but not so demanding that an acceptable product is likely to fail it, which would be the producer's concern. Bayesian assurance tests are especially useful because they can incorporate previous product information in the test planning and explicitly control levels of risk for the consumer and producer. We demonstrate an algorithm for efficiently computing a test plan given desired levels of risk in binomial and exponential testing. Numerical comparisons with the Operating Characteristic (OC) curve, Probability Ratio Sequential Test (PRST), and a simulation-based Bayesian sample size determination approach are also considered. (An illustrative code sketch follows this entry.) |
Hyoshin Kim, North Carolina State University (bio)
Hyoshin Kim received her B.Ec. in Statistics from Sungkyunkwan University, South Korea, in 2017, and her M.S. in Statistics from Seoul National University, South Korea, in 2019. She is currently a third-year Ph.D. student in the Department of Statistics at North Carolina State University. Her research interests are Bayesian assurance testing and Bayesian clustering algorithms for high-dimensional correlated outcomes. |
Session Recording |
| 2022 |
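For concreteness, the sketch below computes posterior consumer's and producer's risks for binomial test plans under a beta prior, one common formulation of the quantities discussed in the abstract above; the prior, requirement thresholds, and exact risk definitions are illustrative assumptions and may differ from the paper's algorithm.

```python
import numpy as np
from scipy.stats import beta, binom
from scipy.integrate import quad

a, b = 8.0, 2.0               # Beta prior on reliability R from previous product data
R_low, R_high = 0.80, 0.90    # unacceptable below R_low; clearly acceptable above R_high

def risks(n, c):
    """Posterior consumer's/producer's risks for the plan: n trials, accept if failures <= c."""
    p_accept = lambda r: binom.cdf(c, n, 1.0 - r)               # P(pass | reliability r)
    joint_pass = lambda r: p_accept(r) * beta.pdf(r, a, b)
    p_pass = quad(joint_pass, 0, 1)[0]
    consumer = quad(joint_pass, 0, R_low)[0] / p_pass           # P(R < R_low | pass)
    joint_fail = lambda r: (1.0 - p_accept(r)) * beta.pdf(r, a, b)
    producer = quad(joint_fail, R_high, 1)[0] / (1.0 - p_pass)  # P(R > R_high | fail)
    return consumer, producer

# Scan a few plans and flag those meeting both risk targets (<= 0.05 and <= 0.10 here).
for n in (10, 20, 40, 80):
    for c in range(0, 4):
        cr, pr = risks(n, c)
        flag = "ok" if (cr <= 0.05 and pr <= 0.10) else "  "
        print(f"n={n:3d} c={c}  consumer={cr:.3f}  producer={pr:.3f}  {flag}")
```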
Breakout Safe Machine Learning Prediction and Optimization via Extrapolation Control (Abstract)
Uncontrolled model extrapolation leads to two serious kinds of errors: (1) the model may be completely invalid far from the data, and (2) the combinations of variable values may not be physically realizable. Optimizing models that are fit to observational data can, without any warning, lead to extrapolated solutions that are of no practical use. In this presentation we introduce a general approach to identifying extrapolation based on a regularized Hotelling T-squared metric. The metric is robust to certain kinds of messy data and can handle models with both continuous and categorical inputs. The extrapolation model is intended to be used in parallel with a machine learning model, to identify when the machine learning model is being applied to data that are not close to that model's training set, or as a non-extrapolation constraint when optimizing the model. The methodology described was introduced into the JMP Pro 16 Profiler. (An illustrative code sketch follows this entry.) |
Tom Donnelly and Laura Lancaster, JMP Statistical Discovery LLC |
Breakout | Session Recording |
| 2022 |
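A minimal continuous-inputs sketch of the idea described above, assuming a simple ridge-style regularization of the covariance and an empirical training-set quantile as the extrapolation limit; JMP's Profiler implementation additionally handles categorical inputs and messy data and is not reproduced here.

```python
import numpy as np

class ExtrapolationGuard:
    """Flag points far from the training cloud via a regularized Hotelling T^2."""

    def __init__(self, lam=1e-3, quantile=0.999):
        self.lam, self.quantile = lam, quantile

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        # Ridge-style regularization keeps the inverse stable for collinear inputs.
        ridge = self.lam * np.trace(cov) / X.shape[1] * np.eye(X.shape[1])
        self.prec_ = np.linalg.inv(cov + ridge)
        self.limit_ = np.quantile(self.t2(X), self.quantile)   # empirical training limit
        return self

    def t2(self, X):
        d = X - self.mean_
        return np.einsum("ij,jk,ik->i", d, self.prec_, d)

    def is_extrapolation(self, X):
        return self.t2(X) > self.limit_

rng = np.random.default_rng(3)
X_train = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=500)
guard = ExtrapolationGuard().fit(X_train)

X_new = np.array([[0.5, 0.4],      # inside the training cloud
                  [2.0, -2.0]])    # off the correlation axis: likely not physically realizable
print(guard.t2(X_new), guard.is_extrapolation(X_new))
```

Used in parallel with a machine learning model, the flag can veto predictions, or the T^2 limit can serve as a constraint during optimization over the model's inputs.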
Sparse Models for Detecting Malicious Behavior in OpTC (Abstract)
Host-based sensors are standard tools for generating event data to detect malicious activity on a network. There is often interest in detecting activity using as few event classes as possible in order to minimize host processing slowdowns. Using DARPA's Operationally Transparent Cyber (OpTC) Data Release, we consider the problem of detecting malicious activity using event counts aggregated over five-minute windows. Event counts are categorized by eleven features according to MITRE CAR data model objects. In the supervised setting, we use regression trees with all features to show that malicious activity can be detected at a true positive rate above 90% with a negligible false positive rate. Using forward and exhaustive search techniques, we show the same performance can be obtained using a sparse model with only three features. In the unsupervised setting, we show that the isolation forest algorithm is somewhat successful at detecting malicious activity, and that a sparse three-feature model performs comparably. Finally, we consider various search criteria for identifying sparse models and demonstrate that the RMSE criterion is generally optimal. (An illustrative code sketch follows this entry.) |
Andrew Mastin, Operations Research Scientist, Lawrence Livermore National Laboratory (bio)
Andrew Mastin is an Operations Research Scientist at Lawrence Livermore National Laboratory. He holds a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. His current research interests include cybersecurity, network interdiction, dynamic optimization, and game theory. |
| 2022 |
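The following sketch mirrors the exhaustive three-feature search described above, scoring regression trees on synthetic stand-in data by RMSE; the feature list follows MITRE CAR-style object names, but the counts and labels are simulated, not OpTC data.

```python
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
features = ["PROCESS", "FILE", "FLOW", "MODULE", "REGISTRY", "SHELL",
            "TASK", "THREAD", "USER_SESSION", "SERVICE", "HOST"]   # CAR-style object classes

# Synthetic stand-in for per-window event counts and a 0/1 malicious label.
X = rng.poisson(lam=20, size=(2000, len(features))).astype(float)
score = 0.03 * X[:, 0] + 0.05 * X[:, 2] + 0.04 * X[:, 5] + rng.normal(0, 0.3, 2000)
y = (score > 2.6).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Exhaustive search over all three-feature subsets; score each sparse tree by test RMSE.
best_rmse, best_subset = np.inf, None
for subset in combinations(range(len(features)), 3):
    cols = list(subset)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr[:, cols], y_tr)
    rmse = mean_squared_error(y_te, tree.predict(X_te[:, cols])) ** 0.5
    if rmse < best_rmse:
        best_rmse, best_subset = rmse, cols

print("best RMSE:", round(best_rmse, 4), "features:", [features[i] for i in best_subset])
```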
Breakout STAT and UQ Implementation Lessons Learned (Abstract)
David Harrison and Kelsey Cannon from Lockheed Martin Space will present on STAT and UQ implementation lessons learned within Lockheed Martin. Faced with training 60,000 engineers in statistics, David and Kelsey formed a plan to make STAT and UQ processes the standard at Lockheed Martin. The presentation includes a range of information from initial communications plan, to obtaining leader adoption, to training engineers across the corporation. Not all programs initially accepted this process, but implementation lessons have been learned over time as many compounding successes and savings have been recorded. ©2022 Lockheed Martin, all rights reserved |
Kelsey Cannon, Materials Engineer, Lockheed Martin (bio)
Kelsey Cannon is a Senior Research Scientist at Lockheed Martin Space, having previously completed a Specialty Engineering rotation program in which she worked in a variety of environments and roles. Kelsey currently works with David Harrison, the statistical engineering SME at LM, to implement technical principles and a communications plan throughout the corporation. Kelsey holds a BS in Metallurgical and Materials Engineering from the Colorado School of Mines and is nearing completion of an MS in Computer Science and Data Science. |
Breakout | Session Recording |
| 2022 |
Breakout Stochastic Modeling and Characterization of a Wearable-Sensor-Based Surveillance Network (Abstract)
Current disease outbreak surveillance practices reflect underlying delays in the detection and reporting of disease cases, relying on individuals who present symptoms to seek medical care and enter the health care system. To accelerate the detection of outbreaks resulting from possible bioterror attacks, we introduce a novel two-tier, human sentinel network (HSN) concept composed of wearable physiological sensors capable of pre-symptomatic illness detection, which prompt individuals to enter a confirmatory stage where diagnostic testing occurs at a certified laboratory. Both the wearable alerts and test results are reported automatically and immediately to a secure online platform via a dedicated application. The platform aggregates the information and makes it accessible to public health authorities. We evaluated the HSN against traditional public health surveillance practices for outbreak detection of 80 Bacillus anthracis (Ba) release scenarios in mid-town Manhattan, NYC. We completed an end-to-end modeling and analysis effort, including the calculation of anthrax exposures and doses based on computational atmospheric modeling of release dynamics, and development of a custom-built probabilistic model to simulate resulting wearable alerts, diagnostic test results, symptom onsets, and medical diagnoses for each exposed individual in the population. We developed a novel measure of network coverage, formulated new metrics to compare the performance of the HSN to public health surveillance practices, completed a Design of Experiments to optimize the test matrix, characterized the performant trade-space, and performed sensitivity analyses to identify the most important engineering parameters. Our results indicate that a network covering greater than ~10% of the population would yield approximately a 24-hour time advantage over public health surveillance practices in identifying outbreak onset, and provide a non-target-specific indication (in the form of a statistically aberrant number of wearable alerts) of approximately 36-hours; these earlier detections would enable faster and more effective public health and law enforcement responses to support incident characterization and decrease morbidity and mortality via post-exposure prophylaxis. |
Jane E. Valentine, Senior Biomedical Engineer, Johns Hopkins University Applied Physics Laboratory (bio)
Jane Valentine received her B.S. in Mathematics and French, and Ph.D. in Biomedical Engineering, both from Carnegie Mellon University. She then completed a post-doc in Mechanical Engineering at the University of Illinois, and a data science fellowship in the United Kingdom, working with a pharmaceutical company. She has been working at the Johns Hopkins University Applied Physics Laboratory since 2020, where she works on mathematical modeling and simulation, optimization, and data science, particularly in the areas of biosensors, knowledge graphs, and epidemiological modeling. |
Breakout | Session Recording |
| 2022 |
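To make the detection-time comparison concrete, here is a toy Monte Carlo in the spirit of the analysis above: covered individuals generate wearable alerts followed by confirmatory testing, while the traditional path runs through symptom onset, care-seeking, and reporting, and the network's advantage is the difference in earliest detection. Every rate, delay distribution, and population size is an invented placeholder, not the JHU/APL model or its anthrax-release scenarios.

```python
import numpy as np

rng = np.random.default_rng(5)

def detection_times(n_exposed=5000, coverage=0.10, n_trials=500):
    """Earliest outbreak indication (hours post-exposure) for the HSN vs. traditional surveillance."""
    hsn, traditional = [], []
    for _ in range(n_trials):
        # Wearable path: pre-symptomatic alert plus confirmatory diagnostic turnaround (covered only).
        covered = rng.random(n_exposed) < coverage
        alert = rng.normal(36, 8, n_exposed)
        confirm = alert + rng.normal(12, 3, n_exposed)
        hsn.append(confirm[covered].min() if covered.any() else np.inf)
        # Traditional path: symptom onset, then care-seeking/diagnosis/reporting delay.
        symptoms = rng.normal(72, 12, n_exposed)
        report = symptoms + rng.exponential(24, n_exposed)
        traditional.append(report.min())
    return np.array(hsn), np.array(traditional)

for cov in (0.01, 0.05, 0.10, 0.25):
    hsn, trad = detection_times(coverage=cov)
    print(f"coverage {cov:4.0%}: median advantage = {np.median(trad - hsn):6.1f} hours")
```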
Breakout Structural Dynamic Programming Methods for DOD Research (Abstract)
Structural dynamic programming models are a powerful tool to help guide policy under uncertainty. By creating a mathematical representation of the intertemporal optimization problem of interest, these models can answer questions that static models cannot address. Applications can be found from military personnel policy (how does future compensation affect retention now?) to inventory management (how many aircraft are needed to meet readiness objectives?). Recent advances in statistical methods and computational algorithms allow us to develop dynamic programming models of complex real-world problems that were previously too difficult to solve. |
Mikhail Smirnov, Research Staff Member, IDA (bio)
Mikhail earned his PhD in Economics from Johns Hopkins University in 2017 and recently joined the Strategy, Forces, and Resources Division at the Institute for Defense Analyses after spending several years at CNA. He specializes in structural and nonparametric econometrics, computational statistics, and machine learning, and his research has focused on questions related to retention and other personnel related decisions for the DOD. |
Breakout | Session Recording |
| 2022 |
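As a small illustration of the class of models described above, the sketch below solves a finite-horizon stay/leave retention problem by backward induction on the Bellman equation, showing how future compensation (here, a pension that vests late) shapes the decision to stay now. The pay profile, outside wage, and pension rule are invented for illustration and are not an IDA or DOD model.

```python
import numpy as np

# Minimal retention model: each year a service member chooses to stay or leave.
# Staying pays current compensation plus the discounted continuation value; leaving
# pays an annuitized outside wage plus any accrued pension. Numbers are illustrative.
years = np.arange(0, 30)              # years of service
pay = 50 + 2.5 * years                # military compensation by year of service ($K)
outside = 60.0                        # outside annual wage ($K)
pension_factor = 0.025                # retirement multiplier per year served beyond 20
beta = 0.95                           # discount factor
horizon = len(years)

V = np.zeros(horizon + 1)             # value function, terminal value 0
stay_policy = np.zeros(horizon, dtype=bool)
for t in reversed(range(horizon)):    # backward induction on the Bellman equation
    pension = pension_factor * pay[t] * max(years[t] - 20, 0) / (1 - beta)
    v_leave = outside / (1 - beta) + pension
    v_stay = pay[t] + beta * V[t + 1]
    stay_policy[t] = v_stay >= v_leave
    V[t] = max(v_stay, v_leave)

print("years of service at which the model predicts staying:", years[stay_policy])
```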
Tutorial Survey Dos and Don’ts (Abstract)
How many surveys have you been asked to fill out? How many did you actually complete? Why those surveys? Did you ever feel like the answer you wanted to mark was missing from the list of possible responses? Surveys can be a great tool for data collection if they are thoroughly planned out and well-designed. They are a relatively inexpensive way to collect a large amount of data from hard to reach populations. However, if they are poorly designed, the test team might end up with a lot of data and little to no information. Join the STAT COE for a short tutorial on the dos and don’ts of survey design and analysis. We’ll point out the five most common survey mistakes, compare and contrast types of questions, discuss the pros and cons for potential analysis methods (such as descriptive statistics, linear regression, principal component analysis, factor analysis, hypothesis testing, and cluster analysis), and highlight how surveys can be used to supplement other sources of information to provide value to an overall test effort. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. CLEARED on 5 Jan 2022. Case Number: 88ABW-2022-0003 |
Gina Sigler & Alex (Mary) McBride, Statisticians, Scientific Test and Analysis Techniques Center of Excellence (STAT COE) (bio)
Gina Sigler is a senior statistician at Huntington Ingalls Industries, working at the Scientific Test and Analysis Techniques (STAT) Center of Excellence (COE) at the Air Force Institute of Technology (AFIT), where she provides rigorous test designs and best practices to programs across the Department of Defense (DoD). She was part of the AETC 2019 Air Force Analytic Team of the Year. Before joining the STAT COE, she worked as a faculty associate in the Statistics Department at the University of Wisconsin (UW)-Madison. She earned a B.S. degree in statistics from Michigan State University in 2012, an M.S. in statistics from the UW-Madison in 2014, and is currently pursuing a Ph.D. in Applied Mathematics-Statistics at AFIT. Alex McBride is a senior statistician at Huntington Ingalls Industries, working at the Homeland Security Community of Best Practices (HS CoBP) at the Air Force Institute of Technology (AFIT), where she provides rigorous test designs, analysis, and workforce development to acquisition programs across the Department of Homeland Security (DHS). She was part of the TED Workforce Development Team awarded a 2020 Under Secretary’s Award in the category of Science and Engineering. Before joining the HS CoBP, she was a graduate teaching assistant for the Statistics Department at Wright State University. She earned a B.S. degree in statistics from Grand Valley State University in 2017 and an M.S. in statistics from the Wright State University in 2019. |
Tutorial | Session Recording |
| 2022 |
Poster T&E of Responsible AI (Abstract)
Getting Responsible AI (RAI) right is difficult and demands expertise. All AI-relevant skill sets, including ethics, are in high demand and short supply, especially regarding AI's intersection with test and evaluation (T&E). Frameworks, guidance, and tools are needed to empower working-level personnel across DOD to generate RAI assurance cases with support from RAI SMEs. At a high level, such a framework should address the following points:
1. T&E is a necessary piece of the RAI puzzle. Testing provides a feedback mechanism for system improvement and builds public and warfighter confidence in our systems, and RAI should be treated just like performance, reliability, and safety requirements.
2. We must intertwine T&E and RAI across the cradle-to-grave product life cycle. Programs must embrace T&E and RAI from inception; as development proceeds, these two streams must be integrated in tight feedback loops to ensure effective RAI implementation. Furthermore, many AI systems, along with their operating environments and use cases, will continue to update and evolve and thus will require continued evaluation after fielding.
3. The five DOD RAI principles are a necessary north star, but alone they are not enough to implement or ensure RAI. Programs will have to integrate multiple methodologies and sources of evidence to construct holistic arguments for how much they have reduced RAI risks.
4. RAI must be developed, tested, and evaluated in context. T&E without operationally relevant context will fail to ensure that fielded tools achieve RAI. Mission success depends on technology that must interact with warfighters and other systems in complex environments, while constrained by processes and regulation. AI systems will be especially sensitive to operational context and will force T&E to expand what it considers. |
Rachel Haga, Research Associate, IDA |
Poster | Session Recording |
| 2022 |
Breakout Taming the beast: making questions about the supply system tractable by quantifying risk (Abstract)
The DoD sustainment system is responsible for managing the supply of millions of different spare parts, most of which are infrequently and inconsistently requisitioned, and many of which have procurement lead times measured in years. The DoD must generally buy items in anticipation of need, yet it simply cannot afford to buy even one copy of every unique part it might be called upon to deliver. Deciding which items to purchase necessarily involves taking risks, both military and financial. However, the huge scale of the supply system makes these risks difficult to quantify. We have developed methods that use raw supply data in new ways to support this decision making process. First, we have created a method to identify areas of potential overinvestment that could safely be reallocated to areas at risk of underinvestment. Second, we have used raw requisition data to create an item priority list for individual weapon systems in terms of importance to mission success. Together, these methods allow DoD decision makers to make better-informed decisions about where to take risks and where to invest scarce resources. |
Joseph Fabritius and Kyle Remley, Research Staff Members, IDA (bio)
Joseph Fabritius earned his Bachelor's degree in Physics from Rochester Institute of Technology in 2012. He earned his Master's degree in Physics from Drexel University in 2017, and he earned his PhD in Physics from Drexel University in 2021. He is currently a Research Staff Member at the Institute for Defense Analyses, where he works on sustainment analyses. Kyle Remley earned his Bachelor's degree in Nuclear and Radiological Engineering from Georgia Tech in 2013. He earned his Master's degree in Nuclear Engineering from Georgia Tech in 2015, and he earned his PhD in Nuclear and Radiological Engineering from Georgia Tech in 2016. He was an engineer at the Naval Nuclear Laboratory from 2017 to 2020. Since 2020, he has been a Research Staff Member at the Institute for Defense Analyses, where he works on sustainment analyses. |
Breakout | Session Recording |
| 2022 |
Breakout Test & Evaluation of ML Models (Abstract)
Machine Learning models have been incredibly impactful over the past decade; however, testing those models and comparing their performance has remained challenging and complex. In this presentation, I will demonstrate novel methods for measuring the performance of computer vision object detection models, including running those models against still imagery and video. The presentation will start with an introduction to the pros and cons of various metrics, including traditional metrics like precision, recall, average precision, mean average precision, F1, and F-beta. The talk will then discuss more complex topics such as tracking metrics, handling multiple object classes, visualizing multi-dimensional metrics, and linking metrics to operational impact. Anecdotes will be shared discussing different types of metrics that are appropriate for different types of stakeholders, how system testing fits in, best practices for model integration, best practices for data splitting, and cloud vs on-prem compute lessons learned. The presentation will conclude by discussing what software libraries are available to calculate these metrics, including the MORSE-developed library Charybdis. |
Anna Rubinstein, Director of Test and Evaluation, MORSE Corporation (bio)
Dr. Anna Rubinstein serves as the Director of Test and Evaluation for a Department of Defense (DoD) Artificial Intelligence (AI) program. She directs testing for AI models spanning the fields of computer vision, natural language processing, and other forms of machine perception. She leads teams developing metrics and assessing capabilities at the algorithm, system, and operational level, with a particular interest in human-machine teaming. Dr. Rubinstein has spent the last five years supporting national defense as a contractor, largely focusing on model and system evaluation. In her previous role as a Science Advisor in the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office (I2O), she provided technical insight to research programs modeling cyber operations in the information domain and building secure software-reliant systems. Before that, Dr. Rubinstein served as a Research Staff Member at the Institute for Defense Analyses (IDA), leading efforts to provide verification and validation of nuclear weapons effects modeling codes in support of the Defense Threat Reduction Agency (DTRA). Dr. Rubinstein also has several years of experience developing algorithms for atmospheric forecasting, autonomous data fusion, social network mapping, anomaly detection, and pattern optimization. Dr. Rubinstein holds an M.A. in Chemical Engineering and a Ph.D. in Chemical Engineering and Materials Science from Princeton University, where she was a National Science Foundation Graduate Research Fellow. She also received a B.S. in Chemical Engineering, a B.A. in Chemistry, and a B.A. in Chinese, all from the University of Mississippi, where she was a Barry M. Goldwater Scholar. |
Breakout | Session Recording |
| 2022 |
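A small example of the threshold-dependent detection metrics named in the abstract above (precision, recall, F1, F-beta), computed from matched detection counts; the counts are invented for illustration and this is not the Charybdis library mentioned in the talk.

```python
def precision_recall_fbeta(tp, fp, fn, beta=1.0):
    """Precision, recall, and F-beta from matched detection counts (e.g., matches by IoU >= 0.5)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return precision, recall, 0.0
    fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, fbeta

# Toy object-detection results at descending confidence thresholds:
# threshold -> (true positives, false positives, false negatives). Counts are illustrative.
results = {0.9: (12, 1, 28), 0.7: (25, 5, 15), 0.5: (33, 14, 7), 0.3: (38, 40, 2)}

print("thresh  precision  recall  F1     F0.5")
for thresh, (tp, fp, fn) in results.items():
    p, r, f1 = precision_recall_fbeta(tp, fp, fn)
    _, _, f05 = precision_recall_fbeta(tp, fp, fn, beta=0.5)
    print(f"{thresh:.1f}     {p:.3f}      {r:.3f}   {f1:.3f}  {f05:.3f}")
```

Lowering the threshold trades precision for recall; F1 weights them equally, while F0.5 emphasizes precision, which is one way different stakeholders end up preferring different operating points.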
Breakout Test and Evaluation Framework for AI Enabled Systems (Abstract)
In the current moment, autonomous and artificial intelligence (AI) systems are emerging at a dizzying pace. Such systems promise to expand the capacity and capability of individuals by delegating increasing levels of decision making down to the agent level. In this way, operators can set high-level objectives for multiple vehicles or agents and need only intervene when alerted to anomalous conditions. Test and evaluation efforts at the Joint AI Center are focused on exercising a prescribed test strategy for AI-enabled systems. This new AI T&E Framework recognizes the inherent complexity that follows from incorporating dynamic decision makers into a system (or into a system-of-systems). The AI T&E Framework is composed of four high-level types of testing that examine an AI-enabled system from different angles to provide as complete a picture as possible of the system's capabilities and limitations: algorithmic, system integration, human-system integration, and operational tests. These testing categories provide stakeholders with appropriate qualitative and quantitative assessments that bound the system's use cases in a meaningful way. The algorithmic tests characterize the AI models themselves against metrics for effectiveness, security, robustness, and responsible AI principles. The system integration tests exercise the system itself to ensure it operates reliably, functions correctly, and is compatible with other components. The human-machine teaming tests ask what human operators think of the system, whether they understand what the system is telling them, and whether they trust the system under appropriate conditions. All of this culminates in an operational test that evaluates how the system performs in a realistic environment with realistic scenarios and adversaries. Interestingly, counter to traditional approaches, this framework is best applied during and throughout the development of an AI-enabled system. Our experience is that programs that conduct independent T&E alongside development do not suffer delays, but instead benefit from the feedback and insights gained from incremental and iterative testing, which leads to the delivery of a better overall capability. |
Brian Woolley, T&E Enclave Manager, Joint Artificial Intelligence Center (bio)
Lt Col Brian Woolley is a U.S. Air Force officer currently serving as the Test and Evaluation Enclave Manager at the DoD's Joint Artificial Intelligence Center, Arlington, Virginia. He earned his doctoral degree in Computer Engineering from the University of Central Florida and a Master of Science in Software Engineering from the Air Force Institute of Technology. During his 19-year military career, Brian has served as a Cyber Operations Officer supporting the Air Force Weather Agency, the Joint Headquarters for the DoD Information Networks, and U.S. Cyber Command, and as the Deputy Director for the Autonomy and Navigation Center at the Air Force Institute of Technology. |
Breakout | Session Recording |
| 2022 |