Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Keynote Closing Remarks (Abstract)
Mr. William (Allen) Kilgore serves as Director, Research Directorate at NASA Langley Research Center. He previously served as Deputy Director of Aerosciences, providing executive leadership and oversight for the Center’s Aerosciences fundamental and applied research and technology capabilities, with responsibility for Aerosciences experimental and computational research. After being appointed to the Senior Executive Service (SES) in 2013, Mr. Kilgore served as the Deputy Director, Facilities and Laboratory Operations in the Research Directorate. Prior to this position, Mr. Kilgore spent over twenty years in the operations of NASA Langley’s major aerospace research facilities, including budget formulation and execution, maintenance, strategic investments, workforce planning and development, facility advocacy, and integration of facilities’ schedules. During his time at Langley, he has worked in nearly all of the major wind tunnels with a primary focus on process controls, operations, and testing techniques supporting aerosciences research. For several years, Mr. Kilgore led the National Transonic Facility, the world’s largest cryogenic wind tunnel. Mr. Kilgore has been at NASA Langley Research Center since 1989, starting as a graduate student. He earned a B.S. and M.S. in Mechanical Engineering with a concentration in dynamics and controls from Old Dominion University in 1984 and 1989, respectively. He is the recipient of NASA’s Exceptional Engineering Achievement Medal in 2008 and Exceptional Service Medal in 2012. |
William “Allen” Kilgore, Director, Research Directorate, NASA Langley Research Center |
Keynote | Session Recording |
Recording | 2021 |
Breakout Cognitive Work Analysis – From System Requirements to Validation and Verification (Abstract)
Human-system interaction is a critical yet often neglected aspect of the system development process. It is most commonly incorporated into system performance assessments late in the design process, leaving little opportunity for substantive changes that would ensure satisfactory system performance is achieved. As a result, workarounds and compromises become a patchwork of “corrections” that end up in the final fielded system. But what if mission outcomes, the work context, and performance expectations could be articulated earlier, thereby influencing the development process throughout? This presentation will discuss how a formative method from the field of cognitive systems engineering, cognitive work analysis, can be leveraged to derive design requirements compatible with traditional systems engineering processes. This method not only establishes requirements from which system designs can be constructed, but also shows how system performance expectations can be defined more precisely a priori to guide the validation and verification process. Cognitive work analysis methods will be described to highlight how ‘cognitive work’ and ‘information relationship’ requirements can be derived, and will be showcased in a case-study application of building a decision support system for future human spaceflight operations. Specifically, a description of the testing campaign employed to verify and validate the fielded system will be provided. In summary, this presentation will cover how system requirements can be established early in the design phase, guide the development of design solutions, and subsequently be used to assess the operational performance of those solutions within the context of the work domain they are intended to support. |
Matthew Miller, Exploration Research Engineer, Jacobs/NASA Johnson Space Center (bio)
Matthew J. Miller is an Exploration Research Engineer within the Astromaterials Research and Exploration Sciences (ARES) division at NASA Johnson Space Center. His work focuses on advancing present-day tools, technologies, and techniques to improve future EVA operations by applying cognitive systems engineering principles. He has over seven years of EVA flight operations and NASA analog experience, during which he has developed and deployed various EVA support systems and concepts of operations. He received a B.S. (2012), M.S. (2014), and Ph.D. (2017) in aerospace engineering from the Georgia Institute of Technology. |
Breakout |
2021 |
|
Breakout Collaborative Human AI Red Teaming (Abstract)
The Collaborative Human AI Red Teaming (CHART) project is an effort to develop an AI collaborator that helps human test engineers quickly develop test plans for AI systems. CHART is built around processes developed for cybersecurity red teaming: a goal-focused approach that iteratively tests and attacks a system, then updates the tester’s model, in order to discover novel failure modes not found by traditional T&E processes. Red teaming is traditionally a time-intensive process that requires subject matter experts to study the system under test for months in order to develop attack strategies. CHART will accelerate this process by guiding the user through diagramming the AI system under test and drawing upon a pre-established body of knowledge to identify the most probable vulnerabilities. CHART was provided internal seedling funds during FY20 to perform a feasibility study of the technology. During this period the team developed a taxonomy of AI vulnerabilities and an ontology of AI irruptions, that is, events (caused either by a malicious actor or by unintended consequences) that trigger a vulnerability and lead to an undesirable result. Using this taxonomy, we built a threat-modeling tool that allows users to diagram their AI system and identifies the possible irruptions that could occur. This initial demonstration was based on two scenarios: a smartphone-based ECG system for telemedicine and a UAV trained with reinforcement learning to avoid mid-air collisions. In this talk we will first discuss how red teaming differs from adversarial machine learning and traditional test and evaluation. Next, we will provide an overview of how industry is approaching the problem of AI red teaming and how our approach differs. Finally, we will discuss how we developed our taxonomy of AI vulnerabilities, how to apply goal-focused testing to AI systems, and our strategy for automatically generating test plans. |
Galen Mullins, Senior AI Researcher, Johns Hopkins University Applied Physics Laboratory (bio)
Dr. Galen Mullins is a senior staff scientist in the Robotics Group of the Intelligent Systems branch at the Johns Hopkins Applied Physics Laboratory. His research is focused on developing intelligent testing techniques and adversarial tools for finding the vulnerabilities of AI systems. His recent project work has included the development of new imitation learning frameworks for modeling the behavior of autonomous vehicles, creating algorithms for generating adversarial environments, and developing red teaming procedures for AI systems. He is the secretary for the IEEE/RAS working group on Guidelines for Verification of Autonomous Systems and teaches the Introduction to Robotics course in the Johns Hopkins Engineering for Professionals program. Dr. Mullins received B.S. degrees in Mechanical Engineering and Mathematics from Carnegie Mellon University in 2007 and joined APL the same year. He earned his M.S. in Applied Physics from Johns Hopkins University in 2010 and his Ph.D. in Mechanical Engineering from the University of Maryland in 2018. His doctoral research focused on developing active learning algorithms for generating adversarial scenarios for autonomous vehicles. |
Breakout |
2021 |
|
Short Course Combinatorial Interaction Testing (Abstract)
This mini-tutorial provides an introduction to combinatorial interaction testing (CIT). The main idea behind CIT is to pseudo-exhaustively test software and hardware systems by covering combinations of components in order to detect faults. In 90 minutes, we provide an overview of this domain that includes the following topics: the role of CIT in software and hardware testing, how it complements and differs from design of experiments, considerations such as variable strength and constraints, the typical combinatorial arrays used for constructing test suites, and existing tools for test suite construction. Defense systems are increasingly relying on software with embedded machine learning (ML), yet ML poses unique challenges to applying conventional software testing due to characteristics such as the large input space, the effort required for white-box testing, and emergent behaviors apparent only at the integration or system level. As a well-studied black-box approach to testing integrated systems with a pseudo-exhaustive strategy for handling large input spaces, CIT provides a good foundation for testing ML. In closing, we present recent research adapting concepts of combinatorial coverage to test design for ML. (A brief illustrative sketch of combinatorial coverage follows this entry.) |
Erin Lanus, Research Assistant Professor, Virginia Tech (bio)
Erin Lanus is a Research Assistant Professor at the Hume Center for National Security and Technology at Virginia Tech. She has a Ph.D. in Computer Science with a concentration in cybersecurity from Arizona State University. Her experience includes work as a Research Fellow at University of Maryland Baltimore County and as a High Confidence Software and Systems Researcher with the Department of Defense. Her current interests are software and combinatorial testing, machine learning in cybersecurity, and artificial intelligence assurance. |
Short Course | Session Recording |
Materials
Recording | 2021 |
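To make the coverage idea concrete, here is a minimal Python sketch, not taken from the course materials, that measures the t-way combinatorial coverage of a candidate test suite; the factors, levels, and test cases below are invented for illustration.

```python
from itertools import combinations, product

def t_way_coverage(tests, levels, t=2):
    """Fraction of all t-way factor-level combinations covered by a test suite.

    tests  : list of tuples, one setting per factor
    levels : list of lists, the allowed levels for each factor
    t      : interaction strength (t=2 is pairwise testing)
    """
    factors = range(len(levels))
    covered, total = 0, 0
    for combo in combinations(factors, t):
        all_settings = set(product(*(levels[f] for f in combo)))   # every t-way setting
        seen = {tuple(test[f] for f in combo) for test in tests}   # settings the suite hits
        covered += len(all_settings & seen)
        total += len(all_settings)
    return covered / total

# Toy example: 4 binary factors, 4 hand-picked tests
levels = [[0, 1]] * 4
tests = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 1, 1, 0), (1, 0, 0, 1)]
print(t_way_coverage(tests, levels, t=2))   # pairwise coverage of this small suite
```

A covering array of strength t is simply a test suite for which this function returns 1.0 while using far fewer tests than exhaustive enumeration.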
Breakout Cybersecurity Metrics and Quantification: Problems, Some Results, and Research Directions (Abstract)
Cybersecurity Metrics and Quantification is a fundamental but notoriously hard problem. It is one of the pillars underlying the emerging Science of Cybersecurity. In this talk, I will describe a number of cybersecurity metrics and quantification research problems that are encountered in evaluating the effectiveness of a range of cyber defense tools. I will review the research results we have obtained over the past years. I will also discuss future research directions, including those being pursued in my research group. |
Shouhuai Xu, Professor, University of Colorado Colorado Springs (bio)
Shouhuai Xu is the Gallogly Chair Professor in the Department of Computer Science, University of Colorado Colorado Springs (UCCS). Prior to joining UCCS, he was with the Department of Computer Science, University of Texas at San Antonio. He pioneered a systematic approach, dubbed Cybersecurity Dynamics, to modeling and quantifying cybersecurity from a holistic perspective. This approach has three orthogonal research thrusts: metrics (for quantifying security, resilience and trustworthiness/uncertainty, to which this talk belongs), cybersecurity data analytics, and cybersecurity first-principle modeling (for seeking cybersecurity laws). His research has won a number of awards, including the 2019 worldwide adversarial malware classification challenge organized by the MIT Lincoln Lab. His research has been funded by AFOSR, AFRL, ARL, ARO, DOE, NSF and ONR. He co-initiated the International Conference on Science of Cyber Security (SciSec) and is serving as its Steering Committee Chair. He has served as Program Committee co-chair for a number of international conferences and as Program Committee member for numerous international conferences. He is/was an Associate Editor of IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE T-IFS), and IEEE Transactions on Network Science and Engineering (IEEE TNSE). More information about his research can be found at https://xu-lab.org. |
Breakout | Materials | 2021 |
|
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process in which important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data such as failure date, corrective action, and part number. The dashboard is an RShiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end user to filter to the EFRs that they care about and visualize metadata, such as the geographic region where a failure occurred, over time, revealing previously unknown trends. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and examine time- and region-based trends in these common equipment failures. (An illustrative topic-modeling sketch follows this entry.) |
Robert Cole Molloy, Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
2021 |
|
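The abstract above describes topic modeling done in R (tm, topicmodels) inside an RShiny dashboard. As a rough Python analogue, and not the dashboard's actual code, the sketch below runs latent Dirichlet allocation over a few placeholder failure narratives with scikit-learn; the example texts, topic count, and variable names are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

efr_narratives = [                      # placeholder free-text failure descriptions
    "pump seal leak during startup, replaced seal",
    "hydraulic pump pressure drop, suspected seal wear",
    "radio intermittent loss of signal in cold weather",
    "antenna connector corrosion caused signal loss",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(efr_narratives)             # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")            # key themes per topic
```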
Breakout Debunking Stress Rupture Theories Using Weibull Regression Plots (Abstract)
As statisticians, we are always working on new ways to explain statistical methodologies to non-statisticians. It is in this realm that we never underestimate the value of graphics and patience! In this presentation, we present a case study involving stress rupture data where a Weibull regression is needed to estimate the parameters. The case study comes from a multi-stage project supported by the NASA Engineering and Safety Center (NESC), where the objective was to assess the safety of composite overwrapped pressure vessels (COPVs). The analytical team was tasked with devising a test plan to model stress rupture failure risk in the carbon fiber strands that encase the COPVs, with the goal of understanding the reliability of the strands at use conditions for the expected mission life. While analyzing the data, we found that the proper analysis contradicts accepted theories about the stress rupture phenomenon. In this talk, we will introduce ways to graph the stress rupture data to better explain the proper analysis and to explore assumptions. (A brief Weibull-fitting sketch follows this entry.) |
Anne Driscoll, Associate Collegiate Professor, Virginia Tech (bio)
Anne Ryan Driscoll is an Associate Collegiate Professor in the Department of Statistics at Virginia Tech. She received her PhD in Statistics from Virginia Tech. Her research interests include statistical process control, design of experiments, and statistics education. She is a member of ASQ and ASA. |
Breakout |
2021 |
|
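As a minimal illustration of the Weibull machinery behind the talk, and not the authors' analysis, the sketch below fits a two-parameter Weibull to invented failure times at a single stress level and computes Weibull probability-plot coordinates; the full case study regresses the Weibull model on stress level.

```python
import numpy as np
from scipy import stats

failure_hours = np.array([213., 350., 478., 610., 742., 891., 1105., 1340.])  # illustrative data

# Fix the location at zero for the standard two-parameter Weibull
shape, _, scale = stats.weibull_min.fit(failure_hours, floc=0)
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} hours")

# Weibull plot: ln(t) vs. ln(-ln(1 - F)) should be roughly linear under a Weibull model
t = np.sort(failure_hours)
F = (np.arange(1, len(t) + 1) - 0.3) / (len(t) + 0.4)    # median-rank plotting positions
x, y = np.log(t), np.log(-np.log(1 - F))
print(np.corrcoef(x, y)[0, 1])   # correlation near 1 indicates a good Weibull fit
```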
Breakout Empirical Analysis of COVID-19 in U.S. States and Counties (Abstract)
The zoonotic emergence of the coronavirus SARS-CoV-2 at the beginning of 2020 and the subsequent global pandemic of COVID-19 has caused massive disruptions to economies and health care systems, particularly in the United States. Using the results of serology testing, we have developed true prevalence estimates for COVID-19 case counts in the U.S. over time, which allows for clearer estimates of infection and case fatality rates throughout the course of the pandemic. In order to elucidate policy, demographic, weather, and behavioral factors that contribute to or inhibit the spread of COVID-19, IDA compiled panel data sets of empirically derived, publicly available COVID-19 data and analyzed which factors were most highly correlated with increased and decreased spread within U.S. states and counties. These analyses led to several recommendations for future pandemic response preparedness. (A simple worked example of the serology adjustment follows this entry.) |
Emily Heuring, Research Staff Member, Institute for Defense Analyses (bio)
Dr. Emily Heuring received her PhD in Biochemistry, Cellular, and Molecular Biology from the Johns Hopkins University School of Medicine in 2004 on the topic of human immunodeficiency virus and its impact on the central nervous system. Since that time, she has been a Research Staff Member at the Institute for Defense Analyses, supporting operational testing of chemical and biological defense programs. More recently, Dr. Heuring has supported OSD-CAPE on Army and Marine Corps programs and the impact of COVID-19 on the general population and DOD. |
Breakout |
2021 |
|
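To show the arithmetic behind a serology-based prevalence adjustment in its simplest form, here is a back-of-the-envelope sketch; every number below is made up for illustration and none of them are the study's estimates.

```python
population         = 5_000_000    # hypothetical state population
reported_cases     = 250_000      # cumulative confirmed cases
deaths             = 6_000        # cumulative deaths
seroprevalence     = 0.12         # fraction with antibodies from a serology survey

true_infections    = seroprevalence * population          # serology-implied infections
ascertainment      = reported_cases / true_infections     # fraction of infections detected
infection_fatality = deaths / true_infections             # IFR, versus the naive CFR below
case_fatality      = deaths / reported_cases

print(f"ascertainment ratio ≈ {ascertainment:.2f}")
print(f"IFR ≈ {infection_fatality:.3%}, CFR ≈ {case_fatality:.3%}")
```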
Breakout Entropy-Based Adaptive Design for Contour Finding and Estimating Reliability (Abstract)
In reliability, methods used to estimate failure probability are often limited by the costs associated with model evaluations. Many of these methods, such as multi-fidelity importance sampling (MFIS), rely upon a cheap surrogate model, like a Gaussian process (GP), to quickly generate predictions. The quality of the GP fit, at least in the vicinity of the failure region(s), is instrumental in propping up such estimation strategies. We introduce an entropy-based GP adaptive design that, when paired with MFIS, provides more accurate failure probability estimates with higher confidence. We show that our greedy data acquisition scheme better identifies multiple failure regions compared to existing contour-finding schemes. We then extend the method to batch selection. Illustrative examples are provided on benchmark data as well as an application to the impact damage simulator of a NASA spacesuit design. (A simplified adaptive-design sketch follows this entry.) |
Austin Cole, PhD Candidate, Virginia Tech (bio)
Austin Cole is a statistics PhD candidate at Virginia Tech. He previously taught high school math and statistics courses, and holds a Bachelor’s in Mathematics and Master’s in Secondary Education from the College of William and Mary. Austin has worked with dozens of researchers as a lead collaborator in Virginia Tech’s Statistical Applications and Innovations Group (SAIG). Under the supervision of Dr. Robert Gramacy, Austin has conducted research in the area of computer experiments with a focus on Bayesian optimization, sparse covariance matrices, and importance sampling. He is currently collaborating with researchers at NASA Langley to evaluate the safety of the next generation of spacesuits. |
Breakout |
2021 |
|
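The sketch below is a deliberately simplified version of surrogate-aided failure-probability estimation with an entropy-style acquisition, using scikit-learn rather than the talk's MFIS machinery; the limit-state function, threshold, and sample sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):                     # stand-in for an expensive simulator
    return np.sin(3 * x) + 0.5 * x      # "failure" when this exceeds the threshold

threshold = 1.0
rng = np.random.default_rng(0)

X = rng.uniform(-2, 2, size=(6, 1))     # small initial design
y = limit_state(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)

candidates = np.linspace(-2, 2, 400).reshape(-1, 1)
for _ in range(10):                     # greedy adaptive design loop
    gp.fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    p = norm.cdf((mu - threshold) / np.maximum(sd, 1e-12))   # P(failure) under the GP
    entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    x_new = candidates[[np.argmax(entropy)]]                 # most uncertain point near the contour
    X = np.vstack([X, x_new])
    y = np.append(y, limit_state(x_new).ravel())

# Cheap Monte Carlo on the fitted surrogate (uniform inputs on [-2, 2])
gp.fit(X, y)
xs = rng.uniform(-2, 2, size=(100_000, 1))
print("estimated failure probability:", np.mean(gp.predict(xs) > threshold))
```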
Breakout Estimating Pure-Error from Near Replicates in Design of Experiments (Abstract)
In design of experiments, setting exact replicates of factor settings enables estimation of pure error: a model-independent estimate of experimental error useful for communicating inherent system noise and for testing model lack-of-fit. Often in practice, the factor levels for replicates are precisely measured rather than precisely set, resulting in near-replicates. This can result in inflated estimates of pure error due to uncompensated set-point variation. In this work, we review previous strategies for estimating pure error from near-replicates and propose a simple alternative. We derive key analytical properties and investigate them via simulation. Finally, we illustrate the new approach with an application. (A brief pure-error computation sketch follows this entry.) |
Caleb King, Research Statistician Developer, SAS Institute |
Breakout |
2021 |
|
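For readers unfamiliar with the classical quantity being discussed, here is a minimal sketch of the pure-error computation from exact replicates; it does not implement the talk's near-replicate adjustment, and the data, factor names, and settings are invented for illustration.

```python
import numpy as np
import pandas as pd

# Illustrative data: two factors, with some settings run in replicate
df = pd.DataFrame({
    "temp":     [100, 100, 100, 120, 120, 140, 140, 140],
    "pressure": [  5,   5,   5,   7,   7,   9,   9,   9],
    "y":        [10.2, 9.8, 10.5, 12.1, 11.7, 15.0, 14.6, 15.3],
})

groups = df.groupby(["temp", "pressure"])["y"]
ss_pe = groups.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()   # pure-error sum of squares
df_pe = (groups.size() - 1).sum()                                   # pure-error degrees of freedom
print(f"pure-error mean square = {ss_pe / df_pe:.3f} on {df_pe} d.f.")
```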
Breakout Fast, Unbiased Uncertainty Propagation with Multi-model Monte Carlo (Abstract)
With the rise of machine learning and artificial intelligence, there has been a huge surge in data-driven approaches to solving computational science and engineering problems. In the context of uncertainty propagation, machine learning is often employed to construct efficient surrogate models (i.e., response surfaces) that replace expensive, physics-based simulations. However, relying solely on surrogate models, without any recourse to the original high-fidelity simulation, produces biased estimators and can yield unreliable or non-physical results. This talk discusses multi-model Monte Carlo methods that combine predictions from fast, low-fidelity models with reliable, high-fidelity simulations to enable efficient and accurate uncertainty propagation. For instance, the low-fidelity models could arise from coarsened discretizations in space/time (e.g., Multilevel Monte Carlo – MLMC) or from general data-driven or reduced order models (e.g., Multifidelity Monte Carlo – MFMC; Approximate Control Variates – ACV). Given a fixed computational budget and a collection of models of varying cost and accuracy, the goal of these methods is to optimally allocate and combine samples across the models. The talk will also present a NASA-developed open-source Python library that acts as a general multi-model uncertainty propagation capability. The effectiveness of the discussed methods and Python library is demonstrated on a trajectory simulation application: predicting the landing location of an umbrella heat shield under significant uncertainties in initial state, atmospheric conditions, and other factors, where orders-of-magnitude improvements in computational speed and accuracy are obtained. (A two-model control-variate sketch follows this entry.) |
Geoffrey Bomarito, Materials Research Engineer, NASA Langley Research Center (bio)
Dr. Geoffrey Bomarito is a Materials Research Engineer at NASA Langley Research Center. Before joining NASA in 2014, he earned a PhD in Computational Solid Mechanics from Cornell University. He also holds an MEng from the Massachusetts Institute of Technology and a BS from Cornell University, both in Civil and Environmental Engineering. Dr. Bomarito’s work centers around machine learning and uncertainty quantification as applied to aerospace materials and structures. His current topics of interest are physics informed machine learning, symbolic regression, additive manufacturing, and trajectory simulation. |
Breakout |
2021 |
|
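The sketch below shows the two-model control-variate idea in the spirit of MFMC/ACV, not the NASA library mentioned in the abstract; the model forms, their correlation, and the sample counts are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):                  # "expensive" high-fidelity model
    return np.sin(x) + 0.05 * x**2

def f_lo(x):                  # cheap, correlated low-fidelity model
    return np.sin(x)

N, M = 100, 10_000            # few high-fidelity runs, many low-fidelity runs
x_hi = rng.normal(size=N)
x_lo = rng.normal(size=M)

yh, yl_paired = f_hi(x_hi), f_lo(x_hi)                      # paired evaluations on shared inputs
alpha = np.cov(yh, yl_paired)[0, 1] / np.var(yl_paired)     # control-variate weight

plain  = yh.mean()                                           # high-fidelity-only estimate
mf_est = yh.mean() + alpha * (f_lo(x_lo).mean() - yl_paired.mean())
print(f"high-fidelity only: {plain:.4f}   multifidelity: {mf_est:.4f}")
```

The correction term exploits the cheap model's large sample to reduce variance while the paired high-fidelity evaluations keep the estimator anchored to the trusted simulation.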
Panel Finding the Human in the Loop: Considerations for AI in Decision Making |
Joe Lyons, Lead for the Collaborative Interfaces and Teaming Core Research Area, 711 Human Performance Wing at Wright-Patterson AFB (bio)
Joseph B. Lyons is the Lead for the Collaborative Interfaces and Teaming Core Research Area within the 711 Human Performance Wing at Wright-Patterson AFB, OH. Dr. Lyons received his PhD in Industrial/Organizational Psychology from Wright State University in Dayton, OH, in 2005. Some of Dr. Lyons’ research interests include human-machine trust, interpersonal trust, human factors, and influence. Dr. Lyons has worked for the Air Force Research Laboratory as a civilian researcher since 2005, and from 2011 to 2013 he served as the Program Officer at the Air Force Office of Scientific Research, where he created a basic research portfolio to study both interpersonal and human-machine trust as well as social influence. Dr. Lyons has published in a variety of peer-reviewed journals and is an Associate Editor for the journal Military Psychology. Dr. Lyons is a Fellow of the American Psychological Association and the Society for Military Psychologists. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Evaluating HSI with AI-Enabled Systems: What should you consider in a TEMP? |
Jane Pinelis, Chief of the Test, Evaluation, and Assessment branch, Department of Defense Joint Artificial Intelligence Center (JAIC) (bio)
Dr. Jane Pinelis is the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of the Department of Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Evaluating Warfighters’ Ability to Employ AI Capabilities (Abstract)
Although artificial intelligence may take over tasks traditionally performed by humans or power systems that act autonomously, humans will still interact with these systems in some way. The need to ensure these interactions are fluid and effective does not disappear—if anything, it only grows with AI-enabled capabilities. These technologies introduce multiple new hazards for achieving high-quality human-system integration. Testers will need to evaluate both traditional HSI issues and these novel concerns in order to establish the trustworthiness of a system for activity in the field, and we will need to develop new T&E methods to do this. In this session, we will hear how three national security organizations are preparing for these HSI challenges, followed by a broader panel discussion on which of these problems is most pressing and which is most promising for DoD research investments. |
Dan Porter, Research Staff Member, Institute for Defense Analyses |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: HSI | Trustworthy AI (Abstract)
Recent successes and shortcomings of AI implementations have highlighted the importance of understanding how to design and interpret trustworthiness. AI Assurance is becoming a popular objective for some stakeholders; however, assurance and trustworthiness are context-sensitive concepts that rely not only on software performance and cybersecurity, but also on human-centered design. This talk summarizes Cognitive Engineering principles in the context of resilient AI engineering. It also introduces approaches for successful Human-Machine Teaming in high-risk work domains. |
Stoney Trent, Research Professor and Principal Advisor for Research and Innovation, Virginia Tech; Founder, The Bulls Run Group, LLC (bio)
Stoney Trent, Ph.D., is a Cognitive Engineer and Military Intelligence and Cyber Warfare veteran who specializes in human-centered innovation. As an Army officer, Stoney designed and secured over $350M to stand up the Joint Artificial Intelligence Center (JAIC) for the Department of Defense. As the Chief of Missions in the JAIC, he established product lines to deliver human-centered AI to improve warfighting and business functions in the world’s largest bureaucracy. Previously, he established and directed U.S. Cyber Command’s $50M applied research lab, which develops and assesses products for the Cyber Mission Force. Stoney has served as a Strategic Policy Research Fellow with the RAND Arroyo Center and is a former Assistant Professor in the Department of Behavioral Science and Leadership at the United States Military Academy. He has served in combat and stability operations in Iraq, Kosovo, Germany, and Korea. Stoney is a graduate of the Army War College and a former Cyber Fellow at the National Security Agency. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Rachel Haga, Research Associate, Institute for Defense Analyses (bio)
Rachel is a Research Associate at the Institute for Defense Analyses where she applies rigorous statistics and study design to evaluate, test, and report on various programs. She specializes in human system integration. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Chad Bieber, Director, Test and Evaluation; Senior Research Engineer, Johns Hopkins University Applied Physics Laboratory (bio)
Chad Bieber is a Senior Research Engineer at the Johns Hopkins University Applied Physics Laboratory, currently working as the Test and Evaluation Director for Project Maven; he was previously a Research Staff Member at IDA. A former pilot in the US Air Force, he received his Ph.D. in Aerospace Engineering from North Carolina State University. Chad is interested in how humans interact with complex, and increasingly autonomous, systems. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Poornima Madhavan, Principal Scientist and Capability Lead for Social and Behavioral Sciences, MITRE (bio)
Dr. Poornima Madhavan is a Principal Scientist and Capability Lead for Social and Behavioral Sciences at the MITRE Corporation. She has more than 15 years of experience studying human-systems integration issues in sociotechnical systems, including trust calibration, decision making, and risk perception. Dr. Madhavan spent the first decade of her career as a professor of Human Factors Psychology at Old Dominion University, where she studied threat detection, risk analysis, and human decision making in aviation and border security. This was followed by a stint as the Director of the Board on Human-Systems Integration at the National Academies of Sciences, Engineering, and Medicine, where she served as the primary spokesperson to the federal government on policy issues related to human-systems integration. Just before joining MITRE, Dr. Madhavan worked at the Institute for Defense Analyses, where she focused on modeling human behavioral effects of non-lethal weapons and on human-machine teaming for autonomous systems. Dr. Madhavan received her M.A. and Ph.D. in Engineering Psychology from the University of Illinois at Urbana-Champaign and completed her post-doctoral fellowship in Social and Decision Sciences at Carnegie Mellon University. |
Panel | Session Recording |
Recording | 2021 |
Roundtable Identifying Challenges and Solutions to T&E of Non-IP Networks (Abstract)
Many systems within the Department of Defense (DoD) contain networks that use both Internet Protocol (IP) and non-IP forms of information exchange. While IP communication is widely understood among the cybersecurity community, expertise and available test tools for non-IP protocols such as Controller Area Network (CAN), MIL-STD-1553, and SCADA are not as commonplace. Over the past decade, the DoD has repeatedly identified gaps in data collection and analysis when assessing the cybersecurity of non-IP buses. This roundtable is intended to open a discussion among testers and evaluators on the existing measurement and analysis tools for non-IP buses used across the community and to propose solutions to recurring roadblocks experienced when performing operational testing on non-IP components. Specific topics of discussion will include: (1) What tools do you or your supporting teams use during cybersecurity events to attack, scan, and monitor non-IP communications? (2) What raw quantitative data do you collect that captures the adversarial activity and/or system response from cyber aggression to non-IP components? Please provide examples of test instrumentation and data collection methods. (3) What data analysis tools do you use to draw conclusions from measured data? (4) What types of non-IP buses, including components on those buses, have you personally been able to test? What components were you not able to test, and why (safety concerns, lack of permission, lack of available tools and expertise, or other reasons)? Had you been given authority to test those components, do you think it would have improved the quality of test and the comprehensiveness of the assessment? |
Peter Mancini, Research Staff Member, Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Roundtable | 2021 |
||
Breakout Intelligent Integration of Limited-Knowledge IoT Services in a Cross-Reality Environment (Abstract)
The recent emergence of affordable, high-quality augmented-, mixed-, and virtual-reality (AR, MR, VR) technologies presents an opportunity to dramatically change the way users consume and interact with information. It has been shown that these immersive systems can be leveraged to enhance comprehension and accelerate decision-making in situations where data can be linked to spatial information, such as maps or terrain models. Furthermore, when immersive technologies are networked together, they allow for decentralized collaboration and provide perspective-taking not possible with traditional displays. However, enabling this shared space requires novel techniques in intelligent information management and data exchange. In this experiment, we explored a framework for leveraging distributed AI/ML processing to enable clusters of low-power, limited-functionality devices to deliver complex capabilities in aggregate to users distributed across the country collaborating simultaneously in a shared virtual environment. We deployed a motion-detecting camera whose detection events sent information, via a distributed request/reply worker framework, to a remotely located YOLO image-classification cluster. This work demonstrates that various IoT and IoBT systems can invoke functionality without a priori knowledge of the specific endpoint that will execute it, by submitting a request based on a desired capability concept (e.g., image classification) and requiring only: 1) knowledge of the broker location, 2) a valid public/private key pair to authenticate with the broker, and 3) the capability concept UUID and the request/reply formats used by that concept. (A hypothetical request/reply sketch follows this entry.) |
Mark Dennison, Research Psychologist, U.S. Army DEVCOM Army Research Laboratory (bio)
Mark Dennison is a research psychologist with the DEVCOM U.S. Army Research Laboratory in the Computational and Information Sciences Directorate, Battlefield Information Systems Branch. He leads a team of government researchers and contractors focused on enabling cross-reality technologies to enhance lethality across domains through information management across echelons. Dr. Dennison earned his bachelor’s, Master’s, and Ph.D. degrees from the University of California at Irvine, all in the field of psychology with a specialization in cognitive neuroscience. He is stationed at ARL-West in Playa Vista, CA. |
Breakout |
2021 |
|
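To illustrate the three pieces of knowledge the abstract says a client needs, the sketch below sends a capability-based request over a ZeroMQ request/reply socket; it is a hypothetical stand-in, since the actual framework, broker address, message schema, and authentication handshake are not described in the abstract.

```python
import json
import zmq

BROKER_URL = "tcp://broker.example.mil:5555"                 # assumption 1: broker location
CAPABILITY_UUID = "00000000-0000-0000-0000-000000000000"     # assumption 3: capability concept id

request = {
    "capability": CAPABILITY_UUID,                            # e.g., the "image classification" concept
    "payload": {"image_ref": "s3://bucket/frame_0001.jpg"},   # placeholder reference, not a real asset
}

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
# assumption 2: key-pair authentication (e.g., ZeroMQ CURVE) would be configured here
sock.connect(BROKER_URL)
sock.send_json(request)
reply = sock.recv_json()                                      # e.g., YOLO detections returned by a worker
print(json.dumps(reply, indent=2))
```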
Short Course Introduction to Neural Networks for Deep Learning with Tensorflow (Abstract)
This mini-tutorial session discusses the practical application of neural networks from a layperson’s perspective and walks through a hands-on case study in which we build, train, and analyze a few neural network models using TensorFlow. The course will review the basics of neural networks and touch on more complex neural network architecture variants for deep learning applications. Deep learning techniques are becoming more prevalent throughout the development of autonomous and AI-enabled systems, and this session will provide students with the foundational intuition needed to understand these systems. (A minimal Keras sketch follows this entry.) |
Roshan Patel, Data Scientist, US Army CCDC Armaments Center (bio)
Mr. Roshan Patel is a systems engineer and data scientist working at the CCDC Armaments Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master’s in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. At Rutgers, Mr. Patel was a part-time lecturer for systems programming and data science seminars. Mr. Patel is the current AI lead for the Systems Engineering Directorate at the CCDC Armaments Center. |
Short Course | Session Recording |
Materials
Recording | 2021 |
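As a flavor of the hands-on portion, here is a minimal TensorFlow/Keras sketch of building and training a small network; the dataset and architecture are generic placeholders, not the course's actual case study.

```python
import tensorflow as tf

# Load and scale a standard benchmark dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=2, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))   # [loss, accuracy] on held-out data
```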
Tutorial Introduction to Qualitative Methods – Part 1 (Abstract)
Qualitative data, captured through freeform comment boxes, interviews, focus groups, and activity observation, is heavily employed in testing and evaluation (T&E). The qualitative research approach can offer many benefits, but knowledge of how to implement methods, collect data, and analyze data according to rigorous qualitative research standards is not broadly understood within the T&E community. This tutorial offers insight into the foundational concepts of method and practice that embody defensible approaches to qualitative research. We discuss where qualitative data comes from, how it can be captured, what kind of value it offers, and how to capitalize on that value through methods and best practices. |
Kristina Carter, Research Staff Member, Institute for Defense Analyses (bio)
Dr. Kristina Carter is a Research Staff Member at the Institute for Defense Analyses in the Operational Evaluation Division where she supports the Director, Operational Test and Evaluation (DOT&E) in the use of statistics and behavioral science in test and evaluation. She joined IDA full time in 2019 and her work focuses on the measurement and evaluation of human-system interaction. Her areas of expertise include design of experiments, statistical analysis, and psychometrics. She has a Ph.D. in Cognitive Psychology from Ohio University, where she specialized in quantitative approaches to judgment and decision making. |
Tutorial | Session Recording |
Recording | 2021 |
Tutorial Introduction to Qualitative Methods – Part 2 |
Daniel Hellman, Research Staff Member, Institute for Defense Analyses (bio)
Dr. Daniel Hellmann is a Research Staff Member in the Operational Evaluation Division at the Institute for Defense Analyses. He is also a prior service U.S. Marine with multiple combat tours. Currently, Dr. Hellmann specializes in mixed methods research on topics related to distributed cognition, institutions and organizations, and Computer Supported Cooperative Work (CSCW). |
Tutorial | 2021 |
||
Tutorial Introduction to Qualitative Methods – Part 3 |
Emily Fedele, Research Staff Member, Institute for Defense Analyses (bio)
Emily Fedele is a Research Staff Member at the Institute for Defense Analyses in the Science and Technology Division. She joined IDA in 2018 and her work focuses on conducting and evaluating behavioral science research on a variety of defense related topics. She has expertise in research design, experimental methods, and statistical analysis. |
Tutorial | 2021 |
||
Tutorial Introduction to Structural Equation Modeling: Implications for Human-System Interactions (Abstract)
Structural Equation Modeling (SEM) is an analytical framework that offers unique opportunities for investigating human-system interactions. SEM is used heavily in the social and behavioral sciences, where emphasis is placed on (1) explanation rather than prediction, and (2) measuring variables that are not observed directly (e.g., perceived performance, satisfaction, quality, and trust). The framework facilitates modeling of survey data through confirmatory factor analysis and latent (i.e., unobserved) variable regression models. We provide a general introduction to SEM by describing what it is, the unique features it offers to analysts and researchers, and how it is easily implemented in JMP Pro 16.0. Attendees will learn how to perform path analysis and confirmatory factor analysis, assess model fit, compare alternative models, and interpret results provided in SEM. The presentation relies on a real-data example everyone can relate to. Finally, we shed light on a few published studies that have used SEM to unveil insights on human performance factors and the mechanisms by which performance is affected. The key goal of this presentation is to provide general exposure to a modeling tool that is likely new to most in the fields of defense and aerospace. (A compact statement of the standard SEM equations follows this entry.) |
Laura Castro-Schilo, Sr. Research Statistician Developer, SAS Institute (bio)
Laura Castro-Schilo works on structural equation models in JMP. She is interested in multivariate analysis and its application to different kinds of data: continuous, discrete, ordinal, nominal, and even text. Previously, she was an Assistant Professor at the L. L. Thurstone Psychometric Laboratory at the University of North Carolina at Chapel Hill. Dr. Castro-Schilo obtained her PhD in quantitative psychology from the University of California, Davis. |
Tutorial | Session Recording |
Recording | 2021 |
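For readers new to SEM notation, the measurement and structural pieces described above can be written compactly in standard LISREL-style notation (a generic summary, not material from the talk):

```latex
\begin{aligned}
\text{Measurement model:}\quad & x = \Lambda_x\,\xi + \delta, \qquad y = \Lambda_y\,\eta + \varepsilon \\
\text{Structural model:}\quad  & \eta = B\,\eta + \Gamma\,\xi + \zeta
\end{aligned}
```

Here ξ and η are the exogenous and endogenous latent variables (e.g., perceived quality or trust), the Λ matrices hold factor loadings linking them to observed indicators x and y, and δ, ε, ζ are error terms; confirmatory factor analysis is the special case with no structural paths among the latent variables.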