Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Panel Finding the Human in the Loop: Considerations for AI in Decision Making |
Joe Lyons Lead for the Collaborative Interfaces and Teaming Core Research Area 711 Human Performance Wing at Wright-Patterson AFB (bio)
Joseph B. Lyons is the Lead for the Collaborative Interfaces and Teaming Core Research Area within the 711 Human Performance Wing at Wright-Patterson AFB, OH. Dr. Lyons received his PhD in Industrial/Organizational Psychology from Wright State University in Dayton, OH, in 2005. Some of Dr. Lyons’ research interests include human-machine trust, interpersonal trust, human factors, and influence. Dr. Lyons has worked for the Air Force Research Laboratory as a civilian researcher since 2005, and from 2011 to 2013 he served as the Program Officer at the Air Force Office of Scientific Research, where he created a basic research portfolio to study both interpersonal and human-machine trust as well as social influence. Dr. Lyons has published in a variety of peer-reviewed journals and is an Associate Editor for the journal Military Psychology. Dr. Lyons is a Fellow of the American Psychological Association and the Society for Military Psychologists. |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: Evaluating HSI with AI-Enabled Systems: What should you consider in a TEMP? |
Jane Pinelis Chief of the Test, Evaluation, and Assessment branch Department of Defense Joint Artificial Intelligence Center (JAIC) (bio)
Dr. Jane Pinelis is the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition, and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of the Department of Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book, titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: Evaluating Warfighters’ Ability to Employ AI Capabilities (Abstract)
Although artificial intelligence may take over tasks traditionally performed by humans or power systems that act autonomously, humans will still interact with these systems in some way. The need to ensure these interactions are fluid and effective does not disappear—if anything, this need only grows with AI-enabled capabilities. These technologies introduce multiple new hazards for achieving high-quality human-system integration. Testers will need to evaluate both traditional HSI issues and these novel concerns in order to establish the trustworthiness of a system for activity in the field, and we will need to develop new T&E methods to do this. In this session, we will hear how three national security organizations are preparing for these HSI challenges, followed by a broader panel discussion on which of these problems is most pressing and which is most promising for DoD research investments. |
Dan Porter Research Staff Member Institute for Defense Analyses |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: HSI | Trustworthy AI (Abstract)
Recent successes and shortcomings of AI implementations have highlighted the importance of understanding how to design and interpret trustworthiness. AI Assurance is becoming a popular objective for some stakeholders; however, assurance and trustworthiness are context-sensitive concepts that rely not only on software performance and cybersecurity, but also on human-centered design. This talk summarizes Cognitive Engineering principles in the context of resilient AI engineering. It also introduces approaches for successful Human-Machine Teaming in high-risk work domains. |
Stoney Trent Research Professor and Principal Advisor for Research and Innovation; Founder Virginia Tech; The Bulls Run Group, LLC (bio)
Stoney Trent, Ph.D., is a Research Professor and Principal Advisor for Research and Innovation at Virginia Tech and the Founder of The Bulls Run Group, LLC. Stoney is a Cognitive Engineer and Military Intelligence and Cyber Warfare veteran who specializes in human-centered innovation. As an Army officer, Stoney designed and secured over $350M to stand up the Joint Artificial Intelligence Center (JAIC) for the Department of Defense. As the Chief of Missions in the JAIC, Stoney established product lines to deliver human-centered AI to improve warfighting and business functions in the world’s largest bureaucracy. Previously, he established and directed U.S. Cyber Command’s $50M applied research lab, which develops and assesses products for the Cyber Mission Force. Stoney has served as a Strategic Policy Research Fellow with the RAND Arroyo Center and is a former Assistant Professor in the Department of Behavioral Science and Leadership at the United States Military Academy. He has served in combat and stability operations in Iraq, Kosovo, Germany, and Korea. Stoney is a graduate of the Army War College and a former Cyber Fellow at the National Security Agency. |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Rachel Haga Research Associate Institute for Defense Analyses (bio)
Rachel is a Research Associate at the Institute for Defense Analyses where she applies rigorous statistics and study design to evaluate, test, and report on various programs. She specializes in human system integration. |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Chad Bieber Director, Test and Evaluation; Senior Research Engineer Johns Hopkins University Applied Physics Laboratory (bio)
Chad Bieber is a Senior Research Engineer at the Johns Hopkins University Applied Physics Lab, is currently working as the Test and Evaluation Director for Project Maven, and was previously a Research Staff Member at IDA. A former pilot in the US Air Force, he received his Ph.D. in Aerospace Engineering from North Carolina State University. Chad is interested in how humans interact with complex, and increasingly autonomous, systems. |
Panel |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Poornima Madhavan Principal Scientist and Capability Lead for Social and Behavioral Sciences MITRE (bio)
Dr. Poornima Madhavan is a Principal Scientist and Capability Lead for Social and Behavioral Sciences at the MITRE Corporation. She has more than 15 years of experience studying human-systems integration issues in sociotechnical systems, including trust calibration, decision making, and risk perception. Dr. Madhavan spent the first decade of her career as a professor of Human Factors Psychology at Old Dominion University, where she studied threat detection, risk analysis, and human decision making in aviation and border security. This was followed by a stint as the Director of the Board on Human-Systems Integration at the National Academies of Sciences, Engineering, and Medicine, where she served as the primary spokesperson to the federal government on policy issues related to human-systems integration. Just before joining MITRE, Dr. Madhavan worked at the Institute for Defense Analyses, where she focused on modeling human behavioral effects of non-lethal weapons and human-machine teaming for autonomous systems. Dr. Madhavan received her M.A. and Ph.D. in Engineering Psychology from the University of Illinois at Urbana-Champaign and completed her post-doctoral fellowship in Social and Decision Sciences at Carnegie Mellon University. |
Panel |
Recording | 2021 |
Roundtable Identifying Challenges and Solutions to T&E of Non-IP Networks (Abstract)
Many systems within the Department of Defense (DoD) contain networks that use both Internet Protocol (IP) and non-IP forms of information exchange. While IP communication is widely understood among the cybersecurity community, expertise and available test tools for non-IP protocols such as Controller Area Network (CAN), MIL-STD-1553, and SCADA are not as commonplace. Over the past decade, the DoD has repeatedly identified gaps in data collection and analysis when assessing the cybersecurity of non-IP buses. This roundtable is intended to open a discussion among testers and evaluators on the existing measurement and analysis tools for non-IP buses used across the community and also propose solutions to recurring roadblocks experienced when performing operational testing on non-IP components. Specific topics of discussion will include:
- What tools do you or your supporting teams use during cybersecurity events to attack, scan, and monitor non-IP communications?
- What raw quantitative data do you collect that captures the adversarial activity and/or system response from cyber aggression to non-IP components? Please provide examples of test instrumentation and data collection methods.
- What data analysis tools do you use to draw conclusions from measured data?
- What types of non-IP buses, including components on those buses, have you personally been able to test? What components were you not able to test? Why were you not able to test them? Was it due to safety concerns, lack of permission, lack of available tools and expertise, or other? Had you been given authority to test those components, do you think it would have improved the quality of test and comprehensiveness of the assessment? |
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Roundtable | 2021 |
Breakout Intelligent Integration of Limited-Knowledge IoT Services in a Cross-Reality Environment (Abstract)
The recent emergence of affordable, high-quality augmented-, mixed-, and virtual-reality (AR, MR, VR) technologies presents an opportunity to dramatically change the way users consume and interact with information. It has been shown that these immersive systems can be leveraged to enhance comprehension and accelerate decision-making in situations where data can be linked to spatial information, such as maps or terrain models. Furthermore, when immersive technologies are networked together, they allow for decentralized collaboration and provide perspective-taking not possible with traditional displays. However, enabling this shared space requires novel techniques in intelligent information management and data exchange. In this experiment, we explored a framework for leveraging distributed AI/ML processing to enable clusters of low-power, limited-functionality devices to deliver complex capabilities in aggregate to users distributed across the country collaborating simultaneously in a shared virtual environment. We deployed a motion-detecting camera and triggered detection events to send information using a distributed request/reply worker framework to a remotely located YOLO image classification cluster. This work demonstrates the capability for various IoT and IoBT systems to invoke functionality without a priori knowledge of the specific endpoint used to execute that functionality, but rather by submitting a request based on a desired capability concept (e.g., image classification), requiring only: 1) knowledge of the broker location, 2) a valid public/private key pair to authenticate with the broker, and 3) the capability concept UUID and knowledge of the request/reply formats used by that concept. |
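To make the request/reply pattern concrete, below is a hypothetical minimal sketch of capability-concept brokering in Python. The Broker class, its in-process registry, and the stubbed credential check are illustrative assumptions, not the experiment’s actual framework (which authenticated over a network with public/private key pairs and dispatched to a remote YOLO cluster).

```python
import uuid

# Capability concepts are identified by UUID, not by endpoint address.
IMAGE_CLASSIFICATION = uuid.uuid5(uuid.NAMESPACE_DNS, "image-classification")

class Broker:
    """Hypothetical broker that routes requests by capability concept."""

    def __init__(self):
        self.workers = {}  # capability UUID -> list of worker callables

    def register(self, capability, worker):
        self.workers.setdefault(capability, []).append(worker)

    def request(self, credentials, capability, payload):
        # Stand-in for public/private key authentication with the broker.
        if not credentials:
            raise PermissionError("unauthenticated request")
        workers = self.workers.get(capability)
        if not workers:
            raise LookupError("no worker offers this capability concept")
        return workers[0](payload)  # load balancing across workers elided

broker = Broker()
# A worker (e.g., a YOLO node) advertises a capability, not an address.
broker.register(IMAGE_CLASSIFICATION, lambda img: {"label": "person", "conf": 0.91})

# The client needs only broker access, credentials, and the concept UUID.
reply = broker.request("demo-key", IMAGE_CLASSIFICATION, b"jpeg-bytes")
print(reply)
```

The key design point mirrors the abstract: requesters never name a specific endpoint, so workers can join, leave, or relocate without client changes.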
Mark Dennison Research Psychologist U.S. Army DEVCOM Army Research Laboratory (bio)
Mark Dennison is a research psychologist with the DEVCOM U.S. Army Research Laboratory in the Computational and Information Sciences Directorate, Battlefield Information Systems Branch. He leads a team of government researchers and contractors focused on enabling cross-reality technologies to enhance lethality across domains through information management across echelons. Dr. Dennison earned his bachelor’s, master’s, and Ph.D. degrees from the University of California, Irvine, all in psychology with a specialization in cognitive neuroscience. He is stationed at ARL-West in Playa Vista, CA. |
Breakout |
| 2021 |
Short Course Introduction to Neural Networks for Deep Learning with TensorFlow (Abstract)
This mini-tutorial session discusses the practical application of neural networks from a layperson’s perspective and walks through a hands-on case study in which we build, train, and analyze a few neural network models using TensorFlow. The course reviews the basics of neural networks and touches on more complex neural network architecture variants for deep learning applications. Deep learning techniques are becoming more prevalent throughout the development of autonomous and AI-enabled systems, and this session will provide students with the foundational intuition needed to understand these systems. |
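For a flavor of the hands-on portion, here is a minimal sketch of building and training a small network in TensorFlow 2.x. The MNIST dataset, layer sizes, and epoch count are illustrative assumptions, not the course’s actual materials.

```python
import tensorflow as tf

# Load a small benchmark dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A basic feed-forward network: flatten, one hidden layer, softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),                     # simple regularization
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))  # [loss, accuracy] on held-out data
```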
Roshan Patel Data Scientist US Army CCDC Armaments Center (bio)
Mr. Roshan Patel is a systems engineer and data scientist working at the CCDC Armaments Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master’s in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. At Rutgers, Mr. Patel was a part-time lecturer for systems programming and data science seminars. Mr. Patel is the current AI lead for the Systems Engineering Directorate at the CCDC Armaments Center. |
Short Course |
Materials
Recording | 2021 |
Tutorial Introduction to Qualitative Methods – Part 1 (Abstract)
Qualitative data, captured through freeform comment boxes, interviews, focus groups, and activity observation, is heavily employed in testing and evaluation (T&E). The qualitative research approach can offer many benefits, but knowledge of how to implement methods, collect data, and analyze data according to rigorous qualitative research standards is not broadly understood within the T&E community. This tutorial offers insight into the foundational concepts of method and practice that embody defensible approaches to qualitative research. We discuss where qualitative data comes from, how it can be captured, what kind of value it offers, and how to capitalize on that value through methods and best practices. |
Kristina Carter Research Staff Member Institute for Defense Analyses (bio)
Dr. Kristina Carter is a Research Staff Member at the Institute for Defense Analyses in the Operational Evaluation Division where she supports the Director, Operational Test and Evaluation (DOT&E) in the use of statistics and behavioral science in test and evaluation. She joined IDA full time in 2019 and her work focuses on the measurement and evaluation of human-system interaction. Her areas of expertise include design of experiments, statistical analysis, and psychometrics. She has a Ph.D. in Cognitive Psychology from Ohio University, where she specialized in quantitative approaches to judgment and decision making. |
Tutorial |
Recording | 2021 |
Tutorial Introduction to Qualitative Methods – Part 2 |
Daniel Hellman Research Staff Member Institute for Defense Analyses (bio)
Dr. Daniel Hellmann is a Research Staff Member in the Operational Evaluation Division at the Institute for Defense Analyses. He is also a prior-service U.S. Marine with multiple combat tours. Currently, Dr. Hellmann specializes in mixed methods research on topics related to distributed cognition, institutions and organizations, and Computer Supported Cooperative Work (CSCW). |
Tutorial | 2021 |
Tutorial Introduction to Qualitative Methods – Part 3 |
Emily Fedele Research Staff Member Institute for Defense Analyses (bio)
Emily Fedele is a Research Staff Member at the Institute for Defense Analyses in the Science and Technology Division. She joined IDA in 2018 and her work focuses on conducting and evaluating behavioral science research on a variety of defense related topics. She has expertise in research design, experimental methods, and statistical analysis. |
Tutorial | 2021 |
Tutorial Introduction to Structural Equation Modeling: Implications for Human-System Interactions (Abstract)
Structural Equation Modeling (SEM) is an analytical framework that offers unique opportunities for investigating human-system interactions. SEM is used heavily in the social and behavioral sciences, where emphasis is placed on (1) explanation rather than prediction, and (2) measuring variables that are not observed directly (e.g., perceived performance, satisfaction, quality, trust, etcetera). The framework facilitates modeling of survey data through confirmatory factor analysis and latent (i.e., unobserved) variable regression models. We provide a general introduction to SEM by describing what it is, the unique features it offers to analysts and researchers, and how it is easily implemented in JMP Pro 16.0. Attendees will learn how to perform path analysis and confirmatory factor analysis, assess model fit, compare alternative models, and interpret results provided in SEM. The presentation relies on a real-data example everyone can relate to. Finally, we shed light on a few published studies that have used SEM to unveil insights on human performance factors and the mechanisms by which performance is affected. The key goal of this presentation is to provide general exposure to a modeling tool that is likely new to most in the fields of defense and aerospace. |
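For readers who want the notation behind these ideas, the classic LISREL-style equations are sketched below; this is the standard textbook formulation rather than anything specific to the JMP Pro implementation.

```latex
\begin{align*}
x    &= \Lambda_x \xi + \delta       && \text{(measurement model, exogenous indicators)} \\
y    &= \Lambda_y \eta + \varepsilon && \text{(measurement model, endogenous indicators)} \\
\eta &= B \eta + \Gamma \xi + \zeta  && \text{(structural model among latent variables)}
\end{align*}
```

Here \(\xi\) and \(\eta\) are the latent exogenous and endogenous variables, the \(\Lambda\) matrices hold factor loadings, and \(\delta\), \(\varepsilon\), \(\zeta\) are error terms. Confirmatory factor analysis fits only the measurement equations; latent variable regression adds the structural equation.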
Laura Castro-Schilo Sr. Research Statistician Developer SAS Institute (bio)
Laura Castro-Schilo works on structural equations models in JMP. She is interested in multivariate analysis and its application to different kinds of data; continuous, discrete, ordinal, nominal and even text. Previously, she was Assistant Professor at the L. L. Thurstone Psychometric Laboratory at the University of North Carolina at Chapel Hill. Dr. Castro-Schilo obtained her PhD in quantitative psychology from the University of California, Davis. |
Tutorial |
Recording | 2021 |
Breakout Machine Learning Reveals that the Russian IRA’s Twitter Topic Patterns Evolved over Time (Abstract)
Introduction: Information Operations (IO) are a key component of our adversaries’ strategy to undermine U.S. military power without escalating to more traditional (and more easily identifiable) military strikes. Social media activity is one method of IO. In 2017 and 2018, Twitter suspended thousands of accounts likely belonging to the Kremlin-backed Internet Research Agency (IRA). Clemson University archived a large subset of these tweets (2.9M tweets posted by over 2800 IRA accounts), tagged each tweet with metadata (date, time, language, supposed geographical region, number of followers, etc.), and published this dataset on the polling aggregation website FiveThirtyEight. Methods: Machine Learning researchers at the Institute for Defense Analyses (IDA) downloaded Clemson’s dataset from FiveThirtyEight and analyzed both the content of the IRA tweets and their accompanying metadata. Using unsupervised learning techniques (Latent Dirichlet Allocation), IDA researchers mapped out how the patterns in the IRA’s tweet topics evolved over time. Results: Results showed that the IRA started tweeting in/before February 2012, but ramped up significantly in May/June 2015. Most tweets were in English, and most likely targeted the U.S. The IRA created new accounts after the first Twitter suspension in November 2017, with each new account quickly establishing an audience. Between at least January 2015 and October 2017, the IRA’s English tweet topics evolved over time, becoming tighter, more specific, more negative, and more polarizing, with the final pattern emerging in late 2015. Discussion: The United States government must expect that our adversaries’ social media activity will continue to evolve over time. Efficient processing pipelines are needed for semi-automated analyses of time-evolving social media activity. |
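A minimal sketch of the unsupervised approach named here, LDA topic modeling, using scikit-learn on a toy corpus; the preprocessing, topic count, and stand-in tweets are illustrative assumptions, not IDA’s actual pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in corpus; the real analysis used ~2.9M archived IRA tweets.
tweets = [
    "breaking election news candidate rally votes",
    "watch this cat video so funny lol",
    "candidate scandal election fraud votes news",
    "funny video cats dogs pets lol",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)   # tweet-by-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)      # per-tweet topic mixtures

# Inspect the top words that define each learned topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}:", ", ".join(top))
```

Fitting such a model within sliding time windows is one simple way to watch topic patterns evolve over time, as the abstract describes.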
Emily Parrish Research Associate Institute for Defense Analyses (bio)
Emily Parrish is a Research Associate in the Science and Technology Division at the Institute for Defense Analyses (IDA). Her research at IDA ranges from model verification and validation and countermine system developmental testing to using natural language processing (NLP) to interpret collections of data of interest to DoD sponsors. Emily graduated from the College of William and Mary in 2015 with a B.S. in chemistry and is currently earning her M.S. in data analytics at The George Washington University, with a focus on machine learning and NLP. |
Breakout |
| 2021 |
Breakout Metrics for Assessing Underwater Demonstrations for Detection and Classification of UXO (Abstract)
Receiver Operating Characteristic curves (ROC curves) are often used to assess the performance of detection and classification systems. ROC curves can have unexpected subtleties that make them difficult to interpret. For example, the Strategic Environmental Research and Development Program and the Environmental Security Technology Certification Program (SERDP/ESTCP) is sponsoring the development of novel systems for the detection and classification of Unexploded Ordnance (UXO) in underwater environments. SERDP is also sponsoring underwater testbeds to demonstrate the performance of these novel systems. The Institute for Defense Analyses (IDA) is currently designing and implementing the scoring process for these underwater demonstrations that addresses the subtleties of ROC curve interpretation. This presentation will provide an overview of the main considerations for ROC curve parameter selection when scoring underwater demonstrations for UXO detection and classification. |
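As a reference point for the metric under discussion, below is a minimal sketch of computing a ROC curve from scored detections with scikit-learn; the labels and scores are made-up stand-ins for a scored UXO/clutter candidate list.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Ground truth (1 = UXO, 0 = clutter) and classifier scores per candidate.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.55, 0.90])

# Each (fpr, tpr) pair is the operating point at one score threshold.
fpr, tpr, thresholds = roc_curve(y_true, scores)
print("area under the ROC curve:", auc(fpr, tpr))
```

Parameter choices made before this step, such as what counts as a detection and how scores are normalized across systems, are exactly the scoring subtleties the abstract flags.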
Jacob Bartel Research Associate Institute for Defense Analyses (bio)
Jacob Bartel is a Research Associate at the Institute for Defense Analyses (IDA). His research focuses on computational modeling and verification and validation (V&V), primarily in the field of nuclear engineering. Recently, he has worked with SERDP/ESTCP to develop and implement scoring processes for testing underwater UXO detection and classification systems. Prior to joining IDA, his graduate research focused on the development of novel algorithms to model fuel burnup in nuclear reactors. Jacob earned his master’s degree in Nuclear Engineering and his bachelor’s degree in Physics from Virginia Tech. |
Breakout |
| 2021 |
Breakout Modeling and Simulation in Support of the Decision Analysis Process (Abstract)
Informed enterprise and program decision making is central to the purpose of DoD’s Digital Engineering strategy. Decision analysis serves as a key mechanism to link the voice of the sponsor/end user with the voice of the engineer and the voice of the budgetary analyst, enabling a closed-loop requirements-writing approach that is informed by rigorous assessments of a broad range of system-level alternatives across a thorough set of stakeholder value criteria, including life-cycle costs, schedule, performance, and long-term viability. The decision analytics framework employed by the U.S. Army’s Combat Capabilities Development Command (CCDC) Armaments Center (AC) is underpinned by a state-of-the-art modeling and simulation framework called PRISM (Performance Related and Integrated Suite of Models), developed at CCDC-AC. PRISM was designed to allow performance estimates of a weapon system to evolve as more information and higher-fidelity representations of those systems become available. PRISM provides the most up-to-date performance estimates to the decision analysis framework so that decision makers have the best information available when making complex strategic decisions. This briefing will unpack PRISM and highlight the model design elements that make it the foundation of CCDC-AC’s weapon system architecture and design decision-making process. |
Michael Greco Computer Scientist U.S. Army CCDC Armaments Center (bio)
Mr. Greco is the lead architect and developer of the Performance Related and Integrated Suite of Models (PRISM) modeling and simulation framework. The PRISM tool has been used to support the analysis of many projects locally for CCDC-AC and externally for the larger Army analytical community. Mr. Greco has over 10 years of experience working in force effectiveness analysis with the use of operational and system performance simulations. |
Breakout |
Recording | 2021 |
Panel Multi-Agent Adaptive Coordinated Autonomy in Contested Battlefields (Abstract)
Autonomous multi-robot systems have the potential to augment the future force with enhanced capability while reducing the risk to human personnel in multi-domain operations (MDO). Mobile robots can constitute nodes in a heterogeneous Internet of Battlefield Things (IoBT); they can offer additional capability in the form of mobility to effectively make observations useful for planning and executing military operations against adversaries. In this talk, I will present the results of a series of field experiments where robots are tasked to perform military-relevant missions in realistic environments, in addition to describing the integration of mobile robot assets in the Multi-Purpose Sensing Array Distributed Proving Ground (MSA-DPG) for the purpose of augmenting IoBT systems. |
John Rogers Senior Research Scientist U.S. Army DEVCOM Army Research Laboratory (bio)
John Rogers is a research scientist specializing in autonomous mobile robotics at the Army Research Laboratory’s Intelligent Robotics Branch of the Computational and Information Sciences Directorate (CISD). John’s research has focused on autonomy for multi-robot teams as well as distributed multi-robot state estimation. John is currently leading the Tactical Behaviors group in the AI for Mobility and Maneuver essential research program, which is focused on developing deep learning and game-theoretic maneuver for ground robots against adversaries. Prior to this, John led the multi-robot mapping research on the Autonomy Research Pilot Initiative (ARPI) on “autonomous collective defeat of hard and deeply buried targets” in collaboration with his colleagues at the Air Force Research Laboratory. John has also partnered with DCIST, MAST, and Robotics CTA partners to extend funded research programs with in-house collaborative projects. John completed his Ph.D. degree at the Georgia Institute of Technology in 2012 with his advisor, Prof. Henrik Christensen, from the Robotics and Intelligent Machines Center. While at Georgia Tech, John participated in a variety of sponsored research projects, including the Micro Autonomous Systems and Technology (MAST) project from the Army Research Laboratory and a counter-terrorism “red team” project for the Naval Research Laboratory, in addition to his thesis research on semantic mapping and reasoning, culminating in his thesis “Life-long mapping of objects and places for domestic robots.” Prior to attending Georgia Tech, John completed an M.S. degree in Computer Science at Stanford (2006) while working with Prof. Sebastian Thrun and Prof. Andrew Ng on the DARPA Learning Applied to Ground Robots (LAGR) project. John also holds M.S. and B.S. degrees in Electrical and Computer Engineering from Carnegie Mellon University (2002). John has authored or co-authored over 50 scientific publications in robotics and computer vision journals and conferences. John’s current research interests are automatic exploration and mapping of large-scale indoor, outdoor, and subterranean environments, place recognition in austere locations, semantic scene understanding, and probabilistic reasoning for autonomous mobile robots. Google Scholar: https://scholar.google.com/citations?user=uH_LDocAAAAJ&hl=en |
Panel |
| 2021 |
Keynote Opening Remarks (Abstract)
Norton A. Schwartz serves as President of the Institute for Defense Analyses (IDA), a nonprofit corporation operating in the public interest. IDA manages three Federally Funded Research and Development Centers that answer the most challenging U.S. security and science policy questions with objective analysis leveraging extraordinary scientific, technical, and analytic expertise. At IDA, General Schwartz (U.S. Air Force, retired) directs the activities of more than 1,000 scientists and technologists employed by IDA. General Schwartz has a long and prestigious career of service and leadership that spans over 5 decades. He was most recently President and CEO of Business Executives for National Security (BENS). During his 6-year tenure at BENS, he was also a member of IDA’s Board of Trustees. Prior to retiring from the U.S. Air Force, General Schwartz served as the 19th Chief of Staff of the U.S. Air Force from 2008 to 2012. He previously held senior joint positions as Director of the Joint Staff and as the Commander of the U.S. Transportation Command. He began his service as a pilot with the airlift evacuation out of Vietnam in 1975. General Schwartz is a U.S. Air Force Academy graduate and holds a master’s degree in business administration from Central Michigan University. He is also an alumnus of the Armed Forces Staff College and the National War College. He is a member of the Council on Foreign Relations and a 1994 Fellow of Massachusetts Institute of Technology’s Seminar XXI. General Schwartz has been married to Suzie since 1981. |
Norton Schwartz President Institute for Defense Analyses |
Keynote |
Recording | 2021 |
Keynote Opening Remarks (Abstract)
Dr. O’Toole is the Acting Director, Operational Test and Evaluation as of January 20, 2021. Dr. O’Toole was appointed as the Principal Deputy Director, Operational Test and Evaluation in February 2020. In this capacity he is the principal staff assistant for all functional areas assigned to the office. He participates in the formulation, development, advocacy, and oversight of policies of the Secretary of Defense and in the development and implementation of test and test resource programs. He supports the Director in the planning, conduct, evaluation, and reporting of operational and live-fire testing. He serves as the Appropriation Director and Comptroller for the Operational Test and Evaluation, Defense Appropriation and the principal advisor to the Director on all Planning, Programming, and Budgeting System matters. Dr. O’Toole is the former Deputy Director for Naval Warfare within DOT&E. He oversaw the operational and live-fire testing of ships and submarines and their associated sensors, combat and communications systems, and weapons. He was also responsible for overseeing the adequacy of the test infrastructure and resources to support operational and live-fire testing for all acquisition programs across the Defense Department. Dr. O’Toole was previously an employee of the Naval Sea Systems Command as the Deputy Group Director of Aircraft Carrier Design and Systems Engineering. Prior to that, he was the Director of the Systems Engineering Division (Submarines and Undersea Systems), where he led a diverse team of engineers who supported all Submarine Program Managers. His other assignments include being a Ship Design Manager/Navy’s Technical Authority for the USS VIRGINIA Class submarines during design and new construction and for Amphibious Ships, Auxiliary Ships, and Command & Control Ships during in-service operations. Dr. O’Toole has also held other positions within the Department of Defense, such as Deputy Program Executive Officer (Maritime and Rotary Wing) at the United States Special Operations Acquisition Command, Staff to the Deputy Assistant Secretary of the Navy for Research, Development & Acquisition (Ship Programs), and Deputy Director of Regional Maintenance for COMPACFLT (N43). In addition, Dr. O’Toole has over 30 years of experience as a Naval Officer (Active and Reserve), retiring at the rank of Captain. His significant tours include five Commanding Officer tours. Dr. Raymond D. O’Toole, Jr. is a native of Long Island, NY, and a graduate of the State University of New York – Maritime College, earning a Bachelor of Engineering in Marine Engineering. He also holds a Master of Engineering Degree in Systems Engineering from Virginia Polytechnic Institute and State University, a Master of Science Degree in National Resource Strategy from the Industrial College of the Armed Forces, and a Doctorate in Engineering in the field of Engineering Management from the George Washington University, where he is now a Professional Lecturer of Engineering Management and Systems Engineering. He has received the SECDEF Meritorious Civilian Service Award and the USN Meritorious and Superior Civilian Service Awards. |
Raymond O’Toole Acting Director, Operational Test and Evaluation DOT&E |
Keynote |
Recording | 2021 |
Breakout Operational Cybersecurity Test and Evaluation of Non-IP and Wireless Networks (Abstract)
Nearly all land, air, and sea maneuver systems (e.g., vehicles, ships, aircraft, and missiles) are becoming more software-reliant and blending internal communication across both Internet Protocol (IP) and non-IP buses. IP communication is widely understood among the cybersecurity community, whereas expertise and available test tools for non-IP protocols such as Controller Area Network (CAN) and MIL-STD-1553 are not as commonplace. However, a core tenet of operational cybersecurity testing is to assess all potential pathways of information exchange present on the system, including IP and non-IP. In this presentation, we will introduce a few non-IP protocols (e.g., CAN, MIL-STD-1553) and provide a live demonstration of how to attack a CAN network using malicious message injection. We will also discuss how potential cyber effects on non-IP buses can lead to catastrophic mission effects on the target system. |
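For a sense of how low the barrier to such injection is, here is a minimal sketch using the open-source python-can package against a Linux virtual CAN interface; the arbitration ID and payload are made-up stand-ins, and this is not the presenters’ demo code.

```python
# Minimal CAN message-injection sketch (assumes python-can 4.x and Linux
# socketcan). A virtual bus can be created for safe experimentation with:
#   sudo ip link add dev vcan0 type vcan && sudo ip link set up vcan0
import can

bus = can.Bus(interface="socketcan", channel="vcan0")

# CAN frames carry no sender authentication, so a spoofed frame with a
# known arbitration ID is indistinguishable from a legitimate one.
msg = can.Message(
    arbitration_id=0x244,            # hypothetical ECU message ID
    data=[0x00, 0x00, 0x00, 0xFF],   # hypothetical spoofed payload
    is_extended_id=False,
)
bus.send(msg)
print(f"injected frame {msg.arbitration_id:#x} onto {bus.channel_info}")
bus.shutdown()
```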
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Breakout |
Recording | 2021 |
Roundtable Opportunities and Challenges for Openly Publishing Statistics Research for National Defense (Abstract)
Openly publishing on statistics for defense and national security poses certain challenges and is often not straightforward, but research in this area is important to share with the open community to advance the field. Since statistical research for national defense applications is rather niche, target journals and audiences are challenging to identify. Adding an additional hurdle, much of the data for practical implementation is sensitive, and surrogate datasets must be relied upon for publication. Lastly, many statisticians in these areas do not face the same expectations to openly publish as their colleagues. This roundtable is an opportunity for statisticians in the defense and national security community to come together and discuss the importance of and challenges to publishing within this space. Participants will be asked to share challenges and successes related to publishing research for national defense applications. A handout summarizing common challenges and tips for succeeding will be provided. Specific topics for discussion will include:
- What expectations exist for statisticians to publish in this community? Are these expectations reasonable?
- Are you encouraged and supported by your funding institution to openly publish?
- Are there opportunities for collaborative work across institutions that might further encourage publications? |
Lyndsay Shand R&D Statistician Sandia National Laboratories (bio)
Lyndsay Shand received her PhD in Statistics from the University of Illinois Urbana-Champaign in 2017 with a focus on spatio-temporal analysis. At Sandia, Lyndsay continues to conduct, lead, and publish research centered on space-time statistics with applications to material science, climate science, and disease modeling, to name a few. Specific research topics have included accounting for missing data in space-time point processes, Bayesian functional data registration, hierarchical disease modeling, spatial extremes, and design of experiments for computer models. Lyndsay currently leads efforts to quantify the effects of ship emissions on clouds to reduce uncertainty of aerosol processes in climate models and serves as programmatic lead and statistical subject matter expert of a team developing experimental design and surrogate modeling approaches for material modeling efforts at Sandia. Additionally, Lyndsay serves as a statistical consultant to many projects across the lab and was nominated for an employee recognition award for her leadership in 2019. Outside of research, Lyndsay is actively engaged in joint research with university partners including the University of Illinois, Brigham Young University, and the University of Washington. She is also a member of the International Society for Bayesian Analysis (ISBA), the American Statistical Association, and the American Geophysical Union, serving as co-web editor for ISBA and treasurer for the environmental section of ISBA. |
Roundtable |
| 2021 |
Roundtable Organizing and Sharing Data within the T&E Community (Abstract)
Effective data sharing requires alignment of personnel, systems, and policies. Data are costly and precious, and to get the most value out of the data we collect, it is important that we share and reuse them whenever possible and appropriate. Data are typically collected and organized with a single specific use or goal in mind, and after that goal has been achieved (e.g., the report is published), the data are no longer viewed as important or useful. This process is self-fulfilling, as future analysts who might want to use these data will not be able to find them or will be unable to understand the data sufficiently due to lack of documentation and metadata. Implementing data standards and facilitating sharing are challenging in the national security environment. There are many data repositories within the DoD, but most of them are specific to certain organizations and are accessible by a limited number of people. Valid concerns about security make the process of sharing particular data sets challenging, and the opacity of data ownership often complicates the issue. This roundtable will facilitate discussion of these issues. Participants will have opportunities to share their experiences trying to share data and make use of data from previous testing. We hope to identify useful lessons learned and find ways to encourage data sharing within the community. |
Matthew Avery Research Staff Member Institute for Defense Analyses (bio)
Dr. Matthew Avery is a Research Staff Member at the Institute for Defense Analyses in the Operational Evaluation Division. As the Tactical Aerial Reconnaissance Systems Project Leader, he provides input to the DoD on test plans and reports for Army, Marine Corps, and Navy tactical unmanned aircraft systems. Dr. Avery co-chairs IDA’s Data Governance Committee and leads the Data Management Group within IDA’s Test Science team, where he helps develop internal policies and trainings on data usage, access and storage. From 2017 to 2018, Dr. Avery worked as an embedded analyst at the Department of Defense Office of Cost Analysis and Program Evaluation (CAPE), focusing on Space Control and Mobilization. Dr. Avery received his PhD in Statistics from North Carolina State University in 2012. |
Roundtable | 2021 |
Roundtable Overcoming Challenges and Applying Sequential Procedures to T&E (Abstract)
The majority of statistical analyses involve observing a fixed set of data and analyzing those data after the final observation has been collected to draw some inference about the population from which they came. Unlike these traditional methods, sequential analysis is concerned with situations for which the number, pattern, or composition of the data is not determined at the start of the investigation but instead depends upon the information acquired throughout the course of the investigation. Expanding the use of sequential analysis in DoD testing has the potential to save substantial test dollars and decrease test time. However, switching from traditional to sequential planning will likely induce unique challenges. The goal of this roundtable is to provide an open forum for topics related to sequential analyses. We aim to discuss potential challenges, identify potential ways to overcome them, and share success stories of sequential analysis implementation and lessons learned. Specific questions for discussion will be provided to participants prior to the event. |
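To ground the idea, here is a toy sketch of one classical sequential procedure, Wald’s sequential probability ratio test (SPRT), for a Bernoulli success rate; the hypotheses, error rates, and simulated trial data are illustrative assumptions only.

```python
import math
import random

p0, p1 = 0.80, 0.95       # H0 vs. H1 success probability (e.g., reliability)
alpha, beta = 0.05, 0.10  # allowed type I / type II error rates

upper = math.log((1 - beta) / alpha)  # cross above: stop, accept H1
lower = math.log(beta / (1 - alpha))  # cross below: stop, accept H0

random.seed(1)
llr, n = 0.0, 0
while lower < llr < upper:
    x = 1 if random.random() < 0.90 else 0  # simulated pass/fail trial
    # Update the log-likelihood ratio after each observation.
    llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
    n += 1

print(f"stopped after {n} trials:", "accept H1" if llr >= upper else "accept H0")
```

Unlike a fixed-sample test, the stopping time n is data-dependent, which is the source of both the potential savings and the planning challenges discussed here.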
Rebecca Medlin Research Staff Member Institute for Defense Analyses (bio)
Dr. Rebecca Medlin is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and has designed tests and conducted statistical analyses for several major defense programs including tactical vehicles, mobility aircraft, radars, and electronic warfare systems. Her areas of expertise include design of experiments, statistical modeling, and reliability. She has a Ph.D. in Statistics from Virginia Tech. |
Roundtable | 2021 |
Breakout Physics-Informed Deep Learning for Modeling and Simulation under Uncertainty (Abstract)
Certification by analysis (CBA) involves the supplementation of expensive physical testing with modeling and simulation. In high-risk fields such as defense and aerospace, it is critical that these models accurately represent the real world, and thus they must be verified and validated and must provide measures of uncertainty. While machine learning (ML) algorithms such as deep neural networks have seen significant success in low-risk sectors, they are typically opaque, difficult to interpret, and often fail to meet these stringent requirements. Recently, a Department of Energy (DOE) report was released on the concept of scientific machine learning (SML) [1] with the aim of generally improving confidence in ML and enabling broader use in the scientific and engineering communities. The report identified three critical attributes that ML algorithms should possess: domain-awareness, interpretability, and robustness. Recent advances in physics-informed neural networks (PINNs) are promising in that they can provide both domain awareness and a degree of interpretability [2, 3, 4] by using governing partial differential equations as constraints during training. In this way, PINNs output physically admissible, albeit deterministic, solutions. Another noteworthy deep learning algorithm is the generative adversarial network (GAN), which can learn probability distributions [5] and provide robustness through uncertainty quantification. A limited number of works have recently demonstrated success by combining these two methods into what is referred to as a physics-informed GAN, or PIGAN [6, 7]. The PIGAN has the ability to produce physically admissible, non-deterministic predictions as well as solve non-deterministic inverse problems, potentially meeting the goals of domain awareness, interpretability, and robustness. This talk will present an introduction to PIGANs as well as an example of current NASA research implementing these networks.
REFERENCES
[1] Nathan Baker, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop report on basic research needs for scientific machine learning: Core technologies for artificial intelligence. Technical report, USDOE Office of Science (SC), Washington, DC (United States), 2019.
[2] Maziar Raissi, Paris Perdikaris, and George Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
[3] Alexandre Tartakovsky, Carlos Ortiz Marrero, Paris Perdikaris, Guzel Tartakovsky, and David Barajas-Solano. Learning parameters and constitutive relationships with physics informed deep neural networks. arXiv preprint arXiv:1808.03398, 2018.
[4] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 807:155-166, 2016.
[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[6] Liu Yang, Dongkun Zhang, and George Karniadakis. Physics-informed generative adversarial networks for stochastic differential equations. arXiv preprint arXiv:1811.02033, 2018.
[7] Yibo Yang and Paris Perdikaris. Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics, 394:136-152, 2019. |
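For reference, the composite loss that makes a network “physics-informed” in the formulation of Raissi et al. [2], written for a generic PDE \(u_t + \mathcal{N}[u] = 0\); the notation is standard rather than specific to the NASA work:

```latex
\begin{align*}
f(t, x) &:= u_t + \mathcal{N}[u](t, x)
  && \text{(PDE residual, built from } u \text{ via automatic differentiation)} \\
\mathcal{L} &= \frac{1}{N_u} \sum_{i=1}^{N_u} \bigl| u(t_u^i, x_u^i) - u^i \bigr|^2
             + \frac{1}{N_f} \sum_{j=1}^{N_f} \bigl| f(t_f^j, x_f^j) \bigr|^2
  && \text{(data misfit plus residual at collocation points)}
\end{align*}
```

Driving the second term toward zero is what constrains the network to physically admissible solutions; a PIGAN replaces the deterministic network with a generator and trains it adversarially to capture uncertainty [6, 7].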
Patrick Leser Aerospace Technologist NASA Langley Research Center (bio)
Dr. Patrick Leser is a researcher in the Durability, Damage Tolerance, and Reliability Branch (DDTRB) at NASA Langley Research Center (LaRC) in Hampton, VA. After receiving a B.S. in Aerospace Engineering from North Carolina State University (NCSU), he became a NASA civil servant under the Pathways Intern Employment Program in 2014. In 2017, he received his PhD in Aerospace Engineering, also from NCSU. Dr. Leser’s research focuses primarily on uncertainty quantification (UQ), model calibration, and fatigue crack growth in metallic materials. Dr. Leser has applied these interests to various topics including structural health management, digital twin, additive manufacturing, battery health management and various NASA Engineering Safety Center (NESC) assessments, primarily focusing on the fatigue life of composite overwrapped pressure vessels (COPVs). A primary focus of his work has been the development of computationally-efficient UQ methods that utilize fields such as high performance computing and machine learning. |
Breakout |
| 2021 |