Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Contributed Sound level recommendations for quiet sonic boom dose-response community surveys (Abstract)
The current ban on commercial overland supersonic flight may be replaced by a noise limit on sonic boom sound level. NASA is establishing a quiet sonic boom database to guide the new regulation. The database will consist of multiple community surveys used to model the dose-response relationship between sonic boom sound levels and human annoyance. There are multiple candidate dose-response modeling techniques, such as classical logistic regression and multilevel modeling. To plan for these community surveys, recommendations for data collection will be developed from pilot community test data. Two important aspects are selecting sample size and sound level range. Selection of sample size must be strategic as large sample sizes are costly whereas small sample sizes may result in more uncertainty in the estimates. Likewise, there are trade-offs associated with selection of the sound level range. If the sound level range includes excessively high sound levels, the public may misunderstand the potential impact of quiet sonic booms, resulting in a negative backlash against a promising technological advancement. Conversely, a narrow range that includes only low sound levels might exclude the eventual noise limit. This presentation will focus on recommendations for sound level range given the expected shape of the dose-response curve. |
Jasme Lee North Carolina State University |
Contributed | Materials | 2018 |
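A minimal sketch of the kind of dose-response fit the abstract describes, using simulated single-event annoyance responses and an ordinary logistic regression; the slope, intercept, sound-level ranges, and sample size below are illustrative assumptions, not values from the NASA pilot data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Assumed "true" dose-response curve: probability of being highly annoyed
# as a logistic function of sonic boom level (dB). Purely illustrative.
beta0, beta1 = -30.0, 0.35

def p_annoyed(level):
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * level)))

def simulate_survey(n, lo, hi):
    """Simulate n single-event survey responses over a sound level range."""
    levels = rng.uniform(lo, hi, size=n)
    responses = rng.binomial(1, p_annoyed(levels))
    return levels, responses

# Compare two candidate designs -- a wide versus a narrow sound level range --
# at the same sample size, via the precision of the estimated slope.
for lo, hi in [(75, 95), (80, 88)]:
    levels, resp = simulate_survey(600, lo, hi)
    fit = sm.Logit(resp, sm.add_constant(levels)).fit(disp=0)
    print(f"range {lo}-{hi} dB: slope {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")
```

Repeating such a simulation over candidate sample sizes and level ranges is one way to expose the cost-versus-uncertainty trade-offs the abstract highlights.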
Webinar I have the Power! Power Calculation in Complex (and Not So Complex) Modeling Situations Part 2 (Abstract)
Instructor Bio: Ryan Lekivetz is a Senior Research Statistician Developer for the JMP Division of SAS where he implements features for the Design of Experiments platforms in JMP software. |
Ryan Lekivetz JMP Division, SAS Institute Inc. |
Webinar | Session Recording |
Recording | 2020 |
Breakout Machine Learning: Overview and Applications to Test (Abstract)
Machine learning is quickly gaining importance in being able to infer meaning from large, high-dimensional datasets. It has even demonstrated performance meeting or exceeding human capabilities in conducting a particular set of tasks such as speech recognition and image recognition. Employing these machine learning capabilities can lead to increased efficiency in data collection, processing, and analysis. Presenters will provide an overview of common examples of supervised and unsupervised learning tasks and algorithms as an introduction to those without experience in machine learning. Presenters will also provide motivation for machine learning tasks and algorithms in a variety of test and evaluation settings. For example, in both developmental and operational test, restrictions on instrumentation, number of sorties, and the amount of time allocated to analyze collected data make data analysis challenging. When instrumentation is unavailable or fails, a common back-up data source is an over-the-shoulder video recording or recordings of aircraft intercom and radio transmissions, which traditionally are tedious to analyze. Machine learning-based image and speech recognition algorithms can assist in extracting information quickly from hours of video and audio recordings. Additionally, unsupervised learning techniques may be used to aid in the identification of influences of logged or uncontrollable factors in many test and evaluation settings. Presenters will provide a potential example for the application of unsupervised learning techniques to test and evaluation. |
Megan Lewis AFOTEC |
Breakout | Materials | 2017 |
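As a small illustration of the unsupervised-learning idea mentioned above (finding structure driven by logged or uncontrollable factors), the sketch below clusters notional flight-test conditions; the factor names, units, and numbers are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Notional logged test conditions: altitude (ft), airspeed (kt), detection range (nm).
# Two unlabeled "regimes" are embedded to mimic an uncontrolled factor.
low  = rng.normal([5_000, 250, 20], [1_000, 20, 3], size=(100, 3))
high = rng.normal([30_000, 450, 45], [2_000, 30, 5], size=(100, 3))
X = np.vstack([low, high])

# Unsupervised step: cluster the logged conditions, then inspect how an
# outcome of interest differs across the discovered groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for k in range(2):
    grp = X[labels == k]
    print(f"cluster {k}: n={len(grp)}, mean detection range = {grp[:, 2].mean():.1f} nm")
```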
Keynote Testing and Analytical Challenges on the Path to Hypersonic Flight |
Mark Lewis Director IDA-STPI (bio)
Dr. Mark J. Lewis is the Director of IDA's Science and Technology Policy Institute, a federally funded research and development center. He leads an organization that provides analysis of national and international science and technology issues to the Office of Science and Technology Policy in the White House, as well as other Federal agencies including the National Institutes of Health, the National Science Foundation, NASA, the Department of Energy, Homeland Security, and the Federal Aviation Administration, among others. Prior to taking charge of STPI, Dr. Lewis served as the Willis Young, Jr. Professor and Chair of the Department of Aerospace Engineering at the University of Maryland. A faculty member at Maryland for 24 years, Dr. Lewis taught and conducted basic and applied research. From 2004 to 2008, Dr. Lewis was the Chief Scientist of the U.S. Air Force. From 2010 to 2011, he was President of the American Institute of Aeronautics and Astronautics (AIAA). Dr. Lewis attended the Massachusetts Institute of Technology, where he received a Bachelor of Science degree in aeronautics and astronautics, a Bachelor of Science degree in earth and planetary science (1984), a Master of Science (1985), and a Doctor of Science (1988) in aeronautics and astronautics. Dr. Lewis is the author of more than 300 technical publications and has been an adviser to more than 70 graduate students. Dr. Lewis has also served on various advisory boards for NASA, the Air Force, and DoD, including two terms on the Air Force Scientific Advisory Board, the NASA Advisory Council, and the Aeronautics and Space Engineering Board of the National Academies. Dr. Lewis's awards include the Meritorious Civilian Service Award, the Exceptional Civilian Service Award, and the IECEC/AIAA Lifetime Achievement Award; he was recognized as the 1994 AIAA National Capital Young Scientist/Engineer of the Year and is an Aviation Week and Space Technology Laureate (2007). |
Keynote |
2018 |
Tutorial A Statistical Tool for Efficient and Information-Rich Testing (Abstract)
Binomial metrics like probability-to-detect or probability-to-hit typically provide operationally meaningful and easy-to-interpret test outcomes. However, they are information-poor metrics and extremely expensive to test. The standard power calculations to size a test employ hypothesis tests, which typically result in many tens to hundreds of runs. In addition to being expensive, the test is most likely inadequate for characterizing performance over a variety of conditions due to the inherently large statistical uncertainties associated with binomial metrics. A solution is to convert to a continuous variable, such as miss distance or time-to-detect. The common objection to switching to a continuous variable is that the hit/miss or detect/non-detect binomial information is lost, when the fraction of misses/no-detects is often the most important aspect of characterizing system performance. Furthermore, the new continuous metric appears to no longer be connected to the requirements document, which was stated in terms of a probability. These difficulties can be overcome with the use of censored data analysis. This presentation will illustrate the concepts and benefits of this approach, and will illustrate a simple analysis with data, including power calculations to show the cost savings for employing the methodology. |
Bram Lillard Research Staff Member IDA |
Tutorial | Materials | 2016 |
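A minimal sketch of the censored-data idea described above: trials that end without a detection are treated as right-censored times rather than binomial misses, a parametric model is fit by maximum likelihood, and the probability of detecting within the scenario window is recovered. The lognormal model, cutoff, and sample size are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Notional continuous metric: time-to-detect (s). Trials that end with no
# detection are right-censored at the scenario length instead of "misses".
true_dist = stats.lognorm(s=0.5, scale=60)      # assumed, for illustration
t = true_dist.rvs(40, random_state=rng)
cutoff = 90.0
detected = t <= cutoff                           # non-detects only tell us t > cutoff
obs = np.where(detected, t, cutoff)

def neg_loglik(params):
    mu, sigma = params
    d = stats.lognorm(s=sigma, scale=np.exp(mu))
    ll = np.sum(d.logpdf(obs[detected]))         # exact detection times
    ll += np.sum(d.logsf(obs[~detected]))        # censored runs contribute P(T > cutoff)
    return -ll

fit = optimize.minimize(neg_loglik, x0=[np.log(60), 0.5],
                        bounds=[(None, None), (1e-3, None)])
mu_hat, sigma_hat = fit.x
p_within = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat)).cdf(cutoff)
print(f"estimated P(detect within {cutoff:.0f} s) = {p_within:.2f}")
```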
Breakout Censored Data Analysis for Performance Data (Abstract)
Binomial metrics like probability-to-detect or probability-to-hit typically provide operationally meaningful and easy-to-interpret test outcomes. However, they are information-poor metrics and extremely expensive to test. The standard power calculations to size a test employ hypothesis tests, which typically result in many tens to hundreds of runs. In addition to being expensive, the test is most likely inadequate for characterizing performance over a variety of conditions due to the inherently large statistical uncertainties associated with binomial metrics. A solution is to convert to a continuous variable, such as miss distance or time-to-detect. The common objection to switching to a continuous variable is that the hit/miss or detect/non-detect binomial information is lost, when the fraction of misses/no-detects is often the most important aspect of characterizing system performance. Furthermore, the new continuous metric appears to no longer be connected to the requirements document, which was stated in terms of a probability. These difficulties can be overcome with the use of censored data analysis. This presentation will illustrate the concepts and benefits of this approach, and will illustrate a simple analysis with data, including power calculations to show the cost savings for employing the methodology. |
Bram Lillard IDA |
Breakout | Materials | 2017 |
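To make the abstract's point about test size concrete, a rough normal-approximation power calculation for the binomial hypothesis test is sketched below; the threshold and goal probabilities, alpha, and power are arbitrary illustrative choices, not program requirements.

```python
import numpy as np
from scipy import stats

# Runs needed to distinguish a threshold probability p0 from a goal p1 with a
# one-sided test at alpha = 0.05 and 80% power (normal approximation).
alpha, power = 0.05, 0.80
z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)

for p0, p1 in [(0.80, 0.90), (0.90, 0.95)]:
    n = ((z_a * np.sqrt(p0 * (1 - p0)) + z_b * np.sqrt(p1 * (1 - p1))) ** 2
         / (p1 - p0) ** 2)
    print(f"detect {p0:.2f} vs {p1:.2f}: about {int(np.ceil(n))} runs")
```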
Opening Remarks |
Bram Lillard Director, OED IDA (bio)
V. Bram Lillard assumed the role of director of the Operational Evaluation Division (OED) in early 2022. In this position, Bram provides strategic leadership, project oversight, and direction for the division’s research program, which primarily supports the Director, Operational Test and Evaluation (DOT&E) within the Office of the Secretary of Defense. He also oversees OED’s contributions to strategic studies, weapon system sustainment analyses, and cybersecurity evaluations for DOD and anti-terrorism technology evaluations for the Department of Homeland Security. Bram joined IDA in 2004 as a member of the research staff. In 2013-14, he was the acting science advisor to DOT&E. He then served as OED’s assistant director in 2014-21, ascending to deputy director in late 2021. Prior to his current position, Bram was embedded in the Pentagon where he led IDA’s analytical support to the Cost Assessment and Program Evaluation office within the Office of the Secretary of Defense. He previously led OED’s Naval Warfare Group in support of DOT&E. In his early years at IDA, Bram was the submarine warfare project lead for DOT&E programs. He is an expert in quantitative data analysis methods, test design, naval warfare systems and operations and sustainment analyses for Defense Department weapon systems. Bram has both a doctorate and a master’s degree in physics from the University of Maryland. He earned his bachelor’s degree in physics and mathematics from State University of New York at Geneseo. Bram is also a graduate of the Harvard Kennedy School’s Senior Executives in National and International Security program, and he was awarded IDA’s prestigious Goodpaster Award for Excellence in Research in 2017. |
2022 |
Breakout A New Method for Planning Full-Up System-Level (FUSL) Live Fire Tests (Abstract)
Planning Full-Up System-Level (FUSL) Live Fire tests is a complex process that has historically relied solely on subject matter expertise. In particular, there is no established method to determine the appropriate number of FUSL tests necessary for a given program. We developed a novel method that is analogous to the Design of Experiments process that is used to determine the scope of Operational Test events. Our proposed methodology first requires subject matter experts (SMEs) to define all potential FUSL shots. For each potential shot, SMEs estimate the severity of that shot, the uncertainty of that severity estimate, and the similarity of that shot to all other potential shots. We developed a numerical optimization algorithm that uses the SME inputs to generate a prioritized list of FUSL events and a corresponding plot of the total information gained with each successive shot. Together, these outputs can help analysts determine the adequate number of FUSL tests for a given program. We illustrate this process with an example on a notional ground vehicle. Future work is necessary prior to implementation on a program of record. |
Lindsey Butler Research Staff Member IDA (bio)
Dr. Lindsey Butler holds a B.S. in Chemical Engineering from Virginia Tech and a Ph.D. in Biomedical Engineering from the University of South Carolina. She has worked at the Institute for Defense Analyses for 5 years, where she supports the Director of Operational Test and Evaluation. Dr. Butler is the Deputy Live Fire lead at the Institute for Defense Analyses. Her primary projects focus on assessing the survivability of body armor and armored vehicles against operationally realistic threats. She also has expertise in evaluating casualty assessments of personnel after live fire tests. |
Breakout | Session Recording |
Recording | 2022 |
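The abstract does not publish the optimization itself, so the sketch below is only one plausible reading of it: a greedy prioritization in which each shot's marginal information is its severity-weighted uncertainty, discounted by its similarity to shots already selected. All inputs and the scoring rule are invented stand-ins, not the authors' algorithm.

```python
import numpy as np

# Notional SME inputs for five candidate FUSL shots: severity (0-1),
# uncertainty in that severity estimate (0-1), and pairwise similarity
# (1 = fully redundant shots). All numbers are made up for illustration.
severity    = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
uncertainty = np.array([0.6, 0.5, 0.7, 0.3, 0.4])
similarity  = np.array([
    [1.0, 0.8, 0.2, 0.1, 0.1],
    [0.8, 1.0, 0.3, 0.1, 0.2],
    [0.2, 0.3, 1.0, 0.4, 0.3],
    [0.1, 0.1, 0.4, 1.0, 0.6],
    [0.1, 0.2, 0.3, 0.6, 1.0],
])

base_value = severity * uncertainty          # severity-weighted uncertainty
selected, cumulative = [], 0.0
remaining = set(range(len(base_value)))

def marginal(i):
    """Value of shot i discounted by its overlap with shots already chosen."""
    overlap = max((similarity[i, j] for j in selected), default=0.0)
    return base_value[i] * (1.0 - overlap)

# Greedy prioritization: repeatedly take the shot with the largest marginal
# information, and track the cumulative information gained with each shot.
while remaining:
    best = max(remaining, key=marginal)
    gain = marginal(best)
    cumulative += gain
    selected.append(best)
    remaining.remove(best)
    print(f"shot {best}: marginal info {gain:.2f}, cumulative {cumulative:.2f}")
```

Plotting the cumulative column against shot count gives the kind of diminishing-returns curve the abstract uses to argue for an adequate number of FUSL tests.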
Keynote Thursday Keynote Speaker II |
Michael Little Program Manager, Advanced Information Systems Technology Earth Science Technology Office, NASA Headquarters (bio)
Over the past 45 years, Mike's primary focus has been the management of research and development, with an emphasis on making the results more useful in meeting the needs of the user community. Since 1984, he has specialized in communications, data, and processing systems, including projects in NASA, the US Air Force, the FAA, and the Census Bureau. Before that, he worked on Major System Acquisition Programs in the Department of Defense, including Marine Corps combat vehicles and US Navy submarines. Currently, Mike manages a comprehensive program to provide NASA's Earth Science research efforts with the information technologies it will need in the 2020-2035 time frame to characterize, model, and understand the Earth. This program addresses the full data lifecycle, from generating data using instruments and models, through the management of the data, to the ways in which information technology can help to exploit the data. Of particular interest today are the ways in which NASA can measure and understand transient and transitional phenomena and the impact of climate change. The AIST Program focuses on the application of applied math and statistics, artificial intelligence, case-based reasoning, machine learning, and automation to improve our ability to use observational data and model output in understanding Earth's physical processes and natural phenomena. Training and other skills: application of cloud computing; US Government computer security; US Navy nuclear propulsion operations and maintenance on two submarines. |
Keynote |
2019 |
Webinar D-Optimally Based Sequential Test Method for Ballistic Limit Testing (Abstract)
Ballistic limit testing of armor is testing in which a kinetic energy threat is shot at armor at varying velocities. The striking velocity and whether the threat completely or partially penetrated the armor are recorded. The probability of penetration is modeled as a function of velocity using a generalized linear model. The parameters of the model serve as inputs to MUVES, a DoD software tool used to analyze weapon system vulnerability and munition lethality. Generally, the probability of penetration is assumed to be monotonically increasing with velocity. However, in cases in which there is a change in penetration mechanism, such as the shatter gap phenomenon, the probability of penetration can no longer be assumed to be monotonically increasing and a more complex model is necessary. One such model was developed by Chang and Bodt to model the probability of penetration as a function of velocity over a velocity range in which there are two penetration mechanisms. This paper proposes a D-optimally based sequential shot selection method to efficiently select threat velocities during testing. Two cases are presented: the case in which the penetration mechanism for each shot is known (via high-speed or post-shot x-ray) and the case in which the penetration mechanism is not known. This method may be used to support an improved evaluation of armor performance for cases in which there is a change in penetration mechanism. |
Leonard Lombardo Mathematician U.S. Army Aberdeen Test Center (bio)
Leonard currently serves as an analyst for the RAM/ILS Engineering and Analysis Division at the U.S. Army Aberdeen Test Center (ATC). At ATC, he is the lead analyst for both ballistic testing of helmets and fragmentation analysis. Previously, while on a developmental assignment at the U.S. Army Evaluation Center, he worked towards increasing the use of generalized linear models in ballistic limit testing. Since then, he has contributed towards the implementation of generalized linear models within the test center through test design and analysis. |
Webinar | Session Recording |
Recording | 2020 |
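A minimal sketch of sequential D-optimal point selection for a logistic (GLM) ballistic limit model: each new shot velocity maximizes the determinant of the Fisher information of the augmented design. For simplicity it uses fixed planning values for the GLM parameters; the method in the abstract would re-estimate them from the accumulating shot data, and the two-penetration-mechanism case is not shown. All numbers are notional.

```python
import numpy as np

def fisher_info(velocities, b0, b1):
    """Fisher information for a logistic penetration model P(pen) = expit(b0 + b1*v)."""
    X = np.column_stack([np.ones_like(velocities), velocities])
    p = 1.0 / (1.0 + np.exp(-(X @ np.array([b0, b1]))))
    w = p * (1.0 - p)
    return (X * w[:, None]).T @ X

# Planning values for the GLM parameters (assumed here; in practice they are
# updated from the shot data as the test proceeds).
b0, b1 = -40.0, 0.02                      # logit scale, velocity in ft/s
candidates = np.arange(1800.0, 2301.0, 10.0)
shots = [1850.0, 2000.0, 2250.0]          # notional shots already fired

# Sequential D-optimal selection: the next shot maximizes the determinant of
# the information matrix of the design augmented with that velocity.
for _ in range(5):
    scores = [np.linalg.det(fisher_info(np.array(shots + [v]), b0, b1))
              for v in candidates]
    v_next = float(candidates[int(np.argmax(scores))])
    shots.append(v_next)
    print(f"next shot velocity: {v_next:.0f} ft/s")
```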
Breakout Model Uncertainty and its Inclusion in Testing Results (Abstract)
Answers to real-world questions are often based on the use of judiciously chosen mathematical/statistical/physical models. In particular, assessment of failure probabilities of physical systems relies heavily on such models. Since no model describes the real world exactly, sensitivity analyses are conducted to examine the influences of (small) perturbations of an assumed model. In this talk we present a structured approach, using an “Assumptions Lattice” and corresponding “Uncertainty Pyramid”, for transparently conveying the influence of various assumptions on analysis conclusions. We illustrate this process in the context of a simple multicomponent system. |
Steven Lund NIST |
Breakout | Materials | 2017 |
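A toy illustration of the kind of assumption sensitivity the abstract describes, for a simple multicomponent (series) system: the failure probability is recomputed under perturbed modeling assumptions. The distributions and load are invented; this is not the authors' Assumptions Lattice machinery.

```python
from scipy import stats

# Toy two-component series system: it fails if either component's strength
# falls below an applied load of 1.0. The failure probability is recomputed
# under perturbed modeling assumptions to show how conclusions depend on them.
load = 1.0

def p_fail_series(dist_a, dist_b):
    p_a, p_b = dist_a.cdf(load), dist_b.cdf(load)
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

models = {
    "nominal (normal strengths)":        (stats.norm(1.5, 0.20), stats.norm(1.40, 0.15)),
    "component means reduced 5%":        (stats.norm(1.425, 0.20), stats.norm(1.33, 0.15)),
    "lognormal strengths, same medians": (stats.lognorm(s=0.15, scale=1.5),
                                          stats.lognorm(s=0.12, scale=1.4)),
}
for name, (da, db) in models.items():
    print(f"{name}: P(system failure) = {p_fail_series(da, db):.4f}")
```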
Roundtable Opportunities and Challenges for Openly Publishing Statistics Research for National Defense (Abstract)
Openly publishing on statistics for defense and national security poses certain challenges and is often not straightforward, but research in this area is important to share with the open community to advance the field. Since statistical research for national defense applications is rather niche, target journals and audiences are challenging to identify. Adding an additional hurdle, much of the data for practical implementation is sensitive and surrogate datasets must be relied upon for publication. Lastly, many statisticians in these areas do not face the same expectations to openly publish as their colleagues. This roundtable is an opportunity for statisticians in the defense and national security community to come together and discuss the importance and challenges for publishing within this space. Participants will be asked to share challenges and successes related to publishing research for national defense applications. A handout summarizing common challenges and tips for succeeding will be provided. Specific topics for discussion will include: What expectations exist for statisticians to publish in this community? Are these expectations reasonable? Are you encouraged and supported by your funding institution to openly publish? Are there opportunities for collaborative work across institutions that might further encourage publications? |
Lyndsay Shand R&D Statistician Sandia National Laboratories (bio)
Lyndsay Shand received her PhD in Statistics from the University of Illinois Urbana-Champaign in 2017 with a focus on spatio-temporal analysis. At Sandia, Lyndsay continues to conduct, lead, and publish on research centered around space-time statistics with applications to material science, climate science, and disease modeling, to name a few. Specific research topics have included accounting for missing data in space-time point processes, Bayesian functional data registration, hierarchical disease modeling, spatial extremes, and design of experiments for computer models. Lyndsay currently leads efforts to quantify the effects of ship emissions on clouds to reduce uncertainty of aerosol processes in climate models and serves as programmatic lead and statistical subject matter expert of a team developing experimental design and surrogate modeling approaches for material modeling efforts at Sandia. Additionally, Lyndsay serves as a statistical consultant to many projects across the lab and was nominated for an employee recognition award for her leadership in 2019. Outside of research, Lyndsay is actively engaged in joint research with university partners including the University of Illinois, Brigham Young University, and the University of Washington. She is also a member of the International Society for Bayesian Analysis (ISBA), the American Statistical Association, and the American Geophysical Union, serving as co-web editor for ISBA and treasurer for the environmental section of ISBA. |
Roundtable |
2021 |
Breakout Trust in Automation (Abstract)
This brief talk will focus on the process of human-machine trust in the context of automated intelligence tools. The trust process is multifaceted, and this talk will define concepts such as trust, trustworthiness, and trust behavior, and will examine how these constructs might be operationalized in user studies. The talk will walk through various aspects of what might make an automated intelligence tool more or less trustworthy. Further, the construct of transparency will be discussed as a mechanism to foster shared awareness and shared intent between humans and machines. |
Joseph Lyons Technical Advisor Air Force Research Laboratory |
Breakout | Materials | 2017 |
Panel Finding the Human in the Loop: Considerations for AI in Decision Making |
Joe Lyons Lead for the Collaborative Interfaces and Teaming Core Research Area 711 Human Performance Wing at Wright-Patterson AFB (bio)
Joseph B. Lyons is the Lead for the Collaborative Interfaces and Teaming Core Research Area within the 711 Human Performance Wing at Wright-Patterson AFB, OH. Dr. Lyons received his PhD in Industrial/Organizational Psychology from Wright State University in Dayton, OH, in 2005. Some of Dr. Lyons’ research interests include human-machine trust, interpersonal trust, human factors, and influence. Dr. Lyons has worked for the Air Force Research Laboratory as a civilian researcher since 2005, and between 2011-2013 he served as the Program Officer at the Air Force Office of Scientific Research where he created a basic research portfolio to study both interpersonal and human-machine trust as well as social influence. Dr. Lyons has published in a variety of peer-reviewed journals, and is an Associate Editor for the journal Military Psychology. Dr. Lyons is a Fellow of the American Psychological Association and the Society for Military Psychologists. |
Panel | Session Recording |
Recording | 2021 |
Panel Finding the Human in the Loop: Panelist |
Poornima Madhavan Principal Scientist and Capability Lead for Social and Behavioral Sciences MITRE (bio)
Dr. Poornima Madhavan is a Principal Scientist and Capability Lead for Social and Behavioral Sciences at the MITRE Corporation. She has more than 15 years of experience studying human-systems integration issues in sociotechnical systems including trust calibration, decision making, and risk perception. Dr. Madhavan spent the first decade of her career as a professor of Human Factors Psychology at Old Dominion University where she studied threat detection, risk analysis and human decision making in aviation and border security. This was followed by a stint as the Director of the Board on Human-Systems Integration at the National Academies of Sciences, Engineering and Medicine where she served as the primary spokesperson to the federal government on policy issues related to human-systems integration. Just before joining MITRE, Dr. Madhavan’s work focused on modeling human behavioral effects of non-lethal weapons and human-machine teaming for autonomous systems at the Institute for Defense Analyses. Dr. Madhavan received her M.A. and Ph.D. in Engineering Psychology from the University of Illinois at Urbana-Champaign and completed her post-doctoral fellowship in Social and Decision Sciences at Carnegie Mellon University. |
Panel | Session Recording |
Recording | 2021 |
Breakout B-52 Radar Modernization Test Design Considerations (Abstract)
Inherent system processes, restrictions on collection, or cost may impact the practical execution of an operational test. This study presents the use of blocking and split-plot designs when complete randomization is not feasible in operational test. Specifically, the USAF B-52 Radar Modernization Program test design is used to present tradeoffs of different design choices and the impacts of those choices on cost, operational relevance, and analytical rigor. |
Joseph Maloney AFOTEC |
Breakout | Materials | 2018 |
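A brief sketch of why the split-plot structure matters in a design like this: when a hard-to-change factor is set once per flight, a mixed model with a random flight effect gives that factor its proper (larger) whole-plot error term instead of treating all runs as independent. The factors, responses, and effect sizes below are notional, not B-52 data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Notional split-plot structure: the hard-to-change factor (radar mode) is set
# once per flight (whole plot); target type varies within a flight (sub plot).
flights = np.repeat(np.arange(8), 4)
mode    = np.repeat(np.tile(["legacy", "upgrade"], 4), 4)
target  = np.tile(["small", "large"], 16)

flight_effect = rng.normal(0, 1.0, 8)[flights]        # whole-plot (flight-to-flight) error
y = (50 + 5 * (mode == "upgrade") + 3 * (target == "large")
     + flight_effect + rng.normal(0, 0.5, 32))        # notional detection range

df = pd.DataFrame({"flight": flights, "mode": mode, "target": target, "y": y})

# Random flight effect captures the restricted randomization of the whole-plot factor.
fit = smf.mixedlm("y ~ mode * target", df, groups=df["flight"]).fit()
print(fit.summary())
```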
Roundtable Identifying Challenges and Solutions to T&E of Non-IP Networks (Abstract)
Many systems within the Department of Defense (DoD) contain networks that use both Internet Protocol (IP) and non-IP forms of information exchange. While IP communication is widely understood among the cybersecurity community, expertise and available test tools for non-IP protocols such as Controller Area Network (CAN), MIL-STD-1553, and SCADA are not as commonplace. Over the past decade, the DoD has repeatedly identified gaps in data collection and analysis when assessing the cybersecurity of non-IP buses. This roundtable is intended to open a discussion among testers and evaluators on the existing measurement and analysis tools for non-IP buses used across the community and also propose solutions to recurring roadblocks experienced when performing operational testing on non-IP components. Specific topics of discussion will include: What tools do you or your supporting teams use during cybersecurity events to attack, scan, and monitor non-IP communications? What raw quantitative data do you collect that captures the adversarial activity and/or system response from cyber aggression to non-IP components? Please provide examples of test instrumentation and data collection methods. What data analysis tools do you use to draw conclusions from measured data? What types of non-IP buses, including components on those buses, have you personally been able to test? What components were you not able to test? Why were you not able to test them? Was it due to safety concerns, lack of permission, lack of available tools and expertise, or other? Had you been given authority to test those components, do you think it would have improved the quality of test and comprehensiveness of the assessment? |
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Roundtable | 2021 |
Breakout A Statistical Approach for Uncertainty Quantification with Missing Data (Abstract)
Uncertainty quantification (UQ) has emerged as the science of quantitative characterization and reduction of uncertainties in simulation and testing. Stretching across applied mathematics, statistics, and engineering, UQ is a multidisciplinary field with broad applications. A popular UQ method to analyze the effects of input variability and uncertainty on the system responses is generalized Polynomial Chaos Expansion (gPCE). This method was developed using applied mathematics and does not require knowledge of a simulation's physics. Thus, gPCE may be used across disparate industries and is applicable to both individual component and system level simulations. The gPCE method can encounter problems when any of the input configurations fail to produce valid simulation results. gPCE requires that results be collected on a sparse grid Design of Experiments (DOE), which is generated based on probability distributions of the input variables. A failure to run the simulation at any one input configuration can result in a large decrease in the accuracy of a gPCE. In practice, simulation data sets with missing values are common because simulations regularly yield invalid results due to physical restrictions or numerical instability. We propose a statistical approach to mitigating the cost of missing values. This approach yields accurate UQ results even when simulation failures would otherwise make gPCE methods unreliable. The proposed approach addresses this missing data problem by introducing an iterative machine learning algorithm. This methodology allows gPCE modelling to handle missing values in the sparse grid DOE. The study will demonstrate the convergence characteristics of the methodology to reach steady-state values for the missing points using a series of simulations and numerical results. Remarks about the convergence rate and the advantages and feasibility of the proposed methodology will be provided. Several examples are used to demonstrate the proposed framework and its utility, including a secondary air system example from the jet engine industry and several non-linear test functions. This is based on joint work with Dr. Mark Andrews at SmartUQ. |
Mark Andrews | Breakout | 2019 |
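A simplified sketch of the iterative-imputation idea in the abstract: failed runs on the design grid are filled with the current surrogate's predictions, the surrogate is refit, and the process repeats until the imputed values stabilize. An ordinary quadratic least-squares surrogate on a full-factorial grid stands in here for the gPCE on a sparse grid; the simulator and its failure region are invented.

```python
import numpy as np

def simulator(x1, x2):
    """Notional simulation; returns NaN where the solver 'fails'."""
    y = np.exp(0.5 * x1) + 0.3 * x1 * x2 + x2**2
    y[(x1 > 1.2) & (x2 > 1.2)] = np.nan          # failed corner of the design
    return y

def basis(x1, x2):
    """Quadratic polynomial basis standing in for a (g)PC expansion."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Full-factorial stand-in for the sparse-grid DOE over two uncertain inputs.
g = np.linspace(-2, 2, 7)
x1, x2 = map(np.ravel, np.meshgrid(g, g))
y = simulator(x1, x2)
miss = np.isnan(y)

# Iterative imputation: fill failed runs with the current surrogate's
# prediction, refit, and repeat until the imputed values stop changing.
y_fill = np.where(miss, np.nanmean(y), y)
for _ in range(50):
    coef, *_ = np.linalg.lstsq(basis(x1, x2), y_fill, rcond=None)
    y_new = basis(x1, x2) @ coef
    if np.max(np.abs(y_new[miss] - y_fill[miss])) < 1e-8:
        break
    y_fill[miss] = y_new[miss]

print(f"imputed {miss.sum()} failed runs; surrogate coefficients: {np.round(coef, 3)}")
```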
Breakout Intelligent Integration of Limited-Knowledge IoT Services in a Cross-Reality Environment (Abstract)
The recent emergence of affordable, high-quality augmented-, mixed-, and virtual-reality (AR, MR, VR) technologies presents an opportunity to dramatically change the way users consume and interact with information. It has been shown that these immersive systems can be leveraged to enhance comprehension and accelerate decision-making in situations where data can be linked to spatial information, such as maps or terrain models. Furthermore, when immersive technologies are networked together, they allow for decentralized collaboration and provide perspective-taking not possible with traditional displays. However, enabling this shared space requires novel techniques in intelligent information management and data exchange. In this experiment, we explored a framework for leveraging distributed AI/ML processing to enable clusters of low-power, limited-functionality devices to deliver complex capabilities in aggregate to users distributed across the country collaborating simultaneously in a shared virtual environment. We deployed a motion-detecting camera and triggered detection events to send information using a distributed request/reply worker framework to a remotely located YOLO image classification cluster. This work demonstrates the capability for various IoT and IoBT systems to invoke functionality without a priori knowledge of the specific endpoint used to execute that functionality, by instead submitting a request based on a desired capability concept (e.g., image classification), requiring only: 1) knowledge of the broker location, 2) a valid public/private key pair to authenticate with the broker, and 3) the capability concept UUID and knowledge of the request/reply formats used by that concept. |
Mark Dennison Research Psychologist U.S. Army DEVCOM Army Research Laboratory (bio)
Mark Dennison is a research psychologist with DEVCOM U.S. Army Research Laboratory in the Computational and Information Sciences Directorate, Battlefield Information Systems Branch. He leads a team of government researchers and contractors focused on enabling cross-reality technologies to enhance lethality across domains through information management across echelons. Dr. Dennison graduated with a bachelor’s degree from the University of California at Irvine, and earned his Master’s and Ph.D. degrees from the University of California at Irvine, all in the field of psychology with a specialization in cognitive neuroscience. He is stationed at ARL-West in Playa Vista, CA. |
Breakout |
2021 |
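A minimal, in-process stand-in for the capability-based request/reply pattern described above: workers register under a capability-concept UUID, and an authenticated client requests that capability from the broker without knowing which endpoint will service it. The UUID, key, and worker below are placeholders, not the experiment's actual framework.

```python
import uuid

# Notional capability-concept identifier; clients and workers agree on it in advance.
IMAGE_CLASSIFICATION = uuid.UUID("11111111-2222-3333-4444-555555555555")

class Broker:
    def __init__(self, authorized_keys):
        self.authorized_keys = set(authorized_keys)
        self.workers = {}                       # capability UUID -> list of callables

    def register(self, capability, worker):
        self.workers.setdefault(capability, []).append(worker)

    def request(self, key, capability, payload):
        if key not in self.authorized_keys:
            raise PermissionError("client not authenticated with broker")
        if capability not in self.workers:
            raise LookupError("no worker offers this capability concept")
        worker = self.workers[capability][0]    # trivial routing policy
        return worker(payload)

def yolo_like_worker(payload):
    """Stand-in for a remote image-classification cluster."""
    return {"detections": ["vehicle"], "source_bytes": len(payload)}

broker = Broker(authorized_keys={"client-key-123"})
broker.register(IMAGE_CLASSIFICATION, yolo_like_worker)

# A motion-detection camera only needs the broker, a valid key, and the
# capability UUID -- not the address of the classification cluster.
reply = broker.request("client-key-123", IMAGE_CLASSIFICATION, payload=b"\x00" * 1024)
print(reply)
```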
Breakout Adversaries and Airwaves – Compromising Wireless and Radio Frequency Communications (Abstract)
Wireless and radio frequency (RF) technologies are ubiquitous in our daily lives, including laptops, key fobs, remote sensors, and antennas. These devices, while oftentimes portable and convenient, can potentially be susceptible to adversarial attack over the air. This breakout session will provide a short introduction to wireless hacking concepts such as passive scanning, active injection, and the use of software-defined radios to flexibly sample the RF spectrum. We will also ground these concepts in live demonstrations of attacks against both wireless and wired systems. |
Mark Herrera and Jason Schlup Research Staff Members IDA (bio)
Dr. Mark Herrera is a physicist (PhD – University of Maryland) turned defense analyst who specializes in missile systems, mine warfare, and cyber. As a Project Lead at the Institute for Defense Analyses (IDA), he leads a team of cyber subject matter experts in providing rigorous, responsive, and sound analytic input supporting independent assessments of a wide variety of US Navy platforms. Prior to IDA, Mark was a principal investigator with Heron Systems Inc., leading the development of machine learning algorithms to improve aircraft sensor performance. Dr. Jason Schlup received his Ph.D. in Aeronautics from the California Institute of Technology in 2018. He is now a Research Staff Member at the Institute for Defense Analyses and provides analytical support to the Director, Operational Test and Evaluation's Cyber Assessment Program. Jason also contributes to IDA's Cyber Lab capability. |
Breakout | Session Recording |
Recording | 2022 |
Keynote April 27 Opening Keynote II |
Jill Marlowe Digital Transformation Officer, NASA (bio)
Jill Marlowe is NASA's first Digital Transformation Officer, leading the Agency to conceive, architect, and accelerate enterprise digital solutions that transform NASA's work, workforce, and workplace to achieve bolder missions faster and more affordably than ever before. Her responsibilities include refinement and integration of NASA's digital transformation strategy, plans, and policies and coordination of implementation activities across the NASA enterprise in six strategic thrusts: data, collaboration, modeling, artificial intelligence/machine learning, process transformation, and culture and workforce. Prior to this role, Ms. Marlowe was the Associate Center Director, Technical, at NASA's Langley Research Center in Hampton, Virginia. Ms. Marlowe led strategy and transformation of the center's technical capabilities to assure NASA's future mission success. In this role, she focused on accelerating Langley's internal and external collaborations as well as the infusion of digital technologies critical for the center to thrive as a modern federal laboratory in an ever more digitally-enabled, hyper-connected, fast-paced, and globally-competitive world. In 2008, Ms. Marlowe was selected to the Senior Executive Service as the Deputy Director for Engineering at NASA Langley, and went on to serve as the center's Engineering Director and Research Director. With the increasing responsibility and scope of these roles, Ms. Marlowe has a broad range of leadership experiences that include: running large organizations of 500 – 1,000 people to deliver solutions to every one of NASA's mission directorates; sustaining and morphing a diverse portfolio of technical capabilities spanning aerosciences, structures & materials, intelligent flight systems, space flight instruments, and entry descent & landing systems; assuring safe operation of over two-million square feet of laboratories and major facilities; architecting partnerships with universities, industry and other government agencies to leverage and advance NASA's goals; project management of technology development and flight test experiments; and throughout all of this, incentivizing innovation in very different organizational cultures spanning foundational research, technology invention, flight design and development engineering, and operations. She began her NASA career in 1990 as a structural analyst supporting the development of numerous space flight instruments to characterize Earth's atmosphere. Ms. Marlowe's formal education includes a Bachelor of Science degree in Aerospace and Ocean Engineering from Virginia Tech in 1988, a Master of Science in Mechanical Engineering from Rensselaer Polytechnic Institute in 1990, and a Degree of Engineer in Civil and Environmental Engineering from George Washington University in 1997. She serves on advisory boards for Virginia Tech's Aerospace & Ocean Engineering Department, Sandia National Laboratory's Engineering Sciences Research Foundation, and Cox Communications' Digitally Inclusive Communities (regional). She is the recipient of two NASA Outstanding Leadership Medals, was named the 2017 NASA Champion of Innovation, is an AIAA Associate Fellow, and was inducted in 2021 into the Virginia Tech Academy of Aerospace & Ocean Engineering Excellence. She lives in Yorktown, Virginia, with her husband, Kevin, along with the youngest of their three children and two energetic labradoodles. |
Keynote | Session Recording |
Recording | 2022 |
Keynote Thursday Keynote Speaker I |
Wendy Martinez Director, Mathematical Statistics Research Center, Bureau of Labor Statistics ASA President-Elect (2020) ![]() (bio)
Wendy Martinez Director, Mathematical Statistics Research Center, Bureau of Labor Statistics ASA President-Elect (2020) (bio)
Keynote | Materials | 2019 |
Tutorial Introduction to Survey Design (Abstract)
Surveys are a common tool for assessing user experiences with systems in various stages of development. This mini-tutorial introduces the social and cognitive processes involved in survey measurement and addresses best practices in survey design. Clarity of question wording, appropriate scale use, and methods for reducing survey-fatigue are emphasized. Attendees will learn practical tips to maximize the information gained from user surveys and should bring paper and pencils to practice writing and evaluating questions. |
Justin Mary Research Staff Member IDA |
Tutorial | Materials | 2016 |
Keynote Assessing Human-Autonomy Interaction in Driving-Assist Settings (Abstract)
In order to determine how the perception, Autopilot, and driver monitoring systems of Tesla Model 3s interact with one another, and also to determine the scale of between- and within-car variability, a series of four on-road tests were conducted. Three sets of tests were conducted on a closed track and one was conducted on a public highway. Results show wide variability across and within three Tesla Model 3s, with excellent performance in some cases but also likely catastrophic performance in others. This presentation will not only highlight how such interactions can be tested, but also how results can inform requirements and designs of future autonomous systems. |
Mary “Missy” Cummings Professor Duke University (bio)
Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval pilot from 1988-1999, she was one of the U.S. Navy’s first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department and the Director of the Humans and Autonomy Laboratory. She is an AIAA Fellow and a member of the Veoneer, Inc. Board of Directors |
Keynote | Session Recording |
Recording | 2021 |
Keynote April 27th Opening Keynote I |
Wendy Masiello President, Wendy Mas Consulting LLC (bio)
Wendy Masiello is an independent consultant, having retired from the United States Air Force as a Lieutenant General. She is president of Wendy Mas Consulting, LLC and serves as an independent director for KBR Inc., EURPAC Service, Inc., and StandardAero (owned by The Carlyle Group). She is also a Director on the Procurement Round Table and the National ReBuilding Together Board, President-elect for the National Contract Management Association (NCMA) Board, Chair of the Rawls Advisory Council for Texas Tech University's College of Business, and serves on the Air Force Studies Board under the National Academies of Science, Engineering, and Medicine. Prior to her July 2017 retirement, she was Director of the Defense Contract Management Agency, where she oversaw a $1.4 billion budget and 12,000 people worldwide in oversight of 20,000 contractors performing 340,000 contracts with more than $2 trillion in contract value. During her 36-year career, General Masiello also served as Deputy Assistant Secretary (Contracting), Office of the Assistant Secretary of the Air Force for Acquisition, and as Program Executive Officer for the Air Force's $65 billion Service Acquisition portfolio. She also commanded the 96th Air Base Wing at Edwards Air Force Base and deployed to Iraq and Afghanistan as Principal Assistant Responsible for Contracting for Forces. General Masiello's medals and commendations include the Defense Superior Service Medal, the Distinguished Service Medal, and the Bronze Star. She earned her Bachelor of Business Administration degree from Texas Tech University, a Master of Science degree in logistics management from the Air Force Institute of Technology, and a Master of Science degree in national resource strategy from the Industrial College of the Armed Forces, Fort Lesley J. McNair, Washington, D.C., and is a graduate of Harvard Kennedy School's Senior Managers in Government program. General Masiello is a 2017 Distinguished Alum of Texas Tech University, was twice (2015 and 2016) named among Executive Mosaic's Wash 100, was the 2014 Greater Washington Government Contractor “Public Sector Partner of the Year,” and was recognized by Federal Computer Week as one of “The 2011 Federal 100”. She is an NCMA Certified Professional Contract Manager and an NCMA Fellow. |
Keynote | Session Recording |
Recording | 2022 |