Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Webinar The VV&A Problem: The Catch-22 for Simulated Testing of Fully Autonomous Systems (Abstract)
In order to verify, validate, and accredit (VV&A) a simulation environment for testing the performance of an autonomous system, testers must examine more than just sensor physics—they must also provide evidence that the environmental features which drive system decision making are represented at all. When systems are black boxes, though, these features are fundamentally unknown, necessitating that we first test to discover them. An umbrella of approaches known as “model induction” provides ways of demystifying black boxes and obtaining models of their decision making, but the current state of the art assumes testers can input large quantities of operationally relevant data. When systems only make passive perceptual decisions or operate in purely virtual environments, these assumptions are typically met. However, this will not be the case for black-box, fully autonomous systems. These systems can make decisions about the information they acquire—which cannot be changed in pre-recorded passive inputs—and a major reason to obtain a decision model is to VV&A the simulation environment—preventing the valid use of a virtual environment to obtain a model. Furthermore, the current consensus is that simulation will be used to get limited safety releases for live testing. This creates a catch-22: we need data to obtain the decision model, but we need the decision model to validly obtain the data. In this talk, we provide a brief overview of this challenge and possible solutions. |
Daniel Porter Research Staff Member IDA |
Webinar |
Recording | 2020 |
Panel Finding the Human in the Loop: Evaluating Warfighters’ Ability to Employ AI Capabilities (Abstract)
Although artificial intelligence may take over tasks traditionally performed by humans or power systems that act autonomously, humans will still interact with these systems in some way. The need to ensure these interactions are fluid and effective does not disappear—if anything, it only grows with AI-enabled capabilities. These technologies introduce multiple new hazards for achieving high-quality human-system integration (HSI). Testers will need to evaluate both traditional HSI issues and these novel concerns in order to establish the trustworthiness of a system for activity in the field, and we will need to develop new T&E methods to do this. In this session, we will hear how three national security organizations are preparing for these HSI challenges, followed by a broader panel discussion on which of these problems is most pressing and which is most promising for DoD research investments. |
Dan Porter Research Staff Member Institute for Defense Analyses |
Panel |
Recording | 2021 |
Breakout Sources of Error and Bias in Experiments with Human Subjects (Abstract)
No set of experimental data is perfect, and researchers are aware that data from experimental studies invariably contain some margin of error. This is particularly true of studies with human subjects, since human behavior is vulnerable to a range of intrinsic and extrinsic influences beyond the variables being manipulated in a controlled experimental setting. Potential sources of error may lead to wide variations in the interpretation of results and the formulation of subsequent implications. This talk will discuss specific sources of error and bias in the design of experiments and present systematic ways to overcome these effects. First, some of the basic errors in general experimental design will be discussed, including human errors, systematic errors, and random errors. Second, we will explore specific types of experimental error that appear in human subjects research. Lastly, we will discuss the role of bias in experiments with human subjects. Bias is a type of systematic error that is introduced in the sampling or testing phase and encourages one outcome over another. Often, bias is the result of the intentional or unintentional influence that an experimenter may exert on the outcomes of a study. We will discuss some common sources of bias in research with human subjects, including biases in sampling, selection, response, performance execution, and measurement. The talk will conclude with a discussion of how errors and bias influence the validity of human subjects research and will explore some strategies for controlling these errors and biases. |
Poornima Madhavan | Breakout | 2019 |
|
Breakout Bayesian Adaptive Design for Conformance Testing with Bernoulli Trials (Abstract)
Co-authors: Adam L. Pintar, Blaza Toman, and Dennis Leber. A task of the Domestic Nuclear Detection Office (DNDO) is the evaluation of radiation and nuclear (rad/nuc) detection systems used to detect and identify illicit rad/nuc materials. To obtain estimated system performance measures, such as probability of detection, and to determine system acceptability, the DNDO sometimes conducts large-scale field tests of these systems at great cost. Typically, non-adaptive designs are employed, where each rad/nuc test source is presented to each system under test a predetermined and fixed number of times. This approach can lead to unnecessary cost if the system is clearly acceptable or unacceptable. In this presentation, an adaptive design with Bayesian decision-theoretic foundations is discussed as an alternative to, and contrasted with, the more common single-stage design. Although the basis of the method is Bayesian decision theory, designs may be tuned to have desirable type I and II error rates. While the focus of the presentation is a specific DNDO example, the method is widely applicable. Further, since constructing the designs is somewhat compute-intensive, software in the form of an R package will be shown and is available upon request. |
Adam Pintar NIST |
Breakout | Materials | 2016 |
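The adaptive idea in the abstract, stop presenting test sources to a system once the posterior has already settled acceptability, can be sketched with a conjugate Beta-Bernoulli model. This is an illustrative simplification of the authors' decision-theoretic method, not their actual design or R package; the specification limit, stopping thresholds, and uniform prior below are all assumptions made for the example:

```python
from math import comb

def post_prob_exceeds(c, a, b):
    """P(p > c) under a Beta(a, b) posterior with integer a, b, using the
    closed-form binomial identity for the regularized incomplete beta I_c(a, b)."""
    n = a + b - 1
    cdf = sum(comb(n, j) * c**j * (1 - c)**(n - j) for j in range(a, n + 1))
    return 1.0 - cdf

def adaptive_trial(outcomes, spec=0.9, accept=0.95, reject=0.05, a0=1, b0=1):
    """Update the Beta posterior after each Bernoulli detection trial and stop
    as soon as P(p_detect > spec) crosses an accept or reject threshold."""
    a, b = a0, b0
    for i, y in enumerate(outcomes, start=1):
        a, b = a + y, b + (1 - y)       # conjugate update: success/failure counts
        pr = post_prob_exceeds(spec, a, b)
        if pr > accept:
            return ("accept", i, pr)
        if pr < reject:
            return ("reject", i, pr)
    return ("continue", len(outcomes), pr)
```

Under these (assumed) thresholds, a single missed detection at the start is enough to reject, while a run of successes needs 28 consecutive detections before the posterior probability that p exceeds 0.9 clears 0.95, illustrating how the adaptive design spends fewer trials on clearly acceptable or unacceptable systems.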
Breakout Estimating the Distribution of an Extremum using a Peaks-Over-Threshold Model and Monte Carlo Simulation (Abstract)
Estimating the probability distribution of an extremum (maximum or minimum) over some fixed amount of time, using a single time series typically recorded for a shorter amount of time, is important in many application areas, e.g., structural design, reliability, quality, and insurance. When designing structural members, engineers are concerned with maximum wind effects, which are functions of wind speed. With respect to reliability and quality, extremes experienced during storage or transport, e.g., extreme temperatures, may substantially impact product quality, lifetime, or both. Insurance companies are of course concerned about very large claims. In this presentation, a method to estimate the distribution of an extremum using a well-known peaks-over-threshold (POT) model and Monte Carlo simulation is presented. Since extreme values have long been a subject of study, some brief history is first discussed. The POT model that underlies the approach is then laid out. A description of the algorithm follows. It leverages pressure data collected on scale models of buildings in a wind tunnel for context. Essentially, the POT model is fitted to the observed data and then used to simulate many time series of the desired length. The empirical distribution of the extrema is obtained from the simulated series. Uncertainty in the estimated distribution is quantified by a bootstrap algorithm. Finally, an R package implementing the computations is discussed. |
Adam Pintar NIST |
Breakout | Materials | 2017 |
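The simulation step the abstract describes (fit a POT model, simulate many series, take the empirical distribution of their maxima) can be sketched in a few lines. The generalized Pareto parameters and exceedance rate below are invented illustrative values; in the actual method they would be fitted to the observed series, with a bootstrap wrapped around the whole procedure to quantify uncertainty:

```python
import math
import random

def sample_gpd(xi, sigma, rng):
    """Draw one generalized Pareto exceedance by inverse-CDF sampling."""
    u = rng.random()
    if abs(xi) < 1e-12:                 # xi -> 0 limit is the exponential
        return -sigma * math.log(1.0 - u)
    return sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

def sample_poisson(lam, rng):
    """Knuth's method; adequate for the modest rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def extremum_distribution(threshold, rate, horizon, xi, sigma,
                          n_sim=2000, seed=1):
    """Simulate n_sim series of length `horizon` and return their maxima.
    `rate` is the expected number of threshold exceedances per unit time."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_sim):
        k = sample_poisson(rate * horizon, rng)
        if k == 0:
            maxima.append(threshold)    # censored: no exceedance this series
        else:
            maxima.append(threshold + max(sample_gpd(xi, sigma, rng)
                                          for _ in range(k)))
    return maxima
```

Sorting the returned maxima gives the empirical distribution of the extremum for the desired horizon, e.g. a design quantile can be read off as `sorted(maxima)[int(0.95 * n_sim)]`.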
Webinar Statistical Engineering for Service Life Prediction of Polymers (Abstract)
Economically efficient selection of materials depends on knowledge of not just the immediate properties, but also the durability of those properties. For example, when selecting building joint sealant, the initial properties are critical to successful design. These properties change over time and can result in failure in the application (buildings leak, glass falls). A NIST-led industry consortium has a research focus on developing new measurement science to determine how the properties of the sealant change with environmental exposure. In this talk, the two-decade history of the NIST-led effort will be examined through the lens of Statistical Engineering, specifically its six phases: (1) Identify the problem. (2) Provide structure. (3) Understand the context. (4) Develop a strategy. (5) Develop and execute tactics. (6) Identify and deploy a solution. Phases 5 and 6 will be the primary focus of this talk, but all of the phases will be discussed. The tactics of phase 5 were often themselves multi-month or multi-year research problems. Our approach to predicting outdoor degradation based only on accelerated weathering in the laboratory has been revised and improved many times over several years. In phase 6, because of NIST’s unique mission of promoting U.S. innovation and industrial competitiveness, the focus has been outward on technology transfer and the advancement of test standards. This may differ from industry and other government agencies, where the focus may be improvement of processes inside the organization. |
Adam Pintar Mathematical Statistician National Institute of Standards and Technology (bio)
Adam Pintar is a Mathematical Statistician at the National Institute of Standards and Technology. He applies statistical methods and thinking to diverse application areas including Physics, Chemistry, Biology, Engineering, and more recently Social Science. He received a PhD in Statistics from Iowa State University. |
Webinar |
Recording | 2020 |
Tutorial Presenting Complex Statistical Methodologies to Military Leadership (Abstract)
More often than not, the data we analyze for the military is plagued with statistical issues. Multicollinearity, small sample sizes, quasi-experimental designs, and convenience samples are some examples of what we commonly see in military data. Many of these complications can be resolved either in the design or analysis stage with appropriate statistical procedures. But, to keep our work useful, usable, and transparent to the military leadership who sponsors it, we must strike the elusive balance between explaining and justifying our design and analysis techniques and not inundating our audience with unnecessary details. It can be even more difficult to get military leadership to understand the statistical problems and solutions so well that they are enthused and supportive of our approaches. Using literature written on the subject as well as a variety of experiences, we will showcase several examples, as well as present ideas for keeping our clients actively engaged in statistical methodology discussions. |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Tutorial | Materials | 2016 |
Breakout Communicating Complex Statistical Methodologies to Leadership (Abstract)
More often than not, the data we analyze for the military is plagued with statistical issues. Multicollinearity, small sample sizes, quasi-experimental designs, and convenience samples are some examples of what we commonly see in military data. Many of these complications can be resolved either in the design or analysis stage with appropriate statistical procedures. But, to keep our work useful, usable, and transparent to the military leadership who sponsors it, we must strike the elusive balance between explaining and justifying our design and analysis techniques and not inundating our audience with unnecessary details. It can be even more difficult to get military leadership to understand the statistical problems and solutions so well that they are enthused and supportive of our approaches. Using literature written on the subject as well as a variety of experiences, we will showcase several examples, as well as present ideas for keeping our clients actively engaged in statistical methodology discussions. |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Breakout | Materials | 2017 |
Breakout Do Asymmetries in Nuclear Arsenals Matter? (Abstract)
The importance of the nuclear balance vis-à-vis our principal adversary has been the subject of intense but unresolved debate in the international security community for almost seven decades. Perspectives on this question underlie national security policies regarding potential unilateral reductions in strategic nuclear forces, the imbalance of nonstrategic nuclear weapons in Europe, nuclear crisis management, nuclear proliferation, and nuclear doctrine. The overwhelming majority of past studies of the role of the nuclear balance in nuclear crisis evolution and outcome have been qualitative and focused on the relative importance of the nuclear balance and national resolve. Some recent analyses have invoked statistical methods; however, these quantitative studies have generated intense controversy because of concerns with analytic rigor. We apply a multi-disciplinary approach that combines historical case study, international relations theory, and appropriate statistical analysis. This approach results in defensible findings on causal mechanisms that regulate nuclear crisis resolution. Such findings should inform national security policy choices facing the Trump administration. |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Breakout | Materials | 2017 |
Tutorial Quality Control and Statistical Process Control (Abstract)
… On the other hand, the need to draw causal inference about factors not under the researchers’ control calls for a specialized set of techniques developed for observational studies. The persuasiveness and adequacy of such an analysis depend in part on the ability to recover metrics from the data that would approximate those of an experiment. This tutorial will provide a brief overview of the common problems encountered with lack of randomization, as well as suggested approaches for rigorous analysis of observational studies. |
Jane Pinelis Research Staff Member IDA |
Tutorial |
| 2018 |
Panel Finding the Human in the Loop: Evaluating HSI with AI-Enabled Systems: What should you consider in a TEMP? |
Jane Pinelis Chief of the Test, Evaluation, and Assessment branch Department of Defense Joint Artificial Intelligence Center (JAIC) (bio)
Dr. Jane Pinelis is the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition, and natural language processing. Her team developed metrics at various levels of testing for AI capabilities and provided leadership with empirically based recommendations for model fielding. Additionally, she oversaw operational and human-machine teaming testing, and conducted research and outreach to establish standards in T&E of systems using artificial intelligence. Dr. Pinelis has spent over 10 years working predominantly in the area of defense and national security. She has largely focused on operational test and evaluation, both in support of the service operational testing commands and at the OSD level. In her previous job as the Test Science Lead at the Institute for Defense Analyses, she managed an interdisciplinary team of scientists supporting the Director and the Chief Scientist of the Department of Operational Test and Evaluation on integration of statistical test design and analysis and data-driven assessments into test and evaluation practice. Before that, in her assignment at the Marine Corps Operational Test and Evaluation Activity, Dr. Pinelis led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps.
Based on this experience, she co-authored a book, titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.” In addition to T&E, Dr. Pinelis has several years of experience leading analyses for the DoD in the areas of wargaming, precision medicine, warfighter mental health, nuclear non-proliferation, and military recruiting and manpower planning. Her areas of statistical expertise include design and analysis of experiments, quasi-experiments, and observational studies, causal inference, and propensity score methods. Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor. |
Panel |
Recording | 2021 |
Breakout System Level Uncertainty Quantification for Low-Boom Supersonic Flight Vehicles (Abstract)
Under current FAA regulations, civilian aircraft may not operate at supersonic speeds over land. However, over the past few decades, there have been renewed efforts to invest in technologies to mitigate sonic boom from supersonic aircraft through advances in both vehicle design and sonic boom prediction. NASA has heavily invested in tools and technologies to enable commercial supersonic flight and currently has several technical challenges related to sonic boom reduction. One specific technical challenge relates to the development of tools and methods to predict, under uncertainty, the noise on the ground generated by an aircraft flying at supersonic speeds. In attempting to predict ground noise, many factors from multiple disciplines must be considered. Further, classification and treatment of uncertainties in coupled systems, multifidelity simulations, experimental data, and community responses are all concerns in system-level analysis of sonic boom prediction. This presentation will introduce the various methodologies and techniques utilized for uncertainty quantification, with a focus on the build-up to system-level analysis. An overview of recent research activities and case studies investigating the impact of various disciplines and factors on variance in ground noise will be discussed. |
Ben Phillips | Breakout | 2018 |
|
Breakout Engineering first, Statistics second: Deploying Statistical Test Optimization (STO) for Cyber (Abstract)
Due to the immense number of potential use cases, configurations, and threat behaviors, thorough and efficient cyber testing is a significant challenge for the defense community. In this presentation, Phadke will present case studies where STO was successfully deployed for cyber testing, resulting in higher assurance, reduced schedule, and reduced testing cost. Phadke will also discuss the importance of first focusing on the engineering and science analysis and, only after that is complete, implementing statistical methods. |
Kedar Phadke | Breakout | 2019 |
|
Breakout Statistical Engineering in Practice |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. After eight years in private industry, Dr. Parker joined Langley Research Center in 1997. He holds a B.S. in Mechanical Engineering from Old Dominion University, a M.S. in Applied Physics and Computer Science from Christopher Newport University, and a M.S. and Ph.D. in Statistics from Virginia Tech. He is a licensed Professional Engineer in the Commonwealth of Virginia. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, American Society for Quality, and American Statistical Association. He currently serves as Chair-Elect of the International Statistical Engineering Association. Dr. Parker is the past-Chair of the American Society for Quality’s Publication Management Board and Editor Emeritus of the journal Quality Engineering. |
Breakout |
Recording | 2021 |
Breakout Operational Cybersecurity Test and Evaluation of Non-IP and Wireless Networks (Abstract)
Nearly all land, air, and sea maneuver systems (e.g., vehicles, ships, aircraft, and missiles) are becoming more software-reliant and blending internal communication across both Internet Protocol (IP) and non-IP buses. IP communication is widely understood among the cybersecurity community, whereas expertise and available test tools for non-IP protocols such as Controller Area Network (CAN) and MIL-STD-1553 are not as commonplace. However, a core tenet of operational cybersecurity testing is to assess all potential pathways of information exchange present on the system, to include IP and non-IP. In this presentation, we will introduce a few non-IP protocols (e.g., CAN, MIL-STD-1553) and provide a live demonstration of how to attack a CAN network using malicious message injection. We will also discuss how potential cyber effects on non-IP buses can lead to catastrophic mission effects on the target system. |
Peter Mancini Research Staff Member Institute for Defense Analyses (bio)
Peter Mancini works at the Institute for Defense Analyses, supporting the Director, Operational Test and Evaluation (DOT&E) as a Cybersecurity OT&E analyst. |
Breakout |
Recording | 2021 |
Breakout Data Visualization (Abstract)
Teams of people with many different talents and skills work together at NASA to improve our understanding of our planet Earth, our Sun and solar system, and the Universe. The Earth system is made up of complex interactions and dependencies among its solar, oceanic, terrestrial, atmospheric, and living components. Solar storms have been recognized as a cause of technological problems on Earth since the invention of the telegraph in the 19th century. Solar flares, coronal holes, and coronal mass ejections (CMEs) can emit large bursts of radiation, high-speed electrons and protons, and other highly energetic particles, which are sometimes directed at Earth. These particles and radiation can damage satellites in space, shut down power grids on Earth, cause GPS outages, and pose serious health risks to humans flying at high altitudes on Earth, as well as to astronauts in space. NASA builds and operates a fleet of satellites to study the Sun and a fleet of satellites and aircraft to observe the Earth system. NASA combines these observations with numerical computer models to understand how these systems work. Using satellite observations alongside computer models, we can combine many pieces of information to form a coherent view of Earth and the Sun. NASA research helps us understand how processes combine to affect life on Earth, including severe weather, health, changes in climate, and space weather. The Scientific Visualization Studio (SVS) wants you to learn about NASA programs through visualization. The SVS works closely with scientists in the creation of data visualizations, animations, and images in order to promote a greater understanding of Earth and space science research activities at NASA and within the academic research community supported by NASA. |
Lori Perkins NASA |
Breakout | 2017 |
|
Breakout Technical Leadership Panel-Tuesday Afternoon |
Frank Peri Deputy Director Langley Engineering Directorate |
Breakout | 2016 |
|
Keynote NASA AERONAUTICS |
Bob Pearce Deputy Associate Administrator for Strategy, Aeronautics Research Mission Directorate, NASA (bio)
Mr. Pearce is responsible for leading aeronautics research mission strategic planning to guide the conduct of the agency’s aeronautics research and technology programs, as well as leading ARMD portfolio planning and assessments, mission directorate budget development and approval processes, and review and evaluation of all of NASA’s aeronautics research mission programs for strategic progress and relevance. Pearce is also currently acting director for ARMD’s Airspace Operations and Safety Program, and responsible for the overall planning, management and evaluation of foundational air traffic management and operational safety research. Previously he was director for strategy, architecture and analysis for ARMD, responsible for establishing a strategic systems analysis capability focused on understanding the system-level impacts of NASA’s programs, the potential for integrated solutions, and the development of high-leverage options for new investment and partnership. From 2003 until July 2010, Pearce was the deputy director of the FAA-led Next Generation Air Transportation System (NextGen) Joint Planning and Development Office (JPDO). The JPDO was an interagency office tasked with developing and facilitating the implementation of a national plan to transform the air transportation system to meet the long-term transportation needs of the nation. Prior to the JPDO, Pearce held various strategic and program management positions within NASA. In the mid-1990s he led the development of key national policy documents including the National Science and Technology Council’s “Goals for a National Partnership in Aeronautics Research and Technology” and the “Transportation Science and Technology Strategy.” These two documents provided a substantial basis for NASA’s expanded investment in aviation safety and airspace systems. 
He began his career as a design engineer at the Grumman Corporation, working on such projects as the Navy’s F-14 Tomcat fighter and DARPA’s X-29 Forward Swept Wing Demonstrator. Pearce also has experience from the Department of Transportation’s Volpe National Transportation Systems Center, where he made contributions in the area of advanced concepts for intercity transportation systems. Pearce has received NASA’s Exceptional Service Medal for sustained excellence in planning and advocating innovative aeronautics programs in conjunction with the White House and other federal agencies. He received NASA’s Exceptional Achievement Medal for outstanding leadership of the JPDO in support of the transformation of the nation’s air transportation system. Pearce has also received NASA’s Cooperative External Achievement Award and several Exceptional Performance and Group Achievement Awards. He earned a Bachelor of Science degree in mechanical and aerospace engineering from Syracuse University, and a Master of Science degree in technology and policy from the Massachusetts Institute of Technology. |
Keynote | Materials | 2018 |
Breakout Statistical Engineering in Practice (Abstract)
Problems faced in defense and aerospace often go well beyond textbook problems presented in academic settings. Textbook problems are typically very well defined, and can be solved through application of one “correct” tool. Conversely, many defense and aerospace problems are ill-defined, at least initially, and require an overall strategy to attack, one involving multiple tools and often multiple disciplines. Statistical engineering is an approach recently developed for addressing large, complex, unstructured problems, particularly those for which data can be effectively utilized. This session will present a brief overview of statistical engineering, and how it can be applied to engineer solutions to complex problems. Following this introduction, two case studies of statistical engineering will be presented, to illustrate the concepts. |
Angie Patterson Chief Consulting Engineer GE Aviation |
Breakout |
Recording | 2021 |
Breakout Physics-Informed Deep Learning for Modeling and Simulation under Uncertainty (Abstract)
Certification by analysis (CBA) involves the supplementation of expensive physical testing with modeling and simulation. In high-risk fields such as defense and aerospace, it is critical that these models accurately represent the real world, and thus they must be verified and validated and must provide measures of uncertainty. While machine learning (ML) algorithms such as deep neural networks have seen significant success in low-risk sectors, they are typically opaque, difficult to interpret, and often fail to meet these stringent requirements. Recently, a Department of Energy (DOE) report was released on the concept of scientific machine learning (SML) [1] with the aim of generally improving confidence in ML and enabling broader use in the scientific and engineering communities. The report identified three critical attributes that ML algorithms should possess: domain-awareness, interpretability, and robustness. Recent advances in physics-informed neural networks (PINNs) are promising in that they can provide both domain awareness and a degree of interpretability [2, 3, 4] by using governing partial differential equations as constraints during training. In this way, PINNs output physically admissible, albeit deterministic, solutions. Another noteworthy deep learning algorithm is the generative adversarial network (GAN), which can learn probability distributions [5] and provide robustness through uncertainty quantification. A limited number of works have recently demonstrated success by combining these two methods into what is referred to as a physics-informed GAN, or PIGAN [6, 7]. The PIGAN has the ability to produce physically admissible, non-deterministic predictions as well as solve non-deterministic inverse problems, potentially meeting the goals of domain awareness, interpretability, and robustness. This talk will present an introduction to PIGANs as well as an example of current NASA research implementing these networks.
REFERENCES
[1] Nathan Baker, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop report on basic research needs for scientific machine learning: Core technologies for artificial intelligence. Technical report, USDOE Office of Science (SC), Washington, DC (United States), 2019.
[2] Maziar Raissi, Paris Perdikaris, and George Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
[3] Alexandre Tartakovsky, Carlos Ortiz Marrero, Paris Perdikaris, Guzel Tartakovsky, and David Barajas-Solano. Learning parameters and constitutive relationships with physics informed deep neural networks. arXiv preprint arXiv:1808.03398, 2018.
[4] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 807:155-166, 2016.
[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[6] Liu Yang, Dongkun Zhang, and George Karniadakis. Physics-informed generative adversarial networks for stochastic differential equations. arXiv preprint arXiv:1811.02033, 2018.
[7] Yibo Yang and Paris Perdikaris. Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics, 394:136-152, 2019. |
Patrick Leser Aerospace Technologist NASA Langley Research Center ![]() (bio)
Dr. Patrick Leser is a researcher in the Durability, Damage Tolerance, and Reliability Branch (DDTRB) at NASA Langley Research Center (LaRC) in Hampton, VA. After receiving a B.S. in Aerospace Engineering from North Carolina State University (NCSU), he became a NASA civil servant under the Pathways Intern Employment Program in 2014. In 2017, he received his PhD in Aerospace Engineering, also from NCSU. Dr. Leser’s research focuses primarily on uncertainty quantification (UQ), model calibration, and fatigue crack growth in metallic materials. Dr. Leser has applied these interests to various topics including structural health management, digital twin, additive manufacturing, battery health management and various NASA Engineering Safety Center (NESC) assessments, primarily focusing on the fatigue life of composite overwrapped pressure vessels (COPVs). A primary focus of his work has been the development of computationally-efficient UQ methods that utilize fields such as high performance computing and machine learning. |
Breakout |
![]() | 2021 |
Short Course Introduction to Neural Networks for Deep Learning with TensorFlow (Abstract)
This mini-tutorial session discusses the practical application of neural networks from a layperson’s perspective and walks through a hands-on case study in which we build, train, and analyze several neural network models using TensorFlow. The course reviews the basics of neural networks and touches on more complex neural network architecture variants for deep learning applications. Deep learning techniques are becoming more prevalent throughout the development of autonomous and AI-enabled systems, and this session will give students the foundational intuition needed to understand these systems. |
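As a minimal sketch of the build-train-analyze loop such a case study covers, using TensorFlow’s Keras API (the toy data, architecture, and hyperparameters below are illustrative assumptions, not the course’s actual materials):

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 200 samples, 4 features, binary label.
rng = np.random.default_rng(0)
X = rng.random((200, 4)).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# Build: a small fully connected network with a sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Train: binary cross-entropy loss with the Adam optimizer.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Analyze: the network outputs one probability in [0, 1] per sample.
preds = model.predict(X, verbose=0)
```

The same pattern (stack layers, compile with a loss, fit, predict) carries over to the deeper architecture variants the course touches on.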
Roshan Patel Data Scientist US Army CCDC Armaments Center ![]() (bio)
Mr. Roshan Patel is a systems engineer and data scientist at the CCDC Armaments Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master’s in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. At Rutgers, Mr. Patel was a part-time lecturer for systems programming and data science seminars. He is the current AI lead for the Systems Engineering Directorate at the CCDC Armaments Center. |
Short Course |
Materials
Recording | 2021 |
Breakout Machine Learning Reveals that the Russian IRA’s Twitter Topic Patterns Evolved over Time (Abstract)
Introduction: Information Operations (IO) are a key component of our adversaries’ strategy to undermine U.S. military power without escalating to more traditional (and more easily identifiable) military strikes. Social media activity is one method of IO. In 2017 and 2018, Twitter suspended thousands of accounts likely belonging to the Kremlin-backed Internet Research Agency (IRA). Clemson University archived a large subset of these tweets (2.9M tweets posted by over 2800 IRA accounts), tagged each tweet with metadata (date, time, language, supposed geographical region, number of followers, etc.), and published this dataset on the polling aggregation website FiveThirtyEight. Methods: Machine Learning researchers at the Institute for Defense Analyses (IDA) downloaded Clemson’s dataset from FiveThirtyEight and analyzed both the content of the IRA tweets and their accompanying metadata. Using unsupervised learning techniques (Latent Dirichlet Allocation), IDA researchers mapped out how the patterns in the IRA’s tweet topics evolved over time. Results: Results showed that the IRA started tweeting in/before February 2012, but ramped up significantly in May/June 2015. Most tweets were in English, and most likely targeted the U.S. The IRA created new accounts after the first Twitter suspension in November 2017, with each new account quickly establishing an audience. Between at least January 2015 and October 2017, the IRA’s English tweet topics evolved over time, becoming tighter, more specific, more negative, and more polarizing, with the final pattern emerging in late 2015. Discussion: The United States government must expect that our adversaries’ social media activity will continue to evolve over time. Efficient processing pipelines are needed for semi-automated analyses of time-evolving social media activity. |
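The unsupervised topic-modeling step the abstract names can be sketched as follows; the mini-corpus and the scikit-learn implementation are stand-ins chosen for illustration, not the Clemson dataset or IDA’s actual pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical mini-corpus standing in for the 2.9M-tweet Clemson dataset.
docs = [
    "election vote candidate poll ballot",
    "ballot fraud election rigged vote",
    "candidate debate poll election news",
    "team game score win season",
    "player team season game coach",
    "coach win score player game",
]

# Bag-of-words counts, then Latent Dirichlet Allocation with two topics.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # one topic distribution per document
```

Tracking how each document’s (here, each tweet’s) topic distribution shifts across time windows is what lets an analysis show topics becoming tighter and more polarized.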
Emily Parrish Research Associate Institute for Defense Analyses ![]() (bio)
Emily Parrish is a Research Associate in the Science and Technology Division at the Institute for Defense Analyses (IDA). Her research at IDA ranges from model verification and validation and countermine system developmental testing to using natural language processing (NLP) to interpret collections of data of interest to DoD sponsors. Emily graduated from the College of William and Mary in 2015 with a B.S. in chemistry and is currently earning her M.S. in data analytics at The George Washington University, with a focus on machine learning and NLP. |
Breakout |
![]() | 2021 |
Keynote Wednesday Keynote Speaker I |
Peter Parker Team Lead, Advanced Measurement Systems NASA ![]() (bio)
Dr. Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods, including statistical design of experiments, response surface methodology, and measurement system characterization. His expertise is in collaboratively integrating research objectives, measurement sciences, test design, and statistical methods to produce actionable knowledge for aerospace research and development. He holds a B.S. in Mechanical Engineering, an M.S. in Applied Physics and Computer Science, and an M.S. and Ph.D. in Statistics from Virginia Tech. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, the American Society for Quality, and the American Statistical Association. He currently chairs the American Society for Quality’s Publication Management Board and previously served as Editor-in-Chief of the journal Quality Engineering. |
Keynote |
![]() | 2019 |
Keynote Highly Effective Statistical Collaboration: The Art and the Science |
Peter Parker Team Lead, Advanced Measurement Systems NASA ![]() (bio)
Dr. Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods, including statistical design of experiments, response surface methodology, and measurement system characterization. His expertise is in collaboratively integrating research objectives, measurement sciences, test design, and statistical methods to produce actionable knowledge for aerospace research and development. He holds a B.S. in Mechanical Engineering, an M.S. in Applied Physics and Computer Science, and an M.S. and Ph.D. in Statistics from Virginia Tech. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, the American Society for Quality, and the American Statistical Association. He currently chairs the American Society for Quality’s Publication Management Board and previously served as Editor-in-Chief of the journal Quality Engineering. |
Keynote |
![]() | 2018 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley ![]() (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. |
Panel |
Recording | 2021 |
Breakout Sources of Error and Bias in Experiments with Human Subjects |
Poornima Madhavan | Breakout | 2019 |
|
Breakout Bayesian Adaptive Design for Conformance Testing with Bernoulli Trials |
Adam Pintar NIST |
Breakout | Materials | 2016 |
Breakout Estimating the Distribution of an Extremum using a Peaks-Over-Threshold Model and Monte Carlo Simulation |
Adam Pintar NIST |
Breakout | Materials | 2017 |
Webinar Statistical Engineering for Service Life Prediction of Polymers |
Adam Pintar Mathematical Statistician National Institute of Standards and Technology ![]() |
Webinar |
![]() Recording | 2020 |
Tutorial Presenting Complex Statistical Methodologies to Military Leadership |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Tutorial | Materials | 2016 |
Breakout Communicating Complex Statistical Methodologies to Leadership |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Breakout | Materials | 2017 |
Breakout Do Asymmetries in Nuclear Arsenals Matter? |
Jane Pinelis Johns Hopkins University Applied Physics Lab |
Breakout | Materials | 2017 |
Tutorial Quality Control and Statistical Process Control |
Jane Pinelis Research Staff Member IDA |
Tutorial |
![]() | 2018 |
Panel Finding the Human in the Loop: Evaluating HSI with AI-Enabled Systems: What should you consider in a TEMP? |
Jane Pinelis Chief of the Test, Evaluation, and Assessment Branch, Department of Defense Joint Artificial Intelligence Center (JAIC) ![]() |
Panel |
![]() Recording | 2021 |
Breakout System Level Uncertainty Quantification for Low-Boom Supersonic Flight Vehicles |
Ben Phillips | Breakout | 2018 |
|
Breakout Engineering first, Statistics second: Deploying Statistical Test Optimization (STO) for Cyber |
Kedar Phadke | Breakout | 2019 |
|
Breakout Statistical Engineering in Practice |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley ![]() |
Breakout |
Recording | 2021 |
Breakout Operational Cybersecurity Test and Evaluation of Non-IP and Wireless Networks |
Peter Mancini Research Staff Member Institute for Defense Analyses ![]() |
Breakout |
![]() Recording | 2021 |
Breakout Data Visualization |
Lori Perkins NASA |
Breakout | 2017 |
|
Breakout Technical Leadership Panel-Tuesday Afternoon |
Frank Peri Deputy Director Langley Engineering Directorate |
Breakout | 2016 |
|
Keynote NASA AERONAUTICS |
Bob Pearce Deputy Associate Administrator for Strategy, Aeronautics Research Mission Directorate, NASA ![]() |
Keynote | Materials | 2018 |
Breakout Statistical Engineering in Practice |
Angie Patterson Chief Consulting Engineer GE Aviation |
Breakout |
Recording | 2021 |