Session Title | Speaker | Type | Materials | Year |
---|---|---|---|---|
Breakout Prior Formulation in a Bayesian Analysis of Biomechanical Data (Abstract)
Biomechanical experiments investigating the failure modes of biological tissue require a significant investment of time and money due to the complexity of procuring, preparing, and testing tissue. Furthermore, the potentially destructive nature of these tests makes repeated testing infeasible. This leads to experiments with notably small sample sizes in light of the high variance common to biological material. When the goal is to estimate parameters for an analytic artifact such as an injury risk curve (IRC), which relates an input quantity to a probability of injury, small sample sizes result in undesirable uncertainty. One way to ameliorate this effect is through a Bayesian approach, incorporating expert opinion and previous experimental data into a prior distribution. This has the advantage of leveraging the information contained in expert opinion and related experimental data to obtain faster convergence to an appropriate parameter estimate with a desired certainty threshold. We explore several ways of implementing Bayesian methods in a biomechanical setting, including permutations on the use of expert knowledge and prior experimental data. Specifically, we begin with a set of experimental data from which we generate a reference IRC. We then elicit expert predictions of the 10th and 90th quantiles of injury and use them to formulate both uniform and normal prior distributions. We also generate priors from qualitatively similar experimental data, both directly on the IRC parameters and on the injury quantiles, and explore the use of weighting schemes to assign more influence to better datasets. By adjusting the standard deviation and shifting the mean, we can create priors of variable quality. Using a subset of the experimental data in conjunction with our derived priors, we then re-fit the IRC and compare it to the reference curve. For all methods we will measure the certainty, speed of convergence, and accuracy relative to the reference IRC, with the aim of recommending a best-practices approach for the application of Bayesian methods in this setting. Ultimately, an optimized approach for handling small sample sizes with Bayesian methods has the potential to increase the information content of individual biomechanical experiments by integrating them into the context of expert knowledge and prior experimentation. |
Amanda French Data Scientist Johns Hopkins University Applied Physics Laboratory (bio)
Amanda French is a data scientist at Johns Hopkins University Applied Physics Laboratory. She obtained her PhD in mathematics from UNC Chapel Hill and went on to perform data science for a variety of government agencies, including the Department of State, Military Health System, and Department of Defense. Her expertise includes statistics, machine learning, and experimental design. |
Breakout |
Recording | 2021 |
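The abstract above describes turning elicited injury quantiles into a prior. As one concrete illustration, the following sketch (the quantiles, data, and prior standard deviations are all hypothetical; this is not the authors' implementation) converts expert 10th/90th-quantile predictions into a normal prior on the two parameters of a logistic IRC and computes a MAP re-fit with scipy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

# Elicited expert quantiles (hypothetical): 10% injury risk at 40 units,
# 90% injury risk at 80 units of the input quantity.
x10, x90 = 40.0, 80.0
b0 = (logit(0.9) - logit(0.1)) / (x90 - x10)   # prior mean for the slope
a0 = logit(0.1) - b0 * x10                      # prior mean for the intercept

# Small experimental sample (hypothetical): input levels and injury outcomes.
x = np.array([35., 45., 55., 60., 70., 75., 85.])
y = np.array([0, 0, 0, 1, 1, 0, 1])

def neg_log_posterior(theta, prior_sd=(5.0, 0.5)):
    a, b = theta
    p = expit(a + b * x)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    logprior = (-0.5 * ((a - a0) / prior_sd[0]) ** 2
                - 0.5 * ((b - b0) / prior_sd[1]) ** 2)
    return -(loglik + logprior)

map_fit = minimize(neg_log_posterior, x0=[a0, b0], method="Nelder-Mead")
a_hat, b_hat = map_fit.x
print(f"MAP IRC: P(injury|x) = expit({a_hat:.2f} + {b_hat:.3f} x)")
```

A uniform-prior variant would replace the normal log-prior with a constant penalty over the elicited interval; shifting `a0`, `b0` or widening `prior_sd` mimics the "priors of variable quality" the abstract describes.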
Tutorial Pseudo-Exhaustive Testing – Part 1 (Abstract)
Exhaustive testing is infeasible when testing complex engineered systems. Fortunately, a combinatorial testing approach can be almost as effective as exhaustive testing but at dramatically lower cost. The effectiveness of this approach stems from the underlying mathematical construct on which it is based: the covering array. This tutorial is divided into two sections. Section 1 introduces covering arrays and a few covering array metrics, then shows how covering arrays are used in combinatorial testing methodologies. Section 2 focuses on practical applications of combinatorial testing, including a commercial aviation example, an example that focuses on a widely used machine learning library, and other examples that illustrate how common testing challenges can be addressed. In the process of working through these examples, an easy-to-use tool for generating covering arrays will be demonstrated. |
Ryan Lekivetz Research Statistician Developer SAS Institute (bio)
Ryan Lekivetz is a Principal Research Statistician Developer for the JMP Division of SAS, where he implements features for the Design of Experiments platforms in JMP software. |
Tutorial |
Recording | 2021 |
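To make the covering-array construct concrete, here is a small greedy construction of a strength-2 (pairwise) covering array; it is an illustrative sketch, not the tool demonstrated in the tutorial.

```python
from itertools import combinations, product

def pairwise_covering_array(levels):
    """Greedy construction of a strength-2 covering array.
    `levels` gives the number of levels for each factor."""
    k = len(levels)
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a in range(levels[i]) for b in range(levels[j])}
    rows = []
    while uncovered:
        best_row, best_gain = None, -1
        # Exhaustively scan candidate rows; fine for small examples.
        for cand in product(*(range(n) for n in levels)):
            gain = sum((i, j, cand[i], cand[j]) in uncovered
                       for i, j in combinations(range(k), 2))
            if gain > best_gain:
                best_row, best_gain = cand, gain
        rows.append(best_row)
        uncovered -= {(i, j, best_row[i], best_row[j])
                      for i, j in combinations(range(k), 2)}
    return rows

# Four 2-level factors: exhaustive testing needs 16 runs, but every
# pair of factor settings is covered in far fewer rows.
for row in pairwise_covering_array([2, 2, 2, 2]):
    print(row)
```

Production covering-array generators use far more sophisticated constructions, but the coverage guarantee being bought is exactly the one checked by the `uncovered` set here.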
Tutorial Pseudo-Exhaustive Testing – Part 2 |
Joseph Morgan Principal Research Statistician SAS Institute (bio)
Joseph Morgan is a Principal Research Statistician/Developer in the JMP Division of SAS Institute Inc., where he implements features for the Design of Experiments platforms in JMP software. His research interests include combinatorial testing, empirical software engineering, and algebraic design theory. |
Tutorial |
Recording | 2021 |
Breakout Spatio-Temporal Modeling of Pandemics (Abstract)
The spread of COVID-19 across the United States provides an interesting case study in the modeling of spatio-temporal data. In this breakout session we will provide an overview of commonly used spatio-temporal models and demonstrate how Bayesian inference can be performed using both exact and approximate inferential techniques. Using COVID data, we will demonstrate visualization techniques in R and introduce “off-the-shelf” spatio-temporal models. We will introduce participants to the Integrated Nested Laplace Approximation (INLA) methodology and show how results from this technique compare to those from Markov Chain Monte Carlo (MCMC) techniques. Finally, we will demonstrate the shortfalls of “off-the-shelf” models, show how epidemiologically motivated partial differential equations can be used to generate spatio-temporal models, and discuss inferential issues that arise when we move away from common models. |
Nicholas Clark Assistant Professor West Point (bio)
LTC Nicholas Clark is an Academy Professor at the United States Military Academy where he heads the Center for Data Analysis and Statistics. Nick received his PhD in Statistics from Iowa State University and his research interests include spatio-temporal statistics and Bayesian methodology. |
Breakout |
| 2021 |
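For a flavor of the “off-the-shelf” structure discussed above, the sketch below (in Python rather than the R used in the session; the lattice, adjacency, and parameter values are invented for illustration) simulates case counts from a separable model with a proper conditional autoregressive (CAR) spatial effect and an AR(1) temporal effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "map": a 4 x 4 grid of regions with rook adjacency.
n_side, n = 4, 16
W = np.zeros((n, n))
for r in range(n_side):
    for c in range(n_side):
        i = r * n_side + c
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < n_side and 0 <= cc < n_side:
                W[i, rr * n_side + cc] = 1

# Proper CAR precision Q = tau * (D - rho * W); rho < 1 keeps Q positive definite.
tau, rho = 2.0, 0.9
Q = tau * (np.diag(W.sum(axis=1)) - rho * W)
spatial = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q))

# AR(1) temporal effect, then Poisson counts: a separable space-time structure.
T, phi = 10, 0.8
temporal = np.zeros(T)
for t in range(1, T):
    temporal[t] = phi * temporal[t - 1] + rng.normal(scale=0.3)
log_rate = 1.0 + spatial[:, None] + temporal[None, :]
counts = rng.poisson(np.exp(log_rate))   # n regions x T weeks of case counts
print(counts.shape, counts[:2])
```

Fitting such a model is where the INLA-versus-MCMC comparison in the session comes in; the simulation above only shows the generative structure being fit.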
Tutorial Statistical Approaches to V&V and Adaptive Sampling in M&S – Part 1 (Abstract)
Leadership has placed a high premium on analytically defensible results for M&S verification and validation. This mini-tutorial will provide a quick overview of relevant standard methods to establish equivalency in mean, variance, and distribution shape, such as Two One-Sided Tests (TOST), K-S tests, Fisher’s Exact test, and Fisher’s Combined Probability test. The focus will be on more advanced methods such as testing the equality of model parameters between statistical emulators and live tests (Hotelling T2, loglinear variance), equivalence of output curves (functional data analysis), and bootstrap methods. Additionally, we introduce a new method for near real-time adaptive sampling that places the next set of M&S runs at boundary regions of high gradient in the responses to more efficiently characterize complex surfaces such as those seen in autonomous systems. |
Jim Wisnowski Principal Consultant Adsurgo LLC (bio)
Jim Wisnowski is Principal Consultant and Co-founder at Adsurgo, LLC. He currently provides applied statistics training and consulting services across numerous industries and government departments with particular emphasis on Design of Experiments and Test & Evaluation. Previously, he was a commander and engineer in the Air Force, statistics professor at the US Air Force Academy, and Joint Staff officer. He received his PhD in Industrial Engineering from Arizona State University. |
Tutorial |
Recording | 2021 |
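A minimal sketch of the TOST procedure mentioned above, assuming a classical pooled-variance two-sample setup and hypothetical equivalence bounds of plus or minus 0.5; equivalence is concluded only when both one-sided tests reject.

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high):
    """Two one-sided tests for equivalence of means within [low, high]."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Pooled-variance standard error (classical two-sample t setup).
    sp2 = ((nx - 1) * np.var(x, ddof=1)
           + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = 1 - stats.t.cdf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)      # H0: diff >= high
    return max(p_lower, p_upper)   # both must reject to claim equivalence

rng = np.random.default_rng(7)
live = rng.normal(10.0, 1.0, 30)    # hypothetical live-test responses
sim = rng.normal(10.2, 1.0, 200)    # hypothetical M&S responses
p = tost_two_sample(live, sim, low=-0.5, high=0.5)
print(f"TOST p-value: {p:.4f} (declare equivalence at 0.05 if p < 0.05)")
```

Note the reversed logic relative to an ordinary t-test: here a small p-value is evidence that the live and M&S means agree to within the stated bounds.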
Tutorial Statistical Approaches to V&V and Adaptive Sampling in M&S – Part 2 |
Jim Simpson Principal JK Analytics (bio)
Jim Simpson is the Principal of JK Analytics, where he currently coaches and trains across various industries and organizations. He has blended practical application and industrial statistics leadership with academic experience focused on researching new methods, teaching excellence, and the development and delivery of statistics courseware for graduate and professional education. Previously, he led the Air Force’s largest test wing as Chief Operations Analyst. He has served as full-time faculty at the Air Force Academy and Florida State University, and is now an Adjunct Professor at the Air Force Institute of Technology (AFIT) and the University of Florida. He received his PhD in Industrial Engineering from Arizona State University. |
Tutorial |
| 2021 |
Breakout Statistical Engineering in Practice |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. After eight years in private industry, Dr. Parker joined Langley Research Center in 1997. He holds a B.S. in Mechanical Engineering from Old Dominion University, an M.S. in Applied Physics and Computer Science from Christopher Newport University, and an M.S. and Ph.D. in Statistics from Virginia Tech. He is a licensed Professional Engineer in the Commonwealth of Virginia. Dr. Parker is a senior member of the American Institute of Aeronautics and Astronautics, American Society for Quality, and American Statistical Association. He currently serves as Chair-Elect of the International Statistical Engineering Association. Dr. Parker is the past Chair of the American Society for Quality’s Publication Management Board and Editor Emeritus of the journal Quality Engineering. |
Breakout |
Recording | 2021 |
Breakout Statistical Engineering in Practice |
Alex Varbanov Principal Scientist Procter and Gamble (bio)
Dr. Alex Varbanov is a Principal Scientist at Procter & Gamble. He was born in Bulgaria and graduated with a Ph.D. from the School of Statistics, University of Minnesota in 1999. Dr. Varbanov has been with P&G R&D for 21 years and provides statistical support for a variety of company business units (e.g., Fabric & Home Care) and brands (e.g., Tide, Swiffer). He performs experimental design, statistical analysis, and advanced modeling for many different areas including genomics, consumer research, and product claims. He is currently developing scent character end-to-end models for perfume optimization in P&G consumer products. |
Breakout |
Recording | 2021 |
Breakout Statistical Engineering in Practice (Abstract)
Problems faced in defense and aerospace often go well beyond the textbook problems presented in academic settings. Textbook problems are typically very well defined and can be solved through the application of one “correct” tool. Conversely, many defense and aerospace problems are ill-defined, at least initially, and require an overall strategy of attack, one involving multiple tools and often multiple disciplines. Statistical engineering is an approach recently developed for addressing large, complex, unstructured problems, particularly those for which data can be effectively utilized. This session will present a brief overview of statistical engineering and how it can be applied to engineer solutions to complex problems. Following this introduction, two case studies of statistical engineering will be presented to illustrate the concepts. |
Roger Hoerl Associate Professor of Statistics Union College (bio)
Dr. Roger W. Hoerl is the Brate-Peschel Associate Professor of Statistics at Union College in Schenectady, NY. Previously, he led the Applied Statistics Lab at GE Global Research. While at GE, Dr. Hoerl led a team of statisticians, applied mathematicians, and computational financial analysts who worked on some of GE’s most challenging research problems, such as developing personalized medicine protocols, enhancing the reliability of aircraft engines, and managing risk for a half-trillion-dollar portfolio. Dr. Hoerl has been named a Fellow of the American Statistical Association and the American Society for Quality, and has been elected to the International Statistical Institute and the International Academy for Quality. He has received the Brumbaugh and Hunter Awards, as well as the Shewhart Medal, from the American Society for Quality, and the Founders Award and Deming Lectureship Award from the American Statistical Association. While at GE Global Research, he received the Coolidge Fellowship, honoring one scientist a year from among the four global GE Research and Development sites for lifetime technical achievement. His book with Ron Snee, Statistical Thinking: Improving Business Performance, now in its 3rd edition, was called “the most practical introductory statistics textbook ever published in a business context” by the journal Technometrics. |
Breakout |
Recording | 2021 |
Breakout Statistical Engineering in Practice (Abstract)
Problems faced in defense and aerospace often go well beyond the textbook problems presented in academic settings. Textbook problems are typically very well defined and can be solved through the application of one “correct” tool. Conversely, many defense and aerospace problems are ill-defined, at least initially, and require an overall strategy of attack, one involving multiple tools and often multiple disciplines. Statistical engineering is an approach recently developed for addressing large, complex, unstructured problems, particularly those for which data can be effectively utilized. This session will present a brief overview of statistical engineering and how it can be applied to engineer solutions to complex problems. Following this introduction, two case studies of statistical engineering will be presented to illustrate the concepts. |
Angie Patterson Chief Consulting Engineer GE Aviation |
Breakout |
Recording | 2021 |
Breakout Surrogate Models and Sampling Plans for Multi-fidelity Aerodynamic Performance Databases (Abstract)
Generating aerodynamic coefficients can be computationally expensive, especially for viscous CFD solvers in which multiple complex models are iteratively solved. When filling large design spaces, utilizing only a high-accuracy viscous CFD solver can be infeasible. We apply state-of-the-art methods for the design and analysis of computer experiments to efficiently develop an emulator for high-fidelity simulations. First, we apply a cokriging model to leverage information from fast low-fidelity simulations to improve predictions with more expensive high-fidelity simulations. Combining space-filling designs with a Gaussian process model-based sequential sampling criterion allows us to efficiently generate sample points and limit the number of costly simulations needed to achieve the desired model accuracy. We demonstrate the effectiveness of these methods with an aerodynamic simulation study using a conic shape geometry. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Release Number: LLNL-ABS-818163 |
Kevin Quinlan Applied Statistician Lawrence Livermore National Laboratory
Breakout |
| 2021 |
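The sketch below illustrates the model-based sequential sampling step described above on a toy one-dimensional response (the simulator stand-in, kernel, and run budget are invented, and the cokriging step is omitted): a Gaussian process is refit after each run, and the next point is placed where predictive uncertainty is largest.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for an expensive high-fidelity simulator (hypothetical 1-D response).
def hifi(x):
    return np.sin(8 * x) + 0.3 * x

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 4).reshape(-1, 1)            # small space-filling start
candidates = np.linspace(0, 1, 201).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
for _ in range(8):
    gp.fit(X, hifi(X).ravel())
    _, sd = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(sd)]             # highest predictive uncertainty
    X = np.vstack([X, x_next])                     # "run" the simulator there next

print("Design points chosen:", np.sort(X.ravel()).round(3))
```

In the multi-fidelity setting of the abstract, the same criterion would be applied to the cokriging predictor, so that cheap low-fidelity runs shrink the uncertainty before any costly high-fidelity run is placed.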
Roundtable Test Design and Analysis for Modeling & Simulation Validation (Abstract)
System evaluations increasingly rely on modeling and simulation (M&S) to supplement live testing. It is thus crucial to thoroughly validate these M&S tools using rigorous data collection and analysis strategies. At this roundtable, we will identify and discuss some of the core challenges currently associated with implementing M&S validation for T&E. First, appropriate design of experiments (DOE) for M&S is not universally adopted across the T&E community. This arises in part due to limited knowledge of gold-standard techniques from academic research (e.g., space-filling designs; Gaussian process emulators) as well as lack of expertise with the requisite software tools. Second, T&E poses unique demands in testing, such as extreme constraints on live testing conditions and reliance on binary outcomes. There is no consensus on how to incorporate these needs into the existing academic framework for M&S. Finally, some practical considerations lack clear solutions yet have direct consequences for design choice. In particular, we may discuss the following: (1) sample size determination when calculating power and confidence is not applicable, and (2) non-deterministic M&S output with high levels of noise, which may benefit from replication samples as in classical DOE. |
Kelly Avery Research Staff Member Institute for Defense Analyses (bio)
Kelly M. Avery is a Research Staff Member at the Institute for Defense Analyses. She supports the Director, Operational Test and Evaluation (DOT&E) on the use of statistics in test & evaluation and modeling & simulation, and has designed tests and conducted statistical analyses for several major defense programs including tactical aircraft, missile systems, radars, satellite systems, and computer-based intelligence systems. Her areas of expertise include statistical modeling, design of experiments, modeling & simulation validation, and statistical process control. Dr. Avery has a B.S. in Statistics, an M.S. in Applied Statistics, and a Ph.D. in Statistics, all from Florida State University. |
Roundtable | 2021 |
|
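As an example of the space-filling designs mentioned in the roundtable abstract above, the following sketch draws a Latin hypercube over three M&S input factors with scipy; the factor names and ranges are invented for illustration.

```python
from scipy.stats import qmc

# A 20-run Latin hypercube over three M&S input factors (hypothetical ranges).
sampler = qmc.LatinHypercube(d=3, seed=11)
unit = sampler.random(n=20)                  # points in the unit cube [0, 1)^3
design = qmc.scale(unit,
                   l_bounds=[0, 100, 0.1],   # e.g., bearing, range, clutter level
                   u_bounds=[360, 5000, 0.9])
print(design[:5].round(2))
```

Unlike a gridded or factorial plan, every one-dimensional projection of this design is evenly spread, which is what makes it a good base design for fitting the Gaussian process emulators the abstract mentions.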
Panel The Keys to Successful Collaborations during Test and Evaluation: Moderator (Abstract)
The defense industry faces increasingly complex systems in test and evaluation (T&E) that require interdisciplinary teams to successfully plan testing. A critical aspect of test planning is successful collaboration among T&E experts, subject matter experts, program leadership, statisticians, and others. The panelists, drawing on their own experiences as consulting statisticians, will discuss elements that lead to successful collaborations, barriers encountered during collaboration, and recommendations to improve collaborations during T&E planning. |
Christine Anderson-Cook Los Alamos National Lab
Panel |
Recording | 2021 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Sarah Burke STAT Expert STAT Center of Excellence (bio)
Dr. Sarah Burke is a scientific test and analysis techniques (STAT) Expert for the STAT Center of Excellence. She works with acquisition programs in the Air Force, Army, and Navy to improve test efficiency, plan tests effectively, and analyze the resulting test data to inform decisions on system development. She received her M.S. in Statistics and Ph.D. in Industrial Engineering from Arizona State University. |
Panel |
Recording | 2021 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
John Haman Research Staff Member Institute for Defense Analyses (bio)
Dr. John Haman is a statistician at the Institute for Defense Analyses, where he develops methods and tools for analyzing test data. He has worked with a variety of Army, Navy, and Air Force systems, including counter-UAS and electronic warfare systems. Currently, John is supporting the Joint Artificial Intelligence Center. |
Panel |
Recording | 2021 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Peter Parker Team Lead for Advanced Measurement Systems NASA Langley (bio)
Dr. Peter Parker is Team Lead for Advanced Measurement Systems at the National Aeronautics and Space Administration’s Langley Research Center in Hampton, Virginia. He serves as an Agency-wide statistical expert across all of NASA’s mission directorates of Exploration, Aeronautics, and Science to infuse statistical thinking, engineering, and methods. His expertise is in collaboratively integrating research objectives, measurement sciences, modeling and simulation, and test design to produce actionable knowledge that supports rigorous decision-making for aerospace research and development. |
Panel |
Recording | 2021 |
Panel The Keys to Successful Collaborations during Test and Evaluation: Panelist |
Willis Jensen HR Analyst W.L. Gore & Associates (bio)
Dr. Willis Jensen is a member of the HR Analytics team at W.L. Gore & Associates, where he supports people-related analytics work across the company. At Gore, he previously spent 12 years as a statistician and as the Global Statistics Team Leader, where he led a team of statisticians that provided statistical support and training across the globe. He holds degrees in Statistics from Brigham Young University and a Ph.D. in Statistics from Virginia Tech. |
Panel |
Recording | 2021 |
Roundtable The Role of the Statistics Profession in the DoD’s Current AI Initiative (Abstract)
In 2019, the DoD unveiled comprehensive strategies related to Artificial Intelligence, Digital Modernization, and Enterprise Data Analytics. Recognizing that data science and analytics are fundamental to these strategies, in October 2020 the DoD issued a comprehensive Data Strategy for national security and defense. For over a hundred years, the statistical sciences have played a pivotal role in our national defense, from quality assurance and reliability analysis of munitions fielded in WWII, to operational analyses defining battlefield force structure and tactics, to helping optimize the engineering design of complex products, to rigorous testing and evaluation of Warfighter systems. The American Statistical Association (ASA) in 2015 recognized in its statement on The Role of Statistics in Data Science that “statistics is foundational to data science… and its use in this emerging field empowers researchers to extract knowledge and obtain better results from Big Data and other analytics projects.” It is clearly recognized that data as information is a key asset to the DoD. The challenge we face is how to transform existing talent to add value where it counts. |
Laura Freeman Research Associate Professor of Statistics and Director of the Intelligent Systems Lab Virginia Tech (bio)
Dr. Laura Freeman is a Research Associate Professor of Statistics and the Director of the Intelligent Systems Lab at the Virginia Tech Hume Center. Her research leverages experimental methods for conducting research that brings together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. She develops new methods for test and evaluation focusing on emerging system technology. She is also the Assistant Dean for Research in the National Capital Region; in that capacity she works to shape research directions and collaborations across the College of Science in the National Capital Region. Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems. She reviewed test strategies, plans, and reports from all systems on DOT&E oversight. Dr. Freeman has a B.S. in Aerospace Engineering and an M.S. and Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on design and analysis of experiments for reliability data. |
Roundtable | 2021 |
|
Breakout Uncertainty Quantification and Sensitivity Analysis Methodology for AJEM (Abstract)
The Advanced Joint Effectiveness Model (AJEM) is a joint forces model developed by the U.S. Army that is used in vulnerability and lethality (V/L) predictions for threat/target interactions. This complex model primarily generates a probability response for various components, scenarios, loss of capabilities, or summary conditions. Sensitivity analysis (SA) and uncertainty quantification (UQ), referred to jointly as SA/UQ, are disciplines that provide the working space for how model estimates change with respect to changes in input variables. A comparative measure that characterizes the effect of an input change on the predicted outcome was developed; it is reviewed and illustrated in this presentation. This measure provides a practical context that stakeholders can better understand and utilize. We show graphical and tabular results using this measure. |
Craig Andres Mathematical Statistician U.S. Army CCDC Data & Analysis Center (bio)
Craig Andres is a Mathematical Statistician at the recently formed DEVCOM Data & Analysis Center in the Materiel M&S Branch, working primarily on the uncertainty quantification, as well as the verification and validation, of the AJEM vulnerability model. He is currently on a developmental assignment with the Capabilities Projection Team. He has a master’s degree in Applied Statistics from Oakland University and a master’s degree in Mathematics from Western Michigan University. |
Breakout | 2021 |
|
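The presentation's specific comparative measure is not spelled out in the abstract above, so as a generic stand-in the sketch below shows a simple one-at-a-time comparison on a toy probability model (the model, inputs, and coefficients are all hypothetical, not AJEM): each input is perturbed and the resulting change in the predicted probability is recorded.

```python
import numpy as np
from scipy.special import expit

# Stand-in for a vulnerability model that returns a probability (hypothetical).
def p_loss(velocity, obliquity, thickness):
    return expit(0.004 * velocity - 0.03 * obliquity - 0.08 * thickness)

baseline = dict(velocity=900.0, obliquity=30.0, thickness=25.0)
p0 = p_loss(**baseline)

# One-at-a-time comparison: shift each input by +/-10% of its baseline value
# and record the change in predicted probability relative to the baseline run.
for name, val in baseline.items():
    effects = []
    for mult in (0.9, 1.1):
        args = dict(baseline, **{name: val * mult})
        effects.append(p_loss(**args) - p0)
    print(f"{name:>9}: dP at -10% = {effects[0]:+.3f}, at +10% = {effects[1]:+.3f}")
```

Tabulating these deltas across inputs is one simple way to give stakeholders the kind of practical, comparable effect sizes the abstract describes; variance-based (e.g., Sobol) indices are the usual heavier-duty alternative.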
Breakout Verification and Validation of Elastodynamic Simulation Software for Aerospace Research (Abstract)
Physics-based simulation of nondestructive evaluation (NDE) inspection can help to advance the inspectability and reliability of mechanical systems. However, NDE simulations applicable to non-idealized mechanical components often require large compute domains and long run times. This has prompted development of custom NDE simulation software tailored to high performance computing (HPC) hardware. Verification and validation (V&V) is an integral part of developing this software to ensure implementations are robust and applicable to inspection problems, producing tools and simulations suitable for computational NDE research. This presentation addresses factors common to V&V of several elastodynamic simulation codes applicable to ultrasonic NDE. Examples are drawn from in-house simulation software at NASA Langley Research Center, ranging from ensuring reliability in a 1D heterogeneous media wave equation solver to the V&V needs of 3D cluster-parallel elastodynamic software. Factors specific to a research environment are addressed, where individual simulation results can be as relevant as the software product itself. Distinct facets of V&V are discussed, including testing to establish software reliability, employing systematic approaches for consistency with fundamental conservation laws, establishing the numerical stability of algorithms, and demonstrating concurrence with empirical data. This talk also addresses V&V practices for small groups of researchers. This includes establishing resources (e.g., time and personnel) for V&V during project planning to mitigate and control the risk of setbacks. Similarly, we identify ways for individual researchers to use V&V during simulation software development itself to both speed up the development process and reduce incurred technical debt. |
Erik Frankforter Research Engineer NASA Langley Research Center (bio)
Erik Frankforter is a research engineer in the Nondestructive Evaluation Sciences branch at NASA Langley Research Center. His research areas include the development of high performance computing nondestructive evaluation simulation software, advancement of inspection mechanics in advanced material systems, and application of physics-based simulation for inspection guidance. In 2017, he obtained a Ph.D. in mechanical engineering from the University of South Carolina, and served as a postdoctoral research scholar at the National Institute of Aerospace. |
Breakout |
| 2021 |
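One systematic code-verification approach of the kind mentioned above is checking that a discretization converges at its theoretical rate against a known exact solution. The sketch below applies this order-of-accuracy check to a toy second-order stencil (an illustration only, not the NASA Langley codes).

```python
import numpy as np

# Verify that a 2nd-order central-difference stencil converges at its
# theoretical rate against a known exact solution.
def second_derivative(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f, d2f_exact = np.sin, -np.sin(1.0)        # test function; exact value at x = 1
hs = np.array([0.1 / 2**k for k in range(5)])
errs = np.array([abs(second_derivative(f, 1.0, h) - d2f_exact) for h in hs])

# Observed order = slope of log(error) vs log(h); should be close to 2.
order = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print(f"observed order of accuracy: {order:.2f}")
```

The same idea scales up: run the full elastodynamic solver on successively refined grids against a manufactured or analytical solution, and a drop in the observed order flags an implementation bug long before comparison with empirical data.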
Webinar A HellerVVA Problem: The Catch-22 for Simulated Testing of Fully Autonomous Systems (Abstract)
In order to verify, validate, and accredit (VV&A) a simulation environment for testing the performance of an autonomous system, testers must examine more than just sensor physics; they must also provide evidence that the environmental features which drive system decision making are represented at all. When systems are black boxes, though, these features are fundamentally unknown, necessitating that we first test to discover them. An umbrella known as “model induction” provides approaches for demystifying black boxes and obtaining models of their decision making, but the current state of the art assumes testers can input large quantities of operationally relevant data. When systems only make passive perceptual decisions or operate in purely virtual environments, these assumptions are typically met. However, this will not be the case for black-box, fully autonomous systems. These systems can make decisions about the information they acquire, which cannot be changed in pre-recorded passive inputs; moreover, a major reason to obtain a decision model is to VV&A the simulation environment, which prevents the valid use of a virtual environment to obtain the model. Furthermore, the current consensus is that simulation will be used to get limited safety releases for live testing. This creates a catch-22: we need data to obtain the decision model, but we need the decision model to validly obtain the data. In this talk, we provide a brief overview of this challenge and possible solutions. |
Daniel Porter Research Staff Member IDA
Webinar |
Recording | 2020 |
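As a minimal illustration of the model induction the talk refers to, the sketch below probes a hypothetical black-box braking rule with passively supplied inputs and induces an interpretable surrogate with a decision tree; the talk's point is precisely that this workflow breaks down when the system actively chooses what data it sees.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)

# Stand-in black box: an autonomy decision rule we cannot inspect (hypothetical).
def black_box(obstacle_dist, speed):
    return ((obstacle_dist < 20) & (speed > 5)).astype(int)   # 1 = brake

# Probe the black box with operationally relevant inputs, then induce a model.
X = np.column_stack([rng.uniform(0, 100, 2000),   # obstacle distance
                     rng.uniform(0, 15, 2000)])   # speed
y = black_box(X[:, 0], X[:, 1])

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(surrogate, feature_names=["obstacle_dist", "speed"]))
```

For a passive perceptual system, the induced tree gives testers evidence about which environmental features matter and must therefore be represented in the simulation; a fully autonomous system that steers its own sensing invalidates the random probing step.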
Webinar A Practical Introduction To Gaussian Process Regression (Abstract)
Gaussian process regression is ubiquitous in spatial statistics, machine learning, and the surrogate modeling of computer simulation experiments. Fortunately, the prowess of Gaussian processes as accurate predictors, along with an appropriate quantification of uncertainty, does not derive from difficult-to-understand methodology or cumbersome implementation. We will cover the basics and provide a practical tool-set ready to be put to work in diverse applications. The presentation will involve accessible slides authored in Rmarkdown, with reproducible examples spanning bespoke implementation to add-on packages. |
Robert “Bobby” Gramacy Virginia Tech (bio)
Robert Gramacy is a Professor of Statistics in the College of Science at Virginia Polytechnic and State University (Virginia Tech). Previously he was an Associate Professor of Econometrics and Statistics at the Booth School of Business, and a fellow of the Computation Institute at The University of Chicago. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. Professor Gramacy is a computational statistician. He specializes in areas of real-data analysis where the ideal modeling apparatus is impractical, or where the current solutions are inefficient and thus skimp on fidelity. Such endeavors often require new models, new methods, and new algorithms. His goal is to be impactful in all three areas while remaining grounded in the needs of a motivating application. His aim is to release general purpose software for consumption by the scientific community at large, not only other statisticians. Professor Gramacy is the primary author on six R packages available on CRAN, two of which (tgp, and monomvn) have won awards from statistical and practitioner communities. |
Webinar |
| 2020 |
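The webinar's examples are in R (Rmarkdown, packages such as tgp); as a language-neutral illustration of the basics it covers, here is Gaussian process regression from scratch with a squared-exponential kernel, producing a predictive mean and pointwise uncertainty (all data and hyperparameter values are hypothetical).

```python
import numpy as np

# Squared-exponential (RBF) kernel for 1-D inputs.
def k(A, B, ell=0.15, tau=1.0):
    d = A.reshape(-1, 1) - B.reshape(1, -1)
    return tau**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 8)                        # training inputs
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 8)
Xs = np.linspace(0, 1, 100)                     # prediction grid

noise = 0.1**2
K = k(X, X) + noise * np.eye(len(X))            # noisy training covariance
Ks = k(Xs, X)
mean = Ks @ np.linalg.solve(K, y)               # posterior predictive mean
cov = k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
sd = np.sqrt(np.maximum(np.diag(cov), 0))       # pointwise predictive sd
print(mean[:3].round(3), sd[:3].round(3))
```

These few lines are the entire predictor: everything beyond them (hyperparameter estimation, numerically stabler Cholesky solves, higher input dimension) is refinement of the same conditioning formula.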
Webinar A Validation Case Study: The Environment Centric Weapons Analysis Facility (Abstract)
Reliable modeling and simulation (M&S) allows the undersea warfare community to understand torpedo performance in scenarios that could never be created in live testing, and do so for a fraction of the cost of an in-water test. The Navy hopes to use the Environment Centric Weapons Analysis Facility (ECWAF), a hardware-in-the-loop simulation, to predict torpedo effectiveness and supplement live operational testing. In order to trust the model’s results, the T&E community has applied rigorous statistical design of experiments techniques to both live and simulation testing. As part of ECWAF’s two-phased validation approach, we ran the M&S experiment with the legacy torpedo and developed an empirical emulator of the ECWAF using logistic regression. Comparing the emulator’s predictions to actual outcomes from live test events supported the test design for the upgraded torpedo. This talk overviews the ECWAF’s validation strategy, decisions that have put the ECWAF on a promising path, and the metrics used to quantify uncertainty. |
Elliot Bartis Research Staff Member IDA (bio)
Elliot Bartis is a research staff member at the Institute for Defense Analyses, where he works on test and evaluation of undersea warfare systems such as torpedoes and torpedo countermeasures. Prior to coming to IDA, Elliot received his B.A. in physics from Carleton College and his Ph.D. in materials science and engineering from the University of Maryland in College Park. For his doctoral dissertation, he studied how cold plasma interacts with biomolecules and polymers. Elliot was introduced to model validation through his work on a torpedo simulation called the Environment Centric Weapons Analysis Facility. In 2019, Elliot and others involved in the MK 48 torpedo program received a Special Achievement Award from the International Test and Evaluation Association, in part for their work on this simulation. Elliot lives in Falls Church, VA with his wife Jacqueline and their cat Lily. |
Webinar |
Recording | 2020 |
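A minimal sketch of the empirical-emulator idea described above, with invented scenario factors and outcomes (not the actual ECWAF analysis): a logistic regression is fit to simulated hit/miss results, and its predicted probabilities are then scored against live outcomes, here with a Brier score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical M&S runs: scenario factors -> hit/miss outcome from simulation.
X_sim = rng.uniform([0, 0], [10, 1], size=(500, 2))   # e.g., range, clutter
p_true = 1 / (1 + np.exp(-(3 - 0.6 * X_sim[:, 0] + 1.5 * X_sim[:, 1])))
y_sim = rng.binomial(1, p_true)

emulator = LogisticRegression().fit(X_sim, y_sim)     # empirical emulator of M&S

# Compare emulator predictions against (hypothetical) live test events.
X_live = rng.uniform([0, 0], [10, 1], size=(20, 2))
p_live = 1 / (1 + np.exp(-(3 - 0.6 * X_live[:, 0] + 1.5 * X_live[:, 1])))
y_live = rng.binomial(1, p_live)
p_hat = emulator.predict_proba(X_live)[:, 1]
brier = np.mean((p_hat - y_live) ** 2)                # uncertainty-aware score
print(f"Brier score vs live outcomes: {brier:.3f}")
```

Scoring probabilities rather than hard classifications is what lets a validation metric like this quantify uncertainty instead of just counting agreements.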
Webinar Adoption Challenges in Artificial Intelligence and Machine Learning for Analytic Work Environments |
Laura McNamara Distinguished Member of Technical Staff Sandia National Laboratories (bio)
Dr. Laura A. McNamara is Distinguished Member of Technical Staff at Sandia National Laboratories. She’s spent her career partnering with computer scientists, software engineers, physicists, human factors experts, organizational psychologists, remote sensing and imagery scientists, and national security analysts in a wide range of settings. She has expertise in user-centered technology design and evaluation, information visualization/visual analytics, and mixed qualitative/quantitative social science research. Most of her projects involve challenges in sensor management, technology usability, and innovation feasibility and adoption. She enjoys working in Agile and Agile-like environments and is a skilled leader of interdisciplinary engineering, scientific, and software teams. She is passionate about ensuring usability, utility, and adaptability of visualization, operational, and analytic software. Dr. McNamara’s current work focuses on operational and analytic workflows in remote sensing environments. She is also an expert on visual cognitive workflows in team environments, focused on the role of user interfaces and analytic technologies to support exploratory data analysis and information creation with large, disparate, unwieldy datasets, from text to remote sensing. Dr. McNamara has longstanding interest in the epistemology and practices of computational modeling and simulation, verification and validation, and uncertainty quantification. She has worked with the National Geospatial-Intelligence Agency, the Missile Defense Agency, the Defense Intelligence Agency, and the nuclear weapons programs at Sandia and Los Alamos National Laboratories to enhance the effective use of modeling and simulation in interdisciplinary R&D projects. |
Webinar |
| 2020 |
Webinar Can AI Predict Human Behavior? (Abstract)
Given the rapid increase of novel machine learning applications in cybersecurity and people analytics, there is significant evidence that these tools can give meaningful and actionable insights. Even so, great care must be taken to ensure that automated decision-making tools are deployed in such a way as to mitigate bias in predictions and promote security of user data. In this talk, Dr. Burns will take a deep dive into an open source data set in the area of people analytics, demonstrating the application of basic machine learning techniques while discussing limitations and potential pitfalls in using an algorithm to predict human behavior. In the end, he will draw a comparison between predicting a person’s behavioral propensity for things such as becoming an insider threat and the way assisted-diagnosis tools are used in medicine to predict the development or recurrence of illnesses. |
Dustin Burns Senior Scientist Exponent (bio)
Dr. Dustin Burns is a Senior Scientist in the Statistical and Data Sciences practice at Exponent, a multidisciplinary scientific and engineering consulting firm dedicated to responding to the world’s most impactful business problems. Combining his background in laboratory experiments with his expertise in data analytics and machine learning, Dr. Burns works across many industries, including security, consumer electronics, utilities, and health sciences. He supports clients’ goals to modernize data collection and analytics strategies, extract information from unused data such as images and text, and test and validate existing systems. |
Webinar |
Recording | 2020 |
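In the spirit of the pitfalls the talk discusses, the sketch below computes one common bias diagnostic, group-wise false-positive rates of a classifier, on synthetic data (the group attribute, labels, and scores are all invented for illustration).

```python
import numpy as np

# Bias check sketch: compare false-positive rates of a classifier across groups.
def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return np.mean(y_pred[negatives]) if negatives.any() else np.nan

rng = np.random.default_rng(4)
group = rng.integers(0, 2, 1000)            # hypothetical demographic attribute
y_true = rng.binomial(1, 0.1, 1000)         # actual "insider threat" labels
score = rng.normal(0.3 * y_true + 0.05 * group, 0.2)   # score leaks group info
y_pred = (score > 0.25).astype(int)

for g in (0, 1):
    fpr = false_positive_rate(y_true[group == g], y_pred[group == g])
    print(f"group {g}: false-positive rate = {fpr:.3f}")
```

A gap between the two rates means the model flags innocent members of one group more often, which is exactly the kind of deployment harm the abstract urges practitioners to check for before acting on predictions.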