Session Title | Speaker | Type | Recording | Materials | Year |
---|---|---|---|---|---|
Breakout Demystifying the Black Box: A Test Strategy for Autonomy (Abstract)
Systems with autonomy are beginning to permeate civilian, industrial, and military sectors. Though these technologies have the potential to revolutionize our world, they also bring a host of new challenges in evaluating whether these tools are safe, effective, and reliable. The Institute for Defense Analyses is developing methodologies to enable testing systems that can, to some extent, think for themselves. In this talk, we share how we think about this problem and how this framing can help you develop a test strategy for your own domain. |
Dan Porter | Breakout |
| 2019 |
|
Tutorial Demystifying Data Science (Abstract)
Data science is the new buzzword: it is being touted as the solution for everything from curing cancer to self-driving cars. How is data science related to traditional statistical methods? Is data science just another name for “big data”? In this mini-tutorial, we will begin by discussing what data science is (and is not). We will then discuss some of the key principles of data science practice and conclude by examining the classes of problems and methods that are included in data science. |
Alyson Wilson Laboratory for Analytic Sciences North Carolina State University |
Tutorial | Materials | 2018 |
|
Breakout Deep Reinforcement Learning (Abstract)
An overview of Deep Reinforcement Learning and its recent successes in creating high-performing agents, covering its application in “easy” environments up to massively complex multi-agent strategic environments. We will analyze the behaviors learned, discuss research challenges, and imagine future possibilities. |
Benjamin Bell | Breakout |
| 2019 |
|
Breakout Deep learning aided inspection of additively manufactured metals (Abstract)
The performance and reliability of additively manufactured (AM) metals is limited by the ubiquitous presence of void- and crack-like defects that form during processing. Many applications require non-destructive evaluation of AM metals to detect potentially critical flaws. To this end, we propose a deep learning approach that can help with the interpretation of inspection reports. Convolutional neural networks (CNNs) are developed to predict the elastic stress fields in images of defect-containing metal microstructures, and therefore directly identify critical defects. A large dataset consisting of the stress response of 100,000 random microstructure images is generated using high-resolution Fast Fourier Transform-based finite element (FFT-FE) calculations, which is then used to train a modified U-Net style CNN model. The trained U-Net model predicted the stress response more accurately than previous CNN architectures, exceeded the accuracy of low-resolution FFT-FE calculations, and was evaluated more than 100 times faster than conventional FE techniques. The model was applied to images of real AM microstructures with severe lack-of-fusion defects, and predicted a strong linear increase of maximum stress as a function of pore fraction. This work shows that CNNs can aid the rapid and accurate inspection of defect-containing AM material. |
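As a rough illustration only (a sketch, not the authors' model), a small U-Net-style encoder-decoder mapping an image to a same-sized stress field might look like the following in R with the keras package. The image size, filter counts, and loss are hypothetical choices; the real model and its 100,000-image training set are not reproduced here.

```r
# Minimal sketch of a U-Net-style encoder-decoder (hypothetical sizes).
library(keras)

inputs <- layer_input(shape = c(64, 64, 1))          # microstructure image

e1 <- inputs %>% layer_conv_2d(16, 3, activation = "relu", padding = "same")
p1 <- e1 %>% layer_max_pooling_2d(pool_size = 2)     # downsample

b  <- p1 %>% layer_conv_2d(32, 3, activation = "relu", padding = "same")

u1 <- b %>% layer_conv_2d_transpose(16, 3, strides = 2, padding = "same")
d1 <- layer_concatenate(list(u1, e1)) %>%            # U-Net skip connection
  layer_conv_2d(16, 3, activation = "relu", padding = "same")

outputs <- d1 %>% layer_conv_2d(1, 1, activation = "linear")  # stress map

model <- keras_model(inputs, outputs)
model %>% compile(optimizer = "adam", loss = "mse")  # regression on stress fields
```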
Brendan Croom Postdoctoral Fellow JHU Applied Physics Laboratory (bio)
Dr. Croom joined the Applied Physics Laboratory in 2020 as a Postdoctoral Researcher within the Multifunctional Materials and Nanostructures group. At APL, his work has focused on developing quantitative inspection, analysis, and testing tools to ensure the reliability of additively manufactured metals, which commonly fail due to defects that were created during processing. This work involves pushing the capabilities of X-ray Computed Tomography imaging techniques in terms of speed and resolution to better resolve defects, and using machine learning to improve defect detection and measurement interpretation. Before joining APL, Dr. Croom was an NRC Postdoctoral Research Fellow at the Materials and Manufacturing Directorate at the Air Force Research Laboratory, where he studied the fiber alignment, defect formation, and fracture behavior of additively manufactured composites. He completed his Ph.D. at the University of Virginia in 2019, where he developed several in situ X-ray Computed Tomography mechanical testing techniques. |
Breakout | Session Recording |
Recording | 2022 |
Breakout Decentralized Signal Processing and Distributed Control for Collaborative Autonomous Sensor Networks (Abstract)
Collaborative autonomous sensor networks have recently been used in many applications including inspection, law enforcement, search and rescue, and national security. They offer scalable, low-cost solutions which are robust to the loss of multiple sensors in hostile or dangerous environments. While often composed of less capable sensors, the performance of a large network can approach the performance of far more capable and expensive platforms if nodes effectively coordinate their sensing actions and data processing. This talk will summarize work to date at LLNL on distributed signal processing and decentralized optimization algorithms for collaborative autonomous sensor networks, focusing on ADMM-based solutions for detection/estimation problems and sequential greedy optimization solutions which maximize submodular functions, e.g., mutual information. |
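As a toy illustration of the sequential greedy step for submodular objectives (a sketch, not LLNL's implementation), one might greedily select sensors under a Gaussian model where the information value of a sensor set is the log-determinant of the corresponding covariance submatrix. The covariance and budget below are hypothetical.

```r
# Greedy selection of k sensors maximizing a submodular log-det objective.
set.seed(1)
n <- 20                                   # candidate sensors
A <- matrix(rnorm(n * n), n, n)
Sigma <- crossprod(A) / n + diag(n)       # a positive-definite covariance

logdet <- function(S) determinant(Sigma[S, S, drop = FALSE])$modulus

greedy_select <- function(k) {
  S <- integer(0)
  for (i in seq_len(k)) {
    rest  <- setdiff(1:n, S)
    gains <- sapply(rest, function(j)      # marginal gain of adding sensor j
      logdet(c(S, j)) - if (length(S)) logdet(S) else 0)
    S <- c(S, rest[which.max(gains)])
  }
  S
}
greedy_select(5)   # indices of the 5 greedily chosen sensors
```

For submodular objectives like this one, the greedy solution carries the classic (1 - 1/e) approximation guarantee, which is why it scales well to large networks.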
Ryan Goldhahn | Breakout | 2019 |
||
Breakout Debunking Stress Rupture Theories Using Weibull Regression Plots (Abstract)
As statisticians, we are always working on new ways to explain statistical methodologies to non-statisticians. It is in this realm that we never underestimate the value of graphics and patience! In this presentation, we present a case study that involves stress rupture data where a Weibull regression is needed to estimate the parameters. The case study arises from a multi-stage project supported by the NASA Engineering and Safety Center (NESC), whose objective was to assess the safety of composite overwrapped pressure vessels (COPVs). The analytical team was tasked with devising a test plan to model stress rupture failure risk in the carbon fiber strands that encase the COPVs, with the goal of understanding the reliability of the strands at use conditions for the expected mission life. While analyzing the data, we found that the proper analysis contradicts accepted theories about the stress rupture phenomenon. In this talk, we will introduce ways to graph the stress rupture data to better explain the proper analysis and also explore assumptions. |
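As a minimal sketch of the kind of model involved (not the NESC analysis itself), a Weibull regression of time-to-rupture on stress can be fit in R with survival::survreg. The simulated `strands` data frame, its columns, and the censoring time are hypothetical.

```r
# Weibull regression of stress-rupture times on applied stress (toy data).
library(survival)

set.seed(1)
strands <- data.frame(stress = rep(c(0.7, 0.8, 0.9), each = 20))
strands$time   <- rweibull(60, shape = 1.2, scale = 5000 * strands$stress^-8)
strands$status <- as.integer(strands$time < 10000)   # 1 = rupture observed
strands$time   <- pmin(strands$time, 10000)          # censor at 10,000 hours

fit <- survreg(Surv(time, status) ~ log(stress), data = strands,
               dist = "weibull")
summary(fit)

# survreg uses a location-scale parameterization; the usual Weibull
# shape parameter is 1 / fit$scale.
shape <- 1 / fit$scale
```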
Anne Driscoll Associate Collegiate Professor Virginia Tech (bio)
Anne Ryan Driscoll is an Associate Collegiate Professor in the Department of Statistics at Virginia Tech. She received her PhD in Statistics from Virginia Tech. Her research interests include statistical process control, design of experiments, and statistics education. She is a member of ASQ and ASA. |
Breakout |
| 2021 |
|
Short Course Data Visualization (Abstract)
Data visualization allows us to quickly explore and discover relationships graphically and interactively. We will provide the foundations for creating better graphical information to accelerate the insight discovery process and enhance the understandability of reported results. First principles and the “human as part of the system” aspects of information visualization from multiple leading sources such as Harvard Business Review, Edward Tufte, and Stephen Few will be explored using representative example data sets. We will discuss best practices for graphical excellence to most effectively, clearly, and efficiently communicate your story. We will explore visualizations applicable across the conference themes (computational modeling, DOE, statistical engineering, modeling & simulation, and reliability) for univariate, multivariate, time-dependent, and geographical data. |
Jim Wisnowski Adsurgo |
Short Course | Materials | 2017 |
|
Breakout Data Visualization (Abstract)
Teams of people with many different talents and skills work together at NASA to improve our understanding of our planet Earth, our Sun and solar system, and the Universe. The Earth system is made up of complex interactions and dependencies among its solar, oceanic, terrestrial, atmospheric, and living components. Solar storms have been recognized as a cause of technological problems on Earth since the invention of the telegraph in the 19th century. Solar flares, coronal holes, and coronal mass ejections (CMEs) can emit large bursts of radiation, high-speed electrons and protons, and other highly energetic particles, which are sometimes directed at Earth. These particles and radiation can damage satellites in space, shut down power grids on Earth, cause GPS outages, and pose serious health risks to humans flying at high altitudes as well as astronauts in space. NASA builds and operates a fleet of satellites to study the Sun and a fleet of satellites and aircraft to observe the Earth system. NASA combines these observations with numerical models to understand how these systems work. Using satellite observations alongside computer models, we can combine many pieces of information to form a coherent view of Earth and the Sun. NASA research helps us understand how processes combine to affect life on Earth: this includes severe weather, health, changes in climate, and space weather. The Scientific Visualization Studio (SVS) wants you to learn about NASA programs through visualization. The SVS works closely with scientists to create data visualizations, animations, and images that promote a greater understanding of Earth and Space Science research activities at NASA and within the academic research community supported by NASA. |
Lori Perkins NASA |
Breakout | 2017 |
||
Data Science & ML-Enabled Terminal Effects Optimization (Abstract)
Warhead design and performance optimization against a range of targets is a foundational aspect of the Department of the Army’s mission on behalf of the warfighter. The existing procedures used to perform this basic design task do not fully leverage the exponential growth in data science, machine learning, distributed computing, and computational optimization. Although sound in practice and methodology, existing implementations are laborious and computationally expensive, limiting the ability to fully explore the trade space of all potentially viable solutions. An additional complicating factor is the fast-paced nature of many research and development programs, which require equally fast-paced conceptualization and assessment of warhead designs. By utilizing methods that take advantage of data analytics, the workflow to develop and assess modern warheads will enable earlier insights, discovery through advanced visualization, and optimal integration of multiple engineering domains. Additionally, a framework built on machine learning would allow for the exploitation of past studies and designs to better inform future developments. Combining these approaches will allow for rapid conceptualization and assessment of new and novel warhead designs. US overmatch capability is quickly eroding across many tactical and operational weapon platforms. Traditional incremental improvement approaches are no longer generating appreciable performance improvements to warrant investment. Novel next-generation techniques are required to find efficiencies in designs and leap-forward technologies to maintain US superiority. The proposed approach seeks to shift the existing design mentality to meet this challenge. |
John Cilli Computer Scientist Picatinny Arsenal (bio)
My name is John Cilli. I am a recent graduate of East Stroudsburg University with a bachelor’s in Computer Science. I have been working at Picatinny within the Systems Analysis Division as a computer scientist for a little over a year now. |
Session Recording |
Recording | 2022 |
|
Tutorial Data Integrity For Deep Learning Models (Abstract)
Deep learning models are built from algorithm frameworks that fit parameters over a large set of structured historical examples. Model robustness relies heavily on the accuracy and quality of the input training datasets. This mini-tutorial seeks to explore the practical implications of data quality issues when attempting to build reliable and accurate deep learning models. The tutorial will review the basics of neural networks and model building, and then dive deep into data quality considerations using practical examples. An understanding of data integrity and data quality is pivotal for verification and validation of deep learning models, and this tutorial will provide students with a foundation in this topic. |
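As a minimal sketch of the kind of pre-training integrity checks the tutorial discusses (the data frame and its columns are hypothetical stand-ins), a few lines of base R can surface duplicates, missing labels, and class imbalance before any model is fit.

```r
# Basic training-data integrity audit on a toy labeled dataset.
set.seed(4)
train <- data.frame(x1 = rnorm(100), x2 = rnorm(100),
                    label = sample(c("ok", "fail", NA), 100,
                                   replace = TRUE, prob = c(.70, .25, .05)))

sum(duplicated(train))              # exact duplicate rows
colSums(is.na(train))               # missing values per column (incl. labels)
prop.table(table(train$label))      # class balance among non-missing labels
```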
Roshan Patel Systems Engineer/Data Scientist US Army (bio)
Mr. Roshan Patel is a systems engineer and data scientist working at the CCDC Armaments Center. His role focuses on systems engineering infrastructure, statistical modeling, and the analysis of weapon systems. He holds a Master’s in Computer Science from Rutgers University, where he specialized in operating systems programming and machine learning. Mr. Patel is the current AI lead for the Systems Engineering Directorate at the CCDC Armaments Center. |
Tutorial | 2022 |
||
Tutorial Data Integrity For Deep Learning Models (Abstract)
Deep learning models are built from algorithm frameworks that fit parameters over a large set of structured historical examples. Model robustness relies heavily on the accuracy and quality of the input training datasets. This mini-tutorial seeks to explore the practical implications of data quality issues when attempting to build reliable and accurate deep learning models. The tutorial will review the basics of neural networks and model building, and then dive deep into data quality considerations using practical examples. An understanding of data integrity and data quality is pivotal for verification and validation of deep learning models, and this tutorial will provide students with a foundation in this topic. |
Victoria Gerardi and John Cilli US Army, CCDC Armaments Center |
Tutorial | Session Recording |
Materials
Recording | 2022 |
Short Course Data Farming (Abstract)
This tutorial is designed for newcomers to simulation-based experiments. Data farming is the process of using computational experiments to “grow” data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. The focus of the tutorial will be on gaining practical experience with setting up and running simulation experiments, leveraging recent advances in large-scale simulation experimentation pioneered by the Simulation Experiments & Efficient Designs (SEED) Center for Data Farming at the Naval Postgraduate School (http://harvest.nps.edu). Participants will be introduced to fundamental concepts, and jointly explore simulation models in an interactive setting. Demonstrations and written materials will supplement guided, hands-on activities through the setup, design, data collection, and analysis phases of an experiment-driven simulation study. |
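As a minimal sketch of "growing" data from a simulation experiment (a toy stand-in, not a SEED Center design), one can cross factor levels with replications, run the model at each design point, and analyze the farmed data. The factors, the stand-in simulation, and the response below are hypothetical.

```r
# Grow data over a crossed design with replications, then analyze it.
design <- expand.grid(arrival_rate = c(0.5, 1.0, 1.5),
                      servers      = c(1, 2, 3),
                      replication  = 1:30)

simulate_once <- function(arrival_rate, servers) {
  # stand-in simulation: waits shrink with more servers, plus noise
  rexp(1, rate = servers / arrival_rate)
}

design$wait <- mapply(simulate_once, design$arrival_rate, design$servers)

# analyze the farmed data with an ordinary linear model
summary(lm(wait ~ arrival_rate * servers, data = design))
```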
Susan Sanchez Naval Postgraduate School |
Short Course | Materials | 2017 |
|
Breakout Dashboard for Equipment Failure Reports (Abstract)
Equipment Failure Reports (EFRs) describe equipment failures and the steps taken as a result of these failures. EFRs contain both structured and unstructured data. Currently, analysts manually read through EFRs to understand failure modes and make recommendations to reduce future failures. This is a tedious process in which important trends and information can get lost. This motivated the creation of an interactive dashboard that extracts relevant information from the unstructured (i.e., free-form text) data and combines it with structured data like failure date, corrective action, and part number. The dashboard is an R Shiny application that utilizes numerous text mining and visualization packages, including tm, plotly, edgebundler, and topicmodels. It allows the end user to filter to the EFRs that they care about and visualize metadata, such as the geographic region where the failure occurred, over time, allowing previously unknown trends to be seen. The dashboard also applies topic modeling to the unstructured data to identify key themes. Analysts are now able to quickly identify frequent failure modes and look at time- and region-based trends in these common equipment failures. |
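As a minimal sketch of the topic-modeling step, using the tm and topicmodels packages named in the abstract (the toy failure narratives and the number of topics are hypothetical):

```r
# LDA topic modeling on toy free-form failure narratives.
library(tm)
library(topicmodels)

efr_text <- c("hydraulic pump seal leak caused pressure loss",
              "pump seal wear led to hydraulic fluid leak",
              "connector corrosion caused intermittent power loss",
              "corroded connector pins led to power fault",
              "bearing wear produced excessive vibration in motor",
              "motor vibration traced to worn bearing")

corpus <- VCorpus(VectorSource(efr_text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("en"))

dtm <- DocumentTermMatrix(corpus)
lda <- LDA(dtm, k = 2, control = list(seed = 42))  # two candidate themes
terms(lda, 5)                                      # top terms per topic
```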
Robert Cole Molloy Johns Hopkins University Applied Physics Laboratory (bio)
Robert Molloy is a data scientist for the Johns Hopkins University Applied Physics Laboratory’s Systems Analysis Group, where he supports a variety of projects including text mining on unstructured text data, applying machine learning techniques to text and signal data, and implementing and modifying existing natural language models. He graduated from the University of Maryland, College Park in May 2020 with a dual degree in computer science and mathematics with a concentration in statistics. |
Breakout |
| 2021 |
|
Webinar D-Optimally Based Sequential Test Method for Ballistic Limit Testing (Abstract)
Ballistic limit testing of armor is testing in which a kinetic energy threat is shot at armor at varying velocities. The striking velocity and whether the threat completely or partially penetrated the armor are recorded. The probability of penetration is modeled as a function of velocity using a generalized linear model. The parameters of the model serve as inputs to MUVES, a DoD software tool used to analyze weapon system vulnerability and munition lethality. Generally, the probability of penetration is assumed to be monotonically increasing with velocity. However, in cases in which there is a change in penetration mechanism, such as the shatter gap phenomenon, the probability of penetration can no longer be assumed to be monotonically increasing, and a more complex model is necessary. One such model was developed by Chang and Bodt to model the probability of penetration as a function of velocity over a velocity range in which there are two penetration mechanisms. This paper proposes a D-optimally based sequential shot selection method to efficiently select threat velocities during testing. Two cases are presented: the case in which the penetration mechanism for each shot is known (via high-speed or post-shot X-ray) and the case in which the penetration mechanism is not known. This method may be used to support an improved evaluation of armor performance for cases in which there is a change in penetration mechanism. |
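As a simplified illustration (a sketch for a single penetration mechanism, not the Chang-Bodt two-mechanism model), one might fit a logistic GLM to the shots so far and score candidate velocities by the determinant of the resulting Fisher information, picking the next shot D-optimally. The shot data and candidate grid are hypothetical.

```r
# D-optimal next-shot selection under a simple logistic penetration model.
shots <- data.frame(vel = c(800, 850, 870, 890, 910, 950),
                    pen = c(0,   0,   1,   0,   1,   1))  # 1 = complete penetration

fit <- glm(pen ~ vel, family = binomial, data = shots)

# Fisher information for a logistic model at design points x is X' W X,
# with W = diag(p * (1 - p)) evaluated at the current estimates.
d_criterion <- function(v_new) {
  x <- c(shots$vel, v_new)
  X <- cbind(1, x)
  p <- plogis(coef(fit)[1] + coef(fit)[2] * x)
  det(t(X) %*% diag(p * (1 - p)) %*% X)
}

candidates <- seq(750, 1000, by = 5)
next_shot  <- candidates[which.max(sapply(candidates, d_criterion))]
next_shot   # velocity of the next D-optimally chosen shot
```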
Leonard Lombardo Mathematician U.S. Army Aberdeen Test Center (bio)
Leonard currently serves as an analyst for the RAM/ILS Engineering and Analysis Division at the U.S. Army Aberdeen Test Center (ATC). At ATC, he is the lead analyst for both ballistic testing of helmets and fragmentation analysis. Previously, while on a developmental assignment at the U.S. Army Evaluation Center, he worked towards increasing the use of generalized linear models in ballistic limit testing. Since then, he has contributed to the implementation of generalized linear models within the test center through test design and analysis. |
Webinar | Session Recording |
Recording | 2020 |
Breakout Cybersecurity Metrics and Quantification: Problems, Some Results, and Research Directions (Abstract)
Cybersecurity Metrics and Quantification is a fundamental but notoriously hard problem. It is one of the pillars underlying the emerging Science of Cybersecurity. In this talk, I will describe a number of cybersecurity metrics and quantification research problems that are encountered in evaluating the effectiveness of a range of cyber defense tools. I will review the research results we have obtained over the past years. I will also discuss future research directions, including the ones that are undertaken in my research group. |
Shouhuai Xu Professor University of Colorado Colorado Springs (bio)
Shouhuai Xu is the Gallogly Chair Professor in the Department of Computer Science, University of Colorado Colorado Springs (UCCS). Prior to joining UCCS, he was with the Department of Computer Science, University of Texas at San Antonio. He pioneered a systematic approach, dubbed Cybersecurity Dynamics, to modeling and quantifying cybersecurity from a holistic perspective. This approach has three orthogonal research thrusts: metrics (for quantifying security, resilience and trustworthiness/uncertainty, to which this talk belongs), cybersecurity data analytics, and cybersecurity first-principle modeling (for seeking cybersecurity laws). His research has won a number of awards, including the 2019 worldwide adversarial malware classification challenge organized by the MIT Lincoln Lab. His research has been funded by AFOSR, AFRL, ARL, ARO, DOE, NSF and ONR. He co-initiated the International Conference on Science of Cyber Security (SciSec) and is serving as its Steering Committee Chair. He has served as Program Committee co-chair for a number of international conferences and as Program Committee member for numerous international conferences. He is/was an Associate Editor of IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE T-IFS), and IEEE Transactions on Network Science and Engineering (IEEE TNSE). More information about his research can be found at https://xu-lab.org. |
Breakout | Materials | 2021 |
|
Breakout CYBER Penetration Testing and Statistical Analysis in DT&E (Abstract)
Reconnaissance, footprinting, and enumeration are critical steps in the CYBER penetration testing process because if these steps are not fully and extensively executed, the information available for finding a system’s vulnerabilities may be limited. During the CYBER testing process, penetration testers often find themselves doing the same initial enumeration scans over and over for each system under test. Because of this, automated scripts have been developed that take these mundane and repetitive manual steps and perform them automatically with little user input. Once automation is present in the penetration testing process, Scientific Test and Analysis Techniques (STAT) can be incorporated. By combining automation and STAT in the CYBER penetration testing process, Mr. Tim McLean at Marine Corps Tactical Systems Support Activity (MCTSSA) coined a new term called CYBERSTAT. CYBERSTAT is applying scientific test and analysis techniques to offensive CYBER penetration testing tools with an important realization that CYBERSTAT assumes the system under test is the offensive penetration test tool itself. By applying combinatorial testing techniques to the CYBER tool, the CYBER tool’s scope is expanded beyond “one at a time” uses as the combinations of the CYBER tool’s capabilities and options are explored and executed as test cases against the target system. In CYBERSTAT, the additional test cases produced by STAT can be run automatically using scripts. This talk will show how MCTSSA is preparing to use CYBERSTAT in the Developmental Test and Evaluation process of USMC Command and Control systems. |
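As a toy illustration of the combinatorial idea (a sketch, not MCTSSA's scripts), the combinations of a tool's capabilities and options can be enumerated in base R; the option names and levels below are hypothetical, and in practice a covering array would be used to cut the exhaustive set down to t-way coverage.

```r
# Enumerate exhaustive combinations of hypothetical CYBER-tool options
# as automated test cases.
cases <- expand.grid(scan_type  = c("SYN", "connect", "UDP"),
                     timing     = c("slow", "normal", "aggressive"),
                     port_range = c("top100", "top1000", "all"))

nrow(cases)   # 27 exhaustive test cases to script and run automatically
head(cases)
```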
Timothy McLean | Breakout | Materials | 2018 |
|
Tutorial Creating Shiny Apps in R for Sharing Automated Statistical Products (Abstract)
Interactive web apps can be built straight from R with the R package Shiny. Shiny apps are becoming more prevalent as a way to automate statistical products and share them with others who do not know R. This tutorial will cover Shiny app syntax and how to create basic Shiny apps. Participants will create basic apps by working through several examples and explore how to change and improve these apps. Participants will leave the session with the tools to create their own more complex applications. Participants will need a computer with R, RStudio, and the shiny R package installed. |
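As a minimal sketch of the kind of app the tutorial builds (one input, one reactive plot, using a built-in dataset), a complete Shiny app fits in a few lines:

```r
# A complete, runnable Shiny app: slider input drives a reactive histogram.
library(shiny)

ui <- fluidPage(
  titlePanel("Histogram demo"),
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 20),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    hist(faithful$waiting, breaks = input$bins,
         main = "Old Faithful waiting times", xlab = "Minutes")
  })
}

shinyApp(ui, server)   # launches the app locally
```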
Randy Griffiths U.S. Army Evaluation Center |
Tutorial | Materials | 2018 |
|
Convolutional Neural Networks and Semantic Segmentation for Cloud and Ice Detection (Abstract)
Recent research shows the effectiveness of machine learning for image classification and segmentation. Artificial neural networks (ANNs) are highly effective on image datasets such as the MNIST dataset of handwritten digits. However, when presented with more complex images, ANNs and other simple computer vision algorithms tend to fail. This research uses Convolutional Neural Networks (CNNs) to differentiate between ice and clouds in imagery of the Arctic. Unlike ANNs, which analyze the problem in one dimension, CNNs identify features using the spatial relationships between the pixels in an image. This technique allows us to extract spatial features, yielding higher accuracy. Using a CNN named the Cloud-Net Model, we analyze how a CNN performs on satellite images. First, we examine recent research on the Cloud-Net Model’s effectiveness on satellite imagery, specifically Landsat data with four channels: red, green, blue, and infrared. We extend and modify this model to analyze data from the three channels most commonly used by satellites: red, green, and blue. By training on different combinations of these three channels, we extend this analysis by testing on an entirely different dataset: GOES imagery. This gives us an understanding of the impact of each individual channel on image classification. By selecting GOES images from the same geographic locations as the Landsat images, containing both ice and clouds, we test the CNN’s generalizability. Finally, we present the CNN’s ability to accurately identify clouds and ice in the GOES data versus the Landsat data. |
Prarabdha Ojwaswee Yonzon Cadet United States Military Academy (West Point) (bio)
CDT Prarabdha “Osho” Yonzon is a first-generation Nepalese American raised in Brooklyn Park, Minnesota. He initially enlisted in the Minnesota National Guard in 2015 as an Aviation Operations Specialist, and he was later accepted into USMAPS in 2017. He is an Applied Statistics and Data Science major at the United States Military Academy. Osho is passionate about his research. He first worked with the West Point Department of Physics to examine impacts on GPS solutions. Later, he published several articles and presented them at the AWRA annual conference on modeling groundwater flow with the Math Department. Currently, he is working with the West Point Department of Mathematics and Lockheed Martin to create machine learning algorithms to detect objects in images. He plans to attend graduate school for data science and serve as a cyber officer. |
Session Recording |
Recording | 2022 |
|
Breakout Constructing Designs for Fault Location (Abstract)
While fault testing a system with many factors each appearing at some number of levels, it may not be possible to test all combinations of factor levels. Most faults are caused by interactions of only a few factors, so testing interactions up to size t will often find all faults in the system without executing an exhaustive test suite. Call an assignment of levels to t of the factors a t-way interaction. A covering array is a collection of tests that ensures that every t-way interaction is covered by at least one test in the test suite. Locating arrays extend covering arrays with the additional feature that they not only indicate the presence of faults but locate the faulty interactions when there are no more than d faults in the system. If an array is (d, t)-locating, for every pair of sets of t-way interactions of size d, the interactions do not appear in exactly the same tests. This ensures that the faulty interactions can be differentiated from non-faulty interactions by the results of some test in which interactions from one set or the other, but not both, are tested. When the property holds for t-way interaction sets of size up to d, the notation (d, t̄) is used. In addition to fault location, locating arrays have also been used to identify significant effects in screening experiments. Locating arrays are fairly new, and few techniques have been explored for their construction. Most of the available work is limited to finding only one fault (d = 1). Known general methods require a covering array of strength t + d and produce many more tests than are needed. In this talk, we present Partitioned Search with Column Resampling (PSCR), a computational search algorithm that verifies whether an array is (d, t̄)-locating by partitioning the search space to decrease the number of comparisons. If a candidate array is not locating, random resampling is performed until a locating array is constructed or an iteration limit is reached. Algorithmic parameters determine which factor columns to resample and when to add additional tests to the candidate array. We use a 5 × 5 × 3 × 2 × 2 full factorial design to analyze the performance of the algorithmic parameters and provide guidance on how to tune parameters to prioritize speed, accuracy, or a combination of both. Last, we compare our results to the number of tests in locating arrays constructed for the factors and levels of real-world systems produced by other methods. |
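As a toy illustration of the coverage property that locating arrays build on (a sketch, not PSCR itself), the following base-R check verifies that every 2-way interaction is covered by at least one test; the locating property requires the stronger set-wise comparison described above. The example array is a hypothetical full factorial, which trivially covers all pairs.

```r
# Check the covering-array property for t = 2: every pair of levels of
# every pair of factors must appear together in some test (row).
covers_2way <- function(A) {
  k <- ncol(A)
  for (i in 1:(k - 1)) for (j in (i + 1):k) {
    seen <- unique(paste(A[, i], A[, j]))
    need <- as.vector(outer(unique(A[, i]), unique(A[, j]), paste))
    if (!all(need %in% seen)) return(FALSE)
  }
  TRUE
}

A <- as.matrix(expand.grid(f1 = 0:1, f2 = 0:1, f3 = 0:2))  # full factorial
covers_2way(A)   # TRUE: a full factorial covers all 2-way interactions
```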
Erin Lanus | Breakout |
| 2019 |
|
Keynote Consensus Building |
Antonio Possolo NIST Fellow, Chief Statistician National Institute of Standards and Technology (bio)
Antonio Possolo holds a Ph.D. in statistics from Yale University, and has been practicing the statistical arts for more than 35 years, in industry (General Electric, Boeing), academia (Princeton University, University of Washington in Seattle, Classical University of Lisboa), and government. He is committed to the development and application of probabilistic and statistical methods that contribute to advances in science and technology, and in particular to measurement science. |
Keynote | Materials | 2018 |
|
Webinar Connecting Software Reliability Growth Models to Software Defect Tracking (Abstract)
Co-author: Melanie Luperon. Most software reliability growth models only track defect discovery. However, a practical concern is the removal of high-severity defects, yet defect removal is often assumed to occur instantaneously. More recently, several defect removal models have been formulated as differential equations in terms of the number of defects discovered but not yet resolved and the rate of resolution. The limitation of this approach is that it does not take into consideration data contained in a defect tracking database. This talk describes our recent efforts to analyze data from a NASA program. Two methods to model defect resolution are developed, namely (i) distributional and (ii) Markovian approaches. The distributional approach employs times between defect discovery and resolution to characterize the mean resolution time and derives a software defect resolution model from the corresponding software reliability growth model to track defect discovery. The Markovian approach develops a state model from the stages of the software defect lifecycle as well as a transition probability matrix and the distributions for each transition, providing a semi-Markov model. Both the distributional and Markovian approaches employ a censored estimation technique to identify the maximum likelihood estimates, in order to handle the case where some but not all of the defects discovered have been resolved. Furthermore, we apply a hypothesis test to determine whether a first- or second-order Markov chain best characterizes the defect lifecycle. Our results indicate that a first-order Markov chain was sufficient to describe the data considered and that the Markovian approach achieves modest improvements in predictive accuracy, suggesting that the simpler distributional approach may be sufficient to characterize the software defect resolution process during test. The practical inferences of such models include an estimate of the time required to discover and remove all defects. |
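As a minimal illustration of the defect-discovery side (a sketch, not the paper's semi-Markov model), a Goel-Okumoto-type growth curve m(t) = a(1 - exp(-bt)) can be fit to cumulative defect counts with base R's nls(); the data below are simulated stand-ins, not the NASA program data.

```r
# Fit a Goel-Okumoto mean value function to simulated cumulative defects.
set.seed(7)
t <- 1:40
cum_defects <- round(100 * (1 - exp(-0.08 * t)) + rnorm(40, sd = 2))

fit <- nls(cum_defects ~ a * (1 - exp(-b * t)),
           start = list(a = max(cum_defects), b = 0.05))
coef(fit)   # a: eventual defect total; b: discovery rate
```

The fitted `a` gives an estimate of the total defect content, from which the time to discover (and, with a resolution model, remove) the remaining defects can be projected.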
Lance Fiondella Associate Professor University of Massachusetts (bio)
Lance Fiondella is an associate professor of Electrical and Computer Engineering at the University of Massachusetts Dartmouth. He received his PhD (2012) in Computer Science and Engineering from the University of Connecticut. Dr. Fiondella’s papers have received eleven conference paper awards, including six with his students. His software and system reliability and security research has been funded by the DHS, NASA, Army Research Laboratory, Naval Air Warfare Center, and National Science Foundation, including a CAREER Award. |
Webinar | Session Recording |
Recording | 2020 |
Breakout Computing Statistical Tolerance Regions Using the R Package ‘tolerance’ (Abstract)
Statistical tolerance intervals of the form (1−α, P) provide bounds to capture at least a specified proportion P of the sampled population with a given confidence level 1−α. The quantity P is called the content of the tolerance interval and the confidence level 1−α reflects the sampling variability. Statistical tolerance intervals are ubiquitous in regulatory documents, especially regarding design verification and process validation. Examples of such regulations are those published by the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), the International Atomic Energy Agency (IAEA), and the standard 16269-6 of the International Organization for Standardization (ISO). Research and development in the area of statistical tolerance intervals has undoubtedly been guided by the needs and demands of industry experts. Some of the broad applications of tolerance intervals include their use in quality control of drug products, setting process validation acceptance criteria, establishing sample sizes for process validation, assessing biosimilarity, and establishing statistically-based design limits. While tolerance intervals are available for numerous parametric distributions, procedures are also available for regression models, mixed-effects models, and multivariate settings (i.e., tolerance regions). Alternatively, nonparametric procedures can be employed when assumptions of a particular parametric model are not met. Tools for computing such tolerance intervals and regions are a necessity for researchers and practitioners alike. This was the motivation for designing the R package ‘tolerance,’ which not only has the capability of computing a wide range of tolerance intervals and regions for both standard and non-standard settings, but also includes some supplementary visualization tools. This session will provide a high-level introduction to the ‘tolerance’ package and its many features. Relevant data examples will be integrated with the computing demonstration, and specifically designed to engage researchers and practitioners from industry and government. A recently-launched Shiny app corresponding to the package will also be highlighted. |
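As a minimal sketch of the package in action (simulated data; the parameter names follow the (1−α, P) notation above), a two-sided normal tolerance interval takes one call:

```r
# Two-sided (1 - alpha, P) normal tolerance interval with the
# 'tolerance' package described in the talk.
library(tolerance)

set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)

# 95% confidence of capturing at least 90% of the sampled population
normtol.int(x, alpha = 0.05, P = 0.90, side = 2)
```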
Derek Young Associate Professor of Statistics University of Kentucky (bio)
Derek Young received his PhD in Statistics from Penn State University in 2007, where his research focused on computational aspects of novel finite mixture models. He subsequently worked as a Senior Statistician for the Naval Nuclear Propulsion Program (Bettis Lab) for 3.5 years and then as a Research Mathematical Statistician for the US Census Bureau for 3 years. He then joined the faculty of the Department of Statistics at the University of Kentucky in the fall of 2014, where he is currently a tenured Associate Professor. While at the Bettis Lab, he engaged with engineers and nuclear regulators, often regarding the calculation of tolerance regions. While at the Census Bureau, he wrote several methodological and computational papers for applied survey data analysis, many as the sole author. Since being at the University of Kentucky, he has further progressed his research agenda in finite mixture modeling, zero-inflated modeling, and tolerance regions. He also has extensive teaching experience spanning numerous undergraduate and graduate Statistics courses, as well as professional development presentations in Statistics. |
Breakout | Session Recording |
Recording | 2022 |
Breakout Comparison of Methods for Testing Uniformity to Support the Validation of Simulation Models used for Live-Fire Testing (Abstract)
Goodness-of-fit (GOF) testing is used in many applications, including statistical hypothesis testing to determine whether a set of data comes from a hypothesized distribution. In addition, combined probability tests are extensively used in meta-analysis to combine results from several independent tests to assess an overall null hypothesis. This paper summarizes a study conducted to determine which GOF and/or combined probability test(s) can be used to determine whether a set of data with relatively small sample size comes from the standard uniform distribution, U(0,1). The power against different alternative hypotheses of several GOF tests and combined probability methods was examined. The GOF methods included: Anderson-Darling, Chi-Square, Kolmogorov-Smirnov, Cramér-von Mises, Neyman-Barton, Dudewicz-van der Meulen, Sherman, Quesenberry-Miller, Frosini, and Hegazy-Green; the combined probability test methods included: Fisher’s Combined Probability Test, Mean Z, Mean P, Maximum P, Minimum P, Logit P, and Sum Z. While no one method was determined to provide the best power in all situations, several useful methods to support model validation were identified. |
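As a minimal sketch of two of the methods compared (on simulated stand-in data): a one-sample Kolmogorov-Smirnov test against U(0,1), and Fisher's Combined Probability Test, which pools m independent p-values via -2 Σ log(p), referred to a chi-squared distribution with 2m degrees of freedom.

```r
# KS goodness-of-fit against U(0,1) and Fisher's combined probability test.
set.seed(2)
p <- runif(15)                         # stand-in p-values / U(0,1) sample

ks.test(p, "punif")                    # GOF test against the standard uniform

fisher_stat <- -2 * sum(log(p))        # Fisher's combined statistic
pchisq(fisher_stat, df = 2 * length(p), lower.tail = FALSE)  # combined p-value
```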
Shannon Shelburne | Breakout |
| 2019 |
|
Breakout Comparing M&S Output to Live Test Data: A Missile System Case Study (Abstract)
In the operational testing of DoD weapons systems, modeling and simulation (M&S) is often used to supplement live test data in order to support a more complete and rigorous evaluation. Before the output of the M&S is included in reports to decision makers, it must first be thoroughly verified and validated to show that it adequately represents the real world for the purposes of the intended use. Part of the validation process should include a statistical comparison of live data to M&S output. This presentation includes an example of one such validation analysis for a tactical missile system. In this case, the goal is to validate a lethality model that predicts the likelihood of destroying a particular enemy target. Using design of experiments, along with basic analysis techniques such as the Kolmogorov-Smirnov test and Poisson regression, we can explore differences between the M&S and live data across multiple operational conditions and quantify the associated uncertainties. |
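As a minimal sketch of the two techniques named in the abstract, on hypothetical stand-in data (continuous miss distances for the distributional comparison, engagement counts for the Poisson regression):

```r
# Compare live data to M&S output: KS test plus Poisson regression.
set.seed(3)
live_miss <- rnorm(30,  mean = 5.0, sd = 2)   # live-test miss distances (m)
sim_miss  <- rnorm(200, mean = 5.4, sd = 2)   # M&S-generated miss distances

ks.test(live_miss, sim_miss)                  # distributional comparison

d <- data.frame(kills  = c(rpois(30, 2.0), rpois(200, 2.3)),
                source = rep(c("live", "sim"), c(30, 200)))
summary(glm(kills ~ source, family = poisson, data = d))  # live/sim effect
```

In a real validation, the `source` term would be crossed with operational conditions from the test design so that live/sim differences can be examined across the factor space, with the model supplying the uncertainty quantification.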
Kelly Avery Research Staff Member IDA |
Breakout | Materials | 2018 |
|
Breakout Comparing Experimental Designs (Abstract)
This tutorial will show how to compare and choose experimental designs based on multiple criteria. Answers to questions like “Which Design of Experiments (DOE) is better/best?” will be found by looking at both data and graphics that show the relative performance of the designs on multiple criteria, including: power of the designs for different model terms; how well the designs minimize predictive variance across the design space; to what level model terms are confounded or correlated; and the relative efficiencies that measure how well coefficients are estimated or how well predictive variance is minimized. Many case studies of screening, response surface, and screening augmented to response surface designs will be compared. Designs with both continuous and categorical factors, and with constraints on the experimental region, will also be compared. |
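As a minimal sketch of two such criteria computed by hand (a base-R illustration, not the tutorial's software): D-efficiency from the information matrix and pairwise correlations of model-matrix columns, here for a 2^3 factorial main-effects model where both are ideal.

```r
# D-efficiency and term correlations for a 2^3 main-effects design.
design <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))
X <- model.matrix(~ A + B + C, design)

n <- nrow(X); p <- ncol(X)
d_eff <- 100 * det(t(X) %*% X / n)^(1 / p)  # 100 = perfectly orthogonal
d_eff

cor(X[, -1])   # confounding/correlation among model terms (all 0 here)
```

Computing the same quantities for two candidate designs puts the "which design is better?" question on a common numeric footing.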
Tom Donnelly JMP |
Breakout | Materials | 2017 |