Session Title | Speaker | Type | Recording | Materials | Year
---|---|---|---|---|---
Breakout Leveraging Data Science and Cloud Tools to Enable Continuous Reporting (Abstract)
The DoD’s challenge to provide test results at the “Speed of Relevance” has generated many new strategies to accelerate data collection, adjudication, and analysis. As a result, the Air Force Operational Test and Evaluation Center (AFOTEC), in conjunction with the Air Force Chief Data Office’s Visible, Accessible, Understandable, Linked and Trusted Data Platform (VAULT), is developing a Survey Application. This new cloud-based application will be deployable on any AFNET-connected computer or tablet and merges a variety of tools for collection, storage, analytics, and decision-making into one easy-to-use platform. By placing cloud-computing power in the hands of operators and testers, authorized users can view report-quality visuals and statistical analyses the moment a survey is submitted. Because the data are stored in the cloud, demanding computations such as machine learning are run at the data source to provide even more insight into both quantitative and qualitative metrics. The T-7A Red Hawk will be the first operational test (OT) program to use the Survey Application. Over 1,000 flying and simulator test points have been loaded into the application, with many more coming from developmental test partners. Survey Application development will continue as USAF testing commences. Future efforts will focus on making the Survey Application configurable to other research and test programs to enhance their analytic and reporting capabilities. |
Timothy Dawson, Lead Mobility Test Operations Analyst, AFOTEC Detachment 5
First Lieutenant Timothy Dawson is an operational test analyst assigned to the Air Force Operational Test and Evaluation Center, Detachment 5, at Edwards AFB, Calif. The lieutenant serves as the lead AFOTEC Mobility Test Operations analyst, splitting his work among the T-7A Red Hawk high-performance trainer, the KC-46A Pegasus tanker, and the VC-25B presidential transport. Lieutenant Dawson also serves alongside the 416th Flight Test Squadron as a flight test engineer on the T-38C Talon. Lieutenant Dawson, originally from Olympia, Wash., received his commission as a second lieutenant upon completing ROTC at the University of California, Berkeley in 2019. He served as a student pilot at Vance AFB, Okla., leading data analysis and software development projects before arriving at his current duty location at Edwards. |
Breakout | Session Recording | 2022 |
Breakout Statistical Process Control and Capability Study on the Water Content Measurements in NASA Glenn’s Icing Research Tunnel (Abstract)
The Icing Research Tunnel (IRT) at NASA Glenn Research Center follows the recommended practice for icing tunnel calibration outlined in SAE’s ARP5905 document. The calibration team has followed a schedule of a full calibration every five years, with a check calibration every six months thereafter. The liquid water content of the IRT has remained stable within the specification presented to customers: variation within +/- 10% of the calibrated target measurement. With recent measurement and instrumentation errors, a more thorough assessment of error sources was desired. By constructing statistical process control charts, the team gained the ability to determine how the instrument varies in the short, mid, and long term. The control charts offer a view of instrument error, facility error, or installation changes. A shift from target to mean baseline was discovered, leading to a study of the overall capability indices of the liquid water content measuring instrument to perform within the specifications defined for the IRT. This presentation describes data processing procedures for the Multi-Element Sensor in the IRT, including collision efficiency corrections, canonical correlation analysis, Chauvenet’s criterion for rejection of data, distribution checks of the data, and mean, median, and mode for construction of control charts. Further data are presented to describe the repeatability of the IRT with the Multi-Element Sensor and the ability to maintain a stable process over the defined calibration schedule. (A brief illustrative sketch of the control-limit and capability computation follows this entry.) |
Emily Timko | Breakout | 2019 |
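A minimal sketch of the control-limit and capability computations described above, assuming an individuals/moving-range chart, notional liquid water content (LWC) readings, and the +/-10% specification from the abstract; the data and chart choice are illustrative assumptions, not the IRT team’s actual procedure.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical LWC readings (g/m^3) from repeated check calibrations.
rng = np.random.default_rng(1)
lwc = rng.normal(loc=1.00, scale=0.03, size=30)

target = 1.00                          # calibrated target LWC
lsl, usl = 0.9 * target, 1.1 * target  # +/-10% spec limits from the abstract

# Chauvenet's criterion: reject a point if the expected number of
# equally extreme values in a sample of size n is less than 0.5.
n = len(lwc)
z = np.abs(lwc - lwc.mean()) / lwc.std(ddof=1)
lwc = lwc[n * 2 * norm.sf(z) >= 0.5]

# Individuals-chart limits via the average moving range (d2 = 1.128 for n=2).
mr = np.abs(np.diff(lwc))
sigma_hat = mr.mean() / 1.128
center = lwc.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

# Capability indices against the +/-10% spec.
cp = (usl - lsl) / (6 * sigma_hat)
cpk = min(usl - center, center - lsl) / (3 * sigma_hat)
print(f"CL={center:.3f} UCL={ucl:.3f} LCL={lcl:.3f} Cp={cp:.2f} Cpk={cpk:.2f}")
```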
Breakout AI & ML in Complex Environment (Abstract)
The U.S. Army Research Laboratory’s (ARL) Essential Research Program (ERP) on Artificial Intelligence & Machine Learning (AI & ML) seeks to research, develop and employ a suite of AI-inspired and ML techniques and systems to assist teams of soldiers and autonomous agents in dynamic, uncertain, complex operational conditions. Systems will be robust, scalable, and capable of learning and acting with varying levels of autonomy, to become integral components of networked sensors, knowledge bases, autonomous agents, and human teams. Three specific research gaps will be examined: (i) Learning in Complex Data Environments, (ii) Resource-constrained AI Processing at the Point-of-Need and (iii) Generalizable & Predictable AI. The talk will highlight ARL’s internal research efforts over the next 3-5 years that are connected, cumulative and converging to produce tactically-sensible AI-enabled capabilities for decision making at the tactical edge, specifically addressing topics in: (1) adversarial distributed machine learning, (2) robust inference & machine learning over heterogeneous sources, (3) adversarial reasoning integrating learned information, (4) adaptive online learning and (5) resource-constrained adaptive computing. The talk will also highlight collaborative research opportunities in AI & ML via ARL’s Army AI Innovation Institute (A2I2) which will harness the distributed research enterprise via the ARL Open Campus & Regional Campus initiatives. |
Tien Pham | Breakout | 2019 |
Breakout Using Sensor Stream Data as Both an Input and Output in a Functional Data Analysis (Abstract)
A case study will be presented in which patients wearing continuous glycemic monitoring systems provide sensor-stream data of their glucose levels before and after consuming one of five different types of snacks. The goal is to better predict a new patient’s glycemic-response-over-time trace after being given a particular type of snack. Functional Data Analysis (FDA) is used to extract eigenfunctions that capture the longitudinal shape information of the traces and principal component scores that capture the patient-to-patient variation. FDA is used twice: first on the “before” baseline glycemic-response-over-time traces, then in a separate analysis of the snack-induced “after” response traces. The before FPC scores and the type of snack are then used to model the after FPC scores. This final FDA model can then predict the glycemic response of new patients given a particular snack and their existing baseline response history. Although the case study involves medical sensor data, the methodology would work for any sensor stream where an event perturbs the system and thus affects the shape of the post-event stream. (A short FPCA sketch follows this entry.) |
Thomas A. Donnelly, Principal Systems Engineer, JMP Statistical Discovery LLC
Tom Donnelly works as a Systems Engineer for JMP Statistical Discovery LLC supporting users of JMP in the Defense and Aerospace sector. He has been actively using and teaching Design of Experiments (DOE) methods for the past 38 years to develop and optimize products, processes, and technologies. Donnelly joined JMP in 2008 after working as an analyst for the Modeling, Simulation & Analysis Branch of the US Army’s Edgewood Chemical Biological Center (now CCDC CBC). There, he used DOE to develop, test, and evaluate technologies for detection, protection, and decontamination of chemical and biological agents. Prior to working for the Army, Tom was a partner in the first DOE software company for 20 years where he taught over 300 industrial short courses to engineers and scientists. Tom received his PhD in Physics from the University of Delaware. |
Breakout | Session Recording | 2022 |
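A hedged sketch of the two-stage FPCA workflow described above, using SVD-based functional principal components on simulated glucose traces; the data generator, basis choice, and least-squares score model are illustrative assumptions, not the presenter’s JMP implementation.

```python
import numpy as np

# Hypothetical glucose traces: n patients x t time points, before and after a snack.
rng = np.random.default_rng(7)
n, t = 40, 60
before = rng.normal(5.5, 0.4, (n, 1)) + rng.normal(0, 0.2, (n, t))
snack = rng.integers(0, 5, n)                # 5 snack types, coded 0-4
after = before + 0.5 * (snack[:, None] + 1) * np.sin(np.linspace(0, np.pi, t))

def fpca(traces, k=3):
    """Mean trace, first k eigenfunctions, and FPC scores via SVD."""
    mean = traces.mean(axis=0)
    u, s, vt = np.linalg.svd(traces - mean, full_matrices=False)
    return mean, vt[:k], u[:, :k] * s[:k]    # scores capture patient variation

mean_b, ef_b, scores_b = fpca(before)        # "before" baseline analysis
mean_a, ef_a, scores_a = fpca(after)         # separate "after" analysis

# Model the "after" FPC scores from the "before" scores plus snack indicators.
X = np.column_stack([np.ones(n), scores_b, np.eye(5)[snack][:, 1:]])
beta, *_ = np.linalg.lstsq(X, scores_a, rcond=None)

# Predicted post-snack trace for the first patient from its predicted scores.
pred_trace = mean_a + (X[:1] @ beta) @ ef_a
```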
Breakout Air Force Human Systems Integration Program (Abstract)
The Air Force (AF) Human Systems Integration (HSI) program is led by the 711th Human Performance Wing’s Human Systems Integration Directorate (711 HPW/HP). 711 HPW HP provides direct support to system program offices and AF Major Commands (MAJCOMs) across the acquisition lifecycle from requirements development to fielding and sustainment in addition to providing home office support. With an ever-increasing demand signal for support, HSI practitioners within 711 HPW/HP assess HSI domain areas for human-centered risks and strive to ensure systems are designed and developed to safely, effectively, and affordably integrate with human capabilities and limitations. In addition to system program offices and MAJCOMs, 711 HPW/HP provides HSI support to AF Centers (e.g., AF Sustainment Center, AF Test Center), the AF Medical Service, and special cases as needed. The AF Global Strike Command (AFGSC) is the largest MAJCOM with several Programs of Record (POR), such as the B-1, B-2, and B-52 bombers, Intercontinental Ballistic Missiles (ICBM), Ground-Based Strategic Deterrent (GBSD), Airborne Launch Control System (ALCS), and other support programs/vehicles like the UH-1N. Mr. Anthony Thomas (711 HPW/HP), the AFGSC HSI representative, will discuss how 711 HPW/HP supports these programs at the MAJCOM headquarters level and in the system program offices. |
Anthony Thomas | Breakout | 2019 |
Breakout Carrier Reliability Model Validation (Abstract)
Model Validation for Simulations of CVN-78 Sortie Generation: As part of the test planning process, IDA is examining flight operations on the Navy’s newest carrier, CVN-78. The analysis uses a model, the IDA Virtual Carrier Model (IVCM), to examine sortie generation rates and whether aircraft can complete missions on time. Before using IVCM, it must be validated. However, CVN-78 has not been delivered to the Navy, and data from actual operations are not available to validate the model. Consequently, we will validate IVCM by comparing it to another model. This is a reasonable approach when a model is used in general analyses such as test planning, but it is not acceptable when a model is used in the assessment of system effectiveness and suitability. The presentation examines the use of various statistical tools – the Wilcoxon rank-sum test, the Kolmogorov-Smirnov test, and lognormal regression – to examine whether the two models provide similar results and to quantify the magnitude of any differences. From the analysis, IDA concluded that locations and distribution shapes are consistent, and that the differences between the models are less than 15 percent, which is acceptable for test planning. (An illustrative two-sample comparison sketch follows this entry.) |
Dean Thomas IDA |
Breakout | 2017 |
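A short illustration, under assumed lognormal turnaround-time data, of the three comparisons the abstract names; the data are simulated stand-ins for the two models’ outputs.

```python
import numpy as np
from scipy import stats

# Simulated sortie turnaround times (hours) standing in for two models' outputs.
rng = np.random.default_rng(3)
ivcm = rng.lognormal(mean=1.0, sigma=0.3, size=200)
other = rng.lognormal(mean=1.05, sigma=0.3, size=200)

w, p_w = stats.ranksums(ivcm, other)     # location: Wilcoxon rank-sum test
d, p_ks = stats.ks_2samp(ivcm, other)    # distribution shape: two-sample K-S

# Lognormal regression: on the log scale the group coefficient is the
# log ratio of medians, giving a percent difference between the models.
y = np.log(np.concatenate([ivcm, other]))
g = np.concatenate([np.zeros(ivcm.size), np.ones(other.size)])
slope, *_ = stats.linregress(g, y)
print(f"rank-sum p={p_w:.3f}, KS p={p_ks:.3f}, "
      f"median difference={100 * (np.exp(slope) - 1):.1f}%")
```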
Breakout Sequential Experimentation for a Binary Response – The Break Separation Method (Abstract)
Binary response experiments are common in epidemiology and biostatistics, as well as in military applications. The Up and Down method, Langlie’s method, Neyer’s method, the K-in-a-Row method, and the 3-Phase Optimal Design are used for sequential experimental design when there is a single continuous variable and a binary response. In this talk, we will discuss a new sequential experimental design approach called the Break Separation Method (BSM). BSM provides an algorithm for determining sequential experimental trials that are used to find a median quantile and fit a logistic regression model via maximum likelihood estimation. BSM requires a small sample size and is designed to compute the median quantile efficiently. (A sketch of the logistic-fit step follows this entry.) |
Darsh Thakkar RIT-S |
Breakout | Materials | 2017 |
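BSM itself is not reproduced here; as a hedged sketch of its final step, the code below fits a logistic model to accumulated sequential (stimulus, response) pairs by maximum likelihood and extracts the median quantile. The trial data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Accumulated sequential trials: stimulus level x_i and binary response y_i.
x = np.array([1.0, 2.0, 3.0, 2.5, 2.2, 2.8, 2.4, 2.6, 2.3, 2.7])
y = np.array([0,   1,   1,   1,   0,   1,   0,   1,   0,   1])

def nll(params):
    """Negative log-likelihood of a logistic dose-response model."""
    b0, b1 = params
    p = expit(b0 + b1 * x)
    eps = 1e-12                      # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

fit = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x
print(f"estimated median quantile: {-b0 / b1:.2f}")  # 50% response level
```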
Breakout A Decision-Theoretic Framework for Adaptive Simulation Experiments (Abstract)
We describe a model-based framework for increasing the effectiveness of simulation experiments in the presence of uncertainty. Unlike conventionally designed simulation experiments, it adaptively chooses where to sample, based on the value of the information obtained. A Bayesian perspective is taken to formulate and update the framework’s four models. A simulation experiment is conducted to answer some question. In order to define precisely how informative a run is for answering the question, the answer must be defined as a random variable. This random variable is called a query and has the general form p(theta | y), where theta is the query parameter and y is the available data. The four models employed in the framework are briefly described below:

1. The continuous correlated beta process model (CCBP) estimates the proportions of successes and failures using beta-distributed uncertainty at every point in the input space. It combines results using an exponentially decaying correlation function. The output of the CCBP is used to estimate the value of a candidate run.
2. The mutual information model quantifies the uncertainty in one random variable that is reduced by observing the other. The model quantifies the mutual information between any candidate run and the query, thereby scoring the value of running each candidate.
3. The cost model estimates how long future runs will take, based upon past runs, using, e.g., a generalized linear model. A given simulation might have multiple fidelity options that require different run times, and it may be desirable to balance information against the cost of a mixture of runs across these multi-fidelity options.
4. The grid state model, together with the mutual information model, is used to select the next collection of runs for optimal information per cost, accounting for current grid load.

The framework has been applied to several use cases, including model verification and validation with uncertainty quantification (VVUQ). Given a mathematically precise query, an 80 percent reduction in total runs has been observed. (A simplified CCBP-style sketch follows this entry.) |
Terril Hurst, Senior Engineering Fellow, Raytheon Technologies
Terril N Hurst is a Senior Engineering Fellow at Raytheon Technologies, where he works to ensure that model-based engineering is based upon credible models and protocols that allow uncertainty quantification. Prior to coming to Raytheon in 2005, Dr. Hurst worked at Hewlett-Packard Laboratories, including a post-doctoral appointment in Stanford University’s Logic-Based Artificial Intelligence Group under the leadership of Nils Nilsson. |
Breakout | Session Recording | 2022 |
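A simplified, assumption-laden sketch of a CCBP-style estimate on a one-dimensional input space: observed pass/fail runs contribute beta pseudo-counts to neighboring points through an exponentially decaying correlation, and the next run is chosen greedily where posterior variance is largest (a stand-in for the framework’s mutual-information scoring). Parameters and data are illustrative.

```python
import numpy as np

def ccbp_estimate(x_obs, y_obs, x_grid, length_scale=0.5, a0=1.0, b0=1.0):
    """Beta-process-style success estimate with decaying correlation."""
    w = np.exp(-np.abs(x_grid[:, None] - x_obs[None, :]) / length_scale)
    a = a0 + w @ y_obs                            # weighted successes
    b = b0 + w @ (1.0 - y_obs)                    # weighted failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))  # beta posterior variance
    return mean, var

x_obs = np.array([0.2, 0.4, 0.7])                 # completed runs
y_obs = np.array([1.0, 0.0, 1.0])                 # pass/fail outcomes
grid = np.linspace(0.0, 1.0, 101)                 # candidate run locations

mean, var = ccbp_estimate(x_obs, y_obs, grid)
next_run = grid[np.argmax(var)]                   # most informative candidate
print(f"sample next at x={next_run:.2f}")
```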
Breakout Reliability Fundamentals and Analysis Lessons Learned (Abstract)
Although reliability analysis is a part of Operational Test and Evaluation, it is uncommon for analysts to have a background in reliability theory or experience applying it. This presentation highlights lessons learned from reliability analyses conducted on several AFOTEC test programs. Topics include issues related to censored data, limitations of and alternatives to the exponential distribution, and failure-rate analysis using test data. (A censored-data fitting sketch follows this entry.) |
Dan Telford AFOTEC |
Breakout | Materials | 2018 |
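One of the abstract’s themes, illustrated: fitting a Weibull model to right-censored test data by maximum likelihood and checking whether the exponential assumption (shape k = 1) holds. Data and starting values are notional.

```python
import numpy as np
from scipy.optimize import minimize

# Notional times to failure (hours); event=0 marks right-censored units
# still operating when the test ended.
t = np.array([150., 320., 480., 560., 700., 700., 700.])
event = np.array([1, 1, 1, 1, 1, 0, 0])

def weibull_nll(params):
    """Negative log-likelihood for right-censored Weibull data."""
    k, lam = np.exp(params)                  # log-parameterized for positivity
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    logS = -(t / lam) ** k                   # log-survival for censored units
    return -np.sum(event * logf + (1 - event) * logS)

fit = minimize(weibull_nll, x0=[0.0, 6.0], method="Nelder-Mead")
k, lam = np.exp(fit.x)
# k near 1 supports an exponential (constant failure rate) model;
# k > 1 indicates wear-out behavior that the exponential cannot capture.
print(f"shape k={k:.2f}, scale={lam:.0f} h")
```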
Contributed Infrastructure Lifetimes (Abstract)
Infrastructure refers to the structures, utilities, and interconnected roadways that support the work carried out at a given facility. In the case of the Lawrence Livermore National Laboratory, infrastructure is considered exclusive of scientific apparatus and safety and security systems. LLNL inherited its infrastructure management policy from the University of California, which managed the site during LLNL’s first five decades. This policy is quite different from that used in commercial property management: commercial practice weighs reliability over cost by replacing infrastructure at industry-standard lifetimes, whereas LLNL practice weighs overall lifecycle cost, seeking to mitigate reliability issues through inspection. To formalize this risk management policy, a careful statistical study was undertaken using 20 years of infrastructure replacement data. In this study, care was taken to adjust for left truncation as well as right censoring. Fifty-seven distinct infrastructure class data sets were fitted by maximum likelihood to the generalized gamma distribution. This distribution is useful because it produces a weighted blending of discrete failure (Weibull model) and complex system failure (lognormal model). These parametric fits then yielded median lifetimes and conditional probabilities of failure, from which bounds on budget costs could be computed as expected values. This has provided a scientific basis for rational budget management as well as aiding operations by prioritizing inspection, repair, and replacement activities. (A sketch of the conditional-probability computation follows this entry.) |
Erika Taketa Lawrence Livermore National Laboratory |
Contributed | Materials | 2018 |
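A sketch of how a fitted generalized gamma distribution yields the quantities the abstract mentions, assuming illustrative parameter values rather than LLNL’s fitted ones.

```python
from scipy.stats import gengamma

# Illustrative generalized-gamma parameters for one infrastructure class.
a, c, scale = 2.0, 1.2, 25.0
dist = gengamma(a, c, scale=scale)

median_life = dist.median()                       # median lifetime (years)

# Conditional probability that an asset surviving to 30 years fails
# within the next 5; this is the input to expected-value budget bounds.
age, horizon = 30.0, 5.0
p_cond = (dist.cdf(age + horizon) - dist.cdf(age)) / dist.sf(age)

expected_replacements = 120 * p_cond              # fleet of 120 such assets
print(f"median={median_life:.1f} y, P(fail | survived {age:.0f} y)={p_cond:.3f}")
```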
Profile Monitoring via Eigenvector Perturbation (Abstract)
Control charts are often used to monitor the quality characteristics of a process over time to ensure undesirable behavior is quickly detected. The escalating complexity of the processes we wish to monitor spurs the need for more flexible control charts such as those used in profile monitoring. Additionally, designing a control chart that has an acceptable false alarm rate for a practitioner is a common challenge. Alarm fatigue can occur if the sampling rate is high (say, once a millisecond) and the control chart is calibrated to an average in-control run length (ARL0) of 200 or 370, as is often done in the literature. Because alarm fatigue is not just an annoyance but can be detrimental to product quality, control chart designers should seek to minimize the false alarm rate. Unfortunately, reducing the false alarm rate typically comes at the cost of detection delay, or average out-of-control run length (ARL1). Motivated by recent work on eigenvector perturbation theory, we develop a computationally fast control chart, called the Eigenvector Perturbation Control Chart, for nonparametric profile monitoring. The chart monitors the l_2 perturbation of the leading eigenvector of a correlation matrix and requires only a sample of known in-control profiles to determine control limits. Through a simulation study, we demonstrate that it outperforms its competition by achieving an ARL1 close to or equal to 1 even when the control limits yield an ARL0 on the order of 10^6. Additionally, nonzero false alarm rates with a change point after 10^4 in-control observations were observed only in scenarios that are either pathological or truly difficult for a correlation-based monitoring scheme. (A sketch of the monitoring statistic follows this entry.) |
Takayuki Iguchi, PhD Student, Florida State University
Takayuki Iguchi is a Captain in the US Air Force and is currently a PhD student under the direction of Dr. Eric Chicken at Florida State University. |
Session Recording | 2022 |
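A rough sketch of the monitoring statistic described above: the l_2 distance between the leading eigenvector of a window’s correlation matrix and the in-control eigenvector, with a crude empirical control limit. The data, window size, and limit here are illustrative; the paper’s actual chart design is not reproduced.

```python
import numpy as np

# Phase I: leading eigenvector of the in-control correlation matrix,
# estimated from a sample of known in-control profiles.
rng = np.random.default_rng(11)
in_control = rng.normal(size=(500, 20)) @ np.diag(np.linspace(1.5, 0.5, 20))
r0 = np.corrcoef(in_control, rowvar=False)
v0 = np.linalg.eigh(r0)[1][:, -1]          # eigh sorts eigenvalues ascending

def statistic(window):
    """l2 perturbation of the leading eigenvector for a profile window."""
    v = np.linalg.eigh(np.corrcoef(window, rowvar=False))[1][:, -1]
    v = v if v @ v0 >= 0 else -v           # resolve eigenvector sign ambiguity
    return np.linalg.norm(v - v0)

w = 50                                     # profiles per monitoring window
base = [statistic(in_control[i:i + w]) for i in range(0, 450, w)]
limit = 1.1 * max(base)                    # crude empirical control limit

new_window = rng.normal(size=(w, 20))      # incoming profiles to monitor
print("alarm" if statistic(new_window) > limit else "in control")
```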
Breakout Reliability Growth Modeling (Abstract)
Several optimization models are described for allocating resources among the different testing activities in a system’s reliability growth program. These models assume the availability of an underlying reliability growth model for the system and capture the tradeoffs associated with focusing testing resources at various levels (e.g., system, subsystem, component) and/or dividing resources within a given level. To demonstrate the insights generated by solving the models, we apply them to an example series-parallel system in which reliability growth is assumed to follow the Crow/AMSAA reliability growth model. We then demonstrate how the optimization models can be extended to incorporate uncertainty in the Crow/AMSAA parameters. (A Crow/AMSAA fitting sketch follows this entry.) |
Kelly Sullivan University of Arkansas |
Breakout | Materials | 2017 |
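For readers unfamiliar with the underlying growth model, a small sketch of a Crow/AMSAA (power-law NHPP) fit from cumulative failure times, using illustrative data and the standard failure-truncated maximum likelihood estimates.

```python
import numpy as np

# Illustrative cumulative failure times (hours) on test, failure-truncated.
t = np.array([4., 11., 25., 46., 70., 118., 160., 220., 285., 410.])
n, T = len(t), t[-1]

beta_hat = n / np.sum(np.log(T / t[:-1]))   # shape (growth) MLE; last term is 0
lam_hat = n / T ** beta_hat                 # scale MLE

# beta < 1 indicates reliability growth; instantaneous MTBF at time T:
mtbf = 1.0 / (lam_hat * beta_hat * T ** (beta_hat - 1))
print(f"beta={beta_hat:.2f}, lambda={lam_hat:.4f}, MTBF(T)={mtbf:.0f} h")
```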
Webinar The Role of Uncertainty Quantification in Machine Learning (Abstract)
Uncertainty is an inherent, yet often under-appreciated, component of machine learning and statistical modeling. Data-driven modeling often begins with noisy data from error-prone sensors collected under conditions for which no ground-truth can be ascertained. Analysis then continues with modeling techniques that rely on a myriad of design decisions and tunable parameters. The resulting models often provide demonstrably good performance, yet they illustrate just one of many plausible representations of the data – each of which may make somewhat different predictions on new data. This talk provides an overview of recent, application-driven research at Sandia Labs that considers methods for (1) estimating the uncertainty in the predictions made by machine learning and statistical models, and (2) using the uncertainty information to improve both the model and downstream decision making. We begin by clarifying the data-driven uncertainty estimation task and identifying sources of uncertainty in machine learning. We then present results from applications in both supervised and unsupervised settings. Finally, we conclude with a summary of lessons learned and critical directions for future work. |
David Stracuzzi, Research Scientist, Sandia National Laboratories |
Webinar | 2020 |
Contributed Test Planning for Observational Studies using Poisson Process Modeling (Abstract)
Operational Test (OT) is occasionally conducted after a system is already fielded. Unlike a traditional test based on Design of Experiments (DOE) principles, it is often not possible to vary the levels of the factors of interest; instead, the test is observational in nature. Test planning for observational studies involves choosing where, when, and how long to evaluate a system in order to observe the possible combinations of factor levels that define the battlespace. This presentation discusses a test-planning method that uses Poisson process modeling to estimate the length of time required to observe factor-level combinations in the operational environment. (A sketch of the timing computation follows this entry.) |
Brian Stone AFOTEC |
Contributed | Materials | 2018 |
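A small sketch of the timing computation such a method involves: treating each factor-level combination as an independent Poisson stream with an assumed encounter rate, then finding the shortest observation period that sees every combination with high probability. The rates and the 90% target are illustrative.

```python
import numpy as np

# Assumed encounter rates (per week) for each factor-level combination;
# the rarest combination drives the required test length.
rates = np.array([2.0, 1.2, 0.8, 0.5, 0.15])

def coverage(T, rates):
    """P(every combination observed at least once in T weeks)."""
    return np.prod(1.0 - np.exp(-rates * T))  # independent Poisson streams

target = 0.90
T = 0.1
while coverage(T, rates) < target:
    T += 0.1
print(f"plan about {T:.1f} weeks of observation")
```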
Short Course Multivariate Data Analysis (Abstract)
In this one-day workshop, we will explore five techniques that are commonly used to model human behavior: principal component analysis, factor analysis, cluster analysis, mixture modeling, and multidimensional scaling. Brief discussions of the theory of each method will be provided, along with examples showing how the techniques work and how the results are interpreted in practice. Accompanying R code will be provided so attendees are able to implement these methods on their own. (A PCA sketch follows this entry.) |
Doug Steinley University of Missouri |
Short Course | Materials | 2019 |
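The workshop distributes R code; as a language-neutral taste of the first technique, here is a short Python sketch of principal component analysis via the SVD on simulated data.

```python
import numpy as np

# Simulated behavioral data: 6 observed measures driven by 2 latent traits.
rng = np.random.default_rng(5)
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 6)) + rng.normal(0, 0.3, (100, 6))

Z = (X - X.mean(0)) / X.std(0, ddof=1)       # standardize each column
u, s, vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)              # variance explained per component
scores = Z @ vt.T                            # component scores per subject
loadings = vt.T * s / np.sqrt(len(Z) - 1)    # variable-component correlations
print("proportion of variance:", np.round(explained, 2))
```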
Webinar KC-46A Adaptive Relevant Testing Strategies to Enable Incremental Evaluation (Abstract)
The DoD’s challenge to provide capability at the “Speed of Relevance” has generated many new strategies to adapt to rapid development and acquisition. As a result, Operational Test Agencies (OTA) have had to adjust their test processes to accommodate rapid, but incremental delivery of capability to the warfighter. The Air Force Operational Test and Evaluation Center (AFOTEC) developed the Adaptive Relevant Testing (ART) concept to answer the challenge. In this session, AFOTEC Test Analysts will brief examples and lessons learned from implementing the ART principles on the KC-46A acquisition program to identify problems early and promote the delivery of individual capabilities as they are available to test. The AFOTEC goal is to accomplish these incremental tests while maintaining a rigorous statistical evaluation in a relevant and timely manner. This discussion will explain in detail how the KC-46A Initial Operational Test and Evaluation (IOT&E) was accomplished in a unique way that allowed the test team to discover, report on, and correct major system deficiencies much earlier than traditional methods. |
J. Quinn Stank, Lead KC-46 Analyst, AFOTEC
First Lieutenant J. Quinn Stank is the Lead Analyst for the Air Force Operational Test and Evaluation Center Detachment 5 Operating Location in Everett, Washington. The lieutenant serves as the advisor to the Operational Test and Evaluation team for the KC-46A. Lieutenant Stank, originally from Knoxville, Tenn., received his commission as a second lieutenant upon graduation from the United States Air Force Academy in 2016. |
Webinar | Session Recording | 2020 |
Tutorial Introduction to Survey Design (Abstract)
Surveys are a common tool for assessing user experiences with systems in various stages of development. This mini-tutorial introduces the social and cognitive processes involved in survey measurement and addresses best practices in survey design. Clarity of question wording, appropriate scale use, and methods for reducing survey-fatigue are emphasized. Attendees will learn practical tips to maximize the information gained from user surveys and should bring paper and pencils to practice writing and evaluating questions. |
Jonathan Snavely IDA |
Tutorial | Materials | 2016 |
Short Course Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data to improve the predictive accuracy of models is central to uncertainty quantification, so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics. (A Monte Carlo propagation sketch follows this entry.) |
Ralph Smith North Carolina State University |
Short Course | Materials | 2019 |
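A compact illustration of the propagation step the course describes: sampling input distributions, pushing them through a toy model via Monte Carlo to get a prediction interval, and a crude one-at-a-time variance screen. The model and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Input distributions, e.g., from a Bayesian calibration posterior.
k = rng.normal(4.0, 0.3, n)        # thermal conductivity
q = rng.normal(150.0, 10.0, n)     # heat load
h = rng.lognormal(1.0, 0.15, n)    # convective coefficient

def model(k, q, h):
    return q / h + q / k           # toy temperature-rise model

y = model(k, q, h)
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"95% prediction interval: [{lo:.1f}, {hi:.1f}]")

# Crude sensitivity screen: variance reduction when one input is fixed.
for name, y_fixed in [("k", model(k.mean(), q, h)),
                      ("q", model(k, q.mean(), h)),
                      ("h", model(k, q, h.mean()))]:
    print(f"{name}: ~{100 * (1 - y_fixed.var() / y.var()):.0f}% of variance")
```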
Short Course Uncertainty Quantification (Abstract)
We increasingly rely on mathematical and statistical models to predict phenomena ranging from nuclear power plant design to profits made in financial markets. When assessing the feasibility of these predictions, it is critical to quantify uncertainties associated with the models, inputs to the models, and data used to calibrate the models. The synthesis of statistical and mathematical techniques, which can be used to quantify input and response uncertainties for simulation codes that can take hours to days to run, comprises the evolving field of uncertainty quantification. The use of data, to improve the predictive accuracy of models, is central to uncertainty quantification so we will begin by providing an overview of how Bayesian techniques can be used to construct distributions for model inputs. We will subsequently describe the computational issues associated with propagating these distributions through complex models to construct prediction intervals for statistical quantities of interest such as expected profits or maximal reactor temperatures. Finally, we will describe the use of sensitivity analysis to isolate critical model inputs and surrogate model construction for simulation codes that are too complex for direct statistical analysis. All topics will be motivated by examples arising in engineering, biology, and economics. |
Ralph Smith North Carolina State University |
Short Course | Materials | 2018 |
Breakout Probabilistic Data Synthesis to Provide a Defensible Risk Assessment for Army Munition (Abstract)
Military-grade energetics are, by design, required to operate under extreme conditions. As such, warheads in a munition must demonstrate a high level of structural integrity in order to ensure safe and reliable operation by the warfighter. In this example, which involved an artillery munition, a systematic analytics-driven approach was executed that synthesized physical test data with probabilistic analysis, non-destructive evaluation, modeling and simulation, and comprehensive risk analysis tools in order to determine the probability of a catastrophic event. Once severity, probability of detection, and occurrence were synthesized, a model was built to determine the risk of a catastrophic event during firing, accounting for defect growth occurring as a result of rough handling. This comprehensive analysis provided a defensible, credible, and dynamic snapshot of risk while allowing for a transparent assessment of each input’s contribution to risk through sensitivity analyses. This paper will illustrate the intersection of product safety, reliability, systems-safety policy, and analytics, and highlight the impact of a holistic multidisciplinary approach. The benefits of this rigorous assessment included quantifying risk to the user, supporting effective decision-making, improving the resultant safety and reliability of the munition, and supporting triage and prioritization of future Non-Destructive Evaluation (NDE) screening efforts by identifying at-risk subpopulations. (A notional risk-synthesis sketch follows this entry.) |
Kevin Singer | Breakout | 2019 |
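A purely notional sketch of the kind of synthesis the abstract describes: a defect-size distribution, a probability-of-detection curve for NDE screening, a rough-handling growth factor, and a structural failure model combined by Monte Carlo into a per-round event probability. Every curve and constant here is invented; the actual munition analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000

flaw = rng.lognormal(-1.5, 0.6, n)                 # initial flaw size (mm)
pod = 1 / (1 + np.exp(-(flaw - 1.0) / 0.15))       # NDE detects large flaws
escaped = rng.random(n) > pod                      # flaws missed by screening

growth = rng.lognormal(0.1, 0.05, n)               # rough-handling growth
flaw_fire = flaw * growth                          # flaw size at firing

a_crit = 2.5                                       # notional critical size (mm)
p_fail = 1 / (1 + np.exp(-(flaw_fire - a_crit) / 0.1))

p_event = np.mean(escaped * p_fail)                # per-round probability
print(f"estimated P(catastrophic event) = {p_event:.2e}")
```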
Keynote Retooling Design and Development |
Chris Singer, Deputy Chief Engineer, NASA
Christopher (Chris) E. Singer is the NASA Deputy Chief Engineer, responsible for integrating engineering across the Agency’s 10 field centers. Prior to this appointment in April 2016, he served as the Engineering Director at NASA’s Marshall Space Flight Center in Huntsville, Alabama. Appointed in 2011, Mr. Singer led an organization of 1,400 civil service and 1,200 support contractor employees responsible for the design, testing, evaluation, and operation of hardware and software associated with space transportation, spacecraft systems, science instruments, and payloads under development at the Marshall Center. The Engineering Directorate also manages NASA’s Payload Operations Center at Marshall, which is the command post for scientific research activities on board the International Space Station. Mr. Singer began his NASA career in 1983 as a rocket engine specialist. In 1992, he served a one-year assignment at NASA Headquarters in Washington, DC, as senior manager for the space shuttle main engine and external tank in the Space Shuttle Support Office. In 1994, Mr. Singer supervised the development and implementation of safety improvements and upgrades to shuttle propulsion components. In 2000, he was appointed chief engineer in the Space Transportation Directorate, then was selected as deputy director of Marshall’s Engineering Directorate from 2004 to 2011. Mr. Singer is an AIAA Associate Fellow. In 2006, he received the Presidential Rank Award for Meritorious Executives, the highest honor for career federal employees. He was awarded the NASA Outstanding Leadership Medal in 2001 and 2008 for his leadership. In 1989, he received the prestigious Silver Snoopy Award from the Astronaut Corps for his contributions to the success of human spaceflight missions. A native of Nashville, Tennessee, Mr. Singer earned a bachelor’s degree in mechanical engineering in 1983 from Christian Brothers University in Memphis, Tennessee. Chris enjoys woodworking, fishing, and hang gliding. He is married to the former Jody Adams of Hartselle, Alabama. They have three children and live in Huntsville, Alabama. |
Keynote | Materials | 2017 |
Breakout Improving Sensitivity Experiments (Abstract)
This presentation will provide a brief overview of sensitivity testing and emphasize applications to several products and systems of importance to the Department of Defense as well as private industry, including insensitive energetics, ballistic testing of protective armor, testing of munition fuzes and microelectromechanical systems (MEMS) components, and safety testing of high-pressure test ammunition and packaging for high-value materials. |
Kevin Singer US Army |
Breakout | Materials | 2017 |
Tutorial Power Analysis Concepts |
Jim Simpson JK Analytics |
Tutorial | Materials | 2016 |
Short Course Split-Plot and Restricted Randomization Designs (Abstract)
Have you ever built what you considered to be the ideal designed experiment, then passed it along to be run, only to learn later that your recommended run order was ignored? Or perhaps you were part of a test execution team and learned too late that one or more of your experimental factors were difficult or time-consuming to change. We all recognize that the best possible guard against lurking background noise is complete randomization, but often a randomized run order is extremely impractical or even infeasible. Split-plot design and analysis methods have been around for over 80 years, but only in the last several years have the methods fully matured and become available in commercial software. This class will introduce you to practical split-plot design and analysis methods. We’ll provide the skills to build designs appropriate to your specific needs and demonstrate proper analysis techniques using general linear models available in standard statistical software. Topics include split-plots for two-level and mixed-level factor sets, for first- and second-order models, as well as split-split-plot designs. (A mixed-model analysis sketch follows this entry.) |
Jim Simpson JK Analytics |
Short Course | Materials | 2017 |
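A sketch of the analysis side under stated assumptions: a split-plot experiment with a hard-to-change whole-plot factor (oven temperature) and an easy-to-change subplot factor (coating), analyzed with a mixed model whose random whole-plot intercept supplies the correct error term for the whole-plot factor. The data are simulated and the factor names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for wp in range(8):                        # 8 whole plots (oven runs)
    temp = (350, 400)[wp % 2]              # hard-to-change factor
    wp_err = rng.normal(0, 2.0)            # whole-plot error
    for coating in "ABCD":                 # subplots randomized within a run
        effect = {"A": 0, "B": 1, "C": 3, "D": 2}[coating]
        rows.append({"wholeplot": wp, "temp": temp, "coating": coating,
                     "y": 50 + 0.02 * temp + effect + wp_err + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Random whole-plot intercept ensures temp is tested against whole-plot
# variation rather than the smaller subplot error.
m = smf.mixedlm("y ~ C(temp) * C(coating)", df, groups=df["wholeplot"]).fit()
print(m.summary())
```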
Breakout Automated Software Testing Best Practices and Framework: A STAT COE Project (Abstract)
The process for testing military systems that are largely software-intensive involves techniques and procedures often different from those for hardware-based systems. Much of the testing can be performed in laboratories at many of the acquisition stages, up to operational testing. Testing software systems is no different from testing hardware-based systems in that testing earlier and more intensively benefits the acquisition program in the long run. Automated testing of software systems enables more frequent and more extensive testing, allowing for earlier discovery of errors and faults in the code. Automated testing is beneficial for unit, integration, functional, and performance testing, but there are costs associated with automation tool license fees, specialized manpower, and the time to prepare and maintain automation scripts. This presentation discusses some of the features unique to automated software testing and offers a framework organizations can implement to make the business case for, organize for, and execute and benefit from automating the right aspects of their testing needs. Automation saves time and money, but is most valuable in freeing test resources to perform higher-value tasks. (A minimal automated-test sketch follows this entry.) |
Jim Simpson JK Analytics |
Breakout | Materials | 2017 |
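A minimal taste of what automated testing means in practice, assuming pytest and an invented function under test: unit tests that run on every build so faults surface early, with parameterization to extend coverage cheaply.

```python
import pytest

def climb_rate(thrust, weight, drag):
    """Toy performance model standing in for code under test."""
    if weight <= 0:
        raise ValueError("weight must be positive")
    return (thrust - drag) / weight

def test_nominal_climb_rate():
    assert climb_rate(thrust=900.0, weight=100.0, drag=400.0) == pytest.approx(5.0)

def test_invalid_weight_rejected():
    with pytest.raises(ValueError):
        climb_rate(thrust=900.0, weight=0.0, drag=400.0)

@pytest.mark.parametrize("thrust,expected", [(500.0, 1.0), (400.0, 0.0)])
def test_more_cases(thrust, expected):
    assert climb_rate(thrust, 100.0, 400.0) == pytest.approx(expected)
```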