Tutorial Speakers Abstracts

Past, Present and Future Directions and Advances in Reliability
Dr. Kailash [Kal] Kapur, P.E.
Professor Emeritus, Department of Industrial & Systems Engineering
University of Washington, Seattle
Email: kalkapur@hotmail.com

Abstract

A brief history of reliability and its underlying assumptions, principles and models is presented. We will cover and integrate many of the related qualities, or “ilities”, including maintainability and availability. This tutorial also integrates the three R’s of Engineering: reliability, robustness and resilience, a name that echoes the three R’s at the foundation of a basic skills-oriented education: reading, writing and arithmetic. There is tremendous interest these days in designing and developing all types of complex systems, including infrastructure, communications, logistics, distribution and service systems, that are not only reliable, safe, secure and maintainable but also robust, resilient and sustainable.

The concept of prognostics started with Hippocrates (460-370 BC). A brief history of prognostics and its underlying assumptions and principles is presented, based on the evolution from open systems to feedback systems, diagnostics, prognostics and continuing trends and developments related to feedforward control systems. The impact of the recent growth of technologies related to the Internet of Things (IoT) and artificial intelligence (AI) is presented, with a view to developing prognostics for complex systems to improve reliability. The role of prognostics and its integration with the three R’s of Engineering and the other “ilities” are also presented. The objective is to develop new and holistic measures of reliability and to discuss their trends and future applications, based on a systems-oriented, integrated and distributed, customer-centered, multi-state systems approach that includes fuzzy-logic methodology.

Mechanical Applications of Cepstrum Analysis in Machine and Structural Health Monitoring
Dr. Bob Randall
Emeritus Professor, School of Mechanical and Manufacturing Engineering

University of New South Wales (UNSW), Sydney, Australia

Abstract

It is not widely realised that the first paper on cepstrum analysis was published two years before the FFT algorithm, despite having Tukey as a common author, and its definition was such that it was not reversible, even to the log spectrum. After publication of the FFT in 1965, the cepstrum (now called the “power cepstrum” or “real cepstrum”) was redefined so as to be reversible to the log (amplitude) spectrum, and shortly afterwards Oppenheim and Schafer defined the “complex cepstrum”, which was reversible to the time domain, but only for transient signals (whose phase spectrum is continuous). They also derived the analytical form of the complex cepstrum of a transfer function in terms of its poles and zeros.

The cepstrum had been used in speech analysis for determining voice pitch (by accurately measuring the harmonic spacing in voiced speech), but also for separating the formants (transfer function of the vocal tract) from voiced and unvoiced sources, and this led quite early to similar applications in mechanics, viz. identification of uniformly spaced sidebands from local faults in gearboxes (Randall), and extraction of the cylinder pressure signal in a diesel engine from acoustic responses (Lyon and Ordubadi), since the cepstrum of a response is the sum of the cepstra of the forcing and transfer functions. Gao and Randall in 1996 used this and the analytical form of the cepstrum to curve-fit modal parameters of mechanical structures in the cepstrum. Thus, the cepstrum has been around for a long time, but not used to its full capacity.

A breakthrough occurred in 2011, when it was found that edited time signals could be obtained by combining an edited amplitude spectrum (using the real cepstrum) with the original phase spectrum of (sections of) continuous signals, for example to remove families of harmonics and sidebands, or to separate response signals into components dominated by intrinsic forcing functions or modal properties, in particular for variable speed machines, where forcing functions vary with the speed but modal frequencies remain independent of the speed. This has already been used for a wide range of mechanical applications. A very powerful processing tool is an exponential “lifter” (window) applied to the cepstrum, which is shown to extract the modal part of the response (with a small extra damping of each mode corresponding to the window). This has already been shown to be valuable for Operational Modal Analysis (OMA), in particular of machines, where both forcing functions and modal properties can contain information about condition.
The tutorial is a survey of the history, latest developments, and potential future applications of cepstrum analysis applied to health monitoring of machines and structures.
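The following is a minimal sketch of the real-cepstrum editing procedure described above; it is my own illustration, not code from the tutorial. It assumes NumPy, a one-dimensional real signal x, and an illustrative lifter constant tau (in quefrency samples).

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log amplitude spectrum."""
    spectrum = np.fft.fft(x)
    log_amplitude = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    return np.real(np.fft.ifft(log_amplitude))

def exponential_lifter(cepstrum, tau):
    """Symmetric exponential 'lifter' (window) that retains the low-quefrency,
    modally dominated part of the cepstrum; tau sets the added damping."""
    n = len(cepstrum)
    quefrency = np.minimum(np.arange(n), n - np.arange(n))  # keeps the cepstrum even
    return cepstrum * np.exp(-quefrency / tau)

def edit_signal(x, tau):
    """Edited time signal: edited amplitude spectrum combined with original phase."""
    phase = np.angle(np.fft.fft(x))
    lifted = exponential_lifter(real_cepstrum(x), tau)
    edited_log_amplitude = np.real(np.fft.fft(lifted))       # back to a log spectrum
    edited_spectrum = np.exp(edited_log_amplitude) * np.exp(1j * phase)
    return np.real(np.fft.ifft(edited_spectrum))
```

Editing out families of harmonics and sidebands works in the same way, with notches at the corresponding quefrencies taking the place of the exponential window.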

History of System Reliability Optimization
Professor David W. Coit
Rutgers University, Piscataway, NJ USA
Abstract

In this tutorial, we review the most important contributions to system reliability optimization research and discuss how research and research priorities have changed in response to the needs of reliability professionals and the availability of needed data. All engineering disciplines are, in practice, implementations of some form of optimization problem. Almost from the inception of reliability as a formal engineering discipline, accompanied by mathematical principles based on probability theory, there has been research to systematically and rigorously analyze complex problems to produce a uniquely reliable design. The earliest research on system reliability optimization used formal methods, such as dynamic programming and nonlinear programming. This work was pioneering, but current needs are different, and ongoing and future research directions address realistic problems and exploit the availability of data to dynamically optimize as conditions change or trends emerge. Although always changing and advancing, the evolution of system reliability optimization research can be approximately classified into the following three chronological categories. The first era is Rigorous Mathematics, in which dynamic programming and linear and nonlinear programming were used to select optimal designs, but for limited types of problems. Second, there is the era of Pragmatism, in which heuristics and more approximate methods were used, but for much broader and richer types of design problems. Finally, there is the era of Active Reliability Improvement, in which reliability optimization becomes a dynamic analysis to continually improve performance. This tutorial will review these methods and discuss current and future trends.
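As a concrete illustration of the kind of problem the first era addressed, here is a minimal sketch (my own, not material from the tutorial) of a classic redundancy-allocation formulation: choose the number of parallel components in each subsystem of a series system to maximize system reliability under a cost budget. The numbers are made up, and brute-force search stands in for the dynamic programming used in the early literature.

```python
from itertools import product

component_reliability = [0.90, 0.85, 0.95]   # per-component reliability in each subsystem
component_cost = [2.0, 3.0, 1.5]             # cost of one component in each subsystem
budget = 20.0
max_redundancy = 5                           # consider 1..5 components per subsystem

def system_reliability(allocation):
    # Series system of parallel subsystems: R = prod_s (1 - (1 - r_s)**x_s)
    r = 1.0
    for r_s, x_s in zip(component_reliability, allocation):
        r *= 1.0 - (1.0 - r_s) ** x_s
    return r

best = None
for allocation in product(range(1, max_redundancy + 1), repeat=len(component_reliability)):
    cost = sum(c * x for c, x in zip(component_cost, allocation))
    if cost <= budget:
        r = system_reliability(allocation)
        if best is None or r > best[1]:
            best = (allocation, r, cost)

print(f"best allocation {best[0]}: reliability {best[1]:.4f} at cost {best[2]:.1f}")
```

Early dynamic-programming formulations solved this kind of separable problem stage by stage; the later heuristic era tackled much larger and richer variants of the same design question.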

Machine Learning in Data-Driven Prognostics and Health Management (PHM) for condition-based and predictive maintenance

Enrico Zio
PSL University, France and Politecnico di Milano, Italy

Abstract


As the digital, physical and human worlds continue to integrate, the fourth industrial revolution, the Internet of Things, big data and the industrial internet are changing the way we design, manufacture and deliver products and services. In this fast-changing environment, the attributes related to the reliability of components and systems continue to play a fundamental role for industry. At the same time, advances in knowledge, methods and techniques, together with the increase in information sharing and data availability, offer new ways of engineering reliable systems and new business opportunities in several areas of application. Based on this increased knowledge, information and data, we can improve our prediction capabilities. In particular, the increased availability of data from monitoring the relevant parameters of component, system and asset performance, and the growing ability to treat these data with intelligent machine learning algorithms capable of mining out information relevant to assessing and predicting their state, have opened the door wide to disruptive advancements in many industrial sectors, for improved design, operation, management and maintenance.

In this lecture, I frame the different problems that can be tackled by machine learning, both through examples and in more formal terms, and offer some reflections on the opportunities and challenges related to the use of machine learning in various industrial sectors.
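As one small, self-contained illustration (my own, not material from the lecture) of a problem class machine learning is commonly applied to in PHM, the sketch below treats data-driven prognostics as a regression of remaining useful life (RUL) on condition-monitoring features. The data are synthetic, and the feature names and model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic run-to-failure data: two monitored features drift as each unit degrades.
n_units, n_cycles = 50, 200
rows, targets = [], []
for _ in range(n_units):
    life = rng.integers(120, n_cycles)            # random failure time per unit
    for t in range(life):
        degradation = t / life
        vibration = 1.0 + 2.0 * degradation + rng.normal(0, 0.1)
        temperature = 60 + 15 * degradation**2 + rng.normal(0, 1.0)
        rows.append([vibration, temperature])
        targets.append(life - t)                  # remaining useful life in cycles

X, y = np.array(rows), np.array(targets)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"mean absolute RUL error: {np.abs(model.predict(X_test) - y_test).mean():.1f} cycles")
```

Fault detection and diagnostics can be framed analogously as anomaly-detection and classification problems on the same kind of monitoring data.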

Design a Practical and Effective Reliability Test
Huairui (Harry) Guo
Fiat Chrysler Automobiles US LLC
Abstract

One of the important tasks for reliability engineers is to design and conduct effective reliability tests. Many books and papers on reliability data analysis and modeling are published each year; however, very little of this literature addresses how to design useful reliability tests. If a reliability test is not effective, the data it generates will not be very useful: time and money are wasted, and the failure modes that customers later experience are not identified before product launch. So how should one design an effective reliability test to evaluate product performance?

An effective reliability test will, at a minimum, address the following questions in a scientific way:

-How many samples are needed? (A brief sketch of one standard answer is given at the end of this abstract.)
-How long should each sample be tested?
-What stresses should be applied in the test?
-At what level of each stress should a sample be tested?
-Should a stress be constant or time-varying?
-What are the reliability acceptance criteria?
-What is the definition of failure?

This tutorial will focus mainly on the practical aspects of reliability test design. Through it, the audience will gain a complete understanding of the process of planning and conducting an effective reliability test.
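As a brief sketch of one standard answer to the first question above (how many samples), the following uses the zero-failure, success-run binomial demonstration test, n = ln(1 - CL) / ln(R); it is my own illustration, and the target reliability and confidence values are arbitrary.

```python
import math

def success_run_sample_size(target_reliability, confidence):
    """Smallest n such that n failure-free tests demonstrate the target
    reliability at the given confidence level (binomial, zero failures)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(target_reliability))

# Example: demonstrate 90% reliability at 90% confidence.
print(success_run_sample_size(0.90, 0.90))  # -> 22 units tested without failure
```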

Building Multicriteria Decision Models for Risk, Reliability, and Maintenance Decision Analysis
Cristiano Cavalcante and Alexandre Ramalho Alberti
Universidade Federal de Pernambuco

Abstract

Risk, Reliability and Maintenance (RRM) are contexts in which decision problems with multiple objectives have been on the increase in recent years. Decisions on RRM issues can affect the strategic results of any organization, with financial impacts, and they may also have impacts on other important dimensions, such as the human (safety) and environmental ones; hence the importance of a well-structured decision process. The objective of this tutorial is to present some developments in building multicriteria models to support decisions in RRM management, and to promote the new book “Multicriteria Decision Models Optimization for Risk, Reliability, and Maintenance Decision Analysis – Recent Advances”, edited by Adiel T. de Almeida, Love Ekenberg, Philip Scarf, Enrico Zio and Ming J. Zuo. The book includes methodological topics related to multicriteria decision making/aiding (MCDM/A) and some new MCDM/A models in RRM contexts, in addition to several optimization and multiobjective models and related decision problems. The book was developed at the invitation of the editors of the prestigious “International Series in Operations Research and Management Science” (Springer), and it follows a first book on this topic, “Multicriteria and Multiobjective Models for Risk, Reliability and Maintenance Decision Analysis”, published in 2015.