The detailed program is now available in the mobile application. Download it!
Technical papers
Submissions of papers are invited for presentation and inclusion in the proceedings. Be part of an active network of students, managers, practitioners, and scientists from academia, government, and industry across the globe, addressing the following topics:
PHM Design
• Systems engineering aspects of PHM
• PHM related standards and methodologies
• PHM Design and PHM Architectures
• Uncertainty, Explainability and Causal Inference
• Deep Learning/Machine Learning methods for PHM
• PHM at the Component/Sub-system/System level
PHM Development
• Physics of failure for anomaly detection, diagnostics and prognostics
• Fault-adaptive control methods
• Prognostics sensors and detection
• Fault detection and prognostics of MEMS
• Model-based diagnostics
• Data-driven, model-based and hybrid prognostics
• Condition-based and predictive maintenance technologies
• Software and hardware for PHM
• Degradation modelling
PHM Applications
• Asset health management
• Structural health management
• PHM for electronics and electrical engineering systems and plants
• PHM in Cyber-Physical Systems
• Cost Benefit and Return-on-Investment analysis
• PHM for Automotive, Rail, Marine, Wind and Energy
• PHM for Manufacturing (Production planning and Control)
• Deployed applications and success stories
The detailed schedule will be published after the selection of the submitted contributions (March 2024).
Panels
Suggest titles and offer to organize a panel, building on those of previous PHME editions:
• Diversity and Mentorship in the PHM Community
• PHM for Batteries
• Standards and Regulations – Affecting PHM Development and Application
• PHM in Transportation
More information soon.
Tutorial Sessions
The conference will include three tutorial sessions covering the following topics:
• Transformers: from attention mechanisms to large language models. Delivered by Yvonne Lu – Oxford University
Transformers have revolutionized deep learning and generative AI. The name ‘transformer’ suggests that the model transforms a set of input vectors from one representation space into a new one. Central to transformers is the concept of ‘attention’, which lets a network compute weight coefficients from the input vectors themselves and thereby capture dependencies among them. A successful transformer generates a more informative internal representation in the new space, making it suitable for downstream tasks. Initially introduced for natural language processing (NLP), transformers have since achieved remarkable results in NLP as well as in image and video generation. Multimodal transformers, such as those behind GPT-4 and other large language models (LLMs), can integrate multiple data types, including text, images, audio, and video.
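As a minimal illustration of the attention mechanism described above (our own NumPy sketch, not part of the tutorial material), scaled dot-product self-attention computes a softmax over query–key similarities and uses the resulting weights to combine the value vectors:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_k) arrays of queries, keys and values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights
    return weights @ V                              # weighted sum of values

# Toy self-attention: three input vectors of dimension 4 attend to each other.
x = np.random.randn(3, 4)
out = scaled_dot_product_attention(x, x, x)         # self-attention: Q = K = V = x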
A distinctive feature of transformers is their proficiency in transfer learning: models trained on extensive datasets can be fine-tuned for specific tasks. A large transformer model trained on “big data” can thus serve as a foundation model for a variety of downstream tasks. Moreover, their capacity for self-supervised learning on unlabelled data enables them to leverage vast Internet and open-source datasets. Large transformer models, which can reach trillions of parameters, are beginning to show capabilities that point towards artificial general intelligence (AGI).
This tutorial encapsulates the core of transformers, from the concepts of attention and self-attention to the underlying architecture and its computational complexity. We will delve into the bidirectional encoder representations from transformers (BERT) model and explore its pretraining and masking mechanisms. To conclude, we will demonstrate a practical LLM use case, explaining how to apply LLMs to downstream tasks, including the fine-tuning pipeline with low-rank adaptation (LoRA), retrieval-augmented generation (RAG), evaluation metrics, and the role of the human-in-the-loop.
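To give a flavour of the fine-tuning part, the following sketch (our own illustration, assuming plain NumPy and the conventions of the LoRA paper; all names here are ours) shows the low-rank adapter idea: the pretrained weight matrix W is frozen, and only a small low-rank update B·A is trained on top of it:

import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    # W: frozen pretrained weight, shape (d_out, d_in) -- never updated.
    # A: (r, d_in) and B: (d_out, r) are the only trainable parameters.
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

d_in, d_out, r = 512, 512, 8
W = np.random.randn(d_out, d_in)      # stands in for a pretrained layer
A = 0.01 * np.random.randn(r, d_in)   # small random initialization
B = np.zeros((d_out, r))              # zero init: the adapter starts as a no-op
y = lora_forward(np.random.randn(2, d_in), W, A, B)

Because r is much smaller than d_in and d_out, the adapter adds only r·(d_in + d_out) trainable parameters per layer, which is what makes fine-tuning large models affordable.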
• Clouds: a cloud-driven approach to data analytics at Rolls-Royce. Delivered by Chris Dodd – Rolls-Royce
Rolls-Royce Civil Aerospace’s transition from on-premises data science and engineering to an Azure cloud-based stack marks a transformative journey in our approach to data analytics and management. This strategic shift has enabled us to significantly expand the scope and volume of our data science projects, covering topics ranging from maximising engine availability to understanding supply chain contracts. By adopting a robust cloud infrastructure, we have enhanced our capability to handle sensitive, export-controlled data while adhering to stringent industry regulations. This move has not only streamlined our processes but also fostered a new level of collaboration among our teams. The presentation will delve into the challenges, strategies, and successes of Rolls-Royce’s transition, showcasing our pioneering efforts in navigating data science within a highly regulated domain.
Student Track – Doctoral Symposium
For the second time in Europe, a Doctoral Symposium will be run as part of a Student Track. This event provides an excellent opportunity for graduate students to present their research interests and plans at a formative stage of their work. Students will receive structured guidance from a panel of distinguished researchers, as well as comments from conference participants and fellow students, in a collegial setting.
Short Courses – PHM Fundamentals, Data Analytics for PHM
The courses will be held at the Institute of Experimental and Applied Physics, Czech Technical University in Prague, Husova 240/5, 110 00 Prague 1, Czech Republic (phone: +420 244 105 100), and will take place on 1st and 2nd July. Link to Google Map.
Data Challenge
After the success of the previous Data Challenge editions, we intend to prepare another dataset for the European PHM research community. Winning teams will be invited to publish their work in the proceedings of the 8th European Conference of the PHM Society, and a representative of each team will be expected to give an oral presentation at the event.
More information soon.