Clava+LARA: A Source-to-source C/C++ Compiler for Instrumentation and Code Transformations
Day: Wednesday 18th of July
Duration: 120 minutes
The development of many applications for embedded systems requires substantial effort to deal with non-functional requirements. During the development process, analyzing successive versions of the application code is very important in order to identify bottlenecks and critical code sections. An important step of the development cycle deals with static analysis and with profiling of the code. Although there are specific tools for this step, in many cases custom analysis and profiling can be helpful. This tutorial shows how Clava, a source-to-source C/C++ compiler, can support these analysis steps, namely through instrumentation strategies programmed in LARA, a domain-specific language (DSL) based on a separation of concerns that assists software development cycles with recipes (taken from a library or programmed by the user). In addition, the tutorial shows how Clava+LARA can help engineers and developers satisfy non-functional requirements, namely execution time and power and energy consumption, especially when targeting multicore computing platforms.
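As a toy illustration of the kind of instrumentation such a compiler automates, the following Python sketch textually weaves a timing probe into a C function. This is a hypothetical stand-in, not Clava/LARA code: Clava selects join points on a real AST driven by LARA aspects, rather than matching source text.

```python
import re

def instrument_timing(c_source, func_name):
    """Toy source-to-source pass: weave a timing probe into a C function.

    Illustrative only -- a real tool like Clava operates on the AST and is
    driven by LARA aspects; here a regex stands in for join-point selection.
    """
    # Match the opening brace of the named function definition (toy heuristic).
    pattern = re.compile(r"(\b" + re.escape(func_name) + r"\s*\([^)]*\)\s*\{)")
    probe = "\n    clock_t _t0 = clock();  /* woven probe: start timer */"
    return pattern.sub(lambda m: m.group(1) + probe, c_source, count=1)

src = "int work(int n) {\n    return n * n;\n}\n"
print(instrument_timing(src, "work"))
```

A real instrumentation recipe would also insert the matching end-of-function probe and the reporting code; the point here is only the weaving step itself.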
João MP Cardoso, Universidade do Porto - Faculdade de Engenharia, Portugal
João Bispo, Universidade do Porto - Faculdade de Engenharia, Portugal
Pedro Pinto, Universidade do Porto - Faculdade de Engenharia, Portugal
Tiago Carvalho, Universidade do Porto - Faculdade de Engenharia, Portugal
Using IOPT-Tools for Petri nets based systems development
Day: Thursday 19th of July
Duration: 120 minutes
The lack of tools ready to be integrated into engineering development frameworks is a major drawback when considering the use of Petri nets in specific application areas. This tutorial addresses a model-based development approach that uses Petri nets as the underlying modeling formalism. The tutorial is divided into two parts: the first focuses on Petri net fundamentals and their use for controller modeling and implementation, while the second emphasizes the development of networked controllers. Both parts rely on the IOPT-Tools framework, complemented by hands-on experimentation using IOPT-Tools for the development of controllers.
The IOPT-Tools web-based framework supports the complete development flow for cyber-physical systems and embedded systems, offering tools for engineers as well as academics, including an editor, a simulator, a remote debugger, and property verification tools. Rapid prototyping is fully supported, allowing automatic generation of code ready to be directly deployed on different types of platforms, ranging from FPGAs (where VHDL code is produced) to popular low-cost boards, such as Arduino and Raspberry Pi (where C code is produced), and also including PLCs (through Instruction List generation). The IOPT-Tools framework is publicly available at http://gres.uninova.pt/IOPT-Tools/.
The main characteristics and classes of Petri nets are presented, including firing semantics and common execution semantics (namely the interleaving semantics used in most simulation environments, as well as the maximal-step semantics used in most control applications), net operations (namely addition and splitting), and property verification techniques (namely formal techniques based on invariants and state-space exploration). The tutorial covers situations where centralized execution is used, as well as others where distributed execution is the goal.
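The difference between the two execution semantics can be sketched in a few lines of Python (an illustrative toy model, not IOPT-Tools code): under interleaving semantics, a single enabled transition fires per step; under maximal-step semantics, a maximal set of concurrently enabled transitions fires together.

```python
def enabled(marking, transitions):
    """Transitions whose input places all hold at least one token."""
    return [t for t, (ins, outs) in transitions.items()
            if all(marking[p] >= 1 for p in ins)]

def fire(marking, t, transitions):
    """Fire one transition: consume input tokens, produce output tokens."""
    ins, outs = transitions[t]
    m = dict(marking)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] = m.get(p, 0) + 1
    return m

def interleaving_step(marking, transitions):
    """Interleaving semantics: fire a single enabled transition per step."""
    en = enabled(marking, transitions)
    return fire(marking, en[0], transitions) if en else marking

def maximal_step(marking, transitions):
    """Maximal-step semantics: greedily fire a maximal set of concurrently
    enabled transitions in one step (as in many control applications)."""
    m = dict(marking)
    for t in transitions:
        ins, _ = transitions[t]
        if all(m[p] >= 1 for p in ins):
            m = fire(m, t, transitions)
    return m

# Two independent transitions sharing no places:
net = {"t1": (["p1"], ["p2"]), "t2": (["p3"], ["p4"])}
m0 = {"p1": 1, "p2": 0, "p3": 1, "p4": 0}
print(interleaving_step(m0, net))  # only one transition fires
print(maximal_step(m0, net))       # t1 and t2 fire in the same step
```

With both transitions enabled and independent, one interleaving step fires only one of them, whereas one maximal step fires both at once, which is why the two semantics can yield different controller behavior.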
A few application examples will illustrate the use of the tool framework in the development of different kinds of systems and implementation platforms (including industrial PCs, Raspberry Pi, Arduino, and FPGAs). Attendees are welcome to bring their own portable computers or smartphones to play with IOPT-Tools.
Luis Gomes, Universidade NOVA de Lisboa, Portugal
Fernando Pereira, Instituto Politécnico de Lisboa, Portugal
Filipe Moutinho, Universidade NOVA de Lisboa, Portugal
Methods and Tools for Validating Cyber-Physical Energy Systems
Day: Thursday 19th of July
Duration: 100 minutes
Future power systems have to integrate a growing amount of distributed, renewable energy resources in order to cope with growing electricity demand, while at the same time trying to reduce the emission of greenhouse gases. In addition, power system operators are nowadays confronted with further challenges due to the highly dynamic and stochastic behavior of renewable generators (solar, wind, small hydro, etc.) and the need to integrate controllable loads (electric vehicles, smart buildings, energy storage systems, etc.). Furthermore, due to ongoing changes in framework conditions and regulatory rules, technology developments (such as new grid components and services), and the liberalization of energy markets, the design and operation of the future electric energy system have to be altered.
Sophisticated (system and component) design approaches, intelligent information and communication architectures, and distributed automation concepts provide ways to cope with the above-mentioned challenges and to turn the existing power system into an intelligent entity, that is, a “Cyber-Physical Energy System” (CPES), also known as a “Smart Grid”. While reaping the benefits that come along with intelligent solutions, it is expected that, due to their considerably higher complexity, validation and testing will play a significantly larger role in the development of future technology. Now that the first demonstration projects for smart grid technologies have been successfully completed, key findings and results are likely to be integrated into new and existing products, solutions, and services of manufacturers and system integrators.
Up until now, proper validation and testing methods, together with a corresponding integrated Research Infrastructure (RI) for smart grids, that fulfil the following main requirements are neither fully available nor easily accessible:
- A cyber-physical, multi-domain approach for analysing and validating CPES at the system level is missing today; existing methods mainly focus on the component level – system integration topics, including analysis and evaluation, are not yet addressed in a holistic manner.
- A holistic validation framework (incl. analysis and evaluation/benchmark criteria) and the corresponding RI with proper methods and tools need to be developed.
- Harmonized and standardized evaluation procedures need to be developed.
- Well-educated professionals, engineers, and researchers who understand smart grid systems in a cyber-physical manner need to be trained on a broad scale.
The aim of this tutorial is to tackle the above-mentioned requirements by introducing methods and tools for validating CPES that are currently being developed in the European project ERIGrid.
The tutorial is divided into two main parts. Part one (about 70 min) provides an overview of a holistic validation approach, a formal validation description method for CPES, and corresponding simulation- and laboratory-based validation methods and tools. Part two (about 30 min) is a hands-on exercise using the formal validation description method on a selected example from the power and energy systems domain.
Thomas Strasser, Austrian Institute of Technology, Austria
Data informatics and machine learning: From systems fault diagnosis to high dimensionality reduction and visualization
Day: Friday 20th of July
Duration: 120 minutes
This tutorial introduces the most important machine learning techniques that are particularly useful to industrial informatics engineers and researchers. The techniques covered are state of the art and remain active research topics. The tutorial is designed to bridge the areas of industrial informatics and machine learning.
We will first introduce the widely known supervised and unsupervised techniques. The first part of the tutorial briefly presents the concepts and comparative advantages of several widely used supervised techniques, such as neural networks, support vector machines (SVMs), and deep learning. We will also briefly introduce the importance and characteristics of unsupervised techniques such as self-organizing maps (SOM) and clustering.
After introducing the fundamental concepts of supervised and unsupervised techniques, we will use fault diagnosis of the Tennessee Eastman control process to illustrate why and how most engineering problems are modeled as high-dimensional problems. The Tennessee Eastman control process is a well-known problem with 33 variables/dimensions and 21 pre-defined faults that can only be solved using dimensionality reduction approaches. We will use it to demonstrate the problem of diagnosing system conditions in high-dimensional settings. In this part of the tutorial, we will also use our recent work on wireless sensor network fault diagnosis to show how a practical engineering problem is formulated as a high-dimensional fault diagnosis problem. Our examples will also include wafer map failure detection and bearing fault detection.
The second part of the tutorial details the concepts and applications of different dimensionality reduction/manifold learning methods for industrial systems fault diagnosis. In this part, we focus on unsupervised methods that handle system datasets lacking fault/normal labels. Missing labels are in fact very common in practical engineering systems; in these cases, conventional supervised training of SVMs or neural networks is not possible. We will start from classical principal component analysis (PCA) and show how linear PCA can be applied to Tennessee Eastman control process diagnosis.
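A minimal numpy sketch shows how PCA is commonly used for process monitoring: fit a PCA model on normal operating data, then flag samples whose squared prediction error (the Q statistic) leaves the learned subspace. The data here are synthetic stand-ins, not the Tennessee Eastman set; only the 33-variable layout is borrowed from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional process data (not the real TE set):
# 200 normal samples in 33 dims driven by 3 latent factors plus noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 33))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 33))

# PCA via SVD of the centered data.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3
P = Vt[:k].T                          # 33 x k loading matrix

def spe(x):
    """Squared prediction error (Q statistic): distance to the PCA subspace."""
    xc = x - mean
    resid = xc - (xc @ P) @ P.T       # part not explained by the k components
    return float(resid @ resid)

threshold = max(spe(x) for x in X)    # worst-case SPE on normal data
fault = X[0] + 5.0                    # gross sensor offset on every variable
print(spe(fault) > threshold)
```

Monitoring with a control limit on the Q statistic (often combined with Hotelling's T²) is the standard linear-PCA recipe for this kind of diagnosis; the threshold here is a crude worst-case stand-in for a proper statistical limit.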
We will then discuss the concepts, technical similarities, and applications, moving from PCA and multi-dimensional scaling to more recent non-linear approaches such as Isomap. Isomap is well known for its ability to preserve global and local characteristics in reduced-dimensional visualization and classification. We will illustrate the importance of these characteristics, especially for engineering applications. In this part, we will use Isomap to show the important concept of using nonlinear dimensionality reduction to unfold a manifold, i.e., to extract undistorted system information from a high-dimensional space. We will use our recent visualization and classification results on bearing faults and other data mining examples to illustrate the concept.
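The unfolding idea can be shown with a compact, self-contained Isomap sketch: build a kNN graph, approximate geodesic distances with Floyd-Warshall shortest paths, then embed them with classical MDS. The example manifold (a circular arc in 3-D) is a toy assumption; practical work would use a library implementation.

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS.
    Assumes the kNN graph is connected (true for this example)."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise Euclidean
    g = np.full((n, n), np.inf)
    np.fill_diagonal(g, 0.0)
    for i in range(n):                                     # keep only kNN edges
        for j in np.argsort(d[i])[1:n_neighbors + 1]:
            g[i, j] = g[j, i] = d[i, j]
    for k in range(n):                                     # Floyd-Warshall geodesics
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (g ** 2) @ J
    B = (B + B.T) / 2                                      # enforce symmetry
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points on a curved 1-D manifold (an arc) embedded in 3-D:
t = np.linspace(0, 3 * np.pi / 2, 60)
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
Y = isomap(X, n_neighbors=4, n_components=1)
# The arc is unfolded: the 1-D coordinate follows arc length monotonically.
steps = np.diff(Y[:, 0])
print(bool(np.all(steps > 0) or np.all(steps < 0)))
```

The key contrast with PCA: geodesic rather than straight-line distances let the embedding preserve distances measured along the manifold, which is what keeps both local and global structure intact after reduction.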
The last part of the tutorial briefly introduces the concept of semi-supervised methods. As labelling a huge dataset is costly and sometimes practically impossible, semi-supervised techniques use a small amount of labeled data to improve the whole training process. We will not have enough time to cover this part thoroughly, but will show the concept of semi-supervised learning using our recent work and results on wireless sensor network diagnosis.
Tommy W S Chow, City University of Hong Kong, Hong Kong