Wednesday Nights of Petri Nets and their Extensions (WN-PNE)
The year 2021 has started, and it seems that the COVID-19 pandemic will keep raging all around the world for quite a while yet. With that much uncertainty, why not spend lockdown evenings with a cup of warm tea in the company of world-class researchers?
Petri nets have been at the core of concurrency theory for more than 50 years. Many valuable, foundational properties were discovered for ordinary P/T-nets, in turn allowing them to be used for modelling and analysing complex systems. At the same time, ordinary P/T-nets are not expressive enough to attack many concrete theoretical and industrial problems. This pushes both researchers and practitioners towards finding extensions that better suit their tasks.
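As a quick refresher on the P/T-net firing rule these foundations rest on, here is a minimal sketch; the class and the producer/consumer example are illustrative only, not taken from any particular library:

```python
from collections import Counter

# A minimal P/T-net: each transition maps places to consumed/produced tokens.
# Illustrative sketch only; place and transition names are made up.
class PetriNet:
    def __init__(self, pre, post):
        self.pre = pre    # transition -> Counter of input places
        self.post = post  # transition -> Counter of output places

    def enabled(self, marking, t):
        # t is enabled if every input place holds enough tokens
        return all(marking[p] >= n for p, n in self.pre[t].items())

    def fire(self, marking, t):
        # consume input tokens, produce output tokens
        assert self.enabled(marking, t)
        m = Counter(marking)
        m.subtract(self.pre[t])
        m.update(self.post[t])
        return m

# Producer/consumer example: 'make' fills a buffer, 'take' drains it.
net = PetriNet(
    pre={"make": Counter({"idle": 1}), "take": Counter({"buffer": 1})},
    post={"make": Counter({"idle": 1, "buffer": 1}), "take": Counter()},
)
m0 = Counter({"idle": 1})
m1 = net.fire(m0, "make")   # one token appears in 'buffer'
```

The extensions discussed in the seminars (data-aware, multi-agent, timed, etc.) all enrich exactly this token game with additional structure.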
WN-PNE is a virtual event that brings together Petri net researchers and enthusiasts from all around the world working on the theory and practice of Petri nets and their extensions (data-aware, multi-agent, algebraic/logic, time, etc.). The focus of the event is to present ongoing research as well as more foundational contributions to the domain, and to provide opportunities for discussion of possible future developments. We also hope that WN-PNE will inspire young researchers and help foster new collaborations.
The talks happen every Wednesday at 17:00 (CET/UTC+1) in a dedicated Zoom meeting (the link will be provided to all registered participants in advance and published on this page), starting on Wednesday, February 3, 2021. Each talk lasts for 1 hour, with additional time for questions, and its recording will be made public on this webpage. The seminars will continue in 2022.
Please subscribe if you want to receive notifications about upcoming seminars and the Zoom links for the webinars.
Here you can find the YouTube playlist with the talk recordings.
Past and Scheduled Seminars
Modeling and Describing Behavior along Multiple Behavioral Dimensions
Dirk Fahland
Associate Professor, Analytics for Information Systems group at the Technische Universiteit Eindhoven
Abstract: Processes are a key application area for formal models of concurrency. Petri nets have shaped research and industrial process modeling languages, tools, and analysis techniques like no other formalism, from basic model syntax up to the automated discovery of process models from event data. The most widely adopted model-driven techniques are centered around describing and analyzing the control-flow of a well-structured process instance in isolation – within this single dimension, one could argue the case to be “solved”.
Unaddressed challenges in modeling and analysis arise where processes are not well-structured or not isolated from each other. In both cases a single process model can no longer adequately describe process behavior.
Taking recorded event data from such processes as a starting point, I will outline and develop a number of challenges and characteristics of such processes that can be observed in practice. I will discuss how the behavior of such processes can be classified along different dimensions and outline a few fundamental net-affine concepts that complement concepts from Petri nets and allow one to adequately describe the behavior of such processes. I specifically discuss the model of Synchronous Proclets, which extend Petri nets with a mechanism that can dynamically synchronize multiple instances of multiple processes on shared transition occurrences. Its specific power lies in the ability to describe behavior and synchronization over 1:n and n:m relations between different processes and data objects using only low-level Petri net concepts and their partially ordered runs, avoiding data extensions such as Coloured Petri nets, whose expressive power is too high.
I will outline how this principle is not limited to Petri nets but equally applies to other discrete behavioral modeling formalisms and allows to model and analyze system-level dynamics over multiple processes, data objects, and shared resources in a novel decompositional way, opening up avenues for analyzing processes within the system that executes them.
Short bio: Dirk Fahland is an Associate Professor in the Process Analytics group at Eindhoven University of Technology (TU/e). Starting from a strong background in construction and analysis of distributed systems with formal models, he has, over the years, embraced event data as a central source for system analysis. A central theme in Dirk’s research is analyzing data and systems that are too large or complex to be understood as monolithic end-to-end processes executed in isolation. Visit https://multiprocessmining.org/ for more information.
Dirk’s approach is to analyze and describe such systems as a complex network of behavior from several different angles through large-scale event data pre-processing and querying as well as discovering, synthesizing, and transforming formal models from event data. Dirk has published over 90 articles at international journals, conferences, and workshops.
Petri Net-based Object-centric Processes with Read-only Data
Postdoctoral Researcher, KRDB Research Center for Knowledge and Data, Free University of Bozen-Bolzano
Abstract: During the last decade, various approaches have been put forward to integrate business processes with different types of data. Each of these approaches reflects specific demands in the whole process-data integration spectrum. One particularly important point is the capability of these approaches to flexibly accommodate processes with multiple case objects that need to co-evolve. In this presentation, I will introduce an extension of coloured Petri nets, called catalog and object-aware nets (COA-nets), that provides two key features to capture this type of processes. On the one hand, net transitions are equipped with guards that simultaneously inspect the content of tokens and query facts stored in a read-only, persistent database. On the other hand, such transitions can inject data into tokens by extracting relevant values from the database or by generating genuinely fresh ones. I will also show how COA-nets can be used to represent various multi-case modelling scenarios involving objects with one-to-many correlations, and how this class of Petri nets can be related to other well-known, Petri net-based formalisms in the area of multi-case and data-aware process modelling and analysis. In the second part of my presentation, I will discuss how to analyse COA-nets. In particular, the focus will be put on (data-aware) parameterised verification, using which one can analyse coverability-like properties for any instance of the read-only catalog. I will also comment on how some fragments of COA-nets, with different expressive power, can affect the decidability of not only the verification problem at hand, but also the standard problem of place nonemptiness checking.
Modeling the Interplay between Data and Processes
Jan Martijn van der Werf
Assistant Professor, Utrecht University
Abstract: Data and processes go hand in hand in information systems but are often modeled, validated, and verified separately in the systems' design phases. Designers of information systems often proceed by ensuring that database tables satisfy normal forms and that process models capturing the dynamics of the intended information manipulations are deadlock- and livelock-free. However, such an approach does not guarantee correctness of the complete system, as perfect data and process designs in isolation can induce faults when combined.
In this talk, I will present the ideas behind an approach that combines information models and process models using an automated theorem prover. In this approach, set theory and first-order logic are used to express the structure and constraints of information, while Petri nets extended with vectors of identifiers are used to capture the dynamic aspects of the system. The modelling approach is not limited to information systems modelling. As I will show in this talk, the approach stems from modelling the interaction between components. Although reachability is decidable under specific conditions, the approach calls for new analysis techniques to verify such complex interactions.
Short bio: Dr. Jan Martijn van der Werf is an assistant professor at Utrecht University working on architecture mining, which combines process mining with software architecture. His research and teaching focus on modeling, analyzing, and reconstructing interactions between components in large, complex software systems. He is interested in how formal methods, such as Petri nets, can be used in practice to study the dynamics of large systems. Jan Martijn van der Werf is active in both the process mining and software architecture communities, where he publishes regularly at international conferences and workshops. He holds a joint PhD degree from Eindhoven University of Technology and Humboldt-Universität zu Berlin on the compositional design and verification of component-based information systems.
Entropy-Based Conformance Checking Between Designed and Real-World Processes
Artem Polyvyanyy
Senior Lecturer, The University of Melbourne
Abstract: Conformance checking is an area of process mining that studies methods for measuring and characterizing commonalities and discrepancies between processes recorded in event logs of IT systems and the corresponding designed process models that govern the execution of those systems. Applications of conformance checking range from measuring the quality of models automatically discovered from event logs, via regulatory process compliance, to automated process enhancement. Recently, the process mining community initiated a discussion on the desired properties conformance measures should possess. This discussion acknowledges that existing measures often do not satisfy the desired properties. Besides, there is a lack of understanding of which properties conformance measures should fulfill in the case of partially matching processes, i.e., processes that are not identical but differ in some process steps. In this talk, I will present our recent work on conformance measures for process mining based on the concept of entropy from information theory. The introduced measures satisfy all the existing relevant properties. I will also talk about our recent results in stochastic conformance checking, a variant of conformance checking that accounts for the frequencies of traces in the compared logs and specially annotated stochastic process models.
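For a first flavor of entropy over traces in the stochastic setting, here is a deliberately simplified sketch: it computes the Shannon entropy of a log's trace-frequency distribution. Note that the measures presented in the talk are defined over the languages of logs and models, not over this simple distribution; the helper name and the toy log below are made up:

```python
import math
from collections import Counter

# Shannon entropy of the trace distribution in an event log.
# Toy illustration only: the entropy-based conformance measures from the
# talk operate on languages of models and logs, not on this distribution.
def trace_entropy(log):
    freq = Counter(tuple(trace) for trace in log)
    n = sum(freq.values())
    return -sum((c / n) * math.log2(c / n) for c in freq.values())

# A log with two trace variants; the frequent variant dominates, so the
# entropy is well below the 1-bit maximum for two variants.
log = [["a", "b", "c"], ["a", "b", "c"], ["a", "c", "b"], ["a", "b", "c"]]
h = trace_entropy(log)
```

Frequencies of trace variants are exactly the information that stochastic conformance checking retains and that classical (language-based) conformance checking discards.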
Short bio: Dr. Artem Polyvyanyy is a senior lecturer at the School of Computing and Information Systems, Faculty of Engineering and Information Technology, at the University of Melbourne (Australia). He has a strong background in Theoretical Computer Science, Software Engineering, and Business Process Management from the National University of Kyiv-Mohyla Academy (Ukraine), Hasso Plattner Institute (Germany), and the University of Potsdam (Germany). In March 2012, he received a Ph.D. degree (Dr. rer. nat.) in the scientific discipline of Computer Science from the University of Potsdam (Germany). His research and teaching interests include Computing Systems, Information Systems, Distributed Systems, Process Modeling and Analysis, Data Science, Business Process Management, and Algorithms. Artem Polyvyanyy has published over 80 papers on these topics, including academic book chapters, peer-reviewed journal articles, and refereed papers at international conferences and workshops. He has actively contributed to several open-source initiatives that significantly impacted research and practice, including jBPT, Oryx, and Apromore. His research focuses on Process Mining and Process Querying. The research discipline of Process Mining combines studies of inferences from data in Data Mining and Machine Learning with Process Modeling and Analysis to tackle the problems of discovering, monitoring, and improving real-world processes. Process Querying combines concepts from Big Data and Process Modeling and Analysis with Business Process Intelligence and Process Analytics to study techniques for retrieving and manipulating models of processes, both real-world and envisioned, to systematically organize and extract process-related information for subsequent use.
HERAKLIT: How to Model Reliable Big Systems
Wolfgang Reisig
Professor (em.), Computer Science Institute of Humboldt-Universität zu Berlin
Abstract: HERAKLIT is an initiative for an infrastructure to model and analyze large computer-embedded systems such as business processes and cyber-physical systems. The resulting models are more intuitive, expressive, analyzable, and generally more usable than previous models.
HERAKLIT is based upon well-established formal means: first-order logic and abstract data types for static aspects and data, Petri nets for dynamic aspects, and the composition calculus for architectural aspects of big systems.
We present a comprehensive case study to show how HERAKLIT covers and integrates diverse aspects, including
• composition and refinement of large systems, by means of a universal, yet expressive composition operator;
• description of operational behavior without global states (as global states are unrealistic for large systems);
• integration of real world items (such as goods, production processes etc.) with abstract items (such as data) in one model;
• compositional verification of decisive properties of such systems.
Short bio: Wolfgang Reisig is professor (em.) at the Computer Science Institute of Humboldt-Universität zu Berlin, Germany. He served as a research assistant and assistant professor at the University of Bonn and at RWTH Aachen, a visiting professor at Hamburg University, a project manager at the Gesellschaft für Mathematik und Datenverarbeitung (GMD), and a professor at the Technical University of Munich.
Prof. Reisig was a senior researcher at the International Computer Science Institute (ICSI) in Berkeley, California, in 1997, held the "Lady Davis Visiting Professorship" at the Technion, Haifa (Israel) and the Beta Chair of the Technical University of Eindhoven, and twice received an IBM Faculty Award for his contributions to cross-organizational business processes and the analysis of service models. He was the speaker of a PhD school on Service-Oriented Architectures from 2010 to 2017.
Prof. Reisig is a member of the European Academy of Sciences, Academia Europaea. He has published and edited numerous books and articles on Petri net theory and applications. He has been a member of the Petri Net Conference Steering Committee since 1982 and is a co-editor of the journal "Software and Systems Modeling".
Anti-alignments in Conformance Checking
Assistant Professor, École Normale Supérieure Paris-Saclay
Abstract: We present anti-alignments as a tool for conformance checking. The idea of anti-alignments is to search, for a model N and a log L, for the runs of N that differ as much as possible from all the runs in L. Among other uses, anti-alignments serve as witnesses for imprecisions of the model; therefore, they can be used to measure precision. We give several algorithms to compute and approximate anti-alignments.
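The search idea can be sketched by brute force, assuming the model's runs have already been enumerated (for a bounded net they could be drawn from its reachability graph); the helper names and the toy runs and log are illustrative only, not the algorithms from the talk:

```python
# Brute-force sketch of the anti-alignment idea: among the model's runs,
# find one maximizing the distance to every trace in the log.
def edit_distance(u, v):
    # classic Levenshtein distance, computed with a single rolling row
    d = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        prev, d[0] = d[0], i
        for j, b in enumerate(v, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a != b))
    return d[-1]

def anti_alignment(model_runs, log):
    # the run whose *minimum* distance to the log traces is largest
    return max(model_runs, key=lambda r: min(edit_distance(r, t) for t in log))

# Made-up model runs and log: the third run matches nothing in the log,
# so it witnesses behavior the model allows but was never observed.
runs = [("a", "b", "c"), ("a", "c", "b"), ("d", "d", "d")]
log = [("a", "b", "c"), ("a", "c", "b")]
witness = anti_alignment(runs, log)
```

A run far from every observed trace is precisely the kind of imprecision witness the abstract mentions: the model permits it, the log never exhibits it.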
Multi-Perspective Process Mining and Verification
Massimiliano de Leoni
Assistant Professor of Computer Science, University of Padua
Abstract: The lion's share of attention in BPM has been on the control-flow perspective of process models, namely on the ordering of activities, thereby ignoring the view on data-driven decisions and the constraints on resources and time (e.g., deadlines). Models that only focus on the control flow tend to be less precise, thus enabling executions that the real process constraints would disallow. Therefore, the application of, e.g., process mining techniques becomes less insightful, due to conclusions based on an imprecise model. As an example, checking the conformance of an imprecise/underfitting model may diagnose unfitting traces as fitting. This talk will touch upon multi-perspective process conformance checking, along with a discussion on how to discover a process model that incorporates perspectives other than the control flow alone. Finally, the discussion will also cover how to verify that multi-perspective process models are sound. Verification of model soundness is crucial when the models are used as input for process-mining techniques, which typically assume soundness and might return unreliable results otherwise. Furthermore, if models are used to configure information systems, an unsound model leads to unsound process executions. The talk will focus both on the foundations of the techniques discussed and on some real-life applications.
Revisiting Petri Nets: Adding Objects While Enforcing Lucency
Wil van der Aalst
Full Professor, RWTH Aachen University, Process and Data Science (PADS) Group
Abstract: Petri nets play a key role in process mining. One could argue that process mining revived the interest in Petri nets and foundational notions such as the marking equation and region theory. This talk is composed of two parts: one part is about extending Petri nets to deal with multiple types of objects, and the other part is about enforcing Petri nets to be lucent, i.e., such that there cannot be two markings that enable the same set of transitions. Both parts are inspired by requirements from process mining. From an object-centric event log, we want to discover an object-centric Petri net with places that correspond to object types and transitions that may consume and produce collections of objects of different types. Object-centric Petri nets visualize the complex relationships among objects of different types. Whereas object-centric Petri nets extend traditional Petri nets, lucency limits the class of Petri nets to models where states are fully characterized by the transitions they enable. For process mining this seems to be a relevant property: if the process has two different states enabling the same set of transitions, process discovery becomes more challenging. We can show that all free-choice nets having a home cluster are lucent. It is an open question how to exploit lucency in process mining. Moreover, it is an open question how to characterize a large class of object-centric Petri nets that are lucent.
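As a small illustration of the lucency notion, here is a sketch (not from the talk; the dict-based net encoding, helper names, and example nets are made up) that checks lucency of a bounded net by exhaustively exploring its reachable markings:

```python
from collections import deque

# Lucency check for a bounded net: explore all reachable markings and
# verify that no two distinct markings enable the same set of transitions.
def enabled_set(pre, marking):
    return frozenset(t for t, ins in pre.items()
                     if all(marking.get(p, 0) >= n for p, n in ins.items()))

def fire(pre, post, marking, t):
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] = m.get(p, 0) - n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

def is_lucent(pre, post, m0):
    seen, footprint = set(), {}   # footprint: enabled-set -> marking
    queue = deque([m0])
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        en = enabled_set(pre, m)
        if en in footprint and footprint[en] != key:
            return False          # two different markings, same enabled set
        footprint[en] = key
        queue.extend(fire(pre, post, m, t) for t in en)
    return True

# A sequence p0 -a-> p1 -b-> p2 is lucent; a choice producing two distinct
# dead markings is not (both dead markings enable the empty set).
seq = ({"a": {"p0": 1}, "b": {"p1": 1}}, {"a": {"p1": 1}, "b": {"p2": 1}})
choice = ({"a": {"p0": 1}, "b": {"p0": 1}}, {"a": {"p1": 1}, "b": {"p2": 1}})
```

The deliberately tiny non-lucent example already shows why lucency matters for discovery: from the enabled transitions alone, the two dead markings of the choice net are indistinguishable.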
Short bio: Prof.dr.ir. Wil van der Aalst is a full professor at RWTH Aachen University leading the Process and Data Science (PADS) group. He is also part-time affiliated with the Fraunhofer-Institut für Angewandte Informationstechnik (FIT), where he leads FIT's Process Mining group. His research interests include process mining, Petri nets, business process management, workflow management, process modeling, and process analysis. Wil van der Aalst has published over 800 articles and books and is typically considered to be in the top 15 of most-cited computer scientists, with an H-index of over 155 and more than 110,000 citations. Next to serving on the editorial boards of over ten scientific journals, he plays an advisory role for several companies, including Celonis, Fluxicon, and UiPath. Van der Aalst is an IFIP Fellow, IEEE Fellow, and ACM Fellow, and received honorary degrees from the Moscow Higher School of Economics (Prof. h.c.), Tsinghua University, and Hasselt University (Dr. h.c.). He is also an elected member of the Royal Netherlands Academy of Arts and Sciences, the Royal Holland Society of Sciences and Humanities, the Academy of Europe, and the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts. In 2018, he was awarded an Alexander von Humboldt Professorship.
Structure-Preserving Process Model Repair
Alexey Mitsyuk
Senior Research Fellow, HSE University, Faculty of Computer Science, PAIS Lab
Abstract: A major effort of the process mining research community was (and is) directed at constructing better process discovery techniques. Conformance checking algorithms are also considered of high importance: they allow a domain expert to check whether a process model in use fits reality. So, what should be done when there are inconsistencies between modelled and observed behaviours? Of course, one may always discover a completely new model using one of the many carefully crafted discovery algorithms. However, by doing so one risks losing useful properties of the initial model or affecting the human-readability of the discovered model. Recently, algorithms have been presented that, for such cases, repair Petri nets based on given event logs instead of re-discovering them from scratch. We will consider our modular approach, which is based on process model decomposition. The general idea is to find unfitting fragments in the decomposed model and then replace them with re-discovered fitting fragments. The talk will discuss what constraints need to be satisfied by decomposition and discovery algorithms in order to apply this approach. We will also talk about how our approach relates to other methods for repairing process models.
Short bio: Alexey Mitsyuk is a senior research fellow at the PAIS Lab of HSE University. He obtained a specialist diploma (~MSc) in applied mathematics from the Moscow State Institute of Electronics and Mathematics and a Ph.D. in computer science from HSE University. Alexey's main research interests are process mining and its applications, Petri nets, software architecture, and applications of data/process analysis methods in software engineering. In all these fields, Alexey considers well-readable and concise visual representation a significant and powerful tool that deserves more attention than it usually gets.
Automated Repair of Process Models with Non-Local Constraints Using State-Based Region Theory
Research Fellow In Process Mining, The University of Melbourne, Faculty of Engineering and Information Technology, School of Computing and Information Systems
Abstract: State-of-the-art process discovery methods construct free-choice process models from event logs. Consequently, the constructed models do not take into account indirect dependencies between events. Whenever the input behavior is not free-choice, these methods fail to provide a precise model. We propose a novel approach for enhancing free-choice process models by adding non-free-choice constructs discovered a posteriori via region-based techniques. This allows us to benefit from the performance of existing process discovery methods and the accuracy of the employed fundamental synthesis techniques. We prove that the proposed approach preserves fitness with respect to the event log while improving precision when indirect dependencies exist. The approach has been implemented and tested on both synthetic and real-life datasets. The results show its effectiveness in repairing models discovered from event logs.
All the events will be hosted by: Irina Lomazova (PAIS lab), Marco Montali (KRDB group), Alexey Mitsyuk (PAIS lab), Andrey Rivkin (KRDB group)
Please write to any of us if you have questions or want to propose yourself as a speaker!
For example, you can reach Alexey using this address: amitsyuk at hse dot ru.