Process Architecture
Published in Vivek Kale, Enterprise Process Management Systems, 2018
The workflow engine is the runtime environment of the WfMS. The workflow engine takes the workflow process model from the process definition tool and enacts the workflow; that is, it creates instances of the workflow process when a trigger event for instance creation occurs. An event is a predefined circumstance the workflow engine listens for: this could be the arrival of an email or the receipt of a leave request form. In an embedded WfMS, the trigger could be a status change of an application transaction. For example, the purchase_order.create event is raised any time a purchase order is created; this event could in turn trigger enactment of the purchase order approval workflow.
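The event-to-instance mechanism described above can be sketched as a minimal dispatcher; the class and method names here are illustrative, not part of any particular WfMS:

```python
from collections import defaultdict

class WorkflowEngine:
    """Minimal sketch: maps trigger events to process definitions and
    creates a new process instance whenever a matching event is raised."""

    def __init__(self):
        self._triggers = {}                 # event name -> process definition name
        self.instances = defaultdict(list)  # process definition -> live instances

    def register(self, event, process_definition):
        """Bind a trigger event to the workflow it should enact."""
        self._triggers[event] = process_definition

    def raise_event(self, event, payload):
        """Enact the bound workflow by creating a process instance."""
        process = self._triggers.get(event)
        if process is None:
            return None  # the engine is not listening for this event
        instance = {"process": process, "payload": payload, "state": "running"}
        self.instances[process].append(instance)
        return instance

engine = WorkflowEngine()
engine.register("purchase_order.create", "purchase_order_approval")
inst = engine.raise_event("purchase_order.create", {"po_id": 4711})
print(inst["process"])  # purchase_order_approval
```

In an embedded WfMS, `raise_event` would be called from the application transaction itself (e.g. inside the purchase-order creation routine), whereas a standalone engine would listen for events such as incoming mail.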
Migrating e-Science Applications to the Cloud: Methodology and Evaluation
Published in Olivier Terzo, Lorenzo Mossucca, Cloud Computing with e-Science Applications, 2017
Steve Strauch, Vasilios Andrikopoulos, Dimka Karastoyanova, Karolina Vukojevic-Haupt
The workflows executed by the workflow engine describe the ordered execution of different tasks such as data preparation, computation, or visualization. In our case, these tasks are realized by web services hosted on an application server. During the execution of a workflow, the workflow engine navigates along the predefined control flow and also interacts with these web services through the service bus; that is, it sends a request for invocation of a web service and receives the results back from the web services. The service bus is also responsible for service discovery and selection if information about concrete services to be used is not available during the workflow deployment step.
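The navigation-and-invocation pattern above can be illustrated with a toy service bus; the class names, capabilities, and the late-binding registry are assumptions for the sketch, not the actual bus used in the evaluation:

```python
class ServiceBus:
    """Toy service bus: routes invocation requests to registered web
    services and falls back to discovery by capability when no concrete
    endpoint was bound at deployment time."""

    def __init__(self):
        self._services = {}   # concrete service name -> callable endpoint
        self._registry = {}   # capability -> concrete service name

    def register(self, name, capability, handler):
        self._services[name] = handler
        self._registry[capability] = name

    def invoke(self, service=None, capability=None, **request):
        if service is None:               # late binding: discover and select
            service = self._registry[capability]
        return self._services[service](**request)

def navigate(bus, control_flow, data):
    """Engine navigation: execute tasks in the predefined control-flow
    order, sending each request through the bus and feeding results forward."""
    for step in control_flow:
        data = bus.invoke(capability=step, payload=data)["payload"]
    return data

bus = ServiceBus()
bus.register("prep-svc", "prepare", lambda payload: {"payload": payload + ["prepared"]})
bus.register("comp-svc", "compute", lambda payload: {"payload": payload + ["computed"]})
bus.register("vis-svc", "visualize", lambda payload: {"payload": payload + ["visualized"]})
result = navigate(bus, ["prepare", "compute", "visualize"], [])
print(result)  # ['prepared', 'computed', 'visualized']
```

The point of routing every call through the bus is that the workflow only names a capability (data preparation, computation, visualization); which concrete hosted service answers can be decided as late as invocation time.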
An ontology-guided approach to process formation and coordination of demand-driven collaborations
Published in International Journal of Production Research, 2023
Nikolai Kazantsev, Michael DeBellis, Qudamah Quboa, Pedro Sampaio, Nikolay Mehandjiev, Iain Duncan Stalker
The suggested ontology-guided approach consists of four steps. First, the product and process specifications are derived from the Bill of Materials and the Bill of Processes. They are asserted as classes and properties of the collaboration ontology, which builds on product assembly requirements, process steps, input/output resources, and semantic rules. Second, the semantic module interprets resource requirements as suggested links between process steps. Third, the semantic links between the process steps are interpreted as a potential process by converting the connected clusters of classes into BPMN notation. The control flows and logical junctions (AND, XOR) are specified based on resource dependencies. Fourth, the newly generated process is uploaded to a workflow engine. Figure 5a shows the conceptual model of the proposed approach.
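The second and third steps can be sketched as follows: links between process steps are suggested wherever one step's output resource matches another step's input resource, and a step with several independent successors implies an AND split. The step names and resource sets below are invented for illustration:

```python
def derive_links(steps):
    """Interpret resource requirements as suggested links: step A precedes
    step B when one of A's output resources matches one of B's inputs."""
    links = []
    for a, spec_a in steps.items():
        for b, spec_b in steps.items():
            if a != b and spec_a["outputs"] & spec_b["inputs"]:
                links.append((a, b))
    return links

# Hypothetical process steps with input/output resources (from a Bill of
# Materials / Bill of Processes).
steps = {
    "cut_panel":   {"inputs": {"sheet"}, "outputs": {"panel"}},
    "drill_panel": {"inputs": {"panel"}, "outputs": {"drilled_panel"}},
    "paint_panel": {"inputs": {"panel"}, "outputs": {"painted_panel"}},
}
links = derive_links(steps)
# cut_panel feeds both successors, which are mutually independent,
# suggesting an AND split after cut_panel in the generated BPMN model.
successors = sorted(b for a, b in links if a == "cut_panel")
print(successors)  # ['drill_panel', 'paint_panel']
```

Converting such a link set into actual BPMN XML and choosing between AND and XOR junctions would additionally consult the ontology's semantic rules, which this sketch omits.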
Optimal Data Placement for Scientific Workflows in Cloud
Published in Journal of Computer Information Systems, 2023
Here, the SGL values denote the levels of authentication, confidentiality, and integrity that a given VM will possess. Each SGL is represented as a value between 0 and 1, inclusive. The workflow parser iterates over the workflow’s tasks and launches the workflow engine. Along with the security constraints, an optimized schedule on a suitable VM must be determined. The heuristic approach first selects a security zone; a resource-mapping schedule is then generated by the genSchedule algorithm. The PSRD optimization algorithms then optimize this generated schedule. Conversely, another metric called SD is used from the user’s perspective; like the SGL, it is represented as a value between 0 and 1.
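The zone-selection-then-mapping idea can be sketched as a greedy heuristic: restrict each task to VMs whose SGL values dominate the task's SD values, then pick the cheapest feasible VM. This is a hypothetical reading of genSchedule; the field names, cost model, and greedy choice are assumptions, not the paper's algorithm:

```python
def gen_schedule(tasks, vms):
    """Greedy sketch: for each task, keep only VMs whose security guarantee
    (SGL) meets the user's security demand (SD) in every dimension, then
    map the task to the cheapest feasible VM."""
    schedule = {}
    for task in tasks:
        feasible = [
            vm for vm in vms
            if all(vm["sgl"][dim] >= task["sd"][dim]
                   for dim in ("auth", "conf", "integ"))
        ]
        if not feasible:
            raise ValueError(f"no VM satisfies security demand of {task['name']}")
        schedule[task["name"]] = min(feasible, key=lambda vm: vm["cost"])["name"]
    return schedule

# Illustrative VMs and tasks; all SGL/SD values lie in [0, 1].
vms = [
    {"name": "vm_low",  "cost": 1, "sgl": {"auth": 0.3, "conf": 0.3, "integ": 0.3}},
    {"name": "vm_high", "cost": 5, "sgl": {"auth": 0.9, "conf": 0.9, "integ": 0.9}},
]
tasks = [
    {"name": "t_public",    "sd": {"auth": 0.2, "conf": 0.1, "integ": 0.2}},
    {"name": "t_sensitive", "sd": {"auth": 0.8, "conf": 0.8, "integ": 0.7}},
]
schedule = gen_schedule(tasks, vms)
print(schedule)  # {'t_public': 'vm_low', 't_sensitive': 'vm_high'}
```

A subsequent optimization pass (as the PSRD algorithms do for the generated schedule) could then rearrange this initial mapping, e.g. to reduce data-transfer cost, while preserving the SGL >= SD feasibility constraint.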
Concept and basic framework prototype for a flexible and intervention-independent situation recognition system in the OR
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2022
Denise Junger, Bernhard Hirt, Oliver Burgert
The Situation knowledge created from sensor and process knowledge is passed to the Workflow Engine in the Workflow Management to control the process (workflow model in the Camunda BPMN Workflow Engine (Wiemuth and Burgert 2019)) based on the most likely detected phase. For the communication with Camunda, a middleware for workflow engine access, adapted from Wiemuth and Burgert (2019), is implemented and runs on the server. The workflow management handler can use the functions of the middleware (e.g. get interventions, complete a task) via the middleware URL (RESTful). Variables are transferred in JSON format. The middleware itself implements its functions using Flask and uses the Camunda REST API (Camunda Services GmbH 2021) to access its information via URLs. The process models of the interventions need to be deployed manually in Camunda. The models contain user tasks to control the workflow within the Camunda engine. The recognition of the first task in a procedure automatically triggers the start of the process in the OR. If a process instance is already running, the system checks whether it is reasonable to complete the currently running task in order to automatically start the recognised one. Since the Camunda BPMN Workflow Engine does not provide the required management of running activity instances (e.g. after AND gateways, all tasks are automatically considered as ‘running’), these are managed via the workflow database simulation. Additionally, the individual SPM is stored by saving the start and end times of the running and completed tasks (i.e. the individual situation). Via an SDC connector implemented with sdclib (GitHub, Inc. 2021), the most probable phase and RSD are published using named pipes. The Situation Subscription Management component simulates a context-aware system that subscribes to the phase and the RSD metric to be informed in case of changes.
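The middleware's "complete a task" function can be illustrated by building the corresponding Camunda REST API request. The endpoint path and the typed variable format follow the Camunda 7 REST API; the base URL, task id, and variable names are illustrative, and the actual middleware from the paper wraps such calls behind its own Flask routes:

```python
import json

CAMUNDA = "http://localhost:8080/engine-rest"  # illustrative Camunda 7 base URL

# Camunda's typed variable format expects an explicit type per value.
TYPE_MAP = {"str": "String", "bool": "Boolean", "int": "Integer", "float": "Double"}

def complete_task_request(task_id, variables):
    """Build URL and JSON body for POST /task/{id}/complete of the
    Camunda REST API; variables are transferred in JSON format."""
    url = f"{CAMUNDA}/task/{task_id}/complete"
    body = {
        "variables": {
            name: {"value": value, "type": TYPE_MAP[type(value).__name__]}
            for name, value in variables.items()
        }
    }
    return url, json.dumps(body)

# E.g. completing the running user task when the next phase is recognised.
url, body = complete_task_request("42", {"phase": "incision", "confirmed": True})
print(url)  # http://localhost:8080/engine-rest/task/42/complete
```

In the deployed system, such a request would be sent by the middleware (e.g. with an HTTP client inside a Flask handler), which is what lets the recognised phase drive the user tasks of the Camunda process model.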