Are self optimizing workflows possible?

Last published at: July 6th, 2021

FlowWright engines are architected as worker processes, or robotic workers, that perform work for workflow processes.  Whether the engine is processing workflows, forms, or ESB events, the work is handled by robotic workers.  Robotic workers are designed for performance, and they release all resources they use, such as CPU, RAM, and any other resources, back to the operating system (OS) when finished.

As FlowWright processes more and more workflows, its performance increases.  Microsoft .NET itself is designed to perform better as an application warms up: the application might take some initial load time, but once it is loaded, the engines are designed for high performance.

So how does FlowWright optimize itself?

As you may already know, there are many user-configurable settings for optimization, both globally and at the level of each process.  As FlowWright engines process workflows, they are able to learn from the processed workflows and self-optimize.  Most of the technology and algorithms behind this process are proprietary.

Why do processes need to optimize?

According to the BPM life-cycle, it's all about designing a process, then executing, analyzing, and optimizing it.  Here are some of the main reasons to optimize:

  • Redundant steps
  • Path optimizations
  • Changes to flow
  • Break up of sub-workflows
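As a simple illustration of the first point (redundant steps), the sketch below collapses consecutive duplicate steps in a linear workflow. This is a hypothetical example for clarity only; the step names and data model are invented here, and FlowWright's actual optimization algorithms are proprietary.

```python
# Hypothetical sketch: removing redundant consecutive steps from a
# linear workflow, represented as an ordered list of step names.
# This is NOT FlowWright's internal model or algorithm.

def remove_redundant_steps(steps):
    """Collapse consecutive duplicate steps into a single step."""
    optimized = []
    for step in steps:
        # Keep a step only if it differs from the previous kept step.
        if not optimized or optimized[-1] != step:
            optimized.append(step)
    return optimized

workflow = ["validate", "approve", "approve", "notify", "notify", "archive"]
print(remove_redundant_steps(workflow))
# → ['validate', 'approve', 'notify', 'archive']
```

Real process optimization also has to account for branching, step parameters, and side effects, which is why vendors treat these algorithms as differentiating intellectual property.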

FlowWright's development team is always looking to improve the algorithms within the product, mainly for performance.  As more and more Artificial Intelligence and Machine Learning data are integrated into the product, the self-optimizing algorithms for processes will continue to improve.

The next version of FlowWright will improve its internal graph library, the library it uses to compute paths within a workflow process.  Today, most software platforms use graphs for such computations, especially with processes, since processes are all about vertices and edges.  Graph libraries have become popular because of social media sites such as Facebook and LinkedIn.  These sites carry large amounts of data, and all of this data is related: on Facebook you have friends, but your friends might also be related to some of your brother's friends.  All of this data is represented and stored as graphs for high-performance computations.
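To make the vertices-and-edges idea concrete, here is a minimal sketch of a workflow modeled as a directed graph, with a depth-first search that enumerates every simple path from the start step to the end step. The step names and edges are invented for illustration; FlowWright's actual graph library is internal and not shown here.

```python
# Hypothetical sketch: a workflow as a directed graph of steps (vertices)
# and transitions (edges), with simple-path enumeration via depth-first
# search. Names are illustrative, not FlowWright's internal API.

from collections import defaultdict

def all_paths(edges, start, end):
    """Enumerate every simple path from start to end."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    paths = []

    def dfs(node, path):
        if node == end:
            paths.append(path)
            return
        for nxt in graph[node]:
            if nxt not in path:  # skip visited steps to avoid cycles
                dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths

edges = [
    ("start", "review"), ("start", "auto-approve"),
    ("review", "approve"), ("approve", "end"),
    ("auto-approve", "end"),
]
print(all_paths(edges, "start", "end"))
# → [['start', 'review', 'approve', 'end'], ['start', 'auto-approve', 'end']]
```

Enumerating paths like this is the kind of computation a graph library makes fast; production libraries add indexing and caching so path queries stay cheap even on large, highly connected graphs.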