Are self-optimizing workflows possible?

Learn how FlowWright optimizes by processing workflows

Last published at: March 6th, 2025

FlowWright engines are architected as worker processes, or robotic workers, that perform work on behalf of workflow processes. Whether the engine is processing workflows, forms, or ESB events, the work is carried out by robotic workers. Robotic workers are designed for performance, and when a worker finishes, it returns all of the resources it used, such as CPU and RAM, to the operating system (OS).
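The worker model described above can be sketched in a few lines. This is an illustrative example only, not FlowWright's implementation: each unit of work runs in a short-lived worker process, so when the process exits, the OS reclaims everything it held.

```python
# Illustrative sketch of the robotic-worker pattern (not FlowWright's code):
# each job runs in its own process, which exits when done so the OS
# reclaims all CPU and RAM the worker used.
from multiprocessing import Process, Queue

def robotic_worker(job, results):
    # Simulate processing one workflow item.
    results.put(f"processed:{job}")
    # When this function returns, the worker process exits and the OS
    # reclaims every resource it held.

def run_engine(jobs):
    results = Queue()
    workers = [Process(target=robotic_worker, args=(j, results)) for j in jobs]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # worker exits here; its resources return to the OS
    return sorted(results.get() for _ in jobs)

if __name__ == "__main__":
    print(run_engine(["wf-1", "wf-2"]))
```

The job names and function names here are invented for illustration; the point is the lifecycle, where a worker's resources are released back to the OS when its process ends.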

As FlowWright processes more workflows, its performance improves. Microsoft .NET itself is designed to perform better over time: the application may take some initial load time, but once it is loaded, its engines are designed for high performance.

So, how does FlowWright optimize itself?

As you may already know, many user-configurable settings exist, both globally and at the level of each individual process. As FlowWright engines process workflows, these engines can learn from the processed workflows to self-optimize. Most of the technology behind this process, and the algorithms used, are proprietary.
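The layering of global and process-level settings can be pictured as a simple lookup, where a process-level value, when present, overrides the global default. The setting names below are invented for illustration and are not FlowWright's actual configuration keys.

```python
# Hypothetical sketch of layered settings: a process-level value overrides
# the global default. Setting names are invented, not FlowWright's keys.
GLOBAL_SETTINGS = {"maxThreads": 8, "stepTimeoutSec": 300}

def effective_setting(name, process_settings):
    """Return the process-level override if set, else the global default."""
    return process_settings.get(name, GLOBAL_SETTINGS[name])

# A single process overrides only its timeout; everything else falls back
# to the global defaults.
proc = {"stepTimeoutSec": 60}
print(effective_setting("stepTimeoutSec", proc))  # → 60
print(effective_setting("maxThreads", proc))      # → 8
```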

Why do processes need to be optimized?

According to the BPM life-cycle, it's all about designing, executing, analyzing, and optimizing the process.  Here are some of the main reasons to optimize:

  • Eliminating redundant steps
  • Optimizing execution paths
  • Adapting to changes in the flow
  • Breaking processes into sub-workflows
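To make the first item above concrete, here is a generic illustration of removing redundant steps from a linear workflow. FlowWright's actual optimization algorithms are proprietary, so this sketch only shows the general idea: two consecutive steps that perform the same operation are collapsed into one.

```python
# Generic illustration only -- FlowWright's real algorithms are proprietary.
# Collapse consecutive duplicate steps in a linear workflow definition.
def remove_redundant_steps(steps):
    """Return the workflow with consecutive duplicate steps removed."""
    optimized = []
    for step in steps:
        if not optimized or optimized[-1] != step:
            optimized.append(step)
    return optimized

workflow = ["validate", "validate", "transform", "approve", "approve", "archive"]
print(remove_redundant_steps(workflow))
# → ['validate', 'transform', 'approve', 'archive']
```

A real optimizer would of course have to prove that the duplicate step has no side effects before removing it; this sketch assumes the steps are interchangeable by name.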

FlowWright's development team is always looking to improve algorithms within the product for several reasons, mainly related to performance.   As more and more Artificial Intelligence and Machine Learning data are integrated into the product, self-optimizing algorithms for processes will continue to improve.

The next version of FlowWright will improve its internal graph library, which computes paths within a workflow process. Today, most software platforms use graphs for such computations, and processes are a natural fit because they are fundamentally composed of vertices and edges. Graph libraries have become popular because of social media sites like Facebook and LinkedIn. These sites carry large amounts of data, but all of that data is related: on Facebook, you have friends, and your friends might also be related to some of your brother's friends. All of this data is represented and stored as graphs for high-performance computation.
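The idea of a workflow as vertices and edges can be shown with a minimal example. The sketch below (not FlowWright's graph library; the step names are invented) stores a workflow as a directed graph and enumerates every path from the start step to the end step, which is the kind of path computation described above.

```python
# Minimal sketch: a workflow as a directed graph of vertices (steps) and
# edges (transitions), with path enumeration between two steps.
from collections import defaultdict

def all_paths(edges, start, end):
    """Enumerate every path from start to end in a directed acyclic graph."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    def walk(node, path):
        path = path + [node]
        if node == end:
            return [path]
        return [p for nxt in graph[node] for p in walk(nxt, path)]

    return walk(start, [])

# A tiny approval workflow with two branches between start and end.
edges = [("start", "review"), ("start", "auto-approve"),
         ("review", "approve"), ("auto-approve", "approve"),
         ("approve", "end")]
print(all_paths(edges, "start", "end"))
```

Enumerating paths this way is exponential in the worst case; production graph libraries use more careful algorithms, but the vertex-and-edge representation is the same.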
