It feels normal and right to think that a business process should be modeled and designed from start to finish in a single place. Make it one workflow, and everyone knows what to do, you can monitor the workflow, and if you need to make changes, you know exactly where to look.
Except that life is not simple. Business is not simple. Processes are not simple. Quite often, an elaborate process all but demands to be broken up into multiple interconnected workflows.
The Illusion of a Single Process
For illustration’s sake, consider a process common to nearly every organization: the process of onboarding new employees.
Except that it’s not really one process.
The term “employee onboarding” means very different things to different parts of an organization. To Human Resources, onboarding involves getting an employment agreement signed, scheduling a start date, and so on. The IT department sees something else entirely: provisioning accounts, assigning a user to groups, granting permissions, procuring an email account, setting up logon privileges to the CRM system, and so on. Security thinks it’s about a background check and issuing a badge and a parking permit. Accounting cares about payroll details. And that’s not an exhaustive list.
They’re all important, they’re all detailed, and they all require domain-specific knowledge – but they happen separately. Making them all happen usually involves HR walking a new employee around to various departments, a barrage of email, or both.
A Problem to Solve or a Situation to Embrace?
When faced with the above reality of “onboarding” meaning many things in many places, it would be easy to conclude that this is a problem to solve, and the solution is to get all of these departments together to create a unified, omnibus onboarding process that incorporates all of these things.
But coordinating the schedules, priorities, budgets, and missions of multiple departments is difficult and time-consuming at best. You might be able to get each department to cede some of what they do to an omnibus automation agent, but who gets to administer it? IT, because it’s software? HR, because it’s personnel-related?
It’s not just about trust; there’s expertise to consider as well. Does HR know how to create Active Directory users, assign them to the right groups, and provision email and communication services for them? Does IT know how to ensure state and federal employment laws are being observed?
The process is broken up into multiple places for a reason. Unifying it will take more time and fight the organization’s DNA. There are times when shaking up the status quo has merit, but is this one of them?
If Boundaries Aren’t a Problem, Don’t Solve Them – Obey Them
This process cries out for automation, but it’s not so much broken as it is inefficient. This is a situation where automating it the wrong way could actually make things worse.
The ideal way to solve this mirrors the real world. HR already has someone walk a new employee around to Security to get a badge, to IT to get a username and password, to Accounting to set up payroll, and so on. And that’s after doing a number of things within HR itself. At each stop, someone knowledgeable does their part.
Let’s apply that observation to process automation.
If we can get each department to create individual onboarding processes that cover their respective responsibilities, we can craft an “overprocess”, one that calls each of these subprocesses in turn (or even in parallel) to do their part. It will take the place of email and a first-day walking tour of departments.
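As a rough sketch of how such an overprocess might be structured – the department functions below are hypothetical stand-ins for real subprocesses, not any particular product’s API – it could run the HR subprocess first (since others depend on its output) and then fan out the independent steps in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subprocess stubs. In a real workflow product, each of these
# would be a separately designed workflow owned by its department.
def hr_onboarding(employee):
    return {"start_date": "2024-07-01", "manager": "jdoe"}

def it_provisioning(employee):
    # Derive a username from the employee's name (illustrative convention).
    return {"username": employee["name"].lower().replace(" ", ".")}

def security_badging(employee):
    return {"badge_issued": True}

def onboard(employee):
    """The overprocess: HR runs first because later steps depend on it,
    then the independent subprocesses run in parallel."""
    results = {"hr": hr_onboarding(employee)}
    with ThreadPoolExecutor() as pool:
        futures = {
            "it": pool.submit(it_provisioning, employee),
            "security": pool.submit(security_badging, employee),
        }
        for dept, future in futures.items():
            results[dept] = future.result()
    return results
```

The overprocess stays readable precisely because it only names the stops on the tour; every department keeps ownership of what happens at its own stop.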
Encapsulation Has Other Uses
Not only is the above approach politically nice, but it also makes it easier to understand (and recommend improvements for) new employee onboarding as a whole. A single, unified process design would get so bogged down in the details that the big picture would be lost. But an overprocess that hits all the major steps and leaves the details to subprocesses puts the focus in the right place. Similarly, having IT focus on a user provisioning subprocess allows them the right level of focus without being caught up in what Accounting, Security, HR, and other departments must do.
For other processes that take place in a single department, this still has uses. Detailed execution logic might still be offloaded to a callable subprocess, leaving the overprocess easier to understand; the overprocess becomes the “what” workflow, and the subprocesses become the “how” workflows.
And, of course, those subprocesses may well turn out to be usable by multiple overprocesses – and I think everyone is fond of reuse when possible.
Passing Data and Dependencies
Given that this is workflow, the odds are that dependencies exist and sequencing matters. In the onboarding example, IT needs to know a new employee’s manager, start date, and so on in order to assign group memberships, directory property values, and the like. Security probably ties badge information to that same directory. That’s fine, and handling those dependencies is the overprocess’s job.
There does need to be an established protocol for passing requests to subprocesses and getting back return information. This is very product-dependent, but one (or both) of two things will happen:
1. The subworkflows return a block of data back to the overprocess. This could be multiple output parameters or a single XML datagram, for example.
2. The subworkflows might return nothing, or perhaps only a single success/pending/fail value of some sort, in which case the overprocess will need to fetch new data (e.g., from the directory) when needed before passing that information to called subprocesses.
The first option requires less work on the part of the workflow designer and is much easier to understand. If your tools support it, use them. But the second method can be made to work.
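To make the two options concrete, here is a minimal sketch of both protocols. Every name here is invented for illustration, and a plain dict stands in for the directory or other system of record:

```python
# Option 1: the subprocess hands a block of output data straight back
# to the overprocess (output parameters, or a single datagram).
def it_provisioning_v1(employee):
    username = employee["name"].lower().replace(" ", ".")
    return {"status": "success", "username": username, "mailbox_ready": True}

# Option 2: the subprocess returns only a status value. Its results land in
# a system of record (a dict standing in for the directory), which the
# overprocess must query itself before calling the next subprocess.
_directory = {}

def it_provisioning_v2(employee):
    _directory[employee["id"]] = {
        "username": employee["name"].lower().replace(" ", "."),
    }
    return "success"

def fetch_from_directory(employee_id):
    # The extra step option 2 forces on the overprocess.
    return _directory[employee_id]
```

With option 2, every hand-off between subprocesses costs the overprocess an extra fetch step, which is exactly why the first protocol is easier to design and to read.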
How Not To Do Distributed Workflow
There’s actually a third technique, but it’s not especially stable. Let’s describe it so you can understand why to avoid it.
You might have an overprocess that can’t actually call a subprocess at all, or if it can, it can’t wait until the subprocess completes to check what happened. In that case, the only option would be to write some data out to a list or table in a store that can be explicitly polled or can trigger an event.
When that happens, you need to understand that many things outside the workflows could prevent that data from being written successfully, being noticed, or triggering a callback event. If any of that fails, the overprocess might wait forever.
You can establish timeouts in your designs and insert steps for handling them, but then you start transforming processes that should look like business diagrams into things that look like developer flowcharts.
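For illustration only, here is a minimal polling loop with a timeout, assuming a hypothetical check_done callable that tests the external store. Notice that the timeout branch is pure plumbing – the kind of developer-flowchart clutter that has no business in a process diagram:

```python
import time

def wait_for_completion(check_done, timeout_s=300.0, poll_every_s=5.0):
    """Poll an external store until a subprocess reports completion.

    check_done: a callable returning True once the subprocess has written
    its result somewhere visible. Returns False if the deadline passes,
    at which point the overprocess must escalate, retry, or fail.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_done():
            return True
        time.sleep(poll_every_s)
    # Timeout branch: this is where the extra error-handling steps creep in.
    return False
```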
Distributed processes only appear to be more work. The design takes more effort, but that effort can be divided and distributed. Everything else about it is easier, from the human factors that go into building and launching the solution, to the ability to make changes later, to the degree to which the solution will be adopted and put to use. It’s certainly worth a try.
About Mike “Fitz” Fitzmaurice
Mike “Fitz” Fitzmaurice is Vice President of Workflow Technology for Nintex. He joined Nintex, the world leader in workflow and content automation, in 2008 and spends a good portion of his time on the road, talking to people about the benefits of automating business processes.