Process Mining: The process analyst's mission is to continuously identify ways to improve the organization's operations and translate those improvements into business gains.
We do this by looking for opportunities to reduce costs and waste and to improve the quality of the organization's products and services and the customer experience around them.
This assessment involves understanding a business process holistically: who is responsible for each task, the operational steps and their sequence, the time elapsed in execution, the cost of the resources and of running the process, the distribution of work, the availability of resources, bottlenecks, and other problems.
Classical process analysis typically relies on observing work being done, collecting data at process control points, and gathering input from people to build a comprehensive map of how the workflow is performed (AS IS), so that it is then possible to establish a diagnosis, propose solutions (the famous TO BE), and finally carry out the investments and projects needed to implement them.
This current process analysis (AS IS) activity has always been a topic of great debate among professionals involved in the process improvement discipline.
W. Edwards Deming said that you can't manage what you don't measure and can't measure what you don't know. At the same time, care must be taken with the depth and effort spent on this activity to avoid the well-known risk of analysis paralysis (when the organization cannot move from the analysis phase to a diagnosis and a compelling solution proposal, and ends up losing momentum for the process transformation).
Even with all the effort of surveying, structuring, and studying the collected data, it is still common for this view to miss variations, bottlenecks, and rework in the process, because they were never measured or reported by those involved.
But the process analyst can now count on a technological ally in this endeavor.
With organizations increasingly using information systems and generating thousands of data records about the transactions carried out in each task, we can find in data science an essential tool for discovering, understanding, analyzing, and diagnosing processes: Process Mining.
Process Mining is the combination of applying data science techniques (to collect, treat and analyze large volumes of information) with the discipline of business process management (BPM), which gives data the context necessary to understand how the actions performed in the systems collaborate in the value generation flow of the product or service, thus making it possible to map all the routes executed by the process in each case through their digital footprints.
What Is A Digital Footprint?
A digital footprint in process mining is a record made up of three pieces of data:
- A case identifier (which process instance the event belongs to)
- An activity reference (what was done)
- A timestamp (when the activity happened for this case)
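The three fields above can be sketched as a single event record. A minimal sketch in Python, where the field names (`case_id`, `activity`, `timestamp`) and values are illustrative conventions, not a standard:

```python
from datetime import datetime

# An illustrative event record: the three pieces of a digital footprint.
# Field names and values are made up for the example.
event = {
    "case_id": "INV-2023-0001",                 # which process instance (the case)
    "activity": "Register invoice",             # what was done
    "timestamp": datetime(2023, 5, 4, 9, 30),   # when it was done
}

# An event log is simply a collection of such records across many cases.
event_log = [event]
```

Real tools add context attributes (executor, supplier, amount) to each record, but these three fields are the minimum needed to reconstruct a process.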
How About An Example?
Thinking about a typical purchasing process, let’s say that the process usually has these characteristics:
- The invoice arrives electronically, for example, by email;
- With the invoice in hand, a person collects the supplier’s data and accesses the ERP to check whether the supplier is already registered;
- Next, they access the issuing body’s website to validate the invoice data;
- Then they locate the order in the ERP using the invoice and supplier data;
- Then they register the invoice data against the respective order;
- The invoice then goes through approval by the designated people;
- And finally, after approval, the payment can be scheduled.
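The steps above can be sketched as the event log of one hypothetical case; every name and timestamp here is illustrative:

```python
from datetime import datetime, timedelta

# Activity names mirror the purchasing steps described above; the case ID
# and timestamps are invented for the example.
start = datetime(2023, 5, 4, 9, 0)
steps = [
    "Receive invoice by email",
    "Check supplier registration in ERP",
    "Validate invoice on issuing body's website",
    "Locate purchase order in ERP",
    "Register invoice against order",
    "Approve invoice",
    "Schedule payment",
]
case_log = [
    {"case_id": "PO-1001", "activity": a, "timestamp": start + timedelta(hours=i)}
    for i, a in enumerate(steps)
]

# Sorting by timestamp recovers the executed route (the "trace") for this case.
trace = [e["activity"] for e in sorted(case_log, key=lambda e: e["timestamp"])]
```

In practice these events would be scattered across the email system, the ERP, and the issuing body's portal; process mining stitches them together by the shared case identifier.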
Each of these actions tends to leave its digital footprints on different systems, along with other information that can help analyze the context of each case – such as the name of the executors, the suppliers that issued the invoices, the origin of the documents, the types of purchases, the areas that created the orders, etc.
Process mining is the solution that helps us transform the large volumes of data produced by process execution, placing them on a timeline that, analyzed as a whole, lets us identify not only the cases that were executed correctly but also:
- How often are activities done in a different order?
- What other activities should the mapping have foreseen, which could mitigate problems or rework caused by a poorly done task?
- At what points does the process stop most often?
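A minimal sketch of how such questions can be answered from an event log: counting the distinct activity orderings (variants) reveals how often cases deviate, and averaging the waiting time between consecutive activities points to bottlenecks. The log below is tiny and entirely made up for illustration:

```python
from collections import Counter
from datetime import datetime

# Illustrative event log for three cases; all IDs, names, and times are invented.
log = [
    ("C1", "Receive invoice",  datetime(2023, 5, 1, 9, 0)),
    ("C1", "Register invoice", datetime(2023, 5, 1, 10, 0)),
    ("C1", "Approve invoice",  datetime(2023, 5, 3, 10, 0)),
    ("C2", "Receive invoice",  datetime(2023, 5, 2, 9, 0)),
    ("C2", "Approve invoice",  datetime(2023, 5, 2, 11, 0)),  # approved before registering
    ("C2", "Register invoice", datetime(2023, 5, 2, 12, 0)),  # out-of-order variant
    ("C3", "Receive invoice",  datetime(2023, 5, 3, 9, 0)),
    ("C3", "Register invoice", datetime(2023, 5, 3, 9, 30)),
    ("C3", "Approve invoice",  datetime(2023, 5, 5, 9, 30)),
]

# Group events into per-case traces, ordered by timestamp.
traces = {}
for case, activity, ts in sorted(log, key=lambda e: e[2]):
    traces.setdefault(case, []).append((activity, ts))

# Variant analysis: how often does each distinct ordering of activities occur?
variants = Counter(tuple(a for a, _ in t) for t in traces.values())

# Bottlenecks: average waiting time (hours) between consecutive activities.
waits = {}
for t in traces.values():
    for (a1, t1), (a2, t2) in zip(t, t[1:]):
        waits.setdefault((a1, a2), []).append((t2 - t1).total_seconds() / 3600)
avg_wait = {pair: sum(v) / len(v) for pair, v in waits.items()}
```

Here the variant counter exposes the case where approval happened before registration, and the average wait shows the long gap between registration and approval as the likeliest bottleneck.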
And with context parameters, we can create analysis filters to identify the specific characteristics that lead to variations occurring.
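As a sketch of such a filter, context attributes can be joined to the cases flagged by the analysis to see which characteristics they share; all names and values here are hypothetical:

```python
# Hypothetical context attributes attached to each case.
case_attrs = {
    "C1": {"supplier": "Acme", "area": "Maintenance"},
    "C2": {"supplier": "Beta", "area": "IT"},
    "C3": {"supplier": "Acme", "area": "IT"},
}

# Cases previously flagged as out-of-order variants (illustrative).
out_of_order = {"C2"}

# Filter: which suppliers are associated with the deviating cases?
suppliers = {case_attrs[c]["supplier"] for c in out_of_order}
```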