Every company works with data. Productivity, prices, stock, you name it – anything that makes its way into the next report and helps you take the next step is, in the end, just numbers. With the right strategy, you can make these numbers work for you.
A case from the trenches
We dedicated the previous blog post to the value and importance of your data. You might already be imagining how those numbers could save you a lot of work, make your company more successful, or “just” simplify your decision-making process. It’s time to do something about it.
But then what? It might be very tempting to go for the new and shiny, to start an Artificial Intelligence (AI) project because that is the future and you are afraid to miss the boat. But AI requires a very high level of digitization maturity and works best on very specific use cases.
The technology has some known drawbacks, such as requiring large amounts of (mostly labeled) data. It also lacks auditability, a real and valid concern for many business leaders.
Focusing on this faraway horizon may lead you to over-reach and over-spend, and might actually backfire: with no tangible results to show, it could reduce the willingness to invest in data and data technologies. You will miss the low-hanging fruit that would add value to your business and help champion data in your organization.
Through a real use case at one of our customers, I will try to explain how we managed to create value from data by tackling a use case that mattered.
The use case
This customer is responsible for the operations of multiple industrial sites, each producing 24/7. At the beginning of each day, engineers gather data, then calculate and analyze the shortfalls in production from the day before: What was the reason? Was it planned? What can be done about it? Will this impact overall production?
This data needs to be revised and approved by the operational and reporting team and reported officially to management.
Until very recently, they would extract the data from their process data historian and run the process largely by hand in Microsoft Excel.
A repetitive and error-prone process, with little value in the data wrangling. A perfect candidate for automation!
To support this customer, we deployed a solution that would:
- Fetch the data automatically from the OSIsoft PI data historian
- Run an algorithm to detect production shortfalls and faulty input. The algorithm also tries to infer the cause of each production shortfall from previous related cases.
- Present the results in a Web UI, including history and raw data
- Present the detected production shortfalls to engineers and operators for analysis and validation
- Extract the results into the right format for exporting to the reporting system
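To make the detection and cause-inference steps concrete, here is a minimal sketch. Everything in it is an illustrative assumption: the function names, the simple threshold rule, and the nearest-match cause lookup are hypothetical stand-ins, not the actual algorithm the customer's process engineers designed, and the real solution reads its data from the OSIsoft PI historian rather than from in-memory lists.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Shortfall:
    hour: int
    actual: float
    expected: float
    inferred_cause: Optional[str] = None

def detect_shortfalls(readings, expected, tolerance=0.05):
    """Flag hours where production fell more than `tolerance`
    below the expected value. Illustrative threshold rule only."""
    return [
        Shortfall(hour, actual, expected)
        for hour, actual in enumerate(readings)
        if actual < expected * (1 - tolerance)
    ]

def infer_cause(shortfall, previous_cases):
    """Naive cause inference: reuse the cause of the previous case
    whose relative production loss is closest to this one."""
    if not previous_cases:
        return None
    loss = (shortfall.expected - shortfall.actual) / shortfall.expected
    closest = min(previous_cases,
                  key=lambda case: abs(case["relative_loss"] - loss))
    return closest["cause"]

# Example: 24 hourly readings against a 100 t/h target (made-up numbers)
readings = [100.0] * 24
readings[6] = 60.0   # maintenance window
readings[15] = 80.0  # partial outage

history = [
    {"relative_loss": 0.40, "cause": "planned maintenance"},
    {"relative_loss": 0.18, "cause": "feed supply issue"},
]

for s in detect_shortfalls(readings, expected=100.0):
    s.inferred_cause = infer_cause(s, history)
    print(s.hour, s.actual, s.inferred_cause)
```

In the deployed solution, results like these feed the Web UI where engineers analyze and validate them before export to the reporting system.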
Now, engineers and operators can come to the office in the morning, and the data about the production shortfalls will be ready for them to analyze and validate, resulting in:
- A less work-intensive process
- Better accuracy
- Easier analysis based on charts
- A history of production shortfalls at their fingertips
Since the first version, we have kept refining the process, adding new algorithms and new mechanisms to streamline even more of their operations (for example, automatically producing production reports from the production data).
What can we take from this?
The algorithm for analyzing production data is not based on machine learning or fancy AI techniques. It is a conventional algorithm, designed together with the process engineers.
But it delivers exactly what the customer wants and provides tremendous value to the business.
While running complex machine learning algorithms might add some value in the future, focusing on them right from the start would have diverted us from what we really wanted to achieve: