HOW TO: achieve a performance boost with the PI Web API

Author: Tamara Nagy
May 2020

At Cuurios we are keen to develop software solutions that use your process data to deliver valuable insights. The OSIsoft PI Asset Framework (PI AF) provides a hierarchical, asset-centric model of data with detailed history, which is indispensable for monitoring and further analysing vital production data in industry. That means millions of records of real-time and historical data. When processed and displayed right, this data is invaluable: it provides not only useful insight but also opportunities for better control and efficiency.

Have you chosen wisely?
Having a database filled with vital information is no more than a pile of values until the moment you put that data to work. Every number, every date, every remark can be useless, or on the contrary: it can help your company work smarter, not harder.
Making the right connections, creating useful relations and calculations, and transforming your data into concrete actions is what our motto, data to action, means. A poor choice of data to process means poor, sometimes even useless insights. Modern technology provides you the tools to make your company's job easier; an overflow of data, on the other hand, will just make everyone's life harder.

Let’s get technical
During our several encounters with PI AF (using the Web API) we learned some useful lessons and tricks. For those with IT experience, it is not all too surprising that the extended use of a framework also means dealing with its limitations and shortcomings. In the case of PI AF, the main obstacles we had to overcome were related to the limits on search queries and result sets, and performance issues due to the excessive amount of information.

The use-case
One of our use cases was monitoring and handling inhibition values on offshore gas production platforms; more precisely, detecting when an inhibition for any checkpoint is turned on, and tracking its changes until it is turned off. The added value lies in a well-managed and monitored safety system.

Finding the PI points
These checkpoints do not belong to just one asset or even one asset type; in fact, they can be found all over the asset tree. This is where the PI AF built-in search query comes to the rescue.
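As an illustration, such a search could go through the PI Web API's indexed search endpoint. The sketch below only builds the request URLs; the host name `my-pi-server` and the query string are placeholders, and the exact query syntax depends on your PI Web API version and naming conventions:

```python
from urllib.parse import urlencode

# Hypothetical host; replace with your own PI Web API server.
BASE_URL = "https://my-pi-server/piwebapi"

def build_search_url(query: str, count: int = 1000, start: int = 0) -> str:
    """Build one page of a PI Web API indexed-search request.

    The Web API caps the page size at 1,000 items, so larger
    result sets need several of these paged requests.
    """
    params = urlencode({"q": query, "count": count, "start": start})
    return f"{BASE_URL}/search/query?{params}"

# Illustrative query for inhibition-related points.
url = build_search_url("name:*inhibit*")
```

An HTTP client would then issue a GET per page and walk through the result set by increasing `start`.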


This search query gives us almost 6,000 results. Although PI AF lets you set the size of the result set, the built-in maximum is 1,000 items per page, which means at least six paged REST requests (which can run in parallel) to process the almost six thousand items.
What makes it complicated to work with PI AF at the very beginning is that such a query does not give us values, or any actual data about a PI point, other than its WebID, the unique identifier used in PI AF. As a next step, querying the WebID gives you direct access to the actual values of a certain checkpoint.

Performance with over 6,000 REST requests?
Although we now have all the inhibition points directly accessible through their unique WebIDs, the reality is that checking all of their values takes roughly six thousand GET requests. If you can't imagine what that means exactly, here is an example:
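A single current-value request goes through the `/streams/{webId}/value` endpoint, one round trip per point. A minimal sketch (the host name and the WebID are placeholders; any HTTP client, e.g. the `requests` library, would perform the actual GET):

```python
# Hypothetical host; replace with your own PI Web API server.
BASE_URL = "https://my-pi-server/piwebapi"

def stream_value_url(web_id: str) -> str:
    """URL for the current value of a single PI point.

    One GET round trip per point: with ~6,000 points this
    approach means ~6,000 separate requests.
    """
    return f"{BASE_URL}/streams/{web_id}/value"

# e.g. GET https://my-pi-server/piwebapi/streams/<WebID>/value
url = stream_value_url("F1AbEXAMPLEWEBID")  # placeholder WebID
```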


Some 6,000 of these requests need to be sent and processed, which in our particular setup takes over 5 minutes. Can you imagine pressing a button and waiting over 5 minutes for the result? Let's all just be honest: most people lose their patience after a couple of seconds, because even that loading time is unacceptable nowadays, let alone 5+ minutes.

Let’s build stream sets!
Implementing a working Refresh button was the last straw. While the updates were running only hourly in the background, those 5 minutes weren't great, but still barely noticeable from a user point of view. Up to the point when we placed a button on the page for manual refresh purposes. Then it became clear: we needed another solution.
After some research we found that the PI Web API supports bulk data retrieval even for unrelated PI points; the only thing you need is the list of WebIDs. These queries are called stream sets, and they provide fast bulk retrieval.


Only a couple of things to keep in mind:

  • The URL has a length limit; in our particular setup we could retrieve the values for approximately 400 WebIDs in one stream set (the length of the WebIDs varies, so make sure you test this extensively in your setup).
  • It is necessary to adjust your method to handle the response accordingly, as the result is a list of JSON objects called Items.
  • If any of the WebIDs in the stream set returns an error, the whole request fails: even one unavailable WebID and the whole request goes down the drain.

Implementing stream sets meant we could reduce the number of REST requests from 6,000 to 15. Additionally, a stream set request's response time is not noticeably longer than that of a simple value request, so our performance got a big boost: from over 5 minutes down to an average of 10 to 15 seconds.

Building something really meaningful on top of a database with literally millions of records, if done smartly, can improve the way you organize your work. Focusing on what you need to know and what you need to see at first glance means you can put the carefully collected data to work. It is not going to make the decisions for you, but it can help point out when and where you need to make decisions and take action.


More blog posts

From theory to reality: internship at Cuurios – Meet Bram

Part of my final year of International Business at The Hague University of Applied Sciences is to do an internship. During this engrossing year I was lucky enough to do my internship here at Cuurios. I have followed almost every business-related subject during the last 3 years at my university, ranging from human resources to finance to operations management. I always missed a really tech-related subject, which is quite important these days. I'm really happy I can fill that missing piece here at Cuurios.

I applied to do my internship at Cuurios because it is a relatively young, still small-sized but growing company. I like this size because even though I am an intern at Cuurios, I am also part of the team and really responsible for the tasks assigned to me. Although the pandemic has made us work from home most of the time, I am still able to get proper guidance and plenty of sparring opportunities.

While I'm doing my internship here, Cuurios is participating in the Investor Readiness Program by YES!Delft. It is a really interesting program where I get the opportunity to translate knowledge gained from my study into reality. I help Cuurios prepare for an acceleration of their business, ranging from a good company value proposition and pitch to financial planning and a business plan. The tasks are diverse and I'm able to work however suits me best while still being coached and questioned, which I find works perfectly.

"Programming is not a task, but a hobby" - Meet Michael

For me, programming came in late. I wanted to be a lawyer; I had graduated high school with all hopes of studying law, but a light shone and a voice called out to me: "Michael, study Computer Science instead". I honoured the call and started preparing to gain admission into university to study Computer Science. Thankfully, I got in.

In my first year (2012), I was introduced to the art of programming.

The idea of me building something for people to use was similar to being given a magic wand, which felt very good. I started experimenting with Visual Basic, the drag and drop system helped me easily visualize my ideas.

Year after year, I delved deeper, building applications for friends and small organizations. Everything changed when I was paid to build an application in my third year, a holy grail was given to me. I didn’t know people would pay you for what you enjoy doing most. It was an eye-opener.

To me, programming is not a task, but a hobby, and creating things is wired in my core. I became a frontend web developer because it’s the closest programmable bit to the user (had not discovered Product Design at that time) and I enjoy that feeling of being able to engineer experiences for users whilst controlling what they see and how they use the application as a whole.

As humans, it's pure happiness to see people follow you. In programming, it's the same feeling, if not more, when you see metrics of the people that depend on what you build. I like the influence, however little, to shape how people carry out their daily important business, leisure or personal tasks using my applications.

My Cuurios-story

I joined Cuurios in October 2018; a very good decision I must say. I applied because I wanted to learn how things are done in other companies, and Cuurios’ “Data to Actions” tagline sounded like a place that would boost my programming knowledge and nudge me to code more complex applications.

At the very beginning, my first project gave me sleepless nights, as I didn't understand most of the application. I bought whiteboards and started breaking the project apart to understand the whole, quite complex system. Now, however, I have a much better grasp of working on complex systems, and my frontend skills have improved dramatically. The best decision so far. I feel my role at Cuurios is important (very much so to me): I control how and what the customer sees. You need a very keen eye for design to do this, and Cuurios has enabled me to perform this art efficiently, even using my little Product Design skill. Although I cannot single-handedly add a button anywhere I like, I can make sure the button sits where it can be easily accessed.

At Cuurios, every ticket is like a HackerRank question, especially when it comes from Leen (COO). Sometimes I’d have to read and re-read to be able to digest the problem and think of a suitable solution which has improved my problem-solving ability. I ask questions a lot and that has helped me grow. In addition to that, Gaetan’s (CTO) experience has made me a better programmer. I take time to study the codebases of the applications built. (When you learn from the best, you become like them).

I also sometimes wonder how Leen does it, being everywhere from a business standpoint. I've learnt from him that you need to understand the customer's request in and out.

I believe Cuurios is the place to be to skyrocket your career and build fantastic projects, and most importantly, everyone at Cuurios is human.

Work as you're used to: tailor-made domain representation with graphs

One of the common issues we face when developing applications for industrial customers is accurately representing their domain.

A domain is the set of assets, equipment, departments, and systems that make up the whole of a company's operations. Organizations usually have fine-grained definitions of who is responsible for managing a specific asset, which department resupplies which systems, etc.

A software system should integrate with an organization's structure and enable its operations. More often than not, systems achieve the opposite, requiring organizations to fit their processes and structure inside the system's own rigid asset structure.

While promoting standardization, this approach stifles organizational innovation. It leads to faulty and incomplete domain representation, as assets are recorded not as they are but as they fit the system, or are not recorded at all. In the end, many systems end up being hacked by system integrators or in-house teams to make them fit, or a custom solution is developed.

Very often these limitations are driven by technology: SQL databases (still the norm for most industrial applications) require a rigid structure to be performant.


We think that domains can best be described using graphs. A graph is a representation of information using nodes (vertices) and links (edges).

The following example should help to shed some light on the concept, and explain why we think it is such a great fit for representing domains:

  • Company A operates a small plant.
  • The plant has systems, composed of equipment or sub-systems. A piece of equipment or a sub-system can be shared by multiple parent systems.
  • At the same time, each piece of equipment is linked to two departments: the maintenance department and the production department.
  • Each piece of equipment also has a supervisor, a specific person, and a back-up supervisor pool, a pool of people that can be called up if the supervisor is unavailable.

Now you can see how this would quickly become very complex when designed in the traditional fashion, leading to complex and inflexible implementations.

Now look at how we could implement this using a graph:
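The example above can be sketched as a plain edge list; all node names and relation labels here are illustrative, not taken from an actual customer model:

```python
# A minimal graph of the example domain: each edge is a
# (source, relation, target) triple.
edges = [
    ("PlantA", "has_system", "CoolingSystem"),
    ("PlantA", "has_system", "CompressionSystem"),
    ("CoolingSystem", "has_equipment", "PumpP101"),
    ("CompressionSystem", "has_equipment", "PumpP101"),  # shared equipment
    ("PumpP101", "maintained_by", "MaintenanceDept"),
    ("PumpP101", "operated_by", "ProductionDept"),
    ("PumpP101", "supervised_by", "Alice"),
    ("PumpP101", "backup_pool", "PoolNorth"),
]

def neighbors(node, relation):
    """All targets linked from `node` by `relation`."""
    return [t for s, r, t in edges if s == node and r == relation]

def parents(node, relation):
    """All sources linking to `node` by `relation`."""
    return [s for s, r, t in edges if t == node and r == relation]

# The shared pump belongs to two parent systems: awkward in a rigid
# hierarchy, trivial in a graph.
print(parents("PumpP101", "has_equipment"))
# -> ['CoolingSystem', 'CompressionSystem']
```

A dedicated graph database would add indexing and a query language on top, but the modelling idea is the same: relations are first-class, so shared equipment, department links and supervisor pools coexist without forcing everything into one tree.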

Sneak peek: this is how we build software at Cuurios

A couple of introverts sitting in front of multiple dark screens with green or white texts running on each, typing with untraceably fast fingers, only the keyboard clacking breaks the silence...

Although the perception of programmers may have changed during the last decade (instead of mom's basement, they are now imagined in a fancy, futuristic, well-equipped environment), the basic personality traits of geeks are still perceived the same.

Just typing, and typing all day...

... well, honestly, no. As software developers we spend most of our time on design, research and problem solving. We could actually sit down and start writing your application the second we get the assignment, but that's not the effective way. You want a good, steady result, fast, and there is only one way to get there: design, plan, research, and finally code.

Yes, we can type fast. Yes, we can sit in silence and focus for 8 hours without so much as taking a lunch break, or at least some of us can. Yes, we code in the evening, at the weekend, in our free time, even in our dreams sometimes, because we LOVE solving problems. Give us the most complex ideas, the impossible tasks, and we will be trigger-happy and start working on them straight away.

Yes, we ARE geeks, but that doesn't make us mysterious, unapproachable, introverted or unsocial. We can easily come across as arrogant, but most of the time it would just take forever to make you understand the details. We don't have a God complex; we just know that it's better to get it done than to explain how we will get it done.

We adore technology and advancement!

I mean, there is probably no surprise there, but we love to surround ourselves with the latest technological advances, be it our physical surroundings or our codebase. So we research, we read, we learn. We get familiar with new frameworks, libraries and advanced solutions every day, then apply our newly acquired knowledge in your software, making it better, stronger, faster, safer.

How do you recognize a good developer?

Now this can be hard: as a person who knows nothing about software development, how can you tell who is the professional skilled enough to get the job done?

The good news is, you don't need technical knowledge to make that decision. Good professionals stand out. Not by having millions of frameworks listed in their CVs, not by having multiple years of experience (although that is not a bad thing), not by insisting they are the best or the only ones who have a solution for you.

Good professionals stand out because they are enthusiastic and passionate. They simply love what they are doing; they are able to switch to problem-solving mode and even start brainstorming with you to enhance your ideas as soon as they understand your needs. They are perfectionists, simply because they want to be proud of what they make and give it the best they can think of.

"Okay, but what about the sneak peek?"

And yes, here we are. After all this talk about what developers are not doing, let's see how we at Cuurios actually turn your idea into software, so next time you work with a programmer, you will have a better understanding of what we really do [1].

  • Driven by curiosity, we listen carefully to what you want and challenge what you need.
  • We read the documentation, getting a nice, overall picture of what you need.
  • We read the documentation again, going into details, stopping here and there for a second, making notes.
  • We just sit and stare. Now this might look like we are not doing anything, just staring out of the window, waiting for the day to end, but this is the part where at least 20% of the work gets done. We write and design the whole application in our heads, tracing our steps, making mental or actual notes and connections, stripping the whole use case down to logic, numbers and actions, and finally breaking it down into parts.
  • We read the documentation again. I know, by now we should know it by heart, right? But at this point, we make notes, create diagrams, and start researching the best solution, the latest technologies, the most useful libraries.
  • Brainstorm. Yes, programmers rarely work completely alone. We use our colleagues' recommendations on solving similar problems they encountered, share our experience, and learn from each other. Even if this happens online.
  • Depending on the duration of a project, we prepare the sprints, or at least the first couple of weeks, read the documentation one last time (I know, right?), include every little detail in tickets, organize the workflow, then get started with the typing all day...

From here on out, we do the same routine every day – although it’s never the same and never gets boring:

  • In the morning, we prepare for the day. We go through what we finished the day before, and decide on and prepare for the next steps.
  • A daily scrum meeting keeps us accountable – also a very good place to see if someone got stuck, needs help, or just a different approach or idea to get out of a deadlock.
  • During the day: design, research, code, test, debug, finalize, repeat. For each and every small part of the application, until we get a result we would proudly present.

Good software developers take pride in their work. They don’t just enjoy creating solutions for you, they are just as happy – if not even prouder and happier – as you are, when you start using what they made for you.

[1] The working method described here is Cuurios-specific, other companies and teams may have different ways to divide tasks and manage their workflow.

Do you recognize the value of your data?

Every company works with data. Productivity, prices, stock, you name it: anything that makes its way into the next report and helps you take the next step. Just numbers. With the right strategy, you can make these numbers work for you.

A case from the trenches

We dedicated the previous blog post to the value and importance of your data. You might already be intrigued to imagine how those numbers can save you a lot of work and even make your company more successful or “just” simplify your decision making process. It’s time to do something about it.

But then what? It might be very tempting to go for the new and shiny, to start an Artificial Intelligence (AI) project because that is the future and you are afraid to miss the boat. But AI requires a very high digitization maturity and works best on very specific use cases.

The technology has some known drawbacks, requiring large amounts of (mostly labeled) data. It also lacks auditability, which is a real and valid concern for many business leaders.

Focusing on this far-away horizon may lead you to over-reach and over-spend, and it might actually backfire. Lacking tangible results to show, it could reduce the willingness to invest in data and data technologies. You will miss the low-hanging fruit that would add value to your business and help champion data in your organization.

Through a real use case at one of our customers, I will try to explain how we managed to integrate data and create value by tackling a use case that mattered.

The use case

This customer is responsible for the operations of multiple industrial sites, each producing 24/7. At the beginning of the day, engineers gather data, calculate, and analyze the shortfalls in production from the day before. What was the reason, was it planned, what can be done about it, will this impact our overall production, etc.

This data needs to be revised and approved by the operational and reporting team and reported officially to management.

Until very recently, they would extract data from their process data historian, process it through an Excel sheet, and pretty much run the process by hand in Microsoft Excel.

A repetitive and error-prone process, with little value in the data wrangling. A perfect candidate for automation!

To support this customer, we deployed a solution that would:

  • Fetch the data automatically from the OSIsoft PI data historian
  • Run an algorithm to detect production shortfalls and faulty input. The algorithm also tries to infer the cause of production shortfalls from previous related cases.
  • Present the results in a Web UI, including history and raw data
  • Present the detected production shortfalls to engineers and operators for analysis and validation
  • Extract the results into the right format for exporting to the reporting system
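As an illustration only (the actual detection algorithm and its thresholds are customer-specific and not described here), a shortfall check could be as simple as flagging days where actual production falls more than a tolerance below plan:

```python
def detect_shortfalls(planned, actual, tolerance=0.05):
    """Return (day, planned, actual) tuples for days where actual
    production is more than `tolerance` (as a fraction of plan)
    below the planned volume. Purely illustrative logic."""
    shortfalls = []
    for day, (p, a) in enumerate(zip(planned, actual)):
        if p > 0 and (p - a) / p > tolerance:
            shortfalls.append((day, p, a))
    return shortfalls

# Hypothetical daily production volumes.
planned = [100.0, 100.0, 120.0, 120.0]
actual = [99.0, 80.0, 121.0, 60.0]
print(detect_shortfalls(planned, actual))
# -> [(1, 100.0, 80.0), (3, 120.0, 60.0)]
```

In the deployed solution, a step like this runs automatically on data fetched from the historian, and the flagged days are what engineers then analyse and validate in the Web UI.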

Data is about your business, not just technology and algorithms

Nowadays data is everywhere. It is the hot topic of the moment, for businesses and in the general public. There are heated debates in politics about “Data”, data wars, and even a new science entirely dedicated to data!

When it comes to data, you might be wondering, should you build an enterprise architecture with data as your starting point (data-centric)? Or build a data structure around the existing landscape (data-driven)? What about data lakes? Do we actually have any data?

Well, tackling data is highly dependent on the specific configuration of your organization's history, business, and IT landscape.

At Cuurios we believe that Data should be CENTRAL to your organization. Gathering, managing, and acting on your data is your core business. But there are no one-size-fits-all solutions. Data is a mindset; the most important thing is to just do it, however small the first steps might be.

In this blog post I give some insights into how we see Data at Cuurios and sketch the first steps towards data proficiency!


It sometimes feels like data is something new, something very hot, the core of the 21st century technological battlefield. But data has been around for a long time! Measuring, gathering and analyzing data is at the core of the scientific and industrial revolutions. Without Galileo gathering data on the moons of Jupiter with his telescope, there would have been no proof that Copernicus was right, and we might still think that the earth is at the center of the universe.

Data is not a thing, a disincarnated entity that exists for and of itself. Data is grounded in reality; it is information that represents assets, people, and events in the real world. Data is what makes large-scale organizations possible. Without data, how would you know the state of your inventory without having to recount every time? Or the state of a critical asset without having to look at it?

The first form of writing ever discovered, the Sumerian tablets [1], were accounting records of production and exchanges of goods, i.e. data.

What has changed to make data the focus of a new gold rush? 

  • Storage capacity has increased exponentially. Where a Sumerian scribe needed hours to imprint a clay tablet, we can now record terabytes of data at very little cost.
  • The internet (meaning the complete networking infrastructure). No need to have people do the measurements and record the data themselves; everything can be automated.
  • Advances in computing power have made the application of advanced AI algorithms cheap and rewarding.

This list describes techniques to store, manipulate, and analyze data.

What it does not describe is a change in the nature of data. People tend to concentrate on the new hype, conflate data with data science, and equate analysis with machine learning. This is a very narrow view of what data is, and it limits its usefulness to a few very advanced use cases.

Because first and foremost, data is information, information about your business, its customers, its assets, its financial state. It represents the tangible and is often the only thing you have to steer large complex organizations.

We think that data should play a core role in every organization, be CENTRAL to decision making and action taking. Without data, any decision taken is an educated guess. 


In practice, we often encounter organizations that claim not to have any data, because they don't have a data lake.

The first thing on their data roadmap, then, is to create one. But really, a data lake is just a big database. It won't tell you what to do with your data. In our experience, many organizations, after having spent an incredible amount of time and budget on creating a data lake, are stuck. They don't know what to do with it.

Because your data is about your business, not technology, not algorithms.

What we usually see is that organizations already have data, very often plenty of data, scattered around, in custom made applications, asset databases, excel sheets. Because you can’t function without data.

What they lack is an approach, a concrete process to manage data and embed it in day-to-day operations. The data processing and the algorithms should come in support of operations.

Making sure data is part of your day-to-day business operations: that is being data-centric.

In order to do that, you need to reverse the data analysis process and look at your data from a business perspective:

  • What are my most important use cases, processes, assets?
  • Which data do I have about them? Where can it be found, in which format? Do I need more of it?
  • How can we automate this specific use case? Which algorithm can be used? 

At Cuurios, we have extensive experience working in the industrial sector. In most industrial settings, processes will already be described. They will be backed by data for real-time monitoring, stored in a historian. Optimization and analysis algorithms are known.

The actual running of the algorithms, the analysis of the data and the generation of advice is traditionally a step performed by engineers, in Excel, Matlab or other tools. These are great for exploration and scientific inquiry, but not made for automation.

For most companies, there is tremendous value to be added by connecting data sources to each other and automating their analysis. Actions can be defined and set out quicker, with a better response to issues and a higher efficiency.

[1] https://www.cam.ac.uk/research/news/a-stray-sumerian-tablet-unravelling-the-story-behind-cambridge-university-librarys-oldest-written