
PI WEB API Batches - parallel execution

Question asked by MBanchuzhnyy on Feb 11, 2020
Latest reply on Feb 14, 2020 by MBanchuzhnyy

Hi everybody,


I have 4500 PI points, and I need to get snapshot values for each of them via PI Web API, so I decided to use a batch request. The batch contains 4500 "/streams/{webId}/value" items. The execution time on our dev server is around 25 seconds, which I need to improve. I thought that if I divided the one big batch into several smaller ones I would get some speed improvement: for instance, instead of making one big call with 4500 web IDs on board, I could make 9 parallel calls with 500 items per batch. I tried 100, 500, 900, etc. items per batch; here is the table of results:

# web IDs | # parallel calls | elapsed, ms
      100 |               45 | 26662.5833
      500 |                9 | 24468.775
      900 |                5 | 22488.7408
     1300 |                4 | 26028.5996
     1700 |                3 | 21454.6383
     2100 |                3 | 22389.4013
     2500 |                2 | 27651.6431


As you can see, the execution time is almost the same: it doesn't matter whether I make one big call or several parallel ones, the total time is around 25 seconds.
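
For reference, here is a minimal sketch of the chunk-and-parallelize approach described above (Python; the server URL, credentials, and chunk size are placeholders, not my actual setup):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://my-server/piwebapi"   # placeholder PI Web API endpoint
AUTH = ("user", "password")               # placeholder credentials

def make_batch_body(web_ids):
    # One GET sub-request per web ID; the keys just need to be unique.
    return {
        str(i): {"Method": "GET",
                 "Resource": f"{BASE_URL}/streams/{web_id}/value"}
        for i, web_id in enumerate(web_ids)
    }

def run_batch(web_ids):
    # One POST to the batch controller for this chunk of web IDs.
    resp = requests.post(f"{BASE_URL}/batch",
                         json=make_batch_body(web_ids), auth=AUTH)
    resp.raise_for_status()
    return resp.json()

def snapshots_in_parallel(web_ids, chunk_size=500):
    # Split the 4500 web IDs into chunks and issue one batch call
    # per chunk concurrently.
    chunks = [web_ids[i:i + chunk_size]
              for i in range(0, len(web_ids), chunk_size)]
    with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
        return list(pool.map(run_batch, chunks))
```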


The question is, how is this possible? If I make several parallel calls, and they really are served in parallel, then the execution time should be less compared to one big call.


The only explanation I have is that the calls are not being served in parallel. If that is the case, will batches sent from different hosts be processed in parallel, or not?


Are batches placed in a queue and executed sequentially?


Has anybody faced the same problem? What is the best way to increase batch processing speed? Or is there perhaps another way of requesting snapshots for multiple PI points?


Thanks!


P.S. The batch here contains only independent items, so strictly speaking I don't need batch semantics; but I found that it is the only way to execute multiple requests while passing the data in the body (as part of a POST request).
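
One alternative I'm aware of (not benchmarked) is the ad-hoc StreamSets call, which returns the current value of several streams in a single GET. The web IDs go into the query string rather than the body, though, so URL length could become a limit at 4500 points. A sketch, with the same placeholder server and credentials as above:

```python
import requests

BASE_URL = "https://my-server/piwebapi"   # placeholder PI Web API endpoint
AUTH = ("user", "password")               # placeholder credentials

def snapshots_adhoc(web_ids):
    # Ad-hoc StreamSets: a single GET with repeated webId query
    # parameters, returning the current value of every listed stream.
    resp = requests.get(f"{BASE_URL}/streamsets/value",
                        params=[("webId", w) for w in web_ids],
                        auth=AUTH)
    resp.raise_for_status()
    return resp.json()["Items"]
```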
