
Mass Export Archive data of Tag subset

Question asked by aldorexbraam on Oct 25, 2017
Latest reply on Nov 13, 2017 by gregor

Hi, for an analytics project I have a requirement to bring data from the PI archives for the last X years, for a subset of 10K tags (out of 80K in total), into an external data lake (or swamp).

Since this involves a lot of data (a little under 1 TB), I am looking for a way to process the archives in firehose mode instead of using the traditional tools (PI OLEDB (Enterprise), PI-SDK).

AFAIK all current solutions except the AF-SDK have limitations and don't cut it in terms of performance, storage, or usability. Ideally I am looking for a .NET API that taps *directly* into the archives.

Is anybody in the community aware of a *standard* solution that would fit the above case?

Thanks in advance.


Aldo Braam


btw: So far I have assessed the following solutions:

PI OLEDB
  Pros: multi-threaded, easy to use
  Cons: custom development; no bulk fetches, so an archive walk for each tag

PI Integrator
  Pros: multi-threaded, easy to use
  Cons: streams only to SQL Server, so inefficient storage

PI config
  Pros: simple, crude
  Cons: works on a tag-for-tag basis; single-threaded / single-core; 1970s-style interface; no bulk fetches

PI-Datalink
  Pros: simple
  Cons: not suitable for massive data loads; no bulk fetches

PI-SDK
  Pros: simple
  Cons: single-threaded; no bulk fetches

AF-SDK
  Pros: versatile, multi-threaded; supports bulk fetches
  Cons: custom-coded solution
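For reference, the bulk-fetch path in the AF-SDK goes through PIPointList with a paging configuration, so the server streams pages of results for many tags in one call rather than one round trip per tag. A rough sketch (assuming a reachable default PI Data Archive and a tagNames list of the 10K tags; treat this as an untested outline, not production code):

```csharp
using System.Collections.Generic;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

// Connect to the default PI Data Archive (assumption: KST/connection already set up).
PIServer server = new PIServers().DefaultPIServer;

// Resolve the tag subset in bulk, then wrap it in a PIPointList
// so data calls are issued as one bulk request instead of per tag.
IList<string> tagNames = LoadTagNames(); // hypothetical helper returning the 10K names
var pointList = new PIPointList(PIPoint.FindPIPoints(server, tagNames, null));

// Last X years of archive data; page results back 100 tags at a time.
var timeRange = new AFTimeRange("*-5y", "*");
var paging = new PIPagingConfiguration(PIPageType.TagCount, 100);

// Bulk RecordedValues: yields one AFValues collection per tag as pages arrive.
foreach (AFValues values in pointList.RecordedValues(
    timeRange, AFBoundaryType.Inside,
    filterExpression: null, includeFilteredValues: false,
    pagingConfig: paging))
{
    // Write values.PIPoint.Name plus timestamp/value pairs to the data lake here.
}
```

Splitting the 10K tags into chunks and running several such loops in parallel is the usual way to saturate the archive, though the right chunk and page sizes depend on your server and event density.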