SystemLink Forum


performance/behavior of tag historian

Hi, I'm doing some basic benchmarks on the tag historian to get a ballpark idea of speed and I have a few general questions:

  1. It feels strange to me that something operations-related (the tag historian and dashboards, which I assume need high RAM and CPU) would be on the same physical server as a generally out-of-band task (package management, which I assume needs high disk space and not much else). Is there a way to separate the services onto different physical hardware?
  2. During my testing, the following appear to be true. Are there any user-accessible knobs to improve AMQP throughput or to reduce the latency of the HTTP option (even if that means dropping samples)?
    • AMQP takes about 800 usec per tag, but the updates are basically synchronous. If I hit stop, the tags stop updating, and the updates on the tag monitor screen are always just a few seconds behind real-time.
    • HTTP takes about 250 usec per tag, but the updates appear to be heavily queued. After just a few minutes of running, the updates were a full minute behind real-time, and they kept being applied long after I hit the stop button on my code.
  3. For several minutes after a run, CPU usage of mongodb and the tag historian service remains relatively high (~80% of total CPU on a mid-grade desktop processor). Given that this happens even with the AMQP version, where the tag updates do not appear to have significant latency, could you shed some light on what it's doing? Is it anything to be concerned about (e.g., if I lost power, would I lose just the most recent sample, or would I lose X samples due to some higher-latency processing being done out of band)?

Thanks

Message 1 of 8

It sounds like you are writing to your server very fast and often. Our current recommendation is to write to each tag no more than once per second. If you need more samples than that, I would recommend logging to something like a TDMS file and then using the File Service to upload those logs to the SystemLink server. When looking at a tag's history, we only display one value per second, so writing to it more often than that doesn't do you much good.
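Given that the history view only shows one value per second, a client-side option is to coalesce updates before writing, keeping only the newest value per tag within each one-second window. A minimal Python sketch of that idea (illustrative only; `coalesce_updates` is a hypothetical helper, not part of any SystemLink API):

```python
def coalesce_updates(updates, window_s=1.0):
    """Keep only the newest value per tag within each time window.

    `updates` is an iterable of (tag_path, value, timestamp) tuples.
    Returns a list with at most one entry per tag per window_s-second
    window, so the historian never sees more than ~1 Hz per tag.
    """
    latest = {}  # (tag_path, window index) -> (value, timestamp)
    for tag, value, ts in updates:
        key = (tag, int(ts // window_s))
        prev = latest.get(key)
        if prev is None or ts >= prev[1]:
            latest[key] = (value, ts)
    # Emit in timestamp order, breaking ties by tag path.
    return [(tag, v, ts)
            for (tag, _w), (v, ts)
            in sorted(latest.items(), key=lambda kv: (kv[1][1], kv[0][0]))]
```

Dropped intermediate samples are lost to the historian, so this only fits the "current value first, history best-effort" use case described later in this thread.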

-----------------------------------------------
Brandon Grey
Certified LabVIEW Architect

Message 2 of 8

Well, I'm trying to determine what sort of loading I can push it to, so yes, I'm writing faster than I normally would. However, if I want the 'best possible' performance and set my loop rate to, say, 2000 tags at 0.8 Hz, and then something changes on the server (someone opens up a bunch of clients, as one example), I don't want the system to suddenly build up a pipeline of minutes' worth of data. My priority is the current value, with history being a 'best effort' concern for me. Hence my question about user-accessible knobs.

 

That having been said, my data rates are not much higher than 1 Hz with AMQP. I'm doing bulk inserts of 1000 tags, so that 800 usec per tag actually works out to 1.25 Hz per 1000-tag write (and HTTP's 250 usec to 4 Hz).

 

Message 3 of 8

I can confirm that limiting both versions to 1000 tags @ 1 Hz works well, although there still seem to be more jittery sections with the HTTP version. Despite the lower data rate, CPU usage remains about the same -- right around 95%, with occasional dips down to 20-50%.

Message 4 of 8

Are you doing 1000 individual tag writes per second or one multi-tag write with 1000 updates?  If you can, always try to send data in bulk using the multi tag read and multi tag write VIs.

 

Overall, the AMQP message bus can handle around 1000 messages per second, so you want to do as many bulk operations as possible.  Note that a single tag write and a multi-tag write with 1000 tag updates are each just a single message.  Also, the HTTP interface is just a proxy for AMQP.
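The "one multi-write = one message" point is the key lever: what matters to the bus is message count, not tag count. As a rough illustration of the batching idea (the JSON payload schema here is hypothetical, not the real SystemLink wire format):

```python
import json

def build_multiwrite_messages(updates, batch_size=1000):
    """Pack individual tag updates into as few message bodies as possible.

    `updates` is a list of (tag_path, value) pairs. Each returned string
    is one JSON message body carrying up to `batch_size` updates, so
    1000 updates cost one message instead of 1000.
    """
    messages = []
    for i in range(0, len(updates), batch_size):
        batch = updates[i:i + batch_size]
        messages.append(json.dumps(
            {"updates": [{"path": p, "value": v} for p, v in batch]}
        ))
    return messages
```

With the bus budget of roughly 1000 messages per second, sending one tag per message caps you near 1000 tag updates/s, while 1000-tag batches raise the ceiling a thousandfold for the same message count.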

 

As an example, you should be able to have 100 systems each publishing 100 tags with a multi-write every second, for an overall throughput of around 10,000 tags per second without railing the CPU.  By leveraging multi-tag writes, I've been able to push it as high as 100k tag updates per second across 100 clients on an 8-core i7 server with an SSD.  If you try to send the tags one at a time, you won't come anywhere close to that.
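The arithmetic behind that example, spelled out against the ~1000 messages/s bus budget mentioned above:

```python
# 100 systems, each sending one 100-tag multi-write per second.
systems = 100
tags_per_write = 100
writes_per_system_per_s = 1

messages_per_s = systems * writes_per_system_per_s    # one message per multi-write
tag_updates_per_s = messages_per_s * tags_per_write   # tags carried by those messages

# 100 messages/s is well inside the ~1000 messages/s bus budget,
# yet it delivers 10,000 tag updates per second.
assert messages_per_s <= 1000

# The same 10,000 updates sent one tag per message would need
# 10,000 messages/s -- ten times the bus budget.
single_write_messages_per_s = systems * tags_per_write
assert single_write_messages_per_s > 1000
```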

 

Your performance will certainly vary depending on your computer's specifications.  Remember that all tag updates are persisted, so the speed of your hard drive is really important.

Message 5 of 8

The absolute performance isn't necessarily concerning to me; I'm just running this on a spare desktop with an older-generation i7 (4c/8t) -- about the same benchmark score as a current-model i3. It has an SSD, but there isn't a ton of disk activity (which makes sense given the data rates). I'm assuming things will scale up on a more server-class machine. Still, having a junkier machine to test on makes it easier to find the points where it fails.

 

At present I'm doing 1k (or 2k, to see how memory usage scales) tags in a single multi-tag write. Since I only have one configuration object, I actually cut through some layers of the API and removed some of the unneeded sorting code to see if that helped at all.

 

It's also just a single client and should (mostly) stay that way. Before considering SystemLink, we had already developed a data aggregation system that's geared toward our needs for full HMIs, so in a real system I'd essentially just be copying the data locally from N DVRs into the Skyline multi-write structure and sending it along at some rate.

 

In any case, it sounds like using AMQP is preferable, since it's the most direct route. I'm definitely confused about why it's behaving so poorly compared to your own tests -- it looks like I can only hit ~1500 updates/second on my i7 with SSD. Perhaps there's some antivirus or something on there causing a bottleneck, so I'll take another look.

Message 6 of 8

I have seen issues in the past with antivirus programs that continually try to scan the database files. If you can, exclude C:\ProgramData\National Instruments\Skyline from the scan list and then reboot your system to see if that helps.

Message 7 of 8

@JoshuaP wrote:

As an example, you should be able to have 100 systems publishing 100 tags using the multi write each second to get an overall throughput around 10,000 tags per second without railing the CPU.  By leveraging the multi tag writes, I've been able to push it as high as 100k tag updates per second across 100 clients on a 8-core i7 server with an SSD drive.  Now if you try to send the tags one at a time you won't come anywhere close to that.

Tried it again with no AV on and after a fresh restart, and that seemed to do the trick from a CPU perspective. Now, writing 1000 tags in a multi-write as fast as it can, it uses about 20% CPU... which isn't great either. It looks like only one of the tag server exe instances (and one historian instance) is doing anything; the rest are basically idle. With a single client, is there any way to better utilize multiple cores? I tried creating 4 AMQP config objects and doing multi-writes in parallel, which did get about 2x performance, but I'm not sure how much of that is just pipelining messages.
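One way to experiment with spreading a single client's load over several connections is to split each big batch round-robin across workers, one slice per connection. A sketch of the partitioning step only (`partition_tags` is a hypothetical helper for illustration, not part of any SystemLink API; the actual publish per slice would still go through the normal multi-write calls):

```python
def partition_tags(updates, n_workers):
    """Split a batch of tag updates into n_workers roughly equal slices,
    one per AMQP connection/worker thread.

    Round-robin assignment keeps the slices balanced even when the
    batch size isn't a multiple of n_workers.
    """
    if n_workers < 1:
        raise ValueError("n_workers must be >= 1")
    slices = [[] for _ in range(n_workers)]
    for i, update in enumerate(updates):
        slices[i % n_workers].append(update)
    return slices
```

Each slice could then be handed to its own worker (e.g. via `concurrent.futures.ThreadPoolExecutor`), each holding its own connection/config object. Whether that buys real parallelism on the server side or just pipelines messages, as speculated above, would need to be measured.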

 

Message 8 of 8