SystemLink Forum


SystemLink >=19.6: Understanding the rate limit of tag writes

http://www.ni.com/documentation/en/systemlink/19.6/manual/behavior-changes/ says, "Writing tag values is rate-limited to 1,000 writes per second by default."

 

  1. Does a Multi Write count as 1 write, or does each item in the tag cluster count separately?
  2. Does the same limit apply to both HTTP writes and AMQP writes? (AMQP writes seem to be a lot more efficient than HTTP writes, after all)
  3. How do we change the default?
Certified LabVIEW Developer
Message 1 of 4

I'd also appreciate any insight into what led to the introduction of these rate limits.

 

Were negative effects observed from writing too frequently to the NoSQL database? Do we need to be careful with how we use tags?

Certified LabVIEW Developer
Message 2 of 4

Answering some of my own questions:

 

  • Each tag in a Multi Write is still counted individually. When I called Multi Write at 4 Hz with 500 tags per write (2,000 values per second, double the default limit), every 3rd and 4th call to Multi Write produced an Error -251927.
  • The server does not differentiate between HTTP writes and AMQP writes. The same blanket limit applies to all methods of writing.
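The first bullet can be sanity-checked with some back-of-the-envelope arithmetic (assuming the documented default of 1,000 values per second):

```python
# Back-of-the-envelope check: each tag in a Multi Write counts individually.
CALLS_PER_SECOND = 4   # Multi Write called at 4 Hz
TAGS_PER_CALL = 500    # tags per cluster
LIMIT = 1000           # default Throttling.ValuesPerSecond

values_per_second = CALLS_PER_SECOND * TAGS_PER_CALL  # 2000 values/s submitted
over_budget = values_per_second - LIMIT               # 1000 values/s rejected

# Roughly half the submitted values exceed the budget, which matches
# seeing Error -251927 on about 2 of every 4 Multi Write calls.
print(values_per_second, over_budget)
```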

 

More questions:

 

  • From further testing, it looks like the write limit is shared across ALL nodes connected to a server. So if I have 100 nodes, each node is only allowed 10 writes per second! This seems quite limiting... what is the rationale for this design? Could we please have a way to configure the global limit, or even better, have a per-node limit?
Certified LabVIEW Developer
Message 3 of 4

Hi JKSH,

 

Thank you for taking the time to run these tests and provide feedback. I want to share how you can change this setting and why we have this setting in the first place. 

 

You can change the throttling limit by editing C:\ProgramData\National Instruments\Skyline\Config\TagIngestion.json and adding the key/value pair "Throttling.ValuesPerSecond": <your value>. For example, I changed this configuration on a machine to look as follows:

 

{
   "TagIngestion" : {
      "Mongo.Database" : "nitag",
      "Mongo.Host" : "localhost",
      "Mongo.Password" : "REDACTED",
      "Mongo.Port" : 27018,
      "Mongo.Roles" : "[{\"Mongo.Database\" : \"nitag\", \"Mongo.Role\" : \"dbOwner\"}]",
      "Mongo.UseTls" : false,
      "Mongo.User" : "nitag",
      "Redis.Host" : "localhost",
      "Redis.Password" : "REDACTED",
      "Redis.Port" : 6378,
      "Throttling.ValuesPerSecond": 5000
   }
}

 

Now that we have that out of the way, I'd like to explain some of the decisions behind the design:

 

Ask "how can we keep this server healthy" rather than "how fast can we write tags."

The core rationale behind adding throttling to a service is to ensure that its operation does not adversely affect the operation of other services. In other words, we believe it's better to throttle a client than to let a client disrupt other parts of SystemLink by calling an API endpoint rapidly or ingesting a large amount of data in a short period of time. There are security implications as well, since throttling helps mitigate denial-of-service (DoS) attacks. Through discussions with SystemLink users, we found it was too easy to accidentally consume all of the available CPU and memory resources when using our tag API. I expect more APIs to impose throttling limits in future releases.

 

The key piece of infrastructure we are protecting with this behavior is actually the tag historian, not the tag service itself. Inserts into the tag historian don't happen directly from client APIs (such as LabVIEW) but rather from the tag service internal to the server. Because of this, we decided to throttle at (a default of) 1,000 writes per second regardless of whether they come from a Multi Write or a single Write VI call (from the historian's perspective these result in the same thing).
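One way to stay under a server-side budget like this is to pace writes on the client. Here's a minimal token-bucket sketch in Python; the class name and the 1,000 values/second budget are illustrative, and this helper is not part of any SystemLink API:

```python
import time

class TokenBucket:
    """Pace tag writes so a client stays under a values-per-second budget."""

    def __init__(self, rate, capacity=None):
        self.rate = float(rate)                  # tokens replenished per second
        self.capacity = float(capacity or rate)  # burst size
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self, n=1):
        """Block until n tokens are available, then consume them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)

# Example: a 500-tag Multi Write costs 500 tokens, so at a 1,000/s budget
# two such writes per second pass without tripping the server-side throttle.
bucket = TokenBucket(rate=1000)
bucket.acquire(500)  # returns immediately; blocks once the budget is spent
```

The same idea applies whether the values go out over HTTP or AMQP, since the server applies one blanket limit.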

 


@JKSH wrote:

 

More questions:

 

  • From further testing, it looks like the write limit is shared across ALL nodes connected to a server. So if I have 100 nodes, each node is only allowed 10 writes per second! This seems quite limiting... what is the rationale for this design? Could we please have a way to configure the global limit, or even better, have a per-node limit?

Yes, this limit is enforced across all nodes, not per client. If we had been able to enforce it per client, it's highly likely our default would have been a lot lower than 1,000 values per second. As shown above, this limit can be loosened in scenarios where other services aren't regularly used and therefore won't be impacted, or where the historian isn't being used, so the resource consumption that would occur while retaining history is not present. If you do change this value, I encourage you to benchmark your server to ensure its normal operation doesn't interrupt users or managed nodes.

 

I expect we'll expose this setting in our configuration tools in a future release, so you won't have to edit JSON config files directly. In the meantime, I encourage you to write application logic that handles throttling errors and retries automatically.
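A retry wrapper along these lines is one way to handle throttling errors. This is a hedged sketch: `write_fn` is a placeholder for whatever tag-write call your client uses (HTTP request, AMQP publish, or a LabVIEW VI wrapper), and I'm assuming the throttling error surfaces as the -251927 code reported earlier in this thread:

```python
import random
import time

THROTTLE_ERROR = -251927  # throttling error code observed in this thread

def write_with_retry(write_fn, values, max_attempts=5):
    """Call write_fn(values); on a throttling error, back off and retry.

    write_fn is a placeholder for your actual tag-write call; it is assumed
    to raise an exception carrying the error code when the server throttles.
    """
    for attempt in range(max_attempts):
        try:
            return write_fn(values)
        except RuntimeError as exc:
            code = getattr(exc, "code", None)
            if code != THROTTLE_ERROR or attempt == max_attempts - 1:
                raise  # not a throttling error, or out of attempts
            # Exponential backoff with jitter: ~0.1 s, ~0.2 s, ~0.4 s, ...
            delay = 0.1 * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)
```

Backing off with jitter also helps when many nodes share the one global limit, since it spreads out the retries instead of having every node hammer the server in lockstep.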

Mark
NI App Software R&D
Message 4 of 4