
CVT Client Communication (CCC)

Also, what version of LabVIEW do you use, in case I need to upload some changes?

Message 61 of 98

I'm using LabVIEW 2015.

 

I haven't been able to get the CPU issue to re-appear, but it was troubling to see.  If I find some free time, I'll try to create a simplified example to re-create that issue.

 

Since yesterday, I re-coded and added a check to determine whether the server is active before attempting to make a connection (using a heartbeat shared variable).  That has stabilized the issue.
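For reference, the heartbeat gate can be sketched outside of LabVIEW. This is a minimal Python stand-in, not the actual shared-variable API: the hypothetical `server_alive` helper treats the server as up only if its heartbeat timestamp is fresh, and the client skips the TCP connection attempt otherwise.

```python
import time

# Assumption: the server refreshes its heartbeat at least every 2 s.
HEARTBEAT_TIMEOUT_S = 2.0

def server_alive(last_heartbeat, now=None):
    """Stand-in for reading a LabVIEW heartbeat shared variable:
    report the server as alive only if its last heartbeat is fresh."""
    if now is None:
        now = time.time()
    return (now - last_heartbeat) <= HEARTBEAT_TIMEOUT_S

# Gate the connection attempt on the heartbeat instead of retrying blindly:
# if server_alive(heartbeat_timestamp):
#     connection = connect_to_server(host, port)  # hypothetical connect call
```

The point of the gate is that the client never opens (or retries) a socket against a server that isn't publishing a heartbeat, which avoids the hammering retry loop entirely.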

 

Regardless, the pop-up about the PXIe controller not responding occurred even when the CPU wasn't railing out.  That makes me think there might be some sort of coupling between the TCP client connection attempt from my client VI and the stability check that the LV dev environment performs against a remote RT target currently connected in a project.  I do get an error about the connection being refused, so the client can easily detect that, but if I keep retrying the connection, that's when it seems to fall apart.  In this state, even if the server VI is run, the client can't establish a connection until the client VI is stopped and restarted.  That's weird....

 

When the server goes down, it's not clear whether it's my server VI or just the dev environment claiming the connection was lost.  The result is the same (both the dev connection and my client can no longer connect to the VI).  I can't tell if the server VI is even running at that point.

 

 

Message 62 of 98

I had a moment to experiment with LabVIEW 2015 here concerning the TCP connection issue to the CCC I was having.  It seems the problem is more fundamental than the CCC.

 

The "Stop waiting and disconnect" pop-up will appear in the simplest of situations.

 

  1. Create a project with My Computer and an RT target (I'm using a PXIe RT target).
  2. Add a client VI which simply attempts to connect to an unused or inactive port on the RT system in a loop (a retry-loop simulation).
  3. Add a VI on the RT system which just shows a counter in a 100 ms or so loop (just gets the project to connect, deploy, and show some visible dev-mode behavior).
  4. Set the retry timeout to 100 ms and run it.
  5. After a while, the project claims "Stop waiting for the target and disconnect".  The speed at which this happens seems dependent on how long the RT target has been running and how long the LV environment has been running on the host.

At this point, the RT target is (most likely) in an unknown state and may need to be manually rebooted.  MAX seems to be able to talk to it at some level, but the project may not be able to reconnect.  LabVIEW.exe on the host may become unstable (I had to force-terminate it).
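For anyone trying to reproduce step 2 outside of LabVIEW, the client side amounts to this pattern (a Python sketch with hypothetical parameters, not the original VI):

```python
import socket
import time

def try_connect(host, port, retry_ms=100, max_tries=5):
    """Repeatedly attempt a TCP connection to a possibly inactive port,
    sleeping retry_ms between failures (mirrors the repro's retry loop).
    Returns the connected socket, or None after max_tries failures."""
    for _ in range(max_tries):
        try:
            return socket.create_connection((host, port), timeout=retry_ms / 1000)
        except OSError:
            time.sleep(retry_ms / 1000)
    return None
```

In the repro, this loop runs indefinitely against the RT target's unused port; the point of the report is that these user-level connection failures should stay confined to the client socket, yet the dev environment's own link to the target is what falls over.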

 

(Attached screenshot: TCP_Timeout.png)

 

Why there's any coupling between the TCP connections of the dev environment/RT target and the TCP errors from a running VI that can't connect to the RT target, I can't say.  But it shouldn't exist.  A TCP error in a user's VI should not be able to affect the dev system's (project's) ability to maintain its connection to the RT target.  Is this a Windows thing or a LabVIEW thing?

Message 63 of 98

That's great; I'm glad you identified the problem. I'm not really part of the standard support chain, so I'd strongly recommend contacting support@ni.com or your local phone support team to report the issue. Since you have reproducing code, it should go smoothly.

 

It seems like this is more or less the core of the issues you're seeing. Unfortunately, there is no good way to know whether the server is connected without just trying it. Since trying it causes a CPU issue, I think you need to contact standard support before you can reasonably continue (besides the workaround you mentioned).

Message 64 of 98

Here's another question on CCC:

 

Can multiple CCC connections exist simultaneously between a client and server?  I would like to do something like the following at the same time:

 

- A high-rate sync (100 Hz) of CVT names

- A low-rate sync (1 Hz) of CVT names

- An on-demand sync of individual CVT names

 

Would three servers (on different ports) be necessary, or can the current CCC server implementation handle multiple clients?

 

Thanks,

 

XL600

Message 65 of 98

An answer to my last post...

 

Yes, multiple CCC servers are necessary.  I've actually implemented my system with four.  Each can handle one client.  I have three for polling different subsets of my CVT at different rates and one for a debug GUI which polls the entire CVT.  Works great.  There is one issue with the CCC implementation, though.  The CCC is designed to be able to reconnect to a CVT with differing tag lists, but the CVT is designed in a way that makes that impossible for both the server and client.  I have tweaked the CCC to accommodate this issue and posted the whole thing to the CVT discussion here.  I hope it can be incorporated into the released CCC, since this fixes what appears to be a basic functional disconnect between the two libraries.

Message 66 of 98

I'd also recommend posting a description here if you have a github account:

https://github.com/NISystemsEngineering/CVTClientCommunication/issues

If you've fixed it already, you can also commit any changes you've made to that github repo and someone on the team will review it. The license is apache.

 

From your other post it sounds like the issue is related to groups caching the lookup even when the group is changed. This was something we knew about and resolved in one case (normal CVT read and write), but we made the decision not to change it for groups, as they are a more 'advanced' feature... which doesn't help you with the CCC. I think the easiest solution is to simply yank the group lookup code out of the CVT and copy it (in a non-FGV manner) into the CCC. This would force each server to look up CVT tags when it launches, and it sounds like that would solve your problem. Is that how you solved it?

Message 67 of 98

No, I solved it by appending a 'session ID' to the end of the group name.  That causes the CVT to see a new group name upon connection/configuration, which then triggers a full refresh of the group IDs.  That seemed to be the simplest approach.
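As a sketch of the fix (in Python for illustration; the helper name and counter are assumptions, not the CCC API), appending a unique session ID to the group name guarantees the CVT sees a fresh group on every reconnect:

```python
import itertools

# Monotonic per-process counter standing in for a unique session ID.
_session_ids = itertools.count(1)

def session_group_name(base_name):
    """Append a session ID to a CVT group name so each new connection
    registers as a brand-new group, forcing a full refresh of group IDs."""
    return f"{base_name}_{next(_session_ids)}"
```

Because the cached group lookup is keyed by name, a never-before-seen name sidesteps the stale cache without touching the CVT's own group code.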

 

I've not used github before so I don't know when (if) I'll be able to get something posted.

Message 68 of 98

I can't seem to get my CCC loop on the HMI to run at 1 kHz, nor does it seem to run very consistently at 100 Hz.  At first I thought it was due to my code size and CPU usage on the cDAQ RT target (I am using a cDAQ, even though the CVT XML file references cRIO; I made the switch and haven't changed all the naming).  However, in the attached code, I pared it down to the bare essentials.  I have a large tag file, maybe 400 tags.  Are there any timing limitations for the CCC read/write VIs?  Does having a larger file begin to affect performance at some point?  Thanks in advance!

Message 69 of 98

I wouldn't expect any loop to run cleanly on a Windows machine at 1 kHz.  I don't think the scheduler can even do that.  100 Hz is probably stretching it as well.  In my own designs, I never run HMI loops at more than about 50 Hz, and I usually try to hold them to 10 Hz to keep the CPU load down.

 

In my CCC/CVT efforts, 100 Hz was about the max I could get cleanly with 200 tags.  What I did to alleviate the issue was realize that almost none of my CVT values need to sync at that rate.  So I use multiple CCC servers on my RT target.  One handles only the 3-4 tags I need at 100 Hz, and the others handle 10 Hz and asynchronous duties (large tag transfers, typically large flattened strings).

 

Even the high-rate CCC of just a few tags looks like it starts to suffer at about 300 Hz (it's using the STM library under the covers).  I wouldn't want to run it that fast anyway because of the hit it would have on the RT CPU load.

 

I don't use the CVT/CCC for any data that needs to transfer at higher rates.  Things like data-acquisition samples are buffered into waveforms or arrays and then queued to my HMI using network streams.  There's almost no overhead once a stream is connected (unlike the STM protocol with the CCC layered on top).
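The buffer-then-queue pattern described above can be sketched generically (a Python stand-in, where a `Queue` plays the role of the network stream and the chunk size is an assumption, not a CCC or stream parameter):

```python
from queue import Queue

def buffer_and_queue(samples, chunk_size, out_queue):
    """Batch high-rate samples into fixed-size chunks before handing them
    to the transport, so per-transfer overhead is paid once per chunk
    rather than once per sample."""
    chunk = []
    for sample in samples:
        chunk.append(sample)
        if len(chunk) == chunk_size:
            out_queue.put(chunk)
            chunk = []
    if chunk:
        out_queue.put(chunk)  # flush any partial final chunk
```

The design choice is the same one the post describes: pay the protocol cost per waveform or array, not per tag, and reserve the CVT/CCC for low-rate state.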

 

Hope that helps?

Message 70 of 98