
How do I change the buffer allocation of an "Advanced"-type shared variable programmatically (at run time)?




Here is my latest Shared Variable related question/situation. 


Using the DSC module, I am creating shared variables programmatically, and I am also enjoying the "Configure" (DSC Module>Engine Control>Variables & I/O Servers): through it I can very easily select options like "Use Binding" or "Use Buffering" with a predefined buffer size.


But the predefined buffer size is something I would like to change. My data type is "Array of Double Waveform" (a so-called "Advanced" data type in the Shared Variable Engine / Distributed System Manager).

And if I change the buffering options in the Variable Properties (through Project), the buffering fields are "No. of arrays", "No. of waveforms" and "Points per waveform". 

However, in the Configure's Network Settings input, the cluster does not have those fields; it has a single buffer-size field with a default value of 50.

In LabVIEW 8.6, there is an example (examples\lvdsc\Variables\Library Generation\Common\Library) which has fields such as "BuffSize", "ElemSize" and "PointsPerWaveform", with values in those fields. But in the later versions, this example has no such fields.


Questions - 



  1. Is it possible to change the individual buffer field values for programmatically created shared variables? If yes, how? If no, what is an alternative way to define them?
  2. If I access such programmatically generated shared variables using the DataSocket (DS) API in remote programs, do I really need to worry about the buffers defined in the shared variable definition in the host library, given that I can set the buffer size in the DS API? Is it enough to just set "Use Buffering" = T (at SV creation) and let the DS API handle the buffering at the clients?
  3. If I access such programmatically generated shared variables using shared variable nodes, how will I set effective buffering then?
  4. If such buffer specification is not possible while programmatically creating shared variables, is there a way to just "clone" an already created shared variable? That way, I would copy my "model" shared variable whenever I need to create a shared variable of that type.



For my case, the array-of-double-waveform variable type, created with buffering enabled, later showed buffer values of 50, 1, 1 (for the above-mentioned buffer fields, respectively), which is not quite what I usually specify: 2, 2, 150.


Thanks in advance. 

Message 1 of 11

I'm not an expert on DSC, but I'll give it a shot.  With DSC, you can create Variables offline within libraries and then later deploy the library to make your changes live.  This interface uses the "green refnum" wire and mimics the settings you would see in the project properties dialog for the variable item.  Look at the project at examples\lvdsc\Variables\Library Generation\Library Generation.lvproj for an example of this (I'm looking at LV 2010, so hopefully it jibes with the version you're using).


With DSC, you can also make live changes and modify settings for Variables deployed to the server at run time.  This interface uses the "purple refnum" wire, and while the settings will look similar to what you see in the properties dialog in the project, they won't match exactly.  Look at examples\lvdsc\Variables\On-line Access\Create Online Process - for an example of this. 


With the offline interface (green refnum), you'll have access to the buffer size, element size, and points per waveform size settings that you were referring to.  However, they are no longer exposed through any of the VIs shipped on the palette or with the examples.  Instead, you'll need to drop down the property node in order to access them.  If you use the polymorphic VI that ships with the examples, it will only set buffer size (equivalent to max packets from the DataSocket perspective).  It won't change the element size or points per waveform properties from their default value. 


With the online interface (purple refnum), you now have access to BuffSizeLimit (# of packets) and BuffValueLimit (# of bytes) properties.  These attributes behave the same way as the max packets and max bytes properties from the DataSocket API (and I know you're familiar with that).  If you use the VI that ships on the palette to configure the buffer size, it only sets the BuffSizeLimit property, and the BuffValueLimit property will remain at its default value.  Again, if you want to set the other attribute, you'll need to drop the property node and explicitly set it yourself. 


In regards to your questions:

  1. Yes, see above explanation.
  2. Depends, but you'll definitely want to put some thought into it before you decide to enable/disable buffering.  The buffer settings you define in step 1 dictate the buffer settings for the variable deployed to the server.  The buffer settings you define in the DataSocket API dictate the buffer settings for the client connection to the server.  There are always at least three buffers (I still consider a non-buffered client connection or variable deployed with no buffering enabled to be a buffer of size 1).  If you enable buffering on the client writer and client reader but not the server, there is still a good chance you're going to lose data on the server if it can't shuffle the data between the client writer connection and client reader connection fast enough.  Often multiple elements from the writer will arrive in a single TCP packet on the server so it's not uncommon for this to happen.  If on the other hand you use a non-buffered writer with synchronous writes, you don't necessarily need buffering enabled on the server.  In this case, data just queues up on the reader until it gets around to reading it, and you don't have to worry about losing data from the writer connection or server since the write is synchronous and doesn't return until the data has been delivered to the reader.
  3. The shared variable node is an alternative to the DataSocket API for interfacing with the client connection to the Variable on the server.  In this case, the client side buffer settings are derived from the property settings on the variable item within the project (in effect the buffer settings between the client and server are always the same in this case assuming you used the project to deploy the Variables).  If you use the programmatic variable API, the buffer settings for the client side connection can be set using inputs on the Open VIs on the PSP Variable palette.
  4. I think there are roundabout ways of accomplishing this, but I don't think it will be necessary if you use the information above to accomplish what you want.
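The data-loss scenario in point 2 can be sketched as a toy model. This is plain Python, not LabVIEW G, and everything in it is an illustrative assumption: the server-side buffer is modeled as a bounded queue that silently drops the oldest value when full, which is the behavior that loses data when several writer updates arrive in one TCP packet faster than the server can forward them.

```python
from collections import deque

def run(server_buffer_size, bursts, reads_per_burst):
    """Toy model of the server-side buffer between a client writer
    and a client reader.  A deque with maxlen stands in for the
    buffer; appending to a full deque discards the oldest element,
    which is how updates get lost on an unbuffered server."""
    server = deque(maxlen=server_buffer_size)
    delivered = 0
    for burst in bursts:               # several writer updates arrive in one TCP packet
        for value in burst:
            server.append(value)       # overflow silently discards the oldest value
        for _ in range(reads_per_burst):
            if server:                 # reader forwards one value per burst interval
                server.popleft()
                delivered += 1
    delivered += len(server)           # reader eventually drains whatever is left
    return delivered

bursts = [[1, 2, 3]] * 10              # 30 writer updates, 3 per packet
print(run(1, bursts, 1))   # "unbuffered" server (size 1): only 10 of 30 survive
print(run(50, bursts, 1))  # buffered server: all 30 survive
```

The same model also shows why a synchronous, unbuffered writer avoids the problem: if each write blocks until the reader has taken the value, the burst size is effectively 1 and nothing can be overwritten.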
Message 2 of 11


Thanks for the good information.


As you can see from my previous post, I had already seen the examples you mentioned (including Create Online and Create Online Process -), and on their basis I tried to write a small program for creating libraries and shared variables. But the program I wrote in LV 2010 never allowed me to use the polymorphic VIs shown in those examples. Nevertheless, without the polymorphic VIs I could still use Configure and Configure Initial, so I didn't bother much to find out the real difference. Besides, I had noticed those extra fields in the buffer-specification cluster in the LV 8.6 example, as I wrote in the first post. Hence my question about why they were abandoned in the later versions.


But your clues of "green refnum" and "purple refnum" were really critical, as they made me notice the minute (yet BIG) difference: the seemingly identical-looking VIs are actually different. I had not used VIs like "" or "Add Shared Variable to" because they are not in the functions palette. Somewhere in another thread I read that some functions are "hidden", but those functions, I felt, are already visible in the palette in LV 2010. Once I noticed the different VIs, I was forced to track them down, and surprisingly, all those functions that looked "new" in LV 2010 are actually present, "hidden", in my LV 8.6 installation. I also learned how to put them in the functions palette so that I can use them as easily as other functions.


So Kudos to you for that tip. Your effort to carefully type "green refnum" and "purple refnum" paid off.


Question 1.

Ok, so these functions allow me to set buffer properties like "BuffSize", "ElemSize" and "PointsPerWaveform" - but how do they (or do they?) correspond to buffer fields of "No. of arrays", "No. of waveforms" and "Points per waveform" respectively?

Buffer Size = No. of arrays?

Element size = No. of waveforms?

Also interesting to know that with the online interface (purple refnum), BuffSizeLimit = "# of packets" and BuffValueLimit = "# of bytes".


Regarding your answer#2 above, thanks for throwing light onto the buffering system and I have two questions - 

Question 2.

(continuing with my original question) If I enable buffering at the server (without defining any buffer values or changing the defaults), and explicitly set buffer values (bytes/packets) at the clients using the DataSocket API, will the buffering work well based on my client-side buffer values? From your description it seems that even in this case we have to explicitly define buffer values on the server side as well. Right or not?

Question 3.

How do I enable synchronous write? By appending "?sync=true" to the PSP URL? I saw some other synchronization options as well, but don't remember at the moment where I saw them.


Regarding the answer#3 above -

Question 4. 

I didn't understand it well. I think you meant that if, [at the client side,] shared variables are defined as nodes within the project, they will derive their settings from the property pages of the [client-side] project itself. Right? So when you said "if you use the programmatic variable API" ... where? On the server or the client? Can you please elaborate on this third answer?


In my application, variables must be created dynamically as new users log on to the system. So I need the shared variables at the server to be created programmatically, and at the clients (readers and writers) I also need shared variables - either nodes or PSP through DS. If I use nodes at the client side, I can just change their binding URLs using property nodes (and since I will create the nodes manually before deploying, I will be able to set the buffer values); if I use the DS API, it is easier to refer to the variables, but I need a good buffer definition. In either case, I will need to set buffer values at the server side programmatically.


I hope I made my situation clear.

Message 3 of 11



I got answer to my question 1 above (in my second post).


Actually I started experimenting with the different methods of programmatically creating shared variables, now that I know 2 of them. And set the buffer values, and I see them reflecting now, as I am making "Offline" (with green refnum wires) libraries. 


The three property elements "BuffSize", "ElemSize" and "PointsPerWaveform" refer to the three buffer fields "No. of arrays", "No. of waveforms" and "Points per waveform" respectively in property page of "Array of Double Waveform" data typed shared variables.



Still looking for answers for the questions 2, 3 and 4.


Am preparing the projects so that I can post here or on the other thread for analysis. 

Message 4 of 11

Question 2.

The simple answer is yes.  The only scenario (that makes sense to me anyway) where you would enable buffering on the client but not the server is where you are making infrequent updates (say slower than 5 times a second) and you don't have to consume data at the same rate on the reader in order to not lose data.  In this case, you might want to read data every 10 seconds or once a minute and read tens or hundreds of points in a loop rather than having to read a single point every fraction of a second.  If you're updating more frequently than that, you run the risk of multiple updates reaching the server in a single TCP packet.  If buffering isn't enabled, the server will only maintain the last point before it is able to forward it to subscribed readers.


Question 3.

Yes, if you look at the context help for the DataSocket Write VI, you'll see the string you need to append to the PSP URL.  I've never used this feature with PSP so I can't provide a lot of guidance here.  It's always seemed a little odd to me to use synchronous communication for N:M topologies.  However, I'm sure somebody has a use case I'm not considering where this might make sense.  If you have a 1:1 topology, I would just use network streams.
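For what it's worth, assembling the URL itself is just string handling. A sketch follows - the "?sync=true" suffix is the string quoted earlier in this thread (verify it against the DataSocket Write context help), and the server, library, and variable names are made-up placeholders:

```python
def psp_url(server, library, variable, sync=False):
    """Build a PSP URL for a variable hosted on a remote Shared
    Variable Engine.  The '?sync=true' query suffix is the string
    mentioned earlier in this thread for synchronous writes; check
    the DataSocket Write context help for the exact syntax."""
    url = f"psp://{server}/{library}/{variable}"
    return url + "?sync=true" if sync else url

# Hypothetical names, matching the kind of library used later in this thread:
print(psp_url("server-location", "VideoChatLibLV2010", "AudioStream", sync=True))
```

Built this way, the same code path serves both the buffered asynchronous case and the synchronous case, with only the flag differing.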


Question 4.

There are basically three different ways you can read/write PSP Variables from the client side:

  1. DataSocket - As mentioned before, to enable buffering with this interface, you need to set the mode input as appropriate in the Open VI and set the buffer size properties in the property node.
  2. Shared Variable Node - This is the node you get when you drag and drop the Variable from the project to the block diagram (you can also drop it from the palette).  I usually call this the static variable API since the Variable you access is statically tied to the item in the project at edit time.  If you enable buffering on the Variable in the project, then both the client and server always use buffering and they use the same buffer settings.
  3. For lack of a better name, what I call the dynamic variable API.  You can find this API on the Shared Variable palette underneath the Data Communications palette.  Here you have Open/Close and Read/Write methods (very similar to DataSocket) for accessing deployed variables.  Unlike the static variable API, you can build URL strings at runtime and use this API to dynamically open connections to them.  If you want to enable buffering on the client, you have to use one of the Open variants located on the PSP Variable sub-palette to configure the size of the buffer in elements.

If you need to create new variables based on the number or instance of users logged in, then you'll definitely want to use the DSC module to create them dynamically on your server application.  However, on the client side, I would just use the dynamic variable API to access the data (DataSocket if you need to support Mac or Linux).  If you use the static variable API, then yes you'll have to create new variables in the client application that are bound to the points in the server.  This is a limitation of LabVIEW projects since you can only have one desktop target in a project.  This is what necessitates the binding of variables on the client to the variables on the server when using the static node.  If you use the dynamic API (or DataSocket), you can bypass this step completely and just connect directly to the server.  This is a lot easier and more efficient than having a server process running on each client machine which in turn is exchanging data with the bound points on the server process running on the server machine.

Message 5 of 11


Thanks a lot again for the detailed answers; they have been very useful. I could only get to my system today to reply to your post, but I read your answers on my mobile device on Friday and Saturday, and I have been digesting the ideas since then.


Ok, going point-wise again to your 3 answers above, 


1. Yes, enabling buffering at the server side is necessary (in the scenarios we discussed), but my question was about the buffer elements. The two variable-creation methods - offline and online - have their pros and cons. I would prefer the online method, i.e. not creating physical variables but creating them online so that they appear in the Distributed System Manager, with no need to write them to a library and deploy that library. This looks good in terms of creating the variables (they are temporary and need no forced deployment), but it lacks good support for those elaborate buffer settings. So my question, to be more specific, was: if I enable buffering at the server, how exactly do I define the server buffer elements when I create the variables the online way (purple refnum wire)?


2. Ok, so specifying that extra string is the only way to enable synchronization. My case is also 1:1 (for the audio/video streams, which are difficult to get lossless). I also considered Network Streams, but after reading the material I found that they require a (public) IP (or DNS name) to define the other endpoint. In my application, the two users will not always have a public IP (and of course not fixed locations), so I cannot always predefine the endpoints as required. The users will run their applications on two different computers in two separate LANs, connecting through the Internet, so they will always need some common reference point. Network Streams cannot have any middle point - just 1:1. With Shared Variables and DataSocket there is a "server", which can have a public IP for all to access, so my applications will communicate over the Internet through the SV or DS server. Do you have a suggestion on this?


3. At the client side, I think I can go both ways - static nodes or dynamic variables (either through DS or the Shared Variable API) - but my biggest issue is at the server side, where I must create variables dynamically no matter what. In the client applications, I can even keep the 2 pairs of variables pre-defined (one pair each for audio and video) and just change their binding URLs, or access the remote variables dynamically through the above two APIs (which support dynamic URLs). But at the server application I must create these variables, and there at the server:


  • if I create them with the green refnum wire, it allows me to create them with the data type I want (by specifying the data type constant) and with the buffer settings I want, but I must then deploy the library again (and I still have to check the effects of re-deploying a library which is currently in use by other variables);
  • if I create them with the purple refnum wire, it also allows me to create complex data type shared variables, but it doesn't allow me to set the buffer settings I want, and it is deployed very quickly (automatically) as soon as I commit the variables. I prefer this so-called "online" method in that the shared variables don't stay physically (in a file) in the server libraries forever; they stay in the Distributed System Manager only until the computer is shut down. The next time it's on, they are gone (my understanding).


So basically looking for a way to get the buffering benefits in the "online" method. Any ideas?


4. Thank you for the tip about the so-called "dynamic variable API"; now I need to explore it more. You mentioned it in the other post as well, but just asking here: are there any specific advantages of using it over the DS API? I would have to install LV 2010 on this computer to access this API, but nevertheless I wanted to know the biggest advantage in my case.


Thanks again! 

Message 6 of 11



Still looking for some of the answers.



Message 7 of 11

Let me take one more crack at clarifying the buffer settings for Variables.  Let's take the example where the data type is an Array of Double Waveform which is probably the most complicated case.  When configuring buffer settings in the project (right click properties menu on the Variable), you'll see buffer configuration settings for Number of arrays, Number of waveforms, and Points per waveform.  If you're using the "green refnum" API to programmatically create or set the buffer configuration, you should see a very similar set of properties in the property node.  In this case, the properties are Network.UseBuffering, Network.BuffSize, Network.ElemSize, and Network.PointsPerWaveform where:


  • Network.UseBuffering enables or disables buffering regardless of the other settings.
  • Network.BuffSize is the number of elements in the buffer.  This is equivalent to Number of arrays from the Variable properties dialog.  Since your element type is an Array of Double Waveform, this indicates how many arrays of waveforms the buffer will hold.
  • Network.ElemSize is the size of the array element and is equivalent to Number of Waveforms from the Variable properties dialog.  This setting only has meaning if the data type of your Variable is an array of some form.  In this case, it indicates the maximum number of waveforms that each element will contain. 
  • Network.PointsPerWaveform is the size of each waveform contained in the buffer and is equivalent to Points per waveform from the Variable properties dialog.  In this case, this setting specifies the size of the array of doubles contained by the waveform.  This setting only has meaning if your data type is a waveform or array of waveforms.

Because of the data type you've chosen, the buffer in essence becomes a three dimensional array and the three properties above are used to specify the size of each dimension. 


When you actually deploy the Variable, these configuration settings get converted to the Network.BuffSizeLimit and Network.BuffValueLimit properties that you see packaged with the DSC "purple refnum" where:


  • Network.BuffValueLimit is the maximum number of elements that the buffer will hold.  This is equivalent to the Network.BuffSize setting present on the green refnum.  In your case, this is the maximum number of Array of Double Waveform elements that will be maintained by the buffer.
  • Network.BuffSizeLimit is the maximum number of bytes that the buffer will hold.  This value is derived by multiplying all of the above settings together along with some other settings like the size of the element type of the waveform. 

Whichever limit of Network.BuffValueLimit or Network.BuffSizeLimit is reached first will cap the buffer size and prevent it from growing any larger.  If you don't care about placing a byte limit on the buffer size and prefer to just think of things in terms of elements (number of arrays of waveforms in your case), then set Network.BuffSizeLimit to something huge like 4,294,967,295 and set Network.BuffValueLimit to however many elements you want to hold on to.
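As a sanity check on how those two limits relate, the element count can be converted to an approximate byte count by multiplying the three green-refnum buffer dimensions by the sample size. This is only a rough lower bound under my own assumptions - as noted above, the real Network.BuffSizeLimit derivation also folds in per-waveform overhead (t0, dt, attributes), which this sketch ignores:

```python
def approx_buffer_bytes(buff_size, elem_size, points_per_waveform,
                        bytes_per_point=8):
    """Rough lower bound on the byte size of the buffer for an
    Array of Double Waveform variable: the product of the three
    buffer dimensions (Network.BuffSize x Network.ElemSize x
    Network.PointsPerWaveform) times the sample size.  Per-waveform
    overhead (t0, dt, attributes) is deliberately ignored here."""
    return buff_size * elem_size * points_per_waveform * bytes_per_point

# The values quoted earlier in the thread: 2 arrays, 2 waveforms each,
# 150 points per waveform, 8-byte doubles.
print(approx_buffer_bytes(2, 2, 150))  # 4800 bytes
```

So for the 2 / 2 / 150 configuration discussed earlier, even a small multiple of ~4.8 KB per buffered element makes it clear why a byte limit in the megabytes, as suggested later in the thread, leaves plenty of headroom.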


In regards to the advantages/disadvantages of Dynamic Variable API vs. DataSocket API, some key benefits of the variable API are:


  • Generally has better runtime performance
  • Better runtime behavior since it doesn't interact with the UI thread
  • Supports programmatic discovery of deployed variables
  • Will also provide local and remote access to I/O Variables

The only compelling reasons I can think of off the top of my head to choose DataSocket over the Variable API is if you need to run on Mac/Linux or you're communicating over ModBus or OPC servers and you've found some DataSocket feature that isn't yet supported by the Variable API.  However, in your case it doesn't sound like any of these reasons apply.

Message 8 of 11

Ok, getting back to the discussion: I implemented the programs in LabVIEW 2010, as it allows creating a shared variable programmatically with the "Array of Double Waveform" data type through the "purple refnum" API. But once again I had a setback. My requirements have now grown 😄 - after you pointed out the IMAQ variable (with the Image data type), I would prefer to use it (at least to test performance), and again, the purple refnum API's variable-creation VI offers options without this Image data type.


This is something I have to complain about: the options for programmatic SV creation always lack something that is available in the Properties dialog in the Project Explorer. In version 8.6 there were just 4 data types; in 2010 they included the other data types but missed the new ones (like the Image data type). If only it were included in the programmatic variable-creation API, I would have much less trouble exploring other options.


Regarding the green refnum, I had understood the corresponding fields Network.UseBuffering, Network.BuffSize, Network.ElemSize, and Network.PointsPerWaveform, but was not sure about the purple refnum. So thanks for the detailed explanation once again.

But as I wrote in the other thread, I don't know what "Number of arrays" and "Number of waveforms" mean when talking about the sound format of the Sound API. Can you please shed some light on that, either here or on the other thread?


So far I had planned to use the purple refnum option, but as I explained above, it is not possible to create Image data type shared variables with that method. So I will stick to the green refnum method.

By the way, whether Network.BuffSizeLimit corresponds to a number of bytes or a number of elements contradicts your previous post, which confused me.


So after all this hassle, following your suggestion, I just use the "bytes" value, setting it to something like 5 MB (5,242,880 bytes) to be on the safe side. Probably I should increase it.


In the attached programs, in the Server project, the Offline method is the one I use to create or update the library.

One important question: is it OK to update a library which is in use? So far I haven't had a problem, but I'm not sure whether it could create one in a real-life scenario. Due to the lack of functionality in the purple refnum API, I had to go for the green refnum API (offline mode), which means I physically create shared variables and then deploy the library (which is already running in the Distributed System Manager with some of the shared variables). Will this create problems for the already existing shared variables? If yes, what is the way to go?



About the attached programs:


  1. In the Client programs project, I have a total of 12 VIs - 6 for each of clients A and B. Of these 6 versions, 3 are with synchronization and 3 are without. The 3 of each type (with or w/o sync) are: 1. static shared variable nodes (programmatically bound URLs), 2. static shared variable nodes (same as the 1st) using the Image data type (for IMAQ images), and 3. the Dynamic Variable API. The VI names are intuitive after this explanation. I created these versions to test differences in performance.
  2. The library VideoChatLibLV2010_b.lvlib in the Clients project has Shared Variables bound to the library variables (of VideoChatLibLV2010.lvlib) on the Server. I use suffix or prefix "b" to indicate that the library or shared variable is bound and not the master (hosted).
  3. In the programs, in the first frame of the main Flat Sequence (or in the Stacked Sequence Structure's 2nd frame inside the first frame of the main Flat Sequence), I have changed the server address to "server-location", which you will have to replace with your server address. Also please change the binding of the shared variables in the library VideoChatLibLV2010_b.lvlib to the shared variables of VideoChatLibLV2010.lvlib, depending on where you deploy the host/server library.


Now, about using the Dynamic Variable API.


Thank you for the points you made; they helped me make my decision - hence the separate versions of the same program above.


Also, from your phrase

"... This is a lot easier and more efficient than having a server process running on each client machine which in turn is exchanging data with the bound points on the server process running on the server machine..."

I assumed that if I use the Dynamic Variable API, I need neither the library nor the shared variables on the client machine (hence no Shared Variable Engine processes), and based on this assumption, my versions of the client programs (A+V_Dynamic_IMAQSubsc_*.vi) contain no physical shared variables and no library references.

Please let me know if I understood correctly.


I selected the bottom-up approach because this particular program version (with the Dynamic SV API) requires the least (no library, DSC, etc.), so on a fresh computer (with LV 8.6 only) I wanted to start with this program's installation, as it will help me deduce which components are unnecessary for the Dynamic Variable API approach. If I had used the Static Variable API, I would have already installed components without knowing whether they are necessary for the Dynamic API approach.


Each program in the Client project has an EXE builder under Build Specifications, each one's name corresponding to the VI it builds the EXE from. For the Dynamic Shared Variable API programs' builders, if you check the properties of the build specification "A+V Chat - Dynamic IMAQ Subscriber A", several settings differ from the other build specifications above it.

1. Source Files - the "Always Included" does not contain any library, unlike build specifications like "A+V Chat - Static Bound Subscriber A".

2. Advanced - I don't "Enable Enhanced DSC Run-Time support", as presumably I don't need it now.

3. Shared Variable Deployment - I don't deploy the library at application execution (and the programs with "Dynamic" in their name, don't deploy any library either).


Now if you check the corresponding Installer of this build specification "A+V Chat - Dynamic IMAQ Subscriber A - Setup", you will see some more settings.

Additional Installers -

  • I don't include NI Variable Manager. This is critical. Should I?
  • What about NI Variable Engine 2.4.0?
  • As in the build specification, I don't include NI Enhanced DSC Deployment Support for LabVIEW 2010. In case of Static Variable API, if I am using DSC functions like change library binding URL, will I have to include this one?
  • In NI LabVIEW Run-time Engine 2010, I excluded NI LabVIEW 2010 Real-time NBFifo and many other sub-components. Is it OK?


Today, the setup generated by this configuration was very bulky. For a small program (and even after excluding many of LVRTE 2010's sub-components), the setup was a whopping 538 MB. It took me 1 hour to generate the setup on my Vista computer with 2 GB RAM, and its installation on a test computer took more than 2 hours. Why? Analyzing the prepared setup, I found a lot of unnecessary stuff and no way to exclude it.


Now, the result of the test run.


The above settings somehow worked, but only for a few seconds. Even without any library deployment on the test machine, I could get live video and audio from that client computer to another client computer. The server was just idle (the Distributed System Manager was not open, nor was LabVIEW). Then, while receiving live data from the other client, the server computer suddenly showed a message out of nowhere - the Windows system message that appears when a program ends abruptly, asking whether to send a report to Microsoft:

"National Instruments Variable Engine Stopped".


I am not sure whether this happened because something was not configured or because my setup was lacking something.


I didn't find any KB article saying what the "Installer" properties should be for VIs that use the Dynamic API. There are such instructions for Shared Variable deployment and DSC functionality inclusion, but nothing about the Dynamic API.



Looking at this post, I feel like I almost wrote an article (except that there are more questions than answers, but nevertheless some information) 😉 ... which is actually what I was supposed to be doing - updating a conference paper - and will finally do now that I am finishing this post. 🙂


Once again, thank you very much for giving your time. I am looking forward to comments on my questions/doubts in this post.


Best regards.

Message 9 of 11

Summarizing the big post above. 


On the server computer's project: 


1. When creating shared variables programmatically, the online method of adding them to a process (purple refnum) does not allow adding/creating SVs of the Image type. In fact, the purple refnum method always lacks some data types that are available in the Project Explorer's shared variable properties. In LV 8.6 it couldn't create anything more complex than String, Boolean, or Double; in LV 2010 it has more options but lacks what is new in the Project Explorer's SV properties, e.g. the Image data type. Why does it always lack something? Is there any way to overcome this, so that I can take advantage of online process modification with the purple refnum?


2. This leaves me with only the "green refnum" method to create and deploy shared variables programmatically. This means they must be created in libraries which might already be deployed (and being accessed remotely), and the libraries redeployed after adding more shared variables. If a library is deployed (a process in the Shared Variable Engine) and I create some shared variables in it, save it, and deploy it again, will that interfere with the library's functionality (e.g., will the other shared variables already deployed and being accessed be disturbed in some way)? I don't have the means to test this under working conditions, so I need to ask this rather theoretical question.



On the client side: 


3. When using the Dynamic Shared Variable API (bundled with LV 2009/LV 2010), what are the requirements? I didn't find many implementation details in the help. For example, when deploying programs which use the DSC API, it is necessary to deploy libraries with the bound shared variables, and there are KB articles on how to do it programmatically and on the steps for preparing the application builder. What are the corresponding steps for the Shared Variable API? As I understood it, I don't need libraries with bound shared variables on the client side; just by using the API, I refer to the deployed shared variables at the desired URL (of shared variables hosted on the server) and access them. Right? Do I need to include the Shared Variable Engine in the Installer's Additional Installers? Do I need to deploy any library in the client programs/projects/builders? How about buffering on the client side with the Dynamic SV API? Do I need to keep the Distributed System Manager open on the server? I think it is always running (I could access the server computer's shared variables from the client computer's Project Explorer/DSM when the server's DSM was not open).


The programs attached to the previous post contain the client and server implementations. Can you please tell me what is missing? I have explained the test I made and the problem that occurred.


I hope this summarizes the previous questions clearly.

Message 10 of 11