

In-place overwriting of memory chunk by external libraries

I'm trying to understand some example code from the SDK of an instrument (a Picotech oscilloscope, link to SDK). If I look at one of the example routines, such as the snippet below, my immediate expectation is that the I16 array going into the inner loop would have its values updated once per iteration of the outer loop and would stay constant while the inner loop executes. In fact, when you hook up the hardware and run it, the array updates on every iteration of the inner loop.

 

What's going on is that those subVIs are essentially thin wrappers around calls to external routines linked in through DLLs. The routine just before the inner loop (PS4000 Start Stream) sets up a fixed-size buffer (the I16 array), and the routine inside the inner loop (PS4000 Get Stream Values) overwrites part of the memory backing that buffer in place. When the Array Subset command executes, it extracts the latest measurements from the buffer into a dedicated array for display or further processing.
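Here's a rough C-style sketch of what I think those wrappers are doing under the hood. The driver functions here (sim_start_stream, sim_get_stream_values) are simulated stand-ins I made up, not the real ps4000 API; the point is just that the DLL keeps the pointer you registered and writes into that memory later, behind LabVIEW's back:

```c
/* Simulated sketch of the in-place buffer updating; the "driver" functions
 * below are stand-ins, not the actual ps4000 calls. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUFFER_LEN 500000
#define CHUNK_LEN  60000

/* --- simulated driver: it remembers the caller's pointer and writes into it later --- */
static int16_t *registered_buf;   /* pointer remembered by "Start Stream" */
static uint32_t write_pos;

static void sim_start_stream(int16_t *buf, uint32_t len)
{
    (void)len;
    registered_buf = buf;         /* the driver now holds a reference to this memory */
    write_pos = 0;
}

static uint32_t sim_get_stream_values(uint32_t *start_index)
{
    /* overwrite part of the registered buffer in place, as the DLL does */
    *start_index = write_pos;
    for (uint32_t i = 0; i < CHUNK_LEN; i++)
        registered_buf[write_pos + i] = (int16_t)(i & 0x7FFF);
    write_pos = (write_pos + CHUNK_LEN) % (BUFFER_LEN - CHUNK_LEN);
    return CHUNK_LEN;
}

/* --- the "block diagram" --- */
int main(void)
{
    static int16_t buffer[BUFFER_LEN];   /* the I16 array wired into the loop */
    static int16_t chunk[CHUNK_LEN];

    sim_start_stream(buffer, BUFFER_LEN);            /* PS4000 Start Stream */

    for (int i = 0; i < 5; i++) {                    /* inner loop */
        uint32_t start = 0;
        uint32_t n = sim_get_stream_values(&start);  /* PS4000 Get Stream Values */
        /* Array Subset: copy the freshly written region into its own array */
        memcpy(chunk, &buffer[start], n * sizeof(int16_t));
        printf("iteration %d: buffer[%u..%u] changed under us\n",
               i, (unsigned)start, (unsigned)(start + n - 1));
    }
    return 0;
}
```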

 

My question is essentially: what are the rules associated with this sort of thing? In my own code I could just slavishly copy the structure of this example, but I'd prefer to understand exactly what I can and can't do with that buffer array without breaking the behind-the-scenes updating, and whether there's any straightforward way of telling that this is going on in general code (since it essentially breaks the dataflow paradigm). What I've been doing in early versions of my own code is to take each chunk as the Array Subset operation spits it out, stick it into a queue, and use a separate consumer loop to dequeue and process as needed; in other words, treat the initial source of the data as a bit of a black box. I'm nervous about using (or writing!) code that I don't fully understand, so that's not really ideal...

 

Thanks,

Daniel

 

[Attached image: picoscope snippet.png]

Message 1 of 11

Funny coincidence, someone else just asked a very similar question about a similar instrument yesterday: http://forums.ni.com/t5/LabVIEW/When-is-dataflow-not-data-flow-Updating-LabVIEW-Arrays-through/m-p/3...

Message 2 of 11

Huh. I swear I did search before posting, though I was looking for the generic case and didn't search on the specific instrument type... Skimming that thread, it seems like labeling that chunk of code with 'here be dragons' and keeping it as simple as possible (no branches or anything in the array wiring) is the safest approach, where 'safest' is decidedly marginal.

 

Basically, it looks like the SDK authors took their standard C functions (which work by explicitly passing a pointer to a preallocated buffer, and also use callbacks) and jammed that round peg into a square LabVIEW-shaped hole.

Message 3 of 11

Believe it or not, I use Google to search these forums.  Type in "LabVIEW" and whatever you are searching for, and usually one of the hits will include a link to "More results from forums.ni.com".  That will be a treasure trove compared to what you get searching with the NI forum search tool.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 4 of 11

So what would be the right way of handling this sort of thing? I confess to never having done any sort of manual memory management in LV, so I'm not sure how one would go about either creating a pointer (that points to an allocated block of appropriate size) or reading data back from that block when the DLL functions indicate that new data is available. Trawling around through the docs a bit, it looks like calling DSNewPtr to allocate the memory and then MoveBlock to read it back on demand?

Message 5 of 11

@dmsilev wrote:

Trawling around through the docs a bit, it looks like calling DSNewPtr to allocate the memory and then MoveBlock to read it back on demand?


That's the way I would recommend doing it. Use DSNewPtr to allocate a block of memory and pass the pointer to the DLL. Then you control the lifetime of that pointer, so there's no chance of LabVIEW moving or releasing it on you. MoveBlock, despite the name, is essentially memcpy, and you can use it to copy data out of that pointer into a LabVIEW array. Of course there's a small performance penalty if the array is large, because you're making an extra copy of it, but it will be safer.
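In C terms, the sequence you'd build on the diagram with Call Library Function Nodes looks roughly like this. The DSNewPtr, DSDisposePtr and MoveBlock prototypes are the ones declared in LabVIEW's extcode.h (exact integer widths depend on LabVIEW version and bitness), and the instrument call in step 2 is just a placeholder, not the real Picotech function name:

```c
/* Rough sketch of the call sequence; on the diagram each of these calls
 * becomes a Call Library Function Node. */
#include <stddef.h>
#include <stdint.h>

typedef void    *UPtr;
typedef int32_t  MgErr;

/* LabVIEW memory-manager entry points, resolved by the run-time engine */
extern UPtr  DSNewPtr(size_t size);
extern MgErr DSDisposePtr(UPtr p);
extern void  MoveBlock(const void *src, void *dst, size_t numBytes);

#define BUFFER_SAMPLES 500000u
#define CHUNK_SAMPLES  60000u

void streaming_sketch(void)
{
    /* 1. Allocate a buffer that LabVIEW will never move or reclaim on its own */
    int16_t *driver_buf = (int16_t *)DSNewPtr(BUFFER_SAMPLES * sizeof(int16_t));

    /* 2. Hand the raw pointer to the instrument DLL (placeholder name):
     *    ps_set_data_buffer(handle, driver_buf, BUFFER_SAMPLES);             */

    /* 3. Each time the driver reports new data, copy it into a normal LabVIEW
     *    array; start/count would come from the driver's "data ready" call.  */
    static int16_t chunk[CHUNK_SAMPLES];
    size_t start = 0, count = CHUNK_SAMPLES;
    MoveBlock(driver_buf + start, chunk, count * sizeof(int16_t));

    /* 4. In the Close routine, release the pointer yourself */
    DSDisposePtr((UPtr)driver_buf);
}
```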

Message 6 of 11

Thanks. I'll look into that and give it a shot. At first glance, it doesn't look like it'd require much modification of the supplied SDK code: replace the native LV array allocations with DSNewPtr calls, call MoveBlock every time the new-data-ready routine fires, and then deallocate the memory as part of the Close routine at the end.

 

The arrays aren't huge; the total buffer defaults to 500,000 points and an individual chunk is maybe 60,000 or so, so making a copy of a chunk once every 6 milliseconds (max stream rate for my particular model is 10 MSamples/sec) isn't that big a deal performance-wise. 
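Just to sanity-check those numbers (assuming 2 bytes per I16 sample), a quick back-of-the-envelope calculation:

```c
/* Back-of-the-envelope check of the copy overhead (2-byte I16 samples). */
#include <stdio.h>

int main(void)
{
    const double sample_rate      = 10e6;    /* 10 MS/s max streaming rate   */
    const double chunk_samples    = 60000.0; /* points per Get Stream Values */
    const double bytes_per_sample = 2.0;     /* I16                          */

    double chunk_period_ms = chunk_samples / sample_rate * 1e3;        /* ~6 ms    */
    double copy_kb         = chunk_samples * bytes_per_sample / 1024;  /* ~117 KB  */
    double copy_mb_per_s   = copy_kb / 1024 / (chunk_period_ms / 1e3); /* ~19 MB/s */

    printf("one chunk every %.1f ms, %.0f KB per copy, ~%.0f MB/s sustained\n",
           chunk_period_ms, copy_kb, copy_mb_per_s);
    return 0;
}
```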

Message 7 of 11

@nathand wrote:
Of course there's a small performance penalty if the array is large, because you're making an extra copy of it, but it will be safer.

Actually, that performance penalty is pretty much non-existent. The Array Subset does the same thing! :-)

 

One disadvantage of using DSNewPtr() is that this pointer is now managed by you instead of by LabVIEW. That means you have to deallocate it at some point to avoid memory leaks. And while you might create a (class) library that manages that properly, you still depend on the user of your library to actually call the cleanup/close (or whatever) function you provide for this purpose.
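As a rough sketch of the sort of open/close pairing I mean (the wrapper names here are made up for illustration; only the DSNewPtr and DSDisposePtr prototypes come from extcode.h):

```c
#include <stddef.h>
#include <stdint.h>

typedef void    *UPtr;
typedef int32_t  MgErr;

extern UPtr  DSNewPtr(size_t size);
extern MgErr DSDisposePtr(UPtr p);

typedef struct {
    int16_t *buf;       /* pointer handed to the instrument DLL */
    size_t   samples;
} stream_session;

/* "Open": allocate the buffer once, before streaming starts */
int stream_session_open(stream_session *s, size_t samples)
{
    s->buf = (int16_t *)DSNewPtr(samples * sizeof(int16_t));
    s->samples = samples;
    return s->buf != NULL ? 0 : -1;
}

/* "Close": the one place the pointer is released; if the caller never
 * gets here, the block leaks until LabVIEW itself shuts down. */
void stream_session_close(stream_session *s)
{
    if (s->buf != NULL) {
        DSDisposePtr((UPtr)s->buf);
        s->buf = NULL;
    }
}
```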

 

Still, I consider this a smaller drawback than relying on some specific LabVIEW behavior that may change at some point as the internal optimizer gets more and more advanced at squeezing the last femtoseconds out of a CPU :-).

Rolf Kalbermatter
My Blog
Message 8 of 11

Thanks. You answered one of my nuts-and-bolts questions (about memory leaks). I assume that if I, or someone using my code, screw up and forget to deallocate the memory block, it's lost until LabVIEW quits, at which point the OS memory manager reclaims it?

 

The other question is about cross-platform portability. Ideally I'd like this code to be usable both on Windows and on the Mac (and possibly Linux at some future point), but in my copy of LV Mac (2014) I didn't see either MoveBlock or DSNewPtr in the list of functions provided by LabVIEW. Are they just hiding in some other DLL which has to be explicitly linked in, or is that functionality not present in the Mac version? The online help suggests it should be, but the functions didn't seem to be there.

Message 9 of 11

@dmsilev wrote:

Thanks. You answered one of my nuts-and-bolts questions (about memory leaks). I assume that if I, or someone using my code, screw up and forget to deallocate the memory block, it's lost until LabVIEW quits, at which point the OS memory manager reclaims it?

Correct!

 

The other question is about cross-platform portability. Ideally I'd like this code to be usable both on Windows and on the Mac (and possibly Linux at some future point), but in my copy of LV Mac (2014) I didn't see either MoveBlock or DSNewPtr in the list of functions provided by LabVIEW. Are they just hiding in some other DLL which has to be explicitly linked in, or is that functionality not present in the Mac version? The online help suggests it should be, but the functions didn't seem to be there.


They are all there! Use "LabVIEW" (without the quotes) as the library name, and LabVIEW will resolve the functions to wherever they are implemented in its development or run-time system. With this name you won't be able to pick the function from the drop-down selection list and will have to type its name in explicitly, but that is a minor inconvenience.
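For reference, the prototypes you would type in look roughly like this (taken from extcode.h; double-check the integer types against the headers shipped with your LabVIEW version and bitness):

```c
/* Prototypes to enter by hand in the Call Library Function Node, with the
 * library name set to "LabVIEW". On the diagram the pointers are passed
 * as pointer-sized integers. */
#include <stddef.h>
#include <stdint.h>

typedef void    *UPtr;   /* pointer-sized integer on the diagram */
typedef int32_t  MgErr;

UPtr  DSNewPtr(size_t size);                             /* allocate a block  */
void  MoveBlock(const void *ps, void *pd, size_t size);  /* memcpy equivalent */
MgErr DSDisposePtr(UPtr p);                              /* release the block */
```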

 

The real challenge will be getting your shared library to work the same way on the different platforms. You might like to have a look at these blog posts that I resurrected from the depths of the net recently.

Rolf Kalbermatter
My Blog
Message 10 of 11