
Config to string memory leak?


@Mads wrote:

Rolf - the only DLL in the application (or .out file in this case) I'm using that is not part of the LabVIEW core is the lzlib.out file you (if I'm not mistaken) made for OpenG.

So I'm sure that's a solid piece of code. :)


As much as I would like to claim that this is the case, this shared library has not been stress tested very much. I can't vouch for it not doing something bad. The Windows shared library is a lot better tested and also easier to exercise, but on the cRIO everything goes a bit blind, unless you have access to the VxWorks Workbench software, which is way beyond what I can afford to spend on an Open Source tool.

Rolf Kalbermatter
My Blog
Message 11 of 20


Perhaps you are onto something Rolf...

The problem has evaded eradication yet again. Having replaced the code that seemed to cause the issue (i.e. if it was removed, things ran fine) with code based on binary files instead of config files, I had high hopes... but the only effect is that the problem seems to pop up quicker than before(!) Removing the new code seemingly fixes the issue again, so I'll have to see if there are any common parts remaining in there that could cause it. The link to this part of the code seems firm enough to suggest it is not a corruption issue emanating from elsewhere... but who knows.

Murphy's Universe has a habit of making random or combinational errors seem systematic and single-sourced - just to fool you during debugging. :(

Message 13 of 20

I kind of gave up yesterday and converted the project back to LabVIEW 2012. Now I don't get any thread consumption warnings anymore... but it's too early to tell whether that's because the problem is in 2013, or whether 2013 is just better at detecting the thread consumption. :|

Message 14 of 20

I'm having the same problem with a cRIO-9022, a cRIO-9114 chassis, and cDAQ-9205 and cDAQ-9264 modules.

 

The LabVIEW 2016f5 application has been running on the cRIO for nearly two years now, but a few weeks ago the cRIO crashed, so I re-formatted the internal storage and re-installed everything. Soon after, it would stop the measurements all of a sudden, until the programmed regular restart the next morning at 03:05 AM UTC. Here are the lines from the lvrt.out_16.0__cur.txt log file:

 

<DEBUG_OUTPUT>
06/10/19 07:42:16.478 PM
DWarnInternal 0x62E90A23: Memory error 2 in DSSetHandleSize
source/MemoryManager.cpp(175) : DWarnInternal 0x62E90A23: Memory error 2 in DSSetHandleSize

 

This one repeats several times, and then:

 

<DEBUG_OUTPUT>
06/10/19 07:42:30.000 PM
DWarn 0xC1EAEA9C: Internal error 2 occurred. The top-level VI "[RT] Main.vi" was stopped at unknown "" on the block diagram of "Measure and control sequence 2.vi".
source/server/RTEmbEditor.cpp(107) : DWarn 0xC1EAEA9C: Internal error 2 occurred. The top-level VI "[RT] Main.vi" was stopped at unknown "" on the block diagram of "Measure and control sequence 2.vi".

 

We have several other cRIOs of the same type and configuration, one of them running exactly the same LabVIEW project for a few years now, but not showing the same problems.

Message 15 of 20

I also get the error:

 

####
#Date: Wed, Feb 12, 2020 05:12:24 AM
#OSName: Linux
#OSVers: 4.14.87-rt49-cg-7.1.0f0-xilinx-zynq-41
#OSBuild: 265815
#AppName: lvrt
#Version: 19.0
#AppKind: AppLib
#AppModDate: 


InitExecSystem() call to GetCurrProcessNumProcessors() reports: 2 processors
InitExecSystem() call to GetNumProcessors()            reports: 2 processors
InitExecSystem()                                      will use: 2 processors
starting LV_ESys1248001a_Thr0 , capacity: 24 at [3664329151.82972479, (05:12:31.829725000 2020:02:12)]
starting LV_ESys2_Thr0 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr1 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr2 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr3 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr4 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr5 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr6 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
starting LV_ESys2_Thr7 , capacity: 24 at [3664329152.53595400, (05:12:32.535954000 2020:02:12)]
Thread consumption suspected: 1 Try starting 1 threads
starting LV_ESys2_Thr8 , capacity: 24 at [3664330995.25741291, (05:43:15.257413000 2020:02:12)]
Thread consumption suspected: 4 Try starting 2 threads
starting LV_ESys2_Thr9 , capacity: 24 at [3664333393.08913612, (06:23:13.089136000 2020:02:12)]
starting LV_ESys2_Thr10 , capacity: 24 at [3664333393.08913612, (06:23:13.089136000 2020:02:12)]

<DEBUG_OUTPUT>
02/12/20 06:35:14.050 AM
DWarn 0xC1EAEA9C: Internal error 2 occurred. The top-level VI "main_RT_general.vi" was stopped at unknown "" on the block diagram of "SDF_Data_Loop.vi".
/builds/labview/2019/source/server/RTEmbEditor.cpp(108) : DWarn 0xC1EAEA9C: Internal error 2 occurred. The top-level VI "main_RT_general.vi" was stopped at unknown "" on the block diagram of "SDF_Data_Loop.vi".


</DEBUG_OUTPUT>
0xB644C990 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 15A990
0xB6911160 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 61F160
0xB691171C - /usr/local/natinst/labview/./liblvrt.so.19.0 + 61F71C
0xB6595CF8 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 2A3CF8
0xB67E36DC - /usr/local/natinst/labview/./liblvrt.so.19.0 + 4F16DC
0xB6471624 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 17F624
0xB68ECC30 - WSendEvent + 204
0xB68ECCD8 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 5FACD8
0xB68C133C - /usr/local/natinst/labview/./liblvrt.so.19.0 + 5CF33C
0xB68BF538 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 5CD538
0xB63F3E78 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 101E78
0xB63F40D8 - /usr/local/natinst/labview/./liblvrt.so.19.0 + 1020D8
0x00008AB0 - ./lvrt + AB0
0xB6BEA580 - __libc_start_main + 114

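Not from the original posts: when triaging a log like the one above, it can help to summarize how many execution-system threads were started and which warning lines appeared (a steadily growing thread count pointing at the "Thread consumption suspected" messages). A minimal sketch in Python, assuming only the lvrt log format shown above; the sample lines are abbreviated copies from that log:

```python
import re

def summarize_lvrt_log(lines):
    """Collect execution-system thread starts and warning lines from an
    lvrt debug log (format as in the excerpt above)."""
    thread_starts = []
    warnings = []
    for line in lines:
        # Thread-start lines look like: "starting LV_ESys2_Thr8 , capacity: 24 at [...]"
        m = re.match(r"starting (LV_ESys\S+)", line)
        if m:
            thread_starts.append(m.group(1))
        elif "DWarn" in line or "Thread consumption suspected" in line:
            warnings.append(line.strip())
    return thread_starts, warnings

# Example on a few (abbreviated) lines from the log above:
sample = [
    "starting LV_ESys2_Thr7 , capacity: 24 at [...]",
    "Thread consumption suspected: 1 Try starting 1 threads",
    "starting LV_ESys2_Thr8 , capacity: 24 at [...]",
    "DWarn 0xC1EAEA9C: Internal error 2 occurred.",
]
threads, warns = summarize_lvrt_log(sample)
print(len(threads), len(warns))  # prints: 2 2
```

Feeding it the whole log makes it easy to see whether the thread count keeps creeping up over hours, as it does in the excerpt (Thr8 at 05:43, Thr9/Thr10 at 06:23).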
Have you found any solution?

Message 16 of 20

Eventually, we had to send the cRIO-9022 to NI Support for repair. It was out of warranty.

Message 17 of 20

So using another cRIO of the same model worked OK?

Message 18 of 20

Yes, as long as your RT application is OK.

Ours is, and we simply deployed it on another cRIO-9022 with the same settings and NI software set.

So I haven't tried the other cRIO repaired by NI; our technicians put it in stock. But it was definitely a hardware (cRIO-related) issue.

Message 19 of 20

My problem was obviously a software bug. At first, the cRIO worked for 1-3 hours and then threw error 2 in the LabVIEW log in NI MAX.

 

The problematic VI is a for loop with parallelism enabled; inside it was a while loop which ran "permanently". In this loop there was a disable structure, and INSIDE THE ENABLED CASE there was one subVI (with shared reentrancy).

 

I focused on the disable structure because its behaviour was suspicious when run through the LabVIEW project. Parallel for loops are not debuggable by nature, so when you turn Highlight Execution on, you cannot see anything happening in a parallel for loop. But I could see the dataflow in the wires and the green arrow on the subVI inside the disable structure, which caught my attention as it should not be possible. I could even place a probe on the wires inside; it should show "No Debug Info" when inside a parallel loop, but instead it showed "Not Executed". There is some weird interaction between the parallel loop and the disable structure.

 

Just removing the structure, leaving the rest of the code unchanged, did it for me.

 

Windows 10, LabVIEW 2019 32-bit, drivers 17.0.0f0.

Message 20 of 20