01-04-2009 10:48 AM
I do not have Measurement Studio, but am trying to interface with S-Series NI-DAQ hardware using the NIDAQmx.h interface.
My problem boils down to the following in a C++ console application project (Visual Studio 2005):
#include <NIDAQmx.h>

int main() {
    float64 data[128*128*6]; // 98304 64-bit numbers, or 786432 bytes (less than a megabyte)
}
This compiles and runs just fine.
If I increase the allocated size of the data array past 129491 float64s, the program still compiles fine, but when I run the console program I get an unhandled Windows exception. That count works out to just shy of 2^20 bytes (1 MB), though it doesn't land exactly on that number or anything.
I'm going to continue to look around for a solution, but if anyone can help, that'd be grand.
Thanks!
01-06-2009 10:48 AM
Hey Gus,
Is there a reason we're using float64 rather than float or double? Since we haven't called into any National Instruments hardware at this point, the issue might be with where float64 is defined (it's not a standard primitive in Visual Studio).
Clarification: addressing is done at the bit level, not the byte level.
2^22 = 4,194,304
2^23 = 8,388,608
float64 data[98304] is 6,291,456 bits (786,432 bytes)
float64 data[129491] is 8,287,424 bits (1,035,928 bytes), ~0.988 MB
01-06-2009 01:21 PM
Memory is addressed at a per-byte interval. That's why you can only have 4 gigaBYTES of memory on a 32-bit processor (2^32 bytes).
Moving from one CHAR to the next means incrementing your char pointer by one, which raises the address by 1 byte (the pointer itself is a 32-bit number here). Stepping through 64-bit numbers raises the pointer value by 8 each time (i.e. 8 bytes).
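A quick sketch of that pointer arithmetic in plain C++ (nothing NI-specific, just standard types):

#include <cstdio>

int main() {
    char   c[2];
    double d[2]; // NIDAQmx.h's float64 is a 64-bit double

    // Incrementing a char* advances the address by 1 byte;
    // incrementing a double* advances it by sizeof(double) == 8 bytes.
    printf("char step:   %d byte(s)\n", (int)((char*)&c[1] - (char*)&c[0]));
    printf("double step: %d byte(s)\n", (int)((char*)&d[1] - (char*)&d[0]));
}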
When I mouse over "float64" in Visual Studio, I get its definition from the NIDAQmx.h include: float64 is just a typedef for double.
I believe my problem has something to do with the page file size in Windows. I've worked around the issue to some extent by switching to 16-bit numbers and reading fewer of them at a time for processing, which makes the system faster overall anyway.
But eventually I'd like to deal with images and such, and a size limit on this scale could prevent buffering several images in memory.
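Roughly what that 16-bit approach looks like, as a sketch (the task is assumed to be created and configured elsewhere, the chunk size is arbitrary, and I'm ignoring error checking):

#include <NIDAQmx.h>
#include <vector>

void readChunk(TaskHandle task) {
    const int32 samplesPerChannel = 4096;          // read in small chunks
    std::vector<uInt16> buffer(samplesPerChannel); // heap-backed, not on the stack
    int32 samplesRead = 0;
    // Unsigned 16-bit read: plenty of headroom for a 12-bit digitizer
    DAQmxReadBinaryU16(task, samplesPerChannel, 10.0 /* timeout, s */,
                       DAQmx_Val_GroupByChannel,
                       &buffer[0], (uInt32)buffer.size(),
                       &samplesRead, NULL);
}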
01-07-2009 03:51 PM
Hey Gus,
You're absolutely right! Pardon my incorrect word choice. What I was trying to say is that the 64-bit width times the size of your array yields a count in bits, which we can then divide down to bytes; that's where my calculations came from. Were you not able to get 32-bit numbers to work, or did you drop down to 16 bits for a specific reason?
01-07-2009 06:30 PM
I just moved on from the issue. I went down to 16 bits because the NIDAQ has only 12-bit resolution; I don't need more precision than that, and I don't need signed numbers.
Thanks
01-08-2009 04:01 PM
If you run into more problems with this issue in the future we can look into it a bit more. I'm curious to see if we can use float32 (float) with this setup.
01-22-2009 06:50 PM
Hi Gus,
The default stack size for applications compiled with Visual C++ is 1 MB. Note that the OS and C runtime use some stack space before your main() function is ever called, so you have slightly less than a megabyte to work with. You can increase the stack size using the Visual C++ /F option, but a much better approach is to allocate the array on the heap instead (using malloc()/free() or new[]/delete[]).
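For example, a minimal heap-based version of the original snippet (same array size, just allocated with new[] instead of on the stack):

#include <NIDAQmx.h>

int main() {
    // The heap is not subject to the 1 MB default stack limit.
    float64 *data = new float64[128*128*6]; // 786432 bytes on the heap
    // ... read into and process the buffer ...
    delete[] data;
}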
Brad