01-11-2023 09:46 PM
There is a VI called Write to System Log.vi under Programming»Dialog & User Interface that writes to the system log programmatically, but how can we read that information back in LabVIEW?
Could you give me some advice? Thanks.
Solved! Go to Solution.
01-12-2023 04:16 AM
You may take a look at https://www.ni.com/docs/en-US/bundle/labview/page/glang/write_to_system_log.html
You can view the messages this VI writes using the system log viewer for your operating system.
(Windows) Open the Application page of the Windows Event Viewer.
01-12-2023 04:53 AM - edited 01-12-2023 05:00 AM
Thanks for the information.
I just intend to read the string and time programmatically.
I'm trying to work with C:\Windows\System32\advapi32.dll, but I'm not sure if that is possible.
I have some difficulty translating the data types to LabVIEW data types,
such as the data below:
An application-allocated buffer that will receive one or more EVENTLOGRECORD structures. This parameter cannot be NULL, even if the nNumberOfBytesToRead parameter is zero.
A pointer to a variable that receives the number of bytes read by the function.
ReadEventLogA function (winbase.h) - Win32 apps | Microsoft Learn
and I don't know how to pass in the parameter below,
which is described in Python as flags = win32evtlog.EVENTLOG_SEQUENTIAL_READ | win32evtlog.EVENTLOG_BACKWARDS_READ
Use the following flag values to indicate how to read the log file. This parameter must include one of the following values (the flags are mutually exclusive):
- EVENTLOG_SEEK_READ: Begin reading from the record specified in the dwRecordOffset parameter. This option may not work with large log files if the function cannot determine the log file's size. For details, see Knowledge Base article 177199.
- EVENTLOG_SEQUENTIAL_READ: Read the records sequentially. If this is the first read operation, the EVENTLOG_FORWARDS_READ or EVENTLOG_BACKWARDS_READ flag determines which record is read first.

You must also specify one of the following flags to indicate the direction for successive read operations (the flags are mutually exclusive).
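In case it helps, here are the numeric values behind those flags (taken from the Win32 winnt.h header) and how they combine with a bitwise OR, mirroring the win32evtlog snippet above; a minimal Python sketch:

```python
# Read-flag constants from winnt.h (values as documented by Microsoft).
EVENTLOG_SEQUENTIAL_READ = 0x0001  # read records in sequence
EVENTLOG_SEEK_READ       = 0x0002  # start at dwRecordOffset
EVENTLOG_FORWARDS_READ   = 0x0004  # oldest record first
EVENTLOG_BACKWARDS_READ  = 0x0008  # newest record first

# Pick one flag from each mutually exclusive pair and OR them together;
# the result is the single uint32 passed as dwReadFlags.
flags = EVENTLOG_SEQUENTIAL_READ | EVENTLOG_BACKWARDS_READ
print(hex(flags))  # 0x9
```

In LabVIEW the same thing is a numeric OR of two U32 constants wired to the dwReadFlags input.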
01-12-2023 11:15 AM
Create a cluster that corresponds to the EVENTLOGRECORD struct, where a DWORD is a uint32 and a WORD a uint16. Then initialize a u8 array with a large number of elements (start with, say, 1024) and pass that as the lpBuffer
parameter. Type is Adapt to Type and Data Format is Array Data Pointer.
nNumberOfBytesToRead is the size of the array in bytes.
The flags are uint32; you can combine flags with the OR operator.
When the function returns, the last parameter (which you pass in by pointer) gives the number of bytes needed to read at least one event record. If your array was big enough: great. Otherwise, increase your array size to pnMinNumberOfBytesNeeded, maybe a bit more, and call the function again. You can then use Unflatten From String (little-endian) to convert the first part of the byte array into an EVENTLOGRECORD. The strings and additional data are in the remaining part of the array.
There is a nice example Querying for Event Information, if you want to go further.
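The "unflatten the first part" step can be prototyped outside LabVIEW. Here is a sketch in Python using the struct module (field names and order taken from Microsoft's EVENTLOGRECORD documentation; the demo buffer is synthetic, not a real log record):

```python
import struct

# Fixed-size part of EVENTLOGRECORD: six DWORDs, four WORDs, six DWORDs,
# little-endian; no padding is needed because everything is naturally aligned.
EVENTLOGRECORD_FMT = "<6I4H6I"   # 56 bytes
FIELDS = ("Length", "Reserved", "RecordNumber", "TimeGenerated", "TimeWritten",
          "EventID", "EventType", "NumStrings", "EventCategory", "ReservedFlags",
          "ClosingRecordNumber", "StringOffset", "UserSidLength", "UserSidOffset",
          "DataLength", "DataOffset")

def parse_header(buf):
    """Unflatten the first 56 bytes of the buffer into a record header; the
    strings, SID and event data sit after this point in the same buffer."""
    size = struct.calcsize(EVENTLOGRECORD_FMT)
    return dict(zip(FIELDS, struct.unpack(EVENTLOGRECORD_FMT, buf[:size])))

# Synthetic demo buffer: a header whose 16 fields are just 0..15.
demo = struct.pack(EVENTLOGRECORD_FMT, *range(16))
print(parse_header(demo)["TimeGenerated"])  # 3
```

In LabVIEW, the equivalent is Unflatten From String with little-endian byte order and your 16-element cluster as the type input, fed with the first 56 bytes of lpBuffer.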
01-12-2023 09:17 PM
Thank you so much for the instruction.
But I still could not fully understand your description, though it may already be very clear.
1. Initialize a u8 array: what should I do with the cluster? I have attached the VI. I don't know whether the element of the array should be the cluster or uint8.
2. "You can then use Unflatten From String (little-endian) to convert the first part of the byte array into an EVENTLOGRECORD. The strings and additional data are in the remaining part of the array."
Does the byte array mean lpBuffer, and what does "first part" mean? Could you help me with an example if possible?
I have read the example before, but C++ is hard for me to understand too; I don't have experience with script programming. As for the Python above, it is from a friend. I know the OR operator there does the equivalent thing; what I meant was: how can we know from the Microsoft documentation that we should use OR?
01-13-2023 03:15 AM
That looks quite good already. The array element is u8. You can often use a cluster as input, but not in this case because what follows after the cluster depends on the event record and you cannot map that directly to LabVIEW data types (See the Remarks section in the EVENTLOGRECORD documentation).
A couple of things:
The fields within the cluster (StringOffset, UserSidOffset, DataOffset) give you the starting index of the strings, the user SID, and the event-specific data. See if you can extract what you need.
You cannot really tell from this documentation that the flags should be OR'ed, but it is shown in the Querying for Event Information example.
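As an illustration of using StringOffset this way, here is a Python sketch (the buffer is synthetic; with ReadEventLogA the strings are NUL-terminated ANSI byte strings, and NumStrings and StringOffset come from the parsed EVENTLOGRECORD header):

```python
def extract_strings(record, num_strings, string_offset):
    """Pull num_strings NUL-terminated byte strings out of an event record,
    starting at string_offset (the StringOffset field of EVENTLOGRECORD)."""
    out, pos = [], string_offset
    for _ in range(num_strings):
        end = record.index(b"\x00", pos)  # find the terminating NUL
        out.append(record[pos:end])
        pos = end + 1                     # next string starts after the NUL
    return out

# Synthetic record: a 56-byte header placeholder followed by two strings.
record = b"\x00" * 56 + b"hello\x00world\x00"
print(extract_strings(record, 2, 56))  # [b'hello', b'world']
```

In LabVIEW the same idea is Array Subset (or String Subset after a byte-array-to-string conversion) starting at StringOffset, then splitting on NUL characters.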
01-13-2023 07:31 PM - edited 01-13-2023 07:56 PM
Thanks for the additional information.
I have been able to retrieve the generated time already, but the data string seems to come out as garbled characters. I don't know if it is because of Chinese strings in the data.
Have a nice weekend!
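A plausible explanation for the garbled characters (an assumption, since the thread doesn't confirm it): ReadEventLogA returns strings in the system ANSI codepage, which on a Simplified-Chinese Windows system is cp936/GBK, and decoding those bytes with the wrong codepage produces mojibake. A minimal sketch:

```python
# Assumption: the raw string bytes came back in the system ANSI codepage
# (cp936/GBK on a Simplified-Chinese Windows install).
raw = "事件查看器".encode("gbk")   # bytes as ReadEventLogA might return them
garbled = raw.decode("latin-1")    # wrong codepage: mojibake
correct = raw.decode("gbk")        # right codepage: '事件查看器'
print(garbled)
print(correct)
```

Calling the wide variant ReadEventLogW instead and decoding the strings as UTF-16LE sidesteps the codepage question entirely.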
01-14-2023 07:13 AM - edited 01-14-2023 07:15 AM
Thanks for the addtional information.
- You do not need to specify an absolute path for system libraries, just advapi32.dll in the path field is enough.
- It seems LabVIEW automatically updates the path field already, but thanks for telling me about this.
It indeed often does. But you should still make sure to initially enter just the library name without any path. LabVIEW remembers what you entered, and the Application Builder uses this to determine whether a DLL dependency is private to the application or not. Full library path names are considered application-private and are copied into the support folder for the application. File-name-only library paths are considered system libraries and are not copied into the application directory. You ABSOLUTELY do NOT want a private copy of a Windows DLL in your own application directory.
Aside from legal issues (you have no license to distribute these files), there is a technical one too, and it is much more important. Your DLL version might not be the same as what your end user's system has, and that can cause nasty problems. Some of those Windows DLLs do all kinds of resource tracking and other things to make them work properly, and a resource allocated in the system version of the DLL is then simply garbage to your private copy.
- The HANDLE should be an unsigned pointer sized integer.
- I have updated the handle data type according to your advice, but may I know what kind of issue would happen if I didn't? I don't quite understand the difference between a pointer and a normal integer.
It is mainly an issue if you ever want to move to LabVIEW for Windows 64-bit. There, pointer-sized variables are 64 bits long. If you configure them as 32-bit variables, everything works fine under 32-bit LabVIEW, but things go pretty badly when you try to run the same code under 64-bit LabVIEW.
LabVIEW itself doesn't have a pointer-sized variable type; its types are always fixed-size at compile time. But C does, and in order to be able to interface with DLLs, LabVIEW supports this type in the Call Library Node. On the LabVIEW diagram you always treat these elements as 64-bit integers, but when you pass such a wire to a parameter that is configured as pointer-sized, LabVIEW passes the relevant part of that 64-bit value to the underlying DLL function, depending on its actual bitness.
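The same pointer-vs-fixed-integer distinction can be observed from any language that exposes native type sizes; for example with Python's ctypes (not LabVIEW, just to illustrate that pointer width follows the process bitness while a fixed int32 does not):

```python
import ctypes

# A pointer-sized integer is 4 bytes in a 32-bit process and 8 bytes in a
# 64-bit process; a plain 32-bit integer is always 4 bytes. Configuring a
# HANDLE as a fixed 32-bit value therefore truncates it in a 64-bit process.
print("pointer size:", ctypes.sizeof(ctypes.c_void_p))   # 8 on 64-bit
print("int32 size:  ", ctypes.sizeof(ctypes.c_int32))    # always 4
```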
01-27-2023 06:49 PM
It really helps.
Do you know whether there is any official document from NI that tells us how to map C/C++ data types to LabVIEW data types?
01-28-2023 01:36 PM - edited 01-28-2023 01:52 PM
Well, there are examples: <LabVIEW>\examples\Connectivity\Libraries and Executables\DLL Calling VIs
ZYOng's link is also a good reference.
But an official and exhaustive document is pretty much impossible. If you talk about the pure C native data types, things are fairly simple: void, char, short, int, long, long long, float and double are basically all there is. Trivial? Not really! What an int really means depends on the compiler, the CPU it creates code for, the sleep pattern and the availability of enough coffee for the compiler designer, and a few other arbitrary items that could suddenly result in something different. Even a long is not always the same: under Windows it always means a 32-bit integer, but under Linux it means 32 bits in the 32-bit compilation and 64 bits in the 64-bit compilation. Each OS API then defines its own data types derived from these C basic types, sometimes with special attributes. The C99 standard also defines a whole set of new data types with explicit sizes that try to avoid the compiler-specific trouble, but while all recent compilers support the stdint.h header, this wasn't really guaranteed even a few years ago.
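The platform dependence described above is easy to observe with Python's ctypes, which mirrors the host C compiler's native type sizes (a quick sanity check, not an exhaustive mapping):

```python
import ctypes

# Sizes (in bytes) of the native C types as the host compiler defines them.
# c_long is the interesting one: 4 bytes on 64-bit Windows (LLP64) but
# 8 bytes on 64-bit Linux/macOS (LP64), exactly the difference noted above.
for name in ("c_char", "c_short", "c_int", "c_long", "c_longlong"):
    print(name, ctypes.sizeof(getattr(ctypes, name)))
```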
And to make things just that tiny little bit more challenging many library developers go to another extra length to define their own "more standardized" basic types.
Add to that enums: some compilers have a preferred integer type for them, which can be int but can be something else too, and some compilers like to use the smallest possible integer type that can still represent the largest enum value defined.
You would like structs? Well, there is something called alignment, which defines on what boundary a single element inside a struct is aligned. The default alignment nowadays is 8 bytes, but the programmer can change that, either with pragma statements anywhere in the code or globally in the compiler settings.
An Intel x86 CPU doesn't really care if you try to access a 32-bit integer at an address that is not a multiple of 4 bytes; it has a few million transistors in its micrologic that do nothing but allow such unaligned accesses without a tremendous performance penalty. RISC CPUs used to be very different. If you try such an unaligned access, then depending on the compiler the system either goes into a core dump, or some very involved software-emulated routines are invoked that read the parts of that numeric in separate accesses, do multiple bit shifts, mask with bitmasks, and combine everything into the final value through an OR operation. Such an unaligned access then costs 10 to 50 times as much time as an aligned one. This is why alignment exists in C: to allow such unaligned accesses to be avoided.
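The effect of alignment on a struct's layout can be demonstrated with Python's ctypes, whose `_pack_` attribute plays the role of `#pragma pack` (a small sketch, not related to the event-log struct specifically):

```python
import ctypes

class Default(ctypes.Structure):
    # Natural alignment: 3 bytes of padding are inserted before 'b'
    # so the uint32 starts on a 4-byte boundary.
    _fields_ = [("a", ctypes.c_uint8), ("b", ctypes.c_uint32)]

class Packed(ctypes.Structure):
    _pack_ = 1  # equivalent of #pragma pack(1): no padding at all
    _fields_ = [("a", ctypes.c_uint8), ("b", ctypes.c_uint32)]

print(ctypes.sizeof(Default), ctypes.sizeof(Packed))  # 8 5
```

This padding is also why a LabVIEW cluster only matches a C struct byte-for-byte when every element happens to land on its natural boundary, as in EVENTLOGRECORD.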
C was designed to be a language very close to the silicon. It is meant to define only the absolute bare minimum and leave a lot to the compiler designer, so that the compiler can create the most effective code for the target CPU in question. C doesn't define the size of a char; it only says that it needs to be big enough to contain a character type. This could be (and has been in the past) 7 bits, but it could also be 9 or 16 bits. An int is defined to be the preferred integer data type for the hardware in question. This used to be 16 bits and is nowadays usually 32 bits, but for some reason it wasn't changed when going to 64-bit. In many ways C does not define what something exactly means, but rather what it should mean at least, and sometimes also at most. A lot of things in the C language are also explicitly "defined" to be undefined: the standard doesn't specify what a compiler should do in such cases, it just says that it is very unwise to depend on any particular behavior, because another compiler can do something totally different.