When running a DataPlugin script on a large data file, there does not seem to be a way to indicate status to the user. You can use the dbm() call, but the need for an external viewer tool and its format limitations are an issue.
Is there an interface to the status bar in DIAdem? When running a VBSD script and importing data, the indicator seems to just rotate while importing, showing activity (I think) but not progress. Is there any way to take over the progress bar or status text at the bottom of DIAdem, or to write messages to the logfile tab within DIAdem?
I am thinking about writing a little status server, or using Windows notifications, to send status messages that display VBSD execution status (lines processed, elapsed time, etc.). I'm just curious what others are doing to provide feedback to users while running DataPlugin scripts.
I am not sure if this is exactly what you are looking for, but here are a couple of good places to start.
First, there is a way to display a message in the status bar that you could use to show when the script is complete:
Displaying a Message in the Status Bar: http://zone.ni.com/reference/en-XX/help/370858N-01/procauto/procauto/procauto_msglinedisp/
Next this forum post has a lot of good information about how to interface with the progress bar. Especially the post from Brad:
Display progress bar while using Calculate function: https://forums.ni.com/t5/DIAdem/Display-progress-bar-while-using-Calculate-function/m-p/2720027
Finally, this document has some information about programmatically controlling the progress bar, which will probably at least be a good place to start:
Programmatically controlling the progress bar in DIAdem: http://digital.ni.com/public.nsf/allkb/570B7DC1B5C09ED786256C470054714B?OpenDocument
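As a minimal sketch of the first option, run from the DIAdem script host (the file path and plugin name below are placeholders, not from your setup):

```vb
' Sketch: load a file with a DataPlugin, then post a completion message
' to the DIAdem status bar via MsgLineDisp. This runs in the DIAdem
' VBScript host, not inside the DataPlugin itself.
Call DataFileLoad("C:\Data\BigFile.dat", "MyPlugin")   ' hypothetical path/plugin
Call MsgLineDisp("Import of BigFile.dat complete")     ' shown in the status bar
```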
Hope this helps!
The LoopInit function is not available in the VBSD scope, nor is MsgBox, etc. I think the solutions here are all for the VBS namespace, or am I mistaken? Perhaps I can import that functionality in some way? Plugins by their nature are headless parsers, but is the parsing progress available while the VBSD script is running?
MsgLineDisp: not red (not recognized).
ActiveX: probably not available, since MsgBox is not.
LoopInit: not red.
dbm: red, and functional during plugin VBSD execution.
Just to give a concrete example, this is what I am dumping to the debug pipe:
 NI: (THB) Importing 129000 row. [ 29% ]
 NI: (THB) Importing 293000 row. [ 65% ]
 NI: (THB) Importing 319000 row. [ 71% ]
 NI: (THB) Importing 349000 row. [ 77% ]
 NI: (THB) Importing 430000 row. [ 95% ]
 NI: (THB) Importing 450000 row. [ 100% ]
This executes in 2-4 minutes.
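For reference, messages like the above can be produced with a throttled dbm() call in the DataPlugin VBScript host. A hedged sketch (the row total and the parsing step are placeholders for the real implementation):

```vb
' Sketch: emit throttled progress messages to the debug pipe (viewable
' in DebugView) from inside a VBScript DataPlugin, where dbm() works.
Const TOTAL_ROWS = 450000          ' assumed/estimated row count for this file
Dim sLine, iRow
iRow = 0
Do While File.Position < File.Size
  sLine = File.GetNextLine         ' ...parse one row here...
  iRow = iRow + 1
  If iRow Mod 50000 = 0 Then
    Call dbm("NI: (THB) Importing " & iRow & " row. [ " & _
             CLng(100 * iRow / TOTAL_ROWS) & "% ]")
  End If
Loop
```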
You are correct that a VBScript DataPlugin runs in the DataPlugin VBScript host that is entirely separate from the DIAdem VBScript host or the SUDialog VBScript host. The DataPlugin VBScript host, which ships in the common USI software layer shared by DIAdem, LabVIEW and the DataFinder, has no access to DIAdem functions. In the first version of VBScript DataPlugins (DIAdem 9.1), the native VBScript MsgBox command was still available. It wouldn't suit your purposes here anyway because it's modal, but it was removed in the next version (DIAdem 10.0) when the DataFinder was first released. It made no sense to allow DataPlugins to pop up modal dialogs when files were being indexed headlessly in a single process by the DataFinder.
So that leaves you with no good options for notifying DIAdem manually, with file I/O or DOS shell commands or whatnot. I am curious why your DataPlugin is loading so slowly, though. If you are able to use DirectAccessChannels in the DataPlugin, DIAdem automatically shows a progress bar update for each channel loaded, which often helps. There's an outside chance we could lessen the severity of the problem by expediting the loading process.
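For illustration, a minimal sketch of the DirectAccessChannel approach inside a DataPlugin (group, channel names, and data types are placeholder assumptions, not taken from the actual file):

```vb
' Sketch: declare whole columns of a delimited text file as
' DirectAccessChannels so DIAdem streams them directly and shows a
' progress bar per channel as each one loads into the Data Portal.
Dim oBlock : Set oBlock = File.GetStringBlock()         ' delimited text block
Dim oGrp   : Set oGrp   = Root.ChannelGroups.Add("RawColumns")
Dim oCol1  : Set oCol1  = oBlock.Channels.Add("Socket", eI32)
Dim oCol2  : Set oCol2  = oBlock.Channels.Add("Data1",  eR64)
oGrp.Channels.AddDirectAccessChannel(oCol1)
oGrp.Channels.AddDirectAccessChannel(oCol2)
```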
DIAdem Product Support Engineer
Hello Brad, thanks for the follow up...
Somewhat generically, the data format is:
Socket, Board, Data1, Data2, Data3, ..., DataZ
with the socket count being, say, 200, the board count 10, and Z around 30, though it is arbitrary.
Currently I am looping through each row and generating a channel group for each Socket/Board pair, creatively named TestName #socket-#board, and loading channels D1 through DZ into each channel group.
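The loop described above looks roughly like this. This is only a sketch under assumptions: it pretends rows for the same socket/board arrive contiguously (a real version needs a group lookup instead of the "last group" shortcut), and the channel-append step is elided:

```vb
' Sketch: point-by-point parse, one channel group per Socket/Board pair.
Dim sLine, aFields, sGrpName, sLastGrp, oGrp
sLastGrp = ""
Do While File.Position < File.Size
  sLine    = File.GetNextLine
  aFields  = Split(sLine, ",")
  sGrpName = "TestName " & aFields(0) & "-" & aFields(1)   ' #socket-#board
  If sGrpName <> sLastGrp Then
    Set oGrp = Root.ChannelGroups.Add(sGrpName)
    sLastGrp = sGrpName
  End If
  ' ...append aFields(2..UBound(aFields)) to channels D1..DZ here...
Loop
```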
Looking at the interface, if I use DirectAccessChannels then the whole column is imported at once, so the data in the channel will be interleaved across socket and board, right? Is it better to import the interleaved data via a DirectAccessChannel load and then decimate?
So if I used the DirectAccess load, the progress bar would go from 0 to 100% for each channel rather than showing progress for the plugin as a whole?
It is true that if you loaded each entire column as a DirectAccessChannel, you would see a separate progress bar in DIAdem for each channel as it loaded, one after the other, into the Data Portal. That would give you some of the progress visibility you want, but it would mean that you'd have a lot of rework to do after the data is loaded. Do you have users loading these data files interactively, or are the files always loaded as part of a running VBScript?
It probably doesn't help you, but let me mention to anyone else reading this that you can set a starting and ending position for each DirectAccessChannel, so if the data you wanted in a given channel was located in a block of values in contiguous rows, then DirectAccessChannels might still make sense for you. It is also the case that you can declare multiple DirectAccessChannels, one for each relevant block, then logically specify that they need to be stitched together (concatenated) into one long data channel when loaded into DIAdem. But if you need to join together an individual row with another N rows down from it, then this becomes impractical. Also, the concatenation of DirectAccessChannels can become memory intensive.
The type of data you describe is something I've often seen, and usually the best way to process that type of data is to declare each row as its own entity (Group or Channel) and declare the values in that row as properties. In this approach, you always have to run a query with the DataFinder to pick the rows you care to load together, but the concatenation of those disparate rows is automatic in the Search Results. What do you do with your grouped data once you load it?
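The row-as-entity suggestion might be sketched like this (group and property names are placeholders; the parsing details are assumptions):

```vb
' Sketch: declare each row as its own channel group and attach the row's
' values as properties, so the DataFinder can index and query them.
Dim sLine, aFields, oGrp, i, iRow
iRow = 0
Do While File.Position < File.Size
  sLine   = File.GetNextLine
  aFields = Split(sLine, ",")
  iRow    = iRow + 1
  Set oGrp = Root.ChannelGroups.Add("Row " & iRow)
  oGrp.Properties.Add "Socket", CLng(aFields(0))
  oGrp.Properties.Add "Board",  CLng(aFields(1))
  For i = 2 To UBound(aFields)
    oGrp.Properties.Add "Data" & (i - 1), CDbl(aFields(i))
  Next
Loop
```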
DIAdem Product Support Engineer
The mainline goal is to write native TDMS and bypass plugins altogether, but as we transition there is a need to process the data files currently being generated. Initially I thought it better that the TDMS format be the same between plugin-processed and natively logged data, so that whatever reporting is generated is inherently compatible. I might need to rethink how to get there, as the simple point-by-point parsing may be too slow for the data size. The user interaction is not yet fully formed, but I would think it would be a combination of manual and scripted use depending on the user. Is the LabVIEW plugin toolkit a wrapper around the USI API, or is it native to the LabVIEW runtime so the parser may run faster? Can you write plugins using Python on the DataFinder or Analysis Server, and if so, is the performance similar? I just want to make sure there is not a bigger hammer in the toolbox that I am not aware of.
I will have to try importing everything as properties and see how that flow goes, as it is hard for me to visualize. If I did this, would I be able to parse data into channels faster than linear parsing in VBSD? I am currently processing about 28k fields into channels per second. We are summarizing the whole dataset's performance over time, as this data was collected over months, and creating templates for reporting. Right now I think this processing would be done via the DataFinder and scripts.
For me, I like progress bars that show end-to-end completion timing rather than just activity. I always get my hopes up when a progress bar is near 100%, and if it just resets back to 0 and starts counting again... I am crushed. But an activity progress bar is way better than a progress bar that sits at 97% for what seems to be 90% of the progress time. Using the linear parser there is no status in the DIAdem progress bar and no interface to it, so I will have to rely on DebugView.
Just out of curiosity: in your first post you said that dbm() and its external tool and format limitations are an issue.
Could you please explain what kind of limitations? What would have made dbm() better suited for your use case?
In your last post it looks like you switched back to dbm() and DebugView, which is from my point of view the best way to trace VBS DataPlugins.
It sounds like the LabVIEW DataPlugin SDK might be a good avenue for you to pursue, if you're comfortable parsing ASCII files in LabVIEW and also comfortable with LabVIEW projects and building DLLs and installers. VBScript runs interpreted, so point-by-point reads in LabVIEW's compiled runtime would load into the Data Portal MUCH faster than the same approach with a VBScript DataPlugin. You do need to have the matching version of the LabVIEW Runtime Engine installed on any computer using a LabVIEW DataPlugin, but you can include that matching Runtime in the DataPlugin installer if you wish. I also agree that it would be ideal to have the DataPlugin deliver content to the Data Portal that is identical to what you load from the newer TDMS files. There is not currently a Python option for DataPlugins, though that would be a logical expansion for NI, given industry interest in that language.
Declaring all the fields as properties will NOT load faster, but it will query quickly after the file has been indexed once. Given your latest revelation about the transition to TDMS files, I think it probably makes more sense to work on a LabVIEW DataPlugin or just a LabVIEW ASCII ==> TDMS file converter VI.
DIAdem Product Support Engineer