I can see a few spots where things could be polished, but overall, I think the MyRIO is in great shape.
The installation was smooth. The quick start guide was great, and things worked just as it said they would. I downloaded the Cool MyRIO App and it installed smoothly. VIP was a great tool for setting that up.
In the category of "could use improvement": The WiFi button and LED don't function, which I know is a known issue. But the ni.com/info articles referenced for them only say that they don't work. There is no link to instructions for setting up WiFi, and I couldn't find anything in the MyRIO manual about how to connect either. I eventually found it in MAX, which didn't take too long, but it might be more of a stumbling block for students and NI newbies.
The example projects are great to have.
I'm currently trying to get the Audio project running (which is why I needed WiFi). The first time I ran it, the desktop app said it was connected but the RT didn't. I never really figured it out; I restarted LV and it came up fine. A minor comment: on the desktop app, the Stop button looks like it belongs to the Record button, e.g., press Record, then when you're done recording press Stop. Nope, it stops the program, not the recording. Perhaps a layout revision could help. Also, I don't really care for the True state of those buttons at all. The sickly slightly-orange hue doesn't really mean "depressed" or "on" to me. Actually, it does remind me of "depressed" as a mood, but not as a button state.
All in all, very impressive. Good job!
I've been working a bit with the Voice Recorder example project and a program of mine based on that one. There are a few things I've noticed.
The Voice Recorder RT VI has a bug in it. After running it once and quitting, the "End?" Boolean remains True. The next time you run it, the Network Streams loop quits right away, and the host program never gets any data. It needs a local variable to initialize the "End?" button to False at the beginning.
I tried to beef up this example and send four channels of data up to the host: two of the original audio and two of filtered data. The system became very unstable. If I drop the sampling rate down to 2k from the original 8k, it works all right, but even 4k (which should send the same amount of data as the original 8k, two-channel program) bogs it down. It gets very hard to debug, since the RT disconnects without any errors or explanations. I'm pretty sure, though, that it is a combination of network overload and CPU overload from handling the extra array manipulation. I can get somewhat similar behavior with the Voice Recorder example program by increasing the sample rate, but it doesn't fall apart until 20k, a factor of 2.5 above nominal, whereas my program crashes at the same nominal data throughput (4k but twice as many channels). Hence I think that my additional array manipulation contributes.
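A quick back-of-the-envelope check of the claim above, sketched in plain Python (the 2 bytes per sample assumes I16 audio data, as in the Voice Recorder example; protocol overhead is ignored):

```python
# Back-of-the-envelope Network Stream payload comparison.
# Assumption: 2 bytes per sample (I16 audio, per the Voice Recorder example).

def throughput(channels, sample_rate_hz, bytes_per_sample=2):
    """Payload rate in bytes/second (ignores protocol overhead)."""
    return channels * sample_rate_hz * bytes_per_sample

original = throughput(2, 8000)  # 2 channels at 8 kS/s
modified = throughput(4, 4000)  # 4 channels at 4 kS/s

# Same wire payload either way, so extra instability would have to come
# from per-channel processing (array manipulation, filtering) on RT.
print(original, modified)  # 32000 32000
```

Since both configurations put the same 32,000 bytes/s on the wire, the difference in stability points at the extra processing rather than the network.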
The weird part is that my program sometimes crashes and disconnects before I even start the host. For example, I run it, it crashes. I cycle power on the MyRIO and stop all programs. Wait for it to boot. Run the RT. It disconnects immediately, even though I haven't run the host, and hence the Network Stream isn't even active. Bizarre.
All that is pretty specific to my app. If I can't resolve it acceptably, I'll have to send it in for help. But one thing that could help in the meantime is the Distributed Systems Manager. The MyRIO target shows up, but the contents are empty. No data, no connection, no CPU loading, etc. Is this a known issue? Perhaps the support for MyRIO in the DSM just isn't ready yet? Or do I need to reinstall some component or do something else to get this working?
Oh - one other minor suggestion for the Voice Recorder example - There's a big spike on the frequency graph near DC (not surprisingly), so making the default manual X min setting something like 20 helps the Y axis autoscale a lot.
As I mentioned before, I get crashes with the RT system when I try to pump too much data from the RT to the Desktop using my audio program or the Voice Recorder example program. I suspect that NI set the audio sample rate at 8K in the Voice Recorder because of this issue. If you go much higher, you risk crashes. Obviously, though, 8K isn't ideal for full audio. I'm thinking of dropping to a mono program just to get higher sampling rates.
I was just wondering if anyone could verify that these limits make sense. Here are the numbers:
On the MyRIO, which has a Xilinx Zynq Z-7010 FPGA with a dual-core 667 MHz processor, you can easily push 2 channels of 8 kS/s data. Each data point is 2 bytes, so that's a throughput of 32,000 bytes/second through the Network Stream.
I have another project that uses a cRIO-9068 chassis, which has an Artix-7-class FPGA with a dual-core 667 MHz processor. On that, I've got a similar architecture that pushes 32 channels of doubles from RT to host at 15.625 kS/s. That's a throughput of 4,000,000 bytes/second.
Obviously, the fact that both processors run at 667 MHz doesn't mean that they are equally powerful, but the difference in throughput is a factor of 125!
Oh, another difference is WiFi on the MyRIO vs. wired 100 Mbit Ethernet on the cRIO.
Can anyone (an NI employee, probably) verify that the MyRIO Network Streams really are topping out at around 32,000 bytes/sec?
I'm still struggling with networking on the MyRIO.
The instructions for the Voice Recorder example state that to use the Network Streams, you have to have WiFi connected on the MyRIO. I have found that this is not the case. I can turn WiFi off, reboot the MyRIO, connect to the IP address that the USB connection shows, and the Voice Recorder example works. Or I can enable WiFi, change the project and the Network Streams to point to the WiFi IP, and it works even with the USB cable not connected. Unfortunately, using WiFi does not alleviate any of the bandwidth limitations mentioned in my previous post. I can still only get 32,000 bytes/sec over the Network Stream.
Another issue that has arisen is that my app is very sensitive to Network Shared Variables. As soon as I include one Shared Variable in the RT program, it becomes unstable and crashes frequently. MAX shows that the Network Variable Engine and the Network Variable Client Support are installed on the MyRIO. And my software deploys, it just crashes after it starts running. So I would think that I must have all the required software components installed on the MyRIO. Any ideas?
I was able to get DSM to show the myRIO's CPU and memory. You need to have System State Publisher installed to your myRIO. I have System State Publisher 3.1 installed on my device, running LabVIEW 2013 with DSM 2013. Are you able to publish CPU and memory to DSM? If so, what sort of behavior are you seeing from the CPU and memory during your network streams test?
I'm putting together a throughput test using the sample Voice Recorder project; I'll let you know if I find anything interesting and what my results are. It is interesting that both Wifi and the wired USB connection have given you the same results. Perhaps another factor is in play, such as the rate data is passed from FPGA to RT. Can you give some specifics on your implementation that resulted in the 32k bytes/s? How different is your code from the Voice Recorder sample, or is that result from a completely different project?
Network shared variables bring the entire network variable engine in when first added to an application. As such, an application that is near the limits of CPU or memory may crash from the added overhead. Is your application showing high CPU or memory values before the crash? I haven't had any issues using shared variables working with myRIO, and don't know of any applicable bugs. Although it sounds like you have all dependencies covered, I've attached a screen shot showing what software is installed to my target. Any differences for your installation?
Using the Voice Recorder sample project, my network stream (over USB) maxed out at 32333 bytes/sec, very close to what you were seeing. Going much over the sample rate/channels/etc. you discussed above caused the target to lose connection, exactly as you are seeing.
Next, I tried maxing out the network stream without having the FPGA involved. I removed the sound sampling code and instead generated a random array of 16-bit integer data, and tried to max out throughput. I was able to get speeds over 1.5e6 bytes/sec, around 1.5 MB/s, which is at least in the same realm as cRIO performance levels, and this was without optimizing packet size, loop rates, etc.
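The LabVIEW benchmark itself is graphical, but the shape of the test can be sketched in Python: a sender loop pushing blocks of random 16-bit data over a loopback TCP connection as fast as it can, while the receiver counts bytes per second. This is only an analogy for the Network Stream test (TCP here, not NI's protocol), and every name in it is mine, not part of any NI API:

```python
import random
import socket
import struct
import threading
import time

def run_benchmark(duration_s=0.5, block_samples=1000):
    """Push random 16-bit samples through a localhost TCP socket and
    return the measured receive rate in bytes/second."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    received = 0

    def receiver():
        nonlocal received
        conn, _ = server.accept()
        while True:
            data = conn.recv(65536)
            if not data:
                break
            received += len(data)
        conn.close()

    t = threading.Thread(target=receiver)
    t.start()

    sender = socket.create_connection(("127.0.0.1", port))
    # One block of random I16 samples, little-endian, packed once and reused.
    block = struct.pack(
        f"<{block_samples}h",
        *(random.randint(-32768, 32767) for _ in range(block_samples)),
    )
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        sender.sendall(block)
    sender.close()
    t.join()
    server.close()
    elapsed = time.monotonic() - start
    return received / elapsed

if __name__ == "__main__":
    print(f"{run_benchmark():.0f} bytes/sec")
```

Even an unoptimized loop like this moves far more than 32,000 bytes/sec over loopback, which matches the conclusion above: the transport isn't the bottleneck.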
Network streams are clearly not the limiting factor here. Instead, the FPGA is struggling to communicate back to the RT any more quickly than 32 kB/s. Taking a look at the FPGA VI for the Voice Recorder sample project, the AudioIn analog input section is very basic. We may be able to somehow optimize data throughput from FPGA up to RT. Your thoughts?
The big issue in the Voice Recorder sample code is that it uses Register I/O for communication between the FPGA and RT instead of DMA. Streaming analog data in this type of application should be done using DMA to reduce the load on the RT processor.
In addition, you would want to minimize any data handling or processing in RT that is not absolutely necessary. The sample code converts the analog audio data to the waveform data type before passing it to the network stream; that conversion could be eliminated, with the raw audio data passed directly to the network stream. Lastly, if you switch to DMA for the FPGA-to-RT streaming, you could eliminate the RT FIFO used to pass data between the two loops, since the DMA transfer includes its own buffer. In that case you would have only one loop in RT, which reads data from the DMA buffer and passes it to the network stream.
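The single-loop architecture described above can be sketched in plain Python standing in for the graphical LabVIEW code. `read_dma_fifo` and `write_network_stream` here are hypothetical stand-ins for the DMA FIFO Read and Network Stream Write nodes, not real APIs; the point is just the data path, with no waveform conversion and no second loop:

```python
from collections import deque

# Hypothetical stand-ins for the LabVIEW pieces: a DMA FIFO buffer filled
# by the FPGA, and the Network Stream endpoint that reaches the host.
dma_fifo = deque()   # "FPGA" side: raw I16 samples
network_stream = []  # "host" side: blocks the RT loop has sent

def read_dma_fifo(n):
    """Read up to n samples from the DMA buffer (FPGA -> RT)."""
    out = []
    while dma_fifo and len(out) < n:
        out.append(dma_fifo.popleft())
    return out

def write_network_stream(block):
    """Write one block of raw samples to the host (RT -> host).
    No waveform conversion - the raw data goes straight through."""
    network_stream.append(block)

def rt_loop(block_size=512):
    """Single RT loop: the DMA read feeds the Network Stream directly,
    so no RT FIFO between two loops is needed."""
    while dma_fifo:
        block = read_dma_fifo(block_size)
        if block:
            write_network_stream(block)

# Simulate the FPGA filling the buffer, then run the loop once.
dma_fifo.extend(range(2048))
rt_loop()
print(sum(len(b) for b in network_stream))  # 2048
```

The DMA buffer plays the role the RT FIFO played in the shipping example, which is why the second loop (and its copy of the data) can go away.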
32,000 bytes/s definitely sounds slow... I wouldn't have suspected any bottleneck within the network itself unless you were making many, many hops or the network was very busy.
If you are seeing crashing behavior from simply adding Shared Variables, I would definitely take this to technical support, as we should look into it further. The simpler the application that reproduces this, the better, but it's understandable if not. (Sometimes that's just the nature of the beast.) You can also pull the crash dump information from the controller by right-clicking the controller in MAX and selecting "View Error Log". This can be pretty helpful in determining the cause of a crash on the target.
Nice work on that troubleshooting, PatJamSim! This is definitely useful information. To piggyback on this: if you guys want to benchmark the networking throughput, I would focus on a simple project that uses just networking (Simple Network Streams.lvproj, an NI shipping example) so that you have a more isolated test. But based on this information, it sounds like it's more a processor limitation.
Thanks for all the feedback and ideas.
System State Publisher was not installed on my MyRIO. Now that it is, DSM works. Thanks. Perhaps this is something that should be part of the standard install?
Your idea about Shared Variables pushing a teetering system over the edge makes sense.
Christian's point about DMA is well taken as well.
Before I submit a Shared Variable example for tech support, I think I'll rewrite my app to use DMA, then see how things are working. I suspect that will help a lot. (If it does, I would suggest that rewriting the Voice Recorder example would be useful. It would appear that it is approaching its limit at 8 kS/s.)
I'll work on this more next weekend and get back to you.