In my last blog post, I discussed the type of data test engineers are collecting. The key takeaway: we’re dealing with a lot of data.
All this data leads to my next topic: the three biggest problems test engineers face. At NI, we refer to these collectively as the Big Analog Data™ problem.
Finding the Valuable Data Points in a Mountain of Data
All the data from the increasingly complex tests run every day eventually needs to be stored somewhere, but it often ends up scattered across locations, from a local test cell machine to an individual employee’s computer. Simply locating the data you need to analyze can be a giant pain, let alone sifting through pages of meaningless file names and types with no metadata for context.
We see too many of our customers wasting time because they don't have an efficient way of searching files. Even if engineering groups are lucky enough to have a centralized database to store test data, they still run into difficulties accessing it because it’s not optimized for waveform data types and rarely shared between groups.
All of this leads to “silos” of data that then can’t be used efficiently, causing more wasted time trying to access it. In extreme cases, these problems even cause companies to rerun tests because they simply can’t find data they’ve already collected.
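To make this concrete, here is a rough Python sketch of the kind of lightweight metadata index that makes test files searchable in the first place. The folder layout, the .tdms/.json sidecar pairing, and the metadata fields are purely illustrative assumptions, not a prescription for how your data should be organized.

```python
import json
from pathlib import Path

# Rough sketch: walk a shared folder of test results and build a small,
# searchable index of file-level metadata. The field names (test_rig,
# operator, date) are illustrative assumptions.
def build_index(root_dir):
    index = []
    for path in Path(root_dir).rglob("*.tdms"):
        sidecar = path.with_suffix(".json")      # assumed metadata sidecar
        metadata = json.loads(sidecar.read_text()) if sidecar.exists() else {}
        record = {"file": str(path), "size_bytes": path.stat().st_size}
        record.update(metadata)                  # e.g. test_rig, operator, date
        index.append(record)
    return index

def search(index, **criteria):
    # Return every record whose metadata matches all supplied criteria.
    return [rec for rec in index
            if all(rec.get(key) == value for key, value in criteria.items())]

# Example: find every run from one test cell without opening a single file
index = build_index("/shared/test_data")         # hypothetical shared location
for hit in search(index, test_rig="cell_07"):
    print(hit["file"])
```

Even a simple index like this only works if the metadata is captured consistently, which is exactly what breaks down when data lives in silos.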
Validating Data Collected
An issue that most engineers don’t think about until they experience it firsthand is validating the data they collect. Ideally, every test runs the way it’s intended to, but there’s no way to know unless you perform some validation steps.
There are countless ways to get incorrect data, from improper test rig setup to data corruption. If invalid data goes on to be analyzed and used in decision-making, there could be disastrous results.
Big Analog Data validation presents extra headaches due to the sheer volume and variety of data types. A gut-wrenching example is NASA’s 1999 Mars Climate Orbiter, which burned up in the Martian atmosphere because one team’s software reported thruster data in English units while the navigation software expected metric.
Manual validation works, but it’s extremely time-consuming. To save engineers from wasting valuable person-hours, an automated solution is usually required.
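As a rough illustration of what “automated” can mean here, the Python sketch below runs a few sanity checks on a freshly acquired channel. The expected sample rate, physical limits, and failure scenario are my own illustrative assumptions, not anyone’s official checklist.

```python
import numpy as np

# Rough sketch of automated post-acquisition checks. The expected sample
# rate and physical limits below are illustrative assumptions.
EXPECTED_SAMPLE_RATE_HZ = 10_000
THRUST_RANGE_N = (0.0, 5_000.0)          # plausible limits in newtons (SI)

def validate_channel(samples, sample_rate_hz):
    """Return a list of human-readable problems; an empty list means the data passed."""
    problems = []
    if sample_rate_hz != EXPECTED_SAMPLE_RATE_HZ:
        problems.append(f"unexpected sample rate: {sample_rate_hz} Hz")
    if np.isnan(samples).any():
        problems.append("NaN values found (possible dropped samples or corruption)")
    low, high = THRUST_RANGE_N
    if np.nanmin(samples) < low or np.nanmax(samples) > high:
        # Out-of-range values often point to a rig setup mistake or a unit
        # error, e.g. pound-force logged where newtons were expected.
        problems.append("values outside the plausible physical range")
    return problems

# Example: a channel with one dropped sample recorded as NaN
raw = 2_000.0 * np.sin(np.linspace(0, 10, 10_000))
raw[1234] = np.nan
print(validate_channel(raw, 10_000) or "data passed validation")
```

Checks this simple catch a surprising share of problems, and because they run automatically, they catch them before anyone makes a decision based on bad data.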
Analyzing Large Volumes of Data Efficiently
Studies show that on average only five percent of all data collected goes on to be analyzed. By not analyzing more of the data you collect, you risk making decisions without considering the bigger picture.
A great way to illustrate this in the engineering world is the Nyquist theorem, which states that you must sample a signal at more than twice its highest frequency to reconstruct it accurately. For example, with too few data points, you may see what looks like an exponential signal (Figure 1) instead of the sine wave that’s actually there (Figure 2).
Figure 1. Undersampled data that appears to be an exponential signal
Figure 2. With enough data points, the actual sine wave emerges
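If you want to see the effect for yourself, here is a minimal Python sketch that samples the same sine wave below and well above the Nyquist rate; the specific frequencies and rates are just assumptions chosen for the demonstration.

```python
import numpy as np

SIGNAL_HZ = 10.0   # frequency of the underlying sine wave (an assumption)

def sample_sine(rate_hz, duration_s=1.0):
    # Sample a SIGNAL_HZ sine wave at the given rate for the given duration.
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    return t, np.sin(2 * np.pi * SIGNAL_HZ * t)

# Below 2 x 10 Hz: too few points, so the samples trace a misleading shape
t_under, under = sample_sine(rate_hz=12)
# Well above the Nyquist rate: the samples clearly reproduce the sine wave
t_ok, ok = sample_sine(rate_hz=1_000)

print(f"{under.size} undersampled points vs {ok.size} adequate points")
```

Plotting the two sets of samples side by side reproduces the contrast the figures above are meant to show.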
There are two reasons why test engineers don’t analyze more data. The first, as I mentioned earlier, is being unable to find the right data in the mountains of Big Analog Data they’ve collected. The second is that they’re using systems and processes that aren’t optimized for large data sets; manual calculations with inadequate tools are typically the roadblock to analyzing large quantities of data.
Even with the right tools, processing Big Analog Data can be troublesome and usually calls for an automated solution that offloads the work to a dedicated system.
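One possible shape for that kind of offloaded, automated processing is sketched below in Python: it streams a large waveform file in chunks and reduces it to summary statistics so the heavy lifting can run unattended on a dedicated machine. The file format, path, and chunk size are illustrative assumptions.

```python
import numpy as np

CHUNK_SAMPLES = 1_000_000   # process one million samples at a time (an assumption)

def summarize_waveform(path):
    """Stream a large binary file of float64 samples and reduce it to summary statistics."""
    count, total, peak = 0, 0.0, float("-inf")
    with open(path, "rb") as f:
        while True:
            chunk = np.fromfile(f, dtype=np.float64, count=CHUNK_SAMPLES)
            if chunk.size == 0:
                break               # end of file reached
            count += chunk.size
            total += float(chunk.sum())
            peak = max(peak, float(chunk.max()))
    return {"samples": count, "mean": total / count if count else 0.0, "peak": peak}

# Scheduled on a dedicated analysis machine, e.g. as a nightly job:
# print(summarize_waveform("/data/vibration_run_042.bin"))
```

Because the file is read in chunks, memory use stays flat no matter how large the data set grows, which is exactly the property a desktop spreadsheet workflow lacks.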
In my next post, I’ll give you some options for tackling your Big Analog Data problem, so you can be sure you’re making the best data-driven decisions possible.
NEXT: Addressing Your Big Analog Data Challenges >>
Find out more about NI’s solutions for data-driven decisions >>