LabVIEW News


The release of LabVIEW 2016 just a few short weeks ago marked the 30-year anniversary of the seminal 1.0 launch of LabVIEW on the Mac. To ultimately improve your productivity in building engineering systems, NI has consistently invested in abstracting low-level programming tasks, exposing leading technologies and compute platforms, maximizing performance and memory management, and of course doing all of this inside of a stable platform. You might think that the features you’ve seen over the past few releases, such as Channel Wires, represent the totality of the investment being made in LabVIEW. But, you would be wrong.

NI also announced the NI Software Technology Preview, a program designed to provide current users of NI software active on a Standard Service Program contract with an early look at conceptual capabilities. Through this program, you can access pre-release software builds, product guides, reference materials, help modules, and discussion forums to help you along the way.

The software builds available today demonstrate investment in these 6 key areas:

  • Approachable Automation
  • Integrated Learning and Help Content
  • Interactive Data Analysis and Management
  • Enhanced Productivity for Programming
  • Modern UI Development from the Desktop to the Web
  • Server Side Data Analysis and Management

The Next Generation LabVIEW Features for Desktop DAQ and Instrument Control Technology Preview build demonstrates the fruition of these investments in graphical programming specifically.


Sign up for the NI Software Technology Preview and zoom into the future of LabVIEW now!


There are a million scenarios where data can get lost. Did someone forget to move the data from the test machine to the final data store? Was the spreadsheet you were analyzing closed without saving any changes? Did a test fail and cause a corrupt file to be created? One effect of losing data could be that a test has to be re-run.

We all know that feeling. Imagine yourself working on a PowerPoint presentation and you are in your groove. All of a sudden PowerPoint crashes and you realize the last time you saved was 30 minutes ago! And of course auto-recovery doesn’t fully capture all of the changes you made in that time. So you say a few choice words, put your head down, and re-create all of the work you just lost.

When test data is lost, you go through the same process. And sure, the second time around it goes by faster, but sometimes this just isn’t an option and capturing the data is critical.

Currently in LabVIEW, we have the project to organize and store your VIs, subVIs, controls, documentation, libraries, and so on, but our question to you is, "where is the data?" Why is the data you are collecting in your application not automatically saved to the project?

NI is investing in this exact scenario to automatically save data to the project and to collect data without programming (as discussed in a previous blog). Now, when you share a project, the data files will be sent along with the application. Just think of the possibilities! When there is a bug and the data is returning something different than usual, you can package up the whole project to send to the troubleshooting team. There could be an overall increase in efficiency because data will never be lost again. Just look at what benefits the High Power Laboratory at ABB Switzerland has achieved by focusing on analyzing the data they already have available: "We can save up to $50k per test by simply avoiding the costs of rerunning tests for which existing d...."


The bottom line is, we want you to be as efficient as possible when developing your application, and ensuring that your data is well managed is one of NI’s priorities for the future.


When you look at traditional programming languages, you see all the same things: text characters and punctuation symbols. To understand the meaning of this code, you read and interpret a form of text that was designed from the perspective of a machine’s sequential operation. Various development environments apply transforms to this text to help you – color-coding keywords, automatically indenting sections of code to show scope, collapsing sections of a large file for easier navigation – but in the end, you are still left facing a wall of text that you must interpret.

The graphical programming language in LabVIEW describes functionality the way that users think: visually. Data flows along patterned, color-coded wires, parallel processes are shown side by side, and code sections are abstracted into nodes with visual depictions of their functionality. Over 30 years, engineers and scientists have used graphical programming as their tool of choice for automation and system design, because it visually reflects their way of thinking, rather than the computer’s.

It follows that visual design in a graphical programming language affects not only aesthetics, but also utility. As we developed the LabVIEW platform over these last 30 years, we have added thousands more functions across a variety of areas, from data acquisition, to embedded control and monitoring, to 5G research. With such a diverse application space, it is important that the platform stay visually consistent and disciplined. With that in mind, we have embarked on a significant visual design initiative intended to keep all developers productive.

The first major result that you will see from this initiative is more consistent and meaningful iconography for VIs. In the past, you may have found different glyphs that meant “search” – binoculars or different styles of magnifying glass – in the future we will have a single glyph used throughout the platform. The same applies for “write,” “configure,” “reset,” and so on.


Figure 1: Consistent iconography across the software platform makes visual metaphors more effective.

We've also turned our attention to VI icon design. We took inspiration from everyday traffic signs that are simple to understand at a glance, and applied this to color scheme and glyph use. The results are icons that have only one bold color, one accent color, and a few key glyphs per icon. This reduces visual complexity, while still elevating important functionality.


Figure 2: The NI-DAQmx palette uses a bold dark color and secondary accent color.

Graphical programming has always derived its value from effective visual design. With continued investment, we will further differentiate this benefit compared to traditional programming languages.

Let us know what you think.


There’s no way around it, expectations around UIs have changed dramatically with the rise of touch-friendly smartphone apps and the proliferation of sophisticated web applications (we’ve talked about it before). For 30 years, NI has empowered engineers to build UIs that look exactly like they want in LabVIEW, but it’s time to do more.

These new experiences are not just the result of an evolving design aesthetic for what makes a UI effective. Even more important than changing shapes and color schemes is the underlying technology that enables these new capabilities. This underlying technology is what we are investing in to enable you to meet design expectations.


WinForms (Desktop) · WPF (Desktop) · Silverlight (Web) · HTML5 Prototype (Web)

Let’s take a look at the evolution of UI technology over the last 30 years. From Windows Graphics Device Interface (GDI) to Windows Forms (WinForms) to Windows Presentation Foundation (WPF), Microsoft has continually provided new APIs and libraries for composing UIs in Windows applications.

Over the same time period, web browsers and mobile OSs such as Android and iOS have emerged as new application platforms. With each of these platforms, there has been an entire evolution of UI technologies – Java, Flash, Silverlight, and HTML5, to name a few.

Right now we are investing in two UI technologies – WPF and HTML5. WPF is hardware accelerated and based on DirectX, incorporating advanced graphical features like gradients, opacity, animation, and an advanced composition model. Simply put, with WPF we can build you stunning, theme-able controls that you can customize with artwork imported directly from Adobe design tools. We added WPF support to Measurement Studio in 2013 and are currently working to bring this technology to the rest of our software platform.

Meanwhile, HTML5 has emerged as the de facto web standard for beautiful, interactive web applications. In addition to richer graphics, HTML5 can facilitate dynamic animations and responsive layouts in combination with CSS3 and JavaScript. Very importantly, HTML5 does not require any client-side plugins like Silverlight and is supported in all modern web browsers.

We’re excited about bringing these technologies into tools like LabVIEW because of the new things they will enable application developers to do. Each brings new capabilities for native controls, such as WPF tables with mixed data types or Google Maps integrated into an HTML5 page. Controls also have the potential to be customized in new ways using vector-based graphics that can scale flawlessly to different resolutions. In addition to these advances, technologies like WPF and HTML5 bring rich ecosystems of existing controls and frameworks that can be reused by application developers.

Changing UI requirements are much more than just changes in design fashion from skeuomorphic to flat. UI technology has been evolving, and NI is investing to keep up.

What UI technologies are you excited about?


Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.

You’re finally given the project you’ve been working towards: it’s “THE” project – big, hairy, complex, and visible to top management. It’s the project the company needs to get back on track. Before you dive in – remember the company needs this, meaning if it doesn’t go well it can threaten the very existence of the company and, therefore, your livelihood.

Now that there’s no pressure – what should you consider? With large software projects typically running 66% over budget and 33% over schedule, you need to have a plan to defy the statistics. Complex project management must be coordinated with clear accountability, clear communication, and shorter iterations. Luckily for you, there are plenty of organizations that specialize in consulting for large application management. But NI can help, too: what we know is software for designing, prototyping, and deploying your engineering solution. And we also know that the right tool makes all the difference.

Imagine for a moment you didn’t go to engineering school, but instead became a drywaller. You’ve recently been hired by a local small business and you show up for your first day of work excited for a busy day. You and your team of three other workers show up to the first house – a straightforward job of room repair, taping and floating. But here’s the catch – you have a step stool, a piece of sand paper, and a putty knife. You begin on a small square and quickly become frustrated with bumpy spackle and tearing sand paper. Your colleagues are on stilts quickly moving across the ceiling with their drywall saws, automated sanders, and expert tape. They’ve completed three rooms and you’re still working on one patch.

You get the point – the right tools make all the difference. You need an engineering software tool for your engineering job. LabVIEW has been proven for over 30 years to be the most productive engineering software package on the market. But that’s not enough – the touch points to the process and the system are also critical. Tracking the relationship from requirements to test, measurement, and control software is crucial for validating implementation, analyzing the full impact of changing requirements, and understanding the impact of test failures. Performing this coverage and impact analysis helps engineers meet their requirements and streamline development efforts. NI Requirements Gateway is ideal for applications that simulate or test complex components against documented requirements in industries such as automotive, defense, aerospace, and consumer electronics.  Requirements Gateway works seamlessly with your NI software – from LabVIEW to TestStand to LabWindows™/CVI.

Whether you are running THE project, developing a simple UI, or creating the next test system for the team – ensure you use the right tools to allow you to focus on solving the complexity of the engineering challenge, not trying to unravel the complexity of application software tooling.


Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.

It’s like a bad dream. The kind of dream that recurs, never changes, never gets better, just happens over and over again. And there’s nothing that you can do about it. I had a conversation with someone who downloaded the LabVIEW evaluation but never engaged with it further. I asked what prevented him from continuing.

His answer?

“Well, I’m really just trying to automate this instrument. I gave it a look, but LabVIEW just isn’t for me.”

The words pierced my heart like a perfectly placed knife inserted by the expert hands of Jason Bourne. Not for him? Not for him? He’s doing the EXACT thing that LabVIEW was conceived 30 years ago to do – automate benchtop instruments.

Those words have haunted me since that day. The sad fact is that he was right. LabVIEW has evolved so much as an enabling technology for any engineer to accomplish almost anything that it’s no longer highly optimized for any specific task. Within the walls of NI, we call this the “Blank VI Syndrome”. Even a blank PowerPoint slide says “Click to Add Title”. That’s why the investment we’ve been making in in-product learning is so important.

How do you teach someone to use a tool that can do anything?

Well, the answer to that is beautifully simple. You don’t teach them to use the tool. You teach them to accomplish their task using the tool. It might seem subtle, but that subtlety is important. You aren’t taught how to use a pencil. You’re taught how to write with the pencil.

Within the entirety of NI software, and not just LabVIEW, we’re building capabilities that solve a few issues.


Discoverability


There’s a ton of valuable IP, functions, and controls built into LabVIEW. You can find them if you know where to look. But, by definition, new users don’t know where to look. Making these capabilities easily and naturally discoverable is a critical aspect to shortening the learning curve.


Integrated Guidance

Today, when you’re learning to use LabVIEW, your best resource is the vast expanse of the internet. The internet, where funny cat videos, memes, and bad lip reading can take you away from the task at hand in a moment’s notice. The software product itself needs to be smart enough to help you solve the task – to be both the tool and the teacher.


Better Starting Points

LabVIEW is world-renowned for its unrivaled ability to integrate hardware – any hardware – so acquiring data from that hardware is a common starting place. Of course, not everyone has this same starting point. LabVIEW has become popular in design applications as well, where the starting point is typically hardware-agnostic. Regardless, we can take the vast majority of applications and boil the starting points down to a manageable number.

Then, we should just design approachable starting points and flows around those. Right?



Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.

The days of the droning instructor—whether in undergrad or executive education—are (thankfully) long gone. No one has the time or patience to learn new techniques and technologies the old way. 

The pace of technology is ridiculous, the Internet of Things (IoT) is exploding system complexity, and we’re all finding it hard to keep up. The facts are undeniable, you and your teams have more to learn in your field and have to learn beyond your field, too, if you really want to be competitive. Learning formats must be compatible with your lifestyle.

Thankfully there’s good news. Technology, investments, and management trends are in our favor. I’m seeing a rise in learning technologies and techniques from massive open online courses (MOOCs) at universities to in-product learning from customer education departments. But I’m also seeing innovation outside traditional spaces. Options like Connexions, TechShop, and Khan Academy are popping up everywhere.

The 70:20:10 Rule: Learning Is More Than the Classroom

We need to expand our definition of ‘training’ beyond the classroom to all forms of learning. The 70:20:10 rule reinforces this concept. Traditionally, "customer education" content lives in the 10% (formal learning). NI is also providing the 20% (social learning), heavily rooted in peer-to-peer formats (user groups, summits, developer days). We are evolving our portfolio to include more content that lives in the 70% (experiential learning). Learning spans tutorials to online modules, YouTube to code snippets, mentoring to code reviews, and seminars to white papers. Learning is popping up online, in cubes, and in-product.

One key learning enhancement in LabVIEW, shaped by the LabVIEW Community, first came in LabVIEW 2012 with the introduction of templates and sample projects. These recommended starting points are designed to ensure the quality and scalability of a system by demonstrating recommended architectures and illustrating best practices for documenting and organizing code. More than a conceptual illustration of how to use a low-level API or technology within LabVIEW, these open-source projects demonstrate how the code works and best practices for adding or modifying functionality, so you learn by doing.

But we aren’t stopping there, we have learning built into LabVIEW Communications Design Suite (the revolution in rapid prototyping for communications) to minimize design flow interruptions and encourage seamless learning. "Just-in-time" learning and access to learning material in product allows you to learn by doing—commonly referred to as "performance support." 


Overcoming the Access Hurdle

Regardless of the learning format you prefer, it’s clear to me that access is the key hurdle to overcome. If you know what you don’t know and can access the right training to learn what you need to learn—study upon study demonstrates you will be significantly more productive.


We are doing our part here as well. For the past several years, NI has included online training with most of our products as part of staying on active software service. This online format is optimized to fit your schedule while complementing other formats including live instructor-led virtual training, classroom training, and on-site custom courses. The online courses respect your time and your budget as Thomas Sumrak from PAR Technologies reported, “I estimate that the training courses have saved me more than 150 hours in self-paced learning time to get to an equivalent skill level.”

It’s All About Proficiency

As your intuition would tell you, learning, and more importantly—proficiency—really matters. Everyone knows someone who takes every corporate course available and doesn’t learn a thing. You have to learn, not just listen, for the investment to matter. When you do—it does. After becoming proficient (via certification), customers reported the following:

  • Over 54% said the quality of their work improved
  • Nearly 45% said their peers’ perceptions of them improved
  • Nearly 30% received new project opportunities

Take your learning into your own hands and take advantage of the many new resources available and suited to your learning preferences, time constraints, and budget needs. Don’t just check a box, but take the time to do, and cement your understanding through experience. It’s at your fingertips.

Identify the skills you need and find learning resources to help you successfully develop your application. Visit or download the course catalog to review the training courses, certification exams, and proficiency events available in your area of interest.


Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.

The light switch—that little component of our lighting system that is afforded no delay, no ramp time, and no warm up routine. When you flip up that switch, you expect to see light immediately. This expectation is the result of years of mental conditioning; however, many fail to understand how it actually happens.

What’s the light switch for LabVIEW developers? It’s the compiler. It’s that tiny run arrow button that switches your code from edit mode to run mode. Again, many years of mental conditioning force you to expect the immediate translation of your code into executable action. But do you understand what actually happens?

Pressing the run button isn’t what sends your code down the magical path of optimizing, unrolling, and memory managing. The LabVIEW compiler is constant; it’s always on and compiling your code. From a usability standpoint, this is one area that makes LabVIEW so engaging. Our brains live off the dopamine hits of being right, solving problems, and answering questions. With each incomplete step of your code development, the compiler visually warns you that your code won’t run. Obviously, the next step is to fix that. Voilà! The run button is complete again.

And LabVIEW says, “you’re welcome” for that adrenaline rush you just enjoyed—perhaps even subconsciously.

Unlike other languages where the compile is an explicit step that you take when you’re done and ready to run, LabVIEW does this action constantly. This always-on compiler streamlines the development process for programmers.

I can’t summarize the complaints I get from LabVIEW users about the compiler, because, frankly, I just don’t get them. Yes, I hear a lot about the need for by-reference classes and generics, but those are more language implementations and not specific compiler complaints. Subtle, but I’ll consider it accurate just so I’m right.

As we continue to iterate, improve, and evolve the LabVIEW compiler, our focus is on two places: writing code faster and writing faster code (this gem of language genius came from one of our senior LabVIEW architects, Darren Nattinger, also known as the fastest LabVIEW programmer on the planet).

Write Code Faster

LabVIEW was designed for engineers and scientists. The semantics of writing graphical code map directly to how most engineers lay out solutions in their minds (see any of the maps I drew on napkins before I was blessed with a smartphone).

We’ve had a significant focus over the last few years on reducing mouse clicks and keyboard strokes. This is one measure of an engineer’s productivity—getting to final code in minimal physical effort (right-click shortcuts, default wiring options, and so on).

Sneak Peek: As you’ll see in the upcoming release of LabVIEW (at NIWeek perhaps?), these types of improvements are continuing.

In addition to these critical improvements in the compiler, we’re investing in areas that carry this benefit into other product areas. I previously discussed one such example in the realm of hardware discovery and configuration. But, we’re also looking to extend into other areas such as managing deployed systems, simplifying data communication, and introducing other language components.


Write Faster Code

The optimizations introduced in LabVIEW 2010 with the new DataFlow Intermediate Representation (DFIR) and the integration of off-the-shelf compiler technology in the Low-Level Virtual Machine (LLVM) have drastically increased the run-time performance of code without requiring rewrites of the code itself.

This compiler overhaul has laid the groundwork for continued innovation for both code optimization on the desktop and code optimizations within deployed targets, both real-time processors and FPGAs. The newly introduced multirate diagram in the LabVIEW Communications System Design Suite is a perfect example of this innovation—a novel algorithm representation that enables one high-level representation of a mathematical algorithm, even with different execution rates and sample counts from node to node.



This is representative of our constant commitment to a LabVIEW compiler that produces highly optimized code.

Sneak Peek: Again, you’ll see some very impressive performance improvements for large applications in the next release.



Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.

Grace Murray Hopper (1906–1992), computer scientist and US Navy rear admiral, is on record as the individual who invented the first compiler and popularized the concept of machine-independent programming languages, and her work led to the development of COBOL. Her influence earned her the nickname Grandma COBOL.[1]

Hopper’s contributions led to the productivity through abstraction that engineers and scientists benefit from regularly. While her work was ahead of its time, compilers have advanced significantly since the Harvard Mark I computer Hopper used in 1944. Compiler design, even for a trivial programming language, can easily become complex, which makes compiler theory one of the most specialized fields among software professionals today. To most engineers, this complexity makes compilers seem either frightening or magical.

However, all engineers can benefit from this art form of software mastery. From Python to LabVIEW, compilers are primarily used to translate source code from a high-level programming language (G, C#, and so on) to a lower-level language (assembly or machine code). Even more beneficial are cross-compilers that can create code to execute on a computer where the target and development CPU or OS are different.

Modern LabVIEW provides a multiparadigmatic language that embraces a wide variety of concepts including data flow, object orientation, event-driven programming, and, more recently, multirate data flow, which extends LabVIEW development by giving you the ability to implement multirate, streaming digital signal processing algorithms more intuitively. LabVIEW also supports cross compilation, a powerful capability that protects the investment in your code by letting you develop on one platform and deploy to Windows, macOS, NI Linux Real-Time, CPUs, and FPGAs. All of this to say, the LabVIEW compiler is a pretty awesome, sophisticated element of our platform.



Figure 1. LabVIEW has a cross-compiler and contains a multiparadigmatic language that embraces a wide variety of concepts including data flow, object orientation, event-driven programming, and, more recently, a multirate diagram, which defines a synchronous multirate dataflow system (shown here).

For LabVIEW and most languages, the compiler is an area that is always reviewed, renewed, and improved. Major compiler investments (following the first huge transition from an interpreter to a compiler in LabVIEW 2.0) began in LabVIEW 2009 when we added 64-bit compilation and DataFlow Intermediate Representation (DFIR). Complementary to that investment was the adoption of a Low-Level Virtual Machine (LLVM) into the compiler chain in LabVIEW 2010. These significant improvements provided more advanced forms of loop-invariant code motion, constant folding, dead code elimination, and unreachable code elimination, as well as new compiler optimizations such as instruction scheduling, loop unswitching, instruction combining, conditional propagation, and a more sophisticated register allocator.
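To make one of these optimization names concrete, here is a hand-applied sketch in Python of loop-invariant code motion. This is only an illustration of the idea; LabVIEW’s DFIR and LLVM stages apply such transforms automatically on intermediate representations, not on your source code, and the function names here are made up.

```python
def before(samples, gain, offset):
    """Naive version: the scale factor is recomputed on every iteration."""
    out = []
    for s in samples:
        scale = gain * 2.5 + offset   # loop-invariant: same value each pass
        out.append(s * scale)
    return out

def after(samples, gain, offset):
    """Optimized version: the invariant computation is hoisted out."""
    scale = gain * 2.5 + offset       # computed once (code motion)
    return [s * scale for s in samples]

# Both versions produce identical results; an optimizing compiler
# performs this rewrite for you without changing program behavior.
assert before([1, 2, 3], 4.0, 1.0) == after([1, 2, 3], 4.0, 1.0)
```

Constant folding and dead code elimination work in the same spirit: the compiler precomputes expressions whose inputs are known and removes computations whose results are never used.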

So aren’t other compilers amazing, too? Of course, anything that provides a valuable abstraction from machine code to encourage innovation and increase productivity is amazing. However, all compilers are not created equal. 

Python, for example, has a compiler that compiles to a byte code used by a virtual machine, similar to Java. This provides portability but keeps it very far from “the metal,” creating noteworthy challenges if your application relies on precise timing or I/O.
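You can inspect that byte code layer yourself with CPython’s standard `dis` module. A minimal sketch (the exact instruction names vary by CPython version):

```python
import dis

def scale(x):
    return x * 2 + 1

# CPython compiles the function body to byte code instructions,
# which the virtual machine then interprets one at a time
# (no native machine code is produced by default).
ops = [instr.opname for instr in dis.Bytecode(scale)]
print(ops)  # instruction names vary by version, e.g. 'BINARY_OP', 'RETURN_VALUE'
```

Each of those interpreted instructions carries dispatch overhead, which is one reason byte-code virtual machines sit so far from the hardware compared to natively compiled code.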

Why does this all matter? Why do you care about Grandma COBOL’s work? As a busy, practical engineer who programs to get an application built and trusts the compiler will just work, what’s in it for you? When done well, compilers can contribute significant value to your day-to-day development by increasing your executable code’s performance. Modern compilers can also contribute to your productivity if they are designed to be extensible so that partners can build on tools, languages, and integrated development environment features for more rapid innovation. This complex and critical element of every language provides a constant “green field” of innovation for computer scientists—an opportunity for continuous improvement in each release.

What’s your favorite compiler? Where should we invest next to improve LabVIEW compiler technology for you?

The registered trademark Linux® is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.


[1] Hopper, well known for her lively style, is also credited with popularizing the term debugging for solving small glitches in engineering problems, after her associates discovered a moth stuck in a relay (Dahlgren, Virginia, 1947).


Today’s post is part of a series exploring areas of focus and innovation for NI software.


Today’s Featured Author

Omid Sojoodi is currently the leader of application and embedded software for National Instruments.

With the rise of the Industrial Internet of Things, one thing is clear: engineers need to extract meaningful information from the massive amounts of machine data collected.

Data from machines, the fastest growing type of data, is expected to exceed 4.4 zettabytes (that’s 21 zeros) by 2020. This type of data is growing faster than social media data and other traditional sources. This may sound surprising, but when you think about those other data sources, which I call “human limited,” consider that there are only so many tweets or pictures a person can upload throughout the day. And there are only so many movies or TV shows a person can binge watch on Netflix to get to the next set of recommendations. But machines can collect hundreds or even thousands of signals 24/7 in an automated fashion. In the very near future, the data generated by our more than 50 billion connected devices will easily surpass the amount of data humans generate.

The data that machines generate is unique, and big data analysis tools that work for social media data or traditional big data sources just won’t cut it for engineering data. That is why NI is investing in tools to help you overcome common challenges and make data-driven decisions based on your engineering data (no matter the size) confidently.

Challenge 1: 78 percent of data is undocumented.

According to research firm International Data Corporation (IDC), “The Internet of Things will also influence the massive amounts of ’useful data’—data that could be analyzed—in the digital universe. In 2013, only 22 percent of the information in the digital universe was considered useful data, but less than 5 percent of the useful data was actually analyzed.”

Data that is considered useful includes metadata or data that is tagged with additional information. No one wants to open a data source and wonder what the test was, what the channels of information are called, what units the data was collected in, and so on. NI is helping to resolve this issue with our Technical Data Management (TDM) data model. With it, you can add an unlimited number of attributes for a channel, a group of channels, or the entire file. We are constantly updating the infrastructure of this binary (but open) data file, and have recently reached streaming rates of 13.6 GB/s. To make documenting data easier, NI is investing in technologies that will recommend metadata to save with your raw data while offering you the flexibility to add attributes at any point before, during, or after acquisition.
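The TDM hierarchy described above can be pictured with a minimal sketch. This is only an illustration of the three-level file/group/channel model with free-form properties at each level, not the actual TDMS binary format or any NI API; all names and values below are made up:

```python
from dataclasses import dataclass, field

# Minimal sketch of the three-level TDM hierarchy: file -> group -> channel.
# Each level carries its own free-form property dictionary, mirroring how
# TDM lets you attach an unlimited number of attributes at any level.
# (Illustrative only; the real TDMS binary layout is more involved.)

@dataclass
class Channel:
    name: str
    values: list
    properties: dict = field(default_factory=dict)

@dataclass
class Group:
    name: str
    channels: list = field(default_factory=list)
    properties: dict = field(default_factory=dict)

@dataclass
class TdmFile:
    properties: dict = field(default_factory=dict)
    groups: list = field(default_factory=list)

# Document the data as it is created, not after the fact.
vib = Channel("Vibration", [0.01, 0.02, 0.015],
              properties={"unit": "g", "sensor": "PCB 352C33"})
grp = Group("Engine Test", [vib], properties={"rpm_setpoint": 3000})
f = TdmFile(properties={"operator": "J. Smith", "test_id": "ET-0042"}, groups=[grp])

print(f.groups[0].channels[0].properties["unit"])  # -> g
```

The point of the structure is that a file opened months later still answers "what test, which channels, which units" from its own properties.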

Challenge 2: The average NI customer uses three to five file types for projects.

With so many custom solutions on the market, your current application likely involves a variety of vendors to accomplish your task. Sometimes these vendors require you to use closed software that exports in a custom format. Considered a common pain point, aggregating data from these multiple formats often requires multiple tools to read and analyze the data. NI addresses this challenge with DataPlugins, which map any file format to the universal TDM data model. Then you can use a single tool, such as LabVIEW or DIAdem, to create analysis routines. To date, NI has developed over 1,000 DataPlugins. If one isn’t readily available, NI can write a DataPlugin for you.
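Conceptually, a DataPlugin is a parser that maps a foreign file format onto the universal file/group/channel model. The real DataPlugin interface (hosted by DIAdem and LabVIEW) is not shown here; this toy sketch assumes a hypothetical CSV layout with a channel-name row followed by a unit row:

```python
import csv, io

# Toy sketch of what a DataPlugin does conceptually: parse a foreign format
# and map it onto the universal file/group/channel model. The CSV layout
# below (name row, unit row, then data rows) is a made-up example.
def csv_dataplugin(text, group_name="Imported"):
    reader = csv.reader(io.StringIO(text))
    header = next(reader)                      # channel names
    units = next(reader)                       # one unit per channel
    columns = list(zip(*[list(map(float, row)) for row in reader]))
    channels = [{"name": n, "unit": u, "values": list(v)}
                for n, u, v in zip(header, units, columns)]
    return {"groups": [{"name": group_name, "channels": channels}]}

raw = "Time,Speed\ns,rpm\n0.0,2990\n0.1,3010\n"
mapped = csv_dataplugin(raw)
print(mapped["groups"][0]["channels"][1]["unit"])  # -> rpm
```

Once every format lands in the same shape, one analysis routine can serve them all.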

Challenge 3: It takes too long to find the data you need to analyze.

The Aberdeen Master Data Management research study surveyed 122 companies and asked how long it takes to find the data they need to analyze. The answer: five hours per week! That’s just looking for the data—not analyzing it. From an engineering perspective, this is not that shocking to me. How many of us have faced what I consider to be the “blank VI syndrome” for data? How do you even begin to start analyzing your data?


A little-known technology that NI continues to invest in is DataFinder. DataFinder indexes any metadata included in the file, file name, or folder hierarchy of any file format. Again, this relies on a well-documented file, but by now I’m sure you have decided to use TDM for your next application.

Once the metadata has been indexed, you can perform queries—either text-based, like you would in your favorite search engine, or conditional queries like in a database—to find data in seconds. With this advanced querying, you can return results at a channel level to track trends in individual channels from multiple files over time.
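A rough sketch of the idea: once metadata lives in an index, both query styles become simple filters over that index, and results can come back at channel level. The index entries, property names, and helper functions below are all invented for illustration; the real DataFinder is far more capable:

```python
# Minimal sketch of DataFinder-style indexing: collect metadata from many
# files into one flat index, then answer either free-text or conditional
# queries without ever touching the raw data. (All entries are invented.)
index = [
    {"file": "run_001.tdms", "channel": "rpm",  "operator": "smith", "max": 3050},
    {"file": "run_002.tdms", "channel": "rpm",  "operator": "jones", "max": 2980},
    {"file": "run_002.tdms", "channel": "temp", "operator": "jones", "max": 88.5},
]

def text_query(index, term):
    """Search-engine style: match the term anywhere in the metadata."""
    return [e for e in index if any(term in str(v).lower() for v in e.values())]

def conditional_query(index, **conditions):
    """Database style: exact match on named properties."""
    return [e for e in index
            if all(e.get(k) == v for k, v in conditions.items())]

print(len(text_query(index, "jones")))                    # -> 2
print(conditional_query(index, channel="rpm")[0]["file"]) # -> run_001.tdms
```

Because every entry is a channel, the rpm query above tracks one signal across multiple files, which is how trend analysis over time falls out of the index.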

In addition, NI is continuing to innovate to make analyzing your data easier than ever. Imagine a future when, as soon as a file is saved, the DataFinder recognizes the data, indexes the metadata, and cleanses the raw data (for example, normalizing channel names so that rpm = speed = revs, or performing statistical calculations automatically). Then an analysis routine, written in your language of choice, acts on each data file and automatically archives the data or sends a report to your email or mobile device. This technology ensures that your data-driven decisions are being made with 100 percent of the data and not just 5 percent, as IDC estimates suggest today.
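The "cleansing" step mentioned above can be sketched as a simple alias table applied at index time, so later queries and analysis routines see consistent channel names. The alias mappings here are made-up examples:

```python
# Sketch of channel-name normalization (rpm = speed = revs): map every
# known alias to one canonical name the moment a file is indexed.
# The alias table is a hypothetical example.
ALIASES = {"speed": "rpm", "revs": "rpm", "engine_speed": "rpm"}

def normalize(channel_names):
    return [ALIASES.get(n.lower(), n.lower()) for n in channel_names]

print(normalize(["Speed", "revs", "Temp"]))  # -> ['rpm', 'rpm', 'temp']
```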

Stay tuned, everyone.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Phillips headshot.jpg

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.

If you were to ask me what the most critical element of LabVIEW is, I would have to say “data.” It’s so elementally important to LabVIEW that the term to explain the software’s execution semantics is “dataflow.” The data itself is the factor that defines the timing, flow, and output of any LabVIEW code. In fact, LabVIEW features many elements that elevate data beyond what a general-purpose programming language would.

Native data types in LabVIEW, such as the waveform data type, treat the nature of the data as important—and not just the data values that are acquired. This data type packages together the timing information associated with the data to provide context. LabVIEW includes over 950 built-in analysis and signal processing functions because they’re fundamental to getting to data insights, not just to getting to the raw data. LabVIEW also features a lesser-known ability to define DataPlugins, which are essentially file-format drivers that help you easily map your measurement data into known file types for sharing, streaming, or even report generation.
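The idea behind the waveform data type can be sketched in a few lines: the samples travel with their timing context (a start time and a sample interval), so any downstream analysis can recover the timestamp of every value. This is only an illustration, not LabVIEW's actual implementation, though the field names mirror its t0/dt/Y components:

```python
from dataclasses import dataclass
import datetime

# Sketch of the waveform idea: samples plus their timing context in one
# value, so timestamps are never lost as data flows through the diagram.
@dataclass
class Waveform:
    t0: datetime.datetime   # timestamp of the first sample
    dt: float               # seconds between samples
    Y: list                 # the sample values

    def timestamp_of(self, i):
        return self.t0 + datetime.timedelta(seconds=i * self.dt)

wf = Waveform(datetime.datetime(2016, 9, 1, 12, 0, 0), 0.001, [0.0, 0.5, 0.9])
print(wf.timestamp_of(2).isoformat())  # -> 2016-09-01T12:00:00.002000
```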

Anyone who knows my “LabVIEW story” knows that I was academically trained on The MathWorks, Inc. MATLAB® software—so much so that I actually did my engineering homework problems by writing .m files that rotated different input data sets through the same set of equations. When I was first introduced to LabVIEW, I struggled (of course, I did get it eventually, and now I am an avid LabVIEW evangelist who doesn’t use MATLAB® anymore). One of the areas that I struggled with was the lack of interaction with the data itself. I’ve heard this echoed by many users.

I talked about a similar challenge in last month’s post: being able to get to measurement data without the requirement of writing code. The same concept extends to analyzing and interacting with measurement data. To actually analyze data within LabVIEW, you need to lay down blocks of code, wire them together, and run the code. Of course, this is the use case that LabVIEW was optimized for, but, in many cases, users want to iteratively develop the analysis or even dive into it interactively. LabVIEW almost forces users to leave the environment for that type of interaction. At NI, we have complementary products like DIAdem that are designed around this use case.


We are aggressively investing in the ability to pull this interactive model into all of our software products to simplify your development process.

You can see some elements of this in the LabVIEW Communications System Design Suite featuring an interactive data viewing window and built-in analysis routines that can be run against the data sets.

The real beauty of this approach is being able to actually build the G code behind this interactive model to even further simplify the development of the automated code itself.

Capture (1).PNG

The amazing thing about a data-driven product is that everyone is “doing the same thing.” They’re trying to analyze the data and draw insights from it. The challenging thing with a data-driven product is that nearly every engineer is trying to accomplish these tasks by following different steps, applying different algorithms, or even having a different ending point for that insight. The more that we know about what you’re trying to do, the better we can get at designing a flow around that into the product.

So, I ask you now: What are you trying to do with your data? Where is LabVIEW falling short?

MATLAB® is a registered trademark of The MathWorks, Inc.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Shelley headshot.jpg

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.

Data is critical to engineers and central to LabVIEW. Most engineering and scientific applications are primarily concerned with turning real-world signals into meaningful information for the purposes of measurement and control. As a result, data from hardware drives the behavior of these systems—making a language built around the data itself a natural expression of how these systems should behave.

Graphical data flow is the primary way to describe the behavior of a LabVIEW system. LabVIEW graphical diagrams literally depict the flow of information between functions, which execute when they receive all required inputs and then produce output data that is passed to the next node in the dataflow path. Visual Basic, ANSI C++, Java, and many other traditional programming languages follow a control flow model of program execution. In control flow, the sequential order of program elements determines the execution order of a program, as opposed to the data itself. In LabVIEW, the flow of data rather than the sequential order of commands determines the execution order of block diagram elements. Consequently, LabVIEW developers can create block diagrams that have simultaneous operations.
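The firing rule described above (a node executes as soon as all of its inputs have arrived) can be sketched in a tiny scheduler. This is an illustration of the dataflow model only, not how LabVIEW actually executes diagrams; the node table is invented:

```python
# Toy dataflow scheduler: each node fires as soon as every input it needs
# has a value, regardless of the order the nodes were written down. Nodes
# with no mutual dependency could run simultaneously, which is exactly the
# property LabVIEW block diagrams expose.
def run_dataflow(nodes, values):
    pending = dict(nodes)  # name -> (input names, function)
    while pending:
        ready = [n for n, (ins, _) in pending.items()
                 if all(i in values for i in ins)]
        if not ready:
            raise RuntimeError("cycle or missing input")
        for name in ready:                       # these could run in parallel
            ins, fn = pending.pop(name)
            values[name] = fn(*[values[i] for i in ins])
    return values

nodes = {
    "scale": (("sum",),   lambda s: s * 10),     # listed first, fires second
    "sum":   (("a", "b"), lambda a, b: a + b),   # fires first: inputs ready
}
result = run_dataflow(nodes, {"a": 2, "b": 3})
print(result["scale"])  # -> 50
```

Note that "scale" is listed before "sum" yet runs after it: the data, not the textual order, determines execution.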

Dan Woods, Forbes contributor, presents a compelling case that our need and desire for data is only growing with the Internet of Things. According to analyst reports, the billions of sensors and millions of connected devices will yield almost $2 trillion by 2020. However, says Woods, “…the path to creating that value is not what most people think it is. A problem that has not disappeared is what to do with all of the data generated by the sensors. The challenge is not just the volume of data, but the fact that the modern world of data analysis is something that uses an ensemble of technologies and each will require its own slice of the data.”

This type of insight is exactly why we on the software leadership team at NI have significantly increased our investment in data exploration, discovery, and engineering analytics in our platform. You can see results of this increased investment in several key areas including, but not limited to:

Insight CM.png

InsightCM Enterprise—a deployment-ready software solution with tightly integrated, flexible hardware options for online condition monitoring applications. The suite can acquire and analyze measurements, generate alarms, help maintenance specialists to visualize and manage data and results, and simplify remote management for large deployments of CompactRIO-based monitoring systems. It provides insight into the health of critical rotating machinery and auxiliary rotating equipment to optimize machine performance, maximize uptime, reduce maintenance costs, and increase safety.


DIAdem—a single software tool that you can use to quickly locate, load, visualize, analyze, and report measurement data collected during data acquisition and/or generated during simulations. It is designed to meet the demands of today’s testing environments, which require you to quickly access, process, and report on large volumes of scattered data in multiple custom formats to make informed decisions. DIAdem is a component of the NI Technical Data Management (TDM) solution.


DIAdem DataFinder—locate data quickly and intuitively using NI My DataFinder. Each version of DIAdem software includes a self-configuring data management system that provides advanced search and sophisticated data mining functionality right out of the box—you do not need any additional IT support to set up or maintain DIAdem.

Some of the data interactions you find in these other products make sense to bring into more NI software products, including LabVIEW—where data insights are going to be more valuable and more necessary. Stay tuned for even more approachable and interactive data analysis capabilities. In the meantime, share your engineering data needs with me here. 

What do you need from your data? Let us know in the comments.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Phillips headshot.jpg

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.

This past Saturday, I was watching the Disney Junior television series Jake and the Never Land Pirates with my daughter. After 473 straight episodes, my mind started to wander toward unfinished business at work. As my subconscious carried my mind back to my computer, I heard Captain Hook yell at the bumbling Mr. Smee “time is everything!” after the first mate spoiled yet another impossibly ridiculous attempt to trap the antagonist pirates. That got me thinking about a UX review we were doing at work related to “time to first measurement” design flows in LabVIEW.

The concept of time plays many roles within LabVIEW, but perhaps the most important to the highest volume of LabVIEW users is the time it takes to get to data. A few weeks ago, I visited a customer who, despite being an avid and very experienced programmer, asked to see the ability to access data without writing code. LabVIEW has proven to be an amazing tool for developing code to automate the acquisition, analysis, and presentation of measurement data. However, there are three key areas that we’re focusing on to improve LabVIEW as a general data acquisition tool.

No Requirement to Program


Amongst the purists, there’s an age-old debate over whether LabVIEW is a tool or a programming language. Make no mistake: LabVIEW is a tool that was specifically designed to give the engineering community the ability to interface with the world around them through PC-based hardware. At the core of LabVIEW is G, a graphical dataflow language that uses abstracted blocks to represent functional logic and wires to transport data between those blocks. Although I believe with every fiber of my being that G (inside of LabVIEW) is the best and most efficient way to develop automated measurement applications, there are many times that you just want to get the data once and don’t need to acquire it over and over. We’re working on some unique UX flows within the product that let you move from seeing the hardware to getting the data without needing to write code on the block diagram.

Capture (1).PNG

Improved In-Product Configuration


I touched on this a bit in a previous article, but you should be able to discover your hardware inside of LabVIEW without a secondary application like Measurement & Automation Explorer. This is a key tenet as we prioritize our improvements to LabVIEW. From the discovery of hardware to the deployment of code to it, your one-stop shop for that experience is being designed into LabVIEW.

Capture (1).PNG

Improved Driver Discoverability

The third key area is driver installation. In the modern era, we are all used to having our devices either find their drivers automatically or come preloaded with them. As I envision a perfect future, I see a world where LabVIEW, as a “hardware-aware” tool, can automatically detect the driver you need, determine the version that is compatible with other hardware interfaces in the same system, find that version, and install it in real time.

Capture (1).PNG

Of course, the concept of time extends far beyond this instantiation in LabVIEW. LabVIEW is the only measurement software with a waveform data type that propagates the concept of time throughout a compile chain. LabVIEW is the only measurement software with a Timed Loop—a logic flow that is controlled by either a hardware clock, rising or falling edges of a digital signal, or even the predetermined clock rates of an OS itself. From saving you time in your day-to-day job to exposing the elusive concept of time in a highly synchronized and distributed application, LabVIEW is designed from the ground up (and being improved inside and out) to continue down this path.

We’re not forgetting about a mortal enemy by the name of Tick-Tock the Croc. Tick-tock, tick-tock.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Time: a precious commodity. It’s even more precious when talking about your own time.

Wikipedia, as usual, provides an enlightening perspective: “Time has long been a major subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars. Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.” NI is no different: we heavily incorporate time into our measurement and control systems.

A Brief History of Time is a favorite among the NI team in part due to our intimate history with time—both in our own engineering and within our products for other engineers. We were among the first in test and measurement to effectively implement time into products. For decades, LabVIEW has incorporated time in native G programming concepts. We first introduced real-time system integration (RTSI) for measurement timing and synchronization in our PCI DAQ products, wherein LabVIEW and the NI-DAQmx driver route RTSI signals automatically. Additionally, we are one of the only major test and measurement vendors to offer a PXI timing and synchronization product line (for PXI, RTSI is built into the backplane). Through shared timing and synchronization, you can vastly improve the accuracy of measurements, apply advanced triggering schemes, or synchronize multiple devices to act as one for extremely high-channel-count applications.

Internally, we refer to “time” as a “first-class citizen” of our platform. Engineers benefit from this “first class-ness” every day. You see this elevated status in applications like the world’s first real-time optical coherence tomography (OCT) imaging system to enable early cancer detection. This application uses LabVIEW and PXI for OCT and combines 320 simultaneous channels at 10 MS/s while LabVIEW performs >700,000 FFTs/sec. Time is also a major player in controlling the world’s largest telescope. With 1.87 teraflops of processing, 64 compute nodes at 512 cores, and 14,000 samples every 2 ms, time matters. The Large Hadron Collider (LHC) at CERN is also taking full advantage of NI’s complex timing capability. The LHC is a complex of interconnected circular and linear accelerators. All devices that serve the accelerators (magnets, kickers, and more) must be precisely synchronized and controlled by a central control system. CERN adopted LabVIEW FPGA to serve as the timekeeper for more than 100 collimators. Inside the collimators, LabVIEW performs collimator control for approximately 600 stepper motors with millisecond synchronization over the 27 km of the LHC.

However, this blog series, as I hope you know by now, is about where we are investing in our products. With new technologies, we know we can significantly improve both your time to first measurement AND your mastery of time for complex, highly distributed, synchronized, heterogeneous systems.

Time to First Measurement

Our insatiable need for instant gratification translates directly to our measurement and control systems. When you plug in your DAQ device or myRIO, you want to see data immediately to not only know that everything is working properly, but to gather information and insights as quickly as possible. We are currently exploring a scalable use-model of “interactive panels” to ensure you have instant access to measurements, as well as approachable and interactive analysis in a manner that integrates seamlessly into your G experience when you need to automate.


Figure 1. Interactive panels will provide instant data access, visualization, and options for exploration.

Time-Sensitive Networks

Building on our leadership in measurement and control, we are participating within the IEEE 802 standards group and the AVnu Alliance to help define the Time-Sensitive Networking (TSN) standard. TSN will be part of the next revision to standard Ethernet technology and will provide a standard mechanism for network-based clock synchronization, high reliability using bandwidth reservation and path redundancy, and bounded latency enabling network-based, closed-loop control. This evolution of the Ethernet standard is essential for LabVIEW-based system design tools and will enable creation of distributed, coordinated measurement and control systems that will be the heart of the Industrial Internet of Things.


Figure 2. Distributed control system paradigm shift, 1974 to 2016 and beyond.

You—the engineers and scientists of the world—are solving the grand challenges, and you need time. You need time so that your measurements and control algorithms are correct and synched. You need time so that you have an accurate reference for analytics, insights, and correlations. Most importantly, though, you just plain need time. You need YOUR time to tackle those grand challenges in the most efficient and effective manner possible. Oh, and getting some of your time back means you can enjoy a little more of that thing called life; but that’s in another blog. 

Until next time…

Shelley Gretlein Headshot

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

System Configuration

Home, Business, and Engineering | The Gauntlet Accepted

Regardless of whether your background is in hardware, software, or no-ware at all, systems management and configuration—that line integrating hardware and software into systems—is always a pain point and major productivity drag. In the business world, systems configuration tends to fall under the purview of the oft-maligned IT organization. In the home world, systems configuration falls to the most technical family member. We’ve all received the dreaded “home help desk” call from friends and family alike. Within the engineering domain, I am ashamed to admit we are well behind IT in terms of formalizing the configuration, provisioning, management, and maintenance of integrated hardware and software systems. It is thus our natural imperative as competitive engineers to address the unnatural gap between IT and engineering. 

All kidding aside, IT organizations have in their toolbox a bevy of excellent and mature OSs, including Linux, Mac, and Windows, plus a large range of productivity tools and frameworks for managing the deployment and configuration of devices such as servers, PCs, laptops, tablets, printers, phones, and a vast array of peripherals. Of course, life took a turn for the complex when mobile and cloud trends fractured the Wintel Duopoly, and the IT industry has scrambled to keep pace with both the era of “bring your own device” and the cost-saving cloud. In comparison with IT, scientists and engineers have historically lacked the equivalent toolbox for managing our systems. Specifically, we lack an “Engineering OS” and IT-scale systems management frameworks.

The Engineering Operating System | The Foundation

OSs at their core provide key abstractions to computing hardware and standard APIs for driver and application software. Through its standardization of driver APIs across a wide array of devices, one of the most basic value propositions Microsoft provided in its early days was the near certainty that your mouse, display, keyboard, and printer would all integrate and function well. While the bar for general purpose OSs has obviously evolved a bit for “standard” peripherals, the sheer breadth and irreducible complexity of purpose-built engineering systems has not kept pace. NI has a long history in providing “key abstraction to computing hardware” and “standard APIs for driver and application software.” 


Over the last few years, we have redoubled our efforts to standardize on an open set of APIs for device and target discovery, visualization, exploration, configuration, and state management. It is our goal to provide these first-class APIs for our own hardware as well as to integrate broader ecosystems of hardware. We also recognize the value of mature IT standards and of general-purpose OSs. Combining these elements, we have three key philosophies for our recent efforts, which you will see in the marketplace over the next few years:

  1. APIs First | Build public APIs to all our services then layer our UIs on the same APIs that we want our community to use
  2. Leverage standards
  3. Be IT-friendly

Hardware Aware and System Visualization | Innovate on the Foundation


With our engineering OS providing us the core APIs for managing many different kinds of engineering devices, we foresee a great opportunity to innovate on the state of the art in systems management relative to both IT and engineering. LabVIEW has always been a hardware-aware design environment, but we are looking to greatly increase the degree of hardware and software integration as well as radically improve the scale of devices and systems that we can manage. Over the last few years, we have directly integrated the hardware state into our core compiler. That is, just as LabVIEW has a rigorous syntax checker for our programming languages, we will also provide a syntax checker for the hardware that validates not only the basic hardware settings, but also validates that those settings are correct for a given software application. In essence, we will validate the software syntax, the hardware syntax, and the hardware/software syntax using the same compiler techniques we have evolved over the last 20 years. By doing this, we provide you with a unified design environment speeding your development of systems and accelerating your debug and diagnostic cycles at all phases of design, development, deployment, and maintenance.
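As a sketch of what "hardware syntax" checking means, imagine validating a requested configuration against both the device's capabilities and the application's needs, the same way a compiler type-checks a wire. Every device capability and rule below is invented for illustration; this is not an NI API:

```python
# Toy "hardware syntax" checker: validate not only the device settings
# themselves but also that they are compatible with what the software asks
# for. All capabilities and rules here are hypothetical.
DEVICE = {"max_sample_rate": 100_000, "channels": 8, "coupling": {"AC", "DC"}}

def check_config(requested):
    errors = []
    if requested["sample_rate"] > DEVICE["max_sample_rate"]:
        errors.append("sample rate exceeds device maximum")
    if requested["channel"] >= DEVICE["channels"]:
        errors.append("channel index out of range")
    if requested["coupling"] not in DEVICE["coupling"]:
        errors.append("unsupported coupling mode")
    return errors

print(check_config({"sample_rate": 200_000, "channel": 2, "coupling": "DC"}))
# -> ['sample rate exceeds device maximum']
```

The value is catching such mismatches at edit time, as the paragraph above describes, rather than at run time in the field.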


LV Comm System Designer_FIFO DataLink ss.png

In addition to the seamless hardware state management, we are looking to revolutionize how engineers design large systems of many devices and manage the interconnections of all system elements, ranging from sensors and actuators to network connections between multiple embedded devices. For our time-critical measurement and control systems, we need to understand the tradeoffs of different networks, buses, and I/O choices. Sorting through the implications of these design space tradeoffs can be complex and often involves tedious review of web sites, product manuals, and hardware specifications. We believe we have created a world-class visual canvas that elegantly, quickly, and fully describes your system topologies the way you would like to see them and at the same time is the input language to our hardware-aware compiler. From this system design canvas, you can rapidly explore hardware capabilities and make your key design-time tradeoffs, optimizing size, bandwidth, latency, compute power, and price.

Very practically, we have found our system design canvas serves as a living specification document for a system and provides a simple and natural place to integrate all of the hardware and system-related documentation ranging from performance specifications to pin outs. All of this can be integrated into different types of generated documentation to describe your system to developers, system administrators, technicians, or even your customers.



As application needs grow in size and complexity, our systems management tools must also scale to deal with large systems. In the 80s and early 90s, a large-scale NI system might include one PC, an SCXI chassis, and a few hundred DAQ channels. Today, we have some installations with thousands of CompactRIO devices, tens of thousands of I/O channels, and their associated sensors and actuators. We are radically innovating in how we leverage large-scale servers and the cloud to administer and manage so many devices. Within our engineering OS, we are building out IT-friendly security, logging, system health, diagnostic, and administration services. At this level of scale, the often separate worlds of IT systems management and engineering systems management merge. Now that they do, we plan to be excellent and secure corporate and global citizens, connecting our engineering ecosystem to the business ecosystem.


Because the LabVIEW design environment leverages our engineering OS, as we scale it out to servers and the cloud, your applications and systems get the same scaling benefit. We are maniacally focused on ensuring that the LabVIEW editor provides you the right experience for productively managing these extremely large-scale systems and are ensuring that nearly EVERY editing operation supports “batch edits” on a near-limitless number of VIs, devices, targets, or any element of your system that is configurable.

In Conclusion


We believe engineering systems management is ready to transition from a long standing pain point to a key accelerator in your productivity. We believe the systematic integration of IT-friendly standards, general-purpose OSs, and the right engineering OS sets a foundation for managing all of the elements of your system. Further, we believe key innovations in hardware-aware design environments, hardware design space exploration, system visualization, and massive scalability will radically alter our collective ability to solve the most complex and demanding engineering problems of our time.

Fuller headshot.jpg

Today’s Featured Author

David Fuller has nearly 20 years of experience in software engineering and is currently NI’s vice president of application and embedded software. You can’t follow him on Twitter because he’s a software engineer.


The vast majority of LabVIEW applications, particularly in the broad test and measurement market, start with hardware configuration. Whether it’s installing software components in the wrong order, hitting default device name errors, or struggling with device discovery, the initial process of configuring the hardware is often an intrusive and unnecessary barrier to starting the application.

Among the many market research studies, focus groups, and customer feedback engagements that our group drives, strong hardware integration continues to be one of the top two benefits to LabVIEW users (along with the productivity of graphical programming). I can talk—or I guess write—for days about the benefit of our driver APIs, consistency in programming paradigms across hardware families, and the sheer number of devices that LabVIEW can communicate with, but I’d rather focus on where we can improve.

One of the aspects of my role in Product Marketing that I immensely enjoy is teaching our sales engineers how to demo LabVIEW. Every one of these engineers is an expert in the product and can teach it all day long with their eyes closed, but demoing a product is a completely different skill than knowing it or even teaching it. It’s maddening to see how many of these demos actually start by opening Measurement & Automation Explorer. Gasp! As great as LabVIEW is at hardware integration, it’s actually quite lacking in the system configuration aspect of system design, so let’s focus there.

There are three key areas of system design where we’re innovating:

Initial Device Discovery

Some of the most consistent feedback I hear on LabVIEW typically sounds something like “I have to write code to do anything.” Even minuscule tasks, such as verifying a hardware target is discovered, require wires to be connected and the run button pressed. My response to this feedback initially was a nice, professional way of saying “Duh, it’s a programming language.” But, let’s be disciplined about separating how you accomplish a task in G and how you accomplish a task in LabVIEW. For software that carries the torch for “world-class hardware integration,” you should expect to discover your connected devices without writing code. Sure, doing so in G requires coding, but the LabVIEW environment should empower this simple task.


Real-Time Feedback on Signal Connectivity

Human physiology is an amazing, beautiful thing. Our brain chemically rewards us with dopamine—a neurotransmitter central to motivation and reward—for things like being right, completing a task, or working out. This is one of the fundamental reasons that engineers derive emotional satisfaction from LabVIEW. Since LabVIEW is constantly compiling code, instead of requiring a large explicit compile step at the end, you’re continually getting these dopamine hits while you’re building out the individual components of your VI.

We should be driving this instant gratification within the configuration of hardware. Beyond just knowing the hardware is connected and recognized, you should also be able to validate its intended functionality quickly and easily—and without that pesky requirement of writing code.

To the right is a snippet of a screenshot from a super-secret prototype version of LabVIEW. This shows some of the concepts we’re driving towards, like integrating some “liveness” into the environment. What you see is a view within the environment itself confirming the signal input with a preview visualization.


System Visualization

Particularly for a graphical environment housing a graphical language, we can surely find a better way than a vertically organized tree to manage and display hardware. Imagine the world where you plug in a device (left) and open the software to see a physical representation that reinforces not only what the hardware looks like, but visualizes the connectivity and functional organization of the device (right).

Device_1.PNG Untitled.png

This level of integration is made possible by NI’s platform approach, and the natural synergy that exists between our hardware and software. From plug-in hardware to deployed modular platforms, this level of insight would drastically simplify system visualization and configuration.

Now, if only this could display the next level of connections for sensors or DUTs.


Phillips headshot.jpg

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.


Being in product marketing for much of my career, I’m often insulted by the software tools I run across that are all style and no substance. They look good, but they don’t do a whole lot. As a seasoned product manager as well, I’m just as often disappointed with the products that are all substance with zero style. This category of products is functionally powerful yet embarrassing from a usability standpoint.

The most rewarding experiences of course are when we get it right: powerful products with intuitive workflows—style AND substance. This simple-to-understand, but difficult-to-execute scenario is exactly what I challenge my product managers to specify and my product marketers to demand. 

This perfect balance of style and substance isn’t always worth the extra effort, of course—if you need to design a ‘file’ menu, there’s no need to innovate on the power or ease-of-use vector; just create your File>>Open options and move on. However, the software areas that stand to offer significant productivity to the applications and domain experts that need it absolutely deserve the research and rigor to get it right. I believe the current area in engineering that needs this level of attention is system design.

From a Wikipedia standpoint, “system design” is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. In our world of engineering and science, this overlaps with systems engineering and system architecture roles in many companies. But the tools and technologies here are far from where they need to be as we look into the not-so-distant future. The Internet of Things will add system design complexities like we’ve never seen—from security systems to distributed nodes, data centers to timing engines, co-simulation to deployed prototypes—we need a system design tool and view to manage heterogeneous, distributed, intelligent systems. And to be blunt, what you’ve got today won’t cut it.

What you’ve got from NI won’t cut it. What you’ve got from the math vendors won’t cut it. What you’ve got from the box instrumentation vendors won’t cut it.

Today, NI asks you to leave your development environment to discover, configure, and manage your hardware systems. Math vendors take a simulation-only or simulation-centric view that doesn’t make sense for real-world prototyping or deployments. Test and measurement vendors take a narrow approach to system design, perhaps only focusing on the physical layer or wireless systems with no inclusion of necessary implementation flows or supporting I/O. Industrial vendors appear to be closer to providing a visual representation of your hardware system only to let you down once you try to act on that information through compilation or application logic. Each option today fails you in style, substance, or sadly sometimes both.


Your systems will soon require more. You deserve more. Fortunately, we understand these needs and have a rich roadmap to address the substantial gaps we see today. 

LV Comm System Designer_FIFO DataLink ss.png

The future of system design delivers style and substance…stay tuned.

Shelley headshot.jpg

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Jeff_LV1.0_Block Diagram_ss.jpg

As a child, I was obsessed with playing games on the TI-99/4A home computer my grandfather gave me when I was 12 years old. I typed in my own games from program listings in computer magazines, and one of my most joyful childhood memories was the day I could afford a cassette tape to persist my programs and play my games without typing them in over and over. In many ways, those first years captured a pattern I now live out as a professional computer scientist—namely, figuring out how to optimize software against a bunch of interesting hardware capabilities.

The LabVIEW team charter is to maximize your productivity as you design, build, and deploy measurement and control systems. To do that, much as in my childhood, we seek to maximize how the software uses available computing resources so you can design, build, and deploy quickly and easily without having to constantly worry over the details of the underlying hardware.

LabVIEW must productively abstract hardware capability without obstructing system performance. 

LabVIEW was a pioneering programming language that cleanly abstracted critical computer science details such as explicit memory management, type management, and concurrent program execution. I believe there exists a natural tension between level of abstraction and performance. The ideal abstraction is infinitely productive with no performance penalty. Sadly, in the reality of hardware and software systems, higher-level abstractions often come with a commensurate performance penalty. However, as healthy abstractions mature, productivity goes up and performance penalties go down. Natural tension also lies between NEW hardware capabilities and software adoption. It’s fair to say that most software languages poorly abstract multicore hardware, never mind the more advanced GPUs and FPGAs, never mind elastic cloud fabrics. Software capability constantly lags behind hardware capability, especially when you add higher-level software abstractions into the mix.

With LabVIEW, we feel we have an excellent, mature abstraction for scheduling concurrent clumps of code while cleanly abstracting the data movement between them. We can compile and execute these independent clumps and handle the scheduling and data movement for you. Back when the Father of LabVIEW, Jeff Kodosky, and the early LabVIEW team were developing the algorithms for automatically “clumping” code together for scheduling, we chose a simple, aptly descriptive name for that activity: “the Clumper.” The Clumper persists to this day and is one of the central algorithms for identifying the optimal code execution schedule. Now, as the hardware platforms LabVIEW supports have evolved, so too has the spirit of the Clumper. From the early days, LabVIEW was ideally suited to map code to multicore processors, but I would say we really hit our stride when we started targeting FPGAs from Xilinx. While mainstream computing and the EDA/FPGA spaces use different terms to describe the process, the computer science behind characterizing code execution, clumping it, and scheduling it out is the same regardless of hardware target. However, the characterization process that determines the ultimate performance of the executing code is unique to each hardware platform.
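The Clumper itself is NI’s own technology, but the core idea of partitioning a dataflow diagram into independently schedulable groups can be sketched in a few lines. The sketch below is a hypothetical illustration in Python, not LabVIEW’s actual algorithm: it levels a dependency graph so that every “clump” depends only on earlier clumps, leaving the nodes within a clump free to execute concurrently.

```python
def clump_schedule(deps):
    """Group dataflow nodes into 'clumps' that can run concurrently.

    deps maps each node to the list of nodes whose outputs it consumes.
    Returns a list of clumps; every node in a clump depends only on
    nodes in earlier clumps, so clumps execute in order while nodes
    inside a clump are free to run in parallel.
    """
    remaining = {n: set(d) for n, d in deps.items()}
    schedule = []
    while remaining:
        # Nodes whose dependencies are all satisfied form the next clump.
        ready = {n for n, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cycle detected in dataflow graph")
        schedule.append(sorted(ready))
        for n in ready:
            del remaining[n]
        for d in remaining.values():
            d -= ready
    return schedule

# A small diagram: two acquisitions feed an analysis node, which feeds a display.
diagram = {"acq_a": [], "acq_b": [], "analyze": ["acq_a", "acq_b"], "display": ["analyze"]}
print(clump_schedule(diagram))  # [['acq_a', 'acq_b'], ['analyze'], ['display']]
```

The two acquisition nodes land in the same clump because nothing connects them, which is exactly the property a dataflow scheduler exploits to run them on separate cores.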

To maximize your productivity with LabVIEW and still deliver execution performance AND minimize FPGA resource utilization, we have invested heavily in creating a robust system to characterize LabVIEW code on a variety of Xilinx FPGAs. Our system characterizes “clumps” of LabVIEW code as we compile it and then runs it through Xilinx’s Vivado flow. To give you an idea of scale, we have a large execution farm that compiles designs for Xilinx FPGAs, such as the Kintex-7 and Zynq SoC. The farm does HUNDREDS of THOUSANDS of compiles to characterize the timing and resource usage of core components. We also have hundreds of sample program inputs that we run through the clumping, code generation, and synthesis process. We instrument that process to track design characteristics of merit ranging from execution speed to utilization of key FPGA resources such as LUTs, DSP48s, and so on. From that, we can create a massive optimization search space that we use as inputs into our FPGA compiler. Recently, one of our key IP and compiler developers created an LDPC code design, a state-of-the-art error-correcting code that outperforms the traditional Viterbi algorithm. The search space for finding a great design is 2.3 × 10^32 possible implementations. Using a clever and patented decision space pruning technique, we found a solution that achieved 24 Gbps throughput in only three minutes. This technique, combined with Xilinx’s massive improvements in the speed of the Vivado flow versus ISE, will greatly increase your productivity while delivering highly performant results. While there is always room to improve a compiler, we are proud of the innovations we have made with LabVIEW FPGA. You can check out the results via the LabVIEW Communications System Design Suite, the first product to include our newest technology for FPGAs (along with a host of other innovations).
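NI’s pruning technique is patented and its details aren’t public, so the sketch below is only a generic illustration of the idea: a branch-and-bound search that discards whole regions of a design space once a bound proves they cannot beat the best implementation found so far. The stage options, latency and LUT costs, and resource budget are all invented for the example.

```python
import itertools

# Each pipeline stage offers candidate implementations: (latency, lut_cost).
STAGES = [
    [(4, 10), (2, 40), (1, 90)],
    [(6, 15), (3, 50)],
    [(5, 20), (2, 60), (1, 120)],
]
LUT_BUDGET = 150

def best_exhaustive():
    """Brute force: evaluate every combination (the 'characterized search space')."""
    best = None
    for combo in itertools.product(*STAGES):
        luts = sum(c[1] for c in combo)
        lat = sum(c[0] for c in combo)
        if luts <= LUT_BUDGET and (best is None or lat < best[0]):
            best = (lat, combo)
    return best

def best_pruned():
    """Branch-and-bound: prune branches that provably cannot beat the incumbent."""
    # Lower bound on latency still achievable from stage i onward.
    min_rest = [sum(min(o[0] for o in s) for s in STAGES[i:]) for i in range(len(STAGES) + 1)]
    best = [None]

    def search(i, lat, luts, picked):
        if luts > LUT_BUDGET:
            return  # infeasible branch: prune
        if best[0] is not None and lat + min_rest[i] >= best[0][0]:
            return  # bound: this subtree cannot beat the incumbent
        if i == len(STAGES):
            best[0] = (lat, tuple(picked))
            return
        for opt in STAGES[i]:
            search(i + 1, lat + opt[0], luts + opt[1], picked + [opt])

    search(0, 0, 0, [])
    return best[0]

print(best_exhaustive()[0] == best_pruned()[0])  # True: pruning preserves the optimum
```

The point of the bound is that pruning never sacrifices optimality; it only skips work, which is how a 10^32-point space can be searched in minutes instead of millennia.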

I can’t say the innovations we have made in the FPGA compiler give me quite the same child-like joy that I had when I got my first disk drive, but it is pretty close. Stay tuned as we strive to find the “Genius of the And,” and deliver you the most productive abstractions AND the best executing results!

Review the features of LabVIEW Communications today.

Fuller headshot.jpg

Today’s Featured Author

David Fuller has nearly 20 years’ experience in software engineering and is currently NI’s vice president of application and embedded software. You can’t follow him on Twitter because he’s a software engineer.


Today’s post is part of a series exploring areas of focus and innovation for NI software.

Phillips headshot.jpg

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.

Jeff_LV1.0_Block Diagram_ss.jpg

An internal goal at NI is to be a trusted advisor for engineers. In many cases, this perspective has focused on translating emerging technology trends into meaningful know-how for you. Microsoft’s switch from XP to Vista, multicore processing, and now the Internet of Things are examples where NI has partnered with industry leaders, including Intel and Microsoft, to ensure that you understand the impact these trends had, or will have, on you, regardless of our products’ involvement.

Another example is field-programmable gate array (FPGA) technology, which is quickly becoming broadly relevant for both test and control applications. The FPGA silicon provides a rare combination of high-performance throughput and ultimate reliability, with a purely digital logic implementation that eliminates the need for an embedded OS. The main challenge of the FPGA, however, is how incredibly difficult it is to program. Currently, programming an FPGA requires a low-level language such as VHDL or expensive and unreliable code conversion utilities.

The LabVIEW graphical dataflow paradigm maps perfectly to the FPGA architecture. The primary attraction that LabVIEW provides for FPGA programming is abstracting a low-level programming paradigm into a high-level graphical representation. Even with that abstraction, LabVIEW users are telling us that:

  1. Mapping algorithms to the FPGA requires many iterations of manual code rewrites
  2. Compile times are long
  3. Performant applications still require an intimate knowledge of the FPGA architecture

As I’ve mentioned before, there are several investments we are making with LabVIEW (previously I’ve discussed UI and UX) to bring back the level of innovation that you expect. The newly announced LabVIEW Communications System Design Suite brings to market a redesigned FPGA compiler that addresses all of the issues above. This new FPGA compiler is designed specifically for algorithm designers looking to map theoretical floating-point math to the FPGA. However, there are several components that you can apply more broadly, such as these three key innovations:

  1. A data-driven float-to-fixed compiler
  2. The multirate diagram, a new dataflow-based model of computation that enables the synchronous execution of code at varying rates
  3. A built-in profiler that provides compile recommendations based on input parameters, such as desired throughput
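The float-to-fixed compiler’s inner workings aren’t public, but the “data-driven” idea can be illustrated with a toy sketch: run representative data through the design, observe the dynamic range, and pick a fixed-point format that covers it. Everything below (the function names, the 16-bit word length) is a hypothetical simplification, not the actual LabVIEW Communications implementation.

```python
def choose_fixed_format(samples, word_length=16):
    """Pick a signed fixed-point split (int_bits, frac_bits) from observed data."""
    peak = max(abs(s) for s in samples)
    int_bits = 1  # start with just the sign bit
    while (1 << (int_bits - 1)) <= peak:  # grow until the range covers the peak
        int_bits += 1
    frac_bits = word_length - int_bits
    return int_bits, frac_bits

def quantize(x, frac_bits):
    """Round a float to the nearest representable fixed-point value."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Representative input data drives the format choice.
samples = [0.7071, -1.4142, 3.1416, -2.7183]
int_bits, frac_bits = choose_fixed_format(samples)
print(int_bits, frac_bits)           # 3 13
err = max(abs(quantize(s, frac_bits) - s) for s in samples)
print(err <= 2 ** -(frac_bits + 1))  # True: error within half an LSB
```

The appeal of the data-driven approach is that the fraction bits are maximized automatically: feed in different sample data and the integer/fraction split adapts, rather than being hand-tuned per signal.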

This redesigned FPGA compiler is indicative of the level of innovation going into LabVIEW. Although LabVIEW Communications is designed specifically for wireless prototyping, the majority of the innovation can be broadened to a much larger portion of the existing LabVIEW user base.

Review the features of LabVIEW Communications today.


Shelley Intro Image.JPG

I’m going to contradict myself in this month’s blog post. I don’t use clichés much; however, I (usually) find them to be accurate and descriptive. (Note: because most clichés have become trite or irritating, we often forget that their origins were rooted in truth.) Let me take you back to when I started at NI.

In the late ‘90s, reprogrammable silicon was considered mainstream across consumer, automotive, and industrial applications. Based on the critical invention of the XC2064 FPGA by Freeman and Vonderschmitt, the FPGA was becoming a coveted technology for its compute power, field upgradability, and performance. However, the tools to program the FPGA kept domain experts out, making the technology seem too good to be true. Or so I thought.

In 2001, I began working with an in-development product we had demoed at NIWeek a few years earlier but hadn’t released yet. This not-so-secret project, code-named “RVI” for reconfigurable virtual instrumentation, was a graphical design approach to programming an FPGA. Having a computer science and math background, I find the abstract and software-centric more comfortable and familiar than meticulous hardware design. So the idea that you (or even a CS person like me) could abstract away a ton of silicon details and program the hardware with a productive tool like LabVIEW (rather than an HDL) seemed impossible.

This is where the contradiction begins. It wasn’t too good to be true; the cliché was wrong. It was good AND it was true. Luckily, I could rely on another well-known phrase used at NI to describe the innovation taking place: “the genius of the AND” inspired by author Jim Collins. With productive, graphical programming; system abstraction; AND hardware design for dedicated determinism including 25 ns I/O response, protocol customization, and rapid prototyping, LabVIEW FPGA breaks the cliché.

I’m not the only geek who gets excited about this capability. Stijn Schacht of T&M Solutions took advantage of the control accuracy of an FPGA to lift 20-metric-ton unbalanced trays of uncured concrete more than 6 meters while maintaining a strict accuracy of two millimeters. Because he used LabVIEW to get that precision from the FPGA, his team developed an application in only two months and was able to reuse the modular code for their next project.

Kurt Osborne at Ford Motor Company is a believer as well. Ford used LabVIEW FPGA to design and implement a real-time embedded control system for an automotive fuel cell system.


The LabVIEW Communications environment enables an entire design team to map an idea from algorithm to FPGA using a single high-level representation.

So what’s next? I encourage you to explore the latest cliché contradiction that takes FPGA design to the next level – LabVIEW Communications System Design Suite.

LabVIEW Communications is a complete design flow (with bundled software defined radio hardware) for wireless communications algorithms. This suite includes everything from an integrated FPGA flow, to an HLS compiler, to a new canvas (Multirate Diagram) for streaming algorithms, and an innovative way to explore your hardware system with the NI System Designer. The genius of the AND lives on in LabVIEW Communications.

Explore the latest cliché contradiction today at

Shelley headshot.jpg

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.


The hazard posed by radioactive nuclear waste diminishes only slowly over time, so to protect public health and the environment it’s important to construct containers that won’t fall apart as they age. In order to develop a measurement system to provide insight on long-term nuclear storage, ProtoRhino researched copper cracking mechanisms using FlexRIO and LabVIEW.


ProtoRhino used FlexRIO and LabVIEW to measure the mechanics of copper cracking under stress on a millisecond timescale. The nuclear waste containers must last 10,000 years and retain their integrity under geological stress and corrosion. ProtoRhino used LabVIEW to program the FPGA, and FlexRIO hardware for image processing and for collecting the data generated from testing the copper’s deformation under stress. By monitoring the cracks in the copper, ProtoRhino may soon be able to design a safe nuclear waste container, which is key to environmental protection and pollution prevention.

>> Read the full case study.


In most LabVIEW applications, the front panel of a subVI is displayed to retrieve inputs from the user or simply to display information. By following these steps, brought to you by Darren Nattinger, you can make your subVI panels easy to develop and debug.

1. Handle the opening and closing of the subVI panel programmatically. Many LabVIEW users choose the SubVI Node Setup right-click option to display the subVI panel. But the downside to this option is that your subVI won’t run any initialization code before displaying its panel. To bypass this problem, use the FP.Open method to display the subVI panel once you have initialized it, and use the FP.Close method to close it once the user dismisses the subVI dialog.


2. Configure the VI Properties of the subVI properly. Set the Window Appearance in VI Properties to “Custom” and customize your settings to match the image below. As you can see, there are various setting changes; in particular, the “Show Abort Button” option is disabled to prevent users from aborting your entire application by pressing Ctrl-. while your subVI dialog is active.


3. Call your modal dialog subVI dynamically. Whenever you run a VI, all of its subVIs are reserved for running, which means none of the subVIs are in edit mode. Any reserved subVI that has a modal window appearance will immediately be displayed if its front panel is already open. Since it is a modal window, you cannot dismiss it because it’s reserved for running. The best way to avoid this problem is to call the modal dialog subVI dynamically by changing the subVI call setup to “Load and retain on first call.”


>> Get more LabVIEW tips from Darren.


Today’s post is part of a monthly series exploring areas of focus and innovation for NI software.

The term “user interface” doesn’t adequately capture the depth of experiences we have while interacting with today’s mobile and desktop computing platforms that serve as portals into the evolving virtual world. Personally, I have never been more motivated by the trends in computing platforms with the web and cloud and radical innovations in UI ranging from touch to advanced visualizations of massive amounts of data. However, the fracturing of the Wintel monopoly creates a challenge for software developers of all kinds to develop solutions that can reach all customers on all of the screens they use.

There isn’t really one technology that can deliver native, first-class experiences on all mobile platforms, all web platforms (aka browsers), and all operating systems. We are faced with a clear cost versus reach versus “native-ness” tradeoff.

It’s interesting to watch as key technologies like HTML5 rapidly adapt their core capabilities and browser vendors continue to optimize the performance of their JavaScript engines. There is massive potential in these rapidly improving Web technologies, with strong use cases that not only provide traditionally oriented controls and indicators for data visualization, but also have the power to support full “sovereign” capabilities like rendering and even editing LabVIEW diagrams themselves.

You can’t really talk about Web front ends without acknowledging the back-end server or cloud infrastructure with which the front end communicates. When NI looks at the world through the lens of acquire, analyze, and present in the context of the Web, we naturally map our presentation layer of VIs and controls and indicators to elements within the browser. However, the acquisition, storage, and analysis functions need to run server-side, and at scale, with Big Data, or as we like to say, Big Analog Data. We are creating IT-friendly, server-side middleware that can manage the acquisition and storage of Big Analog Data and then provide Web-friendly APIs to that data so customers can quickly create VIs to visualize and analyze the data.
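As a toy illustration of that middleware idea (the channel names, values, and URL scheme below are all invented, and this is nothing like NI’s actual server stack), the sketch keeps acquired samples in an in-memory store and exposes them through a small JSON-over-HTTP endpoint that a browser-based client could consume:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy in-memory "historian": channel name -> acquired samples.
STORE = {"temperature": [21.4, 21.6, 22.1], "vibration": [0.02, 0.05, 0.04]}

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /channels/temperature returns that channel's samples as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "channels" and parts[1] in STORE:
            body = json.dumps({"channel": parts[1], "samples": STORE[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DataHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (a browser dashboard, say) fetches the data over plain HTTP.
url = f"http://127.0.0.1:{server.server_port}/channels/temperature"
reply = json.loads(urllib.request.urlopen(url).read())
print(reply["samples"])  # [21.4, 21.6, 22.1]
server.shutdown()
```

The design point is the separation: acquisition and storage live server-side, while any number of thin Web front ends consume the same JSON API.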

We are not only seeing massive shifts on the Web UI front, but also with mobile and tablet experiences, specifically around touch. For a language like LabVIEW, predicated on direct manipulation and a rich visual and spatial interaction model, we see a clear match between touch-enabled platforms, graphical programming, and interacting with your data.

We feel LabVIEW is the most touch-ready language on the planet and we think the best way to interact with controls and indicators for data visualization is also touch-based. Thus, we want to enable touch-based features for LabVIEW and touch-based features for your VIs. 

Today, the NI R&D team is simplifying a version of the LabVIEW editor, in collaboration with LEGO, to be touch-ready and tablet-friendly. The early prototypes in the lab are a delight to use, and I hope you’ll agree when you see and use them. NI is well suited to bring the beneficial evolutions of UI capabilities on the Web and in touch to engineers, scientists, and students, and we are working hard to do exactly that.

Stay tuned as we discuss more features the NI team is exploring for updates to NI software.

Fuller headshot.jpg

Today’s Featured Author

David Fuller has nearly 20 years’ experience in software engineering and is currently NI’s vice president of application and embedded software. You can’t follow him on Twitter because he’s a software engineer.

1 Comment

Today’s post is part of a monthly series exploring areas of focus and innovation for NI software.

Jeff_LV1.0_Block Diagram_ss.jpg

The introduction of LabVIEW to the market was defined by a revolutionary graphical user interface that combined built-in libraries for engineering applications with a drag-and-drop UX schema. I’ve heard stories from the “original” generation at NI about the jaw-dropping nature of LabVIEW in the late ’80s and early ’90s, due primarily to the simplicity of UI development. Of course, 30 years ago the Internet didn’t exist, cellular networks were a novel idea for military applications, and touch-screen tablets were a distant dream.

For decades, the GUI set LabVIEW apart from traditional programming languages for two reasons. First, the UI components were built into the environment instead of being paid, add-on libraries. Second, the UI components were custom designed for engineering applications including gauges, tanks, and intensity charts. As a generality, the most critical aspect of UI development in an engineering application is the simplicity of data visualization. The LabVIEW UI was designed for the concept of visualizing time-based waveform measurements, the kind of measurements most commonly associated with real-world analog data such as temperature, vibration, and voltage.

Almost 30 years later, the LabVIEW UI is still well suited for visualizing measurement data. Over the years, engineering tools have continued to evolve, closing this gap in UI functionality and ease of use. But it was the evolution of the consumer mobile market that seriously changed user expectations. Users now not only want but expect advanced concepts in their UIs, such as gestures, resolution scaling, and dynamic movement.

Most pragmatic users of LabVIEW are entirely content with the functional display capabilities of LabVIEW today. However, others voice their opinions on the Idea Exchange, and these typically fall into three categories:

1. General Look and Feel
The LabVIEW UI was designed to mimic the front panel of a physical instrument, graphing measurement data and providing a layout of knobs and sliders to control the measurement parameters. The design decisions made to this effect are often antiquated for the expectations of today’s modern UI principles. I often hear LabVIEW users say “this UI looks like it was built in the 80’s.”

2. Customizability
Generally speaking, the Control Editor inside of LabVIEW helps developers apply a wide range of customizations to UI elements, breaking down each control to its elemental components where pictures, designs, and color schemes can be applied.

Blog Screenshot.png

It’s not the functional capability that most developers want to see improved, but the simplicity of applying sophisticated customizations and extensions. LabVIEW lacks a well-structured API to drive these customizations, instead relying on an interactive editor to accomplish the task. That lack of automation from a tool centered on the concept of automation is limiting.

3. Portability
Lastly, the portability of LabVIEW UIs is minimal. The portability requirement is generally either to scale to a different monitor size or resolution, or to move to a mobile device. By design, the LabVIEW UI is pixel-based rather than vector-based, which fundamentally limits its scalability and portability to different sizes and platforms.

These limitations leave LabVIEW behind where we want to be in terms of UI design, functionality, and portability. Redesigning the foundation of any UI is a difficult, multi-year effort. Fortunately, we’re significantly far into this investment; unfortunately, we’re not quite far enough. Over the next few years, you’ll begin to see our investment reach the point of deployment…deployment to you.

Follow along as we near that deployment; we’ll be sharing more information with you along the way.

Phillips headshot.jpg

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.


Today’s post is part of a monthly series exploring areas of focus and innovation for NI software.

Shelley Intro Image.JPG

Whether we love them or hate them, UIs–the veneer on our code, configuration, hardware, and data analytics–define our experience and influence our productivity. GUI history is remarkable. From the unsuccessful yet influential Xerox Star in 1981 to the 1984 Macintosh and Windows 3.0, we saw multipanel windows and desktop metaphors with paper, folders, clocks, and trashcans; the browser wars brought dashboards; and Windows 8 introduced live tiles. This history provides a map for what GUIs may look like in the future.

Engineering and scientific UIs were historically distinct from consumer or architectural design. For NI, GUIs are rooted in the success of virtual instrumentation, which mimicked legacy box instruments. The virtual instrumentation approach trumped traditional test and measurement software, whose teams lost sight of basic UI design in a frenzy to build more features.

LabWindows/CVI is an example of an engineering-specific GUI, an interface to mimic traditional box equipment.

Today, we are in a transition where UI design is more than a competitive advantage; it’s a requirement. Heavily influenced by consumer experiences, today’s users seek solutions that offer as much intuitiveness as function.

We should all demand GUIs which are:

  1. (actually) Graphical – End unintuitive TUIs (I thought I made that up but didn’t).
  2. Skinnable – Different than customizable, your UI should come with themes, skins, and documented extensibility points.
  3. Modern - This means a clean, minimalist design (more is NOT better in the UI world) to let you focus on the data and information rather than the control.
  4. Designed – Layout, color, font, and hue saturation all matter. Don’t assume these elements aren’t necessary.

To meet these demands, vendors are investing in interaction, user experience, and user interface designers (including NI). I predict we will see more UI trends such as:

  • Flat – In 2013, most design experts began teaching and preaching a minimalist approach featuring clean, open space, crisp edges, and bright colors to emphasize usability.
  • Mobile design – Consideration for graphics power and heat leads to simpler interfaces and two-dimensionality (think Metro and the 2012 Gmail redesign).
  • 3D – Led by the gaming and content industries, Direct3D and OpenGL technologies give us a beautiful experience on powerful platforms with 3D rendering, shading, and transparency effects (AIGLX for Red Hat Fedora, Quartz Extreme for Mac OS X, and Vista's Aero).
  • Virtual reality – Growing in feasibility, heads-up displays are no longer reserved for pilots; VR is showing up everywhere from the 2013 Prius to the Airbus Smart Factory.

Regardless of future designs, the most important thing to plan for is that design trends will and must evolve. Profit margins and adoption of your products will be defined by the user experience, which is first experienced through your user interface.

Need more convincing? Forrester Research finds that “a focus on customers’ experience increases their willingness to pay by 14.4 percent, reduces their reluctance to switch brands by 15.8 percent, and boosts their likelihood to recommend your product by 16.6 percent.”

Do you agree? Tell us what you think by commenting below or connecting with the UI Interest Group to learn tips and tricks from top LabVIEW developers.


Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI's director of software marketing, Shelley champions LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.


Happy Halloween, everyone! To celebrate, we're going to debunk some LabVIEW myths. Don't be tricked by these common misconceptions.


MYTH: The palettes are the best tool for dropping new objects in your VIs.

There are lots of treats hiding in the LabVIEW palettes, but it can be tough to actually find the things you want. If you want to leave work early to shop for that perfect Halloween costume, use Quick Drop to write your VIs and you'll be done in half the time it would have taken you with the palettes. Just press Ctrl-Space, type the name of the object you want to drop, and then click in the VI to drop it. It'll be downright spooky how fast you write your VIs once you start using Quick Drop!

MYTH: Using subVIs always slows down your code.

LabVIEW is very efficient at calling subVIs, and proper use of subVIs can allow LabVIEW to manage memory more effectively. For those times when subVI overhead may be a concern, you can use the "Inline subVI into calling VIs" setting in VI Properties to completely eliminate any overhead associated with calling a VI as a subVI. You can be afraid of ghosts and ghouls, but don't be afraid of subVIs!

FACT: There is a picture of a duck hidden in the Duct image library in the LabVIEW Datalogging and Supervisory Control Module.

Enough said.

duck duct.png


Bipedal humanoid robots have been around for over 30 years, but developing and implementing intelligent motion algorithms to keep their moves from looking Frankenstein-esque has remained a challenge. Using NI hardware, LabVIEW, and third-party add-ons, a team of engineers at the Temasek Polytechnic School of Engineering has built a teenager-sized humanoid robot with a smooth gait.


The project focused on developing a user-friendly graphical interface to implement motion control algorithms. Engineers used LabVIEW to create control software that students could use to easily develop and debug the program, and that they can flexibly adapt and redeploy on other robotics projects in the future. A PXI-8101 served as the main system controller, and students programmed wireless LAN communication using the LabVIEW Internet Toolkit. The LabVIEW MathScript RT Module executed The MathWorks, Inc. MATLAB® code to generate the gait trajectory.

LabVIEW reduced development time to one semester, made it possible to perform motion simulation with SolidWorks, and executed code created with MATLAB. The bipedal humanoid robot made its debut at the SRG 2014 Singapore Robotics Games.

>> Read the full case study.


By default, intensity graphs and intensity charts have a blue color map for the Z scale, but did you know there are other color maps to choose from? To reconfigure the blue color map, right-click any of the numbers in the Z scale and choose “Marker Color.”

You can also programmatically manipulate the scale with the Color Table property of the intensity graph and chart. By following the link below, you can download a VI that lets you select from several predefined color maps for your intensity graphs and charts.
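To make the idea concrete, a color table is just an array of colors that the graph indexes into by mapping each Z value linearly across the Z-scale range. The sketch below illustrates that mapping in Python (for illustration only; in LabVIEW you would write an array of color values to the graph's Color Table property via a property node, and the function names here are hypothetical):

```python
# Generic sketch of how a color table maps Z values to colors.
# Illustrative only -- not LabVIEW code.

def make_blue_table(size=256):
    """Build a simple blue color map: black through bright blue, as RGB tuples."""
    return [(0, 0, int(255 * i / (size - 1))) for i in range(size)]

def z_to_color(z, z_min, z_max, table):
    """Look up the table entry for a Z value, clamped to the scale range."""
    frac = (z - z_min) / (z_max - z_min)
    frac = min(max(frac, 0.0), 1.0)          # clamp out-of-range values
    return table[int(frac * (len(table) - 1))]

table = make_blue_table()
print(z_to_color(5.0, 0.0, 10.0, table))     # → (0, 0, 127), mid-scale blue
```

Swapping in a different table (a red gradient, a rainbow, and so on) changes the entire graph's appearance without touching the data, which is exactly what the downloadable VI does with its predefined color maps.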

Thanks to Darren Nattinger for this LabVIEW tip!



>> Download the VI here.


We’re excited to announce the launch of three NI Service Programs intended to help you design, develop, and maintain high-quality measurement and control systems. By enrolling in one of these programs, you’ll experience benefits that span software, hardware, and system services.


Key Benefits:

  • Decrease your learning curve by 66 percent and develop 50 percent faster by completing training available through the software service programs.
  • Reduce time to first measurement by up to eight man-hours with system assembly, test, and software installation available with a service program for PXI, cRIO, or cDAQ systems.
  • Save 10 days of downtime by leveraging advanced replacement available through the Premium Service Program for Hardware or Systems.

>> Learn more about the NI Service Programs.