
A next-generation UE conformance tester will be a critical component in enabling the 5G ecosystem.


Announcing a collaboration with Samsung to validate 5G


Finalizing non-standalone and standalone 5G New Radio standards means more challenges, tests


We’re collaborating with Shanghai University on a 5G ultra-reliable low-latency testbed for V2X communications

Test is changing, and your job isn’t getting any easier. The tools you use must evolve with the demands of smarter devices under test. 


Test smarter with the latest enhancements to LabVIEW NXG


If you ask Brian Hoover to summarize what LabVIEW does for him in one sentence, he’d say “LabVIEW makes my work fun.” We couldn’t have said it better ourselves. 


Most know the show for its significant consumer electronics product announcements, but it broadly covers many vectors of technology evolution, including the automotive industry.


PCIe upgrades for legacy PCI DAQ devices 


Two ways you can solve your Big Analog Data challenges


A lot happened in the world of 5G this year. Stay on top of the standard updates and get the perspective of our director of RF and wireless research, James Kimery, on the incredible changes the wireless industry saw in 2017:

 

What to Expect from 5G Wireless in 2017

2017 was a pivotal year with monumental milestones for 5G. There were several unknowns in January; find out if we were right about how these new technologies took shape.

 

3GPP Moves 5G Forward in Dubrovnik

The 3GPP agreed to accelerate the 5G deliverables by up to six months. A complete Non-Standalone (NSA) architecture is expected to be finalized by March 2018, with the Standalone (SA) version using a 5G core network coming six months later.

 

Verizon 5GTF comes to life

In March 2017, NI demonstrated a real-time working prototype of the Verizon 5GTF at the IEEE Wireless Communications and Networking Conference (WCNC) in San Francisco.

 

Test is a key challenge for 5G

5G components and devices must be tested differently from their 4G predecessors, with new techniques and technologies.

 

Is 5G for real?

Not yet, but the foundation is being laid today. Semiconductors, software, infrastructure, data centers, edge computing, and of course test and measurement must rise up to realize the vast potential that 5G promises.

 

To prepare for next year, check out more perspectives and technology driving next-generation wireless communications.


When the 3GPP announced the beginning of 5G development back in 2015, the group proposed many performance objectives pertaining to all aspects of the network. Eventually the 3GPP distilled the various metrics down to three distinct use cases: 

 

  1. Enhanced Mobile Broadband (eMBB),
  2. Ultra-Reliable Low-Latency Communications (URLLC), and
  3. Massive Machine-Type Communications (mMTC).

Much has been written and published on the eMBB use case, and to a lesser extent on mMTC, in anticipation of smarter devices and pervasive IoT. Although the URLLC use case has garnered less attention, it may in fact prove more impactful than either of the other two.

 

The pivotal term in URLLC is “low latency.” In some ways, ultra reliability and low latency represent two separate and potentially orthogonal goals.

  


 

While physical layer researchers differ on the exact definition of latency, it’s generally described as the round-trip time from when a transport block contained in a slot is sent from a base station (BTS) to when the user equipment (UE) responds to that initial transmission. This is a narrow view, but it makes latency controllable from a research perspective at the foundational level of the standard.

 

All variables that impact latency in this definition can be controlled by the physical layer designers. With the pending finalization of 3GPP Release 15, the initial 5G NSA Phase 1 release, the 3GPP contributors addressed latency at the physical layer with several optimizations, which I’ll outline below.

 

First, the flexible numerology scheme allows slots in a subframe (1 ms) to be defined as uplink, downlink, or some combination of the two. Second, the time duration of the slot is flexible and depends on the subcarrier spacing (SCS). The 3GPP specifies several SCS options depending on spectrum and bandwidth. Each slot comprises 14 OFDM symbols, and the slot duration scales in time as shown in the table below:

 

SCS        Slot Duration
15 kHz     1000 µs
30 kHz     500 µs
60 kHz     250 µs
120 kHz    125 µs

 

For the URLLC case, the shorter slot durations are important. The 3GPP has also defined “mini-slots” of 2, 4, or 7 symbols, which reduce the timing further; their durations scale linearly from the slot durations in the table above.
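
To make the scaling concrete, here is a minimal Python sketch (purely illustrative, not part of the 3GPP specification or any NI tool) that reproduces the slot durations in the table and extends them to mini-slots, assuming the usual 14 OFDM symbols per slot:

```python
# Illustrative calculation of 5G NR slot and mini-slot durations.
# Assumes 14 OFDM symbols per slot (normal cyclic prefix); the slot
# values match the table above.

SYMBOLS_PER_SLOT = 14

def slot_duration_us(scs_khz):
    """Slot duration in microseconds for a given subcarrier spacing (kHz).

    A 15 kHz SCS gives a 1000 us slot; doubling the SCS halves the slot.
    """
    return 1000.0 * (15.0 / scs_khz)

def mini_slot_duration_us(scs_khz, symbols):
    """Mini-slot duration for 2-, 4-, or 7-symbol mini-slots."""
    return slot_duration_us(scs_khz) * symbols / SYMBOLS_PER_SLOT

if __name__ == "__main__":
    for scs in (15, 30, 60, 120):
        print(f"{scs:>4} kHz SCS: slot = {slot_duration_us(scs):6.1f} us, "
              f"2-symbol mini-slot = {mini_slot_duration_us(scs, 2):5.1f} us")
```

At 120 kHz SCS, a 2-symbol mini-slot lasts roughly 18 µs, which is why the combination of wide subcarrier spacing and mini-slots matters for URLLC.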

 

Finally, the 3GPP also defined the “self-contained” subframe case. In this mode, transmit and receive from the UE side occur wholly within a single subframe. Self-contained subframes include the HARQ feedback, which in theory makes a significant reduction in latency possible; HARQ timing otherwise adds access time, depending on the quality of the link, and can increase latency significantly.

 

Network researchers note that the physical layer, and improvements across the full stack (i.e., the RAN), are only part of the latency equation. The data bits must still be sent to and received from a UE; if the sending device is located on the other side of the world, minimal latency targets will be difficult to realize. Physics dictates the travel time based on distance, and even the fastest networks must address this challenge.

 

Researchers refer to this type of latency as end-to-end (E2E). E2E low latency is not possible without modification to the core network and the potential inclusion of network slicing and/or Mobile Edge Computing (MEC) nodes. By separating the control and user planes in the standard, the 3GPP opened the door for new network topologies that enable network slicing. More to the point, what is needed is a distributed network control methodology in which the network can direct a packet along the shortest path to a computational node, efficiently reducing E2E latency.

 

The 3GPP has laid a solid foundation for realizing lower latency in 5G networks. While the improvements in the physical layer will be realized for the eMBB and mMTC use cases, the potential for URLLC remains. However, URLLC and true low latency applications may have to wait for further definition of the upper layers in the 5G Core Network scheduled for release in December of 2018.


This blog originally appeared in Microwave Journal as part of the 5G and Beyond series.


In my last blog post, I discussed the type of data test engineers are collecting. The key takeaway: we’re dealing with a lot of data.

 

All this data leads to my next topic: the three biggest problems test engineers face. At NI, we refer to these collectively as the Big Analog Data problem.

Finding the Valuable Data Points in a Mountain of Data

 

All the data from the increasingly complex tests run every day eventually needs to be stored somewhere, but it’s often stored in various ways, from a local test cell machine to an individual employee’s computer. Simply locating the data you need to analyze can be a giant pain, let alone trying to sift through pages of meaningless file names and types without metadata for context.

 

We see too many of our customers wasting time because they don't have an efficient way of searching files. Even if engineering groups are lucky enough to have a centralized database to store test data, they still run into difficulties accessing it because it’s not optimized for waveform data types and rarely shared between groups.

 

All of this leads to “silos” of data that then can’t be used efficiently, causing more wasted time trying to access it. In extreme cases, these problems even cause companies to rerun tests because they simply can’t find data they’ve already collected.

Validating Data Collected

 

 

An issue that most don’t think about until they experience it firsthand is the validation of collected data. Ideally, every test runs the way it’s intended to, but there’s no way to know unless some validation steps are performed.

 

There are countless ways to get incorrect data, from improper test rig setup to data corruption. If invalid data goes on to be analyzed and used in decision-making, there could be disastrous results.

 

Big Analog Data validation presents extra headaches due to the sheer volume and variety of data types. A gut-wrenching example of this is NASA’s 1999 Mars Climate Orbiter that burned up in the Martian atmosphere because engineers failed to convert units from English to metric.

 

Manual processes work, but are extremely time-consuming. To save engineers from wasting valuable person-hours, an automated solution is usually required.
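
As a minimal sketch of what such an automated check might look like (the channel names, units, and limits here are hypothetical, not taken from any specific NI product), a short script can flag a measurement file before it ever reaches analysis:

```python
# Minimal sketch of an automated data-validation pass. Channel names,
# units, and limits are hypothetical -- adapt them to your own test setup.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    unit: str
    values: list

# Expected unit and plausible physical range for each channel.
EXPECTED = {
    "engine_temp": ("degC", -40.0, 150.0),
    "shaft_speed": ("rpm", 0.0, 8000.0),
}

def validate(channels):
    """Return a list of problems; an empty list means the data looks sane."""
    problems = []
    for ch in channels:
        if ch.name not in EXPECTED:
            problems.append(f"{ch.name}: no validation rule defined")
            continue
        unit, lo, hi = EXPECTED[ch.name]
        if ch.unit != unit:
            problems.append(f"{ch.name}: unit {ch.unit!r}, expected {unit!r}")
        if any(not (lo <= v <= hi) for v in ch.values):
            problems.append(f"{ch.name}: values outside [{lo}, {hi}]")
    return problems

if __name__ == "__main__":
    data = [Channel("engine_temp", "degF", [70.0, 212.0])]  # wrong unit, bad range
    for issue in validate(data):
        print("VALIDATION FAILED:", issue)
```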

Analyzing Large Volumes of Data Efficiently

 

 

Studies show that on average only five percent of all data collected goes on to be analyzed. By not analyzing more of the data you collect, you risk making decisions without considering the bigger picture.

 

A great way to illustrate this in the engineering world is the Nyquist Theorem, which states that you must sample a signal at more than twice its highest frequency to capture it accurately. For example, without analyzing enough data points, you may see what looks like an exponential signal (Figure 1) instead of the sine wave that’s actually there (Figure 2).

 

[Figure 1: the undersampled signal, which appears exponential. Figure 2: the fully sampled sine wave.]
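
A quick, self-contained way to see that effect (the frequencies and sample rates below are illustrative, not the data behind the figures) is to sample the same sine wave coarsely and finely:

```python
# Undersampling demonstration: the same 5 Hz sine wave sampled below and
# well above the Nyquist rate. With too few points the shape is misleading.

import numpy as np

F_SIGNAL = 5.0   # Hz, true signal frequency
DURATION = 1.0   # seconds of data

def sample(rate_hz):
    """Sample the test sine wave at the given rate; returns (times, values)."""
    t = np.arange(0.0, DURATION, 1.0 / rate_hz)
    return t, np.sin(2 * np.pi * F_SIGNAL * t)

# Below the Nyquist rate (2 x 5 Hz = 10 Hz): the points no longer trace a 5 Hz sine.
t_coarse, x_coarse = sample(6.0)

# Well above the Nyquist rate: the points follow the true waveform.
t_fine, x_fine = sample(100.0)

print(f"coarse sampling: {len(x_coarse)} points -> aliased, misleading shape")
print(f"fine sampling:   {len(x_fine)} points -> faithful sine wave")
```

Plotting the two sample sets makes the difference obvious: the coarse set no longer traces a 5 Hz sine wave, while the fine set does.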

There are two reasons why test engineers don’t analyze more data. The first, as I mentioned earlier, is being unable to find the right data in the mountains of Big Analog Data they’ve collected. The second is that they’re using systems and processes that aren’t optimized for large data sets. Manual calculations with inadequate tools are typically the roadblock when it comes to analyzing large quantities of data.

 

Even when the right tools are used, processing Big Analog Data can be troublesome and usually requires an automated solution where processing can be offloaded to a dedicated system. 

 

In my next post, I’ll give you some options for tackling your Big Analog Data problem, so you can be sure you’re making the best data-driven decisions possible.

 

NEXT: Addressing Your Big Analog Data Challenges >>

 

Find out more about NI’s solutions for data-driven decisions >>


The latest model of the Industrial Controller adds IP67 reliability to high-performance processing and control applications.

 


From connected grids to road networks to social media, we create a lot of data: “big data.”


Ensure your signal rises above the noise


Inaccurate timing can lead to invalid analysis 


Your go-to technical resource for understanding and improving the quality of your analog measurements


Instrumentation shouldn’t limit innovation


Test engineers of “in-the-loop” systems such as model in the loop (MIL), software in the loop (SIL), and hardware in the loop (HIL) can choose the best tool for their application thanks to the ASAM XIL standard governed by the Association for Standardization of Automation and Measuring Systems (ASAM).

 

The standard has gained wide adoption in automotive testing. It offers the assurance that test cases can run independently of the executing hardware. With this approach, users can create test cases that can be reused between projects, protecting the investment and reducing test cost.  
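
Purely as an illustration of that idea (the class and variable names below are hypothetical stand-ins, not the actual ASAM XIL interfaces defined by the standard), a test case written against an abstract model-access port can be reused unchanged on any vendor’s implementation:

```python
# Conceptual sketch of hardware-independent test cases. The interfaces here
# are hypothetical stand-ins for the vendor-neutral ports that ASAM XIL defines.

from abc import ABC, abstractmethod

class ModelAccessPort(ABC):
    """Abstract access to model variables, regardless of the executing hardware."""

    @abstractmethod
    def write(self, variable: str, value: float) -> None: ...

    @abstractmethod
    def read(self, variable: str) -> float: ...

def test_overvoltage_shutdown(port: ModelAccessPort) -> bool:
    """A reusable test case: it only ever talks to the abstract port."""
    port.write("BatteryVoltage", 60.0)           # inject an overvoltage
    return port.read("ContactorState") == 0.0    # expect the contactor to open

class VendorAPort(ModelAccessPort):
    """One vendor's implementation; another vendor would supply its own."""
    def __init__(self):
        self._vars = {"ContactorState": 1.0}
    def write(self, variable, value):
        self._vars[variable] = value
        if variable == "BatteryVoltage" and value > 55.0:
            self._vars["ContactorState"] = 0.0   # simulated plant reaction
    def read(self, variable):
        return self._vars[variable]

if __name__ == "__main__":
    print("passed" if test_overvoltage_shutdown(VendorAPort()) else "failed")
```

Because the test case depends only on the port interface, swapping HIL vendors means swapping the port implementation, not rewriting the test.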

 

We’re a proud adopter of the ASAM XIL standard because it gives our users the freedom to design their test system regardless of vendor and aligns with our belief that users should be in complete control of their test system. Periodically, test tool vendors demonstrate standard adherence. This year, we hosted a cross test event on October 11 and 12 at our offices in Munich, Germany.

 


Cross test participants included representatives from carts GmbH/MicroNova AG, dSPACE GmbH, embeddeers GmbH, NI, TraceTronic GmbH, and Volkswagen AG 

 

Prior to the cross test, ASAM AE XIL 2.1 was released, boasting a ton of new features like pause simulation, a common interface to real-time scripts, simultaneous read/write, DiagPort redesign, and more.

 

The 2017 cross test focused on validating the interoperability of electrical error simulation (EESPort) in XIL systems. Typical use cases of this center on testing wiring errors such as loose contacts, short circuits, and broken wires.

 

The cross test began by testing the XIL servers with an ASAM-provided test suite. Subsequently, all combinations of client test tools and servers were tested. In general, there was great interoperability between vendors, and EESPort implementations proved ready to use.

 


 

The event was a success, with a solution-oriented atmosphere and good relations among participants.

 

So, what’s next?

The group agreed to continue work on XIL 2.1.1 in 2018 and 2.1.2 in 2019 to incorporate feedback from the growing user base. Additionally, XIL working group members are encouraging all tool vendors to be consumers of ASAM XIL as well as actively contribute their ideas and experiences to further improve the standard.

 

Learn More About NI HIL Solutions >>


The emergence of smarter cars means automotive companies are looking for smarter ways to work. And they know getting there requires new efficiencies in their existing design and development processes.

 

One way companies want to do this is by extracting more knowledge from corporate data. As a common goal across the industry, we see the same issues arise when tackling this challenge. We also see a solution: Viviota’s Time-to-Insight Software Suite™ (TTI) combined with NI’s Data Management Software Suite.

 

 

Addressing inefficiencies in a model-based design process

 

 

In a model-based design (MBD) process, data from physical testing is combined with data from model-based simulations, and then analyzed. The main point is to build products faster. So, when one of our international automotive customers sought ways to address inefficiencies, we looked at test data preparation, analysis, and reporting processes.


 

Their engineers were spending precious time not only finding, managing, and standardizing test data but also analyzing and reporting it—upward of 10 hours in one case! This was happening for three reasons:

 

  • Decentralized data management
  • Inconsistent file formats
  • Limited access to reliable historical data

 

Using TTI, which harnesses the power of the Data Management Software Suite, we proposed a solution and piloted it with a single work cell. We targeted three areas for efficiency improvement:

 

  • Cleanse and standardize data to provide a consistent format and labeling system for all data
  • Index cleansed data with a powerful search engine
  • Provide a server-level analysis engine and interface to centralize and speed analysis
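
As a rough, hypothetical sketch of the first two steps (the channel names, metadata fields, and file paths are invented for illustration and are not how TTI or the Data Management Software Suite is implemented), standardizing labels and indexing metadata might look like this:

```python
# Hypothetical sketch: standardize channel labels into one naming scheme,
# then index file metadata so data can be found with a search instead of
# by digging through folders.

CANONICAL_NAMES = {              # map legacy/vendor labels to one scheme
    "EngTemp": "engine_temperature",
    "Eng_Temp_C": "engine_temperature",
    "RPM": "shaft_speed",
}

def standardize(channel_names):
    """Rename channels to the canonical labeling system."""
    return [CANONICAL_NAMES.get(name, name.lower()) for name in channel_names]

def build_index(files):
    """Invert file metadata into a term -> set-of-files index for fast search."""
    index = {}
    for path, meta in files.items():
        for term in meta.values():
            index.setdefault(str(term).lower(), set()).add(path)
    return index

if __name__ == "__main__":
    files = {
        "cell1/run042.tdms": {"cell": "powertrain", "operator": "jdoe"},
        "cell1/run043.tdms": {"cell": "powertrain", "operator": "asmith"},
    }
    print(standardize(["EngTemp", "RPM", "Vibration"]))
    print(build_index(files)["powertrain"])  # every powertrain file in one lookup
```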

 

The power of managing data saves valuable time

 

 

Our solution moved the processing of more than 1,000 data files from individual engineering systems to a server-class machine (32 cores). Before the Viviota implementation, engineers in a single test cell typically spent five hours per week locating data and five hours processing and analyzing the data.

 

With TTI and the Data Management Software Suite, the time to locate and analyze data went from 10 hours to seven minutes. Although this initial project was aimed at one automotive work cell (powertrain), we’re eager to replicate this success in other work cells such as engines, brakes, and transmissions.

 

Check out the Data Management Software Suite >>

See how you can save time with TTI >>


This blog is part of a series with our Alliance Partner, Viviota. Find out how NI Alliance Partners are uniquely equipped and skilled to help solve your toughest engineering challenges. Alliance Partners are business entities independent from NI and have no agency, partnership, or joint-venture relationship with NI.


Defining a test strategy that reduces your costs and maximizes the efficiency of your production is tough, but building your test system can be even more challenging. We’ve put together four on-demand webinars to help you, whether you are building from scratch or augmenting your current system.

 


 

Learn from NI test engineers and guest presenters from Bloomy Controls Inc. and Bose Corporation as they share their experiences. These sessions are on demand, so tune in when it’s best for you!

 

What I Wish I Had Known Before I Started Deploying Test Systems

Gain effective approaches to developing a successful deployment process for test systems with this 15-minute webinar presented by James Kostinden, lead software test engineer at Bose Corporation. Watch now.

 

Switching, Mass Interconnect, and Fixturing Considerations

Our own David Roohy, systems engineer at NI, helps you learn about different switching architectures and ways to determine the best switching strategy for your test system needs in this 19-minute presentation. Watch now.

 

Improving ATE Test Sequence Adaptability Using HALs and MALs

Explore common options for implementing hardware abstraction layers and measurement abstraction layers, from out-of-the-box drivers to object-oriented solutions, in this 25-minute webinar by Grant Gothing, ATE business unit manager at Bloomy. Watch now.

 

Thermal and Power Planning of Automated Test Systems

Patrick Robinson, principal test engineer at NI, shows you best practices for planning cooling and power systems in your automated test systems in this 21-minute session. Watch now.

 

For a complete guide to building a test system, download the practical guides now.


A critical gap has emerged as software-defined radios (SDRs) become more common in remote applications like large deployed antenna arrays, dense urban areas, and active battlefields. Situations like these require remote operators to be confident the device is functioning and hasn’t exceeded critical operational or environmental parameters.

 

In response to this challenge, Ettus Research, a National Instruments company, has developed an SDR that improves remote management, maintainability, and reliability in deployed environments.

 

Let’s explore the USRP N310 features:

 

  • A four RX/TX channel SDR in a half RU form factor, so you can easily and densely scale it to fit the size of the application
  • The ability to update firmware, recover from errant software or firmware updates, and push critical patches to multiple deployed radios
  • Ethernet-based synchronization to precisely synchronize devices reliably and securely when radios are geographically distributed or in GPS-denied environments with a high risk of jamming and spoofing
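
As a rough sketch of what remote health monitoring can look like with the UHD Python API (the IP address is an example, the exact sensor names vary by device, and this is not an official NI/Ettus procedure), an operator can query a networked N310 without physical access:

```python
# Rough sketch: reach a networked USRP over Ethernet and read back its board
# sensors to confirm it is alive and healthy before use. The address below is
# an example; check your device's actual sensors with get_mboard_sensor_names().

import uhd

# The N310 is addressed over the network by IP (example address).
usrp = uhd.usrp.MultiUSRP("addr=192.168.10.2")

# Select where the device takes its reference clock and time from.
usrp.set_clock_source("internal")
usrp.set_time_source("internal")

# Read back motherboard sensors from a remote console.
for name in usrp.get_mboard_sensor_names(0):
    print(name, usrp.get_mboard_sensor(name, 0))
```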

Learn more about the USRP N310 >>