A lot happened in the world of 5G this year. Stay on top of the standard updates and get our director of RF and wireless research, James Kimery’s perspective on the incredible changes the wireless industry saw in 2017:
The 3GPP agreed to accelerate the 5G deliverables by up to six months. A complete Non-Standalone (NSA) architecture is expected to be finalized by March of 2018, with the Standalone (SA) version using a 5G core network coming six months later.
Not yet, but the foundation is being laid today. Semiconductors, software, infrastructure, data centers, edge computing, and of course test and measurement must rise up to realize the vast potential that 5G promises.
When the 3GPP announced the beginning of 5G development back in 2015, the group proposed many performance objectives pertaining to all aspects of the network. Eventually the 3GPP distilled the various metrics down to three distinct use cases:
enhanced Mobile Broadband (eMBB),
Ultra-Reliable Low-Latency Communications (URLLC), and
massive Machine-Type Communications (mMTC).
Much has been written and published on the eMBB use case and, to a lesser extent, on mMTC, in anticipation of smarter devices and pervasive IoT. Although the URLLC use case has garnered less attention, it may in fact prove more impactful than either of the other two.
The pivotal term in URLLC is “low latency.” In some ways, ultra reliability and low latency represent two separate and potentially orthogonal goals.
While physical layer researchers differ on the exact definition of latency, it generally refers to the round-trip time from when a base station (BTS) sends a transport block contained in a slot to when the user equipment (UE) responds to that initial transmission. This is a narrow view, but it makes latency controllable from a research perspective at the foundational level of the standard.
All variables that impact latency in this definition can be controlled by the physical layer designers. With the pending finalization of 3GPP Release 15, the initial 5G NSA Phase 1 release, the 3GPP contributors addressed latency at the physical layer with several optimizations, which I’ll outline below.
First, the flexible numerology scheme allows for slots in a subframe (1 ms) to be defined as uplink, downlink, or some combination of the two. Second, the time duration of the slot is flexible and depends on the subcarrier spacing (SCS). The 3GPP specifies several SCS options depending on spectrum and bandwidth. Each slot comprises 14 OFDM symbols, and each slot scales in time as noted by the table below:

15 kHz SCS: 1 ms slot duration
30 kHz SCS: 0.5 ms slot duration
60 kHz SCS: 0.25 ms slot duration
120 kHz SCS: 0.125 ms slot duration
For the URLLC case, the shorter slot durations are important. The 3GPP has also defined “mini-slots” that further reduce the timing to 2, 4, or 7 symbols and would cause the timing in the table above to scale linearly.
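The slot-timing relationship above can be sketched in a few lines. This is a simple illustration of the flexible numerology rule (slot duration halves each time the SCS doubles from the 15 kHz baseline, with 14 OFDM symbols per full slot), not NI or 3GPP reference code:

```python
# Sketch: 5G NR slot duration as a function of subcarrier spacing (SCS).
# A 15 kHz SCS gives a 1 ms slot; doubling the SCS halves the slot duration.
# Passing symbols < 14 models a mini-slot (2, 4, or 7 symbols).

def slot_duration_ms(scs_khz: float, symbols: int = 14) -> float:
    """Slot duration in milliseconds for a given SCS and symbol count."""
    full_slot_ms = 15.0 / scs_khz          # 15 kHz SCS -> 1 ms full slot
    return full_slot_ms * symbols / 14.0   # mini-slots scale linearly

for scs in (15, 30, 60, 120):
    print(f"{scs} kHz SCS -> {slot_duration_ms(scs)} ms slot")

# A 2-symbol mini-slot at 120 kHz SCS shrinks the timing further:
print(f"mini-slot: {slot_duration_ms(120, symbols=2):.4f} ms")
```

This makes it easy to see why URLLC favors wide subcarrier spacings combined with mini-slots: the air-interface transmission time drops well below a millisecond.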
Finally, the 3GPP also defined the “self-contained” subframe. In this mode, transmit and receive on the UE side occur wholly within a single subframe, including the HARQ feedback. Because HARQ timing generally increases access time depending on the quality of the link and can add significant latency, containing it within one subframe makes a significant reduction in latency possible.
Network researchers note that the physical layer, and improvements in the full stack (i.e., the RAN), are only part of the latency equation. The data bits must be sent to and received from a UE, and if the sending device is located on the other side of the world, minimal latency targets will be difficult to realize. Physics dictates travel time based on distance, and even the fastest networks must address this challenge.
Researchers refer to this type of latency as end-to-end, or E2E. E2E low latency is not possible without modification to the core network and potential inclusion of network slicing and/or Mobile Edge Computing (MEC) nodes. By separating the control and user planes in the standard, the 3GPP opened the door for new network topologies to enable network slicing. More to the point, a distributed network control methodology is needed in which the network can direct a packet along the shortest path to a computational node, efficiently reducing E2E latency.
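A back-of-the-envelope calculation shows why distance alone can blow the latency budget, and why a nearby MEC node helps. This sketch assumes signals travel through optical fiber at roughly two-thirds the speed of light (about 200,000 km/s); real networks add queuing, routing, and processing delay on top of this floor:

```python
# Sketch: the propagation-delay floor on round-trip latency.
# Assumes ~200,000 km/s signal speed in fiber (about 2/3 of c).

FIBER_SPEED_KM_PER_MS = 200.0  # 200,000 km/s expressed as km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in ms from propagation delay alone."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A server halfway around the world vs. a nearby edge (MEC) node:
print(round_trip_ms(20000))  # far beyond any ~1 ms URLLC budget
print(round_trip_ms(10))     # leaves most of the budget for processing
```

No amount of physical layer optimization can recover the 200 ms round trip to the far side of the planet, which is exactly why the E2E picture requires moving computation closer to the UE.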
The 3GPP has laid a solid foundation for realizing lower latency in 5G networks. While the improvements in the physical layer will be realized for the eMBB and mMTC use cases, the potential for URLLC remains. However, URLLC and true low latency applications may have to wait for further definition of the upper layers in the 5G Core Network scheduled for release in December of 2018.
This blog originally appeared in Microwave Journal as part of the 5G and Beyond series.
In my last blog post, I discussed the type of data test engineers are collecting. The key takeaway: we’re dealing with a lot of data.
All this data leads to my next topic: the three biggest problems test engineers face. At NI, we refer to these collectively as the Big Analog Data™ problem.
Finding the Valuable Data Points in a Mountain of Data
All the data from the increasingly complex tests run every day eventually needs to be stored somewhere, but it's often stored in various ways—from a local test cell machine to an individual employee’s computer. Simply locating the data you need to analyze can be a giant pain, let alone trying to sift through pages of meaningless file names and types without metadata for context.
We see too many of our customers wasting time because they don't have an efficient way of searching files. Even if engineering groups are lucky enough to have a centralized database to store test data, they still run into difficulties accessing it because it’s not optimized for waveform data types and rarely shared between groups.
All of this leads to “silos” of data that then can’t be used efficiently, causing more wasted time trying to access it. In extreme cases, these problems even cause companies to rerun tests because they simply can’t find data they’ve already collected.
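The fix for data silos described above is to index files by metadata rather than file name. The sketch below (with hypothetical field names and file paths) shows the idea: once every test file carries context like the unit under test and test type, finding the right data becomes a query instead of a hunt:

```python
# Sketch: indexing scattered test files by metadata so they can be
# searched centrally. Field names and paths are hypothetical.

from dataclasses import dataclass

@dataclass
class TestRecord:
    path: str             # where the raw file actually lives
    unit_under_test: str  # metadata giving the file context
    test_type: str
    date: str

index = [
    TestRecord("cell3/run_0412.tdms", "powertrain-A", "vibration", "2017-04-12"),
    TestRecord("jdoe-laptop/tmp/x1.csv", "powertrain-A", "thermal", "2017-05-02"),
]

def search(records, **criteria):
    """Return every record whose metadata matches all given criteria."""
    return [r for r in records
            if all(getattr(r, k) == v for k, v in criteria.items())]

print(search(index, unit_under_test="powertrain-A", test_type="thermal"))
```

Even this toy version shows why rerunning a test to recreate lost data is unnecessary once the metadata, not the file name, is what you search.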
Validating Collected Data
An issue most engineers don’t think about until they experience it firsthand is validating collected data. Ideally, every test runs the way it’s intended to, but there’s no way to know unless validation steps are performed.
There are countless ways to get incorrect data, from improper test rig setup to data corruption. If invalid data goes on to be analyzed and used in decision-making, there could be disastrous results.
Big Analog Data validation presents extra headaches due to the sheer volume and variety of data types. A gut-wrenching example is NASA’s 1999 Mars Climate Orbiter, which burned up in the Martian atmosphere because engineers failed to convert English units to metric.
Manual processes work, but are extremely time-consuming. To save engineers from wasting valuable person-hours, an automated solution is usually required.
A great way to illustrate this in the engineering world is the Nyquist theorem, which states that you must sample a signal at more than twice its highest frequency to reconstruct it accurately. For example, without enough data points, you may see what looks like an exponential signal (Figure 1) instead of the sine wave that’s actually there (Figure 2).
Figure 1. The apparent signal from too few samples. Figure 2. The actual sine wave.
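The aliasing effect behind those figures can be demonstrated numerically. In this sketch, a 9 Hz sine sampled at only 10 samples per second (well below the required 18 S/s) produces sample points identical to those of a 1 Hz sine; the analysis would “see” a completely different signal:

```python
# Sketch: undersampling a sine wave below the Nyquist rate.
# A 9 Hz sine sampled at 10 S/s aliases to an apparent 1 Hz signal.

import math

f_signal = 9.0   # Hz
f_sample = 10.0  # S/s -- Nyquist requires more than 2 * 9 = 18 S/s

samples = [math.sin(2 * math.pi * f_signal * n / f_sample) for n in range(10)]
alias   = [math.sin(2 * math.pi * (f_sample - f_signal) * n / f_sample)
           for n in range(10)]

# The undersampled 9 Hz sine is indistinguishable from a (mirrored) 1 Hz sine:
print(all(abs(s + a) < 1e-9 for s, a in zip(samples, alias)))
```

The undersampled points match the low-frequency alias exactly (with inverted sign), which is why analyzing too few data points leads to confidently wrong conclusions.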
There are two reasons why test engineers don’t analyze more data. The first, as I mentioned earlier, is being unable to find the right data in the mountains of Big Analog Data they’ve collected. But, they’re also using systems and processes that aren’t optimized for large data sets. Manual calculations with inadequate tools are typically the roadblock when it comes to analyzing large quantities of data.
Even when the right tools are used, processing Big Analog Data can be troublesome and usually requires an automated solution where processing can be offloaded to a dedicated system.
In my next post, I’ll give you some options for tackling your Big Analog Data problem, so you can be sure you’re making the best data-driven decisions possible.
Test engineers of “in-the-loop” systems such as model in the loop (MIL), software in the loop (SIL), and hardware in the loop (HIL) can choose the best tool for their application thanks to the ASAM XIL standard governed by the Association for Standardization of Automation and Measuring Systems (ASAM).
The standard has gained wide adoption in automotive testing. It offers the assurance that test cases can run independently of the executing hardware. With this approach, users can create test cases that can be reused between projects, protecting the investment and reducing test cost.
We’re a proud adopter of the ASAM XIL standard because it gives our users the freedom to design their test system regardless of vendor and aligns with our belief that users should be in complete control of their test system. Periodically, test tool vendors demonstrate standard adherence. This year, we hosted a cross test event on October 11 and 12 at our offices in Munich, Germany.
Cross test participants included representatives from carts GmbH/MicroNova AG, dSPACE GmbH, embeddeers GmbH, NI, TraceTronic GmbH, and Volkswagen AG.
Prior to the cross test, ASAM AE XIL 2.1 was released, boasting a ton of new features like pause simulation, a common interface to real-time scripts, simultaneous read/write, DiagPort redesign, and more.
The 2017 cross test focused on validating the interoperability of electrical error simulation (EESPort) in XIL systems. Typical use cases of this center on testing wiring errors such as loose contacts, short circuits, and broken wires.
The cross test began by testing the XIL servers with an ASAM-provided test suite. Subsequently, all combinations of client test tools and servers were tested. In general, there was great interoperability between vendors, and EESPort implementations proved ready to use.
The event was a success with a solution-oriented atmosphere and good relations among participants.
So, what’s next?
The group agreed to continue work on XIL 2.1.1 in 2018 and 2.1.2 in 2019 to incorporate feedback from the growing user base. Additionally, XIL working group members are encouraging all tool vendors to be consumers of ASAM XIL as well as actively contribute their ideas and experiences to further improve the standard.
The emergence of smarter cars means automotive companies are looking for smarter ways to work. And they know getting there requires new efficiencies in their existing design and development processes.
One way companies want to do this is by extracting more knowledge from corporate data. As a common goal across the industry, we see the same issues arise when tackling this challenge. We also see a solution: Viviota’s Time-to-Insight Software Suite™ (TTI) combined with NI’s Data Management Software Suite.
Addressing inefficiencies in a model-based design process
In a model-based design (MBD) process, data from physical testing is combined with data from model-based simulations, and then analyzed. The main point is to build products faster. So, when one of our international automotive customers sought ways to address inefficiencies, we looked at test data preparation, analysis, and reporting processes.
Their engineers were spending precious time not only finding, managing, and standardizing test data but also analyzing and reporting it—upward of 10 hours in one case! This was happening for three reasons:
Decentralized data management
Inconsistent file formats
Limited access to reliable historical data
Using TTI, which harnesses the power of the Data Management Software Suite, we proposed a solution and piloted it with a single work cell. We targeted three areas for efficiency improvement:
Cleanse and standardize data to provide a consistent format and labeling system for all data
Index cleansed data by a powerful search engine
Provide a server-level analysis engine and interface to centralize and speed analysis
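The first of those three steps, cleansing and standardizing, amounts to mapping each test cell’s inconsistent channel labels onto one consistent scheme before anything is indexed. This is a minimal sketch with hypothetical label names, not Viviota’s or NI’s actual implementation:

```python
# Sketch: the "cleanse and standardize" step -- mapping inconsistent
# channel labels from different test cells onto one consistent scheme.
# Label names are hypothetical.

LABEL_MAP = {
    "EngSpd": "engine_speed_rpm",
    "engine_rpm": "engine_speed_rpm",
    "Temp1": "coolant_temp_degC",
}

def standardize(record: dict) -> dict:
    """Rename known channel labels; pass unknown labels through unchanged."""
    return {LABEL_MAP.get(k, k): v for k, v in record.items()}

raw = {"EngSpd": 2100, "Temp1": 88.5}
print(standardize(raw))
```

Once every file shares one labeling system, the search engine in step two can index on channel names that actually mean the same thing across work cells.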
The power of managing data saves valuable time
Our solution moved the processing of more than 1,000 data files from individual engineering systems to a server-class machine (32 cores). Before the Viviota implementation, engineers in a single test cell typically spent five hours per week locating data and five hours processing and analyzing the data.
With TTI and the Data Management Software Suite, the time to locate and analyze data went from 10 hours to seven minutes. Although this initial project was aimed at one automotive work cell (powertrain), we’re eager to replicate this success in other work cells such as engines, brakes, and transmissions.
This blog is part of a series with our Alliance Partner, Viviota. Find out how NI Alliance Partners are uniquely equipped and skilled to help solve your toughest engineering challenges. Alliance Partners are business entities independent from NI and have no agency, partnership, or joint-venture relationship with NI.
Defining a test strategy that reduces your costs and maximizes the efficiency of your production is tough, but building your test system can be even more challenging. We’ve put together four on-demand webinars to help you, whether you are building from scratch or augmenting your current system.
Learn from NI test engineers and guest presenters from Bloomy Controls Inc. and Bose Corporation as they share their experiences. These sessions are on demand, so tune in when it’s best for you!
What I Wish I Had Known Before I Started Deploying Test Systems
Gain effective approaches to developing a successful deployment process for test systems with this 15-minute webinar presented by James Kostinden, lead software test engineer at Bose Corporation. Watch now.
Switching, Mass Interconnect, and Fixturing Considerations
Our own David Roohy, systems engineer at NI, helps you learn about different switching architectures and ways to determine the best switching strategy for your test system needs in this 19-minute presentation. Watch now.
Improving ATE Test Sequence Adaptability Using HALs and MALs
Explore common options for implementing hardware abstraction layers and measurement abstraction layers, from out-of-the-box drivers to object-oriented solutions, in this 25-minute webinar by Grant Gothing, ATE business unit manager at Bloomy. Watch now.
Thermal and Power Planning of Automated Test Systems
Patrick Robinson, principal test engineer at NI, shows you best practices for planning cooling and power systems in your automated test systems in this 21-minute session. Watch now.
A critical gap has emerged as software defined radios (SDRs) become more common in remote applications like large deployed antenna arrays, dense urban areas, and active battlefields. Situations like these require remote operators to be confident the device is functioning and hasn’t exceeded critical operational or environmental parameters.
In response to this challenge, Ettus Research, a National Instruments company, has developed an SDR that improves remote management, maintainability, and reliability in deployed environments.
Let’s explore the USRP N310 features:
A four RX/TX channel SDR in a half RU form factor, so you can easily and densely scale it to fit the size of the application
The ability to update firmware, recover from errant software or firmware updates, and push critical patches to multiple deployed radios
Ethernet-based synchronization to precisely synchronize devices reliably and securely when radios are geographically distributed or in GPS denied environments with a high risk of jamming and spoofing