More complexity, more flexibility - today’s test managers face unprecedented demands in test system management. Here are the improvements organizations are looking for in test system management in 2017.
In early December, I attended the 3GPP TSG RAN #74 plenary meeting in Vienna, Austria. This was the last plenary meeting before the official 5G work item (WI) kicks off at the RAN #75 meeting in Dubrovnik, Croatia, next March. The 3GPP membership has poured significant investment into the study of new technologies and methods to meet the 5G architecture requirements, and with the first big milestone just three short months away, here are a few high-level takeaways from the 3GPP workshop covering 5G.
First, many companies continue to push their concepts and technologies for inclusion into the first WI. However, time is running out. In March, the 5G Phase 1 WI will start and serve as the base for the initial 5G specification (3GPP Release 15). Although the 3GPP is planning for Phase 2 to start 18 months after Phase 1, use cases or technologies not included in Phase 1 must wait an additional 18 months, which could be commercially challenging for some.
On the opposite side, the 3GPP leadership proposed narrowing the scope of some of the work in Phase 1 to increase the probability of meeting the March deadline and, ultimately, the September 2018 finalization goal. Although no consensus was reached in Vienna, it is clear something has to give in order to move forward. With time as the ultimate equalizer, innovation may need to be tempered with the reality that the study items must be completed and consensus reached before the definition phase can begin.
In parallel, the 3GPP continues to evolve LTE 4G particularly for the IoT (NB-IoT), MTC (LTE-MTC), and V2X use cases. In fact, some have proposed delaying 5G work related to these use cases to assess whether these new LTE evolved technologies can address the IMT-2020 requirements. The 3GPP has already signaled that a primary use case for Phase 1 will be eMBB (enhanced Mobile Broadband), and this may be the major achievement of the Phase 1 work.
Also of interest, eMBB is looking more like a mmWave or cmWave system utilizing multi-carrier OFDM and up to 8 component carriers with a minimum of 100 MHz of bandwidth. Consensus was reached on channel coding, with LDPC proposed for data and polar coding targeted for the control channels. Requiring two coding methods is curious, because a mobile device must then implement both in the physical layer and switch between them depending on the state of the link - increasing cost and adding complexity. Although each method has its merits, it will be interesting to see whether the status quo goes unchecked in March.
Finally, the 3GPP came to consensus on 5G terminology. The new 5G physical layer will be officially named “NR,” for New Radio. The new 5G core network will be called “5G CN,” and a connection between NR and 5G CN will be named “NG.” (I think it is safe to say that the marketing experts were conspicuously absent from this discussion.) On to Dubrovnik!
This blog originally appeared in Microwave Journal as part of the 5G and Beyond series.
An interview with James Truchard (Dr. T) and Jeff Kodosky, co-founders of NI, on our 40th birthday as a company.
When did you know you wanted to be an engineer? And how did you meet?
Dr. T: “I was one of the Sputnik kids. Sputnik had just been launched in 1958. I graduated from high school in ‘60. And Wernher von Braun, the big hero of the US program, was a physicist, and so that meant I wanted to be a physicist, too, not really knowing what physics was or what it was about. So I got a bachelor’s and master’s in physics. Working at the University of Texas, I realized that, you know, you have to really, really be smart to be a great physicist because it’s very hard. Plus they had all these mysterious particles that they talked about and I couldn’t always see them. So I gravitated towards more pragmatic things in engineering and started designing circuits. The last year of my master’s, I was working full-time. Then I continued to work full-time at the lab and in seven years, one course at a time, I got my Ph.D. in electrical engineering.”
Jeff: "I didn’t! I thought I wanted to be a physicist, a theoretical physicist.”
Dr. T: “Jeff came to Texas when Texas was really the star because we were building the Super Collider and a lot of funding was coming to Texas, and everyone was excited about what physics could do. I distracted Jeff from his career as a physicist.”
Jeff: “I grew up in New York, but I had this professor who said, ‘Apply to UT Austin, they’ve got a great graduate school in physics,’ and so that’s what brought me down here. I started as a TA and then discovered I didn’t like teaching, which should have been the first sign that I was in the wrong career. I’d heard about Applied Research Laboratories (ARL) because I knew they hired lots of students part-time there. Dr. T interviewed me and hired me, and so, basically, Dr. T is the only person I’ve ever worked for. When I started to work at ARL, that’s when I got exposed to the PDP8 and started thinking about software. Prior to that, programming was just Fortran or assembly language…it wasn’t very interesting. But when I got to play with the computer myself, that’s when I guess the engineering side started coming out. I realized it’s a lot more fun building things than theorizing about them. I would say I was probably more a computer programmer than anything else, and Jim came up with lots of ideas, and has always come up with lots of ideas, and we would work on experiments together.”
Dr. T: “We started meeting early '76. We made a list of ten different projects we could do…and then we voted, which I view as a very random process for deciding. We said, ‘Okay which is the best idea?’ and we picked the instrument interface. I always say that was lucky because it was right between computers and instruments…what better place to be if you want to revolutionize instrumentation with computers? It gave us access to a customer base of scientists and engineers. You know, in the rearview mirror, it all looks very simple, like success just came to us without any effort, but there were 23 companies making GPIB controllers in that timeframe when we started the…Hewlett-Packard had HP2000 computers with interfaces to them and we essentially saw how we could do the same thing for the PDP11, and that’s what we did.”
When did you get your big break?
Dr. T: "In November of 1979, I started full-time and I went on a sales call that first week with our rep up in New England. We visited Brown & Sharpe, had a very cordial visit, and it looked like they were going to use our product. When we got home, we had gotten a $90,000 order for this bus extender. And so we built it, we shipped it, and with the profit, we bought Jeff a PDP-11/44 computer and a $20,000 copy of the Unix operating system…”
Jeff: “..and we were off and running.”
Dr. T: “We were off and running. And then, two years later, we get a call from the purchasing agent saying they had made a mistake. They were supposed to get a quote, not place an order…so that’s where I came up with my saying, ‘nothing beats dumb luck,’ and of course don’t exclude it. That really kick-started us, allowed Jeff to buy his computer, us to start full-time, and also for me to buy the reference books I wanted. I splurged at the end of that year to buy reference books.”
Jeff: "To clarify, when I started working for Jim it was '73. It was '76 when we began this big measurement system project at Applied Research Laboratories (ARL), and that’s also when Jim mentioned someone could start a business building an interface that would connect HP instruments to popular computers. So, we were working full time at ARL on this massive measurement project for the navy, and then moonlighting on NI to get it going— designing and building GPIB boards. We started designing one board in the spring and shipped it that fall. That was design, development, debugging, manufacturing…all while we were moonlighting. It boggles the imagination to see what we were able to accomplish in that short time!”
What makes this new version so special? For starters, we added NI Linux Real-Time capability for all software defined radio (SDR) products. This added capability empowers you to develop real-time algorithms for execution on the NI Linux Real-Time operating system, work with other tools to move up the protocol stack to MAC and network layers, and access the vast repositories of open source tools and technologies needed to build complete system prototypes.
We also introduced the MIMO Application Framework, a fully configurable, parameterizable physical layer written and delivered in LabVIEW application source code that helps build massive MIMO prototypes.
We’re excited to share this LabVIEW update that allows you to be more efficient and have the time to focus on what you do best – creating new technologies and solutions for future 5G systems.
The National Instruments Engineering Impact Awards recognize engineers and students who excelled in developing systems that solve problems across a range of categories.
The 2016 Engineering Impact Awards received nearly 100 submissions from almost 200 authors around the globe. NI’s technical panel of experts reviewed the papers and narrowed it down to just 15.
And the winners are…
2016 Customer Application of the Year
The University of Bristol and Lund University used the NI MIMO Prototyping System to test the feasibility of massive MIMO as a viable technology for bringing greater than 10X capacity gains to future 5G networks. In doing so, they implemented the world’s first live demonstration of a 128-antenna, real-time massive MIMO testbed and set two world records in spectrum efficiency.
Their massive win at the 2016 Engineering Impact Awards must be another new world record, as they took home five separate awards in recognition of their 5G wireless achievement!
Not only did Bristol & Lund win the 2016 Customer Application of the Year Award, they were also the winner of the Wireless Communications Award, the Hewlett Packard Enterprise Big Analog Data Award, the Xilinx FPGA Award, and a special Engineering Grand Challenges Award sponsored by NI’s Dr. T himself for their commitment to grand engineering challenges.
Congratulations to PhD student Paul Harris and Steffen Malkowsky for their success at the 2016 Engineering Impact Awards!
Taking home the Transportation and Heavy Equipment Award for its amazing work on the Crossrail project alongside Bombardier is Frazer Nash UK.
Authored by Senior Consultant Colin Freeman, the paper explains how model-based design techniques were used to optimize everything from requirements capture, through design, to validation and verification testing at the sub-system and system levels.
The Train Zero test facility uses NI VeriStand and PXI both for integration testing of train systems and for validating the models, so any changes made to the models can easily be revalidated in the same environment in which they will later run.
Congrats to Colin Freeman! Train Zero won the Transport category at the #NIWeek 2016 Engineering Impact Awards
The proud owner of the Intel Internet of Things Award is V2i from Belgium. Using NI CompactRIO hardware and several sensors, the team created a real-time measurement and diagnostic tool that determines welding quality and avoids unplanned production stops due to weld tearing.
There were many more amazing projects this year that inspired us about the social impact of engineering! Every year we are blown away by the creativity, especially that of the young engineers, so NI shines a spotlight on these bright and talented people with the Student Design Competition Award.
This year’s winner was the University of Leeds with Project ALAN. This multidisciplinary team is helping stroke survivors regain lost muscle control with a commercially viable, robotic rehabilitation system.
Team ALAN with the people who helped make it happen!
Check out all the category winners and finalists and read their award-winning papers here: http://bit.ly/2bmP3nu
This summer we commissioned research agency IDC to study best practices for internet of things (IoT) implementations and how companies can prepare themselves for IoT-based operations.
Here are the best practices for taking on any IoT project as a business:
Have a clear understanding of business objectives and what business value an IoT project will deliver.
Start with an objective that has organizational pull and already has identified business value.
Have an executive champion who will make the project a priority.
Have a start-up mentality. Start small, such as with a pilot project, and establish clear milestones.
Build in security from the start. Security and privacy concerns are the number one hindrance to the deployment of an IoT solution.
Own the data that will result from the project, and know if you’ll manage it yourself or will need to have others manage it for you.
Connect the IT and operations teams in your org to the end customer that will be served by the IoT project. This connection along the value chain of your project provides a level of trust that will smooth its approval, implementation, and use.
Small and medium-sized businesses with limited IT departments should plan on having systems integrators and other partners work onsite as much as possible.
Use products based on standard platforms rather than custom platforms whenever possible. Standards help ensure the compatibility and scalability of the end solution.
Use products that are flexible and extensible. Ideally based on software, such products can adapt after their initial deployment to evolving requirements with no hardware changes.
Companies that have already worked to define and implement IoT projects and tap into the IoT’s value are trailblazing an immature and evolving environment.
By applying these best practices for implementing IoT in your org, you can access the benefits of this huge emerging market while avoiding the significant pitfalls experienced by these trailblazers.
“I'd like to think that if actual engineers were involved in more projects, we wouldn't live in a world where it's a given that most websites, applications, apps, and embedded systems are poorly designed, overly buggy, and insecure. And though one would like to imagine that all safety-critical systems, at least, are created under the aegis of engineers and engineering principles, I have my doubts.”
This is one of the reasons we exist. We supply an integrated software-based platform that enables engineers to do exactly what Mr. Dunn states. Our choice in this endeavor isn’t to explore a path where engineers needed to become software developers. Instead, our mission is to create an engineering system design tool that empowers engineers to build world-class, complex, mission-critical, software based systems.
LabVIEW is at the core of our engineering platform, and when coupled with our modular hardware platforms, becomes a gateway to innovation that engineers the world over are using to make products safer, get to market faster, and accomplish amazing things.
LabVIEW simplifies the development of complex engineering applications. Its native graphical language uses a concept called dataflow to determine the order of execution, and it combines that native language with an open interface that integrates code from other software approaches. With this, we ensure that engineers can choose the approach they’re most familiar with for any individual component of the application.
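LabVIEW itself is graphical, but the dataflow idea behind it can be sketched in a few lines of Python. This is purely an illustrative toy scheduler (not NI's implementation, and the graph and node names are invented for the example): each node fires as soon as all of its inputs are available, so execution order falls out of the data dependencies rather than a written sequence of statements, and independent nodes are naturally eligible to run in parallel.

```python
def run_dataflow(nodes, inputs):
    """Toy dataflow executor.

    nodes:  {name: (function, [names of inputs])}
    inputs: {name: initial value} for the graph's source values
    Returns a dict of every computed value, keyed by node name.
    """
    values = dict(inputs)   # values produced so far
    pending = dict(nodes)   # nodes that have not fired yet
    while pending:
        # A node is "ready" once every one of its inputs exists.
        # All ready nodes are independent and could run concurrently.
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise RuntimeError("cycle in graph or missing input")
        for name in ready:
            func, deps = pending.pop(name)
            values[name] = func(*(values[d] for d in deps))
    return values

# Example graph: two independent scaling nodes feed an add node.
# "scaled_a" and "scaled_b" have no dependency on each other, so
# a real dataflow engine could execute them in parallel.
graph = {
    "scaled_a": (lambda x: x * 2, ["a"]),
    "scaled_b": (lambda x: x * 3, ["b"]),
    "sum":      (lambda x, y: x + y, ["scaled_a", "scaled_b"]),
}
result = run_dataflow(graph, {"a": 1, "b": 2})
print(result["sum"])  # 1*2 + 2*3 = 8
```

The point of the sketch is the scheduling rule, not the arithmetic: nothing in the graph says "run scaled_a first" - the executor discovers the order (and the available parallelism) from which values each node consumes.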
We’ve invested 30 years into LabVIEW, making it one of the most productive tools on the planet. And we’re excited to share that the best is yet to come.
When paired with our software defined radio (SDR) hardware, our new MIMO system provides a well-documented, reconfigurable, parameterized physical layer written and delivered in LabVIEW source code - enabling researchers to build both traditional MIMO and Massive MIMO prototypes.
Our LabVIEW Communications MIMO Application Framework lets you develop algorithms and evaluate custom IP to solve a lot of the practical challenges associated with real-world, multi-user MIMO deployments. Scalable from 4 to 128 antennas, the MIMO Application Framework - when used with the NI USRP RIO and NI PXI hardware platforms - allows you to create small to large scale antenna systems with minimal system integration or design effort.
Researchers can use the system out of the box to conduct Massive MIMO experiments and seamlessly integrate their own custom signal processing algorithms in a fraction of the time compared to other approaches, speeding up the overall design process as the wireless industry races toward 5G.