

Value of Side-by-Side Comparisons?

I don't know a whole lot about the measurements... or rather, I don't remember a whole lot.  Back in school, this would have made a lot more sense to me.

 

But I would be pretty skeptical of side-by-side presentations for the reasons already beaten to death.  My gut is that Agilent's reaction did them no favors, if only politically, as it made it look like they were resorting to silencing you because they lacked a response on the merits.  An invitation to a third-party moderated comparison and debate would have served them better (assuming they actually believed in their product).

 

All that said, as a consumer of these products (or other instruments like them, anyway), I think vigorous competition between producers is a wonderful thing!  Iron sharpening iron, as it were.  Y'all don't have to be friends and it's better for the rest of us if you aren't.

Message 21 of 28

Jeff and PiMaster - Thank you for your thoughts. I definitely agree that competition is healthy, not only for you as a customer but even for us, since it keeps us on our toes :). It is a little disappointing (and flattering at the same time) that Agilent did not defend their engineering through a technical debate, or even question ours; instead, they chose non-technical methods to get the demo removed from the show. (I had spent an hour with them at the trade show explaining the demo in full detail. They said they would get back to me and took my business cards, but never did.) We definitely welcome a healthy debate, whether on the PXA vs. the NI 5665 or a higher-level debate on the philosophy of a boxed instrument vs. a PXI instrument, but unfortunately we have not been given that pleasure yet.

Taking a step back, though, it seems that a lot of you don't like the fact that the comparison came from NI, even if it was correct, and I really appreciate this feedback. Does that mean one of you is signing up to do the comparison for us next time? ;)

In all seriousness, we would provide the hardware + software + support if one of you is willing to do a comparison and report on it.

Please keep the comments coming. This is exactly the kind of feedback we are looking for. Thanks.

Raajit L
National Instruments
Message 22 of 28

 


@Raajit Lall wrote:

Does that mean one of you is signing up to do the comparison for us next time? ;)

 


I already do!  That is, I have a personal and vested interest in providing optimal solutions for my clients' individual needs, and I have often performed such comparisons of products, presenting the strengths and weaknesses of each along with an analysis of suitability for alternate potential end use cases.  This is a test engineer's natural element.  My reputation suffers when I am in error or misjudge a customer's tolerance for novel solutions. (I've proposed a few that worked on paper but had never been done before.)

 



In all seriousness, we would provide the hardware + software + support if one of you is willing to do a comparison and report on it.


An independent industry organization with a pool of case studies, white papers, and application notes, moderated by a forum-type community of vested end users, applications engineers, integrators, and developers... just thinking out loud... perhaps even funded to consult on individual cases.  Worst case for any T&M manufacturer is that they would be encouraged to get rid of the dogs that suck up resources better invested in marketing their competitive products.  Sort of a marriage of the IVI consortium, the IDN, and real-world testing.


 


"Should be" isn't "Is" -Jay
Message 23 of 28

Whenever you do side-by-side comparisons, it's a question of the boundary conditions.

For 'standard' products or use cases, ranging from number crunchers (the Top500 list) and office PCs to MPG or l/(100 km) figures for cars, there are standards for the test. This leads to a situation where these products are mostly optimized to those standards to get good test results. But how many customers know the standard? Will it represent their individual use case?

(Not talking about the process of establishing a standard, with all those lobby interests, etc. 😉)

 

The big benefit of virtual instruments is the software and the bricks of hardware you can fit to your needs. And NI wouldn't be that big if there were only 10 (or 100) standards. The individual list of requirements for the task to be solved sets the standard.

 

For simple tasks (a single brick), a side-by-side comparison is useful, but for a complex system? It is already a complex measurement task, hard to explain and even harder to define in detail.

And speaking frankly, whenever one of the big players loudly claims to be better than someone else, I always hope for the smirk when they fail.

 

If you want side-by-side comparisons, let others define a target: say, a mobile phone company providing a test case (a hardware EOL test), a time, and a location (an exhibition), and make it open to everyone. (It might even be hard to define the metrics.) But that is nothing NI can start.

And it would be interesting to see whether R&S, NI, Agilent, Tektronix, etc. would jump in, but I'm afraid they wouldn't. Would you (NI) have published the results if they hadn't been positive for your product?

 

 

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 24 of 28

I don't know if you are still watching responses to this article; however, I want to add that I welcome side-by-side comparisons. As engineers, we are tasked with evaluating various pieces of equipment when putting together specifications for systems.

 

I immediately recognized that some of the Agilent performance hits came from having to query the device over Ethernet, GPIB, or serial; this can be discounted in many situations unless you require the fastest acquisition times possible. Chances are, if you are looking at a boxed system, you are not very concerned with acquisition data transfer times.
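As a rough illustration of this point (my own sketch, not something from the thread): assuming a SCPI-capable instrument reachable through VISA, with a placeholder resource string, the standard IEEE 488.2 *OPC? query can be used to measure the bare bus round-trip time and decide whether transfer overhead even matters for a given test.

```python
# Hypothetical sketch: measure per-query round-trip latency to a SCPI
# instrument over Ethernet or GPIB using PyVISA. The resource string is
# a placeholder; substitute the address of your own instrument.
import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("TCPIP0::192.168.0.10::INSTR")  # or e.g. "GPIB0::18::INSTR"
inst.timeout = 5000  # ms

N = 100
t0 = time.perf_counter()
for _ in range(N):
    # Standard IEEE 488.2 query; with the instrument idle, this
    # approximates pure bus round-trip latency.
    inst.query("*OPC?")
elapsed = time.perf_counter() - t0
print(f"Average bus round trip: {elapsed / N * 1e3:.2f} ms per query")
```

If that figure comes out in the low milliseconds, as it typically does over Ethernet or GPIB, it only becomes significant when a test makes many calls per measurement.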

 

This leaves us with quality of measurement, which is similar for both devices. Therefore, when evaluating two devices with similar performance, the decision comes down to price, form factor, and quality of service.

 

I performed a similar comparison two years ago between voltage bridge acquisition systems for a customer. In that case, the competing vendor's price per acquisition channel was significantly higher while the performance was not significantly better. The other consideration was form factor: we looked at a PXI-based system, which was easily portable, whereas the competing equipment required a small crane to move it from lab to lab. In the end, the decision was made to purchase the NI equipment. The customer has been very happy, and other departments there have since asked my company to quote and build systems similar to the one deployed.

 

As for Agilent's less-than-professional attitude at the show, I can understand that one too. They do not want their equipment shown at any other booth, much less a booth where the demonstration shows their equipment to be lacking. The only option there was to white-box the "competitor's" instrument and remove all references to Agilent from the demo. That probably would have been met with great skepticism unless you had a customer who wanted to purchase your equipment, at which point they could have been invited to a closed-door meeting to discuss the specifics of the test setup. You are probably fortunate the corporate lawyers didn't get involved. None of this makes Agilent's actions right or justified, but that is the reality of the situation.

 

Drew

Message 25 of 28

Like most others, I find side-by-side comparisons of limited value.  Those set up by specific vendors seem invariably arranged to give the vendor's equipment/software an advantage over the competition - by judicious choice from the competitor's product range, by not taking advantage of some of the available settings, and so on.  And it's rare to find a comparison that's NOT set up by a vendor.

 

Very few of the comparisons I've seen in any field cover the exact circumstances of the job for which I need the equipment/software.

 

On the other hand, quite often when a comparison is shown, I have no need for that specific task at the time.  By the time I do need that sort of equipment or software, time and technology have usually well-and-truly marched on.

 

When I am in a position to make such choices, I prefer to do my own comparisons tailored to the job at hand with the current available offerings.

 

For me, most side-by-side comparisons are little more than entertainment.

 

As for standards:  The wonderful thing about standards is that there are so many from which to choose...

 

Regards,


Geoff

--
Geoff Field, BE, CLAD, MIEAust
Professional geek, amateur stage-levelling gauge
Message 26 of 28

@Drew.Rankin wrote:

 

I immediately recognized that some of the Agilent performance hits came from having to query the device over Ethernet, GPIB, or serial; this can be discounted in many situations unless you require the fastest acquisition times possible. Chances are, if you are looking at a boxed system, you are not very concerned with acquisition data transfer times.

 

 

Drew


Drew, you bring up a great point here.  The time taken to transfer a single point across Ethernet or GPIB is negligible compared to the processing time; however, when the instrument starts making multiple calls over the bus, you start seeing a somewhat significant impact.  In this scenario, though, the bulk of the time taken by the Agilent PXA is processing time (close to 420 ms for 10 averages), and the communication time is pretty small (in the ms range).  This is mostly expected, since the processor technology in boxed instruments can become outdated pretty easily and upgrading processors can be a hassle.  The PXA, for example, uses a dual-core processor, whereas we used the PXIe-8133, which has a quad-core Intel Core i7 processor.
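To make that processing-vs.-transfer split concrete, here is a hedged sketch (mine, not part of the original demo) of how one could time the two phases separately with PyVISA. The SCPI commands follow the generic SENSe:AVERage, FORMat, and INITiate subsystems; the exact syntax varies by analyzer, and the resource string is a placeholder.

```python
# Hypothetical sketch: split total measurement time into the instrument's
# processing time (sweep + averaging) and the bus transfer time. Command
# syntax may differ on a real analyzer; check your instrument's manual.
import time
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.0.20::INSTR")  # placeholder address
sa.timeout = 20000  # ms; generous for averaged sweeps

sa.write(":SENS:AVER:COUN 10")   # average 10 sweeps, as in the demo
sa.write(":SENS:AVER:STAT ON")
sa.write(":INIT:CONT OFF")       # single-shot sweeps
sa.write(":FORM:DATA REAL,32")   # binary trace format (syntax varies)

t0 = time.perf_counter()
sa.write(":INIT:IMM")
sa.query("*OPC?")                # blocks until the averaged sweep completes
t_process = time.perf_counter() - t0

t0 = time.perf_counter()
trace = sa.query_binary_values(":TRAC? TRACE1", datatype="f")
t_transfer = time.perf_counter() - t0

print(f"Processing: {t_process * 1e3:.0f} ms, "
      f"transfer: {t_transfer * 1e3:.1f} ms ({len(trace)} points)")
```

On a setup like the one described above, you would expect t_process to dominate (hundreds of milliseconds for 10 averages) and t_transfer to stay in the low milliseconds, which is why the processing hardware, not the bus, is the deciding factor.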

 

Overall, because of the processor technology differences between boxed instruments and PXI, we see at least a 5x improvement (15x in this case, for WCDMA ACLR) across the majority of RF tests.

 

I am glad that you welcome side-by-side comparisons.  We were hoping that a demo like this would, at the very least, be a first step in the proof-of-concept (PoC) phase of a purchasing decision.

Raajit L
National Instruments
Message 27 of 28

Side-by-Side Comparisons Value: High (very useful)

 

However, with cynical mode enabled, there is only going to be one result:

 

'The instrument of the company performing the side-by-side comparison is going to come out on top!'

 

For me, the key here is independence.  The results would carry more weight if the comparison were done by a recognised external test body.

Message 28 of 28