Hello,
I must thank everyone who followed this discussion.
Meanwhile I could not reply because I did not have time. In the
frequency domain things are not good either ... 🙂 my bandwidth is very,
very small.
In the meantime I did more testing in my application and realised that
my problem is far more complex than simple "plotting".
I have used several Windows tools, such as Visual Basic, Delphi and LabVIEW.
The idea behind the speed test is to plot a large amount of data
directly from an array in memory (not from disk, to avoid the
disk-access bottleneck).
My starting point was my own (and very slow) direct plotting with a
custom-made reduction/compression technique. That is what led me to ask
this question.
I am now convinced that the problem lies in my graphical reduction
technique.
Without any mathematical spline technique the plotting is a bit slow in
either VB or Delphi. If the reduction technique is combined with
polynomial curve fitting the speed increases a lot ... but my
custom-made program is still slow.
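For what it is worth, here is a minimal Python/NumPy sketch of the kind of
per-pixel min/max reduction that seems to be behind the fast commercial
plots; the 1024-bin count and the random test data are just illustrative,
not taken from any of the tools mentioned:

```python
import numpy as np

def minmax_reduce(data, n_bins):
    """Reduce a long 1-D signal to interleaved (min, max) pairs,
    one bin per horizontal pixel, so narrow spikes still show up."""
    n = len(data) - len(data) % n_bins      # drop the ragged tail
    bins = data[:n].reshape(n_bins, -1)     # one row per pixel column
    # Interleave min and max so the plotted polyline sweeps the full
    # vertical range of each bin.
    reduced = np.empty(2 * n_bins)
    reduced[0::2] = bins.min(axis=1)
    reduced[1::2] = bins.max(axis=1)
    return reduced

# 1 million points reduced to 2 * 1024 = 2048 points for a 1024-px-wide plot
signal = np.random.rand(1_000_000)
small = minmax_reduce(signal, 1024)
```

The plotting library then only has to draw a couple of thousand points
instead of a million, while single-point spikes remain visible.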
Obviously these techniques, and many more, are embedded in commercial
products. I tried the TeeChart Pro demo version and the results are much
better. I also think LabVIEW is a very good choice for this sort of
application.
In case anyone is interested, the example in the TeeChart ActiveX is
very good, very enlightening.
The specific problem, as I said earlier, is not the graphics speed but
the fact that the data are transferred from a remote computer over the
LAN, and the Ethernet cards I have are limited to 10 Mbit/s!
The system also extracts data from a VME control module ... not simple.
Anyway, I am currently working on data reduction at the source ... the
remote machines ...
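For a sense of scale, a quick back-of-the-envelope Python calculation;
the block size, 4-byte samples and ~70% effective link utilisation are my
assumptions, not measurements:

```python
# Hypothetical numbers: one block of 1 million 4-byte samples
# over a 10 Mbit/s Ethernet link at an assumed 70% utilisation.
n_samples = 1_000_000
bytes_per_sample = 4
link_bits_per_s = 10_000_000
efficiency = 0.7                      # assumed effective throughput

payload_bits = n_samples * bytes_per_sample * 8
seconds = payload_bits / (link_bits_per_s * efficiency)
print(f"{seconds:.1f} s per transfer")   # ≈ 4.6 s
```

At several seconds per million samples, reducing the data before it
crosses the link clearly pays off.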
Maybe it will work,
Thanks once again,
Best regards,
Pedro
In article <382cbe18@newsgroups.ni.com>,
"Christian Altenbach" wrote:
>
> David wrote in message
> news:382CA0A4.F38C7EDA@visto.com...
> > See your text...
> >
> > Christian Altenbach wrote:
> > >
> > > Just a note:
> > > most monitors have less than 1024 columns and probably around 1 million
> > > pixels. Unless you want to fill your entire screen with points, the
> > > question boils down to:
> > > How much computation do you want to spend to reduce the data so you
> > > still get a visually accurate representation on your screen? If you
> > > just shuffle 1 million points to your graphics card, you're doing
> > > something wrong.
> > > LabVIEW does a great job in data reduction, keeping all key features
> > > (e.g. local spikes) in view.
> > >
> > > Here are some numbers (no tweaking with debugging/priority settings)
> > >
> > > --> 1 million random array to a waveform graph: less than 2 seconds
> > > (AMD K6-2, 400 MHz)
> >
> > But you don't show all these points at the same time. As you pointed
> > out yourself, the 1024-wide screen allows display of only 0.1 percent
> > of the data. Is it like shifting through the data like on a strip chart
> > recorder?
> >
> > -David
>
> David
>
> This was just to provide a rough number to Pedro; he was asking: "To be
> more precise ... How long does it take LabVIEW ...". I actually send
> 1000000 points to the graph terminal and leave it up to LabVIEW how to
> deal with the display.
>
> As I mentioned, the more important question is: If you graph a 100000
> point array to an autoscaling waveform graph, will you still see the
> essential features of the graph? How does LabVIEW map the 1000000 points
> into the <1024 "one pixel-wide" bins? Try it with a random array (0..1)
> that you pipe through 1/x. There are rare spikes that are only one point
> wide, but LabVIEW shows all of them. Each bin shows the full min-max
> range in that particular data segment.
>
> So, yes, of course not all points are shown, but at any time you can
> zoom into any small area of your graph to see any detail you want, even
> if the program is no longer running!
>
> Cheers
> Christian
>
>