LabVIEW


Prime Factor Coding Challenge


fahlers wrote:
You beat me with my own (sorry: with Dan Shanks') algorithm. Did you discover it independently?

I knew about it before you announced that you used it to achieve your first impressive time, yes. I was busy implementing Brent's Rho while you were doing SQUFOF, I suspect.  I had started reading Jason Gower's Ph.D. dissertation around the time your first timings and comments were posted, and my priorities changed a bit after that, I'll admit.  ;-)  I was surprised at first at how much more efficient it was - I believed at the time that the Rho method could compete.  Brent's Rho generally requires only a few hundred to perhaps a thousand iterations to find factors of this size, while Shanks' algorithm sometimes needs tens of thousands.  I thought that this would negate the extra work involved in dealing with higher-precision numbers.  I think that both algorithms are commonly used to factor the relations in the quadratic sieve because of their speed with numbers of this size.
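For anyone curious how compact Brent's Rho actually is, here is a rough Python sketch (the challenge entries were LabVIEW VIs, of course, so this is only an illustration; the batch size `m = 128` is an arbitrary choice of mine):

```python
import math
import random

def brent_rho(n):
    """Return a nontrivial factor of composite n using Brent's
    cycle-finding variant of Pollard's rho. n must be composite."""
    if n % 2 == 0:
        return 2
    while True:
        y = random.randrange(1, n)
        c = random.randrange(1, n)
        m = 128                 # steps to batch between gcd calls
        g, r, q = 1, 1, 1
        x = ys = y
        while g == 1:
            x = y
            for _ in range(r):
                y = (y * y + c) % n
            k = 0
            while k < r and g == 1:
                ys = y          # remember position in case we overshoot
                for _ in range(min(m, r - k)):
                    y = (y * y + c) % n
                    q = q * abs(x - y) % n
                g = math.gcd(q, n)
                k += m
            r *= 2
        if g == n:
            # the batched gcd jumped past the factor; back up one step at a time
            g = 1
            while g == 1:
                ys = (ys * ys + c) % n
                g = math.gcd(abs(x - ys), n)
        if g != n:
            return g            # otherwise retry with a new random c
```

The gcd batching is the part that makes Brent's variant faster than the textbook rho: one gcd per `m` iterations instead of one per iteration.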

I imagine you would have at least equaled my execution speed had you had more time to develop your code.  I guess I managed to sacrifice a bit more sleep for the sake of this challenge!

I submitted several versions of my code, and here I've attached one of them - not sure if it's the one Bruce used as the final one or not.  I've never claimed to be a LabVIEW expert, so I'd welcome any critiques of it as well as questions, if there are any.  Written in LabVIEW 7.1. 

- Ben.

Message 161 of 186
Thanks for the code,

Since I only have 6.1, can someone save it for 6.1 please?  I'd like to peruse the code and have a weep while I think of all the time I "wasted" trying to get a bit more speed from the trial division method.  OK, I knew there were faster versions, but this one does ALL 100 primes before I can even do one (Statistically speaking - my first prime takes a long time due to the prime table generation).  Ouch.

And to ben,  the fact that you're relatively new to the coding challenges is OK.  Personally, I've only really taken part in 3 of them.  I like the open nature of the coding challenge in that you never know who's going to pop up with a fantastic solution to the problem.  It's easy to keep an eye on the usual suspects, but I find it refreshing if someone "new" comes along and shows us all how it's supposed to be done.

Would I be right in saying that this is the clearest winning margin in any of the coding challenges?

Shane.

PS please don't forget LV 6.1!
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 162 of 186

As 6.1

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 163 of 186
Code size and execution speed have never been positively correlated in any language.  The first real numerical calculation I wrote (in FORTRAN IV on a DECsystem 20 mainframe using a Tek 4014 graphics terminal for output) gave me an order of magnitude in speed for every doubling of code size.  It was an iterative solution to the many-body gravitational system problem.  Back when we had to write our own graphics drivers, execution speed was often far faster if we used fixed point routines using integers rather than floating point for things like circles.  This also caused massive code inflation, but much greater speed.  Dealing with large data sets in LabVIEW also produces some of the same problems - larger code gives much faster execution.

That said, however, I must admit that simple, small, easily-maintained code is always my goal.  I agree that it is better to give up some speed if the code is easy to maintain, provided you can afford to do so.

Bonus points: I wrote that first real numerical calculation my senior year as an undergraduate at Tulane.  Can you guess my approximate age?
Message 164 of 186

I believe the KL-10 came out in about 1979-1980. Those machines were 36-bit processors with up to 2 meg of multiported core memory.

I guess 45!

Ben

Message 165 of 186
Or 3*3*5 on this thread!

Lynn
Message 166 of 186

Nice comeback, Lynn!

I should also note that Pitt and CMU were still using their DECsystem-10s and 20s as late as 1990.

BTW:

The KL-10 instruction set was the most powerful instruction set I have ever worked with. Hyperbolic functions were implemented in machine code, and variable-length bit fields were supported (a "byte" could be defined as being from 1 to 36 bits long). It took a freight train to move one of those babies. If you had bad memory, you had to count rows of cabinets to find the right cabinet. They actually used light bulbs and not LEDs!

Ben

Message 167 of 186

Ok, brief tangent on a real-world application of prime factors that doesn't relate to cryptography or hashing.

I was working on a LabVIEW RT app a few years back where the data rate was too high to stream to disk or over TCP/IP.  Fortunately the duration was short enough that we could simply stuff data into RAM until the end of the test and write to disk afterward.

Due to a quirk/bug/anomaly/whatever, we found that we couldn't use the simple approach of allocating memory at the beginning of each run because RT didn't de-allocate fully at the end of the prior run.  We would regularly get only 2-4 runs in before having to reboot the RT box.  Eventually we found that instead of calling "Initialize Array", we were able to get memory allocation by calling "Reshape Array" with an empty array input.  By wrapping a loop and uninitialized shift register around the whole thing, we could perform the actual allocation on our first run after boot but then keep reusing the exact same memory space for all subsequent runs.  Voila!  No more need to keep rebooting the RT box!

Where do the primes come in?  Well, we had to pick an appropriate total size for the storage array.  In our case, the user could choose to "record" anywhere from 1 to 16 channels of AI data, using a potentially different number on each run.  So the question is, can we choose an array size that will exactly fit an integer # of samples no matter how many channels are in each sample?  And the answer is yeah, sure, just use prime factoring to determine the least common multiple of all integers from 1 to 16.  Leaving some of this as an exercise for the reader, the spanning set of prime factors is 2*3*2*5*7*2*3*11*13*2 = 2^4 * 3^2 * 5 * 7 * 11 * 13 = 720720.  So a 1D array with a dimension of 720720 (or some integer multiple of it) can be exactly resized into any needed 2D shape to accommodate the 2D data returned from an AI Read. 
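That arithmetic is easy to double-check with a one-liner (a Python sketch; `lcm_range` is my own name for it):

```python
from math import gcd
from functools import reduce

def lcm_range(n):
    """Least common multiple of 1..n, folding lcm(a, b) = a*b // gcd(a, b)."""
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, n + 1), 1)

# lcm_range(16) collapses to the prime factorization 2^4 * 3^2 * 5 * 7 * 11 * 13
```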

This allowed fast turnaround from one test to the next because we were no longer trying to allocate big contiguous chunks of RAM (which could take tens of seconds).  The Reshape simply tells LabVIEW how to interpret the row & column indices; it requires no manipulation of the actual memory at all.
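A loose Python analogy of the allocate-once pattern (all names here are hypothetical; the real thing was a LabVIEW loop with an uninitialized shift register feeding Reshape Array):

```python
BUFFER_LEN = 720720   # lcm(1..16): divisible by any channel count from 1 to 16

_buffer = None        # persists between runs, like an uninitialized shift register

def get_run_buffer(n_channels):
    """Return (buffer, samples_per_channel). Allocates only on the first call;
    every later run reuses the exact same memory. Returning a new
    samples-per-channel count is the analogue of Reshape Array: the data is
    untouched, only its interpretation as rows and columns changes."""
    global _buffer
    if _buffer is None:
        _buffer = [0.0] * BUFFER_LEN   # the one and only allocation
    return _buffer, BUFFER_LEN // n_channels
```

Because 720720 divides evenly by every channel count from 1 to 16, no run ever wastes a partial sample at the end of the buffer.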

Anyhow, I thought it was kinda cool to have another real-world use for small primes.  (The other one has to do with gearing -- it's often desirable for mating gears to have numbers of teeth that are mutually prime, i.e., no common prime factors.  This way each tooth encounters every other tooth before repeating.)
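The gearing aside works out the same way (a sketch; the function name is mine): a given tooth pair re-meshes after lcm(a, b) tooth engagements, so coprime tooth counts maximize that interval.

```python
from math import gcd

def teeth_pairings_before_repeat(a, b):
    """Tooth engagements before the same tooth pair meshes again.
    With coprime tooth counts this is a*b: every tooth on one gear meets
    every tooth on the other, spreading wear evenly (a "hunting tooth")."""
    return a * b // gcd(a, b)
```

Compare 13-and-20 teeth (coprime, 260 engagements before repeating) against 12-and-20 (gcd of 4, only 60), and the design rule falls out immediately.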

-Kevin P.

 

Message 168 of 186

Nice story Kevin,

I may use your reshape idea next week.

As I understand it, primes come into play with cicadas: they come in different broods, each dormant most of the time and emerging to reproduce only every prime number of years. That way the 7-year cicada only comes out the same year as the 13-year cicada once every 91 years.

Yep, primes are cool!

Ben

Message 169 of 186
Very close!  The time was the spring of 1981.  The answer is 2*23.
Message 170 of 186