07-08-2010 04:58 PM - edited 07-08-2010 05:03 PM
Here is Knuth's book.
TAOCP Volume 4 Fascicle 2, Generating All Tuples and Permutations (2005), v+128pp. ISBN 0-201-85393-0
Fascicles 0, 1, and 2 are spectacular. In fact, almost anything this guy produces (e.g. TeX) is fabulous.
The algorithm is as follows:
0. Sort the array in ascending order.
1. Find the largest m such that array[m] < array[m+1].
2. Find the largest n > m such that array[n] > array[m].
3. Swap array[n] and array[m].
4. Reverse the elements array[m+1...N].
If step 1 fails, you have reached the final iteration.
Should add: Step -1: Check for an empty string.
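The steps above (written here with 0-based indices rather than the 1-based indices in the list) can be sketched in C++ as a single in-place step function; the standard library's std::next_permutation implements the same idea:

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// One step of the algorithm above (0-based indices):
// returns false when step 1 fails, i.e. the array is already
// the last (descending) permutation.
bool next_perm(std::string& a)
{
    if (a.empty()) return false;                  // step -1: empty input
    int N = static_cast<int>(a.size());
    int m = N - 2;
    while (m >= 0 && a[m] >= a[m + 1]) --m;       // step 1: largest m with a[m] < a[m+1]
    if (m < 0) return false;                      // final iteration reached
    int n = N - 1;
    while (a[n] <= a[m]) --n;                     // step 2: largest n with a[n] > a[m]
    std::swap(a[m], a[n]);                        // step 3
    std::reverse(a.begin() + m + 1, a.end());     // step 4
    return true;
}
```

Starting from a sorted string (step 0) and calling this until it returns false visits every permutation in lexicographic order.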
Edit: I was forced to use IE, and it seemed to lose all linefeeds, à la iPhone. Back to Firefox to fix.
07-09-2010 07:17 AM
Thanks for keeping this thread alive Darin.
I must admit, I was blown away by the elegance and ease of Altenbach's solution. I had not thought of the Quotient and Remainder approach.
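For readers who, like me, hadn't seen it: the usual quotient-and-remainder trick is the factorial number system, where you repeatedly divide the permutation's index by (n-1)!, (n-2)!, ... and let each quotient pick the next element. I haven't seen Altenbach's actual diagram, so the helper name below is hypothetical and this is only a text-language sketch of that general idea:

```cpp
#include <cassert>
#include <string>

// Decode permutation number k (0 .. n!-1) of the characters in
// `items` by repeated quotient-and-remainder in the factorial base.
// Hypothetical helper; a sketch of the general technique, not the
// actual LabVIEW diagram from the thread.
std::string kth_permutation(std::string items, long k)
{
    std::string out;
    while (!items.empty()) {
        long f = 1;                               // (n-1)! for the current n
        for (std::size_t i = 2; i < items.size(); ++i) f *= static_cast<long>(i);
        long q = k / f;                           // quotient picks the element
        k %= f;                                   // remainder drives the next round
        out += items[static_cast<std::size_t>(q)];
        items.erase(static_cast<std::size_t>(q), 1);
    }
    return out;
}
```

Index 0 gives the sorted arrangement and index n!-1 gives the reversed one, so a single integer enumerates all permutations.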
Plus, I must admit that I did look at solutions from other languages, such as C/C++.
I would have gone along the lines of something like this: (of course, implementing it in LV)
// This program finds permutations using a recursive method,
// modified for Dev C++ from a wonderful article at:
// http://www.codeproject.com/cpp/cppperm1.asp
#include <iostream>
#include <cstring>
using namespace std;

void char_permutation(char str[], char append[])
{
    int length = strlen(str);
    if (length)
    {
        for (int i = 0; i < length; ++i)
        {
            // Copy str without the character at index i into str1
            char* str1 = new char[length + 1];
            int cnt;
            int cnt2;
            for (cnt = 0, cnt2 = 0; cnt < length; ++cnt, ++cnt2)
            {
                if (cnt == i)
                {
                    str1[cnt] = str[++cnt2];
                    continue;
                }
                else
                    str1[cnt] = str[cnt2];
            }
            str1[cnt] = '\0';

            // Extend append with the chosen character
            int alength = strlen(append);
            char* append1 = new char[alength + 2];
            strncpy(append1, append, alength);
            append1[alength] = str[i];
            append1[alength + 1] = '\0';

            char_permutation(str1, append1);
            delete[] str1;
            delete[] append1;
        }
    }
    else
    {
        cout << append << endl;
    }
}

int main()
{
    char str[] = "ABCD";
    char append[] = "\0";
    cout << "Original = " << str << endl;
    char_permutation(str, append);
    cout << "Done ........" << endl;
    cin.get(); // wait
    return 0;
}
07-09-2010 09:42 AM
@Ray.R wrote:
Thanks for keeping this thread alive Darin.
I must admit, I was blown away by the elegance and ease of Altenbach's solution. I had not thought of the Quotient and Remainder approach.
Plus, I must admit that I did look at solutions from other languages, such as C/C++.
I would have gone along the lines of something like this: (of course, implementing it in LV)
I am a sucker for questions like this one. By the time I got to it, there were already about 5 or 6 pages, and I see it was actually Norbert who made two excellent points (why do I automatically guess DFGray?). Quotient and Remainder and rolling the dice, both very nice.
Let's face it, almost everything has been done when it comes to these algorithms, but LV is unique in some ways. I happen to like Knuth's books, which give algorithms in terms of a hypothetical programming language. Another common approach is to find a C/C++ implementation, like the one in Ray.R's post. I think a lot is "glossed over" in the parenthetical remark (...implementing it in LV). Love it or hate it, LV has some real differences from C/C++. We see a lot of code on the forums that is a butchering of something that is natural in C/C++, so there is still an art to implementing an algorithm in LV while maintaining the beauty of the original code. Every so often we are fortunate to find ways in which LV is actually better suited to a particular algorithm than C/C++.
So, in this case I say the challenge doesn't end with the finding of the C++ code, it only begins.
07-09-2010 10:52 AM
Hi Darin,
That's my case exactly. I was looking at a text-based solution instead of a native LabVIEW solution.
My experience so far with both languages (C/C++ & LabVIEW) is that if I base LabVIEW code on code from C, it does get messy and is not optimized (for LabVIEW).
However, if I code a prototype version of the code in LabVIEW first, the resulting C code (generated from the LabVIEW code) is very nice and refined.
What's more, it may be that I suffer from Labviola-Lackalitus, which is derived from Textaphobia-Dementia.
But I'm not complaining 😉
07-09-2010 11:17 AM
The algorithm I used seems to be a bit harder to find these days, since it apparently disappeared from wikipedia (see old quote here). Interesting.... 🙂
Seems to be a hot topic, with ~500 revisions since 2007. 😮
07-09-2010 11:23 AM
And I thought you did it from scratch... 😉
Thanks for the link to the original thread. Interesting to read the algorithm from Wiki. Glad you had posted it then. 🙂
07-09-2010 12:36 PM