zprogd
Send me email: zprogd@gmail.com
For example, at the prompt from "scanf("%f",&a);" we type "2.6".
The float variable a = 2.6 looks in hex like 0x40266666
or in binary
01000000001001100110011001100110
Here:
0 - sign
10000000 - exponent (1 + 127 = 128)
01001100110011001100110 - mantissa (23 bits)
In little-endian memory the number 0x40266666 is placed as <66 66 26 40>; the byte 40 contains the sign bit and most of the exponent.
When we call printf() without a cast to integer:
printf("%f",a);
the compiler promotes the 32-bit float to a 64-bit double (the default argument promotion for variadic functions). In little-endian memory our double 2.6 is placed as <00 00 00 c0 cc cc 04 40>.
Here the last bytes <04 40> contain the exponent. I'm too lazy to explain the binary representation of a double in memory here.
When we call printf() with a cast to integer:
printf("%f",(int)a);
the compiler converts the 32-bit float into a 32-bit int. The float 2.6 becomes the integer 2. In little-endian memory our integer 2 is placed as <02 00 00 00>.
Then printf() parses the format string "%f". It wants a 64-bit double <00 00 00 c0 cc cc 04 40>, but instead it gets <02 00 00 00 00 00 00 00>. Why <00 00 00 00> after <02 00 00 00>? The behavior here is undefined; on my system these zeroes are the result of the "push edi" that saves registers at the beginning of the current procedure. This is very system dependent.
Anyway, printf() gets <02 00 00 00 00 00 00 00>. In this pattern all the exponent bits are "0", i.e. the biased exponent is the minimum, so the value is a denormal: a very, very small number with a lot of zeroes after the point, even if we use the maximum possible integer 0x7FFFFFFF.
With the default precision the format "%f" prints only six digits after the point ("0.000000", eight characters), so it can't show anything beyond the zeroes.
Another way is to raise the precision in the format string, as below:
printf("%.340f",(int)a);
And it's wonderful! You will see something like this:
0.0000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000000000000000
000000000000000000000000000000098813129168
249309
buf[100] is on the native C stack. When you call printAllCombinations recursively you copy your string every time; I use only one small buffer.
But that is not the main issue. The issue is that your algorithm is not clean. I think you need to rework if (N == 1) and the loop for (int j = 0; j < prev.length(); j++) in printAllCombinations.
#include <cstdio>

int buf[100];

void PrintAllCombinations(int N, int level);
void PrintBuf(int level);

void PrintAllCombinations(int N)
{
    printf("%d=\r\n", N);
    PrintAllCombinations(N, 0);
}

void PrintAllCombinations(int N, int level)
{
    if (N == 0) {
        PrintBuf(level);
        return;
    }
    for (int i = 1; i <= N; i++) {
        // keep terms non-increasing so every sum is printed only once
        if (!level || i <= buf[level-1]) {
            buf[level] = i;
            PrintAllCombinations(N-i, level+1);
        }
    }
}

void PrintBuf(int level) {
    if (level <= 1) {
        return;   // skip the trivial sum N = N
    }
    printf("%d", buf[0]);
    for (int i = 1; i < level; i++) {
        printf("+%d", buf[i]);
    }
    printf("\r\n");
}
For every word create an auxiliary array of size 256 and count every character of the word (like counting sort). Sort the strings using the auxiliary arrays as keys. Or, better, compute a CRC of each auxiliary array and compare the CRCs.
#include <cstring>
#include <algorithm>

using std::sort;

struct Word {
    const unsigned char* str;
    char* hash;                  // 256 per-character counters
};

bool Compare(const Word& w1, const Word& w2) {
    return memcmp(w1.hash, w2.hash, 256) < 0;
}

void Sort(const unsigned char** a, int len)
{
    Word* arr = new Word[len];
    for (int i = 0; i < len; i++) {
        arr[i].str = a[i];
        arr[i].hash = new char[256];
        memset(arr[i].hash, 0, 256);
        for (const unsigned char* p = arr[i].str; *p; p++) {
            arr[i].hash[*p]++;   // count each character of the word
        }
    }
    sort(arr, arr + len, Compare);
    for (int i = 0; i < len; i++) {
        a[i] = arr[i].str;       // write back in anagram-grouped order
        delete [] arr[i].hash;
    }
    delete [] arr;
}
First we print all the leaf siblings of the current item. Then for every non-leaf sibling we go deeper to the next (child) level, pushing each item along the way onto a stack.
- zprogd June 19, 2012