Game Development Community

Integers greater than 1,000,000

by Michael Cordner · in Torque Game Builder · 10/13/2005 (4:43 pm) · 6 replies

Strangeness. To illustrate, I have the following block of code:

function GameClock::getVirtualHour(%this)
{
   echo("");
   echo("%this.virtualTime: " @ %this.virtualTime);
   echo("%this.SECONDS_IN_DAY: " @ %this.SECONDS_IN_DAY);
   echo("%this.SECONDS_IN_HOUR: " @ %this.SECONDS_IN_HOUR);
   echo("%this.virtualTime % %this.SECONDS_IN_DAY: " @ %this.virtualTime % %this.SECONDS_IN_DAY);
   echo("return value: " @ mFloor((%this.virtualTime % %this.SECONDS_IN_DAY) / %this.SECONDS_IN_HOUR));
   echo("");
   return mFloor((%this.virtualTime % %this.SECONDS_IN_DAY) / %this.SECONDS_IN_HOUR);
}

Now, %this.virtualTime is an integer that gets updated every second or so (it takes the elapsed real time and multiplies it by a scale factor to speed things up, so to speak).


For values less than 1,000,000, the output is as follows:
%this.virtualTime: 999880
%this.SECONDS_IN_DAY: 86400
%this.SECONDS_IN_HOUR: 3600
%this.virtualTime % %this.SECONDS_IN_DAY: 49480
return value: 13

On the very next tick (the value gets incremented by 240 each tick), the output is thus:

%this.virtualTime: 1.00012e+006
%this.SECONDS_IN_DAY: 86400
%this.SECONDS_IN_HOUR: 3600
%this.virtualTime % %this.SECONDS_IN_DAY: 1
return value: 0

(The expected output would be:)

%this.virtualTime % %this.SECONDS_IN_DAY: 49720

Tracing into the engine source a bit (I looked at the modulus production of the attribute grammar), I don't think it's an issue with the modulus operator itself, as the value passed into it is already incorrect. It looks as if the value is getting corrupted as it's placed onto the argument stack, though I'm not sure exactly where that occurs.

-Mike

#1
10/13/2005 (5:11 pm)
Well, what's probably happening is that the engine is passing the value around as a string. So when the value is 999880 it gets stored as "999880", but when it's 1.00012e+006 it gets stored as "1.00012e+006",

and then the next function takes that string and converts it back to an int.

So it would convert "999880" back to 999880,

but it would convert "1.00012e+006" to 1, since an integer parse stops at the first non-digit character.

Of course this is just a guess, but I'm 95% sure it's something related to the int-to-string conversions.
#2
10/13/2005 (7:07 pm)
See this thread for some potentially useful information on the current limitations on numeric precision. I believe Ben mentioned fixing this up in TGE 1.4, which will get rolled into T2D 1.1. There are some stabs at fixes in that thread, but there are a lot of places where the precision needs to be updated for it to work in all cases, so be careful and test the heck out of things if you make any changes. :)
#3
10/14/2005 (2:06 am)
@Jason, that definitely looks like a likely cause. For the moment, I think I'm just going to change this clock functionality to reset periodically, or keep time in some manner other than an infinitely incrementing integer.

That link is inaccessible to non-T2D owners (like most useful-looking information is). Would someone be able to repost the essence of it here?

Thanks lads
#4
10/14/2005 (1:27 pm)
Oops, I had thought it was in the T2D section (since that's what I was posting about when I contributed to the conversation); sorry. There's a heck of a lot of info there and no single definitive solution, but I'll try to summarize where you should look:

This was my first post which outlined where the problem begins:

----------------------------------------------------------------------
In the event this isn't checked into 1.4 yet (or for those looking to gain this advantage in the current T2D version), I was able to increase precision by changing the following in compiledEval.cc around line 144:


void setFloatValue(F64 v)
{
   validateBufferSize(start + 32);
   dSprintf(buffer + start, 32, "%.9g", v);
   len = dStrlen(buffer + start);
}

around line 300 in compiledEval.cc:


char *getFloatArg(F64 arg)
{
   char *ret = STR.getArgBuffer(32);
   dSprintf(ret, 32, "%.9g", arg);
   return ret;
}

around line 601 in compiler.cc:


U32 CompilerStringTable::addFloatString(F64 value)
{
   dSprintf(buf, sizeof(buf), "%.9g", value);
   return add(buf);
}

and around line 143 in consoleTypes.cc

static const char *getDataTypeF32(void *dptr, EnumTable *, BitSet32)
{
   char* returnBuffer = Con::getReturnBuffer(256);
   dSprintf(returnBuffer, 256, "%.9g", *((F32 *) dptr) );
   return returnBuffer;
}


By default, %g uses six digits of precision. Nine was sufficient for me and doesn't seem to adversely affect anything; your mileage may vary. :) Some of the changes above may be extraneous to achieving the desired behavior, and/or there may be other %g formats that need to be converted, but the above works for me in T2D for the time being, until 1.4 is rolled in.
----------------------------------------------------------------------


Unfortunately this doesn't solve the problem entirely, so I added:


----------------------------------------------------------------------
Ok, a few more changes fix this situation, but there might easily be other places I've missed that still use 32-bit values and therefore would have similar problems. Perhaps Ben can sanity-scan this if he's got the time:

in consoleInternal.h around 153:


U64 ival;  // doubles as strlen when type = -1
F64 fval;

in consoleInternal.h around 168:


F64 getFloatValue()

in consoleInternal.h around 206:


void setFloatValue(F64 val)

in consoleInternal.h around 211:


ival = static_cast<U64>(val);

And one small change back in consoleTypes.cc around 142:


dSprintf(returnBuffer, 256, "%.9g", *((F64 *) dptr) );

(in addition to the .9 precision change before).


This fixes up both stored and immediate values as far as my extremely brief testing has shown, but be careful if you use this, I could have easily broken something else that I didn't catch. I realize this is now in 1.4 but I've no idea how soon 1.4 will be rolled into T2D.
----------------------------------------------------------------------

Continued next post
#5
10/14/2005 (1:27 pm)
Continued from last post


Finally, there was a glitch with some saved $pref variables that were getting butchered by the precision shift, so I did the following:

----------------------------------------------------------------------
Ok, the unsigned int that got changed in a few places from U32 to U64 is causing the (null) problem (and I assume the gamma was getting screwed up by that), so if you revert the following two lines:


U64 ival;  // doubles as strlen when type = -1
...
ival = static_cast<U64>(val);

so they're back to U32 again, it fixes the output problem. I'm a bit concerned, though, that something might try to use the ival value someplace instead of fval and lose the extended precision; however, I've done a bit of regression testing against our original goal and it seems to be ok. Just something to keep an eye out for. This stuff is low-level enough that side effects are likely to pop up in weird places. :)
----------------------------------------------------------------------

The problem is I'm almost certain there are other areas that need to be changed for a "complete" fix, and I'm hoping TGE 1.4 will bring this to T2D in 1.1. Nevertheless, the above is enough to get precision working well enough for on-screen displays of scores and whatnot, and I've noticed no crippling side effects since implementing the changes. Good luck. :)
#6
10/14/2005 (2:00 pm)
This is almost totally unrelated, but I ran into this on Marble Blast Xbox with leaderboard scores. It would put the score into scientific notation, then I'd try to atoi it and get 0. I fixed it via a horrible, horrible hack, but I'm glad to see a better solution has been found.