Improving timing for smoother script flow
Posted: Sat Apr 04, 2009 6:18 am
In working with my own system I ran into some timing problems.
In particular, I noticed issues where something would work fine on a fast machine, but then behave differently on slower machines.
Example: A short key press to move forward would work fine on my Core Duo, but on an old, slower Pentium 4 machine the key press seemed to last much longer.
The core of the problem is the main sleep/rest routine.
Here is yours, AFAIK (current?):
Code:
-- A pretty standard rest/sleep command.
-- It automatically yields, though.
function yrest(msec)
	safeYield();
	if( msec < 10 ) then
		rest(msec);
		return;
	else
		local sections = math.floor(msec / 100); -- split into 100msec sections
		local ext = math.mod(msec, 100); -- any leftovers...
		for b = 1,sections do
			rest(100);
			safeYield();
		end;
		if( ext > 0 ) then
			rest(ext);
		end
	end
end
If you think about it a bit, you will see that you can't guarantee this routine is going to take "msec" time.
Whatever happens inside the coroutine might take longer than your smallest time tick division. There are also inaccuracies due to other factors, such as general OS operation, what priority the threads are running at, etc.
And since Windows is not a real-time operating system (a known problem), there is no guarantee how accurate a "Sleep()" call (the core Windows API, internally the 100ns-unit "NtDelayExecution()") will actually be.
You might do a "yrest(75)" thinking you are getting a 75ms delay, when in truth, after yielding to your system coroutine, and with the damn game taking near 100% CPU, the wait actually takes 250ms! (Just an example; it could be better or worse.)
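A quick way to see this for yourself is to just time the call. A minimal sketch in stock Lua; os.clock() here is only a stand-in for whatever wall clock you have (its resolution and meaning vary by platform), so treat the numbers as rough:
Code:
-- Measure how long yrest() actually waits vs. what was asked for.
local requested = 75; -- ms we ask for
local before = os.clock();
yrest(requested);
local actual = (os.clock() - before) * 1000;
print(string.format("asked for %dms, actually waited about %.0fms", requested, actual));
Run that while the game is hammering the CPU and the gap shows up immediately.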
The way to at least alleviate this is to time how long you have actually been waiting across your series of time slices/ticks.
My API is a bit different, but hopefully my routine here makes sense:
Code:
--== Sleep with time sliced yield
local cSleepTick = 0.020
--
function CoreSystem:Sleep(Period)
	local StartTime = time.Get()
	-- Slice out the wait time with yields
	while true do
		-- Yield to core
		self:SafeYield()
		-- Track time elapsed to compensate for time spent in the core
		-- and, in general, improve accuracy.
		local TimeLeft = (Period - time.Delta(StartTime))
		if (TimeLeft >= 0.001) then
			-- Use a sleep tick or the remaining time, whichever is smaller
			if (TimeLeft > cSleepTick) then TimeLeft = cSleepTick end
			-- Sleep one time slice
			time.Sleep(TimeLeft)
		else
			return
		end
	end
end
Example:
Code:
-- Sleep for 1/4 of a second
tCoreSystem:Sleep(0.250)
Some differences in the API: I'm using all fractional seconds, not ms, and time functions with 1ms accuracy (internally "timeGetTime()", with a "timeBeginPeriod(1)" on init).
(I switched to that rather than "QueryPerformanceFrequency()" since that takes several times more overhead per call.)
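If you want to do that init from Lua itself, here is a rough sketch using the LuaJIT FFI (that's an assumption on my part; any Win32 binding that exposes winmm.dll works the same way):
Code:
-- Sketch only: assumes LuaJIT's ffi module and Windows' winmm.dll.
local ffi = require("ffi");
ffi.cdef[[
unsigned int timeBeginPeriod(unsigned int uPeriod);
unsigned long timeGetTime(void);
]]
local winmm = ffi.load("winmm");
winmm.timeBeginPeriod(1); -- request 1ms timer granularity
local now_ms = winmm.timeGetTime(); -- 1ms-accuracy wall clock, in ms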
I save the time before waiting, then compensate and/or bail out after every time slice.
You can see I also use a slice of 20ms. This seems to be pretty smooth, although as I write this, you would think 10ms or smaller would be better.
I changed every place I did timing from the static time slice way to this way, and everything got smoother and less error-prone.
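For reference, here is a rough sketch of what the same compensation could look like grafted onto the ms-based yrest() above. Note getTime() is hypothetical, a stand-in for any 1ms wall clock (a timeGetTime() binding, say); rest() and safeYield() are the existing functions from the quoted code:
Code:
-- Sketch only: yrest() with elapsed-time compensation.
-- ASSUMPTION: getTime() returns milliseconds of wall-clock time
-- (e.g. a timeGetTime() binding); it is not an existing bot function.
local SLEEP_TICK = 20; -- ms per slice, same 20ms idea as above

function yrest(msec)
	local startTime = getTime();
	while( true ) do
		safeYield();
		-- Compensate for however long the yield really took.
		local timeLeft = msec - (getTime() - startTime);
		if( timeLeft < 1 ) then
			return; -- waited long enough (or over), bail out
		end;
		-- Sleep one slice or the remainder, whichever is smaller.
		rest(math.min(timeLeft, SLEEP_TICK));
	end;
end
The worst case then becomes one overshot slice instead of error piling up across every slice.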