Larry Osterman's WebLog

Confessions of an Old Fogey

Making time go faster

I got the following email the other day from a reader:

I came across your post on PC time and thought you might have a quick answer. Thanks in advance.

I have a program that is a reader: it reads a log and sends a message to a second program, the counter, which counts the number of events it gets in a five-minute window.

Each entry in the log has a time stamp and is sequential. The log file reader reads the log and waits until it has hit the time in the time stamp (hour, minute and second) and then sends that event to the listener at that time. So to process a log for one day, this setup takes one day.

I was wondering if there was some way that I could speed up the clock so I don't have to change the logic of either program but have the task completed in, say, 4 hours (by speeding up the clock by 6 times).

That's a fascinating idea, actually - is there a way of making time go faster?

Whenever someone asks a question like this one, it's time to fall back on the tricks of the master.  When Raymond gets a question like this, he turns it around and asks something like "What if <x> could happen?" (or "What if two applications tried to do <y>?").

So let's ask the question: "What if there was a way of speeding up the system clock so you don't have to wait so long?"

What would that do to system timers?  If the entire system time is running faster, then they would also run faster. Would there be a consequence if something that's supposed to happen every 5 minutes all of a sudden started happening every 5 seconds?

How about timeouts?  In other words, if I read some data from the disk and wait (with a timeout) for the disk read to complete, will that timeout run faster?  What happens if the read timeout is 3 seconds and the disk normally takes 2 seconds to complete an I/O?

Monitor refresh frequency?  If your app is synchronized to the monitor refresh, and the application all of a sudden starts running twice as fast, what would the results be?

If you're trying to make the system run faster than real time, it seems like all of these would also have to speed up (otherwise you couldn't make the log playback operations faster, since they all run off the same timer).

This is the crux of the problem - if you want your sleeps to go faster, then all the system events also need to be faster.  Unfortunately, there's all this physical hardware attached to the computer, and that physical hardware takes time to complete operations.  If you also have timeouts associated with those hardware operations, then you may introduce real problems.

So the simple answer to the question is: "Not usually".  It might be possible to do something in a VM environment, and there are some utilities on the web that claim to be able to do this, but I wouldn't recommend them.

Why can't you change the program that plays back the log to simply not wait as long?
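
As a rough illustration of that suggestion, here's a minimal sketch of a replayer that divides the gap between consecutive timestamps by a speed-up factor instead of waiting it out in full. ReadNextEntryTimestamp and SendEventToCounter are hypothetical stand-ins for the reader's actual log-parsing and messaging code:

    #include <windows.h>

    /* Hypothetical helpers standing in for the reader's real code:
     * ReadNextEntryTimestamp() parses the next log entry and returns its
     * timestamp as milliseconds since midnight (FALSE at end of log);
     * SendEventToCounter() sends the event to the counting program. */
    BOOL ReadNextEntryTimestamp(DWORD *entryMs);
    void SendEventToCounter(void);

    #define SPEEDUP 6   /* replay six times faster than real time */

    void ReplayLog(void)
    {
        DWORD prevMs = 0;
        DWORD entryMs;
        BOOL first = TRUE;

        while (ReadNextEntryTimestamp(&entryMs)) {
            /* Wait out only 1/SPEEDUP of the gap between entries. */
            if (!first)
                Sleep((entryMs - prevMs) / SPEEDUP);
            first = FALSE;
            prevMs = entryMs;
            SendEventToCounter();
        }
    }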

  • This made me think of a follow-up. At one point I did some x86 assembly coding in DOS. In the process I would hook into the system timer to make events happen on a schedule. Often I would make a mistake and make the clock run much slower or faster than it was supposed to. Would something like this affect a modern Windows OS, or does it not use the RTC for timing purposes anymore?
  • Actually you can make time faster - check the timeXxx APIs, which change the clock rate for Windows. But it doesn't affect timers and timeouts.

  • He could always attempt to accelerate himself close to the speed of light. IIRC, Einstein's famous theory suggests that this results in time slowing down for the traveller (which, relatively speaking, means external time has speeded up).

    Seriously, though, I don't think timeBeginPeriod/timeEndPeriod actually speed up or slow down time measurement. My interpretation of the docs is that they simply change the resolution/granularity of the timer updates.

    That said, I suppose it would be possible to "speed up" the clock by simply polling the system clock and advancing system time by 1 second every x milliseconds (where x is a number < 1000). Very bad idea, I think, but surely it is actually possible... That would, of course, affect only programs which actually query the actual time rather than a tickcount.
  • Sorry for replying to my own comment, but it would seem that this can be rather easily accomplished using SetSystemTimeAdjustment (if the user is running an NT-series OS and has the appropriate privileges).
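    Something like this minimal sketch, I'd imagine: read the current increment with GetSystemTimeAdjustment, then tell the clock interrupt to add six times that much on every tick (enabling the SE_SYSTEMTIME_NAME privilege with AdjustTokenPrivileges is omitted, and of course this drags every other consumer of the time of day along with it):

      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
          DWORD adjustment, increment;
          BOOL  disabled;

          /* 'increment' is how often the clock interrupt fires (in 100ns
           * units); 'adjustment' is how much gets added to the time of day
           * on each interrupt. */
          if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
              printf("GetSystemTimeAdjustment failed: %lu\n", GetLastError());
              return 1;
          }

          /* Add six times the normal amount per interrupt, so the time of
           * day advances roughly six times faster than real time. */
          if (!SetSystemTimeAdjustment(increment * 6, FALSE)) {
              printf("SetSystemTimeAdjustment failed: %lu\n", GetLastError());
              return 1;
          }

          /* ... run the replay ... then put things back; TRUE tells the
           * system to manage the clock normally again. */
          SetSystemTimeAdjustment(increment, TRUE);
          return 0;
      }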
  • Your argument assumes that the system clock and system time are locked together. Maybe that's true for a PC, I'm not sure.

    On a VAX, there was a periodic clock interrupt that would update the system time counter by adding an increment to it. The interrupt came every n milliseconds, and it would add n milliseconds to the current time.

    But, with the right privileges, you could modify the increment. Thus if you made the increment different than the interrupt period, you could make system time run slow or fast.

    Some observatories used VAXen to control their telescopes, and, to simplify things, they ran the VAXen on sidereal time simply by adjusting the interrupt.

    VMS kept local time, rather than GMT+offset like many other OSes. When Daylight Saving Time came and went, adjusting the system clock in one jump screwed up programs that expected time to increase smoothly. The solution was to schedule a job that would run the clock 25% fast or slow for four hours overnight. Hardware with critical timing relied on the hard interrupt rather than the system's concept of the current time. Thus both hardware and software could cope with the time shift.

    When we started putting TCP/IP stacks on our VAXes, clever programmers provided network time protocol daemons that smoothly synched the system time with network time by adjusting the increment rather than the system time itself.

    Granted, you'd probably have difficulty trying to make time run significantly faster or slower (like the 6 times of the original poster), but many of us have run our clocks fast and slow as needed.
  • That's what SetSystemTimeAdjustment does. It specifies the way the clock interrupt modifies the time-of-day clock.
  • This article got me thinking of two other ways to do it, one of which I experienced by mistake.

    If you set the PCI latency wrong in the BIOS (I reflashed a box, and it turned out that value had changed position between BIOS versions, so that it now held the value of zero instead of 32 or whatever it needed) you can get ... interesting results re. time. :-)

    A more controlled way could be to use an emulator where the source code was available, like QEMU or BOCHS. That way you could modify the emulator to present faster/slower timers of all kinds (PIT, TOD, RDTSC, ...).

    Of the two choices I came to think of, I'd personally prefer the more controlled one. :-)
  • http://www.hot-shareware.com/games/speed-gear/

    Speed Gear.

    The main use of Speed Gear is/was to cheat in online games -- cheaters' games ran faster, sent their updates to the server faster, and without server-side checks for some of these things, the cheaters could run/shoot faster than their opponents.
  • Just a thought, but there are some tricks people play with the import table to hook system APIs that are implicitly linked in.

    Memproof is my personal favourite example
    http://www.automatedqa.com/downloads/memproof/

    I'm assuming the timing APIs this log reader uses fall into that category.

    So: if we launched (and appropriately tinkered with) 24 of these applications, each with their time querying API functions hooked to return values that "lie" by 0-23 hours, could we not just wait for an hour for the results?

    Apart from the obvious showstopper I'm sure I've overlooked, I imagine there are issues with the sharing mode of the log file, the rate at which the log reader wakes up and checks the clock, and so on.
    However there is no limit to the lying available if one can devote the effort.

    There are variations: to avoid multiple processes simply increase the "amount of mendacity" over time for just one victim process, and hope it keeps up?

    Or to get round exclusive access to the log file, one could direct each of the mob of 24 to open their individual copy of the log file a la root kit, for example.

    If the processes aren't waking up often enough, try sending extra fake WM_TIMER messages, if needs be.

    Anyone think this would work?
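
    For the hooking part, I'm picturing something roughly like the sketch below: walk the process's import table and swap its timeGetTime entry (assuming that's the API the log reader actually imports - winmm.dll is just a guess) for a replacement that scales elapsed time. How the code gets into the process in the first place (injection, shim, whatever) is hand-waved:

      #include <windows.h>
      #include <string.h>

      static DWORD (WINAPI *g_realTimeGetTime)(void);
      static DWORD g_base;
      #define SPEEDUP 6

      /* Replacement: report elapsed time as six times what it really is. */
      static DWORD WINAPI FastTimeGetTime(void)
      {
          return g_base + (g_realTimeGetTime() - g_base) * SPEEDUP;
      }

      /* Swap one named import in 'module's import address table. */
      static BOOL PatchIat(HMODULE module, const char *dllName,
                           const char *funcName, void *replacement,
                           void **original)
      {
          BYTE *base = (BYTE *)module;
          IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
          IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
          IMAGE_DATA_DIRECTORY dir =
              nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
          if (dir.VirtualAddress == 0)
              return FALSE;

          IMAGE_IMPORT_DESCRIPTOR *imp =
              (IMAGE_IMPORT_DESCRIPTOR *)(base + dir.VirtualAddress);
          for (; imp->Name != 0; imp++) {
              if (_stricmp((const char *)(base + imp->Name), dllName) != 0)
                  continue;
              /* OriginalFirstThunk holds the names; FirstThunk is the IAT,
               * holding the addresses the loader resolved. */
              IMAGE_THUNK_DATA *names =
                  (IMAGE_THUNK_DATA *)(base + imp->OriginalFirstThunk);
              IMAGE_THUNK_DATA *addrs =
                  (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
              for (; names->u1.AddressOfData != 0; names++, addrs++) {
                  if (IMAGE_SNAP_BY_ORDINAL(names->u1.Ordinal))
                      continue;
                  IMAGE_IMPORT_BY_NAME *byName = (IMAGE_IMPORT_BY_NAME *)
                      (base + names->u1.AddressOfData);
                  if (strcmp((const char *)byName->Name, funcName) != 0)
                      continue;
                  DWORD oldProtect;
                  VirtualProtect(&addrs->u1.Function, sizeof(ULONG_PTR),
                                 PAGE_READWRITE, &oldProtect);
                  *original = (void *)addrs->u1.Function;
                  addrs->u1.Function = (ULONG_PTR)replacement;
                  VirtualProtect(&addrs->u1.Function, sizeof(ULONG_PTR),
                                 oldProtect, &oldProtect);
                  return TRUE;
              }
          }
          return FALSE;
      }

      /* Called from inside the log reader's process (however it got there). */
      BOOL InstallTimeHook(void)
      {
          if (!PatchIat(GetModuleHandle(NULL), "winmm.dll", "timeGetTime",
                        (void *)FastTimeGetTime, (void **)&g_realTimeGetTime))
              return FALSE;
          g_base = g_realTimeGetTime();
          return TRUE;
      }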

  • Pretty sure you can force this behaviour per app with a shim. I think the appcompat testing toolbox had a shim for testing timer rollover by overriding the return value of timeGetTime or some such for the shimmed app. So I would think it's possible, depending on how your app gets time...
  • I'm underwhelmed by all of this. All of these comments seem to be trying to solve the symptom instead of the problem. Problem - this person would like a quick way of gathering some kind of audit frequency data from an existing log. The solution is not to use software designed to monitor real-time for log flooding. The solution is to use some other software to parse the log and extract that data. Seems like the kind of thing one could solve with a fairly simple script.

    If all you have is a hammer . . .
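
    For instance, assuming (purely for illustration) that each log line starts with an HH:MM:SS timestamp, something as small as this counts the events per five-minute bucket for the whole day in a fraction of a second:

      #include <stdio.h>

      int main(int argc, char **argv)
      {
          FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
          if (!f) { perror("fopen"); return 1; }

          long counts[24 * 12] = { 0 };   /* 288 five-minute buckets per day */
          char line[1024];
          int h, m, s;

          while (fgets(line, sizeof(line), f))
              if (sscanf(line, "%d:%d:%d", &h, &m, &s) == 3 &&
                  h >= 0 && h < 24 && m >= 0 && m < 60)
                  counts[h * 12 + m / 5]++;

          for (int i = 0; i < 24 * 12; i++)
              if (counts[i] != 0)
                  printf("%02d:%02d  %ld events\n", i / 12, (i % 12) * 5, counts[i]);

          if (f != stdin) fclose(f);
          return 0;
      }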
  • I seem to recall the mouse driver on the Amstrad PC1512/1640 did that -- it sped up the periodic interrupt from 18.2 times a second to some multiple of that so that it could poll the mouse position registers more frequently, and then only passed on every n-th tick to the previous interrupt handler.

    I also seem to recall that this occasionally caused havoc with the time-of-day clock when other software tried to play with the same timer or assumed it was still running at the usual 55ms interval...