Scheduler Design


Quartz Composer

Checks its current time against an external reference in order to correct for drift.

E.g. moving a square across the screen: is the square in the right place at the right time?


The nice thing here is that everything is sample-accurate / sample-synchronous.
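As a sketch of that idea (names and gains are illustrative, not Quartz Composer's actual code): the scheduler compares its local clock against the external reference and folds part of the error back into a correction offset.

```cpp
#include <cassert>

// Hypothetical sketch of reference-clock correction: the scheduler
// periodically compares where it *thinks* it is against an external
// reference and nudges an offset so that, over time, the square ends up
// in the right place at the right time.
struct DriftCorrector {
    double offset = 0.0;   // correction applied to the local clock, seconds
    double gain   = 0.5;   // how aggressively we chase the reference (0..1)

    // localTime: where our scheduler believes it is.
    // referenceTime: the external reference (e.g. host or audio clock).
    // Returns the corrected time to schedule against.
    double correct(double localTime, double referenceTime) {
        double error = referenceTime - (localTime + offset);
        offset += gain * error;      // move part-way toward the reference
        return localTime + offset;
    }
};
```

Repeated calls converge on the reference; the gain trades responsiveness against smoothness.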


A note from Trond (on jamoma-devel list)

The line object seems to be subscribing to Einstein's theory of relativity, with the relative duration of time depending on the time granulation. In the patch below, 100 ms is 900 ms and 990 ms respectively, due to the fact that the first step of the ramp is taken instantaneously.


What is the design rationale behind this behavior?

My intuitive expectation would be that the line would start from the start value now, and arrive at the destination value after a time defined by the second argument of the list, similar to how I would want an automation value in a DAW to ramp from 0 to 100 over a full bar, neither starting nor arriving early.
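The expectation described above can be sketched as a simple grain-based ramp that starts at the current value now and arrives exactly on time (an illustration of the desired semantics, not Max's actual line implementation):

```cpp
#include <cassert>
#include <vector>

// Illustrative ramp: one value per control grain, beginning at `start`
// at t = 0 (no instantaneous first step past the start value) and ending
// exactly on `end` at t = durationMs, regardless of grain size.
std::vector<double> ramp(double start, double end,
                         double durationMs, double grainMs) {
    std::vector<double> out;
    for (double t = 0.0; t < durationMs; t += grainMs)
        out.push_back(start + (end - start) * (t / durationMs));
    out.push_back(end); // arrive on time, not early
    return out;
}
```

With this shape, changing the grain size changes the smoothness of the ramp but never its total duration, which is the behavior Trond argues for.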

From Adrian (on jamoma-devel list) regarding granular synthesis, but informative here:

Hi Tim, Nathan:

Could you elaborate a bit on the way you are thinking of implementing granular processing in Jamoma? From reading the DAFx paper ("Jamoma audio graph layer") I assume it will be somehow modelled on FTM\Gabor. As much as I prefer this model over using audio-rate signals to control parameters, the way it works in FTM\Gabor at the moment also carries some limitations.

Btw, have you ever looked at LuaAV?

Here is a quote regarding its scheduler:

LuaAV has a powerful timing system, using an internal scheduler that preserves deterministic ordering and logical timestamps to nanosecond accuracy. The deterministic ordering and accuracy are preserved in many messages to the audio system, such as adding/removing synths.
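A minimal sketch of that scheduling model: logical nanosecond timestamps plus a sequence counter so same-time events fire in deterministic submission order (names are illustrative, not LuaAV's API).

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// An event carries a logical timestamp (nanoseconds) and a sequence
// number used as a tie-breaker, so two events scheduled for the same
// logical time always fire in the order they were submitted.
struct Event {
    uint64_t timeNs;
    uint64_t seq;
    std::function<void()> action;
};

struct EventOrder {
    bool operator()(const Event& a, const Event& b) const {
        if (a.timeNs != b.timeNs) return a.timeNs > b.timeNs; // earliest first
        return a.seq > b.seq;                                 // FIFO on ties
    }
};

class LogicalScheduler {
    std::priority_queue<Event, std::vector<Event>, EventOrder> queue_;
    uint64_t nextSeq_ = 0;
public:
    void schedule(uint64_t timeNs, std::function<void()> action) {
        queue_.push({timeNs, nextSeq_++, std::move(action)});
    }
    // Run every event whose logical time has been reached.
    void advanceTo(uint64_t nowNs) {
        while (!queue_.empty() && queue_.top().timeNs <= nowNs) {
            Event ev = queue_.top();
            queue_.pop();
            ev.action();
        }
    }
};
```

Because ordering depends only on logical time and submission order, replaying the same schedule always produces the same result, independent of wall-clock jitter.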

A paper with a bit more details:

I am looking forward to seeing where it's all going, and I'd be happy to provide some input.



From the PortMedia list (PortAudio, PortMIDI, etc.) on keeping a scheduler in sync

Hi Guys

Coming in very late on this but just wanted to add my two cents...

Roger Dannenberg wrote:

I'm not totally sure about the proper way to get audio output time from PortAudio,

The intended way is using timeInfo->outputBufferDacTime.


and we recently upgraded Audacity to use a more experimental
version of PortAudio because an earlier version we had been using did
not have audio output time implemented as well. I don't know what
systems and APIs support accurate audio output time.

With CoreAudio, the system provides information which can be (relatively)
easily mapped to timeInfo->outputBufferDacTime. Sometime last year I made
some changes to PA that make this work better on OSX than the previous implementation.

On Windows there are, in general, no specific APIs to get accurate time
information. DS has a buffer position API but it is known to be extremely inaccurate.

In most cases PA does something like:

timeInfo->outputBufferDacTime = callbackInvocationTime + knownBufferingLatency;

This is, of course, an approximation to this:

timeInfo->outputBufferDacTime = idealCallbackInvocationTime + allLatency;

callbackInvocationTime = idealCallbackInvocationTime + osSignallingAndSchedulingJitter;


allLatency = knownBufferingLatency + unknownBufferingLatency + dacLatency;

So the accuracy of timeInfo->outputBufferDacTime is dependent on two things:

1. OS scheduling jitter in callbackInvocationTime

2. Availability of accurate knownBufferingLatency information.
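To make the decomposition concrete, here is a toy numeric example (all figures invented for illustration): the reported DAC time carries the scheduling jitter and misses the unknown buffering and DAC latencies.

```cpp
#include <cassert>

// Made-up numbers illustrating the equations above. The "ideal" DAC time
// uses the ideal invocation time plus *all* latency terms; the reported
// value only has the jittered invocation time plus the *known* buffering
// latency, so the two differ by the jitter and the unknown latencies.
struct DacTimeExample {
    double idealCallbackInvocationTime = 1.000;  // seconds, hypothetical
    double schedulingJitter            = 0.0004; // ~400 us, best case on OSX
    double knownBufferingLatency       = 0.010;  // e.g. two 5 ms buffers
    double unknownBufferingLatency     = 0.002;  // e.g. OS mixer adaption
    double dacLatency                  = 0.001;

    double reported() const {  // what PA can actually compute
        return (idealCallbackInvocationTime + schedulingJitter)
             + knownBufferingLatency;
    }
    double ideal() const {     // what we would like to report
        return idealCallbackInvocationTime
             + knownBufferingLatency + unknownBufferingLatency + dacLatency;
    }
    double error() const { return reported() - ideal(); }
};
```

With these invented figures the reported time is early by roughly 2.6 ms, dominated by the latency terms the host API does not expose.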

Note that if the OS has a mixer (like Windows kmixer) it may perform
internal "buffer size adaption" and that makes the
osSignallingAndSchedulingJitter even more complex since it may include
periodic buffer adaption delay offset too.

In my experience, best-case accuracy of callbackInvocationTime is about
400us on OSX, and somewhat higher on Windows (800us?). When using
DirectSound or WMME, or some ASIO drivers (e.g. ASIO4ALL), jitter can be
much higher if the driver or kmixer is doing buffer adaption. For example
I've seen ~15 ms callback delay variation with ASIO4ALL even with a 5 ms buffer period.

It is possible to significantly reduce (1) by filtering callback
invocation times using PLL/DLL techniques and/or other types of clock
recovery. That's what I do in AudioMulch, and this gives me a
subsample-accurate clock even with large amounts of callback jitter.

ASIO and CoreAudio drivers can return knownBufferingLatency information
which makes (2) relatively accurate. For WMME and DirectSound
knownBufferingLatency currently only includes latency introduced by PA and
WMME/DirectSound buffers (in fact with DS right now I think we only use PA
buffering latencies, if any).

Note that the next major milestone (V19-M1) is the "latency milestone" and I
plan to clean up a number of open tickets relating to this stuff.

I've also been considering implementing some low level telemetry tools so we
can inspect scheduling jitter and latencies in PA. I've recently done this
in a network audio streaming project and have already coded analysis and
visualisation tools using NumPy.

Long ago, we wrote
some code for winmm, which transfers samples to devices through a list
of buffers. The timing of buffers was full of jitter, so we had some
complicated smoothing software (maybe Kalman filters) to estimate the
sample output times. It worked pretty well, but illustrates how hard the
problem can be with "mainstream" audio APIs.

Agreed, with many APIs it is non-trivial.

JACK uses a digital PLL for clock recovery:
(that link seems broken but if you google for "Using a DLL to filter time"
you can use Google Quick-View to read it).

I think there are better ways to do it.

I've considered building clock recovery into PortAudio to filter the
callback timestamps. I have an algorithm that I believe performs better than
the second order filter used by JACK, but I need to do some more work to
prove it.
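For reference, the second-order DLL described in the "Using a DLL to filter time" paper can be sketched roughly as below; the coefficient form is the usual critically damped one, and all tuning values are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Second-order delay-locked loop for filtering noisy callback timestamps:
// each raw time nudges both the filtered clock and the estimated callback
// period, yielding a smooth clock despite jittery wakeups.
class TimeDLL {
    double t1_;    // filtered estimate of the next callback time
    double per_;   // filtered estimate of the callback period
    double b_, c_; // loop coefficients (critically damped form)
public:
    TimeDLL(double firstTime, double nominalPeriod, double bandwidthHz) {
        const double kPi = 3.14159265358979323846;
        double w = 2.0 * kPi * bandwidthHz * nominalPeriod;
        b_   = std::sqrt(2.0) * w;
        c_   = w * w;
        per_ = nominalPeriod;
        t1_  = firstTime + per_;  // predict the next callback
    }
    // Feed the raw (jittery) time of the current callback;
    // returns the filtered time for this callback.
    double update(double rawTime) {
        double e  = rawTime - t1_;  // loop error
        double t0 = t1_;
        t1_  += b_ * e + per_;      // advance the filtered clock
        per_ += c_ * e;             // track drift in the period
        return t0;
    }
    double period() const { return per_; }
};
```

With jitter-free input the filtered times match the raw times exactly; with jitter, the loop bandwidth controls how much of it leaks through versus how quickly the clock tracks real drift.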

Conclusion: at the moment you can't rely on raw PortAudio timestamps being
super-accurate for MIDI scheduling on all platforms. Like many things in PA
this is simply a case of PortAudio providing support that is no better than
that provided by the underlying host APIs. On the other hand, on OSX at
least, I think you will get sufficient accuracy for the timestamps to be
usable without further filtering.



The concept of k-rate in general (though elements of ChucK would be nicer, if more CPU-intensive?)

Also we should bear in mind that the C++ frameworks eventually might
be used with other environments than Max, e.g. Csound, where k-rate
might be set to a-rate, effectively turning control-rate processing
into audio-rate processing.
And finally, could we imagine at some point in the future that
Jamoma Graph might be driven by a different scheduler than the Max
clock, and hence have a different mechanism for syncing control-rate
processes to signal rate? My guess is that Jamoma Graph for the time
being doesn't have a clock at all?

For the moment there is no scheduler to drive Jamoma Graph independently
of Max. This is needed in a number of cases for porting Hipno to
Plugtastic though. This is where my vague allusion to a control rate
ramp in the next 12-18 months was rooted.

One thought about the k-rate Graph: Would it be possible to run a second
AudioGraph at a very low sample rate (e.g. 10 Hz) for continuously
changing control messages?
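One way to sketch that idea without a second OS timer is to derive the low-rate graph's ticks from the audio sample count, so the control graph stays sample-locked to the audio graph (illustrative, not Jamoma's actual API):

```cpp
#include <cassert>

// A 10 Hz control graph driven from the audio callback: instead of its
// own clock, it counts audio samples and fires one control "sample"
// every samplesPerControlTick audio samples. At 44100 Hz audio and a
// 10 Hz control rate, that is one tick every 4410 samples.
struct ControlGraph {
    int samplesPerControlTick;  // audioRate / controlRate, in samples
    int counter = 0;            // audio samples since the last tick
    int ticks   = 0;            // control-graph samples computed so far

    // Called once per audio block from the audio graph's callback.
    void advance(int blockSamples) {
        counter += blockSamples;
        while (counter >= samplesPerControlTick) {
            counter -= samplesPerControlTick;
            ++ticks;  // compute one block of control values here
        }
    }
};
```

Because the control ticks are counted in audio samples, the two graphs can never drift apart, which is exactly the sample-synchronous property praised in the Quartz Composer notes at the top of this page.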