Callback Overload

The real-time executor currently uses a scheduling policy designed to let the pipeline keep making progress even when a particular callback takes a very long time to execute.

This section details that policy; it may be useful if you are debugging onboard performance issues.

When any subscriber or timer callback in a stage is ‘ready’, a ‘callback execution period’ for that stage will be scheduled on the thread pool.

Each ‘callback execution period’ works roughly like this (a code sketch follows the list):

  • Each timer, if ready, will execute exactly once
  • A count of outstanding messages for each subscriber is established
  • In round-robin order, cycling over the subscribers until every count reaches zero:
    • If the count of outstanding messages is zero, the subscriber is ignored
    • Otherwise, the subscriber is executed, and the count of outstanding messages is decreased by one
  • If any timer or subscriber is still ready, another callback execution period is scheduled for future execution
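
To make the policy above concrete, here is a minimal Python sketch of one such period. The Timer, Subscriber, and run_callback_execution_period names (and the schedule_on_thread_pool hook) are hypothetical illustrations, not the executor's actual API; the sketch only mirrors the control flow described in the list.

  from collections import deque

  class Timer:
      def __init__(self, callback):
          self.callback = callback
          self.ready = False                      # set by the executor when the timer period elapses

  class Subscriber:
      def __init__(self, callback, queue_size):
          self.callback = callback
          self.queue = deque(maxlen=queue_size)   # oldest message is dropped when the queue is full

      def outstanding(self):
          return len(self.queue)

  def run_callback_execution_period(timers, subscribers, schedule_on_thread_pool):
      # Each timer, if ready, executes exactly once.
      for timer in timers:
          if timer.ready:
              timer.ready = False
              timer.callback()

      # Establish a count of outstanding messages for each subscriber.
      counts = [sub.outstanding() for sub in subscribers]

      # Round-robin over the subscribers, cycling until every count reaches zero.
      while any(counts):
          for i, sub in enumerate(subscribers):
              if counts[i] == 0:
                  continue                        # nothing counted for this subscriber: skip it
              sub.callback(sub.queue.popleft())
              counts[i] -= 1

      # If anything is still (or newly) ready, schedule another period on the thread pool.
      if any(t.ready for t in timers) or any(s.outstanding() for s in subscribers):
          schedule_on_thread_pool(run_callback_execution_period,
                                  timers, subscribers, schedule_on_thread_pool)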

For example, suppose you have:

  • One callback for message ‘a’ that takes 200ms to execute, with a queue size of 1
  • One callback for message ‘b’ that takes 1us to execute, with a queue size of 200
  • A timer that takes 250ms to execute and runs at 10Hz
  • Data arriving for both ‘a’ and ‘b’ every 10ms

Then a ‘callback execution period’ would typically look like this (a back-of-the-envelope timing check follows the list):

  • The timer would execute
  • The callback for message ‘a’ would execute once (assuming its queue was full)
  • The callback for message ‘b’ would execute 200 times (assuming its queue was full)
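
As a rough sanity check on this example (back-of-the-envelope arithmetic, not a measurement), these numbers imply a period of roughly 450ms, during which about 45 messages arrive on each topic; the queue for ‘b’ (size 200) absorbs them, while the queue for ‘a’ (size 1) keeps only the newest message. The figures below simply restate the example's inputs:

  # Back-of-the-envelope arithmetic for the example above (illustrative only)
  timer_cost_s = 0.250          # 250ms timer callback, executed once
  a_cost_s     = 0.200          # 200ms 'a' callback, executed once (queue size 1)
  b_cost_s     = 1e-6           # 1us 'b' callback, executed 200 times (queue size 200)

  period_s = timer_cost_s + 1 * a_cost_s + 200 * b_cost_s
  print(f"callback execution period ~ {period_s * 1e3:.1f}ms")         # ~450.2ms

  arrival_interval_s = 0.010    # data arrives every 10ms for 'a' and 'b'
  arrivals_per_period = period_s / arrival_interval_s
  print(f"arrivals per period ~ {arrivals_per_period:.0f} per topic")  # ~45

  # 'b' (queue size 200) can absorb ~45 arrivals, so it keeps up.
  # 'a' (queue size 1) keeps only the newest message, so the other ~44 are dropped.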

Note that if a callback has a large queue size and takes a long time to process each message, other callbacks in the stage will still end up being starved. In that situation, it might be better to break your stage up into multiple stages.
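
To illustrate why splitting helps, here is a small self-contained sketch under the assumption, implied by the description above, that each stage's callback execution periods are scheduled as independent tasks on the shared thread pool. The function names and the ThreadPoolExecutor stand-in are made up for illustration; the actual stage-creation API is not shown here.

  import time
  from concurrent.futures import ThreadPoolExecutor

  # Hypothetical sketch: once the slow callback lives in its own stage, its callback
  # execution periods become separate thread-pool tasks, so the fast stage's periods
  # no longer sit behind a 200ms execution inside the same round-robin.

  def slow_stage_period():
      time.sleep(0.200)                     # stand-in for the 200ms 'a' callback
      return "slow stage period done"

  def fast_stage_period():
      return "fast stage period done"       # stand-in for the 1us 'b' callback and the timer

  pool = ThreadPoolExecutor(max_workers=2)  # stand-in for the executor's thread pool
  slow = pool.submit(slow_stage_period)
  fast = pool.submit(fast_stage_period)
  print(fast.result())                      # returns promptly, without waiting on the slow stage
  print(slow.result())
  pool.shutdown()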