
synchronising ruby with the audio thread

For general discussion related to FlowStone


Post by MichaelBenjamin » Fri Jul 24, 2020 12:11 am

.
Last edited by MichaelBenjamin on Mon Sep 21, 2020 10:38 am, edited 1 time in total.
MichaelBenjamin
 
Posts: 275
Joined: Tue Jul 13, 2010 1:32 pm

Re: synchronising ruby with the audio thread

Post by tulamide » Fri Jul 24, 2020 1:44 am

MichaelBenjamin wrote: from page 148 of the manual:

"If you have any of the DirectSound or ASIO primitives in your schematic and these are switched on then the clock will automatically switch to run in sync with the audio processing. You can then schedule events to occur with sample precise timing within any audio frame."

This sounds pretty good, but how is this guaranteed to be synced?
How do you define audio frames and access them?
Has anyone made a testbed for this, or does anyone have further knowledge about it?

Scheduling is the keyword here. In Ruby you can schedule an event (or a method call) to be executed at a specific time. This scheduling switches to synced timing when ASIO/DirectSound is active, because those sound drivers work buffer-based. This means that, instead of an assumed steady flow of data, the data is output one block after the other, which gives any application a small time frame in which to execute whatever it needs to. A typical ASIO buffer, for example, is somewhere around 3 ms to 12 ms.

If we set it to e.g. 10 ms, then FlowStone needs to output a block of data every 10 ms, not each single audio sample as soon as it's processed. When the scheduled event or method falls into such a frame, it's simple math to calculate the position in the buffer from the time. With scheduling, FlowStone has enough time to do those calculations beforehand and can therefore guarantee sample-precise timing within a frame.
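The buffer-position math described above can be sketched in plain Ruby. This is a toy illustration, not the FlowStone API; the sample rate and 10 ms buffer size are assumed values.

```ruby
# Toy model: map a scheduled event time to a sample index within the
# current buffer. SAMPLE_RATE and BUFFER_SIZE are assumptions.
SAMPLE_RATE = 44_100.0
BUFFER_SIZE = 441  # 10 ms at 44.1 kHz

# buffer_start: time-stamp (in seconds) of the buffer's first sample
# event_time:   the time the event was scheduled for
def sample_index(buffer_start, event_time, sample_rate = SAMPLE_RATE)
  ((event_time - buffer_start) * sample_rate).round
end

sample_index(1.0, 1.0025)  # => 110, i.e. the event lands 2.5 ms into the block
```

Because the event time is known before the buffer is rendered, the index can be computed up front, which is what makes the timing sample-precise.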
"There lies the dog buried" (German saying translated literally)
tulamide
 
Posts: 2714
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: synchronising ruby with the audio thread

Post by trogluddite » Fri Jul 24, 2020 3:16 pm

To work with Ruby audio sync, there are three components...

Mono-to-Frame: This collects samples from a mono stream, creating a 'Frame' object from them (similar to an Array). Each time the Frame is filled (exactly once per ASIO buffer), the Frame is sent to the output, which will trigger a following RubyEdit to process the Frame.

Frame Sync: Sends a RubyEdit trigger at every boundary between ASIO buffers, but doesn't collect a Frame. Useful when the Ruby code is a pure generator of audio or sample-synced MIDI.

Frame-to-Mono: The reverse of Mono-to-Frame - a Ruby Frame at the input will be read out as the samples of a mono stream.

Note that, because of the way that FS streams simulate "one sample at a time" processing, using Ruby Frames usually incurs exactly one buffer of additional "round-trip" latency to audio travelling via those paths (non-Ruby audio isn't affected).
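That one-buffer round-trip latency can be seen in a tiny plain-Ruby simulation: ordinary Arrays stand in for Frame objects, and the buffer size is chosen arbitrarily for illustration.

```ruby
# Toy simulation (plain Ruby, not FlowStone objects): Mono-to-Frame must
# fill a whole Frame before a following Frame-to-Mono can read it out,
# so the audio emerges delayed by exactly one buffer.
BUF = 4  # tiny buffer for illustration

input   = [1, 2, 3, 4, 5, 6, 7, 8]
output  = []
pending = nil                          # the Frame held between the two stages

input.each_slice(BUF) do |frame|       # one slice per ASIO buffer period
  output.concat(pending || [0] * BUF)  # Frame-to-Mono plays the *previous* Frame
  pending = frame                      # Mono-to-Frame hands over the new Frame
end

output  # => [0, 0, 0, 0, 1, 2, 3, 4] - delayed by one full buffer
```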

All of the above also applies to the DLL component, which uses Frames to pass audio into DLL functions.
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
trogluddite
 
Posts: 1730
Joined: Fri Oct 22, 2010 12:46 am
Location: Yorkshire, UK

Re: synchronising ruby with the audio thread

Post by MichaelBenjamin » Fri Jul 24, 2020 10:50 pm

.
Last edited by MichaelBenjamin on Mon Sep 21, 2020 10:38 am, edited 1 time in total.
MichaelBenjamin
 
Posts: 275
Joined: Tue Jul 13, 2010 1:32 pm

Re: synchronising ruby with the audio thread

Post by tulamide » Sat Jul 25, 2020 12:26 am

MichaelBenjamin wrote: output 0, @(whatever variable from ruby), time

An offset of zero doesn't mean "now", but "as soon as you can". The earliest it will happen is on the next tick (the Ruby thread runs at 100 Hz). That of course defeats the purpose: your timing will be slightly off. I found that recognizing the 100 Hz tick and taking it into account gives better (reliable) results. I never go lower than time + 0.01. In fact, if you have something that calls scheduling repeatedly via a timer, you will find that FlowStone shuts down the RubyEdit (because you may call the same action dozens of times within the current timeslice) if you don't offset it by at least 1/100 from time.

Having said that, feel free to experiment. More powerful processors might be able to deal with it, or get better precision.
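A plain-Ruby sketch of why time + 0.01 is the practical minimum offset. The 100 Hz tick comes from the post above; the quantisation model itself is an assumption for illustration.

```ruby
# Toy model of the 100 Hz Ruby tick: an event posted with offset 0
# ("as soon as you can") fires no earlier than the NEXT tick.
TICK = 0.01  # Ruby thread period: 1/100 s (from the post above)

def earliest_execution(now)
  ((now / TICK).floor + 1) * TICK
end

earliest_execution(2.034)  # fires at ~2.04, i.e. up to 10 ms late
# Scheduling at time + 0.01 (or later) stays ahead of the tick grid,
# which is why that is the practical minimum offset.
```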

And Trog might add more technical information. I found out weeks ago that my explanations tend to lose information on paper that is still in my brain.
"There lies the dog buried" (German saying translated literally)
tulamide
 
Posts: 2714
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: synchronising ruby with the audio thread

Post by MichaelBenjamin » Sat Jul 25, 2020 1:49 am

.
Last edited by MichaelBenjamin on Mon Sep 21, 2020 10:38 am, edited 1 time in total.
MichaelBenjamin
 
Posts: 275
Joined: Tue Jul 13, 2010 1:32 pm

Re: synchronising ruby with the audio thread

Post by tulamide » Sat Jul 25, 2020 2:02 am

MichaelBenjamin wrote: "as soon as you can" is not good enough in the audio world, where every sample counts, either it is synchronised or not.

Exactly, that's my point. You have to use proper scheduling to be exactly sample-precise, not just ::time. There is no real difference between these two:
output(0, @value, time)
output(0, @value)


Either schedule ahead of time and get sample-precise timing, or don't use scheduling and be bound to the 100 Hz timing with its inaccuracies (since the audio thread always comes first!).
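The contrast can be modelled in a few lines of ordinary Ruby. This is a toy scheduler, not FlowStone's; the sample rate is an assumption.

```ruby
# Toy model: an event with an explicit future time gets a precise sample
# index inside the buffer; an un-timed event can only land on the next
# buffer boundary (index 0).
SR = 48_000.0  # assumed sample rate

def placement(buffer_start, event_time)
  if event_time
    ((event_time - buffer_start) * SR).round  # sample-accurate position
  else
    0                                         # next boundary: "as soon as you can"
  end
end

placement(0.5, 0.5005)  # => 24 (0.5 ms into the buffer)
placement(0.5, nil)     # => 0  (quantised to the buffer boundary)
```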

But with SM 1.0.7, why are you asking this? You will never be able to use it anyway. :?:
"There lies the dog buried" (German saying translated literally)
tulamide
 
Posts: 2714
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: synchronising ruby with the audio thread

Post by MichaelBenjamin » Sat Jul 25, 2020 3:40 am

.
Last edited by MichaelBenjamin on Mon Sep 21, 2020 10:38 am, edited 1 time in total.
MichaelBenjamin
 
Posts: 275
Joined: Tue Jul 13, 2010 1:32 pm

Re: synchronising ruby with the audio thread

Post by trogluddite » Sat Jul 25, 2020 2:26 pm

MichaelBenjamin wrote: "as soon as you can" is not good enough in the audio world

Indeed; and this is why we use buffering. There is then only one constraint which has to be met - that one full buffer of samples is available when the next one is requested. Provided this happens, it does not matter exactly when any individual sample of the buffer is processed, nor what order we process them in, as a sample's "timing" is solely a function of its index within the buffer.

When Ruby is synced to audio, the queue of Ruby events is checked at each buffer request, and any events falling within the current buffer's time period will be executed. When you schedule a Ruby event at "time + x", a calculation is done which incorporates the time-stamp of the current buffer request and the sample rate, the result of which determines the index of a sample within the current buffer. When the final output audio is computed, samples calculated from this index onwards will use the result of the corresponding Ruby processing.

Events which don't have an explicitly scheduled time always take effect at the next available buffer boundary (if Ruby is synced to audio). This applies to all Ruby events where no time-stamp was given, and also to incoming "green" triggered input values. So "at the next buffer boundary" is effectively "as soon as you can" for asynchronous events (this is also the case when "green" values are passed to "blue" or "white" audio streams).

Whether a complete buffer can be filled within the latency period will depend on how much processing there is to be done, of course. But that is an "all or nothing" state - either it does get filled, in which case time-of-arrival of all samples is guaranteed (subject only to latency and hardware clock-jitter); or it does not get filled, which does not strictly affect the hardware timing of samples, it just means that they'll contain garbage values (SPLAT!)
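The "all or nothing" deadline can be expressed as a back-of-the-envelope check. Plain Ruby; all numbers are invented for illustration.

```ruby
# Toy check: the only real-time constraint is that a full buffer of
# samples is ready by its deadline (one buffer period). All figures
# here are made-up illustrations, not measured FlowStone costs.
def meets_deadline?(samples_per_buffer, sample_rate, cost_per_sample_s)
  period = samples_per_buffer / sample_rate.to_f   # time available
  work   = samples_per_buffer * cost_per_sample_s  # time needed
  work <= period
end

meets_deadline?(441, 44_100, 2e-5)  # => true  (8.8 ms of work in a 10 ms period)
meets_deadline?(441, 44_100, 3e-5)  # => false (13.2 ms of work -> SPLAT!)
```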
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
trogluddite
 
Posts: 1730
Joined: Fri Oct 22, 2010 12:46 am
Location: Yorkshire, UK

