DSP code box - flawed?
19 posts
• Page 2 of 2 • 1, 2
Re: DSP code box - flawed?
I think I understand now what you mean. However, the whole point was that I already know the DSP editor doesn't offer the buffer; my point was that it should.
The framebuffer you access with Ruby is the buffer as supplied by the sound driver (ASIO/DirectSound). There is no copying going on (unless you convert the frame to an array, of course). From the User's Guide:
These are generated every time a frame of samples is requested from ASIO or Direct Sound. The size of the frame depends on the ASIO or DirectSound setup you have [...] delivers it one frame at a time in precise sync with the stream.
I'll give you another hint as to why Flowstone DOES use buffers. In general, we know there is block processing and sample processing. But Flowstone makes use of SSE; Mono4 is one example. And SSE is block-based. However, as soon as block-based functionality is involved, you have to use double-buffering. See this schematic:
https://www.eetimes.com/wp-content/uploads/media-1067220-adifigure11-big.gif
All I'm saying is that the DSP editor shouldn't be so restrictive, and should give us access to the buffers. It wouldn't be half as complicated to use!
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: DSP code box - flawed?
tulamide wrote:DSP editor not offering the buffer. My point was, that it should do so.
Well, "should" is an opinion, so I have to allow you that! But DSPr long ago decided that "one sample at a time" is easier to understand for novices, non-coders, analogue-electronics engineers, etc., and the whole architecture is now built around that. DSPr have always felt very strongly that SM/FS should be a "sandbox" where users do not know or see "technical details" (I used to argue a lot with Malc about that!). Also, as my Ruby example showed, "one sample" can eliminate many temporary buffers/arrays, so it can be more efficient and use less memory (you store old samples only if you need them). Both styles have advantages and disadvantages.
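The "store old samples only if you need them" idea can be sketched in a few lines. This is plain Python used as neutral pseudocode, not FlowStone DSP syntax; the class name is mine:

```python
# Conceptual sketch (not FlowStone code): a one-pole smoother in
# "one sample at a time" style. The only state kept is the previous
# output - no input or output buffers are needed at all.
class OnePoleSmoother:
    def __init__(self, coeff=0.9):
        self.coeff = coeff   # smoothing coefficient, 0..1
        self.prev = 0.0      # the single stored "old" sample

    def process(self, x):
        # y[n] = (1 - c) * x[n] + c * y[n-1]
        self.prev = (1.0 - self.coeff) * x + self.coeff * self.prev
        return self.prev

smoother = OnePoleSmoother(0.5)
out = [smoother.process(x) for x in [1.0, 1.0, 1.0, 1.0]]
# out rises towards 1.0: [0.5, 0.75, 0.875, 0.9375]
```

The point of the sketch: the whole history of the stream is summarised in one variable, which is exactly what a buffered version would have to store a whole block to achieve.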
Also, you said "the" buffer (singular), as Spogg did too; I wonder if this is part of the misunderstanding. As my Ruby example was meant to show, to have DSP code buffers you must have a buffer for every streamin/streamout - the DS/ASIO buffer alone can never be enough (it can't even be "updated" by each component, as there may be parallel audio paths needing the original data). Also, buffers with "future" samples don't mix well with "single sample" - a schematic should be "all buffer" or "all single sample", because mixing them will add latency while buffers are filled and emptied (see below about Ruby Frames).
tulamide wrote:The framebuffer you access with Ruby is the buffer as supplied by the sound driver (ASIO/DirectSound)
No, this is not so - and in fact, Ruby Frames bring exactly that problem of mixing "buffered" with "single sample". It can't be the ASIO/DS buffer, because there may be many components upstream of the mono2frame - it has to be an "inter-component" buffer. Not only is there copying, but the copying happens "one sample at a time" - equivalent to DSP code writing each sample into a mem. This is why using Ruby Frames incurs latency (yes, and it's easy to prove with a simple schematic) - the buffer cannot be passed to Ruby until it is completely filled. The User Guide quote is badly phrased IMHO - it mixes up "wrapper" concepts (the request of a buffer) with "inside sandbox" concepts (the storage of the samples) - though, to be fair, the process is so indirect that it can't easily be said in one sentence.
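The latency mechanism described here can be sketched like this. It is an assumed model of the behaviour, not actual FlowStone internals, and the names are mine:

```python
# Conceptual sketch: an "inter-component" frame buffer filled one
# sample per tick. The frame can only be handed to Ruby once it is
# completely full, so the first frame arrives FRAME_SIZE ticks after
# the first sample went in - that delay is the Ruby Frame latency.
FRAME_SIZE = 4  # real ASIO/DS frames are much larger, e.g. 512

class MonoToFrame:
    def __init__(self):
        self.buf = []

    def push(self, sample):
        """Copy one sample per tick; return a full frame, or None."""
        self.buf.append(sample)
        if len(self.buf) == FRAME_SIZE:
            frame, self.buf = self.buf, []
            return frame
        return None

m2f = MonoToFrame()
results = [m2f.push(s) for s in range(8)]
# No frame is available for the first FRAME_SIZE - 1 ticks.
```

With a 512-sample frame the same logic means 511 samples of waiting before Ruby sees anything, which matches the "cannot be passed until completely filled" argument above.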
tulamide wrote:But Flowstone makes use of SSE.
SSE is just one example of SIMD (Single Instruction, Multiple Data). The "Multiple Data" can be whatever the programmer chooses. Yes, it was marketed as an aid to stream processing (hence the SSE/MMX names), and stream processing uses buffers - but SIMD instructions are a form of parallelisation, not of buffering or block processing. In FS, the "Multiple Data" is four samples from the same moment in time, so it is not buffer-related.
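The distinction being made - parallel lanes at one instant, versus blocks over time - in a few lines of plain Python, with a list of four floats standing in for a 128-bit SSE register:

```python
# Conceptual sketch: SIMD as described here is parallelism across
# lanes, not buffering in time. One "register" holds four samples
# from the SAME moment (e.g. four Mono4 channels), and one operation
# acts on all four lanes at once - no history, no block accumulation.
def simd_mul(reg_a, reg_b):
    """One 'instruction': multiply all four lanes simultaneously."""
    return [a * b for a, b in zip(reg_a, reg_b)]

# Four channels at the same sample instant, scaled by four gains:
channels = [0.5, -0.25, 1.0, 0.0]
gains    = [2.0,  2.0,  0.5, 3.0]
out = simd_mul(channels, gains)  # [1.0, -0.5, 0.5, 0.0]
```

Nothing in the operation needs to wait for more than the four values already sitting in the two registers, which is the crux of the "no buffering required" claim.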
I must stress that I am not guessing here. I worked closely with Malc for many years as a beta-tester, and I learned a great deal about the inner workings of FS. You certainly understand buffering well, and your intuitions about the FS engine are very intelligent, there is no doubt about that - but they apply only to the outer "wrapper" of FS. Inside the "sandbox", everything is conceptually different; it's not just that buffers aren't exposed, they're simply not normally used at that level, and making them available is very awkward (e.g. the Ruby Frame latency). Whether buffers "should" be available or not, the simple fact is that, with the current architecture, there would be a high cost to doing it - and not many people have ever asked for it, so it won't be a big dev priority (those of us who have used JUCE and similar are very few here).
Finally: if you really are still unconvinced, I am happy for you to ask MyCo to critique my posts. If anyone will know, it is him (and if he doesn't, we're really in trouble!). I think he would criticise me for over-simplifying, and for missing a few buffers that do exist (between CPU threads, for example) - but I haven't been wrong yet when he has judged my comments on Slack or my bug reports.
BTW: I have used buffered systems too, and I'm curious why you say they make the code easier to write (setting technical background aside). For example, you mentioned smoothing using sample[n-1], sample[n], sample[n+1]. In this case...
- At the first sample of a buffer, sample[n-1] is part of the previous buffer. It's gone now.
- At the last sample of a buffer, sample[n+1] doesn't exist yet; it's the first sample of the next buffer ("reading the future").
My experience is that once corner-cases like those are dealt with (e.g. handling buffer transitions), working directly with the buffers isn't much simpler. I find most of the DSP code difficulties are due to lack of branching in SSE rather than lack of buffer access. Different "familiarity" between us, I suppose, and no criticism intended.
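Those two corner-cases, sketched in plain Python (neutral pseudocode, not FlowStone code; carrying the previous buffer's last sample across calls is just one common way to handle the boundary):

```python
# Conceptual sketch of the buffer-boundary corner-cases: a 3-tap
# average must carry state across buffer boundaries, and sample[n+1]
# at the end of a buffer simply is not available yet.
def smooth_buffer(buf, carry):
    """carry = last sample of the previous buffer (None at stream start)."""
    out = []
    for n in range(len(buf)):
        prev = buf[n - 1] if n > 0 else (carry if carry is not None else buf[n])
        # At the last index, the "future" sample lives in the NEXT
        # buffer; here we fall back to the current sample rather
        # than wait for it (one possible policy among several).
        nxt = buf[n + 1] if n + 1 < len(buf) else buf[n]
        out.append((prev + buf[n] + nxt) / 3.0)
    return out, buf[-1]

out1, carry = smooth_buffer([3.0, 6.0, 9.0], None)
out2, carry = smooth_buffer([12.0, 15.0, 18.0], carry)
```

Note how the boundary policy (repeat the edge sample, or wait one buffer for the real neighbour) is a design decision the "one sample at a time" style never has to make.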
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
- trogluddite
- Posts: 1730
- Joined: Fri Oct 22, 2010 12:46 am
- Location: Yorkshire, UK
Re: DSP code box - flawed?
I don't doubt your expertise, and never have. But you also know me well enough to know that I don't write about things I know nothing of. SSE (Streaming SIMD Extensions) is block-based functionality. There are (I think) 16 registers, which need to be filled first and are then processed in parallel - which is the definition of "block-based".
But in general, let me remind you that I made this thread to show the issues I have with the DSP editor. To do that, I deliberately used an example I know a lot about: double-buffered, "one-pixel-at-a-time" pixel shaders. Double-buffering here means that one buffer is processed while the other is in use. The language is close to C/C++, yet it didn't hinder me for a second - because everything in a pixel shader is LOGICAL. All the code I write changes just ONE pixel, yet I could easily create things like zoom-blur shaders or complete texture replacements. And please keep in mind that pixel shaders, just like audio, work on an ever-changing stream of data, which allows for animations - similar to how you implement envelopes for audio (effects that apply over time).
The situation we are currently in is that you say "it's sample-based, and that's good", and I say "it should be block-based, that's better".
You doubted me when I said that Flowstone is the exception in not exposing the buffers. But take a look at all the audio DSP libraries out there. Over the years I have looked at so many, and I can't do the reading for you (at least not compressed into a few minutes). I assure you, Flowstone is an exception.
It's also still missing a lot of logic. The fact that I can easily program pixel shaders in a language I never got used to, while I can't even write a simple piece of audio processing for the DSP editor - yet have no issue at all doing it buffer-based with Ruby - should be proof enough. Combine that with the statements that you need to learn ASM to speed-optimise your creations, while pixel shaders are already speed-optimised, and it gives you a hint that there are better concepts than "one sample at a time" - better in terms of being understandable and logical.
You talked about edge cases, and that's a good point. Pixel shaders have to deal with edge cases too - and they do. What DSP code has to do in general, for every sample (additionally storing samples in code, as you can see so often in schematics), is what pixel shaders do only for edge cases. Not elegant, but the smallest possible amount of "code junk".
I'm not saying that pixel shaders are the way to go for our DSP editor. But I am saying there are more logical and efficient methods out there, used by the majority of code for a reason. The half-baked DSP code box doesn't really shine against that. Just look at how many DSP coders there are here: yep, a handful. The others (me included) just gratefully grab whatever they come up with and hope it will suit their needs, because they can't make changes to it.
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: DSP code box - flawed?
tulamide wrote:There are (I think) 16 registers, which need to be filled first and then are processed parallel
SSE/MMX/AVX registers and opcodes are no different from any others, except that the registers contain more bits, so that they are "wide" enough to carry multiple values (four, in our case) at once. An SSE opcode can be called at any time, and it immediately computes all four values simultaneously in whatever registers the opcode acts upon (only ever one or two registers). If what you said were true, the CPU instruction pipeline would constantly stall while it waited for registers to be "filled", and it would prevent the registers from being freely reused for different purposes within the same ASM block - and most of the optimised ASM code I've ever written would break unless that were possible!
More generally: I wasn't saying that "one sample at a time" is intrinsically better, and for a conventional DSP library in a regular "written" code language it usually wouldn't be. The issue, I think, is that you are not comparing like with like. FlowStone is not a "DSP library" in the conventional sense - it's an application which implements a DSP-engine "wrapper" around a graphical, modular "sandbox". SynthEdit is the same kind of thing, and so works in much the same way - hence I said that FlowStone is not the only exception. "One sample at a time" is just the optimal implementation within this specific context: restricting the user to the "sandbox" inside the "wrapper". The way in which FS uses SSE channels is also part of what imposes that limitation, due to the lack of branching and the difficulty of accessing arrays via their indexes (if you've ever seen the ASM equivalent of DSP array access, you'll see why - the SSE channels have to be split apart and recombined, which is very inefficient).
I do understand your frustration that FS doesn't work in the way that you are used to, and I'm not meaning to be dismissive of that - just pragmatic about the tools we have available. The fact that FlowStone is "unconventional" is not just a matter of having "hidden" things - there really is no way to have both the "one sample" wrapper and buffer access within the same DSP element. I'm afraid that you have a choice to make - either stick with FlowStone and learn the "one sample" way of working; or, commit to OOP and a dsp-library and do everything that way - there simply isn't any way to have the "best of both worlds". If there were, I'm pretty sure that MyCo would have added those features to the code editor by now, as he'll no doubt be very familiar with that way of working himself.
At the end of the day, we are stuck with FlowStone as it is, and it would be a huge rewrite of the "engine" to include buffer access - though I do understand why you would want it (I'm comfortable with that way of working too, so I really can see the attraction). It's been an interesting debate (aside from my bad jokes!), but I'll leave it at that, I think - we're talking about things that, ultimately, are beyond our control, which is always a frustrating kind of topic. And, of course, I'm perfectly happy to help you get your head around FlowStone's odd way of working, as I'm sure Adam, Martin, et al. would be too.
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
- trogluddite
- Posts: 1730
- Joined: Fri Oct 22, 2010 12:46 am
- Location: Yorkshire, UK
Re: DSP code box - flawed?
Yes, we should leave it at that. It's a shame that it won't change, but all the arguments have been made and we would just start going in circles.
But regarding SSE2, you still haven't convinced me. On Intel's tech page, they describe (literally!) processing 4 integers simultaneously with one instruction (the "SI" in SIMD), and 2 double-precision floats simultaneously. How the data arrives there, in a serial system like our PCs, without waiting for the data portions to arrive is a mystery that you didn't explain. Normally you would indeed need to wait for all the data to be there before executing the instruction (and therefore introduce buffering - hence the Level 1, Level 2 and Level 3 caches), so it doesn't make sense when you say that exactly this doesn't happen.
Or is it magic? (just a joke)
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: DSP code box - flawed?
tulamide wrote:Or is it magic? (just a joke)
Arthur C Clarke's famous quote comes to mind: "Any sufficiently advanced technology is indistinguishable from magic." Even after all my years coding, I still turn on my computer and feel that way about it sometimes!
As for SSE, an analogy may go some of the way. 32-bit integers are treated in hardware and software as a single unit. An ARGB colour can be handled in this form, even though it represents four 8-bit "fields" - for example, a single CPU instruction can copy/move the whole colour without having to split it. SSE is similar, except that it is 128 bits consisting of four 32-bit float "fields". You are correct that there is serial operation in the FS "wrapper" to gather the samples from, say, the left and right channels; but within the "sandbox", the 128-bit registers/variables are treated as single, indivisible units. In another part of the tale, this is why bitmasks have to be used instead of branching.
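Both halves of that analogy can be sketched in plain Python (the helper names are mine, and Python ints stand in for the hardware registers):

```python
# Conceptual sketch: a 32-bit ARGB colour is one value carrying four
# 8-bit fields, and SSE-style branch-free selection uses an all-ones
# or all-zeros mask instead of an if/else in the data path.
FULL = 0xFFFFFFFF  # a 32-bit "all ones" mask

def pack_argb(a, r, g, b):
    """Four 8-bit fields carried as one indivisible 32-bit unit."""
    return (a << 24) | (r << 16) | (g << 8) | b

def select(cond_mask, x, y):
    """Branchless select: cond_mask must be FULL (true) or 0 (false)."""
    return (cond_mask & x) | ((cond_mask ^ FULL) & y)

colour = pack_argb(0xFF, 0x12, 0x34, 0x56)  # one move copies all four fields
picked = select(FULL, 0xAAAA, 0x5555)       # behaves like "x if cond else y"
```

In real SSE code the mask comes from a compare instruction and the select is done with AND/ANDN/OR on whole registers, but the principle is exactly this: no branch ever touches the data.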
And there, I promise, I will leave it!
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
- trogluddite
- Posts: 1730
- Joined: Fri Oct 22, 2010 12:46 am
- Location: Yorkshire, UK
Re: DSP code box - flawed?
mmm- interesting stuff, you adepts
I just wanted to make a small observation: Ruby frames are necessary because of Ruby's speed,
not because they are an inherently superior device.
If you want to affect the samples before and after a sample, depending on that sample,
there are mems which can be called in DSP, so you could operate at, say, a 128-sample delay anywhere in the FS DSP construct. RMS, for example, has a window.
DSP has an elegant simplicity in FS, imo.
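The "mem as a window into the past" idea, sketched in plain Python (neutral pseudocode, not FS code; a window of 8 just to keep the example short, where the post suggests e.g. 128):

```python
# Conceptual sketch: a circular buffer written one sample at a time
# gives a look-behind window without any block processing. An RMS
# window or a fixed delay can be built on exactly this pattern.
WINDOW = 8

class DelayMem:
    def __init__(self):
        self.mem = [0.0] * WINDOW
        self.pos = 0

    def tick(self, x):
        """Write one sample; return the sample from WINDOW ticks ago."""
        delayed = self.mem[self.pos]
        self.mem[self.pos] = x
        self.pos = (self.pos + 1) % WINDOW
        return delayed

d = DelayMem()
out = [d.tick(float(i)) for i in range(12)]
# The first WINDOW outputs are the buffer's initial zeros; after
# that, each output is the input from WINDOW ticks earlier.
```

Because the mem is filled one sample per tick, it fits the "one sample at a time" model while still giving access to past samples, which is the observation being made above.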
- nix
- Posts: 817
- Joined: Tue Jul 13, 2010 10:51 am
Re: DSP code box - flawed?
nix wrote:I just wanted to make a small observation that Ruby frames are necessary because of Ruby's speed,
not because they are an inherently superior device.
Yes, and nobody denies that. But the concept behind it is superior. You get the audio data from the DAW in blocks anyway, so why not make use of that? (Oh no, here I go again... trog, I swear I won't make it a topic again, it's just an answer to a statement!)
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: DSP code box - flawed?
trogluddite wrote:You are correct that there is serial operation in the FS "wrapper" to gather the samples from, say, left and right channels; but within the "sandbox", the 128-bit registers/variables are treated like single indivisible units. In another part of the tale, this is the cause of bitmasks having to be used instead of branching.
And that, Trog, is the kind of explanation I need. Now I know how you define things AND can therefore follow your explanation. I also see that I didn't misunderstand the whole concept of SSE - it's a matter of words! I agree with this, understand it, and am pleased that we could unravel the Gordian knot.
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany