Audio or Mono signal as trigger for a graph module
20 posts
• Page 2 of 2 • 1, 2
Re: Audio or Mono signal as trigger for a graph module
tulamide wrote:Martin, Spogg, anybody else, can you give me a real world example, where it matters, if you see sample #0 at trigger - and not sample #-1 or sample #3? Let alone that you can't even draw the difference between 0.1 and 0.10001?
It is a good question, someone can maybe elaborate on it for me also...
I guess one might want to draw the impulse (response) scaled up on the X axis, so that each sample spans the whole screen. In such a case, consistent alignment (independent of hardware) and triggering do become important, especially since green lives in a "non-sample space" and gives slightly different results. That might be one use case for this realtime triggering? Or?
Maybe also triggering is needed when feeding multiple oscillators/signals?
But for audio purposes versus visual importance... at that scale the frequency is high anyway.
Could one alternatively take a 1024-sample capture with a ticker and align it to the screen? In other words, oscilloscope-style scan triggering?
Haven't looked into these matters or even tried to understand how, for example, MV's sync scope works. Some comparator functionality between samples in an array, I guess...
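The comparator-style triggering guessed at here could be sketched in a few lines (a hypothetical Python sketch of the general idea, not how MV's sync scope actually works; the function name and the rising-edge convention are my own assumptions):

```python
def find_trigger(samples, level=0.0):
    """Return the index of the first rising level crossing:
    the first sample i where samples[i-1] is below `level`
    and samples[i] is at or above it. None if no crossing."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i
    return None

# Align the display to the crossing, oscilloscope-style:
frame = [-0.5, -0.2, 0.1, 0.4]
start = find_trigger(frame)   # -> 2, so drawing starts at the crossing
```

A sync scope would presumably run something like this over each captured array before drawing, so successive frames line up on the same phase.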
Interesting topic anyway...
My beginner synth at KVR: https://www.kvraudio.com/product/saguaro-one-by-saguaro-one
- R&R
- Posts: 468
- Joined: Fri Jul 15, 2022 2:28 pm
Re: Audio or Mono signal as trigger for a graph module
tulamide wrote:As always, my question got ignored (sigh). I'll try it again.
Martin, Spogg, anybody else, can you give me a real world example, where it matters, if you see sample #0 at trigger - and not sample #-1 or sample #3? Let alone that you can't even draw the difference between 0.1 and 0.10001?
I might be considered stubborn, nosy, even dispensable. But if you can't answer the question, it means there is no application for it. And that would mean, it's impractical to even try.
I didn’t notice the question mark!
All I can say is that I’ve never, so far, needed to do it myself, and I don’t think that I will.
This is me thinking out loud:
Maybe you could stay in the stream world and have a pre-trigger rolling capture system (based on a delay algorithm). A stream-based trigger could halt the capture after a defined number of samples (time). That way you get to see the very first sample.
After the trigger, the one-shot “delay line” would be read out and converted to a green array for display. By subtracting the pre-set number of pre-roll samples you could then examine the array using the section prim, to zoom in on the attack phase. This would not be real time but it could be fast.
Or something…
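Spogg's pre-trigger rolling-capture idea could look something like this (a minimal Python mock-up of the concept, not Flowstone code; the class name, buffer sizes and trigger level are all illustrative assumptions):

```python
from collections import deque

class PreTriggerCapture:
    """Rolling capture: keep the last `pre` samples in a ring buffer;
    once the trigger level is crossed, record `post` more samples,
    then freeze. The frame shows what happened just before and
    just after the trigger."""
    def __init__(self, pre=64, post=192, level=0.5):
        self.pre, self.post, self.level = pre, post, level
        self.buf = deque(maxlen=pre)   # rolling pre-trigger history
        self.tail = []                 # samples captured after the trigger
        self.triggered = False

    def push(self, x):
        if self.triggered:
            if len(self.tail) < self.post:
                self.tail.append(x)    # capture until `post` samples recorded
        elif x >= self.level:
            self.triggered = True      # trigger event: halt the rolling buffer
            self.tail.append(x)
        else:
            self.buf.append(x)         # keep rolling; oldest samples fall off

    def frame(self):
        # pre-roll history followed by the trigger sample and what came after;
        # the trigger sample always sits at index `pre` (or earlier, if the
        # buffer hadn't filled yet), so the attack phase is easy to zoom into
        return list(self.buf) + self.tail
```

In Flowstone terms the `frame()` result would be the green array handed to the display, and indexing from `pre` onward corresponds to Spogg's "subtracting the pre-set number of pre-roll samples" before using the section prim.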
-
Spogg - Posts: 3358
- Joined: Thu Nov 20, 2014 4:24 pm
- Location: Birmingham, England
Re: Audio or Mono signal as trigger for a graph module
Spogg wrote:I didn’t notice the question mark! ... Maybe you could stay in the stream world and have a pre-trigger rolling capture system (based on a delay algorithm)... Or something…
Thanks, Spogg! At least one person answered.
I see it quite similarly. You have to separate audio and graphics. You can be sample-precise in your audio stream and do exactly what is needed, while the graphics are just an approximation. From my point of view, this is the only way that makes sense; if it weren't, you would see sample-precise graphics in every plugin. Which you don't. So, unless there is a convincing argument against this, I'd advise always neglecting realtime graphics in favor of audio streams.
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: Audio or Mono signal as trigger for a graph module
Reading my post back, and thinking about it a bit more, what I’ve described is actually available on many oscilloscopes already. It’s called something along the lines of “Pre-trigger” and became possible when ‘scopes started using digital memory. When you enable pre-trigger, the ‘scope starts to record the input into a buffer. When a trigger event happens, the buffer is halted after a pre-set time. Then the buffer’s content is sent to the display, so you get to see what happened just before the trigger and just after.
When I said I didn’t personally see a use for this I was horribly wrong; I forgot about my past work on X-ray machines. Older machines used a pre-contact system in which the HT transformer was powered up in two stages. The initial stage connected the transformer via a resistor, typically for 20 ms; then the resistors were shorted out by the main contacts to give full power, which reduced the very high inrush current. In some fault situations this didn’t work, and I found I could use my new digital ‘scope to examine the pre-contact phase, to actually see the very start of the power-up and work out what was failing.
Of course, audio work is a different situation, but maybe it would be useful for analysing initial clicks when a stream opens.
-
Spogg - Posts: 3358
- Joined: Thu Nov 20, 2014 4:24 pm
- Location: Birmingham, England
Re: Audio or Mono signal as trigger for a graph module
When we talk about plugins and realtime graphical representation of actual data generated by the plugin, I don't expect examples about X-ray machines or analog oscilloscopes. But since it happened, allow me an off-topic example as well.
<Attachment removed due to copyright claim - please abide by the forum rules and do not post images which you do not own or have permission to distribute>
This is a caesium atomic clock, one of only four in Germany. Since 1991 this clock has told the time as accurately as possible. Even this clock drifts, if only by 1 second in 100 million years. But let's assume it shows the current time perfectly.
I don't see anybody walking around with such a thing, or having one in their apartment. Instead we walk around with standard watches that are synchronized to this clock once in a while. We don't care, because we don't need to see the time in 9,192,631,770 fractions per second (that's how fast caesium oscillates, and where the clock gets its accuracy from). We are totally fine with an approximation. Who of us cares whether it is 9:00:00.0 am or 9:00:00.1? And who cares that there is no oscilloscope in the world able to show the oscillations in realtime?
What we see is an approximation of what there is.
This brings me back to VST plugins and realtime graphics of audio streams. At 44.1 kHz, assigning each sample to a pixel, you need to draw one pixel every 1/44100 of a second, or 44100 pixels per second. But that is not all. You also transfer a signal from one dimension to two dimensions, and so you need to draw a frame with image data. A pixel always has to be drawn, whether it represents data or nothingness. Let's say you want the oscillation to be seen as a peak of 60 pixels up or down, while only 1 pixel wide. You still need to draw 121 pixels per sample, 44100 times per second, resulting in 5.3 million pixels per second. To at least have the impression of motion you need at least 18 fps, but in Flowstone you will use more like 25 fps. That's 213,444 pixels per 0.04 seconds, to see one sample at a time in motion on a frame that's 121x1, which is very small, to say the least. Now, you could omit all the samples that will never be seen anyway. One sample = 121 pixels, and that x 25 (if you are able to draw 25 times per second while also calculating audio stream data) results in 3025 pixels per second. This is now doable. But hey, it only shows 25 of 44100 samples per second. And remember, it only shows 60 levels of amplitude in both directions, not even close to the billions of steps a 32-bit value generates. If you wanted to see the actual data, you would need a monitor with a height of approx. 4 billion pixels, just to represent one sample's amplitude accurately. Otherwise you just see an approximation.
And nobody cares, because there is no application for seeing 4 billion pixels 44100 times per second. The human eye couldn't even take in that much of an image (at a standard 96 ppi it would be a monitor roughly 1,100 km tall), let alone animated. And I won't even talk about technical constraints, like needing 32 gigabytes of RAM per frame for one sample, 25 times per second, with only double-buffering.
It's the equivalent of the caesium clock to pocket watch example, I gave earlier. Nobody cares. Because it has no application.
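The pixel budget above can be double-checked with a few lines (a plain back-of-envelope calculation, nothing Flowstone-specific; the 121-pixel column is tulamide's 60-up/60-down-plus-centre assumption):

```python
SR = 44_100      # sample rate in Hz
FPS = 25         # redraw rate assumed for Flowstone
COLUMN = 121     # 60 px up + 60 px down + centre line, 1 px wide

naive_px_per_sec = SR * COLUMN        # drawing one column per sample
px_per_frame = naive_px_per_sec // FPS
shown_px_per_sec = FPS * COLUMN       # only the 25 samples you can show

print(naive_px_per_sec)   # 5336100  (~5.3 million, as stated)
print(px_per_frame)       # 213444   per 1/25 s frame
print(shown_px_per_sec)   # 3025     the "doable" figure
```

The numbers match the post: ~5.3 million pixels per second naively, 213,444 per frame at 25 fps, and 3025 per second once you only draw what can actually be seen.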
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: Audio or Mono signal as trigger for a graph module
If I were to think of a possible use ...
Maybe at the Large Hadron Collider
Tracing the movement of sub-atomic particles. Maybe.
And .... as T points out ...
'Does anybody really know what time it is' .... 'does anybody really care' [CTA]
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm
Re: Audio or Mono signal as trigger for a graph module
tulamide wrote:When we talk about plugins and realtime graphical representation of actual data generated by the plugin, then I don't expect an example on x-ray or analog oscilloscopes. But since it happened, allow me an off-topic example as well...
Finally! A decent ticker-prim...
But even that one might be non-realtime, due to the observer effect... gravitational waves... or disturbances in decay rates.
X-rays? lol
My beginner synth at KVR: https://www.kvraudio.com/product/saguaro-one-by-saguaro-one
- R&R
- Posts: 468
- Joined: Fri Jul 15, 2022 2:28 pm
Re: Audio or Mono signal as trigger for a graph module
There might be a misunderstanding of my post, or I'm interpreting something wrong. Anyway, to clear things up:
I answered seriously to another serious answer. I don't think Spogg's examples are funny. On the contrary, I think of them (and I thought I explained that with my own example) as being way beyond the field of VST plugins. If anyone thought I was mocking either the topic or the person, that's wrong!
Spogg and I are good friends, and we commonly have such discussions where we bring up examples in order to dissect them and come to a conclusion. We have found this very fruitful most of the time. Even when working together on a project we take a similar approach, because only if we both exclude all possible errors, wrong concepts or impossible-to-pull-off ideas beforehand can we get to a good product as a final result.
"There lies the dog buried" (German saying translated literally)
- tulamide
- Posts: 2714
- Joined: Sat Jun 21, 2014 2:48 pm
- Location: Germany
Re: Audio or Mono signal as trigger for a graph module
tulamide wrote:There might be a misunderstanding of my post, or I interpret something wrong.
Maybe...
At least I enjoy your and Spogg's posts/answers/questions... And I suspect others do too, so I wouldn't worry.
A lot of interesting reading for folks like me who haven't got that much insight...
The original poster might have gotten what he/she asked for, which I'm guessing was "on demand" triggering of stream capture, rather than rendering it on screen in realtime or displaying exact samples, in any sense of the word.
Might be my fault for creating noise in the discussion by rambling about oscilloscope triggering (in other words, level triggering) without having any deep insight... lol, maybe I post too much BS... sorry about that...
My beginner synth at KVR: https://www.kvraudio.com/product/saguaro-one-by-saguaro-one
- R&R
- Posts: 468
- Joined: Fri Jul 15, 2022 2:28 pm
Re: Audio or Mono signal as trigger for a graph module
tulamide wrote:There might be a misunderstanding of my post, or I interpret something wrong. Anyway, to clear things up...
I hope that's not referring to my post!
I had no sense of 'disparagement' of anyone or any topic .... at all.
For myself, even on complex topics outside of my full understanding, I enjoy (and try to understand/learn from) those who have other knowledge ... which is a reason I visit this forum.
Second .... your assessment is in line with my understanding.
Thanks for your posts!
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm