how to add offset
Re: how to add offset
Use the "clear audio" primitive. That will completely reset the thing. Selectors usually only freeze the audio when you turn the chain off, and it continues from where it stopped when turned back on - I noticed that some time ago...
- KG_is_back
- Posts: 1196
- Joined: Tue Oct 22, 2013 5:43 pm
- Location: Slovakia
Re: how to add offset
Thanks, I haven't thought of that.
Okay, now it's time to cook something out of it...
Need to take a break? I have something right for you.
Feel free to donate. Thank you for your contribution.
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet
Re: how to add offset
I also made an interpolated wave reader that takes an int index input and a float frac. It should do basically the same thing as the one you say you're using, except this one will work with any wave size up to (2^31)-1 samples.
It also receives the wave size as an int, so I provided code that will convert the LSB and MSB to an int.
Note that I didn't have time to test it, so it might not work or might crash. It uses a pointer to the Mem.
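The two pieces described above - a linear-interpolation wave read from an int index plus a float frac, and recombining an LSB/MSB pair into one int - can be sketched in Python. This is just the idea, not the actual .fsm code, and the 16-bit split layout is my assumption:

```python
def wave_read(wave, index, frac):
    # Linearly interpolate between the sample at `index` and the next one;
    # `frac` in [0, 1) is the fractional part of the read position.
    n = len(wave)
    a = wave[index % n]
    b = wave[(index + 1) % n]
    return a + frac * (b - a)

def lsb_msb_to_int(lsb, msb):
    # Recombine a wave size delivered as two 16-bit halves into one 32-bit
    # int (hypothetical split; the attached schematic defines the real one).
    return ((int(msb) & 0xFFFF) << 16) | (int(lsb) & 0xFFFF)
```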
- Attachments
- custom wave read.fsm
- (1.61 KiB) Downloaded 852 times
- KG_is_back
- Posts: 1196
- Joined: Tue Oct 22, 2013 5:43 pm
- Location: Slovakia
Re: how to add offset
Thanks, maybe it will be useful in the future. Right now I have wired up the basic setup, and everything seems to work fine.
Of course there are more practical uses for that, but what would be the science of sound without a bit of fun?
- Attachments
- combinator.rar
- (1 MiB) Downloaded 890 times
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet
Re: how to add offset
That sounds so fckn cooooool
whaaaaannnnnnntttttttteeeeeeedddddd!!!!
tester wrote: but what would be the science of sound without a bit of fun?
frankly... it'd be like potentiometric titration... mixing two stupid liquids and measuring a stupid number... 60 times (2 minutes each), just to use 2 of those values to linearly interpolate the pH of God-knows-what.
All that time I was doing that stupid exercise in school, I was drawing a FS schematic on the back of my note paper, for a robot that'd do it automatically...
- KG_is_back
- Posts: 1196
- Joined: Tue Oct 22, 2013 5:43 pm
- Location: Slovakia
Re: how to add offset
KG_is_back wrote:That sounds so fckn cooooool
whaaaaannnnnnntttttttteeeeeeedddddd!!!!
It's simple mixing and time/pitch shifting. There are a lot of apps that allow you to overlap/mix loops that way freely. But "freely" means you need to know "how", and the "how" is the keyword; otherwise it will be yet another doubler with delays, pitch shifts and time stretches. Basically you need a lot of overlapping layers (loops) - up to 50 will be enough - spread through channels and randomly shifted in time, and kept in very specific resample ranges to keep the rhythm and colour (at least, follow harmonics or harmonious intervals/chords; at best, select octave series within a +/- 5 octave range). If you use 2 to 16 layers of a shifted voice (avoid short delays at this point; phrases should overlap) per resample range, then it will change into babbling. If you mix it across resample ranges, it will blend well and unfold dragons within.
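The layering recipe above could be parameterised roughly like this. A minimal Python sketch - the layer count, ratio set, and shift range are my own placeholder choices, not tester's actual values:

```python
import random

def make_layers(num_layers=48, max_shift_s=2.0):
    # Each layer gets a random time offset and a resample ratio locked to
    # musical intervals (octaves times simple harmonic ratios), so layers
    # keep the source's rhythm and colour instead of smearing into noise.
    octaves = [2.0 ** k for k in range(-5, 6)]           # +/- 5 octaves
    harmonics = [1.0, 3.0 / 2.0, 4.0 / 3.0, 5.0 / 4.0]   # harmonious intervals
    layers = []
    for i in range(num_layers):
        ratio = random.choice(octaves) * random.choice(harmonics)
        offset = random.uniform(0.0, max_shift_s)        # random time shift
        pan = (i % 4) / 3.0                              # spread across channels
        layers.append({"ratio": ratio, "offset": offset, "pan": pan})
    return layers
```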
To make a live effect, you would probably have to make some sort of phrase granulator with a soft crossfade. I can imagine that such an effect would record something in a normal speech routine (a few sentences, spoken or relatively flat singing to keep the pitch?), and after these few phrases (but it would keep recording as you sing/speak?) it would start to randomly mix the multilayer, multi-resample-range part into your normal speech/song. Maybe it would have to use pitch tracking to some degree, to correct the thing as you change your voice. Theoretically it should be easy to do (but a lot of work), and it should not use too much CPU.
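The "soft crossfade" between phrases would typically be the standard equal-power fade; a minimal sketch of the idea (not FlowStone code):

```python
import math

def crossfade(a, b, t):
    # Equal-power crossfade from sample `a` to sample `b`; t runs 0 -> 1.
    # cos/sin gains keep the summed power roughly constant mid-fade,
    # avoiding the dip a plain linear crossfade produces.
    g_out = math.cos(t * math.pi / 2.0)
    g_in = math.sin(t * math.pi / 2.0)
    return g_out * a + g_in * b
```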
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet
Re: how to add offset
Bravo, tester!
The "full on" sound at the start amazed me enough, but my jaw dropped at the end section where the original sample re-appears out of its fragments. The "musicality" of the sound is much better than most grain based effects that I've heard - most seem to degenerate into a mush when there are too many layers, or are just too random to retain a sense of time when you isolate them from other rhythmic cues.
As for a sound engine, I believe that much of the code within my 'Soopa Loopa' plugin could be adapted. It is already designed to take input from live audio, grab randomised snippets, create pseudo-random control data, and handle cross-fading. The whole engine is "grain" based to allow shift and stretch.
It's also all optimised mono4 assembly (i.e. four complete 'engines' per block), and a single 'clock' module can be shared between multiple engines - so CPU is pretty good.
The big difference is that I have only used it with much longer loops and larger 'slices' (e.g. note divisions), and a much more limited pitch range - I think probably as you describe "babbling" when used with vocal samples.
But those are mostly self-imposed limitations aimed solely at keeping the user interface simpler, and to fit my particular musical direction - in principle, the parameter ranges and synchronisation could be made much more flexible with very little modification.
I can't promise to do it quickly, as I'm committed to a couple of other projects, and haven't looked at that code for a very long time - but if you're interested, I can take a look to see if the engine can be extracted as a module. I wanted to revisit it anyway, as the new 'mem' features make some things possible that I always wanted to do.
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
- trogluddite
- Posts: 1730
- Joined: Fri Oct 22, 2010 12:46 am
- Location: Yorkshire, UK
Re: how to add offset
I can't say that I can spend too much time on this one, but we could work something out. The destination interface can be simplified after initial tests with advanced settings.
The (playback speed) ratio rescaler is already here, and it works across the octaves. It is important to know that independent pitch shifting and time stretching may not be good; it's a matter of the simplest regulation of playback speed between layers, but at specific ratios. Otherwise, heavy granulation, spectral spread and dissonance will kill the effect (I guess).
For the second part, I will describe what may be needed, and you tell me if this works that way in SL.
Spoken voice would have to be sliced at not-too-short periods of time (to keep syllables or, better, phrases) AND at plosive phones (like 'p', 't', etc.) or silence points. Out of such fragments, randomized continuous strings/voices can be made.
To add some variation, it would have to monitor, I don't know, a few seconds to one minute of the last spoken audio, to update the phrase database continuously as the speech changes; this way, phrases and pitches would locally match the mood, so to speak, while content would be randomized.
Having said this, there would have to be about 50 layers of randomized quasi-speech, I think. At least 3-4 harmonics/octaves covered, everything else spread in panorama between channels (48/4 = 12 voices per layer).
One last thing that comes to mind is to use beat detection to detect plosives in the live (source) voice and sync some layers (at least one per playback speed) with it from time to time through crossfade/mix (multipitched baba/dada/kikitake would start at the same time, so to speak).
Pitch tracking is optional, because voice has its own rhythm. It could be useful in two cases: to use an old database with current speech (match base), or to automatically remove some phrases from the database. But the more I think about it, it would probably stay optional. Auto pitch shifting and auto time stretching could be tested after the whole design works "as is", because there are a few things I'm not sure of.
I will need to do more listening tests on how it works and what the limitations are. As I said, it was just a byproduct of another experiment: I needed short audio to check whether my schematic works correctly and what variations can be produced by the output.
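The slicing rule described above (cut at quiet points, but keep each slice long enough to hold a syllable or phrase) might look like this in outline. The threshold and minimum-length values are placeholders, and a real version would also detect plosives rather than just low amplitude:

```python
def slice_at_silence(samples, threshold=0.02, min_len=4410):
    # Split a mono sample list at quiet points, but never emit a slice
    # shorter than `min_len` samples (0.1 s at 44.1 kHz) so that
    # syllables/phrases stay intact.
    slices, start = [], 0
    for i, s in enumerate(samples):
        if abs(s) < threshold and (i - start) >= min_len:
            slices.append(samples[start:i])
            start = i
    if start < len(samples):
        slices.append(samples[start:])  # keep the tail
    return slices
```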
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet
Re: how to add offset
A few other tests (text-to-speech and singing).
Voice density is increased gradually (1 pack, 2 packs, 3 packs).
Maybe the phrases to combine could be only a few seconds (syllables) long? Then it would have to refresh more often, and more of the cut-and-mix process would take place? Many unanswered questions to check...
Who hears a cello?
- Attachments
- lalala.rar
- (1.47 MiB) Downloaded 872 times
- tts.rar
- (751.08 KiB) Downloaded 832 times
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet
Re: how to add offset
KG, I think I may have one final question regarding this counter.
Could you add a node that would allow reversing the counting direction on the fly?
Or is this just a matter of simply pushing through (NS - x) on the int part and (-1 * y) on the fractional part?
This would be for backward playback.
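If I read the question right, the reverse direction can indeed be expressed as a remap of the forward position. A sketch of the idea, using the names from the post (NS = wave size, x/y = int/frac parts of the forward counter), with the result renormalised so the frac part stays in [0, 1):

```python
def reverse_position(ns, x, y):
    # Mirror a forward read position (int part x, frac part y in [0,1))
    # for backward playback: combined position (ns - x) + (-y), then
    # split back into int and frac parts with frac kept in [0, 1).
    pos = (ns - x) - y
    xi = int(pos)       # new int part
    frac = pos - xi     # new frac part, back in [0, 1)
    return xi, frac
```

Note the renormalisation step: simply negating y would leave the frac part outside the range an interpolating reader expects, so the int part has to absorb the borrow.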
- tester
- Posts: 1786
- Joined: Wed Jan 18, 2012 10:52 pm
- Location: Poland, internet