Research ... capture EQ curve
Re: Research ... capture EQ curve
Thanks for the feedback, guys. Here is a variant with offline processing. Processing is done in blocks and the results are accumulated, so the final readings are RMS values. Unfortunately it takes some time to process an entire track: on my computer it takes about as long as streaming it. Maybe there is a smarter way to do it?
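The attached .fsm does the real work; for readers following along outside FlowStone, here is a minimal plain-Ruby sketch of the same block-accumulation idea. The names are hypothetical, and the DFT is a naive O(n²) loop purely for self-containedness; a real implementation would use an FFT.

```ruby
# Naive magnitude spectrum of one block (O(n^2) DFT, for illustration only;
# a real implementation would use an FFT).
def dft_mags(block)
  n = block.size
  (0...n / 2).map do |k|
    re = im = 0.0
    block.each_with_index do |x, t|
      ang = -2.0 * Math::PI * k * t / n
      re += x * Math.cos(ang)
      im += x * Math.sin(ang)
    end
    Math.sqrt(re * re + im * im) / n
  end
end

# Process the track in blocks, accumulate the squared magnitude per bin,
# and take the square root at the end -- the readings are then RMS values.
def rms_spectrum(samples, block_size = 256)
  accum = Array.new(block_size / 2, 0.0)
  count = 0
  samples.each_slice(block_size) do |block|
    next if block.size < block_size   # skip the final partial block
    dft_mags(block).each_with_index { |m, k| accum[k] += m * m }
    count += 1
  end
  accum.map { |s| Math.sqrt(s / count) }
end
```

Accumulating squared magnitudes (rather than magnitudes) is what makes the final reading a true RMS across blocks.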
Oh, and I added a color input to the Ruby display code.
- Attachments
- SpectralEnvelope2.fsm (94.95 KiB) Downloaded 1223 times
martinvicanek - Posts: 1328
- Joined: Sat Jun 22, 2013 8:28 pm
Re: Research ... capture EQ curve
I will simply offer, for honesty's sake, that my understanding of "gain staging" is that it's an outdated concept in the realm of 32-bit float audio. Many plugins don't care what input level you feed them, and the ones that do, if they're worth anything, almost always have an input level knob. If not, it's easy to drop a gain plugin in the chain. In these cases it's almost always (I might even say always) more useful to just set the input/output level manually anyway. It's just too easy a fix to be a problem.
That being said, I'm not trying to discourage you from what you're doing, especially if you have such a specific use case, which honestly I don't even understand (really I am very lost there haha).
- Perfect Human Interface
- Posts: 643
- Joined: Sun Mar 10, 2013 7:32 pm
Re: Research ... capture EQ curve
Thanks for the fix, Martin. That's really awesome.
There are a few more extras that could be added to make it more designable:
1) Ability to paint the bars (instead of making them transparent with colorful margins).
2) Ability to decrease the number of bars, let's say from 40 to 20 (because too many bars eat too much CPU! I have no idea if it's possible, so sorry for this request if it isn't).
3) Off topic: that great little wav player badly needs play/pause/stop buttons!
Thanks a lot for your efforts, man.
Here is an example where I use it: my new project streams both the audio interface's input (microphones or guitar) and a wav playback. I've added my own EQ with sidechaining that "feels" the input stream and reduces the mid gain at a given threshold. Martin's scope allows me to see which range of mid frequencies is being reduced in the playback stream when the mic is in use. That's a really great visual feature.
kortezzzz - Posts: 763
- Joined: Tue Mar 19, 2013 4:21 pm
Re: Research ... capture EQ curve
mmm ...
Took about 9 seconds to scan a 3-minute song and post the results on my i7-5820K.
Will be testing more today!
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm
Re: Research ... capture EQ curve
I need a new PC.
martinvicanek - Posts: 1328
- Joined: Sat Jun 22, 2013 8:28 pm
Re: Research ... capture EQ curve
A power surge helped me make that decision.
Looking at the Kaiser module, what does the 'Kaiser window for a=3.5' mean?
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm
Re: Research ... capture EQ curve
When analyzing the frequencies in a block of finite duration, a window function is used to suppress spectral leakage. The Kaiser module is my optimized implementation of a Kaiser window. The parameter a is the window's shape parameter: larger values suppress the sidelobes (leakage) more strongly, at the cost of a wider main lobe, i.e. coarser frequency resolution.
Reminds me that I should use overlapping windows to cover all of the data.
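For reference, a Kaiser window can be sketched in a few lines of Ruby. This is not Martin's optimized module, just the textbook formula under the common convention beta = pi * a, with the modified Bessel function I0 computed by series expansion:

```ruby
# Series expansion of the modified Bessel function I0(x).
def bessel_i0(x)
  sum = 1.0
  term = 1.0
  k = 1
  while term > 1e-12 * sum
    term *= (x / (2.0 * k))**2
    sum += term
    k += 1
  end
  sum
end

# Kaiser window: w[n] = I0(beta * sqrt(1 - r^2)) / I0(beta),
# where r maps n onto [-1, 1] and beta = pi * a.
def kaiser_window(size, a = 3.5)
  beta = Math::PI * a
  denom = bessel_i0(beta)
  (0...size).map do |n|
    r = 2.0 * n / (size - 1) - 1.0
    bessel_i0(beta * Math.sqrt(1.0 - r * r)) / denom
  end
end
```

Multiplying each analysis block by this window before the transform is what suppresses the leakage mentioned above.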
martinvicanek - Posts: 1328
- Joined: Sat Jun 22, 2013 8:28 pm
Re: Research ... capture EQ curve
early in the quest ...
Trying to get to the basics to better focus on what I need to learn ...
Am I correct in looking at FFT as the principal idea to capture a snapshot of the source's spectral envelope?
I'm thinking along the lines of WAVES' Q-Clone, which captures EQ curves.
If this is correct [please tell me], then I need to utilize this captured EQ curve so that a test signal can be shaped by it. Is this also an FFT function?
I just need to get some dialog on this to get the concepts together, then explore a FS solution.
Thanks for insights.
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm
Re: Research ... capture EQ curve
You could use an FFT, but it has constant resolution in terms of Hz. What you want is proportional resolution (constant Q).
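One common way to get a proportional view from a constant-resolution FFT is to average the bins into logarithmically spaced bands. A hypothetical Ruby helper (the band count and frequency range are illustrative, not anything from the attached schematic):

```ruby
# Logarithmically spaced band edges from f_lo to f_hi.
def band_edges(f_lo, f_hi, n_bands)
  ratio = (f_hi / f_lo)**(1.0 / n_bands)
  (0..n_bands).map { |i| f_lo * ratio**i }
end

# Collapse constant-resolution FFT bin magnitudes into constant-Q bands
# by taking the RMS of the bins falling inside each band.
def constant_q_bands(bin_mags, sample_rate, f_lo = 20.0, f_hi = 20_000.0, n_bands = 20)
  n_bins = bin_mags.size
  hz_per_bin = (sample_rate / 2.0) / n_bins
  edges = band_edges(f_lo, f_hi, n_bands)
  (0...n_bands).map do |b|
    k0 = (edges[b] / hz_per_bin).floor
    k1 = [(edges[b + 1] / hz_per_bin).ceil, n_bins].min
    k1 = k0 + 1 if k1 <= k0               # ensure at least one bin per band
    slice = bin_mags[k0...k1]
    Math.sqrt(slice.sum { |m| m * m } / slice.size)
  end
end
```

Low bands end up covering one bin each while high bands average many, which is exactly the constant-Q trade-off being described.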
martinvicanek - Posts: 1328
- Joined: Sat Jun 22, 2013 8:28 pm
Re: Research ... capture EQ curve
Even though I'm familiar with the terms ... Hz, proportional, Q ... they're built into the language of audio engineers ... it's still difficult to know where to begin, or what technique to use.
The practical idea of the project seems simple [to a point]: feed an audio track through a 'process' that analyzes the spectral content [freq/level], then use that analysis to model a static EQ that replicates the source curve.
The accuracy of this dual process? Detailed enough to provide a good representation ... at this stage, I can't even determine that. I'm NOT looking to replicate the ultra detail, but it is important that the tolerance fits closely. Keep in mind that we'd only have a single snapshot of the entire audio file's length.
The snapshot is part of the goal. The 'totality' of the source audio's spectrum is contained as a single curve ... this 'curve' becomes the FILTER.
I may have confused this entire issue mentioning terms like FFT, or convolution. I'm trying to figure out WHAT technique [and its name], so that I can even know where to begin to look.
As an audio engineer ... a series of band-limited FILTERs or EQs that could adjust dynamically to a source [within its band of awareness], and determine its +/- gain to a reference point, could then be locked [hold those settings], and be used to simulate the original audio source.
Thinking aloud ... like taking an analyzer split into [maybe 10] bands, tracking the audio input signal and HOLDing the levels. Once analysis is over, poll the held gain levels. We'd already know the Q [bandwidth], and transfer those values to a series of [10] static EQs.
Again ... just tossing ideas. I do know the issues that this could cause ... band overlap, phase, etc ... I just don't know if that would be detrimental to the reason/need for doing this.
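The capture-and-hold idea above can be sketched as a small Ruby class. The names and the reference level are hypothetical; the per-band levels would come from whatever band-split analyzer feeds it:

```ruby
# Track the peak level per band over the whole file, then report each
# band's gain relative to a reference level -- those gains are what
# would be transferred to a bank of static EQ bands.
class BandCapture
  def initialize(n_bands, reference_db = -18.0)
    @held_db = Array.new(n_bands, -Float::INFINITY)
    @reference_db = reference_db
  end

  # band_levels: linear per-band levels for one analysis frame
  def feed(band_levels)
    band_levels.each_with_index do |level, b|
      db = 20.0 * Math.log10([level, 1e-12].max)
      @held_db[b] = db if db > @held_db[b]   # HOLD the loudest reading
    end
  end

  # Gains (in dB) relative to the reference, one per band
  def eq_gains
    @held_db.map { |db| db - @reference_db }
  end
end
```

Peak hold is used here for simplicity; the same structure works with an RMS accumulator per band if average level is the better fit.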
1. We are trying to eliminate the need to play a source track in full to determine its overall level [VU].
2. We also need to determine the LEVEL after passing the source through another process [the EQ we'd use to intentionally alter the sound]. The result could be higher or lower levels [measured by VU].
3. Multiply this several times ... or many more. Every intentional process [EQ] will affect the gain output, and each EQ setting change will impact every successive process in the chain.
Efficiency in maintaining UNITY GAIN STAGING through this chain is the target.
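The bookkeeping for unity gain staging is simple once each stage's level change is known in dB, since dB gains through a chain just add. A tiny illustrative sketch (hypothetical helper names):

```ruby
# Given the level change (in dB) each stage in the chain introduces,
# the make-up gain that restores unity is just the negated sum.
def makeup_gain_db(stage_gains_db)
  -stage_gains_db.sum
end

def db_to_linear(db)
  10.0**(db / 20.0)
end

# Example: one EQ boosts +3 dB, another cuts -1 dB, a third boosts +2.5 dB.
chain = [3.0, -1.0, 2.5]
makeup = makeup_gain_db(chain)   # -4.5 dB brings the chain back to unity
```

The hard part, as the posts above point out, is measuring each stage's level change without streaming the whole track; the arithmetic afterwards is trivial.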
Playing the original audio source [even finding JUST the LOUDEST section] is what we do ... but I'm looking for a faster solution. A global snapshot of the source track is a piece of this concept.
I hope I'm clarifying things so they make better sense. I'm just not sure what the proper term is to describe it from a programming perspective [FlowStone].
Thanks, guys, for your patience and understanding. I don't mean to sound frustrated or anything, but it's hard when I don't even know what direction to start in.
Again ... thanks.
- RJHollins
- Posts: 1571
- Joined: Thu Mar 08, 2012 7:58 pm