Q: what is the proper way to read individual samples from a buffer as a constant signal (and smoothly so)? #126

balintlaczko opened this issue Jan 31, 2025 · 3 comments



balintlaczko commented Jan 31, 2025

Sorry if this is a noob question! I have a situation where I set up independent Patches that initialize their inputs (or outputs) as one-sample buffers, so that I can dynamically attach or detach "mapper" patches which get references to one patch's output buffer and another patch's input buffer. The idea is that if I take away the mapper patch, the other two patches (the one with the output buffer and the one with the input buffer) are untouched, so to speak. Consider the following:

Here is a feature extractor that writes its output to a buffer:

import numpy as np
import signalflow as sf

class Feature:
    def __init__(self):
        # One-channel, one-sample buffer holding the latest feature value
        self.features = sf.Buffer(1, 1)

    def __call__(self, mat):
        self.features.data[:, :] = np.mean(mat)

Here is the mapper that reads an input buffer and writes 2x the value to an output buffer:

class Mapper:
    def __init__(self, buf_in, buf_out):
        self.buf_in = buf_in
        self.buf_out = buf_out

    def __call__(self):
        self.buf_out.data[:, :] = self.buf_in.data * 2

And here is a patch that should read a parameter from a buffer and synthesize it:

class Theremin(sf.Patch):
    def __init__(self, frequency=440, amplitude=0.5):
        super().__init__()

        buf_frequency = sf.Buffer(1, 1)
        buf_amplitude = sf.Buffer(1, 1)
        buf_panning = sf.Buffer(1, 1)
        
        buf_frequency.data[0][0] = frequency
        buf_amplitude.data[0][0] = amplitude
        buf_panning.data[0][0] = 0
        
        frequency_value = sf.BufferPlayer(buf_frequency, loop=True)
        amplitude_value = sf.BufferPlayer(buf_amplitude, loop=True)
        panning_value = sf.BufferPlayer(buf_panning, loop=True)
        
        freq_smooth = sf.Smooth(frequency_value, 0.999)
        amplitude_smooth = sf.Smooth(amplitude_value, 0.999)
        panning_smooth = sf.Smooth(panning_value, 0.999)
        
        sine = sf.SineOscillator(freq_smooth)
        output = sf.StereoPanner(sine * amplitude_smooth, pan=panning_smooth)
        
        self.set_output(output)

This is a simplified version of what I'm using now. My question is about the last patch: how can I read a particular sample from a buffer as a signal? I tried a few things, but the BufferPlayer solution seems to be the only one that works, and I suspect it only works as intended as long as the buffer is one sample long. I have no idea how I could set one parameter to, say, the first sample in the buffer and another parameter to the second sample. I tried setting the Smooth input from Buffer.data directly, but that did not update as the buffer data changed.

Another smaller question: I don't know how to smooth a signal not by a "mix value" (like Smooth) but by a time value (like the rampsmooth~ object in Max, if that means anything). I ended up writing a function that converts a number of samples to a mix value, and that's what I use with Smooth. But if there is a way to have something like:

some_value = sf.Constant(1.23)
some_value_smoothed = sf.RampSmooth(some_value, 480, 480) # input, samples to ramp up, samples to ramp down

...then let me know! Also let me know if any of this is a hopeless, caveman-style approach, while there are other super-neat and elegant ways. :)

ideoforms (Owner) commented

In the first example, I would say the simplest approach would be to move away from using Buffers and instead simply add three inputs to your patch (self.add_input("frequency"), etc.), and then, whenever you update your feature values, have some process call patch.set_input("frequency", x).

If you really do need to use Buffers, then you could do this in a somewhat inelegant way with this:

offset = 7  # offset in frames to read
player = BufferPlayer(buffer,
                      start_time=offset / graph.sample_rate,
                      end_time=(offset + 1) / graph.sample_rate,
                      loop=True)

This plays a single-sample loop in the buffer. It's pretty horrible, and maybe calls out for a BufferOffset node or similar, which can take an offset in samples...
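The mechanism behind that trick can be emulated in plain Python (no signalflow needed; the helper name is hypothetical): looping over a single frame yields a constant stream equal to that sample, and because the loop re-reads the buffer on every pass, external writes to the buffer show up in the "signal".

```python
# Pure-Python emulation of a one-sample BufferPlayer loop (illustrative
# only, not signalflow code): repeatedly read buffer[offset].
def constant_from_buffer(buffer, offset, n_samples):
    """Yield n_samples values by looping over a single frame of `buffer`."""
    return [buffer[offset] for _ in range(n_samples)]

buffer = [0.0, 440.0, 0.5]   # pretend frame 1 holds a frequency parameter
out = constant_from_buffer(buffer, 1, 4)
# out == [440.0, 440.0, 440.0, 440.0]

buffer[1] = 880.0            # an external process updates the buffer...
out2 = constant_from_buffer(buffer, 1, 4)
# ...and the constant stream follows on the next read
```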

Re smoothing by time value: It's a great question. There is actually a helper function for this, but it's not yet documented or properly exposed. It is this:

# args: decay time (seconds), sample rate, decay level
calculate_decay_coefficient(0.1, graph.sample_rate, 0.001)

where the first arg is how long you want the smoothing to take, and the third arg is how close the convergence should get as a proportion of the target value (where 0.001 = -60dB). The reason that the third value is needed is that Smooth is for logarithmic smoothing, not linear, so it converges to the value rather than reaching it directly.
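The relationship described here can be checked in plain Python. The sketch below re-derives the formula from the description (it is not the library's actual source): a one-pole smoother y[n] = c·y[n-1] + (1-c)·x[n] has residual error c^N after N samples, so choosing c = level^(1/(time·sample_rate)) makes the output converge to within `level` of the target after `time` seconds.

```python
import math

def decay_coefficient(decay_time, sample_rate, decay_level):
    """Coefficient for a one-pole smoother that converges to within
    `decay_level` of its target after `decay_time` seconds.
    (Re-derivation of what calculate_decay_coefficient is described
    to compute; hypothetical, not the library function itself.)"""
    n_samples = decay_time * sample_rate
    return decay_level ** (1.0 / n_samples)

sample_rate = 48000
c = decay_coefficient(0.1, sample_rate, 0.001)  # 100 ms to within -60 dB

# Simulate the smoother stepping from 0 towards a target of 1:
y = 0.0
for _ in range(int(0.1 * sample_rate)):
    y = c * y + (1.0 - c) * 1.0

# After 0.1 s, the residual error abs(1.0 - y) is (almost exactly) 0.001
```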

This should help for the moment, but a linear ramp node is a great idea, I'd just have to think a bit about what happens when the input value is modulated.

balintlaczko (Author) commented

Thanks, @ideoforms, this is really useful! I did start out using the normal inputs, but I moved away from them; I can't remember why now. I'll give it another shot, and try calculate_decay_coefficient too! For reference, my helper is currently this:

import numpy as np

def mix2samps(mixval, eps=1e-6):
    """Convert a mix value (used in sf.Smooth) to a number of samples."""
    return np.ceil(np.log(eps) / np.log(mixval))

def samps2mix(samps, eps=1e-6):
    """Convert a number of samples to a mix value (used in sf.Smooth)."""
    return eps ** (1 / samps)

It is a co-production with ChatGPT, and I am not sure it is mathematically accurate, but in my tests it seemed good enough. (I guess my eps is your third arg?) I do wish it ramped more linearly, since now I have to use a really long ramp time (like 24000 samples) to avoid hearing artifacts when there is a sudden modulation.
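For what it's worth, the two helpers above are closed-form inverses of each other (up to the ceil), which can be sanity-checked in plain Python. This sketch restates them with the math module instead of numpy, purely so it runs standalone:

```python
import math

def mix2samps(mixval, eps=1e-6):
    """Samples needed for a one-pole smoother with coefficient `mixval`
    to converge to within `eps` of its target."""
    return math.ceil(math.log(eps) / math.log(mixval))

def samps2mix(samps, eps=1e-6):
    """Coefficient that converges to within `eps` after `samps` samples."""
    return eps ** (1.0 / samps)

# Round trip: converting 0.999 to samples and back recovers ~0.999
# (inexact only because mix2samps rounds up to a whole sample).
n = mix2samps(0.999)       # roughly 13809 samples at eps=1e-6
recovered = samps2mix(n)   # roughly 0.999
```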

balintlaczko commented Feb 6, 2025

Hey there, I finally managed to implement a linear smoother class, and tested it with different signal vector sizes as well. I am not entirely sure it is the most efficient or elegant implementation, but I am open to suggestions! Here it is:

class LinearSmooth(sf.Patch):
    def __init__(self, input_sig, smooth_time=0.1):
        super().__init__()
        graph = sf.AudioGraph.get_shared_graph()
        # Ramp length, expressed in output blocks (at least one block)
        samps = graph.sample_rate * smooth_time
        steps = samps / graph.output_buffer_size
        steps = sf.If(steps < 1, 1, steps)

        # One-block feedback loops: the current output value, and the
        # previous input (used to detect changes)
        current_value_buf = sf.Buffer(1, graph.output_buffer_size)
        current_value = sf.FeedbackBufferReader(current_value_buf)

        history_buf = sf.Buffer(1, graph.output_buffer_size)
        history = sf.FeedbackBufferReader(history_buf)

        # When the input changes, latch the new target and the distance to it
        change = input_sig != history
        target = sf.SampleAndHold(input_sig, change)
        diff = sf.SampleAndHold(target - current_value, change)

        increment = diff / steps

        # Step towards the target; snap to it once within one increment
        out = sf.If(sf.Abs(target - current_value) < sf.Abs(increment),
                    target,
                    current_value + increment)
        graph.add_node(sf.HistoryBufferWriter(current_value_buf, out))
        graph.add_node(sf.HistoryBufferWriter(history_buf, input_sig))
        self.set_output(out)

Update: this is still not ideal for a continuously changing input_sig, say one driven by mousing or other non-signalflow data streams. I think this is because the feedback has to be at least one block long, which means the change variable only updates once per block at best. This gave the patch a noticeable lag when following a continuously changing input (from mousing over image data, in my case). It is still good for smoothing inputs that do not change very often (say, a user moving a slider in a UI), but it is not ideal for faster-moving inputs unless you really reduce your graph.output_buffer_size.
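For reference, the per-sample behaviour a rampsmooth~-style node would have (and which the block-rate feedback above can only approximate) can be sketched in plain Python. This is an illustrative emulation, not signalflow code:

```python
class LinearRamp:
    """Per-sample linear smoother: whenever the input changes, ramp to the
    new value over `ramp_samples` samples with a constant increment.
    (Hypothetical rampsmooth~-style emulation, not a signalflow node.)"""

    def __init__(self, ramp_samples):
        self.ramp_samples = max(1, ramp_samples)
        self.current = 0.0
        self.target = 0.0
        self.increment = 0.0

    def process(self, x):
        if x != self.target:
            # New target: recompute the per-sample slope
            self.target = x
            self.increment = (x - self.current) / self.ramp_samples
        if abs(self.target - self.current) <= abs(self.increment):
            self.current = self.target   # snap when within one step
        else:
            self.current += self.increment
        return self.current

ramp = LinearRamp(ramp_samples=4)
out = [ramp.process(1.0) for _ in range(6)]
# out == [0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```

Because the slope is recomputed at the sample where the input changes, a continuously moving input retargets immediately instead of waiting for the next block boundary.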
