Programs with Baked Events vs Performances #16
I agree entirely. I also think that if we want to move the
@micahswitzer That's exactly what I was thinking; we could possibly treat the finger position as a kind of second capo.
I also initially supported not having fretting in the program to be more true to the MMX. However, I think the Alice-Bob protocol analogy is a little misleading here; the design applies just as well when there is only one user. Anyway, here is what I see as a more accurate description of the workflow we want to support, which, at least at first, is the main workflow overall:
Since phase 1 is largely about composing, it is very convenient to be able to hear the actual bassline along with the rest of the song. Having the user change capos on the fly (or dial the notes in some other way) for each note of the bassline, when all they want is to hear the composition in real time, is not reasonable. Fretting is strictly tied to notes on the programming wheel, and we don't want the program to include other "performance hints"; those belong in the performance. Here, baking program events into the performance is done to simplify playback (and also editing) of a performance. If the events are not baked, the original program always has to be read along with the performance, and deducing what a bass note does requires looking into both event timelines.
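To make the "two timelines" point concrete, here is a minimal sketch (hypothetical names, not the project's schema) of what resolving a single bass note would look like if fretting lived only in the performance: the program supplies the drop, and the performance has to be scanned for the fret/capo state at that tick.

```typescript
// Illustrative only: names and shapes are invented for this sketch.
interface ProgramBassDrop { tick: number }                      // drop on the programming wheel
interface PerformanceFretChange { tick: number; fret: number }  // fret/capo change in the performance

// Deducing what the bass note sounds like requires reading BOTH timelines:
// the program for the drop, the performance for the fret state at that tick.
function fretSoundingAt(
  drop: ProgramBassDrop,
  fretChanges: PerformanceFretChange[]  // assumed sorted by tick
): number {
  let fret = 0;                         // open string until a change has occurred
  for (const change of fretChanges) {
    if (change.tick <= drop.tick) fret = change.fret;
    else break;
  }
  return fret;
}
```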
The idea that editing a performance is only a future feature is, I think, a poor assumption; there is no reason not to develop the initial composing application to support composing a program and a performance simultaneously. I alluded to this in the issue (I'll see if I can find time to make a prototype in the next couple of days). So the idea that a more accurate schema would put obstacles in the way of a user composing a bassline no longer applies. Physically, fretting is not strictly tied to notes on the programming wheel, so that should not be a requirement of the schema when an easy alternative is available. Regarding the timelines: a performance really can't be considered without a program, so I don't quite follow this point. The reason I chose this use case is that it best supported the idea of baking; if someone is composing everything locally, then there is no reason to have the frets indicated on the drops, as the editor will have full access to the performance event list.
Once again, @Tiedye made my point exactly. Here's me saying pretty much the same thing:
I think that this is not an issue at all. At some point we need to decide what we care about more: the ease of software development, or the ease of using the actual software. Sure, baking in every program wheel event might make it easier to write the playback code, but what it sacrifices is the ability to modify the program separately from the performance. Now, like you already said, editing performances is not a part of phase 1, and I'm not saying it should be. I do, however, believe that if we're careful with how we write the software and design the schema, we should be able to build a system that can support new features (such as performance editing) without requiring users to "redo" their performances.
The reason we earlier decided to sacrifice the ability to edit programs after recording performances is the multitude of issues it brings up. If that sacrifice is assumed, baking does make more sense, but I agree the benefit isn't enough to warrant the sacrifice if combined program-performance editing turns out to be reasonable. As an example, my "curveball": this has to do with how tempo is handled. Either:
It would be interesting to see how you solve these sorts of issues. If you think you can, leaving baking out of the equation might make more sense after all.
On the other hand, leaving baking in doesn't totally close the door on program editing. You can always replace the baked notes in the performance once you're done. Depending on how you implement the editor, those baked notes would practically already be visible in the editor.
Thanks for the great discussion, all; it's a very interesting read. I personally supported baking only insofar as it moved us closer to representing a full performance on the marble machine, whereas previously I felt the bass performance aspect of the machine was not really dealt with. I believe the overall goal of the project is to allow people to compose music that Martin could conceivably play live, and Martin would also have to be convinced of the quality of the composition. As an ex-musician and composer, I cannot emphasize enough the role of the bass in an overall composition.

I know we have now moved well beyond the discussion of whether to only program the wheel, and are starting to think seriously about the performance. I welcome this as an essential discussion! I am not a particularly big fan of the "baking" solution either, but I didn't see much alternative at the time, and I felt that energy on the development of the schema was quickly running out.

With all that said, I do also believe it would be beneficial for now to aim for a simplified version of the app, with just the programming aspect of the marble machine and none of the performance stuff (but keeping the door wide open for performance, of course). So am I right in thinking that the suggestion is to remove baking and extend a "performance events" class? Is someone able to make a pull request for this so we can see what it would look like?

Regarding timing: would you agree that the bass performance aspect and the idea of having the marble machine "stopped dead" are the most complicated features? Any ideas for how we can leave those features for last without totally caging ourselves out of them?
Hm, maybe I can add my two cents as well. My proposal would be to split the programming wheel and the performance, so at the UI level I would suggest two modes:
As @micahswitzer mentioned, we won't be able to update the program once the time-based events have been baked in as well. I personally would prefer a solution where you could, for example, download the existing marble machine performance and then add a fancy bass solo to it (but please don't ask me for the solo once the implementation is done ;D).

An additional benefit of storing events and the programming wheel separately is simple storage efficiency: if we split the programming wheel from other events, we only need to store the programming wheel data for one full rotation of the wheel. If we want to store the full performance, we need to store multiple rotations, depending on the length of the performance, since during each rotation we could have different channels muted, different notes played manually, and so on. A big advantage of the split approach is also that we can continue with the existing programming wheel implementation and "simply" extend it with the performance information.

Regarding the global timing mentioned by @iansedano: this makes sense during playback in performance mode, but I would prefer to calculate this information at runtime (e.g. while switching from mode 1 to mode 2) and not store it in the exported files. So we should either create different schemas for export/import and "internal" usage, or keep the global timing as an implementation detail. Different timing models for different purposes may seem weird, but they are needed to represent the correct physics of the represented element.

TL;DR:
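For illustration, here is a rough sketch of the split being described, assuming a TypeScript-style rendering of the schema (the field names are invented; only the program/performance split itself mirrors the proposal): the program holds a single wheel rotation, and playback maps absolute ticks back onto it.

```typescript
// Illustrative only: names and shapes are assumptions, not the actual schema.
interface Program {
  ticksPerRotation: number;                    // one full rotation of the programming wheel
  drops: { tick: number; channel: string }[];  // tick is relative to a single rotation
}

interface Performance {
  program: Program;
  events: { tick: number; type: string }[];    // absolute ticks, spanning many rotations
}

// A drop fires whenever the absolute tick lands on a programmed pin,
// i.e. when the tick modulo the rotation length matches a stored drop.
function dropsAt(absoluteTick: number, program: Program) {
  const wheelTick = absoluteTick % program.ticksPerRotation;
  return program.drops.filter(d => d.tick === wheelTick);
}
```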
I don't think saving performance events in "time" units while saving program events in "tick" units is the best idea. I really think everything should be stored in terms of ticks, since we shouldn't have to recalculate all events after a bpm change; we should just read through the saved events at different rates (with ticks as the only unit). This also works much better with Tone.

I also agree repeated bass fretting should be saved in the `DropEvent`. I think the line between what data should or shouldn't be stored in program drop events comes down to: "should the event be repeated for each program wheel cycle?" and "does the event pertain only to that specific drop (no lasting state change)?" This generalizes not only to repeated bass fretting but also, for example, to repeated open hi-hat strikes. These events could be edited with a long press or right click on the programming pin.

Handling the "complete stop" issue is, I think, the only reason we're considering a tick-and-time combo approach, but for that case I think it would make the most sense to just spawn a separate ticked clock. That way we can consistently say "slow down bpm to 0 by tick x; at tick x start the solo event stream", and at the end of the solo event stream essentially "return" or "pop" back to the previous stream.
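Here is a sketch of what that "separate ticked clock" / pop-back idea could look like (all names are invented for illustration and this is not tied to Tone's API): each stream keeps its own tick position, a solo stream is pushed when the wheel has been ramped to 0 bpm, and the previous stream resumes when the solo runs out.

```typescript
// Illustrative sketch of stacked, tick-driven event streams.
interface TickEvent { tick: number; fire: () => void }

class EventStream {
  private next = 0;   // index of the next unfired event
  private tick = 0;   // this stream's own clock position
  constructor(private events: TickEvent[]) {}  // events assumed sorted by tick

  step() {            // advance this stream's clock by one tick, firing due events
    while (this.next < this.events.length && this.events[this.next].tick <= this.tick) {
      this.events[this.next++].fire();
    }
    this.tick++;
  }

  get done() { return this.next >= this.events.length; }
}

class Player {
  private stack: EventStream[];
  constructor(main: EventStream) { this.stack = [main]; }

  // Called once the wheel has been slowed to 0 bpm: the solo gets its own clock.
  pushSolo(solo: EventStream) { this.stack.push(solo); }

  // Called once per tick of whichever clock is currently driving playback.
  step() {
    const current = this.stack[this.stack.length - 1];
    current.step();
    // When the solo stream is exhausted, "pop" back and resume the main stream
    // exactly where it stopped (its own tick counter was never advanced).
    if (current.done && this.stack.length > 1) this.stack.pop();
  }
}
```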
Since there hasn't really been any progress recently, I took some time to update the schema to use ticks and created a PR.
I stand by my previous statements regarding not including superfluous data in `DropEvent`s. The tick vs. time discussion is somewhat outside the scope of this issue, but in my opinion time is easier to implement for performance events, as using ticks for them creates special cases that need to be handled when the programming wheel stops.
Yes, you are right, the original discussion was about whether we should store the final performance with "merged" programming wheel and performance data. I would still separate them, so that we later have the possibility of changing the "programmed" events independently of the performance events. Having ticks for both will make it a lot easier to keep the two kinds of events (drop + performance) in sync and, as far as I understand, would also make the implementation with our audio framework a lot easier. Special handling of the programming wheel running vs. stopped is needed either way, so I would not see this as an issue of using the same vs. different timing measurement systems.
We've committed to supporting the following use case:
To make this workflow more user friendly, Bob should be able to indicate where he expects certain performance events to happen, like a hi-hat opening or closing, a certain fingering on the bass, etc.
Looking at the bass fingering case, currently this is supported by a `fret` parameter on the `BassDropEvent`. This allows a program to include melodic bass without requiring performance events inside the program. When a program with `BassDropEvent`s is used to create a new `Performance`, upon playback the playback program will "bake" these drop events into the list of `events` on the performance. These baked events will be marked as such with the `bakeType` property being set to `EventBakeType::AUTO`. These "baked" performance drop events can then be overridden by other `PerformanceDropEvent`s Alice creates.

Note that this process requires information that is not implicit in the layout of the schema; it requires logic that is external to it. Also, if other performance suggestions are to be included in a program and then "baked" into a performance on load, this would require special handling of each type of special data, which would lead to a not insignificant rise in complexity as the number of different types of performance "suggestions" increased.
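For reference, a minimal sketch of how these pieces might fit together, assuming a TypeScript-flavoured rendering of the schema (`EventBakeType::AUTO` is written as `EventBakeType.AUTO` here; the `MANUAL` member and the exact field shapes are assumptions, not part of the actual schema):

```typescript
// Illustrative sketch only; field shapes beyond fret/bakeType are assumed.
enum EventBakeType { AUTO, MANUAL }

interface BassDropEvent {
  tick: number;
  fret: number;                  // melodic bass encoded directly in the program
}

interface PerformanceDropEvent {
  tick: number;
  fret: number;
  bakeType: EventBakeType;       // AUTO = generated from the program's BassDropEvents
}

interface Performance {
  events: PerformanceDropEvent[];
}

// On playback, each BassDropEvent is "baked" into the performance unless a
// manually created PerformanceDropEvent at the same tick already overrides it.
function bake(drops: BassDropEvent[], performance: Performance): void {
  for (const drop of drops) {
    const overridden = performance.events.some(
      e => e.tick === drop.tick && e.bakeType !== EventBakeType.AUTO
    );
    if (!overridden) {
      performance.events.push({ tick: drop.tick, fret: drop.fret, bakeType: EventBakeType.AUTO });
    }
  }
}
```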
Alternatively, when Bob is creating a program to send to Alice, he can instead create a proper `Performance` that includes all of the state changes he is expecting to happen inside the performance's list of events. Then, when Alice receives the performance with the preexisting events, there is no need for a baking process to properly initialize the performance for her to begin working on it.

As far as I am aware, these are the reasons cited for the event baking approach:
I don't think making an intuitive performance editor is significantly more challenging than building a program editor; I'll look into creating a proof of concept to demonstrate this (changing the time of performance events based on changes in tick rate is easy)
Since the baking approach is roughly equivalent to having a pre-programmed performance, there is no advantage to it; there may actually be an advantage to the pre-programmed performance, as it would be possible to apply diffs to properly formatted lists of pre-programmed events (this is just an idea, not important to this proposal)
The reasons against the event baking approach:
`Event`s have to be effectively doubled inside the `DropEvent`s

Thus, removing the entire idea of baking from the schema will simplify the current schema (and greatly simplify the future schema and implementations), simplify the implementation of playback (and probably editing) tools for performances and programs, and allow the schema to continue to be as accurate a virtual representation of the MMX as possible.
Edit: this previously assumed that the baking happens on program load; it is actually planned to happen during playback, and more types of baked events are not currently expected.