I've been working on a visual set to accompany my live music setup in Ableton for quite some time now. However, I'm still unfamiliar with some fundamentals of video processing.
In audio, when you bounce or freeze a track that has many plugins on it, you free up a tremendous amount of CPU. Does the same apply to rendering video with multiple effects/layers?
In other words, is a rendered/recorded video file analogous to a bounced audio file (with respect to plugins/effects/layers)?
Specifically, I'm trying to use OBS to capture the output from Resolume so that I can put the captured result back into Resolume (after DXV encoding in Alley) and save on overall CPU/GPU load. However, when I do this, the result is usually pixelated and not smooth. My clips are all triggered via MIDI from Ableton, so it would be much easier and more organized if I were dealing with essentially one movie per song. If anyone has suggestions on how to set up Resolume and OBS to get output quality matching what I see inside Resolume, that would be extremely helpful. Simple alternatives would be awesome as well.
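For what it's worth, one alternative I've been considering instead of the OBS route: record/export the capture once at full quality, then transcode it with ffmpeg to HAP, which Resolume also decodes natively on the GPU like DXV. This is just a sketch, assuming an ffmpeg build compiled with the snappy library; the filenames are placeholders:

```shell
# Transcode a high-quality capture to HAP Q for GPU-accelerated
# playback in Resolume (alternative to encoding DXV in Alley).
# Requires ffmpeg built with snappy support.
ffmpeg -i capture.mov -c:v hap -format hap_q capture_hap.mov
```

No idea if this would actually look better than my current chain, so if anyone has compared HAP vs. DXV quality in Resolume, I'd love to hear about it.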
I'm running Arena 5 on a 2017 MacBook Pro with a 4 GB GPU, 16 GB of RAM, and a 1 TB SSD.
Let me know if there are any other specs or info I can provide that would be helpful.