Hi all
At the moment I'm at the very early design stage of a medium-complexity projection mapping onto a building. While experimenting with early photos from the site, I've identified a number of distinct areas of the building that each want their own content.
When it comes to the final project I'll program it sensibly, but for now, to make things easy to edit, I've used a few virtual screens to stack things (e.g. screen 1 is all the doors, screen 2 all the white rendered areas, 3 the gable end, 4 the brick areas, etc.). After the virtual screens there are 8 HD outputs (2x UHD into a Datapath) heading out to the projectors, using the virtual screens as inputs for the slices.
Predictably, this stacking up in the programming seems to have led to a drop in fps. I'm only using about 12 still images, arranged in groups, served from a Samsung 960 Pro (in an otherwise reasonable laptop: a 6th-gen i7, I think, with a GeForce 960M or 970M), so it's extremely unlikely that hard drive access speed is slowing things down. I'm intrigued, though: what would improve this situation most, a more modern GPU, a higher-clocked processor, or a processor with more threads?
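For what it's worth, here's the kind of quick sanity check I mean when ruling out the drive: a small Python sketch that times reading every image in a folder and reports throughput. The folder path and extensions are just placeholders for wherever your stills live; if the 12 files come back at hundreds of MB/s, storage clearly isn't the limit.

```python
import os
import time

def measure_read_speed(folder, extensions=(".png", ".jpg", ".tif")):
    """Read every matching file in `folder` once and return (MB read, MB/s)."""
    start = time.perf_counter()
    total_bytes = 0
    for name in os.listdir(folder):
        if name.lower().endswith(extensions):
            with open(os.path.join(folder, name), "rb") as f:
                total_bytes += len(f.read())
    elapsed = time.perf_counter() - start
    mb = total_bytes / 1e6
    return mb, (mb / elapsed) if elapsed > 0 else 0.0

# Hypothetical path to the still-image content folder:
# mb, rate = measure_read_speed("content/stills")
# print(f"Read {mb:.1f} MB at {rate:.0f} MB/s")
```

Note this measures a cold-ish sequential read, not what the software actually does per frame, but it's enough to confirm the SSD isn't the weak link.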
The production machine that will likely run this project will be new, and I'll program the show more efficiently than I have so far, but I'm interested in what limits the software has generally, and which piece of hardware is most likely to be the bottleneck.