The main problem with mapping projects is no longer getting the warping and the positioning of the objects right. Apps like Arena, Madmapper, Heavy-M and by now dozens of others have made this obscenely easy. It's basically just a matter of digitally tracing the physical objects. It's precise work that requires a lot of attention, but it's not exactly rocket science.
The real challenge now has become how to make a visually interesting show with that information. Madmapper has the Lines object, but otherwise completely leaves the content part to other software. Heavy-M has some very cool generative options, but lacks extensive controls for regular clip playback.
At Resolume, we've had requests for 'slice effects' and slice blend modes, as well as requests for different options to control slices (DMX, MIDI, OSC) that would give you more performance control in your mapping. We understand how cool all of this would be, but we don't really like the direction it takes: a large part of performance control would move away from the thing that is designed exactly for performance control, the main interface.
Instead, the direction we'd like to take is towards making the input map central. An input map is an idealised representation of your mapping object or your stage. It lets you decide which part of your composition goes where in your output. In the case of the example video, it would be a set of larger and smaller regular, straight rectangular slices that roughly occupy the same relative place in the input map as they do on the physical object. Basically, it's a crude 2D model of your object. We added a new chapter in the manual on creating input maps. In the second video that Sadler links to, the 'animate comp' they talk about is exactly the same concept as our input map.
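To make the "crude 2D model" idea concrete, here is a minimal sketch of what an input map boils down to as data: a handful of named rectangles in normalized composition coordinates. The structure and names here are purely illustrative, not Resolume's actual file format.

```python
# Illustrative sketch only: an input map as a list of axis-aligned slices,
# each a rectangle in normalized composition coordinates (0..1).
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    x: float  # left edge, as a fraction of composition width
    y: float  # top edge, as a fraction of composition height
    w: float  # width, as a fraction of composition width
    h: float  # height, as a fraction of composition height

# A crude 2D model of a stage with three boxes: the slices roughly mirror
# where the physical objects sit relative to each other.
input_map = [
    Slice("box_left",   0.05, 0.40, 0.20, 0.30),
    Slice("box_center", 0.35, 0.20, 0.30, 0.50),
    Slice("box_right",  0.75, 0.45, 0.18, 0.25),
]

def to_pixels(s, comp_w, comp_h):
    """Convert a normalized slice to pixel coordinates for a given composition size."""
    return (round(s.x * comp_w), round(s.y * comp_h),
            round(s.w * comp_w), round(s.h * comp_h))

# The center box, expressed in pixels for a 1920x1080 composition:
print(to_pixels(input_map[1], 1920, 1080))
```

Because the slices are normalized, the same model works at any composition resolution; the output mapping then warps these idealised rectangles onto the real object.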
You would use the output mapping to align this idealised input map to the actual object and adjust for any distortions caused by the projector angle. You know, the boring bit that we all know how to do by now.
Back on the input side, you would do the creating and mixing of the performance in the place designed for it: the composition.
Generative outline animations could be created by dragging Slice Effects from the Sources tab to the clip grid. These would take their data from the input map and could do anything from simple edge outlines to more complex generative content a la Heavy-M. By making these clips instead of applying them directly to slices, you keep all the extra goodness that comes from being able to switch between them, map them to MIDI, sequence them with the autopilot, apply transitions and even additional regular effects.
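As a rough illustration of how such an effect could derive its geometry from input map data: given a slice rectangle, walking its perimeter gives you the path an edge-outline animation would chase along. This is a hypothetical sketch of that one calculation, not how any actual Slice Effect is implemented.

```python
# Hedged sketch: given a slice rectangle (x, y, w, h) in pixels, return the
# point that is t (0..1) of the way around its perimeter, starting from the
# top-left corner and going clockwise. An outline effect could animate a dot
# or dash along the edges by sweeping t over time.
def point_on_outline(x, y, w, h, t):
    perimeter = 2 * (w + h)
    d = (t % 1.0) * perimeter
    if d < w:                  # top edge, left to right
        return (x + d, y)
    d -= w
    if d < h:                  # right edge, top to bottom
        return (x + w, y + d)
    d -= h
    if d < w:                  # bottom edge, right to left
        return (x + w - d, y + h)
    d -= w
    return (x, y + h - d)      # left edge, bottom to top

# A dot a quarter of the way around a 100x50 slice at the origin:
print(point_on_outline(0, 0, 100, 50, 0.25))
```

The same perimeter data could just as easily feed dashes, scanning bars or more elaborate generative patterns per slice.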
You would position abstract stock content in exactly the same place in the composition as one or more of your slices occupy in the input map. That way, the content only plays on certain objects, without any need for masking or layer to slice routing. The plugin Chaser already does this: it makes it ridiculously easy by reading the input map data for you and placing your content in the correct position.
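Reduced to arithmetic, this kind of placement is simple: given a slice's rectangle in the composition and a clip's native size, compute the scale and center that make the clip cover exactly that slice. The function below is an illustrative sketch of that calculation, not Chaser's actual API.

```python
# Hedged sketch of slice-based content placement: stretch a clip so it
# covers exactly one slice rectangle in the composition.
def fit_clip_to_slice(clip_w, clip_h, sx, sy, sw, sh):
    """Return (scale_x, scale_y, center) that maps a clip of size
    clip_w x clip_h onto the slice rectangle (sx, sy, sw, sh)."""
    scale_x = sw / clip_w
    scale_y = sh / clip_h
    center = (sx + sw / 2, sy + sh / 2)
    return scale_x, scale_y, center

# Stretch a 1920x1080 clip over a 960x540 slice whose top-left is (480, 270):
print(fit_clip_to_slice(1920, 1080, 480, 270, 960, 540))
```

Doing this per slice is exactly what lets the content play only on certain objects, with no masking or routing involved.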
And if you are a creative mind like DHoude, who would like to make something custom like "a wet noodle slithering down the edges", you can take the input map into After Effects or your 3D software and create custom content for it. Input from Syphon or other generative sources can be tailored to the input map the same way.
By making the input map the guide for all of these different workflows, you make sure that you can mix and match everything easily and intuitively, in the place designed for mixing: the composition. And that makes the difference between having fun at a show, or making a show into a technical challenge of trying to string as many apps together as possible.
Of course, this is all future talk and absolutely no help to you right now. But I thought it might be a good opportunity to talk about where things are heading.