Rocking Out and Getting Your Geek On: Negrita!
We spoke with Marino Cecada, an Italian visual designer who has been doing some out-of-the-box work for various pop and rock acts. Where most rock shows rely visually on simple live camera footage, Marino uses Arena and some custom FFGL wizardry to take things to the next level.
Tell us about yourself
I live and work in Italy, and since 2006 I have been working in video production, starting out as a cameraman and editor. In 2007, together with two colleagues, I worked on our first visual project for concerts: the "Soundtrack" tour for Elisa, a famous Italian singer.
There were no live cameras on that tour. We had PVC screens onto which our videos were projected, and each video was made especially for each song. Over the years, I have become more and more attracted to video art and, generally speaking, everything to do with video production in the music and concert world.
Some of the first interactive video installations I made were with Quartz Composer. I used it to work on music videos, and I collaborated on other tours where I always supervised the visuals and, more recently, also direction and broadcast.
Then, in 2012, while preparing for a tour in the US with Il Volo, my colleague Steve Sarzi proposed working with Arena. I had never heard of it, but after using it with Steve for a couple of hours, we had already set up the basics of the project I had in mind and, most of all, it worked! I was impressed by how fast and easy the software is to use. It also read SMPTE signals right away, which was extremely important, as I usually work with pre-made videos that are synced.
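The SMPTE sync Marino mentions boils down to treating an incoming timecode as an absolute frame address in a pre-rendered video. The sketch below is a simplified, hypothetical illustration of that mapping (non-drop-frame only), not Arena's actual implementation:

```python
# Hypothetical sketch: convert a SMPTE timecode string to an absolute frame
# position, the basic step behind syncing pre-made video to a timecode feed.
def smpte_to_frame(timecode, fps=25):
    """Convert 'HH:MM:SS:FF' (non-drop-frame) to a frame count at `fps`."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# One minute and twelve frames into the show at 25 fps:
# smpte_to_frame("00:01:00:12") == 1512
```

Real-world timecode also has drop-frame variants (29.97 fps), but the idea is the same: every frame of the show has one unambiguous address.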
Tell us about your last work
The last job was for Negrita, an Italian rock group with a 30-year career. For spring 2015 they had an indoor stadium tour in mind. The lighting and stage designer, Jo Campana, conceived a very essential stage: the background was made of three big LED screens, each 5.60 meters tall and 4.20 meters wide, together occupying 16 meters in length.
Given the amount of space they occupied, a lot of the show centered on what was happening on these screens. Our idea was to rely mainly on live footage and to avoid tracks with simply a video playing in the background. The live images were conceived as a graphic element supporting the set design, so the important thing was that what we filmed was processed and filtered to give a different interpretation to each track.
Because it was a live broadcast, the result was a sequence of live videos that followed the dynamics and the energy of what was going on on stage. No pre-made video could have given the same feeling. The only part we left unprocessed was the very last song, when all the lights were turned on and the show came back to a more "earthly" environment.
Tell us about the video system you used
To realize what we had in mind, Telemauri, a video rental company Steve and I work closely with, gave us three manned cameras, three remote-controlled cameras, plus some small ones placed on stage, one of which was on the drum set.
All cameras were routed into an ATEM switcher, which was extremely versatile. Thanks to it, we could independently control four SDI and four other input signals going into the computer running Arena.
What came out of Arena went directly to a video matrix and from there to the screens. Our output was a single full HD signal; the mapping of the three screens was done directly in Arena, deciding what should go on each one. I prefer to keep some of Arena's controls in manual mode, the master fader for example, so we connected a Samson Graphite MF8 controller to the computer.
Did you use any particular effects?
One of the aspects that made us choose Arena over other, more "prestigious" media servers is that through FFGL we could develop our own plugins. In fact, on the previous tour, Elisa's "L'Anima Vola", we created some plugins for a choreography in which the singer moved across the stage while the screens repeated her movements several times to create a trail.
Elisa tour 2014
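A trail look like the one described above can be modeled as a simple feedback loop: each output frame keeps a decayed copy of the previous output. The sketch below is a hypothetical, per-pixel grayscale illustration of the technique, not the actual FFGL plugin:

```python
# Hypothetical sketch of a "trail" effect: each output frame keeps a fading
# copy of the previous output, so movement leaves an afterimage behind.
# Frames here are flat lists of grayscale values in the 0.0-1.0 range.
def trail(frames, decay=0.5):
    """Yield each frame blended with the fading accumulation of earlier ones."""
    acc = None
    for frame in frames:
        if acc is None:
            acc = list(frame)
        else:
            # Keep whichever is brighter: the live pixel or the decayed trail.
            acc = [max(live, decay * prev) for live, prev in zip(frame, acc)]
        yield acc

# A single bright pixel moving left to right leaves a fading trail behind it:
moving_dot = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
trailed = list(trail(moving_dot, decay=0.5))
# trailed[2] == [0.25, 0.5, 1.0]
```

In a real plugin the accumulation buffer lives on the GPU, but the decay-and-blend idea is the same.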
Another plugin I enjoy, developed together with Davide Mania (an FFGL programmer I have been working with for years), is one we named ScrumblePict. I use it often; it allows us to make copies of the signals without using up more of Arena's clip slots. These copies can be moved, rotated, scaled and cropped, letting us create ever-different templates.
Elisa tour 2014
Biagio Antonacci tour 2014
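The core of a copies plugin like ScrumblePict is applying an independent transform to each duplicate of a single source. A minimal, hypothetical sketch of that per-copy math (ScrumblePict itself is closed FFGL code, so this is only an illustration):

```python
import math

# Hypothetical sketch: one source quad duplicated into independently
# transformed copies (scale -> rotate -> translate), without consuming
# an extra clip slot per duplicate.
def place_copy(corners, scale=1.0, angle_deg=0.0, offset=(0.0, 0.0)):
    """Return quad corners after scaling, rotating and translating."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ox, oy = offset
    placed = []
    for x, y in corners:
        x, y = x * scale, y * scale                          # scale
        x, y = x * cos_a - y * sin_a, x * sin_a + y * cos_a  # rotate
        placed.append((x + ox, y + oy))                      # translate
    return placed

# One unit quad, three differently placed copies of the same source:
quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
copies = [place_copy(quad, scale=s, angle_deg=r, offset=o)
          for s, r, o in [(1.0, 0.0, (0.0, 0.0)),
                          (0.5, 90.0, (2.0, 0.0)),
                          (0.5, 0.0, (0.0, 2.0))]]
```

Cropping would simply shrink the source rectangle before the transform; the point is that every copy reads from the same single input texture.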
Could you show us some of the graphic styles used for the show?
As I mentioned, I very much enjoy working with image decomposition, so on this tour we also got busy breaking down and recomposing the signal that came through the switcher.
Negrita live, example of the ScrumblePict effect.
For other tracks, we took advantage of the edge detection and pixel screen effects.
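Edge-detection effects like the one mentioned here typically boil down to a gradient operator such as Sobel: pixels where brightness changes sharply light up, flat areas go dark. A minimal illustrative sketch (not Resolume's actual shader code):

```python
# Hypothetical sketch of an edge-detection pass: the Sobel operator measures
# how sharply brightness changes around each pixel. Input is a 2D grayscale
# image given as a list of rows; borders are left at zero for simplicity.
def sobel_magnitude(img):
    """Return the gradient magnitude per pixel."""
    h, w = len(img), len(img[0])
    kx = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]   # horizontal gradient kernel
    ky = [(-1, -2, -1), (0, 0, 0), (1, 2, 1)]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step from dark to bright produces a strong response at the edge:
# sobel_magnitude([[0, 0, 1, 1]] * 4)[1][1] == 4.0
```

On stage this runs per frame on the GPU, but the kernel math is identical.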
Another fantastic aspect of Arena is the possibility to use DXV files with the alpha channel. In this way we can create moving masks for live inputs.
Negrita live, mask with alpha channel and live inside
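The alpha-masking trick works by letting the mask's alpha channel decide, per pixel, how much of the live feed shows through. A hypothetical one-line model of that "over" blend (all values 0.0 to 1.0):

```python
# Hypothetical sketch of masking a live input with an animated alpha channel:
# per pixel, the mask's alpha mixes the live feed over the background.
def composite(live, background, alpha):
    """Per-pixel 'over' blend: alpha * live + (1 - alpha) * background."""
    return [a * l + (1.0 - a) * b
            for l, b, a in zip(live, background, alpha)]

# Where the mask is opaque the live camera shows, where it is transparent
# the background shows, and half-transparent pixels mix the two:
# composite([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.5, 0.0]) == [1.0, 0.5, 0.0]
```

Because DXV can store that alpha channel per frame, the mask itself can be an animation, which is what makes the "moving masks" Marino describes possible.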
More info and other works and productions at http://www.editart.it
Summer VJ Footage! VJSurvivalKit3, Cyantific & Voyage
Here is a load of ammo for your busy summer festival season!
Even when mixing at breakneck speeds, 1 minute is not enough to show the full range of this pack. 100 (!) loops with a mix of 3D, 2D, glitch, tunnels, abstract, masks and loads more.
VJSurvivalKit3 VJ Footage by Laak
Minimal, rhythmical and abstract loops are a dime a dozen these days. But Ican Agoesdjam always has the eye for detail and composition to rise above the rest. These loops are killer and will add that extra touch to any set or stage.
Cyantific VJ Footage by Ican Agoesdjam
Ladies and gentlemen, this is your captain Ghosteam speaking. We are ready to take you on a voyage. Where it leads, none knows for sure. But isn’t the journey always more important than the destination?
Voyage VJ Footage by Ghosteam
Quick Tip: Hue Rotate and Hue Scale
Use Hue Rotate on your layers or on the composition to keep your colors in check.
Not only will your stage look better, your light guy will thank you as well. Sticking to 1 or 2 primary colors is the first step in working together with light and laser to create a better show.
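For the curious, a hue rotation is just a shift around the color wheel in HSV space, leaving saturation and brightness alone. A hypothetical per-pixel sketch of the idea (not Resolume's actual effect code):

```python
import colorsys

# Hypothetical sketch of what a Hue Rotate effect does per pixel: shift the
# hue component in HSV space while keeping saturation and value unchanged.
def hue_rotate(rgb, degrees):
    """Rotate an (r, g, b) color of 0.0-1.0 floats around the hue wheel."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# Pure red rotated 120 degrees lands on (approximately) pure green:
# hue_rotate((1.0, 0.0, 0.0), 120)
# Gray is unaffected, since it has no hue to rotate:
# hue_rotate((0.5, 0.5, 0.5), 90) == (0.5, 0.5, 0.5)
```

Clamping your whole composition to a narrow hue range this way is what keeps the output in step with the 1 or 2 primary colors the light rig is using.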
Resolume 4.2.1 Free Maintenance Release
"Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance."
Kurt Vonnegut, Hocus Pocus
We don't particularly like doing maintenance either; it always takes longer than expected. But we know you will enjoy a more stable Resolume. So with version 4.2.1 we managed to control ourselves and just did maintenance. No shiny new toys, just the all-important maintenance. Hit the download!
[FIXED] BlackMagic Intensity (Pro) jumps to 0x0 resolution after moving clip or reloading
[FIXED] Thumbnailing fails on short clips
[FIXED] Clip sometimes shows last frame on resync
[FIXED] Retrigger and Continue modes are confused if clip start offset is set in preferences
[FIXED] Continue on clip with in and outpoint no worky
[FIXED] Comp Midi map shortcuts for Clip Transport BPM Sync /2 and *2 are not saved
[FIXED] Window background color black instead of White
[FIXED] DXV 3 compression memory leak
[FIXED] Instant crash with non clean aperture DXV3 file
[FIXED] DXV 3 No Quicktime FFmpeg fallback no worky
[FIXED] Transitions don't work on PC with DXV3HQ files and sources with alpha
[FIXED] QuickLook plugin prevents files from being deleted
[FIXED] QuickLook plugin breaks Photo Jpeg thumbnailing
[FIXED] Solid Color effect jumps to pink on 1.0
[FIXED] Some plugins crash on start
[FIXED] Some plugins have memory leaks
Enjoy!
The Resolume Team
New Footage: Part 2 and Part 3 and Completely Awesome.
Strangeloop enters the next part of his Hyperfields saga:
Get HyperFields 3 by Strangeloop *exclusively* from Resolume Footage.
Artificially Awake shines a light in the darkness:
Get NeonStructures by Artificially Awake from the Resolume shop.
And apparently Ablaze Visuals' computer is still glitching out. We hope it stays that way for a while ;)
Get NeoGlitch 2 by Ablaze Visuals *exclusively* from Resolume Footage.
Artist profile: Ghostdad
A breath of fresh air in the saturated landscape of abstract EDM visuals, Ghostdad aka Ryan Sciaino caught our eye running the impressive visuals for Porter Robinson's Worlds tour. After spending a morning scouring the Interwebs for concert footage, we figured we just might as well get in touch with the legend himself.
Porter Robinson - Worlds still image courtesy of Invisible Light Network
How did you get started in the VJ game? When did you discover Resolume?
I grew up playing music, and eventually DJing, and then creating visuals for my own music. At some point digging for records and samples turned into digging for found footage from VHS tapes and dollar store DVDs. That was around the time internet video was becoming popular too, so I’d comb YouTube and archive.org for weird stuff also.
I went to college for computer music and started learning Max/MSP while I was there. I built a video sampler I could use to switch through clips while DJing, but eventually amped it up to take it on the road with my band WIN WIN when we were doing a synched-up DJ/VJ set. It was sort of a monster with cue points and BPM synch and effects, so programming it got pretty intense!
When I started working with Porter I knew I needed something faster and easier to throw new content into on the fly. I was making lots of new looks to layer up with other clips and logos etc. It seemed like Resolume could handle anything I threw at it, and the triggering was the fastest I’d ever used, making it really fun to jam with.
Who are some of the artists that inspired you early on? Who is knocking your head back currently?
I listened to a lot of indie rock in high school and Cornelius was one of my favourite artists from Japan. I was lucky enough to see him do his Point show in NYC. I had never seen anything like it. I grew up watching music videos and even got into film and video art so I was used to seeing music and video together but never with live music in person like that. The content and degree of synch were incredible. It really blew my mind.
I’m a pretty big fan boy of artists who use multimedia in a conceptual way but also keep it really clean design wise like Ryoji Ikeda or Rafael Rozendaal. I’ve found more and more of my Vimeo likes being taken up by things that have been featured at http://ghosting.tv. And I definitely try to check out other artists when I’m at festivals too. I saw Bassnectar at Buku in New Orleans a few weeks ago and that was an awesome show.
You have a very varied but distinct style. From anime characters to Mayan mysticism to abstract glitch to low-poly geometry to ponies, the list goes on and on. Where does it all come from? Didn't your mom tell you not to spend so much time on the internet?
No actually! We didn’t have the internet when I was growing up! We got a connection by the time I was in high school but it was maybe dial-up speed at best. I’m a little older than what I consider to be the first real “internet generation”, so when things got really high speed and dazzling it made me feel like a stranger in a strange land. There was so much amazing stuff happening on Tumblr or Vimeo or Second Life that I just wanted to check it all out. I get sucked down the rabbit hole online pretty easily, especially when I want to find out more about a genre or an artist or a meme. Some design trends I see online do remind me of things I grew up with, like 8-bit video games or low-poly 3D graphics, so maybe that makes me think “I can do that!”
Visuals for DJ Tigerlily courtesy of Ryan
What caught my eye about the Porter Robinson 'Worlds' content is that it almost seems to be cinematic, in that it seems to be telling a story. Now our minds will always create a narrative with what we see, but is this an experience you consciously set out to create?
I think Porter’s goal is to invoke a feeling rather than tell a specific story. There’s definitely a tendency to connect what you’re seeing on the screen and create a story in your mind, but that’s also the process that pulls you in and allows you to really feel it. In programming the show we give you every audio/video/lighting cue we can for the theme and timing and mood, and as a result I think the viewer gets to paint their own story and put themselves into it in the process.
Porter Robinson - Worlds still image courtesy of Invisible Light Network
The Worlds tour content is a collaborative effort, with you playing content created by a larger group of visual artists. Who are the people that you've been working with and how has it been working with them?
We worked with Invisible Light Network on the animated looks you see in the show. They’re based in NY also and had about 9 or 10 illustrators and animators working on their team. We were also able to grab additional content from some of Porter’s music videos like Flicker by Coyote Post.
I made content for the show as well and Porter was super in touch with everyone throughout the creation process. It was a lot of different footage to wrangle in Premiere, but I spent a week with Porter before the start of the Worlds tour where we really figured out the visual flow and style of the show while putting it all together. Porter has a tremendous amount of vision when it comes to his music which is totally inspiring.
So when looking at Youtube videos from your shows, I came across this: https://www.youtube.com/watch?v=AdotsHAzfVA. It looks like someone has been re-creating the content he saw at the show. How do you like them apples?
Yah we just saw that also! I think fans are dying to take home a piece of the show and it’s really cool they’d go so far as to recreate it from the bits of media that are floating around out there. I think that excitement starts with Porter’s music though since there are practically whole new versions of songs from the album in the live set.
Someone even put the entire set together from cell phone footage taken at shows with homemade recreations of the live music. Okay and here’s where it gets really crazy, someone even started building the live rig into a 3D game engine: https://youtu.be/kq3TcMxpcV4
The expanded presentation of Worlds as an album is what makes it special, but I think the live and communal aspects are still super important. Maybe someday we’ll all be able to log into an MMO and experience something similar but even that won’t be able to beat being there in person experiencing the show with other fans. My guess is there will be a complete version of the Worlds show you can watch at home someday but for now we try to keep certain things exclusive to the live set so you have to show up and get the full experience.
Hopefully this crowd video from the Youtubes captures some of that live experience!
Recently you've been playing with Unity to make realtime visuals. What's the main thing that makes realtime more fun than pre-rendered?
Render time is never fun and playing video games is always fun, right? I’ve never been very patient with 3D software. A lot of the 3D stuff I work on has a lo-fi video game aesthetic as well, so it’s sort of a no-brainer to start throwing stuff into Unity. I jump in and out of Blender as well, but I figured if I’m going to put my time into learning a 3D environment I wanted it to be real time.
Porter Robinson - Worlds still image courtesy of Ryan
Alex, my band mate in WIN WIN, is way more under the hood with Blender and rendered some really weird stuff for our last music video. We really liked the effect of video footage height-mapped to a mesh, and the objects came out really smooth and organic looking, in part thanks to some render time:
What are the main stumbling blocks you run into when working in realtime as opposed to keyframe everything? What about the liberating moments of the freedom it offers?
Scripting is something I wrestle with. It’s great that objects can do what you want in real time but you still have to tell them what to do! The benefit of course being you can see those changes instantly, and tweak it endlessly.
Controlling things in real time keeps me a little more engaged and expressive. I think coming from a music background makes that important to me.
Visuals for DJ Tigerlily courtesy of Ryan
Do you like to control things in realtime during the show? Or is the appeal more during the creative process?
It’s been great to get a chance to do both this year. Both VJing live but also spending time editing and programming a show I mean. There are always things that will look better when edited ahead of time, but even in a show like Worlds I leave myself a few things to do by hand. Sometimes that’s so I can follow what Porter’s doing live, but also for me to feel more involved in the performative aspect of the show. I still play guitar and keys so I don’t want to let go of that live aspect of playing visuals also like an instrument.
People seem to get really excited when discussing realtime vs rendered, some people even get militant about it. You seem to switch seamlessly between both. Do you think one or the other has more potential? Are they mutually exclusive? Where would you like to see visuals heading in the next five years?
Part of my thinking about learning realtime is definitely about the future. Ideally real time processing will catch up to how good it can look when rendering. I don’t mind it looking a little rough around the edges for now if I can play it to the beat.
Ryan is a prolific internet user, so you can catch him in a variety of digital media. Get started down the rabbit hole at his website: http://www.djghostdad.com/
					
					
				Porter Robinson - Worlds still image courtesy of Invisible Light Network
[fold][/fold]
How did you get started in the VJ game? When did you discover Resolume?
I grew up playing music, and eventually DJing, and then creating visuals for my own music. At some point digging for records and samples turned into digging for found footage from VHS tapes and dollar store DVD's. That was around the time internet video was becoming popular too so I’d comb youtube and archive.org for weird stuff also.
I went to college for computer music and started learning max/msp while I was there. I built a video sampler I could use to switch through clips while DJing, but eventually amped it up to take it on the road with my band WIN WIN when we were doing a synched up DJ/VJ set. It was sort of a monster with cue points and BPM synch and effects so programming it got pretty intense!
When I started working with Porter I knew I needed something faster and easier to throw new content into on the fly. I was making lots of new looks to layer up with other clips and logos etc. It seemed like Resolume could handle anything I threw at it, and the triggering was the fastest I’d ever used, making it really fun to jam with.
Who are some of the artists that inspired you early on? Who is knocking your head back currently?
I listened to a lot of indie rock in high school and Cornelius was one of my favourite artists from Japan. I was lucky enough to see him do his Point show in NYC. I had never seen anything like it. I grew up watching music videos and even got into film and video art so I was used to seeing music and video together but never with live music in person like that. The content and degree of synch were incredible. It really blew my mind.
I’m a pretty big fan boy of artists who use multimedia in a conceptual way but also keep it really clean design wise like Ryoji Ikeda or Rafael Rozendaal. I’ve found more and more of my Vimeo likes being taken up by things that have been featured at http://ghosting.tv. And I definitely try to check out other artists when I’m at festivals too. I saw Bassnectar at Buku in New Orleans a few weeks ago and that was an awesome show.
You have a very varied but distinct style. From anime characters to mayan mysticism to abstract glitch to low-poly geometry to ponies, the list goes on and on. Where does it all come from? Didn't your mom tell you not to spend so much time on the internet?
No actually! We didn’t have the internet when I was growing up! We got a connection by the time I was in high school but it was maybe dial up speed at best. I’m a little older then what I consider to be the first real “internet generation” so when things got really high speed and dazzling it made me feel like a stranger in a strange land. There was so much amazing stuff happening on Tumblr or Vimeo or Second Life that I just wanted to check it all out. I get sucked down the rabbit hole online pretty easily, especially when I want to find out more about a genre or an artists or a meme. Some design trends I see online do remind me of things I grew up with like 8 bit video games or low poly 3D graphics so maybe that makes me think “I can do that!”
Visuals for DJ Tigerlily courtesy of Ryan
What caught my eye about the Porter Robinson 'Worlds' content is that it almost seems to be cinematic, in that it seems to be telling a story. Now our minds will always create a narrative with what we see, but is this an experience you consciously set out to create?
I think Porter’s goal is to invoke a feeling rather then tell a specific story. There’s definitely a tendency to connect what you’re seeing on the screen and create a story in your mind but that’s also the process that pulls you in and allows you to really feel it. In programming the show we give you every audio/video/lighting cue we can for the theme and timing and mood, and as a result I think the viewer gets to paint their own story and put themselves into it in the process.
Porter Robinson - Worlds still image courtesy of Invisible Light Network
The Worlds tour content is a collaborative effort, with you playing content created by a larger group of visual artists. Who are the people that you've been working with and how has it been working with them?
We worked with Invisible Light Network on the animated looks you see in the show. They’re based in NY also and had about 9 or 10 illustrators and animators working on their team. We were also able to grab additional content from some of Porter’s music videos like Flicker by Coyote Post.
I made content for the show as well and Porter was super in touch with everyone throughout the creation process. It was a lot of different footage to wrangle in Premiere, but I spent a week with Porter before the start of the Worlds tour where we really figured out the visual flow and style of the show while putting it all together. Porter has a tremendous amount of vision when it comes to his music which is totally inspiring.
So when looking at Youtube videos from your shows, I came across this: https://www.youtube.com/watch?v=AdotsHAzfVA. It looks like someone has been re-creating the content he saw at the show. How do you like them apples?
Yah we just saw that also! I think fans are dying to take home a piece of the show and it’s really cool they’d go so far as to recreate it from the bits of media that are floating around out there. I think that excitement starts with Porter’s music though since there are practically whole new versions of songs from the album in the live set.
Someone even put the entire set together from cell phone footage taken at shows with homemade recreations of the live music. Okay and here’s where it gets really crazy, someone even started building the live rig into a 3D game engine: https://youtu.be/kq3TcMxpcV4
The expanded presentation of Worlds as an album is what makes it special, but I think the live and communal aspects are still super important. Maybe someday we’ll all be able to log into an MMO and experience something similar but even that won’t be able to beat being there in person experiencing the show with other fans. My guess is there will be a complete version of the Worlds show you can watch at home someday but for now we try to keep certain things exclusive to the live set so you have to show up and get the full experience.
Hopefully this crowd video from the Youtubes captures some of that live experience!
Recently you've been playing with Unity to make realtime visuals. What's the main thing that makes realtime more fun than pre-rendered?
Render time is never fun and playing video games is always fun right? I’ve never been very patient with 3D software. A lot of the 3D stuff I work on has a lo-fi video game aesthetic as well so its sort of a no brainer to start throwing stuff into Unity. I jump in and out of Blender as well but I figured if I’m going to put my time into learning a 3D environment I wanted it to be real time.
Porter Robinson - Worlds still image courtesy of Ryan
Alex, my band mate in WIN WIN, is way more under the hood with Blender and rendered some really weird stuff for our last music video. We really liked the effect of video footage height-mapped to a mesh, and the objects came out really smooth and organic looking, in part thanks to some render time:
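For readers curious about the technique, here's a minimal sketch of the height-mapping idea in plain Python/NumPy. This is not Alex's actual Blender setup (that would be done with displacement modifiers or bpy scripting); it just illustrates the core trick of driving mesh vertex heights from pixel brightness. The function name and scale parameter are made up for the example:

```python
import numpy as np

def frame_to_heightmap(frame, scale=1.0):
    """Map a grayscale video frame (2D array, 0-255) to mesh vertex positions.

    Returns an (H, W, 3) array where x/y follow the pixel grid and z is
    driven by pixel brightness -- bright pixels push the mesh up.
    """
    h, w = frame.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    zs = frame.astype(np.float32) / 255.0 * scale
    return np.dstack([xs, ys, zs.astype(np.float64)])

# A tiny 2x2 "frame": the fully white pixel sits at the full scale height,
# the black pixel stays flat on the grid.
frame = np.array([[0, 255], [128, 64]], dtype=np.uint8)
mesh = frame_to_heightmap(frame, scale=2.0)
```

Run this per frame of a video and the mesh ripples along with the footage; the smooth, organic look Alex got comes from subdividing the mesh far beyond the pixel grid and letting the renderer smooth-shade it.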
What are the main stumbling blocks you run into when working in realtime, as opposed to keyframing everything? And what are the liberating moments, where the freedom it offers pays off?
Scripting is something I wrestle with. It’s great that objects can do what you want in real time but you still have to tell them what to do! The benefit of course being you can see those changes instantly, and tweak it endlessly.
Controlling things in real time keeps me a little more engaged and expressive. I think coming from a music background makes that important to me.
Visuals for DJ Tigerlily courtesy of Ryan
Do you like to control things in realtime during show? Or is the appeal more during the creative process?
It’s been great to get a chance to do both this year. Both VJing live but also spending time editing and programming a show I mean. There are always things that will look better when edited ahead of time, but even in a show like Worlds I leave myself a few things to do by hand. Sometimes that’s so I can follow what Porter’s doing live, but also for me to feel more involved in the performative aspect of the show. I still play guitar and keys so I don’t want to let go of that live aspect of playing visuals also like an instrument.
People seem to get really excited when discussing realtime vs rendered, some people even get militant about it. You seem to switch seamlessly between both. Do you think one or the other has more potential? Are they mutually exclusive? Where would you like to see visuals heading in the next five years?
Part of my thinking about learning realtime is definitely about the future. Ideally real time processing will catch up to how good it can look when rendering. I don’t mind it looking a little rough around the edges for now if I can play it to the beat.
Ryan is a prolific internet user, so you can catch him in a variety of digital media. Get started down the rabbit hole at his website: http://www.djghostdad.com/
New Footage by BlueElk, Artificially Awake & Analog Recycling
						Analog Recycling is back! The original pixel gangstaz return with a glorious set of retro-chic visuals. Motion blurred circular patterns with an almost tangible analog feel to them.
Reach for the lasers and party like it’s 1999.
Subconscious VJ Footage by Analog Recycling
Plug in to the PA and get connected to the sound. Forget about FFT and equalisers, this is the very definition of audio visualisation. These clips go up to 11.
AudioTech VJ Footage by Artificially Awake
Unleash your inner mad scientist with this amazing transparent machinery. Beautifully rendered, deliciously abstract but with a hint of a story, these clips will turn any audience into your zombie minions.
GlassMachines VJ Footage by BlueElk
DXV 3 Upgrade Guide
						So, a few weeks ago we publicly released a new version of DXV, imaginatively called DXV3.0. It's a big improvement, and people love it. To make sure you get the best out of the codec, here's an upgrade guide.
First of all, DXV3 encoded movies will only work in Resolume 4.2 and above. In previous versions, Resolume will think they are very big and slow Quicktime files, and you will not have a good time. If you want to use DXV3, you're committed to Res 4.2.
DXV3.0 has 3 benefits: it's faster at high resolutions, files are generally smaller and it has a high quality option.
In general, we don't recommend re-encoding your entire library. Your old DXV2 files will play fine in Resolume 4.2, and you still get all the benefits of improved smoothness and bug fixes.
So there is no need to re-encode your entire DXV2 library. Especially do not re-encode your DXV2 files to DXV3 High Quality. Because you are rendering from a DXV2 source, any image artifacts are already rendered into the file. Your image quality is not improved at all, the only thing you get is bigger files that look exactly the same. And that's probably not what you want.
There are only three scenarios where we would advise re-encoding.
[fold][/fold]
1. A DXV encoded file shows a lot of banding on gradients. In this case you can re-encode to DXV3 High Quality. But you will need to have the original file as an uncompressed source, or render it again straight from After Effects or Cinema4D. Of course, encoding the DXV2 source to DXV3 will show no improvements, because the banding is already rendered into the file.
Expect file size to double when encoding to DXV3 HQ. With great power comes great responsibility, so don't use High Quality as your default render setting. You'll run out of disk space real quick. Only use it on files that have visible artefacts when you render them to Normal Quality.
DXV3 Normal Quality is exactly the same image quality as DXV2. In other words, the picture quality will be exactly the same when rendering the same source file to DXV2 or to DXV3 Normal Quality.
2. You consistently scrape by on performance at high resolutions. We don't have exact figures yet, but DXV3 performs a LOT better when playing back 4K files. If you're running 640x480, or 3 or 4 layers of 1080p, re-encoding won't do much for you, but when playing 4K you can expect around a 30% improvement.
3. You cannot store all your files. DXV3 Normal Quality should be around 25% smaller than DXV2. This can vary considerably between different types of content. Minimal line content can go down as much as 50%, but highly detailed photographic content can stay around the same.
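To see how the quoted figures stack up, here's a rough back-of-envelope calculator. The percentages are the approximate ones from this post (25% smaller for Normal Quality, roughly double for High Quality relative to Normal); actual savings vary heavily with content, so treat these as illustrative numbers only:

```python
def estimate_dxv3_sizes(dxv2_gb):
    """Rough DXV3 size estimates from a known DXV2 file size, in GB.

    Based on the approximate figures quoted in this post:
    - DXV3 Normal Quality: ~25% smaller than DXV2
    - DXV3 High Quality:   ~2x the size of Normal Quality
    """
    normal = dxv2_gb * 0.75  # ~25% smaller than DXV2
    high = normal * 2.0      # HQ roughly doubles the file size
    return normal, high

# A 10 GB DXV2 clip: ~7.5 GB at Normal Quality, ~15 GB at High Quality.
normal, high = estimate_dxv3_sizes(10.0)
```

Note how High Quality ends up bigger than the original DXV2 file, which is exactly why you shouldn't use it as your default render setting.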
So, hopefully this clears things up a bit. If you have any questions, we'll be happy to answer them.
New Footage Releases: Familiar Faces Doing What They Do Best
						The releases are coming thick and fast, but you know we always keep an eye on quality. Wherever possible, the DXV3 encoded versions will come with alpha transparency, instead of a regular black background. That means that you can get both Electric and Synthesize with embedded alpha! Happy mixing times, oh yes.
Mathias Muller is king of the particles.
Get FluidMandalas from the Resolume label.
Raw Designs is back with more visual synthesis.
Get Synthesize 2 from the Resolume label.
And Artificially Awake doesn't sleep, he renders.
Get Electric from the Resolume label.
Res 4.2 Speech Recognition - Look Ma, No Hands!
						///// So yeah, as most of you figured out, this was our April Fools' joke for 2015. See you next year, everyone! /////
One of the exciting new features in Res 4.2 is speech recognition. That’s right, you can now control Resolume just by saying what you want to do.
Check the video for an overview of the basic controls and some hints on the new special effect commands.
Of course speech recognition needs a clear sound signal, so when using it in a club environment, it’s best to ask the DJ to turn the music down.