Mystery Revealed: the systems we use for the technical development of our shows – Onionlab
24-3-20


Carrying out our projects requires a great deal of time and dedication. We are a close-knit team that combines effort, knowledge and research to create high-quality work and shows that stay in the audience's memory. It is not always easy, but we do our best: we lose ourselves in development and find ourselves in the solution.

The technical part of each project is perhaps the most complex, which is why we would like to revisit some of our shows that were released some time ago and have since been adapted several times. Projects such as Transfiguració de la Nau, Ulterior and LuminAV challenged us to explore and design techniques that we still use today. The idea has always been to combine lighting with architecture, music and emotion.

Transfiguració de la Nau
Ulterior
LuminAV

Transfiguració, directed by Onionlab and Xavi Bové, once again with music by Zinkman, was conceived with the idea of transforming Girona Cathedral into a spectacular light show on the occasion of its 600th anniversary. In the case of Ulterior, a breathtaking video mapping was projected onto the facade of the Pantheon in Rome. LuminAV was created for the Llum BCN 2019 festival as a study of light, investigating the behaviour and coexistence of different light sources in the same space. The same techniques were used in all three cases, with fascinating results.

‘To develop these shows we have created our own system, which allows us to work with lighting systems in the software’

To develop these shows we created our own system that lets us work with lighting rigs inside Cinema4D, the software with which we create most of the images and 3D content of our pieces. This way, we can seamlessly integrate everything created in 3D with what physically happens with the real lights in the space where the installation takes place. All of this relies on the fact that Onionlab always builds a 3D model of the space where the show will be held, based on the actual plans, 3D scans, real measurements, etc. We then generate a scaled 3D space and position the "fixtures" we will use in the exact place they will occupy during the installation.
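As an illustration, a scaled scene with positioned fixtures boils down to a small amount of data per light. This is only a hypothetical sketch, written in Python; the class name, field names and values are ours, not Onionlab's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Fixture:
    """A light fixture placed in the scaled 3D model of the venue."""
    name: str
    position: tuple  # (x, y, z) in metres, in the venue's coordinate space
    fixture_type: str  # e.g. "moving head", "wash"

# A scaled scene built from real plans and measurements (values are invented)
scene = [
    Fixture("mh_01", (2.0, 6.5, -3.0), "moving head"),
    Fixture("mh_02", (-2.0, 6.5, -3.0), "moving head"),
]
```

Because every position comes from real plans and scans, moving a fixture in the model is an honest preview of moving it in the venue.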

This helps us predict the best positions from which to create the sensation and the result we expect in the audience.

‘We have created a WYSIWYG (What You See Is What You Get) system’

Another system we have created is a WYSIWYG (What You See Is What You Get) previsualisation, which lets us move forward and execute projects whose technical material is expensive and only available to us a few days before the show opens, even though production usually takes one to three months in the studio.

At the same time, having this tool inside the same program we use to generate the images opened up a whole new world for Onionlab. It is very important for us to work with a 3D light exactly as we would with a real one, making sure its behaviour does not change, and moreover to make the physical light react to the video.

As if this were not enough, we have also built an add-on for Cinema4D by programming a script that we integrate into each of our projects, so that we can send our movements from the virtual 3D lights to the real lights installed on site.

‘Within Cinema4D, each DMX channel of the moving light is assigned to a specific UserData within the virtual moving light’

Within Cinema4D, each DMX channel of the moving light is assigned to a specific UserData slot on the virtual moving light; each UserData varies within the 0–255 range for Pan, Tilt, Dimmer, Intensity, Focus, etc.
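Conceptually, each UserData slot behaves like a clamped byte value, mirroring a DMX channel. A minimal sketch of that idea in plain Python — the `MovingLight` class and channel names are our illustration, not the studio's actual script:

```python
DMX_CHANNELS = ("pan", "tilt", "dimmer", "intensity", "focus")

class MovingLight:
    """Virtual moving light whose per-channel values mirror DMX's 0-255 range."""

    def __init__(self, name):
        self.name = name
        self.user_data = {ch: 0 for ch in DMX_CHANNELS}

    def set_channel(self, channel, value):
        # Clamp to the DMX byte range, as each UserData slider does in C4D
        self.user_data[channel] = max(0, min(255, int(value)))

light = MovingLight("mh_01")
light.set_channel("dimmer", 300)  # out-of-range input is clamped to 255
light.set_channel("pan", 128)
```

Keeping the animation data in this byte-like form means it can be shipped to a DMX output with no further translation.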

Through the Xpresso node-programming system, the UserData values are linked to the corresponding parameters of the virtual moving light for real-time display within the C4D animation, using mathematical formulas so that the 0–255 range of each UserData maps to the real range of the channel being animated (Dimmer intensity 0–100%, Pan movement 0–540°, etc.).
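The remapping itself is simple linear scaling. Here is a sketch, in Python for readability, of what such an Xpresso formula computes (function names and the clamping choice are our assumptions):

```python
def to_dmx(value, real_min, real_max):
    """Map a physical channel value (e.g. pan in degrees) to the 0-255 DMX range."""
    value = max(real_min, min(real_max, value))  # clamp to the fixture's real range
    return round((value - real_min) / (real_max - real_min) * 255)

def from_dmx(dmx, real_min, real_max):
    """Inverse mapping: a DMX byte back to a physical value."""
    return real_min + dmx / 255 * (real_max - real_min)

# e.g. a pan of 540 degrees on a 0-540 fixture fills the whole DMX range
full_pan = to_dmx(540, 0, 540)   # 255
half_dim = to_dmx(50, 0, 100)    # dimmer at 50% of a 0-100% range
```

The same pair of functions works for any channel; only `real_min` and `real_max` change per fixture profile.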

Likewise, the rotation of each moving light is guided by a target which, since it can be included in a MoGraph setup, can be animated with Effectors to create complex movements.
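Target-guided rotation ultimately means aiming the fixture at a point in space. A minimal sketch of the geometry, assuming a Y-up coordinate system (the axis convention and function name are our assumptions, not Onionlab's setup):

```python
import math

def aim_at(fixture_pos, target_pos):
    """Return (pan, tilt) in degrees to point a fixture at a target.

    Assumes Y is up: pan is measured around the vertical axis,
    tilt is the elevation from the horizontal plane.
    """
    dx = target_pos[0] - fixture_pos[0]
    dy = target_pos[1] - fixture_pos[1]
    dz = target_pos[2] - fixture_pos[2]
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

# A target straight ahead needs no rotation; one directly above needs 90° tilt
ahead = aim_at((0, 0, 0), (0, 0, 1))   # (0.0, 0.0)
above = aim_at((0, 0, 0), (0, 1, 0))   # tilt of 90.0
```

Animating the target (for instance with a MoGraph Effector) then drives pan and tilt for free, which is what makes the complex movements described above so easy to author.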

These values are then exported by means of a Python script via OSC (Open Sound Control) to TouchDesigner, another piece of software we are delighted to work with; it helps us enormously when controlling and synchronising different tools.
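For readers curious what travels over the wire, here is a hand-rolled encoder for a single-float OSC message using only the standard library. The address pattern `/mh_01/pan` is an invented example; in practice a library such as python-osc would handle this:

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC message carrying one 32-bit float.

    Per the OSC 1.0 spec, strings are null-terminated and padded to a
    multiple of 4 bytes; floats are big-endian IEEE 754.
    """
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)

    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

packet = osc_message("/mh_01/pan", 0.5)
# This byte string could then be sent to TouchDesigner over UDP.
```

OSC's simple, self-describing packets are a big part of why it works so well for glueing Cinema4D and TouchDesigner together in real time.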

Within TouchDesigner, all the OSC information coming from Cinema4D is collected and converted between formats (from radians to DMX values) before being sent out over DMX or Art-Net. TouchDesigner also lets us keep a library of each fixture we will use: all we have to do is modify and patch one of the library profiles and reference it to the real device we will have on site.
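Patching a fixture profile to a start address, as described above, can be sketched in a few lines. The profile contents and addresses below are invented for illustration; real fixture libraries are far richer:

```python
# A fixture "library" entry: channel name -> offset from the start address
MOVING_HEAD_PROFILE = {"pan": 0, "tilt": 1, "dimmer": 2, "focus": 3}

def patch(universe, profile, start_address, values):
    """Write a fixture's channel values into a 512-byte DMX universe.

    `start_address` is 1-based, as on real DMX consoles.
    """
    for channel, offset in profile.items():
        universe[start_address - 1 + offset] = values.get(channel, 0)
    return universe

universe = bytearray(512)
patch(universe, MOVING_HEAD_PROFILE, 1, {"pan": 128, "dimmer": 255})
```

Once patched, the universe buffer is ready to be framed into a DMX or Art-Net packet, so referencing a new real device is just a matter of swapping the profile and start address.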

Patch of one of the devices referenced within TouchDesigner