How can we automate certain tasks when producing interactive projects? How can we avoid having to reconfigure the same options for different projects? The creation of a "universal" template in TouchDesigner tackles these challenges and can save creators time and energy.
In this talk, Bertrand De Becque will describe his experience setting up his own universal template. He will present how this method has become a key starting point for all of his productions and how his universal template system has progressively evolved by testing it in real conditions.
De Becque’s template system allows creators to define, in just a few clicks, the devices that will be used to provide input. Sources can include webcams, MIDI controllers, microphones, instruments, smartphones, joysticks, and Raspberry Pis. The template also facilitates the interactive creation of 2D and 3D content in real time. Finally, the presenter will highlight how this system allows creators to set the way pixels are retransmitted, whether through screen display, video projection, HTC VIVE, fisheye dome, or equirectangular 360 video, as well as light (via DMX signals), wind, smoke, audible signals, and more.
This kind of project architecture is especially useful for collaborative workflows as it helps several people to work together. It establishes clearly defined spaces where every project member can easily know where to intervene within the network. The most remarkable advantage of the template is the ability to work in a flexible environment where you can change, in a few moments, the way participants can enjoy and interact with your creation.
This talk will act as a roadmap for creative practice using a framework of nodes and connections. Within the context of connectionism the presenter, Ioannis Bardakos, will construct and define the term “diagrammatic aesthetics” through the use of a visual logic/reasoning paradigm. In such a framework, analogies, arrows, and wires create associations between digital and physical objects and restructure the consciousness of the artist at every delta time. Furthermore, the element of feedback, the forgotten backbone of first and second order cybernetics, provides the theoretical tools for a new exploration of how loops can be a generator of aesthetics.
While this presentation is largely theoretical it will be supported by a rich set of aesthetic examples that include diagrams, loops, artworks, and connections created by the presenter. Additional examples will be drawn from university students engaged in the study of art and aesthetics within a new media context. This talk aims to highlight the importance of diagrams and structures (computational, logic, or even mathematical) for generating poetics (structure and art).
In this talk, Ruokun Chen and Jimmz Zhang will describe the technical development of various commercial projects using TouchDesigner, including kinetic installations and a laser control system.
The first project that will be addressed is a kinetic installation control system developed for launch events for both Audi and Volkswagen. Photo and video documentation will be shown in order to explain how this system works with pre-rendered video, real-time projection, and dance performances. This particular technique is based on the DMX512 and Art-Net protocols. This presentation will also detail how the design team used GLSL to calculate 40,000 DMX channels in real time for the Audi launch in Guangzhou and the troubleshooting techniques they developed for the project.
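To illustrate the scale involved, the sketch below quantizes normalized control values and packs them into 512-channel Art-Net universes. It is a plain-Python stand-in for the data-packing step, not the team's actual GLSL shader:

```python
def pack_dmx_universes(values, channels_per_universe=512):
    """Quantize normalized floats (0.0-1.0) to 8-bit DMX levels and
    split them into Art-Net-sized universes of 512 channels each."""
    levels = [max(0, min(255, round(v * 255))) for v in values]
    return [levels[i:i + channels_per_universe]
            for i in range(0, len(levels), channels_per_universe)]

# 40,000 channels span 79 universes (the last one partially filled)
universes = pack_dmx_universes([0.5] * 40000)
```

Computing 40k channels per frame on the CPU is what makes a GPU approach attractive: in GLSL the same quantization runs once per pixel, with each pixel standing in for one or more channels.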
Chen and Zhang will also discuss the laser control system that they integrated with the Audi A6L Launch in January 2019 and will share how they were able to make the system cooperate precisely in relation to a holography show.
A further technical discussion will describe and explain how to control lasers from within TouchDesigner using the EtherDream interface. The presenters will also elaborate on a tdu.Matrix-based cornerpin correction that can be deployed for laser projections.
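A cornerpin is mathematically a homography from the unit square to an arbitrary quad. The sketch below implements the standard square-to-quad formula (Heckbert's) in plain Python as an illustrative stand-in for a tdu.Matrix-based correction; the presenters' actual implementation may differ. Corners are given in the order (0,0), (1,0), (1,1), (0,1):

```python
def corner_pin(corners):
    """Build a unit-square-to-quad homography, a plain-Python stand-in
    for a tdu.Matrix cornerpin transform."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    if sx == 0 and sy == 0:          # quad is a parallelogram: affine case
        g = h = 0.0
    else:
        det = dx1 * dy2 - dx2 * dy1
        g = (sx * dy2 - dx2 * sy) / det
        h = (dx1 * sy - sx * dy1) / det
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):
        # projective divide maps unit-square coords into the target quad
        w = g * u + h * v + 1.0
        return (a * u + b * v + c) / w, (d * u + e * f * 0 + e * v + f) / w
    return warp

warp = corner_pin([(0, 0), (1, 0), (2, 1), (0, 1)])
```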
The talk will be brought together by a discussion of workflows for the development of control systems, in addition to an overview of the technical specifications of different hardware and software. The presenters will display technical resources, including .toe files and animation FBX files, in order for attendees to better understand the concepts presented.
Lastly, the presenters will briefly discuss their part in developing the Chinese TouchDesigner Community as well as memorable moments from the first Shanghai TouchDesigner Forum this past December.
The ‘Prime Directive’ of art (shamelessly borrowing this term from Star Trek) is to inspire, to shock, to disrupt, and to create an emotional connection and response unachievable by any other means.
That’s it. Nothing more.
But what if art could help materialize data or educational concepts to create a deeper and more visceral understanding of them that would go beyond our simple interpretation of charts? What if these concepts could be forever associated in one’s mind with a particular artwork or installation?
That is one of the many things we are trying to do at OVVO Live Art Studio. The concept of ‘purpose’ and the question of ‘why’ sound very old-fashioned, but we’ll be the first to admit to absolutely loving the answer ‘just because’, scientifically named the ‘beauty paradox of evolution’. Working in a commercial environment, we’re constantly confronted with justifying our choices in order to keep concepts front and center.
As an example for this special talk, I will be taking two of our latest projects that illustrate this idea of applied arts:
· Le Grand Paris Showroom – This is a very futuristic and sophisticated data visualization installation, heavily stylized to ‘transport’ the visitors into the Paris Metropole of 2030 and beyond. It was conceived as a mix of high-level concepts, key data and opportunities analysis aimed at inspiring suburban city authorities and key players around Paris to join the Metropole expansion. For this project, we developed a hybrid playback system that combines projection mapping, pixel mapping for pixel LED tapes, rings and underbelly lighting, as well as ambient light scenarios. All the data are encoded in one single movie file. Everything is controlled by our custom-made web app with a custom back-end CMS (content management system).
· Recycling Education Project – This is a small, one-day project that uses whimsical props and objects to introduce children to the concept of recycling. It is presented as six barcode machines with special cards containing a picture and a QR code. When the card is scanned over a recycling bin, light and sound will indicate if it is the correct, corresponding bin or not. And before you ask, yes, we used the round-robin sound sampling method to make sure the same sound is not repeated many times in a row, driving children (and us) insane. Kudos to the TouchDesigner team, especially Markus, for helping us fix a special case that used lots of Serial DATs.
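Round-robin sampling is simple to sketch: cycle through the recorded variations of a sound in order, so with two or more variations the same sample never plays twice in a row. The class and file names below are hypothetical, not the studio's actual assets:

```python
import itertools

class RoundRobinSampler:
    """Cycle through recorded variations of a sound so that, with two
    or more variations, the same sample never repeats back to back."""
    def __init__(self, samples):
        self._cycle = itertools.cycle(samples)

    def next(self):
        return next(self._cycle)

rr = RoundRobinSampler(['clink_a.wav', 'clink_b.wav', 'clink_c.wav'])
plays = [rr.next() for _ in range(6)]
```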
BeSide has developed a user-friendly control environment called Xperience that can be used for prototyping sound, lighting, video, and automation control throughout the automotive design process. This suite of tools, which includes an authoring environment and a JSON-defined modular playback engine architecture, is entirely built using TouchDesigner. Over the course of building this application, the designers have had the opportunity to push TouchDesigner’s capabilities, in terms of building sophisticated nodal and timeline GUI elements in addition to constructing complex JSON-based storage, distribution, communication, and parsing functionality for sensitive user data.
In this talk, the presenters will look at two key areas of the application suite with a focus on applying these techniques to other distributed author+engine architecture use cases. In relation to the GUI, the presenters will examine their modular approach to building sophisticated, modern looking user interfaces. They will also explain how to combine panel and 3D-based UI building techniques used within TouchDesigner to build some of the more difficult aspects of the authoring environment such as the node-graph editor and timeline editor. In terms of system architecture, the presenters will discuss their application of external JSON files for storage, recall, and user data management. In closing they will discuss their JSON domain-filter technique that can be used for procedurally managing decentralized playback engine instances which allows for easy server load balancing and multi-process performance.
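The domain-filter idea can be sketched as follows: every playback engine instance receives the full JSON cue but keeps only the tracks tagged with its own domain, so load can be balanced by assigning domains to processes. The schema here ('tracks', 'domain') is invented for illustration and is not the presenters' actual format:

```python
import json

def filter_cue_for_domain(cue_json, domain):
    """Return the cue with only the tracks addressed to one engine's
    domain, so each playback instance parses just its own share."""
    cue = json.loads(cue_json)
    cue['tracks'] = [t for t in cue['tracks'] if t.get('domain') == domain]
    return cue

# a hypothetical cue mixing video and audio tracks
cue_json = json.dumps({'name': 'intro', 'tracks': [
    {'domain': 'video', 'file': 'a.mov'},
    {'domain': 'audio', 'file': 'a.wav'},
    {'domain': 'video', 'file': 'b.mov'},
]})
video_cue = filter_cue_for_domain(cue_json, 'video')
```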
This presentation will address the challenge of creating real-time content while using particle tools for fulldome display formats. The discussion will focus largely on the technical aspects of three research-creation projects created at the Société des Arts Technologiques in Montreal: Enigma (2017-2019), Les Planètes (2018), and Illumination Frankenstein (2019), which were made in collaboration with creative programmer Rémi Lapierre and composers Alain Thibault and Walter Boudreau.
Specifically, Breuleux will discuss a variety of methods for environmental storytelling, will explain how to synchronize music and visuals, and will go into detail describing the field of A/V performance. He will also discuss his experience working collaboratively to design performance tools and will highlight the expressive potential of TouchDesigner.
Human beings are used to living in a world ruled by the laws of physics, which is why using controlled digital dynamic simulations is a particularly effective way to create relatable digital worlds that people can engage with. The combined ability to sense people in a space and map digital spaces over physical ones can lead to the creation of new and impactful artworks and experiences.
In this talk, Vincent Houzé will explain how his visual effects experience eventually led him to the creation of large scale interactive installations and performances. He will present the techniques and inspirations that informed a selection of his artworks and describe his use of TouchDesigner as the main software in a variety of projects.
Houzé has used TD as a tool for previsualization to understand physical spaces, as the control centre for aggregating sensor data, and as a platform for developing digital physics simulations to create novel and compelling organic motions and shapes.
Have you ever asked yourself one of the following questions: How much should I charge clients when making a custom project using TouchDesigner? How should I communicate with bad clients? How do I develop contracts? How do I stop people from calling me for free gigs? How can I start winning bids?
In this talk, nVoid founder Elburz Sorkhabi will address these questions by sharing strategies and lessons he has learned over his career. Whether you're new to TouchDesigner and trying to jump-start your career or a veteran who could use a pay raise, Sorkhabi has you covered. Bring your questions for the Q&A session at the end of the presentation!
In this talk, Eugene Afonin and Yan Kalnberzin will describe how they integrated Python between the genetic simulator, Framsticks, and TouchDesigner for their multimedia installation Genome of Luck, which was produced for the exhibition Orient Express, featured as a special project at the 2015 Moscow Biennale.
For the installation a table was placed in the centre of a room, on which two virtual organisms could be seen running a race. The organisms for each race were selected randomly from a pool of 30 creatures and were bred with the help of the Framsticks simulator. Framsticks uses genetic algorithms, rigid body simulations, and simple neural networks to simulate the evolution of a creature with a random structure, guided by a fitness function. In this case, the condition of survival for the organism was based on its ability to run quickly for a set distance. The evolution of each racer on the simulator took from one to several days. Before each race started, the evolution of the two participants was sped up and displayed on two separate screens. The viewer was able to bet on the organism they believed would be the fastest by dropping their money in a glass jar placed beside each of the screens.
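A toy version of the fitness-driven breeding loop might look like the sketch below. Real Framsticks evaluates fitness with physics rollouts and neural controllers; here fitness is just the sum of gene values, standing in for "distance run":

```python
import random

def evolve(pop_size=30, generations=20, genes=5, seed=1):
    """Toy genetic algorithm in the spirit of Framsticks' fitness-driven
    breeding. Fitness is a stand-in for distance run in a simulation."""
    rng = random.Random(seed)

    def fitness(creature):
        return sum(creature)  # placeholder for a physics rollout

    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    best_per_gen = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best_per_gen.append(fitness(pop[0]))
        parents = pop[:pop_size // 2]           # survival of the fastest
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genes)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(genes)
            child[i] += rng.uniform(-0.1, 0.1)  # point mutation
            children.append(child)
        pop = parents + children                # elitism keeps the best
    return best_per_gen

best_per_gen = evolve()
```

Because the best parents survive each generation unchanged, the best fitness can never decrease, which is what makes multi-day evolution runs converge on fast racers.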
In this presentation Eugene Afonin and Yan Kalnberzin will discuss the process of rendering, mapping, and designing this interactive installation.
Akiko will present her most recent work 'Hana Fubuki', an interactive flower projection for Artechouse DC. The presentation will cover both the artistic and technical aspects of creating an art installation. From concept and prototyping to pre-visualizing the project in TouchDesigner, Akiko will discuss the technical requirements of the project: setting up multiple Kinect devices with small computers across a network; NDI quality and resolution tips; how the project was driven using a Particle SOP with Force and Metaball SOPs; and creating custom attributes and geometry instancing with the w coordinate of a Texture 3D TOP.
GRANTECAN is currently looking into new ways of showcasing its research to the public. They scout the never-ending universe for what lies beyond. The timing is fortunate, as this project is set to be completed at the start of the summit. What a great way to share this amazing journey from the stars to code with the generous TouchDesigner community.
Beginning in February 2019 Jakob Povel will be working on the user experience for visitors of Gran Telescopio Canarias. First at studio Boompje in Amsterdam to work on the framework for the installation and then starting in April 2019 on site in Las Palmas. The result of the project is just as unknown as the vastness of the universe. More information will be shared with Derivative once known.
In this talk Javier Alvarez Bailen will tackle a problem that he kept encountering: clients requesting interactive installations that require multiple devices (e.g. radars, cameras, various sensors). Due to the number and variety of devices, designers would be forced to build and configure a new TouchDesigner project each time.
In response to this challenge Bailen and his team designed MAMI, a web application system that allows all of an installation’s devices to be connected and integrated in one central place. MAMI is easy to configure from your web browser and your mobile devices. Once your system is set up, TouchDesigner, with the help of a custom component, can receive values from input devices via Open Sound Control (OSC) or WebSockets. As a result, your focus can be on your interactive design with all sensor data readily available via a unified interface.
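A minimal sketch of the receiving side: route incoming OSC-style (address, arguments) messages to registered handlers, the way a custom component might fan sensor values out to the rest of a network. The addresses shown are hypothetical, not MAMI's actual namespace:

```python
class OSCRouter:
    """Minimal address router: register one handler per OSC address
    and dispatch incoming (address, args) messages to it."""
    def __init__(self):
        self._handlers = {}

    def on(self, address, handler):
        self._handlers[address] = handler

    def dispatch(self, address, *args):
        handler = self._handlers.get(address)
        if handler is None:
            return False      # unrecognized address, message dropped
        handler(*args)
        return True

state = {}
router = OSCRouter()
# hypothetical address for a distance sensor registered in MAMI
router.on('/mami/lidar/1/distance', lambda d: state.update(distance=d))
router.dispatch('/mami/lidar/1/distance', 2.37)
```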
This session will provide a basic introduction to networked systems control in TouchDesigner. One of the most powerful features of TouchDesigner is that it contains default tools for creating control servers over local or public networks, for the development of professional, high-end interactive systems. It allows you to create control systems as stable and optimized as any media server, yet completely tailored and far more affordable. TouchDesigner allows you to develop a robust deployment of multimedia formats from one main control or multiple controls, ranging from standard input hardware to non-traditional systems.
In this talk the presenters will demonstrate how to create a custom interface to control and sync different processes from a server to multiple clients, on multiple machines, with different techniques. They will demonstrate how to design and configure a user-friendly interface, controlled through MIDI hardware, mixing the use of buttons, knobs, and sliders to manage the full project. Attendees will learn how to review basic network configuration for multiple machines inside a LAN, through a network switch, using the Windows 10 console. They will demonstrate how to test and run instructions from the master to clients, enabling each client to receive and interpret the input from its respective sources, and will review how to configure a basic support system that allows any computer in the network to take over processes in case of a system crash.
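The crash-recovery idea can be sketched with a heartbeat monitor: each machine reports in periodically, and if the highest-priority machine goes silent past a timeout, the next recently seen machine takes over its processes. This is an illustrative sketch under assumed names, not the presenters' implementation:

```python
class FailoverMonitor:
    """Track heartbeats from each machine in priority order; if the
    current machine goes silent past the timeout, the next recently
    seen machine takes over."""
    def __init__(self, machines, timeout=3.0):
        self.machines = list(machines)        # priority order
        self.timeout = timeout
        self.last_seen = {m: float('-inf') for m in machines}

    def heartbeat(self, machine, now):
        self.last_seen[machine] = now

    def active(self, now):
        for m in self.machines:
            if now - self.last_seen[m] <= self.timeout:
                return m
        return None  # every machine has timed out

mon = FailoverMonitor(['master', 'client-a', 'client-b'])
mon.heartbeat('master', now=10.0)
mon.heartbeat('client-a', now=10.5)
# if 'master' now goes silent, 'client-a' takes over once the timeout passes
```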
Finally, a fully functional project developed in INTUS for a real client will be analyzed, with the techniques previously discussed, in order to show the possibilities and benefits that a network with multiple machines can offer.
Cocolab is a Mexico City-based firm and one of the largest laboratories of art and technology in Latin America that uses TouchDesigner as their primary tool. The firm has been featured in festivals that include Day Night, Mutek MX, Today's Art, and recently SXSW as part of the Digital Art Program.
In this talk Cocolab will present their creative process and will describe their approach for using different technologies and new interactions for producing experiences for larger audiences in exhibitions and museums.
Cocolab uses art as a medium to experiment with and develop new technologies. The technical discoveries they make through artistic experimentation in turn become part of the core software and hardware they use in their exhibitions and installations. In this talk they will present a new interactive and immersive experience created in collaboration with Vincent Houzé, with technical help from Elburz. Houzé and Elburz are two very active collaborators in the TouchDesigner community.
First came Kantan, then came CamSchnappr. What's next? How can you calibrate even more devices together, like cameras, projectors, and VIVE Trackers? Over the past few years Harvey Moon has been working on a number of different ways to calibrate faster and more easily using TouchDesigner. At the beginning of this talk Moon will explain how he calibrated a projector to the VIVE world coordinates by re-building CamSchnappr for single-point input. After this, he will deconstruct CamSchnappr and add a reprojection ('ret') error output. Moon will lead a discussion about the function of OpenCV’s calibrateCamera and how it can be used in different contexts.
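The 'ret' value that calibrateCamera returns is the RMS reprojection error: the root-mean-square pixel distance between the detected image points and the same points re-projected through the fitted camera model. In plain Python:

```python
import math

def rms_reprojection_error(observed, projected):
    """RMS pixel distance between detected image points and points
    re-projected through the fitted camera model: the scalar that
    OpenCV's calibrateCamera returns as its first ('ret') value."""
    assert len(observed) == len(projected)
    squared = [(ox - px) ** 2 + (oy - py) ** 2
               for (ox, oy), (px, py) in zip(observed, projected)]
    return math.sqrt(sum(squared) / len(squared))

# two points, off by 1 px in x and 2 px in y respectively
err = rms_reprojection_error([(100, 100), (200, 150)],
                             [(101, 100), (200, 148)])
```

A low value (well under a pixel for a good calibration) is what tells you the solved intrinsics and extrinsics actually fit the observations.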
Moon will also present a project that uses fiber optics to automatically calibrate a scale city model, with a live demo of four corner calibration on a fiber sensor array. In closing Moon will briefly describe how to calibrate the intrinsics of a camera lens using OpenCV in TouchDesigner in addition to showing a system that uses cameras and structured light to calibrate three projectors. Attendees will be invited to ask questions at the end of the presentation.
This talk will help you get the most out of TouchDesigner’s built-in features and teach you how to develop your application without reinventing the wheel. Participants will learn how to rapidly develop a UI for managing portable and reusable generative content as well as how to automatically import and export content collections. Additionally, attendees will be exposed to several approaches for using ‘presets’. By the end of this presentation you will be able to avoid incompatibility between old content and new features and will discover the practical uses of often overlooked features like tags and comments. Attendees will also learn how to hack components from the palette.
The newly formed Dutch studio, yfx lab explores the overlap between design, science, and technology. Focusing on a data-driven approach, yfx lab investigates the possibilities of building and controlling complex systems and algorithms without losing the ability to manage, adapt, and design.
Using a parametric approach, a bridge is made between technology and design, making it possible to merge the best of both worlds in a harmonic and artistic way. yfx lab's ongoing research explores the creative collaboration between humans and machines, by using a workflow where space is made for unplanned events and happy accidents. Making use of parametric design, GLSL shaders, and machine learning we are slowly shifting our roles from designers to curators.
TouchDesigner is at the heart of this pipeline, and during this talk Roy and Tim Gerritsen will share their process and outline how they approach projects in both the artistic and commercial fields. Finally, they will discuss the technical side of how they build patches that are modular, experimental, and highly customizable.
This talk is for intermediate TouchDesigner users and will focus on generative animation techniques. One of the 12 principles of animation is “overlapping action.” Multiple objects should move at the same time and in slightly different ways. When using only a handful of objects, this effect is easy to achieve using manual keyframing inside the Animation Component, but in situations with a larger or unpredictable number of objects, a generative approach becomes necessary. To overcome these hurdles, David Braun will introduce his GLSL/CHOP easing functions. He’ll explain the powerful concept of phasing and how it can be conveniently parameterized. Phasing can also be saved to an image, which makes it easier to understand and manipulate. Attendees will learn tips for using the Time Component, Blend CHOP, and Alembic files. Whether you’re blending images, morphing particle systems, or moving UI elements, Braun’s generative approach to animating will help you keyframe less and dazzle more. The beginning and end are fixed; how you get there is up to you (and phasing).
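The phasing idea can be sketched in a few lines: give every object the same easing curve but a per-index time offset, so the group exhibits overlapping action instead of moving in lockstep. The function names are illustrative, not Braun's actual component:

```python
def ease_in_out_cubic(t):
    """Cubic ease: slow start, fast middle, slow end; input clamped to [0, 1]."""
    t = max(0.0, min(1.0, t))
    return 4 * t ** 3 if t < 0.5 else 1 - (-2 * t + 2) ** 3 / 2

def phased(t, index, count, overlap=0.5):
    """Offset each object's local time by its index so a group of
    'count' objects staggers its motion across the timeline."""
    delay = (index / max(1, count - 1)) * overlap
    return ease_in_out_cubic(t * (1.0 + overlap) - delay)

# at t=1.0 every object has finished its eased move
positions = [phased(1.0, i, 5) for i in range(5)]
```

Sampled across many objects, these per-index values are exactly what can be baked into an image (one pixel per object per frame), which is what makes phasing easy to inspect and manipulate as a texture.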
Digital media offers a wide range of possibilities that can draw out the relationship between sound and image. With a variety of digital techniques music can be experienced as a total audiovisual phenomenon. This masterclass will teach participants how to use randomness and probability in order to give life to visual content. By analyzing sound features such as frequencies, tempo, and other musical gestures, instructor Mathieu Le Sourd will explain how to create unexpected outcomes using a combination of random actions.
Since Adobe dropped support for Quicktime it has become difficult for studios to encode video assets to use in TouchDesigner or other real-time applications. In this talk Stefan Kraus will demonstrate how to build a custom watchfolder video encoder in TouchDesigner and how to integrate it within an After Effects pipeline. Participants will go home with a working component, After Effects demo project, and a step-by-step tutorial.
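A watchfolder encoder reduces to one core operation: scan the folder, pick up files not seen before, and hand them to the encoder. A minimal sketch of that scan step, with illustrative extensions and file names (the talk's actual component will differ):

```python
import os
import tempfile

def scan_watchfolder(folder, seen, extensions=('.mov', '.avi')):
    """Return files that appeared in the folder since the last scan
    and are candidates for encoding; updates the seen-set in place."""
    new_jobs = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(extensions) and name not in seen:
            seen.add(name)
            new_jobs.append(os.path.join(folder, name))
    return new_jobs

# usage: drop renders into the watched folder; each is picked up once
watch = tempfile.mkdtemp()
open(os.path.join(watch, 'shot01.mov'), 'w').close()
open(os.path.join(watch, 'notes.txt'), 'w').close()  # ignored
seen = set()
jobs = scan_watchfolder(watch, seen)
```

In a real pipeline each returned path would be queued for a Movie File In TOP / Movie File Out TOP encode pass, and the scan would run on a timer.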
In this session Yuki Narumi and Joe Ohara will talk about the TouchDesigner Study Weekend (TDSW) - the largest and most frequently offered TouchDesigner workshop in Japan. TDSW has been held twice a month since April 2018 and to date has offered thirteen workshops, two hackathons, and two meet-ups to over 700 participants. TDSW includes workshops for beginners and addresses a range of topics including audiovisual performances, sensors, Arduino, broadcasting, GLSL, and instruction on other software including Houdini and Substance Designer. TDSW instructors are often TouchDesigner professionals and/or Japanese artists.
TDSW has received attention from a number of Japanese companies including the Japanese interactive agency Dentsu Isobar, Tokyo-based telecommunications company, SoftBank, and entertainment conglomerate, SEGASAMMY. These companies often offer space to use for classrooms, meetups, and hackathons.
TDSW is currently planning several projects including an experimental video competition, the winner of which will have their work exhibited on SEGASAMMY’s huge LED display named TUNNEL TOKYO. Narumi and Ohara are also planning a collaboration around drones, as Japan introduces a big drone race next year.
TDSW is not only a workshop organization. They are continuously planning innovative art projects and frequently experiment with emerging technologies. They are excited to collaborate with artists and designers working with TouchDesigner and ask that you get in touch next time you come to Japan!
In this talk Ronen Tanchum will demonstrate different ways to treat TouchDesigner as a 3D production package for the creation of high-end 360 content. He will focus on the different roles TouchDesigner can take in a traditional production pipeline, including previz, animation, on-set visualization, 3D layout animation, shading, and rendering. Different ways of exporting 3D scenes and simulations from Houdini into TouchDesigner will also be explained. Tanchum will go into detail about how to approach large projects that don’t necessarily have the budget to go fully 3D and how they can be managed by arranging them into executable tasks for a small team. The presenter will describe how he utilizes TouchDesigner’s capacity to render at incredibly high resolutions while saving disk space, rendering time, and turnaround time for clients. Finally, Tanchum will describe how TouchDesigner has assisted on projects in terms of computing power, accessibility, and flexibility.
In a continued effort to collapse the gap between sonic, tactile, and visual systems, in this talk Dave & Gabe will present their approach to developing musical structures and environments. With a focus on leveraging a variety of sensor inputs, the presenters will show how TouchDesigner is central to the composition of their projects. They will demonstrate how the TDAbleton package can be used to drive visuals via MIDI and sounds from Ableton. Dave & Gabe will also explain how to send MIDI sequences out of TouchDesigner while controlling external synth and effects parameters. Every physical object imparts its own characteristics onto an interaction: resistance, momentum, texture. By using these unique qualities artists and designers can produce a range of natural hybrid sonic objects.
Our community’s approaches, tools, and techniques are increasingly valuable to clients connecting with their audiences through digital layers in physical places - from temporary experiential events & activations to permanent installations in architectural contexts.
This session will explore the “emerging field” that we are all working in and creating. Presenter David Bianciardi will share some perspectives from AV&C’s time working at this intersection of digital and physical.
Ecosystem: What is the landscape of interconnected disciplines we operate in? How are our contributions key to the success of this kind of work for our clients and partners? This work is a team sport - how do we fit into this ecosystem as individuals, groups and companies?
Deep Media: We define “media” and “content” to include responsive, data-driven, computationally generated, realtime visual expression. Why is traditional “linear” media not always optimal for these new types of canvases and how is “evergreen” or “deep” media the strategic approach? Why realtime?
Working at Scale: As our approaches go mainstream, how do we scale our techniques and ways of working together? How do we manage the risk of “new” (innovation) with the responsibilities of working in the built environment?