- Discover our new AR Tool
We are proud to announce that the new version of our creative Augmented Reality tool has been launched. With this tool, artists can easily build their own mobile AR experiences. Some examples of these applications were previously shared in this newsletter as part of the live shows of Vincent Höfte, Smitty and Ineffekt.

To refresh your memory: Augmented Reality (AR) is a technology that adds digital elements to the real world, usually via devices such as smartphones or AR glasses. It allows users to interact with virtual objects and experience digital information integrated into their immediate environment. AR has proven its value as an immersive medium. For example, the Gorillaz presented their new single in AR at Times Square, and Coachella collaborated with Snapchat on an AR experience that included art and wayfinding. For many artists, AR can enhance live shows and add props without the need for extensive physical setups.

Although there are several AR tools on the market, we have chosen to build our own – specifically for the context of live shows by emerging musicians. We chose this approach because most existing tools are overly complicated, time-consuming, and do not sufficiently protect the user's intellectual property. Our tool simplifies the creative process with a user-friendly interface designed for artists without prior design experience. In addition, it only stores the data that is strictly necessary, and that data is never reused for commercial purposes.

What's new

Our new OCT AR tool is inspired by the concept of a synthesizer. It allows you to create and arrange scenes in 3D space, where you can place and publish images. Similar to a real synthesizer, it features various oscillators (or parameters) that you can adjust to add effects and movement to the elements within your scenes (see the sketch after the feature lists below). Check it out yourself: https://ar.sprps.dev

Please note that you will need to create the content for the AR application yourself. In the near future, you will also be able to integrate 3D content from existing online libraries, but this feature is not available yet. To explain all the functionalities, we have made a short instructional video: https://vimeo.com/974167199

Current features:
- Create an AR project (scene) that can be viewed with one tap on iOS and Android devices after scanning a QR code; no app download required.
- Create multiple scenes per project/QR code and switch between scenes during your live show.
- Place 2-dimensional objects (planes) in a 3D workspace.
- Add transformations (move, rotate, scale) to an object.
- Add an image texture to an object; supports JPEGs and (transparent) PNGs.
- Add a transparency mask to an object (make objects more or less transparent using a colored background).
- Add animations to an object.
- Stack transformations and animations to create more complex movement.

Planned features:
- Reorder transformations.
- Group objects.
- Support for video/animated GIF textures.
- Import 3D objects.
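To make the idea of a transformation stack concrete, here is a minimal illustrative sketch in plain Python/NumPy. It is not the tool's actual code: it only shows how stacked move, rotate and scale operations can be combined into a single matrix that positions a plane in a 3D workspace.

```python
# Illustrative sketch (not the actual OCT AR tool code): how stacked
# transformations on a 2D plane can be composed into one 4x4 matrix,
# in the order move -> rotate -> scale, as in the tool's transform stack.
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotate_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[0, 0], m[0, 2] = c, s
    m[2, 0], m[2, 2] = -s, c
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# Corners of a unit plane (z = 0), in homogeneous coordinates.
plane = np.array([
    [-0.5, -0.5, 0.0, 1.0],
    [ 0.5, -0.5, 0.0, 1.0],
    [ 0.5,  0.5, 0.0, 1.0],
    [-0.5,  0.5, 0.0, 1.0],
])

# Stack transformations: place the plane 2 m in front of the viewer,
# turn it 30 degrees, and double its size.
transform_stack = [translate(0, 0, -2), rotate_y(np.radians(30)), scale(2, 2, 1)]
combined = np.linalg.multi_dot(transform_stack)

transformed = plane @ combined.T
print(transformed[:, :3])  # world-space corner positions of the plane
```

Adding an animation is conceptually the same: one of the parameters (for example the rotation angle) is updated every frame before the matrices are recombined.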
- Avatars & AI visuals with Jan Modaal
Jan Modaal is a punk “smartlappen” (sentimental Dutch ballad) singer from Amsterdam. He is an expressive artist who highlights social themes in his music and lyrics. On August 24, he will play live at Cinetol in Amsterdam. Below, we give you a sneak peek of the conceptual and technical content of this evening.

As a stage performer, Jan Modaal is the frontman and guitar player in various punk bands. Apart from playing the guitar and singing, Jan has limited opportunities to express himself physically and visually on stage. For this reason, Jan and Open Culture Tech are exploring how we can augment his face and body movement – and translate this into real-time visuals using our own Avatar creator tool. With the latest version of our Avatar creator tool, it is possible to upload yourself as a 3D character (with Polycam) and animate this avatar in real time using a webcam that copies your movement. In our first live show, with OATS, we did this with a motion capture suit, but that approach was too expensive and tech-heavy, so for Jan we developed a webcam feature (a small sketch of this idea follows at the end of this article). In addition, it is possible to place the avatar in a 3D space and adjust the background environment. This 3D creation can be live-streamed and projected on a large screen.

From this technical basis, a concept has been developed for Jan Modaal's live show, called “Jandroid Modaal”. During this show, large curtains are hung at the front of the stage. Jan will stand behind these curtains and perform his show while his movements are recorded by the webcam, which turns them into 3D motion. A livestream of this animated avatar is then projected onto the curtain on stage. The background environment is decorated with videos that Jan Modaal makes with Runway, a popular text-to-video AI tool that allows you to easily generate short videos based on text prompts. In these videos, various “future scenarios” are generated from different “punk” angles: cyberpunk, solarpunk, steampunk, biopunk, and so on.

In our Jandroid Modaal show, the fusion between digital and physical reality is central, while Jan explores the tension between these two worlds. During the live performance, Jan will play with the world in front of the screen and the world behind it. We are not revealing exactly how; you can experience it live on August 24 at 8:00 PM in Cinetol Amsterdam. The use case with Jan Modaal has resulted in a series of new functionalities for our Avatar creator tool, which we will soon release on the Open Culture Tech website.
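For readers who want a feel for the technical side: the sketch below shows, in broad strokes, how a webcam feed can be turned into real-time body landmarks that could drive an avatar rig. It uses OpenCV and MediaPipe Pose as stand-ins; it is not the actual code of our Avatar creator tool.

```python
# Minimal sketch (OpenCV webcam + MediaPipe Pose; not the Open Culture Tech
# Avatar creator code) of turning a live webcam feed into 3D pose landmarks
# that could drive an avatar rig in real time.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks with normalized x/y and relative z; an avatar rig
            # would map these values onto its joints every frame.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose: x={nose.x:.2f} y={nose.y:.2f} z={nose.z:.2f}")
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```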
- Showcase: AR and AI soundscapes
Eveline Ypma is a soundscape artist from Amsterdam and Vincent Höfte is a jazz pianist from The Hague. Together with the Open Culture Tech team, both artists have worked on two unique concepts in which we use our own generative AI and mobile Augmented Reality prototypes to enrich their live performances. In this article, we briefly take you through our journey.

Eveline Ypma & AI samples

Laugarvatn is an existing production by Eveline Ypma, consisting of three parts of 5 minutes each. The performance is named after a place in Iceland where Eveline made several field recordings during her residency. These field recordings form the basis for three soundscapes in which she combines them with live vocals and bass guitar. Together with the Open Culture Tech team, a fourth part of 10 minutes has been created in which the Icelandic field recordings are replaced by AI-generated sound samples in the style of her original recordings. To generate original samples, Eveline played with various text-to-music tools (a kind of ChatGPT for music). During her residency in Iceland, Eveline never saw the Northern Lights, so she decided to use AI to generate unique sound samples based on the prompt “Northern Lights Soundscape”. In this way, Eveline was able to create new music inspired by her journey and add a new piece to her existing work Laugarvatn. The result of the collaboration between Eveline Ypma and Open Culture Tech is not only a beautiful showcase in which generative AI was used to create unique samples for a live performance, but also the first version of our own open-source AI tool that allows anyone to create their own samples based on prompts. If you are curious about the process of creating this tool and want to know more about how this application came about, read the detailed article here. And stay tuned: the Open Culture Tech AI-driven sample tool will be published soon.

Vincent Höfte & mobile AR

Vincent Höfte is a jazz pianist who regularly plays on public pianos at train stations throughout the Netherlands. Together with Open Culture Tech, Vincent has created a short performance in which he plays his own piano pieces while a mobile Augmented Reality filter adds a visual layer to reality. By scanning a QR code with your smartphone, you see colored shapes floating through the train station. These shapes are remixed photos of the station hall itself, creating a mix between the architecture of the station and the repeating shapes in the photos. This show used the first version of our own Augmented Reality app, which we will publish freely and publicly in a few months. If you are curious about the process of creating this application and want to know more about how it was built, read the extensive article here.
- AI and music with Melle Jutte
Melle Jutte, R&B artist and producer from Amsterdam, has always had a curious mind when it comes to new technologies. Together with Open Culture Tech, he dives into an experiment with the aim of composing four to five new songs, using a different AI tool for each song. So far, Melle has written three tracks using three different methods. All these tracks are put into a live show by Melle to investigate how this new way of composing works in a live context.

His first experiment was with Suno, a popular AI tool that generates music. Melle had Suno generate a piece of music and then played it himself on his instruments. Although he put his own spin on it, it mainly felt as if he was imitating someone else's music. The process was technically impressive but artistically less satisfying; it gave him a feeling of limited control, which hindered his creativity. Nevertheless, Melle continues to experiment with Suno to see whether he can ultimately achieve a satisfying result by finding the right balance between the generated basis and his own instrumental influence.

Next, he tried Magenta, an AI tool that generates single MIDI beats and melodies. Despite the interesting possibilities, Melle often found the result dry and random. The generated beats and melodies had little coherence, which meant he had to spend a lot of time adjusting and piecing together the generated material.

The third experiment took him to Udio, a tool similar to Suno. Instead of playing the generated music, Melle split the audio into separate tracks and used the individual tracks as samples (a small sketch of this stem-splitting workflow follows below). This gave him the freedom to play and experiment with sounds in a way that he found very inspiring. Manipulating the samples gave him the opportunity to be truly creative, without feeling limited by the original structure of the generated music.

For the remaining experiments, Melle is curious about the potential of MIDI in a less random setting. He is considering playing with tools such as ChordChord, AIVA and MusicLang, and also wants to explore what he can achieve when writing lyrics with ChatGPT. He is especially curious how these tools can contribute to a more structured and coherent creative process, while still retaining the freedom to follow his own artistic vision.

Melle's research consciously focuses on the artistic potential of generative AI technology. Unlike Eveline Ypma's Open Culture Tech project, he does not pay attention in advance to copyright and terms of use. Melle is aware of the risks and ethical dilemmas associated with the use of AI, but his focus is on freely exploring the artistic possibilities. Reflection on the complications of his creations only follows after the music has been created.
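For those curious about the stem-splitting step: the sketch below illustrates one way to split a generated track into stems, using the open-source Demucs separator. This is an illustration of the general workflow, not necessarily what Melle used in practice, and the file name is hypothetical.

```python
# Illustrative sketch of the "split a generated track into stems and use the
# stems as samples" workflow, using the open-source Demucs separator.
# The input file name is hypothetical.
import subprocess

# Separate a generated track into drums, bass, vocals and other stems.
subprocess.run(
    ["demucs", "--mp3", "-n", "htdemucs", "generated_track.mp3"],
    check=True,
)
# Demucs writes the stems to ./separated/htdemucs/generated_track/,
# where each stem can be loaded into a sampler or DAW as raw material.
```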
- Showcase: Avatars
Smitty is a rapper from Amsterdam who talks to different versions of himself in his music. He talks to his younger self and older self about different aspects of life. In his live shows, he wants to enhance this concept through new technology. Together with Open Culture Tech, Smitty has developed a live show that uses our immersive Avatar technology and mobile Augmented Reality to make this happen. The evening was composed of three parts. The first part consisted of the audience reception, where we used mobile AR to introduce the audience to Smitty's lyrics. The second part consisted of the live show, in which we projected various 3D avatars of Smitty on the white walls of the hall. The third part consisted of a Q&A between the audience, Smitty and members of Open Culture Tech.

The entrance

Upon entry, large QR codes were projected on the screen to access the experience. To highlight the lyrics of Smitty's music, we created an AR experience with the Open Culture Tech AR app. The idea was to create a virtual world in which Smitty's lyrics floated through space. In the AR experience, five different texts by Smitty were spread throughout the room. Visitors could walk through the white, empty space of @droog and view the different texts, in the same way you would at an art exhibition. The AR experience served as a warm-up before the show.

The show

To make the 3D avatars as prominent as possible, we wanted to create the illusion of an LED wall at @droog. An LED wall is a wall of LED screens on which you can play visuals. Such a wall is very expensive and therefore unfeasible for most smaller stages. In addition, LED requires some distance between the audience and the screens to provide a clear image, which is also difficult at many smaller venues. We solved this by installing two projectors of good enough quality to project onto the walls. The projections had to run from the ceiling to the floor, because otherwise it still looks like a normal projection. The projectors were aligned so that they projected onto the walls on either side of the stage, which resulted in minimal shadows from the band on the projections. Various atmospheric images were projected on this back wall to support the show. These atmospheric images were a combination of free stock videos (for example from Pexels) and our own video recordings. After the second song, Smitty's first 3D avatar was introduced on screen. This animated 3D avatar was a younger version of Smitty who slowly turned towards the audience. An older version of Smitty was then shown, and these avatars were edited together. The different avatars, in different animations, built up to an eclectic mix that worked towards a climax. Because we did not want to show the avatars for the entire show, but also wanted to show other atmospheric images, we created a simple VJ setup in TouchDesigner, a software tool with which we could build our own video player. This way, we could control the visuals on the projections with a MIDI controller (a minimal sketch of this idea follows below). Using an HDMI splitter, we could drive both projectors from one laptop. An important condition for using projectors is that there cannot be too much light in the room, because the projections then become less visible. In Smitty's case, the projections provided enough light to illuminate the room. Two small RGB spots and a white spot on Smitty himself were sufficient to properly illuminate the stage.
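As a rough illustration of the VJ idea: the actual show ran on TouchDesigner, but the minimal Python sketch below shows the underlying principle of listening to a MIDI controller and deciding which video clip should be live. The note numbers and clip names are hypothetical.

```python
# The show itself used TouchDesigner; this minimal sketch (hypothetical
# controller mapping and clip names) only shows the underlying idea:
# listen to a MIDI controller and switch which video clip is "live".
import mido

CLIPS = {
    36: "atmosphere_loop.mp4",   # pad 1: ambient visuals
    37: "avatar_young.mp4",      # pad 2: young Smitty avatar
    38: "avatar_old.mp4",        # pad 3: older Smitty avatar
}

def main():
    port_name = mido.get_input_names()[0]  # first connected MIDI device
    print(f"listening on {port_name}")
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                clip = CLIPS.get(msg.note)
                if clip:
                    # In the real setup this would tell the video player /
                    # TouchDesigner network to crossfade to the new clip.
                    print(f"switch to {clip}")

if __name__ == "__main__":
    main()
```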
The Q&A

In addition to music lovers, the audience also included many musicians and fellow rappers. For this group, LED walls, animated avatars and augmented reality are normally not within reach. From the conversations with the audience, it became clear that they found the show, which lasted approximately 45 minutes, impressive. The visuals added a valuable layer and substantively supported Smitty's story. This confirmation is important for Open Culture Tech, as it validates that our technology is usable for the target group. Follow-up agreements have been made with various fellow rappers to investigate how the Open Culture Tech toolkit can be used more broadly within the Amsterdam hip-hop community. To be continued.
- Summary Report
This report is a summary of the results we collected in the first 7 months of the Open Culture Tech project. We surveyed more than 100 artists and other relevant stakeholders from the music industry. We did this through knowledge sessions, guest lectures, workshops at conferences (such as ADE and ESNS), surveys and interviews.

Picture by Calluna Dekkers

It is important for the Open Culture Tech project to map out where the opportunities and challenges lie for emerging artists. This way, we ensure that we develop technology that meets the needs of artists and their audiences. It is also important to share more information with the sector: artists need to know what their options are, even with small budgets and limited technical knowledge, and the broader industry needs to know how it can facilitate artists and how we can ensure that the Dutch music landscape does not fall behind compared to, for example, the American or Chinese music industry.

LINK to full Summary Report
- 3D scanning for music artists
In the search for new technological dimensions, we use 3D scanning for our AR and avatar applications. This is an easy way to create virtual people or objects and apply them in live visuals, AR experiences or elements of hybrid concerts. Artists can, for example, integrate 3D scans of themselves into augmented reality experiences during live shows. This can range from interactive AR elements to full digital replicas that appear next to the artist.

What is 3D scanning?

A 3D scan is a three-dimensional digital scan of a person, object or environment. There are different ways to make 3D scans, for example with the help of LiDAR or photogrammetry. LiDAR technology uses lasers to map the environment and generate accurate 3D models; nowadays, iPhone Pro models already have a built-in LiDAR scanner. Photogrammetry involves taking multiple photos from different angles and combining them to reconstruct a 3D model. This is the method we also use for our 3D scans.

There are various apps that allow you to easily make a 3D scan. We are currently testing Polycam, as it is an easy-to-use app. It is important to mention that, while this app is user-friendly, it does collect data from its users, and the 3D scans you make also become the property of Polycam. We are still looking for alternatives with better terms of use.

How do we apply 3D scans?

Within the Open Culture Tech project, 3D scans are a good way to create digital doubles of the artists we work with. We use these digital doubles, for example, to create animations during live shows or to build avatars. In the case of Smitty, we use the scan to create older and younger versions of him that reflect the narrative of his live performance. Because the scans are very precise, we can do this in great detail. 3D scanning is also an important feature for the development of our AR tool. It offers the ability to add custom elements to the digital world you are creating: think of scanning certain landscapes or objects that are important to you as an artist, to your performance or to the story you want to tell. A small sketch of working with an exported scan follows below.
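As a small, hedged example of what happens after scanning: the sketch below loads a scan exported from an app such as Polycam and converts it for use in another 3D pipeline, using the open-source trimesh library. The file names are hypothetical.

```python
# Minimal sketch (hypothetical file names) of inspecting a 3D scan exported
# from a scanning app such as Polycam and converting it for another pipeline.
import trimesh

scan = trimesh.load("smitty_scan.glb")  # Polycam can export GLB/GLTF files

# A GLB often loads as a Scene; flatten it to a single mesh for inspection.
mesh = scan.dump(concatenate=True) if isinstance(scan, trimesh.Scene) else scan

print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"watertight: {mesh.is_watertight}")

# Export to OBJ so the scan can be imported into Blender, Unreal Engine, etc.
mesh.export("smitty_scan.obj")
```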
- Generative AI & Terms of Use
Let’s start by saying that this article is not an endpoint. It is a step in our ongoing research into developing responsible AI applications for artists. One of these artists is Eveline Ypma, with whom we are organizing a live performance on April 4 in OT301. Together with Eveline, we are investigating the potential of text-to-music AI technology and will share our findings in this article.

Eveline created an EP that combines sampled field recordings from the nature of Iceland with her own vocals and bass guitar. The result is a harmonious 15-minute soundscape. Our challenge was to extend and translate this EP into a 30-45 minute live performance, using generative AI. Together, we decided to experiment with AI tools that can generate similar-sounding field recordings and sound effects that Eveline could use to extend her live performance.

How did we start?

Our goal was to generate new audio files (10-20 seconds) that sounded similar to her own Icelandic music samples. To do so, we started by looking into different ways to generate new music with AI. What AI models are already out there? Which existing tools can we test? And how do we make sure that the technology providers do not take Eveline's data?

First, we conducted a series of experiments with existing AI models. Inspired by Dadabots and their infinite stream of AI-generated death metal, we started working with SampleRNN models. This is an audio-to-audio approach where you upload a music file and get similar music files in return. Unfortunately, we were not quite happy with the results because the output was too noisy. Moreover, the process was very time-consuming and complex. We moved on to Stability AI's Dance Diffusion algorithm. This is also an audio-to-audio system that allows you to create audio samples that sound like your input files. Unfortunately, like the previous model, it produced a lot of noise and was very glitchy. Our aim was to find off-the-shelf AI models that we could immediately use to create a workflow for Eveline – without having to train our own customized AI model. Unfortunately, this turned out to be more difficult than expected.

That's why we decided to change course and look at ready-made AI tools. First, we tried Stability AI's text-to-music application Stable Audio, which creates audio files based on text prompts: a ChatGPT for music. For the first time, we produced AI-generated output that actually sounded like a usable music sample. Still, we could not really use the output: the terms of use prevented us from continuing to use the tool. We also tried Meta's MusicGen and AudioGen, similar prompt-based AI models that allow you to generate music and audio files. Anyone with a Google account can use these models in a Google Colab environment. MusicGen provided us with the best results so far: it generated high-quality audio samples that we could work with right away. Unfortunately, this system had similar terms of use.

Terms of use

In our opinion, the terms of use of too many generative AI music tools are misleading. Although most product websites tell you that you maintain full ownership of your input and output, once you dive into their legal documentation it often becomes clear that you also "sublicense" your work to the AI platform. Technically, you always remain the owner of your input and output, but at the same time you license that work away to someone else. In the case of Eveline Ypma, this is problematic.
Eveline is an artist and she should own the rights to her own creative work. That is why we eventually decided to download the underlying MusicGen AI model from GitHub and run a local version on a private server ourselves. This is possible because Meta published the code open-source on GitHub under an MIT License.

The Open Culture Tech "text-to-music" app

At this moment, we are working together with a front-end developer to build our own text-to-music application on top of the MusicGen AI model. Our goal is to host the underlying AI model on a European server and make sure that we don't save the user's input and output data. In this way, anyone can use the AI technology for free – without having to give away their creative work. We plan to launch this app on April 4 in OT301.
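As an impression of what running MusicGen locally looks like, here is a minimal sketch using Meta's open-source audiocraft library. The prompt follows Eveline's “Northern Lights” example; the exact setup of our own app may differ.

```python
# Minimal sketch of generating a sample locally with Meta's open-source
# audiocraft library (the model our text-to-music app builds on). Running it
# on your own machine means prompts and output never leave your server.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=15)  # sample length in seconds

prompts = ["Northern Lights soundscape, icy wind, distant shimmering drones"]
wav = model.generate(prompts)  # tensor of shape [batch, channels, samples]

for idx, one_wav in enumerate(wav):
    # Writes northern_lights_0.wav, loudness-normalized.
    audio_write(f"northern_lights_{idx}", one_wav.cpu(), model.sample_rate,
                strategy="loudness")
```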
- ESNS 2024 Recap
Last week was ESNS 2024, the international music conference and showcase festival where more than 350 European artists perform annually, attracting more than 40,000 visitors. Open Culture Tech was present at ESNS 2024 to talk backstage with artists about their views and opinions on new technology such as AI, Augmented Reality and Avatars. The Open Culture Tech team was also invited by Effenaar Lab to participate in a panel discussion about the value of immersive technology for live music. In this article, we share the most interesting backstage conversations, experiences and conclusions from our panel.

Picture by Calluna Dekkers

The ESNS conference takes place every year in the Oosterpoort, one of the music halls in Groningen. But that is not where most artists hang out: you will find them in the Artist Village, a backstage tent near the conference. The Open Culture Tech team stood in the middle of the Artist Village with various prototypes, which led to many different conversations, discussions and brainstorms with European artists.

The first conclusion is that few artists work with new technology such as AI or AR. They don't have time for it, it is often too expensive, or they don't know where to start. Even a lighting plan was a luxury for many artists. Most artists were not afraid of AI taking over their jobs as live performers on stage, but many were skeptical about the creative possibilities of technological tools, particularly because of the risk of losing privacy and intellectual property.

The conversations with artists changed the moment we asked them to come up with applications themselves, regardless of budget or technical limitations. What if they could do anything they wanted? These brainstorms resulted in a lot of interesting input that we try to incorporate as features in our Open Culture Tech toolkit – such as a control panel (for example via MIDI or a foot pedal) to control visuals, or a simple application to translate 2D images (for example album artwork) into a three-dimensional version that you can animate for a video or place in Augmented Reality. In addition to the artists who had little need for immersive technology such as AI or AR, there were many artists who did. The "what if" brainstorms show that many artists would like to experiment with new technology in their live shows but do not have the resources to do so.

There were also many interesting conversations with artists who are already working with new technology, such as the Polish band Artificialice. They use, among other things, 3D scans (LiDAR) of themselves and their bedrooms and incorporate these into their artwork. Or the German band Wax Museum: with their latest album, they also release an 8-bit version of their music and an online video game in which the band members appear as game characters. In the game, the player must look for a stolen disco ball. This story could lend itself very well to a mobile AR experience during their concert, in which the audience jointly looks for a hidden disco ball in the hall – a virtual quest like in Pokémon Go. 'Hiding' 3D objects is therefore a feature that we will certainly investigate for the Open Culture Tech AR app.

Picture by Casper Maas

The Open Culture Tech team was also invited to a panel organized by Effenaar Lab. With the Hybrid Music Vibes program, Effenaar Lab offers talents the opportunity to investigate how new technologies can contribute to their artistry.
The panel included Julia Sabaté and De Toegift as participants in the Hybrid Music Vibes program, their manager, and Joost de Boo on behalf of Open Culture Tech. The panel discussed the importance of crossover collaborations, education and sharing experiences. Effenaar Lab's Hybrid Music Vibes program is an inspiring program in which participants conduct valuable experiments that we would like to share in this newsletter.

Read more about Julia Sabaté: https://www.effenaar.nl/hybrid-music-vibes-2-julia-sabate
Read more about De Toegift: https://www.effenaar.nl/hybrid-music-vibes-2-de-toegift
- How we co-create our live show concepts
In order to create the most useful emerging-tech toolbox, the Open Culture Tech team works closely together with 12 selected artists. Every artist has the unique opportunity to develop a live show for which we build unique technology. Ultimately, we publish this entire technology toolbox on our website, so that every artist in the Netherlands has the same opportunity to experiment with the tools. In this article, we'll tell you all about our process and progress.

Moodboard for the Smitty live show. Sources: Harry Potter, Bruce Almighty, The Matrix, Spongebob, 2001: A Space Odyssey

The process of designing the creative concepts for the 12 live shows is delicate. On the one hand, we have to deal with the ever-growing range of new technologies and possibilities; on the other hand, we need to make sure that the artist is strengthened by this technology and remains the centerpiece of the performance. It is important to note that Open Culture Tech does not research whether the latest AI, AR or Avatar technology works for live artists. Many companies are already proving that, such as WaveXR. Instead, Open Culture Tech is about whether these technologies can also work for emerging artists, who don't have the means to work with complex or expensive software. In addition, we want to ensure that the artist's autonomy is always guaranteed and that the technology is an addition rather than a replacement.

The development of our 12 shows and 12 tools follows our own Open Culture Tech method, which consists of a number of steps. The first step is to come up with an overarching concept together with the individual artists. Based on this concept, we decide if and how we need AI, AR or Avatar technology. The second step is to create and test first prototypes with our tech partners. The third step is to create a final prototype – based on the feedback and input from step two – that can be used on stage during the actual show. To provide more insight into this creative process, we will explain how the concept-building process works by going over two examples that we are currently working on.

Smitty

Smitty is a rapper and producer who just released his new EP. In his music, he often shows two sides of himself that frequently contradict each other: tough and vulnerable, adult and child, hope and sorrow. During his live show, he wants to emphasize this duality in both his music and his visuals. We have been going over different possibilities to bring out this overall concept. To display the duality of Smitty's mind, we can create multiple avatars representing different versions of the artist. We can use AI or 3D technology to create images of Smitty in different phases of his life, maybe as a child or as an old man. We can also use animation to create moving images and let the avatars interact with Smitty. He could literally talk with different versions of himself. The weight of Smitty's music is in his lyrics. To emphasize this, we want to highlight the most important words by leveraging new technology. For example, mobile AR could be used to add a virtual layer to the show and let the words flow in 3D. Another important starting point was the use of color and space. Smitty would love to have a stage in the middle of a white venue, so that everyone can stand around him. Perhaps an art gallery or empty warehouse with white walls and video projections. This white space is inspired by the "void rooms" often used in Hollywood productions such as The Matrix or even Spongebob.
It transports the audience into Smitty's mind, a canvas in which we are all stuck together.

Sophie Jurrjens

Sophie Jurrjens is a person of many talents. She is a pianist, composer and producer, and also works as a DJ. In the past years, she has developed an app called Off-Track with music walking tours throughout Amsterdam, for which she created all the music herself. The walking tours evolved from the idea that by adding music to a walking route, going for a walk is transformed into an experience. Sophie wants to translate this idea into a live DJ set, where visitors start with a walking tour in Amsterdam North and finish at a fixed location where she is performing as a live DJ. The keyword in our creative process is "grandiosity". How can we add interactive or visual elements to the walking tour to create a grandiose atmosphere? The answer turned out to be mobile augmented reality. By using mobile AR, visitors can experience virtual objects that appear in 3D during the walking tour. These virtual objects can return during the final live show. The music tour starts at the NDSM Werf in Amsterdam North and should finish at a venue located alongside the IJ. In this way, we can use the IJ as a backdrop for the mobile AR filter and create large, grandiose 3D objects. In addition, we will add light tubes or aesthetic lamps to the physical stage design. The shapes of the lamps will also be translated into 3D objects to bring together the virtual and physical world.

Our next step is to create and test the first basic prototypes together with our tech partners Reblika and Superposition, and to plan the actual live shows. Keep an eye on our website for the first results and exact show dates.
- Behind the Scenes
At Open Culture Tech, we are developing our own open-source and easy-to-use AI, Augmented Reality and Avatar creation tools for live music. Open Culture Tech issued an Open Call in the spring of 2023, through which musicians in the Netherlands could join our development programme. From a large number of applications, 10 diverse artists were selected, from punk to R&B and from folk to EDM, each with a unique research question that could be answered by applying new technology. Questions such as: "I've put my heart and soul into creating an EP that's 15 minutes long. I want to perform this new work live on stage, but I need at least 30 minutes of music to create a full show." Or: "How can I enhance the interaction between me and the audience when I play guitar on stage?" Together with these artists, we (the Open Culture Tech team) come up with original answers to their questions, which we translate into open-source AI, AR or Avatar technology – with easy-to-use interfaces. Each prototype solution is then tested during a live pilot show. After 10 shows, we will have tested various prototypes that we will combine into one toolkit. In this way, we aim to make the latest technology more accessible to every artist. Below, we share the results of our first two prototypes and live pilot shows.

OATS × Avatars

The first Open Culture Tech pilot show was created in close collaboration with OATS. Merging punk and jazz, OATS are establishing themselves through compelling and powerful live shows. Their question was: "How can we translate the lead singer's expression into real-time visuals on stage?" To answer this question, we decided to build and test the first version of our own Avatar creation tool, with the help of Reblika. Unreal Engine is a global industry standard for real-time 3D, used by many major companies in the music, gaming and film industries; its learning curve is steep and the prices for experts are high. Reblika is a Rotterdam-based 3D company with years of experience in creating hi-res avatars for the creative industry, and is currently developing its own avatar creator tool – built on Unreal Engine – called Reblium. For Open Culture Tech, Reblika is developing a free, stand-alone, open-source edition with an easy-to-use interface, aimed at helping live musicians.

The master plan was to capture the body movement of the lead singer (Jacob Clausen) with a motion capture suit and link the signal to a 3D avatar in a 3D environment that could be projected live on stage. In this way, we could experience what it's like to use avatars on stage and find out what functionalities our Avatar creation tool would need. In this case, the aesthetic had to be dystopian, alienating and glitchy. Our first step was to create a workflow for finding the right 3D avatar and 3D environment. OATS preferred a gloomy character in a hazmat suit, moving through an abandoned factory building. We decided to use the Unreal Engine Marketplace, a website that offers ready-made 3D models. To create the 3D environment, Jacob Clausen used a tool called Polycam to scan an abandoned industrial area. Polycam is an easy-to-use 3D-scanning app that uses technologies such as LiDAR and photogrammetry: it allows you to scan any physical object or space and render it into a 3D model. The 3D scan (the factory) and the avatar (the hazmat suit) were imported into Unreal Engine, and the avatar was connected to a motion capture suit.
This allowed Jacob Clausen to become the main character on screen and test the experience live on stage at Popronde in EKKO in Utrecht, on 19 October at 23:30. What followed was a show that taught us a lot. The venue provided us with a standard projector and a white screen behind the stage. Due to an overactive smoke machine, an unstable internet connection and the low-res projector, the avatar was not always visible on screen. Nevertheless, there were certainly moments where everything came together. At those moments, the synchronization between Jacob and his avatar was super interesting, the storytelling was amazing and the technology showed a lot of potential. The motion capture suit was very expensive and we had to borrow it from Reblika, which is not very sustainable, accessible or inclusive. For our next prototype, we will look at motion capture AI technology, such as Rokoko Vision, instead of suits. The 3D avatar and environment were shown from different camera angles. To make this possible, someone had to keep changing the camera angle in real time within the Unreal Engine software. Going forward, we should add predefined camera angles, so that you don't need an extra person to control the visuals.

Ineffekt × AR

The second use case of Open Culture Tech was provided by Ineffekt. Through a blend of glistening vocal pieces, strings of dreamy melodies and distinctive rhythms, Ineffekt cleverly takes on a sound that feels both comfortable and elusive. Ineffekt's question was: "How can I translate my album artwork into a virtual experience that could transform any location into an immersive video clip?" To answer this question, we decided to build and test the first version of our own AR creation tool, with the help of Superposition, an innovative design studio for interactive experiences. For his latest album artwork and music video, Ineffekt used a 3D model of a greenhouse in which yellow organisms are growing. This 3D model formed the basis for the AR experience we tested during the Amsterdam Dance Event. Our main goal was to create and test an intimate mobile AR experience built with open-source 3D technologies. This meant that we could not use popular AR tools like Spark AR (Meta), Snap AR (Snapchat) or ARCore (Google). In our first experiment, Blender was used to create a hi-res 3D model, and WebXR was used to translate this model into a mobile Augmented Reality application. Superposition also decided to experiment with App Clips on iOS and Play Instant on Android. These techniques allow you to open a mobile application – after scanning a QR code – in your browser, without downloading the actual app (a small sketch of generating such a QR code follows below). On October 20 and 21, we tested our first AR prototype in front of Black & Gold in Amsterdam, during ADE. After scanning the QR code on a poster, the audience was taken to a mobile website that explained the project. Then the camera of your phone would switch on and you would see the yellow plants and fungus grow around you. In the back, someone was sitting quietly: a silent avatar. The overall experience was poetic and intimate. As with OATS, we learned a lot. It is possible to create an intimate and valuable experience with mobile Augmented Reality technology, and it is possible to do so with open-source technology. However, the experience was static and did not react to the presence of the viewer. Going forward, we should look into the possibilities of adding interactive elements.
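As a small aside on the QR-code entry point: the sketch below shows how such a poster QR code could be generated with the Python qrcode library. The URL is illustrative, not the address of the actual experience.

```python
# Minimal sketch of generating the poster QR code that opens a browser-based
# AR experience (no app download). The URL is illustrative, not the real one.
import qrcode

img = qrcode.make("https://ar.sprps.dev/ineffekt-greenhouse")
img.save("ade_poster_qr.png")  # print this on the poster at the venue
```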
Our ultimate goal is to develop accessible AI, AR and Avatar creation tools that any musician can use without our support. In the above examples, this was not yet the case: we have mainly tested the workflow of existing tools and not yet created our own. Going forward, we will start building and testing our own software interfaces and let artists create their own AI, AR and Avatar experiences from scratch. In this way, we hope to ensure that up-and-coming musicians also get the opportunities and resources to work with the latest technology. This article was also published on MUSIC x on Thursday, 16 November 2023.
- The Life of Fi
Virtual influencers and artists have captured our imagination for years: animated characters, sometimes even AI-generated, that sing songs, dance and occasionally perform on stage. Well-known examples are the Japanese Hatsune Miku and the American Lil Miquela, who has even made it to Coachella. The team behind Open Culture Tech consists of makers who already have a lot of experience in building avatars and in looking for responsible applications of this emerging technology. Building virtual characters involves a lot of technology, but it also raises many ethical questions. Within Open Culture Tech, Thunderboom Records and Reblika are working on the development of the Avatar tool. During the OATS live show, an initial experiment was conducted with projecting an avatar that copied the lead singer's movements in real time, and more shows will test similar technology in the coming months. But prior to Open Culture Tech, Thunderboom Records and Reblika had already built a virtual artist called Fi.

Fi, the fluid artist

Fi is a virtual artist who comes to life on Instagram and SoundCloud. The appearance, life story and music were partly generated with AI and developed by a diverse team of designers. Fi was founded on the idea that the vast majority of existing virtual artists and influencers have little added value, other than serving as a marketing tool for major brands. Most virtual characters also promote a problematic beauty ideal and are stereotypical. This is reinforced by the functionalities of free 3D avatar creator tools such as DAZ Studio or Magic AI avatar apps like Lensa. The default 3D bodies in DAZ Studio are female bodies with thin waists and large breasts and buttocks. Free users can choose from different types of seductive facial expressions, poses and outfits, and can even customize the genitals. The standard male body, on the other hand, is muscular, looks tough and has no penis. These sexist stereotypes are also reflected in mobile apps such as Lensa, which generate avatars from portrait photos: it was almost impossible for a journalist at the MIT Technology Review not to generate sexist avatars.

In response to this status quo, Thunderboom Records and Reblika have attempted to create a virtual artist that goes beyond these stereotypes and makes innovative use of the possibilities of avatar technology. The concept behind Fi is that Fi can be anyone and doesn't have to conform to one type of appearance. Fi is virtual and therefore fluid. As a virtual character, Fi could take different forms, combine different genres and go beyond stereotypes. In addition, it was important that Fi did not become a virtual artist who would replace human artists. Fi must above all collaborate with and help human artists move forward.

DIY

The concept of Fi has been translated into a story about a fictional character who was born as a star in the universe and came to earth to become an artist. This star does not want to adopt a fixed appearance: Fi chooses a different appearance every 3 months that combines characteristics of inspiring artists. The starting point was an AI-driven mix between Prince and Jin from BTS. After that, Fi became a mix between Madonna and Dua Lipa. Various techniques are used to create online content for virtual influencers such as Fi and Lil Miquela. The first step is to take photos of a human body double (an actor) to use as base material. The photo is then recreated in 3D and merged with the original into one final image in Photoshop.
To ensure that Fi's story was not appropriated by Thunderboom Records or Reblika, the body double became the lead for the story: on the day of the photo shoot, he or she could decide what happened to Fi.

Sustainability

But unfortunately, Fi was not sustainable. Regularly creating, posting and maintaining Fi's online content took so much work that most of the time was spent managing a social media account rather than creating an interesting story. The added value for human musicians was also limited, because the production was too time-intensive and therefore too expensive. The enormous potential of avatars has already been proven by major artists such as Gorillaz or Travis Scott, but it remains a challenge to create avatars that complement emerging artists. For this reason, Fi no longer publishes online content, and Thunderboom Records is working with Reblika on the Open Culture Tech avatar tool, with which every Dutch artist can easily create avatar content themselves and investigate its potential.

The most important lesson we learned from Fi is that the avatars themselves are not the core; it is always about the musician who works with them. In the Fi project, too much time and effort was invested in the virtual character, leaving too little for the human musician. We are currently organizing several live performances with emerging artists to explore the possibilities and added value of avatar technology. An example is Smitty, an Amsterdam rapper who constantly talks to other versions of himself in his music. We will explore how we can use projected avatars of his younger and older selves on stage to emphasize this story. What if he could literally talk to another version of himself?