
  • Case Study: Immersive Avatars

    Smitty is a rapper from Amsterdam who talks to different versions of himself in his music: he addresses his younger self and his older self about different aspects of life. In his live shows he wants to enhance this concept through new technology. Together with Open Culture Tech, Smitty has developed a live show that uses our immersive Avatar technology and mobile Augmented Reality to make this happen. The evening was composed of three parts. The first part was the audience reception, where we used mobile AR to introduce the audience to Smitty's lyrics. The second part was the live show, in which we projected various 3D avatars of Smitty on the white walls of the hall. The third part was a Q&A between the audience, Smitty and members of Open Culture Tech.

The entrance
Upon entry, large QR codes were projected on the screen to access the experience. To highlight the lyrics of Smitty's music, we created an AR experience with the Open Culture Tech AR app. The idea was to create a virtual world in which Smitty's lyrics floated through space. In the AR experience, five different texts by Smitty were spread throughout the room. Visitors could walk through the white, empty space of @droog and view the different texts, in the same way as you would at an art exhibition. The AR experience served as a warm-up for the show.

The show
To make the 3D avatars as prominent as possible, we wanted to create the illusion of an LED wall at @droog. An LED wall is a wall of LED screens on which you can play visuals. Such a wall is very expensive and therefore unfeasible for most smaller stages. In addition, LED requires some distance between the audience and the screens to produce a clear image, which is also difficult in many smaller venues. We solved this by installing two projectors of good enough quality to project onto the walls. The projections had to run from the ceiling to the floor, because otherwise it still looks like a normal projection. The projectors were aligned so that they projected onto the walls on either side of the stage, which resulted in minimal shadows from the band on the projections. Various atmospheric images were projected onto these walls to support the show: a combination of free stock videos (from Pexels, for example) and our own video recordings. After the second song, Smitty's first 3D avatar was introduced on screen. This animated 3D avatar was a younger version of Smitty who slowly turned towards the audience. An older version of Smitty was then shown, and these avatars were edited together. The different avatars, in different animations, built up to an eclectic mix that worked towards a climax. Because we did not want to show the avatars for the entire show, but also wanted to show other atmospheric images, we created a simple VJ setup in TouchDesigner, a software tool with which we could build our own video player. This way we could control the visuals on the projections with a MIDI controller, and with an HDMI splitter we could drive both projectors from one laptop. An important condition for using projectors is that there cannot be too much light in the room, because the projections then become less visible. In Smitty's case, the projections provided enough light to illuminate the room; two small RGB spots and a white spot on Smitty himself were sufficient to properly light the stage.
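The VJ setup described above boils down to mapping MIDI input to visuals. The sketch below is not the TouchDesigner project used at the show (TouchDesigner is configured through its own interface); it only illustrates the underlying idea with the open-source mido library, and the port, note numbers and file names are placeholders.

```python
# Minimal sketch: map pads on a MIDI controller to video clips for a VJ setup.
# This is NOT the TouchDesigner project used at the show; it only illustrates the
# idea of "one MIDI message -> one visual" with the open-source mido library.
# The note numbers and file names below are placeholders.

import mido

# Hypothetical mapping from controller pads (MIDI note numbers) to clip files.
CLIPS = {
    36: "visuals/atmosphere_ice.mp4",
    37: "visuals/avatar_young_smitty.mp4",
    38: "visuals/avatar_old_smitty.mp4",
    39: "visuals/climax_mix.mp4",
}

def handle(message, play_clip):
    """Trigger a clip on pad press; use a CC fader (here CC 1) for brightness."""
    if message.type == "note_on" and message.velocity > 0:
        clip = CLIPS.get(message.note)
        if clip:
            play_clip(clip)
    elif message.type == "control_change" and message.control == 1:
        print(f"brightness -> {message.value / 127:.2f}")

if __name__ == "__main__":
    # Opens the default MIDI input; pick a specific controller via
    # mido.get_input_names() if needed.
    with mido.open_input() as port:
        for msg in port:
            handle(msg, play_clip=lambda path: print(f"now playing: {path}"))
```

In the real setup, the "play_clip" step is what the TouchDesigner video player takes care of; the sketch only shows how little logic the MIDI mapping itself requires.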
The Q&A
In addition to music lovers, the audience also included many musicians and fellow rappers of Smitty's. For this group, LED walls, animated avatars and augmented reality are normally not within reach. From the conversations with the audience it became clear that they found the show, which lasted approximately 45 minutes, impressive. The visuals added a valuable layer and supported the content of Smitty's story. This confirmation is important for the progress of Open Culture Tech, because it validates that our technology is usable for the target group. Follow-up agreements have been made with various fellow rappers to investigate how the Open Culture Tech toolkit can be used more broadly within the Amsterdam hip-hop community. To be continued.

  • Case study: AI, AR, pianos and soundscapes

    Eveline Ypma is a soundscape artist from Amsterdam and Vincent Höfte is a jazz pianist from The Hague. Together with the Open Culture Tech team, both artists have worked on two unique concepts in which we use our own generative AI and mobile Augmented Reality prototypes to enrich their live performances. In this article we briefly take you through our journey.

Eveline Ypma & AI samples
Laugarvatn is an existing production by Eveline Ypma, consisting of three parts of 5 minutes each. The performance is named after a place where Eveline made several field recordings during her residency in Iceland. These field recordings form the basis for three soundscapes in which she combines them with live vocals and bass guitar. Together with the Open Culture Tech team, a fourth part of 10 minutes was created in which the Icelandic field recordings were replaced by AI-generated sound samples in the style of her original recordings. To generate original samples, Eveline played with various text-to-music tools (a kind of ChatGPT for music). During her residency in Iceland, Eveline never saw the Northern Lights, so she decided to use AI to generate unique sound samples based on the prompt “Northern Lights Soundscape”. In this way, Eveline was able to create new music inspired by her journey and add a piece to her existing work Laugarvatn. The result of the collaboration between Eveline Ypma and Open Culture Tech is not only a beautiful showcase in which we used generative AI to generate unique samples for a live performance, but also the first version of our own open-source AI tool that allows anyone to create their own samples based on prompts. If you are curious about the process of creating this tool and want to know more about how this application came about, read the detailed article here. And stay tuned: the Open Culture Tech AI-driven sample tool will be published soon.

Vincent Höfte & mobile AR
Vincent Höfte is a jazz pianist who regularly plays on public pianos at train stations throughout the Netherlands. Together with Open Culture Tech, Vincent has created a short performance in which he plays his own piano pieces while a mobile Augmented Reality filter adds a visual layer to reality. By scanning a QR code with your smartphone, you see colored shapes floating through the train station. These shapes are remixed photos of the station hall itself, creating a mix between the architecture of the station and the repeating shapes in the photos. This show used the first version of our own Augmented Reality app, which we will publish freely and publicly in a few months. If you are curious about the process of creating this application, read the extensive article here.

  • Generative AI & Terms of Use

    Let’s start by saying that this article is not an endpoint. It is a step in our ongoing research into developing responsible AI applications for artists. One of these artists is Eveline Ypma, with whom we are organizing a live performance on April 4 in OT301. Together with Eveline, we are investigating the potential of text-to-music AI technology, and we share our findings in this article. Eveline created an EP that combines sampled field recordings from the Icelandic landscape with her own vocals and bass guitar. The result is a harmonious 15-minute soundscape. Our challenge was to extend and translate this EP into a 30-45 minute live performance using generative AI. Together, we decided to experiment with AI tools that can generate similar-sounding field recordings and sound effects that Eveline could use to extend her live performance.

How did we start?
Our goal was to generate new audio files (10-20 seconds) that sounded similar to her own Icelandic music samples. To do so, we started by looking into different ways to generate new music with AI. What AI models are already out there? Which existing tools can we test? And how do we make sure that the technology providers do not take Eveline's data? First, we conducted a series of experiments with existing AI models. Inspired by Dadabots and their infinite stream of AI-generated death metal, we started working with SampleRNN models. This is an audio-to-audio approach where you upload a music file and get similar music files in return. Unfortunately, we were not happy with the results because the output was too noisy. Moreover, the process was very time-consuming and complex. We moved on to Dance Diffusion, Stability AI's audio diffusion algorithm. This is also an audio-to-audio system that allows you to create audio samples that sound like your input files. Unfortunately, like the previous model, this one also produced a lot of noise and was very glitchy. Our aim was to find off-the-shelf AI models that we could immediately use to create a workflow for Eveline, without having to train our own customized AI model. But unfortunately, this turned out to be more difficult than expected. That's why we decided to change course and look at ready-made AI tools. First, we tried Stability AI's text-to-music application Stable Audio, which creates audio files based on text prompts: a ChatGPT for music. For the first time, we produced AI-generated output that actually sounded like a usable music sample. Still, we could not really use the output: the terms of use prevented us from continuing to use the tool. We also tried Meta's MusicGen and AudioGen, two similar prompt-based AI models that generate music and audio files. Anyone with a Google account can use these models in a Google Colab environment. MusicGen provided us with the best results so far: it generated high-quality audio samples that we could work with right away. Unfortunately, this system came with similar terms of use.

Terms of use
In our opinion, the terms of use of too many generative AI music tools are misleading. Although most product websites tell you that you maintain full ownership of your input and output, once you dive into their legal documentation it often becomes clear that you also "sublicense" your work to the AI platform. Technically, you remain the owner of your input and output, but you also grant the platform far-reaching rights to use it. In the case of Eveline Ypma, this is problematic.
Eveline is an artist and she should own the rights to her own creative work. That is why we eventually decided to download the underlying MusicGen AI model from GitHub and host a local version on a private server ourselves. This is possible because Meta published the code open-source on GitHub under an MIT License.

The Open Culture Tech "text-to-music" app
At this moment, we are working together with a front-end developer to build our own text-to-music application on top of the MusicGen AI model. Our goal is to host the underlying AI model on a European server and make sure that we don't save the user's input and output data. In this way, anyone can use the AI technology for free, without having to give away their creative work. We plan to launch this app on April 4 in OT301.
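To give an impression of what running MusicGen locally looks like, here is a minimal sketch using Meta's open-source audiocraft library. It is not the Open Culture Tech app itself, and the prompt, model size and duration are illustrative; the point is that the prompt and the generated audio never leave your own machine or server.

```python
# Rough sketch of running MusicGen locally with Meta's open-source audiocraft
# library. This is not the Open Culture Tech app itself, just the underlying idea:
# prompt in, audio sample out, with nothing sent to a third-party platform.
# Prompt, model size and duration are illustrative.

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# The small checkpoint keeps memory needs modest; larger checkpoints sound better.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=15)  # seconds per generated sample

prompts = ["Northern Lights soundscape, icy wind, distant drones"]  # illustrative prompt
wavs = model.generate(prompts)  # tensor of shape [batch, channels, samples]

for i, wav in enumerate(wavs):
    # Writes e.g. sample_0.wav at the model's native sample rate.
    audio_write(f"sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```

The app described above essentially puts a simple web interface in front of this kind of call, hosted on a server we control, so that no input or output data needs to be stored.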

  • Summary Report

    This report is a summary of the results we collected in the first 7 months of the Open Culture Tech project. We surveyed more than 100 artists and other relevant stakeholders from the music industry. We did this through knowledge sessions, guest lectures, workshops at conferences (such as ADE and ESNS), surveys and interviews.

Picture by Calluna Dekkers

It is important for the Open Culture Tech project to map out where the opportunities and challenges lie for emerging artists. This way we ensure that we develop technology that meets the needs of artists and their audiences. It is also important to share more information with the sector: artists need to know what their options are, even with small budgets and limited technical knowledge, and the broader industry needs to know how it can facilitate artists and how we can ensure that the Dutch music landscape does not fall behind the American or Chinese music industry, for example. LINK to full Summary Report

  • 3D scanning for music artists

    In the search for new technological dimensions, we use 3D scanning for our AR and avatar applications. It is an easy way to create virtual people or objects and apply them in live visuals, AR experiences or hybrid concerts. Artists can integrate 3D scans of themselves into augmented reality experiences during live shows. This can range from interactive AR elements to full digital replicas that appear next to the artist.

What is 3D scanning?
A 3D scan is a three-dimensional digital scan of a person, object or environment. There are different ways to make 3D scans, for example with LiDAR or photogrammetry. LiDAR technology uses lasers to map the environment and generate accurate 3D models; nowadays, iPhone Pro models already have a built-in LiDAR scanner. Photogrammetry involves taking multiple photos from different angles and combining them to reconstruct a 3D model. This is the method we use for our 3D scans. There are various apps that allow you to easily make a 3D scan. We are currently testing Polycam because it is an easy-to-use app. It is important to mention, however, that the app collects data from its users and that the 3D scans you make also become the property of Polycam. We are still looking for alternatives with better terms of use.

How do we apply 3D scans?
Within the Open Culture Tech project, 3D scans are a good way to create digital doubles of the artists we work with. We use these digital doubles, for example, to create animations during live shows or to build avatars. In the case of Smitty, for example, we use the scan to create older and younger versions of him that reflect the narrative of his live performance. The scans are very precise, which allows us to do this in detail. 3D scanning is also an important feature in the development of our AR tool. It offers the ability to add custom elements to the digital world you are creating: think of scanning landscapes or objects that are important to you as an artist, to your performance or to the story you want to tell.
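Once you have a scan, the next step is usually to check it and convert it for the pipeline you are working in. As an illustration only, here is a small sketch that loads a scan exported from an app such as Polycam (assumed here as a .glb file, a common export format) with the open-source trimesh library, prints a few basic properties and re-exports it; the file names are placeholders.

```python
# Quick sketch: inspect a photogrammetry/LiDAR scan export (assumed .glb) with the
# open-source trimesh library, then re-export it for use in an AR or avatar pipeline.
# File names are placeholders.

import trimesh

scan = trimesh.load("smitty_scan.glb")  # hypothetical scan export

# A .glb usually loads as a Scene; merge its geometries into one mesh for inspection.
if isinstance(scan, trimesh.Scene):
    mesh = trimesh.util.concatenate(tuple(scan.geometry.values()))
else:
    mesh = scan

print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"bounding box extents: {mesh.extents}")
print(f"watertight: {mesh.is_watertight}")  # holes are common in real-world scans

# Export to OBJ, which most 3D and game-engine tools can import.
mesh.export("smitty_scan.obj")
```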

  • ESNS 2024 Recap

    Last week was ESNS 2024, the international music conference and showcase festival where more than 350 European artists perform annually and which attracts more than 40,000 visitors. Open Culture Tech was present at ESNS 2024 to discuss views and opinions on new technology such as AI, Augmented Reality and Avatars with artists backstage. The Open Culture Tech team was also invited by Effenaar Lab to participate in a panel discussion about the value of immersive technology for live music. In this article we share the most interesting backstage conversations, experiences and conclusions from our panel.

Picture by Calluna Dekkers

The ESNS conference takes place every year in the Oosterpoort, one of the music halls in Groningen. But that's not where most artists hang out; you will find them in the Artist Village, a backstage tent near the conference. The Open Culture Tech team stood in the middle of the Artist Village with various prototypes, which led to many different conversations, discussions and brainstorms with European artists. The first conclusion is that few artists work with new technology such as AI or AR. They don't have time for it, it is often too expensive or they don't know where to start. Even a lighting plan was a luxury for many artists. Most artists were not afraid of AI taking over their jobs as live performers on stage, but many were skeptical about the creative possibilities of technological tools, particularly because of the risk of losing privacy and intellectual property. The conversations with artists changed the moment we asked them to come up with applications themselves, regardless of budget or technical limitations. What if they could do anything they wanted? These brainstorms resulted in a lot of interesting input that we try to incorporate as features in our Open Culture Tech toolkit, such as a control panel (for example via MIDI or foot pedal) to control visuals, or a simple application to translate 2D images (for example album artwork) into a three-dimensional version that you can animate for a video or place in Augmented Reality. In addition to the artists who had little need for immersive technology such as AI or AR, there were many artists who did. The “what if” brainstorms show that many artists would like to experiment with new technology in their live shows but do not have the resources to do so. There were also many interesting conversations with artists who are already working with new technology, such as the Polish band Artificialice. They use, among other things, 3D scans (LiDAR) of themselves and their bedrooms and incorporate these into their artwork. Or the German act Wax Museum: with their latest album they also released an 8-bit version of their music and an online video game in which the band members appear as game characters. In the game, the player must look for a stolen disco ball. This story could lend itself very well to a mobile AR experience during their concert, in which the audience jointly looks for a hidden disco ball in the hall: a virtual quest like in Pokémon Go. 'Hiding' 3D objects is therefore a feature that we will certainly investigate for the Open Culture Tech AR app.

Picture by Casper Maas

The Open Culture Tech team was also invited to a panel organized by Effenaar Lab. With the Hybrid Music Vibes program, Effenaar Lab offers talents the opportunity to investigate how new technologies can contribute to their artistry.
The panel included Julia Sabaté and De Toegift as participants of the Hybrid Music Vibes program, their manager, and Joost de Boo on behalf of Open Culture Tech. The panel discussed the importance of crossover collaborations, education and sharing experiences. Effenaar Lab's Hybrid Music Vibes program is an inspiring program in which participants conduct valuable experiments that we would like to share in this newsletter.
Read more about Julia Sabaté: https://www.effenaar.nl/hybrid-music-vibes-2-julia-sabate
Read more about De Toegift: https://www.effenaar.nl/hybrid-music-vibes-2-de-toegift

  • The Life of Fi

    Virtual influencers and artists have captured our imagination for years: animated characters, sometimes even AI-generated, that sing songs, dance and sometimes even perform on stage. Well-known examples are the Japanese Hatsune Miku and the American Lil Miquela, who has even made it to Coachella. The team behind Open Culture Tech consists of makers who already have a lot of experience in building avatars and in looking for responsible applications of this emerging technology. Building virtual characters involves a lot of technology, but it also raises many ethical questions. Within Open Culture Tech, Thunderboom Records and Reblika are working on the development of the Avatar tool. During the OATS live show, an initial experiment was conducted with projecting an avatar that copied the lead singer's movements in real time, and more shows will test similar technology in the coming months. But prior to Open Culture Tech, Thunderboom Records and Reblika had already built a virtual artist called Fi.

Fi, the fluid artist
Fi is a virtual artist who comes to life on Instagram and Soundcloud. The appearance, life story and music were partly generated with AI and developed by a diverse team of designers. Fi was founded on the idea that the vast majority of existing virtual artists and influencers have little added value, other than serving as a marketing tool for major brands. Most virtual characters also promote a problematic beauty ideal and are stereotypical. This is reinforced by the functionality of free 3D avatar creation tools such as DAZ Studio, or "Magic Avatar" AI apps like Lensa. The default 3D bodies in DAZ Studio are female bodies with thin waists and large breasts and buttocks. Free users can choose from different types of seductive facial expressions, poses and outfits, and can even customize the genitals. The standard male body, on the other hand, is muscular, looks tough and has no penis. These sexist stereotypes are also reflected in mobile apps such as Lensa, which generate avatars from portrait photos. It was almost impossible for a journalist at the MIT Technology Review not to generate sexist avatars. In response to this status quo, Thunderboom Records and Reblika have attempted to create a virtual artist that goes beyond these stereotypes and makes innovative use of the possibilities of avatar technology. The concept behind Fi is that Fi can be anyone and doesn't have to conform to one type of appearance. Fi is virtual and therefore fluid. As a virtual character, Fi could take different forms, combine different genres and go beyond stereotypes. In addition, it was important that Fi did not become a virtual artist who would replace human artists. Above all, Fi must work together with human artists and help them move forward.

DIY
The concept of Fi has been translated into a story about a fictional character who was born as a star in the universe and came to earth to become an artist. This star does not want to adopt a fixed appearance. Fi therefore chooses a different appearance every three months that combines characteristics of inspiring artists. The starting point was an AI-driven mix between Prince and Jin from BTS. After that, Fi became a mix between Madonna and Dua Lipa. Various techniques are used to create online content for virtual influencers such as Fi and Lil Miquela. The first step is to take photos of a human body double (actor) to use as base material. The photo is then recreated in 3D and merged with the original into one final image in Photoshop.
To ensure that Fi's story was not appropriated by Thunderboom Records or Reblika, the body double became the lead for the story. On the day of the photo shoot, he or she could decide what happened to Fi.

Sustainability
But unfortunately, Fi was not sustainable. Regularly creating, posting and maintaining Fi's online content took so much work that most of the time was spent managing a social media account rather than creating an interesting story. The added value for human musicians was also limited because the production was too time-intensive and therefore too expensive. The enormous potential of avatars has already been proven by major artists such as Gorillaz or Travis Scott, but it remains a challenge to create avatars that complement emerging artists. For this reason, Fi no longer publishes online content, and Thunderboom Records is working with Reblika on the Open Culture Tech avatar tool, with which every Dutch artist can easily create avatar content themselves and investigate its potential. The most important lesson we learned from Fi is that the avatars themselves are not the core; it is always about the musician who works with them. In the Fi project, too much time and effort was invested in the virtual character, leaving too little for the human musician. We are currently organizing several live performances with emerging artists to explore the possibilities and added value of avatar technology. An example is Smitty, an Amsterdam rapper who constantly talks to other versions of himself in his music. We will explore how we can use avatars of his younger and older selves as projections on stage to emphasize this story. What if he could literally talk to another version of himself?

  • How we co-create our live show concepts

    In order to create the most useful emerging tech toolbox, the Open Culture Tech team works closely together with 12 selected artists. Every artist has the opportunity to develop a live show for which we build custom technology. Ultimately, we publish all this technology as a toolbox on our website, so that every artist in the Netherlands has the same opportunity to experiment with the tools. In this article, we'll tell you all about our process and progress.

Moodboard for the Smitty live show. Sources: Harry Potter, Bruce Almighty, The Matrix, Spongebob, 2001: A Space Odyssey

The process of designing the creative concepts for the 12 live shows is delicate. On the one hand we have to deal with the ever-growing range of new technologies and possibilities, and on the other hand we need to make sure that the artist is strengthened by this technology and remains the centerpiece of the performance. It is important to note that Open Culture Tech does not research whether the latest AI, AR or Avatar technology works for live artists. There are many companies already proving that, such as WaveXR. Instead, Open Culture Tech is about whether these technologies can also work for emerging artists, who don’t have the means to work with complex or expensive software. In addition, we want to ensure that the artist's autonomy is always guaranteed and that the technology is an addition rather than a replacement. The development of our 12 shows and 12 tools follows our own Open Culture Tech method, which consists of a number of steps. The first step is to come up with an overarching concept together with the individual artist. Based on this concept, we decide if and how we need AI, AR or Avatar technology. The second step is to create and test first prototypes with our tech partners. The third step is to create a final prototype, based on the feedback and input from step two, that can be used on stage during the actual show. To provide more insight into this creative process, we will explain how concept building works by going over two examples we are currently working on.

Smitty
Smitty is a rapper and producer who just released his new EP. In his music, he often shows two sides of himself that contradict each other. Tough and vulnerable. Adult and child. Hope and sorrow. During his live show, he wants to emphasize this duality in both his music and his visuals. We have been going over different possibilities to bring out this overall concept. To display the duality of Smitty’s mind, we can create multiple avatars representing different versions of the artist. We can use AI or 3D technology to create images of Smitty in different phases of his life, maybe as a child or as an old man. We can also use animation to create moving images and let the avatars interact with Smitty. He could literally talk with different versions of himself. The weight of Smitty’s music is in his lyrics. To bring this out, we want to highlight the most important words by leveraging new technology. For example, mobile AR could be used to add a virtual layer to the show and let the words flow in 3D. Another important starting point was the use of color and space. Smitty would love to have a stage in the middle of a white venue, so that everyone can stand around him. Perhaps an art gallery or empty warehouse with white walls and video projections. This white space is inspired by the “void rooms” often used in Hollywood productions such as The Matrix or even Spongebob.
It transports the audience into Smitty’s mind, a canvas in which we are all stuck together.

Sophie Jurrjens
Sophie Jurrjens is a person of many talents. She is a pianist, composer and producer, and also works as a DJ. In recent years, she has developed an app called Off-Track with music walking tours throughout Amsterdam, for which she created all the music herself. The walking tours evolved from the idea that by adding music to a walking route, going for a walk is transformed into an experience. Sophie wants to translate this idea into a live DJ set, where visitors start with a walking tour in Amsterdam North and finish at a fixed location where she performs as a live DJ. The keyword in our creative process is “grandiosity”. How can we add interactive or visual elements to the walking tour to create a grandiose atmosphere? The answer turned out to be mobile augmented reality. With mobile AR, visitors can experience virtual objects that appear in 3D during the walking tour, and these virtual objects can return during the final live show. The music tour starts at the NDSM Werf in Amsterdam North and should finish at a venue located alongside the IJ. In this way, we can use the water as a backdrop for the mobile AR filter and create large, grandiose 3D objects. In addition, we will add light tubes or aesthetic lamps to the physical stage design. The shapes of the lamps will also be translated into 3D objects to bring together the virtual and physical worlds. Our next step is to create and test the first basic prototypes together with our tech partners Reblika and Superposition and to plan the actual live shows. Keep an eye on our website for the first results and exact show dates.

  • The Frankenstein of AI music

    As the MUTEK team wrote in the MUSIC x newsletter last August, it is impossible to tell whether AI music is good. Not only because there are so many different applications, but also because the definition of “good music” is quite subjective. AI music in the broadest sense of the word is an important element in the Open Culture Tech project. We look for ways in which it can add value for artists, without harming them in their creative process. For many artists in our direct surroundings, AI feels like a monster under their bed, ready to attack their professional career and creative abilities. Because of that, most of them are mainly interested in AI technologies that can help them with their visual performance or with audience interaction. But really having AI interfere with their music is something they are very cautious about. At Open Culture Tech, we try to look at music AI tools as Frankenstein’s monster, whether they are voice cloning tools or generative algorithms that create melodies, chords or beats. Just as Frankenstein’s monster was built up from separate elements (limbs), AI tools can also be seen as separate elements in the musical creation process. And just like Doctor Frankenstein, musicians still need creative ideas and skills to connect the separate elements and bring the whole body to life. But when we take a look at the current market for AI music tools, there is something strange going on. Many tools that are currently available, such as AIVA or BandLab SongStarter, are promoted as tools that provide “inspiration” by generating MIDI beats or melodies from scratch. In essence, there is nothing wrong with that. However, professional artists, or artists with professional ambitions, are not the right target audience for these specific tools. So far, we have not spoken to a single artist with a lack of inspiration or musical ideas. What's more, many artists seem to enjoy these first steps of the creation process the most, since that is where they are really being creative, without having to think too much about the end result yet. The idea that AI needs to help us kick-start our human creativity feels wrong. Of course, if you're not a musician and you need some quick stock music for your TikTok videos, these tools can be very helpful. But for professional musicians, they are not. Two weeks ago, Google and YouTube introduced the Lyria AI model, accompanied by various Dream Track applications. The most prominent app allows users to enter a topic (write a prompt), choose an artist from a carousel and generate a 30-second soundtrack. The AI generates original lyrics and backing tracks, and an AI-generated voice sings the song for you. But again, this application is aimed at social media content, mainly promoting YouTube Shorts. When diving into other websites that showcase AI music tools, such as AI Tools Club, you'll see that the majority of applications are simply not created to help professional musicians. Like Mubert, they want to support social media, podcast or app creators. (The other Dream Track app allows you to transform your humming or beatboxing into an instrument such as a saxophone or even an entire orchestra. This is very similar to Vochlea, an application that helps musicians quickly “sketch” their ideas into MIDI.)
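To make the humming-to-MIDI idea concrete, here is a deliberately simplified sketch. It is not Vochlea or Dream Track; it only runs monophonic pitch tracking (librosa's pYIN) over a recording and writes the detected notes to a MIDI file with the pretty_midi library. The file names, instrument choice and thresholds are illustrative.

```python
# Much-simplified sketch of the "hum an idea, get MIDI back" concept mentioned
# above. Not Vochlea or Dream Track: just monophonic pitch tracking on a recording,
# with the detected notes written to a MIDI file. File names are placeholders.

import librosa
import numpy as np
import pretty_midi

y, sr = librosa.load("humming.wav", sr=None, mono=True)  # hypothetical recording
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

midi = pretty_midi.PrettyMIDI()
inst = pretty_midi.Instrument(program=65)  # 65 = alto sax, per the saxophone example

note_start, note_pitch = None, None
for t, pitch_hz, is_voiced in zip(times, f0, voiced):
    pitch = int(round(librosa.hz_to_midi(pitch_hz))) if is_voiced and not np.isnan(pitch_hz) else None
    if pitch != note_pitch:  # pitch changed: close the running note, start a new one
        if note_pitch is not None:
            inst.notes.append(pretty_midi.Note(velocity=90, pitch=note_pitch,
                                               start=note_start, end=t))
        note_start, note_pitch = t, pitch
if note_pitch is not None:  # close the final note
    inst.notes.append(pretty_midi.Note(velocity=90, pitch=note_pitch,
                                       start=note_start, end=float(times[-1])))

midi.instruments.append(inst)
midi.write("humming_sketch.mid")
```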
Although AI tools that translate humming into MIDI might be helpful, we at Open Culture Tech feel that there is a growing gap between the AI (application) community and the wider professional music community. Yes, there are very interesting music AI initiatives out there, such as The Sound of AI YouTube channel (recommended!), but these are not plug-and-play apps like AIVA. In the end, the limbs of Frankenstein’s monster are not developed for professional artists but for TikTok creators. That is why Open Culture Tech is currently working on three different easy-to-use AI applications that aim to support or inspire professional musicians during different parts of their creative process. Together with our collaborating artists and AI developers, we will test our first prototypes soon and share our findings with you in upcoming updates.

  • Collective AR during ADE

    Last month, we created an AR (Augmented Reality) experience for up-and-coming DJ Ineffekt that premiered at the Amsterdam Dance Event. It enabled the audience to collectively enter the world of Ineffekt in AR while listening to his new EP, High Hopes. Ineffekt had previously worked with director Cas Mulder on a short video for the EP, released last August. The clip features the producer in a seemingly empty space, watching an organism grow inside a contamination box. Since the contamination box and its contents had all been designed by an amazing VFX team, we figured it would be interesting to create a follow-up to the video in AR. In other words: what happened to the growing organism in the glass box? The AR experience was presented four times during ADE and was freely accessible. Visitors would gather on the street in front of a record store. The High Hopes EP would start playing, and we would invite the audience to scan a QR code that instantly served the AR experience, without the need to download an app first. Once inside the AR experience, the audience found themselves surrounded by a pulsating, post-apocalyptic landscape. Fragments from the contamination box, now almost completely overrun, can be seen. The yellow organism that was growing so quickly inside the box is now everywhere. And after some exploration, the audience could see that Ineffekt is watching all of this (and them?) from a short distance, seemingly content. The message: you cannot restrain growth; nature (or creativity?) will always find its way.

  • Behind the Scenes

    At Open Culture Tech, we are developing our own open-source and easy-to-use AI, Augmented Reality and Avatar creation tools for live music. Open Culture Tech issued an Open Call in the spring of 2023, through which musicians in the Netherlands could join our development programme. From a large number of applications, 10 diverse artists were selected, from punk to R&B and from folk to EDM, each with a unique research question that could be answered by applying new technology. Questions such as: “I’ve put my heart and soul into creating an EP that’s 15 minutes long. I want to perform this new work live on stage, but I need at least 30 minutes of music to create a full show.” Or: “How can I enhance the interaction between me and the audience when I play guitar on stage?” Together with these artists, we (the Open Culture Tech team) come up with original answers to their questions, which we translate into open-source AI, AR or Avatar technology with easy-to-use interfaces. Each prototype is then tested during a live pilot show. After 10 shows, we will have tested various prototypes that we will combine into one toolkit. In this way, we aim to make the latest technology more accessible to every artist. Below, we share the results of our first two prototypes and live pilot shows.

OATS × Avatars
The first Open Culture Tech pilot show was created in close collaboration with OATS. Merging punk and jazz, OATS are establishing themselves through compelling and powerful live shows. Their question was: “How can we translate the lead singer's expression into real-time visuals on stage?” To answer this question, we decided to build and test the first version of our own Avatar creation tool, with the help of Reblika. Unreal Engine is a global industry standard for real-time 3D, used by many major companies in the music, gaming and film industries. Its learning curve is steep and the rates for experts are high. Reblika is a Rotterdam-based 3D company with years of experience in creating high-res avatars for the creative industry, and is currently developing its own avatar creator tool, built on Unreal Engine, called Reblium. For Open Culture Tech, Reblika is developing a free, stand-alone, open-source edition with an easy-to-use interface, aimed at helping live musicians. The master plan was to capture the body movement of the lead singer (Jacob Clausen) with a motion capture suit and link the signal to a 3D avatar in a 3D environment that could be projected live on stage. In this way, we could experience what it’s like to use avatars on stage and find out what functionalities our Avatar creation tool would need. In this case, the aesthetic had to be dystopian, alienating and glitchy. Our first step was to create a workflow for finding the right 3D avatar and 3D environment. OATS preferred a gloomy character in a hazmat suit, moving through an abandoned factory building. We decided to use the Unreal Engine Marketplace, a website that offers ready-made 3D models. To create the 3D environment, Jacob Clausen decided to use a tool called Polycam and scan an abandoned industrial area. Polycam is an easy-to-use 3D scanning app that can use LiDAR, a laser-based scanning technology. The 3D scan (factory) and avatar (hazmat suit) were imported into Unreal Engine, and the avatar was connected to the motion capture suit.
This allowed Jacob Clausen to become the main character on screen and test the experience live on stage at Popronde in EKKO in Utrecht, on 19 October at 23:30. What followed was a show that taught us a lot. The venue provided us with a standard projector and a white screen behind the stage. Due to an over-active smoke machine, an unstable internet connection and a low-resolution projector, the avatar was not always visible on screen. Nevertheless, there were certainly moments where everything came together. At these moments, the synchronization between Jacob and his avatar was super interesting, the storytelling was amazing and the technology showed a lot of potential. The motion capture suit was very expensive and we had to borrow it from Reblika, which is not very sustainable, accessible or inclusive. For our next prototype, we will look at motion capture AI technology, such as Rokoko Vision, instead of suits (see the sketch after this section). The 3D avatar and environment were shown from different camera angles. To make this possible, someone had to keep changing the camera angle in real time within the Unreal Engine software. Going forward, we should add predefined camera angles, so that you don’t need an extra person to control the visuals.

Ineffekt × AR
The second use case of Open Culture Tech was provided by Ineffekt. Through a blend of glistening vocal pieces, strings of dreamy melodies and distinctive rhythms, Ineffekt cleverly takes on a sound that feels both comfortable and elusive. The question from Ineffekt was: “How can I translate my album artwork into a virtual experience that could transform any location into an immersive video clip?” To answer this question, we decided to build and test the first version of our own AR creation tool, with the help of Superposition, an innovative design studio for interactive experiences. For his latest album artwork and music video, Ineffekt used a 3D model of a greenhouse in which yellow organisms are growing. This 3D model formed the basis for the AR experience we tested during the Amsterdam Dance Event. Our main goal was to create and test an intimate mobile AR experience built with open-source 3D technologies. This meant that we couldn’t use popular AR tools like Spark AR (Meta), Snap AR (Snapchat) or ARCore (Google). In our first experiment, Blender was used to create a high-res 3D model and WebXR was used to translate this model into a mobile Augmented Reality application. Superposition also decided to experiment with App Clips on iOS and Google Play Instant on Android. These techniques allow you to open a lightweight version of a mobile application, after scanning a QR code, without installing the full app. On October 20 and 21, we tested our first AR prototype in front of Black & Gold in Amsterdam, during ADE. After scanning the QR code on a poster, the audience was taken to a mobile website that explained the project. Then the camera of your phone would switch on and you’d see the yellow plants and fungus grow around you. In the back, someone was sitting quietly: a silent avatar. The overall experience was poetic and intimate. As with OATS, we learned a lot. It is possible to create an intimate and valuable experience with mobile Augmented Reality, and it is possible to build such an experience with open-source technology. However, the experience was static and did not react to the presence of the viewer. Going forward, we should look into the possibilities of adding interactive elements.
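As a rough illustration of the camera-based motion capture direction mentioned in the OATS lessons above: the sketch below is not Rokoko Vision, but uses the open-source MediaPipe Pose library to estimate body landmarks from a webcam. Retargeting those landmarks onto a 3D avatar in Unreal Engine is a separate step that is not shown here.

```python
# Illustration of the "motion capture without a suit" idea from the OATS lessons.
# This is not Rokoko Vision: it uses the open-source MediaPipe Pose library to
# estimate body landmarks from a webcam. Mapping these landmarks onto a 3D avatar
# (retargeting) is a separate step that is not shown.

import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(model_complexity=1)
cap = cv2.VideoCapture(0)  # default webcam

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks with normalized x, y, z coordinates; print one as a demo.
            nose = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE]
            print(f"nose at x={nose.x:.2f}, y={nose.y:.2f}, z={nose.z:.2f}")
finally:
    cap.release()
    pose.close()
```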
Our ultimate goal is to develop accessible AI, AR and Avatar creation tools that any musician can use without our support. In the above examples, this has not yet been the case: we have mainly tested the workflow of existing tools and not yet created our own. Going forward, we will start building and testing our own software interfaces and let artists create their own AI, AR and Avatar experiences from scratch. In this way, we hope to ensure that up-and-coming musicians also get the opportunities and resources to work with the latest technology. This article was also published on MUSIC x on Thursday, 16 November 2023.

  • Aespa, avatars and live holograms

    As part of Open Culture Tech’s Avatar Series, we delve into the unique concept of Aespa, a South Korean girl group that has carved out a unique niche by blurring the boundaries between reality and the digital realm. We will look at their innovative use of technology and storytelling, but also at ways to apply these technologies yourself. Aespa made their debut in November 2020, during the height of the COVID-19 pandemic, with the single “Black Mamba”, a catchy K-pop track that combines elements of pop, hip-hop and electronic dance music. One of the most striking aspects of Aespa’s debut was the addition of a storyline that used digital avatars. The idol group consists of four human members – Karina, Giselle, Winter and Ningning – who are accompanied by digital counterparts known as “æ”. There is æ-Karina, æ-Giselle, æ-Winter and æ-Ningning, and they all live in a parallel virtual world called “æ-planet”. Aespa was introduced to the world as a group of hybrid action figures in a sci-fi story set in both the physical and the virtual world. Aespa and their digital counterparts had to fight Black Mamba, a typical villain who wanted to destroy both worlds. The audience could follow the story in a three-part series on YouTube, and supporting content appeared on various social media channels for months. Fast forward to 2023, and you hardly see any avatars on Aespa's online channels anymore. The storyline about action heroes has been exchanged for a staged storyline about four close friends who share a lot of backstage footage. Still, with Aespa, technology is never far away. Even though Aespa’s social media channels no longer show avatars, they are still prominently present at the live shows. Last summer, Joost de Boo, member of the Open Culture Tech team, was in Japan to see Aespa live at the Tokyo Dome, together with 60,000 excited Japanese fans. “Before the show, while everyone was looking for their seats, the Black Mamba avatar video series was broadcast on a huge screen”, Joost recalls. “It really set the stage and took the audience into the world of Aespa. But not only that. It was also a natural build-up towards the start of the show, where the four members first entered the stage as dancing avatars, after which they were followed by their human versions.” Joost found the live show at the Tokyo Dome both impressive and questionable. “There is a certain aesthetic and ideal of physical beauty that is being pursued by Aespa – and almost any other idol band I know – and I wonder if that is something we should promote.” Over the years, more and more (ex-)members of K-pop groups have spoken out about the dark side of K-pop culture, including sexism, abuse and mental health stigmas. We will come back to the subject of stereotyping and avatars in another article. “But without ignoring these concerns, the technology used in the Aespa show is something we can and should definitely learn from.” To be fair, projecting a video on stage doesn’t sound very revolutionary. Still, combining a projection on stage with a storyline on social media and YouTube does not happen very often. Furthermore, this was not the only appearance of the avatars on stage. After the first 20 minutes of the show, a large screen was wheeled onto the stage. What happened next can be seen in the videos below.
The rest of the show followed the same structure as the chronological content on YouTube and social media: the avatars disappeared and a group of human friends remained. What have we seen on stage, and what can we learn from it? First, the Black Mamba storyline. It is important to note that Aespa is created and owned by SM Entertainment, a South Korean multinational entertainment agency that was one of the leading forces behind the global popularization of South Korean popular culture in the 1990s. SM Entertainment is active throughout the entire entertainment industry and owns and operates record labels, talent agencies, production companies, concert production companies, music publishing houses, design agencies and investment groups. So what we have seen is a multimillion-euro cross-media production in which dozens of talented designers, artists and developers have worked together. To create Aespa’s live show, SM Entertainment worked together with (among others) 37th Degree and Giantstep, two international award-winning agencies: from creating anime characters, modeling 3D avatars and designing merchandise to story writing, directing and filming. But besides the impressive high-budget content production, the most innovative part is not the storyline or the avatars themselves, but the way these characters appeared on stage after about 20 minutes. According to LG, this was the first time ever that 12 brand-new “Transparent OLED” screens were combined into one large hologram screen on a stage: a new technology that we can expect to become much more common in the coming years. You can check out this link if you want to know more about these screens, or read our previous article about cheap alternatives. Source: https://www.giantstep.com/work/sm-ent-aespa-synk/ To wrap up: as an artist in the Netherlands you probably don't have the budgets of SM Entertainment. Nevertheless, it is not impossible to make up a storyline or to invent an alter ego, if you want to. It is also not impossible to translate that story into audiovisual content such as (moving) images. Maybe generative AI can help you there? But the most exciting thing is this: soon it will also be possible to translate your story into 3D with our very own Open Culture Tech Avatar tool. Last week we tested our first prototype live on stage and the results are more than promising. Curious about our progress? Then keep an eye on the newsletter for more updates. Want to know more about Aespa? Read their latest interview with Grimes in Rolling Stone Magazine.
