
  • Generative AI & Terms of Use

    Let’s start by saying that this article is not an endpoint. It is a step in our ongoing research into developing responsible AI applications for artists. One of these artists is Eveline Ypma, with whom we are organizing a live performance on April 4 in OT301. Together with Eveline, we are investigating the potential of text-to-music AI technology, and we share our findings in this article. Eveline created an EP that combines sampled field recordings from the nature of Iceland with her own vocals and bass guitar. The result is a harmonious 15-minute soundscape. Our challenge was to extend and translate this EP into a 30-to-45-minute live performance, using generative AI. Together, we decided to experiment with AI tools that can generate similar-sounding field recordings and sound effects that Eveline could use to extend her live performance.

    How did we start? Our goal was to generate new audio files (10-20 seconds) that sounded similar to her own Icelandic music samples. To do so, we started by looking into different ways to generate new music with AI. What AI models are already out there? Which existing tools can we test? And how do we make sure that the technology providers do not take Eveline's data?

    First, we conducted a series of experiments with existing AI models. Inspired by Dadabots and their infinite stream of AI-generated death metal, we started working with SampleRNN models. SampleRNN is an audio-to-audio model: you upload a music file and get similar music files in return. Unfortunately, we were not happy with the results because the output was too noisy, and the process was time-consuming and complex. We then moved on to Stability AI's algorithm called Dance Diffusion. This is also an audio-to-audio system that allows you to create audio samples that sound like your input files. Unfortunately, like the previous model, it produced a lot of noise and was very glitchy.
    Our aim was to find off-the-shelf AI models that we could immediately use to create a workflow for Eveline – without having to train our own customized AI model. Unfortunately, this turned out to be more difficult than expected. That's why we decided to change course and look at ready-made AI tools. First, we tried Stability AI's text-to-music application called Stable Audio, which creates audio files based on text prompts – a ChatGPT for music. For the first time, we produced AI-generated output that actually sounded like a usable music sample. Still, we could not use the output: the terms of use prevented us from continuing with the tool. We also tried Meta's MusicGen and AudioGen, similar prompt-based AI models that generate music and audio files. Anyone with a Google account can use these models in a Google Colab environment. MusicGen provided us with the best results so far: it generated high-quality audio samples that we could work with right away. Unfortunately, this system had similar terms of use.

    Terms of use
    In our opinion, the terms of use of too many generative AI music tools are misleading. Although most product websites tell you that you maintain full ownership of your input and output, once you dive into their legal documentation it often becomes clear that you also "sublicense" your work to the AI platform. Technically, you always remain the owner of your input and output, but you also grant usage rights to someone else. In the case of Eveline Ypma, this is problematic. Eveline is an artist and she should own the rights to her own creative work. That is why we eventually decided to download the underlying MusicGen AI model from GitHub and host a local version on a private server ourselves. This is possible because Meta published the code open-source via GitHub under an MIT License.
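For readers who want to try the same route, running MusicGen locally takes only a few lines of Python with Meta's open-source audiocraft library. The sketch below is a minimal illustration, assuming the `facebook/musicgen-small` checkpoint and a machine capable of running it; the 10-to-20-second clamp mirrors the sample length we aimed for, and `clamp_duration` is our own helper, not part of audiocraft.

```python
# Sketch: generating short audio samples with a locally hosted MusicGen model.
# Assumes Meta's open-source `audiocraft` package is installed; the import is
# kept inside the function so this file still loads without it.

def clamp_duration(seconds: float, lo: float = 10.0, hi: float = 20.0) -> float:
    """Keep the requested sample length in the 10-20 second range we target."""
    return max(lo, min(hi, seconds))

def generate_samples(prompts: list[str], duration: float = 15.0):
    from audiocraft.models import MusicGen  # heavyweight, local-only import
    model = MusicGen.get_pretrained("facebook/musicgen-small")  # runs locally
    model.set_generation_params(duration=clamp_duration(duration))
    return model.generate(prompts)  # one waveform tensor per prompt

# Example (requires a machine that can run the model):
# wavs = generate_samples(["icelandic field recording, wind and distant surf"])
```

Because the model runs on your own hardware, no prompt or output ever leaves your server.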
    The Open Culture Tech "text-to-music" app
    At this moment, we are working together with a front-end developer to build our own text-to-music application on top of the MusicGen AI model. Our goal is to host the underlying AI model on a European server and make sure that we don't save the user's input and output data. In this way, anyone can use the AI technology for free – without having to give away their creative work. We plan to launch this app on April 4 in OT301.
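The privacy principle behind such an app can be sketched in a few lines: handle each request entirely in memory and return the result without writing the user's prompt or generated audio anywhere. This is an illustrative sketch, not our actual application code; `run_model` is a made-up stand-in for the real MusicGen inference call.

```python
# Sketch of a stateless request handler: nothing is logged or persisted,
# so the user keeps full ownership of both input and output.

def run_model(prompt: str) -> bytes:
    # Placeholder for local MusicGen inference; returns fake audio bytes.
    return f"audio-for:{prompt}".encode()

def handle_request(prompt: str) -> dict:
    audio = run_model(prompt)                  # generated in memory only
    return {"audio": audio, "stored": False}   # no database, no log files

response = handle_request("glacial wind over a black-sand beach")
```

The key design choice is the absence of any storage layer: once the response is sent, the server retains nothing.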

  • Summary Report

    This report is a summary of the results we collected in the first 7 months of the Open Culture Tech project. We surveyed more than 100 artists and other relevant stakeholders from the music industry through knowledge sessions, guest lectures, workshops at conferences (such as ADE and ESNS), surveys and interviews.

    Picture by Calluna Dekkers

    It is important for the Open Culture Tech project to map out where the opportunities and challenges lie for emerging artists. This way we ensure that we develop technology that meets the needs of artists and their audiences. It is also important to share more information with the sector: artists need to know what their options are, even with small budgets and limited technical knowledge, and the broader industry needs to know how it can facilitate artists and how we can ensure that the Dutch music landscape does not fall behind, for example, the American or Chinese music industry.

    LINK to full Summary Report

  • 3D scanning for music artists

    In the search for new technological dimensions, we use 3D scanning for our AR and avatar applications. This is an easy way to create and apply virtual people or objects in live visuals, AR experiences or elements in hybrid concerts. Artists can integrate 3D scans of themselves into augmented reality experiences during live shows. This can range from interactive AR elements to full digital replicas that appear next to the artist.

    What is 3D scanning?
    A 3D scan is a three-dimensional digital scan of a person, object or environment. There are different ways to make 3D scans, for example with the help of LiDAR or photogrammetry. LiDAR technology uses lasers to map the environment and generate accurate 3D models; nowadays, iPhone Pro models already have a built-in LiDAR scanner. Photogrammetry involves taking multiple photos from different angles and combining them to reconstruct a 3D model. This is the method we also use for our 3D scans.

    There are various apps that allow you to easily make a 3D scan. We are currently testing Polycam as it is an easy-to-use app. It is important to mention that, while this app is user-friendly, it does collect data from its users, and the 3D scans you make also become the property of Polycam. We are still looking for alternatives with better terms of use.

    How do we apply 3D scans?
Within the Open Culture Tech project, 3D scans are a good way to create digital doubles of the artists we work with. We use these digital doubles, for example, to create animations during live shows or to build avatars. In the case of Smitty, for example, we use the scan to create older and younger versions of him that are reflected in the narrative of his live performance. It gives us the opportunity to do this in detail because the scans are very precise. 3D scanning is also an important feature for the development of our AR tool. It offers the ability to add custom elements to the digital world you are creating. For example, consider scanning certain landscapes or objects that are important to you as an artist, your performance or the story you want to tell.
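Before a raw scan can be placed predictably in a live visual or AR scene, it usually needs to be recentred on its own origin. A minimal sketch of that preprocessing step, assuming the scan has already been reduced to a plain list of XYZ points (real scans from apps like Polycam come as meshes, but the same idea applies to their vertices):

```python
# Sketch: recentre a 3D scan on the origin so it can be positioned
# predictably inside an AR scene or Unreal Engine environment.

def center_points(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Shift every point so the cloud's centroid sits at (0, 0, 0).
    return [(x - cx, y - cy, z - cz) for x, y, z in points]

scan = [(2.0, 1.0, 0.0), (4.0, 3.0, 0.0), (3.0, 2.0, 6.0)]  # toy point cloud
centered = center_points(scan)
```

In practice a 3D tool does this for you, but it shows why a scan's coordinate origin matters when you composite it with other virtual objects.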

  • ESNS 2024 Recap

    Last week was ESNS 2024, the international music conference and showcase festival where more than 350 European artists perform annually for more than 40,000 visitors. Open Culture Tech was present at ESNS 2024 to discuss views and opinions on new technology such as AI, Augmented Reality and Avatars backstage with artists. The Open Culture Tech team was also invited by Effenaar Lab to participate in a panel discussion about the value of immersive technology for live music. In this article we share the most interesting backstage conversations, experiences and conclusions from our panel.

    Picture by Calluna Dekkers

    The ESNS conference takes place every year in the Oosterpoort, one of the music halls in Groningen. But that's not where most artists hang out. You will find them in the Artist Village, a backstage tent near the conference. The Open Culture Tech team stood in the middle of the Artist Village with various prototypes that led to many different conversations, discussions and brainstorms with European artists. The first conclusion is that few artists work with new technology such as AI or AR. They don't have time for it, it is often too expensive or they don't know where to start. Even a lighting plan was a luxury for many artists. Most artists were not afraid of AI taking over their jobs as live performers on stage, but many were skeptical about the creative possibilities of technological tools, particularly around the risk of losing privacy and intellectual property. The conversations with artists changed the moment we asked them to come up with applications themselves, regardless of budget or technical limitations. What if they could do anything they wanted? These brainstorms resulted in a lot of interesting input that we try to incorporate as features in our Open Culture Tech toolkit. Such as a control panel (for example via MIDI or foot pedal) to control visuals.
    Or a simple application to translate 2D images (for example album artwork) into a three-dimensional version that you can animate for a video or place in Augmented Reality. In addition to the artists who had little need for immersive technology such as AI or AR, there were many artists who did. The “what if” brainstorms show that many artists would like to experiment with new technology in their live shows but do not have the resources to do so. There were also many interesting conversations with artists who are already working with new technology, such as the Polish band Artificialice. They use, among other things, 3D scans (LiDAR) of themselves and their bedrooms and incorporate these into their artwork. Or the German Wax Museum. With their latest album they also released an 8-bit version of their music and an online video game in which the band members appear as game characters. In the game, the player must look for a stolen disco ball. This story could lend itself very well to a mobile AR experience during their concert, in which the audience can jointly look for a hidden disco ball in the hall. A virtual quest like in Pokémon Go. 'Hiding' 3D objects is therefore a feature that we will certainly investigate in the AR app from Open Culture Tech.

    Picture by Casper Maas

    The Open Culture Tech team was also invited to a panel organized by Effenaar Lab. With the Hybrid Music Vibes program, Effenaar Lab offers talents the opportunity to investigate how new technologies can contribute to their artistry. The panel included Julia Sabaté and De Toegift as participants of the Hybrid Music Vibes program, their manager, and Joost de Boo on behalf of Open Culture Tech. The panel discussed the importance of crossover collaborations, education and sharing experiences. Effenaar Lab's Hybrid Music Vibes program is an inspiring program in which participants conduct valuable experiments that we would like to share in this newsletter.
    Read more about Julia Sabaté: https://www.effenaar.nl/hybrid-music-vibes-2-julia-sabate
    Read more about De Toegift: https://www.effenaar.nl/hybrid-music-vibes-2-de-toegift
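The "hidden disco ball" mechanic could be prototyped with very little code: scatter virtual anchors inside the venue's floor plan and reveal one when a viewer's phone comes within reach. The sketch below is purely illustrative; coordinates are in metres in a made-up local venue space, and all names are invented.

```python
import math
import random

# Sketch: hide virtual objects at random positions in a venue and check
# which ones are close enough to the viewer to be revealed in AR.

def hide_objects(count, width, depth, seed=42):
    rng = random.Random(seed)  # fixed seed so every phone sees the same layout
    return [(rng.uniform(0, width), rng.uniform(0, depth)) for _ in range(count)]

def nearby(viewer, objects, radius=2.0):
    vx, vy = viewer
    return [o for o in objects if math.hypot(o[0] - vx, o[1] - vy) <= radius]

balls = hide_objects(5, width=20.0, depth=15.0)   # a 20 m x 15 m hall
found = nearby((10.0, 7.5), balls)                # viewer in the middle
```

A real implementation would track the phone's pose with an AR framework rather than plain coordinates, but the proximity test at the core stays the same.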

  • The Life of Fi

    Virtual influencers and artists have captured our imagination for years. Animated characters, sometimes even AI-generated, that sing songs, dance and sometimes even perform on stage. Well-known examples are the Japanese Hatsune Miku and the American Lil Miquela, who has even made it to Coachella. The team behind Open Culture Tech consists of makers who already have a lot of experience in building avatars and searching for responsible applications of this emerging technology. Building virtual characters involves a lot of technology, but also raises many ethical questions. Thunderboom Records and Reblika are working in Open Culture Tech on the development of the Avatar tool. During the OATS live show, an initial experiment was conducted with projecting an avatar that copied the lead singer's movements in real time, and more shows will test similar technology in the coming months. But prior to Open Culture Tech, Thunderboom Records and Reblika had already built a virtual artist called Fi.

    Fi, the fluid artist
    Fi is a virtual artist who comes to life on Instagram and SoundCloud. The appearance, life story and music were partly generated with AI and developed by a diverse team of designers. Fi was founded on the idea that the vast majority of existing virtual artists and influencers have little added value – other than serving as a marketing tool for major brands. Most virtual characters also promote a problematic beauty ideal and are stereotypical. This is reinforced by the functionalities of free 3D avatar creator tools such as DAZ Studio or magic AI avatar apps like Lensa. The default 3D bodies in DAZ Studio are female bodies with thin waists and large breasts and buttocks. Free users can choose from different types of seductive facial expressions, poses and outfits, and can even customize the genitals. The standard male body, on the other hand, is muscular, looks tough and has no penis.
    These sexist stereotypes are also reflected in mobile apps such as Lensa that generate avatars from portrait photos. It was almost impossible for a journalist at the MIT Technology Review not to generate sexist avatars. In response to this status quo, Thunderboom Records and Reblika have attempted to create a virtual artist that goes beyond these stereotypes and makes innovative use of the possibilities of avatar technology. The concept behind Fi is that Fi can be anyone and doesn't have to conform to one type of appearance. Fi is virtual and therefore fluid. As a virtual character, Fi could take different forms, combine different genres and go beyond stereotypes. In addition, it was important that Fi did not become a virtual artist who would replace human artists. Fi must above all work together with and help human artists move forward.

    DIY
    The concept of Fi has been translated into a story about a fictional character who was born as a star in the universe and came to earth to become an artist. This star does not want to adopt a fixed appearance. Fi therefore chooses a different appearance every 3 months that combines characteristics of inspiring artists. The starting point was an AI-driven mix between Prince and Jin from BTS. After that, Fi became a mix between Madonna and Dua Lipa. Various techniques are used to create online content for virtual influencers such as Fi and Lil Miquela. The first step is to take photos of a human body double (an actor) to use as base material. The photo is then recreated in 3D and merged with the original into one final image in Photoshop. To ensure that Fi's story was not appropriated by Thunderboom Records or Reblika, the body double became the lead for the story. On the day of the photo shoot, he or she could decide what happened to Fi.

    Sustainability
    But unfortunately Fi was not sustainable.
Regularly creating, posting and maintaining Fi's online content took up so much work that most of the time was spent managing a social media account, rather than creating an interesting story. The added value for human musicians was also limited because the production was too time-intensive and therefore too expensive. The enormous potential of avatars has already been proven by great artists such as Gorillaz or Travis Scott. But it remains a challenge to create avatars that complement emerging artists. For this reason, Fi no longer publishes online content and Thunderboom Records is working with Reblika on the Open Culture Tech avatar tool with which every Dutch artist can easily create avatar content themselves and investigate the potential. The most important lesson we learned from Fi is that the avatars themselves are not the core, but that it is always about the musician who works with the avatars. In the Fi project, too much time and effort was invested in the virtual character, leaving too little for the human musician. We are currently organizing several live performances with emerging artists to explore the possibilities and added value of avatar technology. An example is Smitty, an Amsterdam rapper who constantly talks to other versions of himself in his music. We will explore how we can use projected avatars of his younger and older selves as projections on stage to emphasize this story. What if he could literally talk to another version of himself?

  • How we co-create our live show concepts

    In order to create the most useful emerging-tech toolbox, the Open Culture Tech team works closely together with 12 selected artists. Every artist has the unique opportunity to develop a live show for which we build unique technology. Ultimately, we publish this entire technology toolbox on our website so that every artist in the Netherlands has the same opportunity to experiment with the tools. In this article, we'll tell you all about our process and progress.

    Moodboard for the Smitty live show. Sources: Harry Potter, Bruce Almighty, The Matrix, Spongebob, 2001: A Space Odyssey

    The process of designing the creative concepts for the 12 live shows is delicate. On the one hand we have to deal with the ever-growing range of new technologies and possibilities; on the other hand we need to make sure that the artist is strengthened by this technology and remains the centerpiece of the performance. It is important to note that Open Culture Tech does not research whether the latest AI, AR or Avatar technology works for live artists. Many companies, such as WaveXR, are already proving that. Instead, Open Culture Tech is about whether these technologies can also work for emerging artists, who don’t have the means to work with complex or expensive software. In addition, we also want to ensure that the artist's autonomy is always guaranteed and that the technology is an addition rather than a replacement. The development of our 12 shows and 12 tools follows our own Open Culture Tech method, which consists of a number of steps. The first step is to come up with an overarching concept together with the individual artists. Based on this concept, we decide if and how we need AI, AR or Avatar technology. The second step is to create and test first prototypes with our tech partners. The third step is to create a final prototype – based on the feedback and input from step two – that can be used on stage during the actual show.
    To provide more insight into this creative process, we will explain how the concept building works by going over two examples that we are currently working on.

    Smitty
    Smitty is a rapper and producer who just released his new EP. In his music, he often shows two sides of himself that often contradict each other. Tough and vulnerable. Adult and child. Hope and sorrow. During his live show, he wants to emphasize this duality in both his music and his visuals. We have been going over different possibilities to emphasize this overall concept. To display the duality of Smitty’s mind, we can create multiple avatars representing different versions of the artist. We can use AI or 3D technology to create images of Smitty in different phases of his life, maybe as a child or as an old man. We can also use animation to create moving images and let the avatars interact with Smitty. He could literally talk with different versions of himself. The weight of Smitty’s music is in his lyrics. To emphasize this, we want to highlight the most important words by leveraging new technology. For example, mobile AR could be used to add a virtual layer to the show and let the words flow in 3D. Another important starting point was the use of color and space. Smitty would love to have a stage in the middle of a white venue – so that everyone can stand around him. Perhaps an art gallery or empty warehouse with white walls and video projections. This white space is inspired by the “void rooms” often used in Hollywood productions such as The Matrix or even Spongebob. It transports the audience into Smitty’s mind, a canvas in which we are all stuck together.

    Sophie Jurrjens
    Sophie Jurrjens is a person of many talents. She is a pianist, composer and producer and also works as a DJ. In the past years, she has developed an app called Off-Track with music walking tours throughout Amsterdam, for which she created all the music herself.
    The walking tours evolved from the idea that “by adding music to a walking route, going for a walk is transformed into an experience”. Sophie wants to translate this idea into a live DJ set, where visitors start with a walking tour in Amsterdam North and finish at a fixed location where she is performing as a live DJ. The keyword in our creative process is “grandiosity”. How can we add interactive or visual elements to the walking tour to create a grandiose atmosphere? The answer turned out to be mobile augmented reality. By using mobile AR, visitors can experience virtual objects that appear in 3D during the walking tour. These virtual objects can return during the final live show. The music tour starts at the NDSM Werf in Amsterdam North and should finish at a venue located alongside the IJ canal. In this way, we can use the canal as a backdrop for the mobile AR filter and create large, grandiose 3D objects. In addition, we will add light tubes or aesthetic lamps to the physical stage design. The shapes of the lamps will also be translated into 3D objects to bring together the virtual and physical world. Our next step is to create and test the first basic prototypes together with our tech partners Reblika and Superposition and plan the actual live shows. Keep an eye on our website for the first results and exact show dates.
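Triggering AR objects along a walking route comes down to knowing how far the listener is from each virtual anchor. A minimal sketch of that core check, using the standard haversine (great-circle) distance; the coordinates below are rough, illustrative values for the NDSM area, not the actual anchor positions of the tour.

```python
import math

# Sketch: distance between the listener's GPS fix and a virtual AR anchor
# along the walking tour, using the haversine formula.

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))  # metres

ndsm = (52.401, 4.894)      # rough position of the NDSM wharf
anchor = (52.400, 4.895)    # a hypothetical AR anchor nearby
dist = haversine_m(*ndsm, *anchor)
show_object = dist < 150.0  # reveal the 3D object within 150 metres
```

On a phone, the GPS fix would come from the device's location API and the reveal would fade the object in rather than toggle it, but the distance test is the same.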

  • The Frankenstein of AI music

    As the MUTEK team wrote in the MUSIC x newsletter last August, it is impossible to tell whether AI music is good. Not only because there are so many different applications, but also because the definition of “good music” is quite a subjective one. AI music in the broadest sense of the word is an important element in the Open Culture Tech project. We look for ways in which it can add value for artists, without harming them in their creative process. For many artists in our direct surroundings, AI feels like a monster under the bed, ready to attack their professional career and creative abilities. Because of that, most of them are mainly interested in AI technologies that can help them with their visual performance or the interaction with the audience. But really letting AI interfere with their music is something they are very cautious about. At Open Culture Tech, we try to look at music AI tools as Frankenstein’s monster, whether they are voice cloning tools or generative algorithms that create melodies, chords or beats. Just as Frankenstein’s monster was built up from separate elements (limbs), AI tools can also be seen as separate elements in the musical creation process. And just like Doctor Frankenstein, musicians still need creative ideas and skills to connect the separate elements and bring the whole body to life. But when we take a look at the current market for AI music tools, there is something strange going on. Many tools that are currently available, such as AIVA or BandLab SongStarter, are promoted as tools that provide “inspiration” by generating MIDI beats or melodies from scratch. In essence, there is nothing wrong with that. However, professional artists, or artists with professional ambitions, are not the right target audience for these specific tools. So far, we have not spoken to a single artist with a lack of inspiration or musical ideas.
    To go even further, many artists seem to enjoy these first steps in their creation process the most, since that is the point where they are really being creative, without having to think too much about the end result yet. The idea that AI needs to help us kick-start our human creativity feels wrong. Of course, if you're not a musician and you need some quick stock music for your TikTok videos, these tools could be very helpful. But professional musicians gain little from them. Two weeks ago, Google and YouTube introduced the Lyria AI model, accompanied by various Dream Track applications. The most prominent app allows users to enter a topic (write a prompt), choose an artist from a carousel and generate a 30-second soundtrack. The AI generates original lyrics and backing tracks, and an AI-generated voice sings the song for you. But again, this application is aimed at social media content – mainly promoting YouTube Shorts. When diving into other websites that showcase AI music tools, such as AI Tools Club, you'll see that the majority of applications are just not created to help professional musicians. Like Mubert, they mostly want to support social media, podcast or app creators. (The other Dream Track app allows you to transform your humming or beatboxing into an instrument such as a saxophone or even an entire orchestra. This is very similar to Vochlea, an application that helps musicians quickly “sketch” their ideas into MIDI.) Although AI tools that translate humming into MIDI might be helpful, we at Open Culture Tech feel that there is a growing gap between the AI (application) community and the wider professional music community. Yes, there are very interesting music AI initiatives out there, such as The Sound of AI YouTube channel (recommended!), but these are not plug-and-play apps like AIVA. In the end, the limbs of Frankenstein’s monster are not developed for professional artists but for TikTok creators.
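Under the hood, every hum-to-MIDI "limb" rests on the same simple mapping: a detected pitch in hertz is converted to the nearest MIDI note number, with A4 at 440 Hz defined as note 69. This is a generic sketch of that standard conversion, not code from Vochlea or Dream Track; real tools add pitch detection and timing on top.

```python
import math

# Sketch: map a detected pitch in hertz to the nearest MIDI note number.
# MIDI defines A4 (440 Hz) as note 69, with 12 notes per octave.

def hz_to_midi(freq_hz: float) -> int:
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A hummed C major triad, as detected fundamental frequencies (C4, E4, G4):
melody = [hz_to_midi(f) for f in (261.63, 329.63, 392.0)]
```

The hard part of such tools is not this formula but reliably extracting the fundamental frequency from a noisy hummed recording.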
    That is why Open Culture Tech is currently working on three easy-to-use AI applications that aim to support or inspire professional musicians during different parts of their creative process. Together with our collaborating artists and AI developers, we will test our first prototypes soon and share our findings with you in upcoming updates.

  • Collective AR during ADE

    Last month, we created an AR (Augmented Reality) experience for upcoming DJ Ineffekt that premiered at the Amsterdam Dance Event. It enabled the audience to collectively enter the world of Ineffekt in AR while listening to his new EP called High Hopes. Ineffekt had previously worked with director Cas Mulder to create a short video for his EP that was released last August. The clip features the producer in a seemingly empty space, watching an organism grow inside a contamination box. Since the contamination box and its contents had all been designed by an amazing VFX team, we figured it would be interesting to create a ‘follow-up’ to the video in AR. In other words, what happened to the growing organism in the glass box? The AR experience was presented four times during ADE and was freely accessible. Visitors would gather on the street in front of a record store. The High Hopes EP would start playing, and we would invite the audience to scan a QR code that would instantly serve the AR experience, without the need to download an app first. Once inside the AR experience, the audience found themselves surrounded by a pulsating, post-apocalyptic landscape. Fragments from the contamination box, now almost completely overrun, can be seen. The yellow organism that was growing so quickly inside the box is now everywhere. And after some exploration, the audience could see that Ineffekt is watching this (and them?) all from a short distance, seemingly content. The message: you cannot restrain growth; nature (or creativity?) will always find its way.

  • Behind the Scenes

    At Open Culture Tech, we are developing our own open-source and easy-to-use AI, Augmented Reality and Avatar creation tools for live music. Open Culture Tech issued an Open Call in the spring of 2023, in which musicians in the Netherlands could join our development programme. From a large number of applications, 10 diverse artists were selected – from punk to R&B and from folk to EDM – each with a unique research question that could be answered by applying new technology. Questions such as: “I’ve put my heart and soul into creating an EP that’s 15 minutes long. I want to perform this new work live on stage, but I need at least 30 minutes of music to create a full show.” Or: “How can I enhance the interaction between me and the audience when I play guitar on stage?” Together with these artists, we (the Open Culture Tech team) come up with original answers to their questions that we translate into open-source AI, AR or Avatar technology – with easy-to-use interfaces. Then, each prototype solution is tested during a live pilot show. After 10 shows, we will have tested various prototypes that we will combine into one toolkit. In this way, we aim to make the latest technology more accessible to every artist. Below, we share the results of our first two prototypes and live pilot shows.

    OATS × Avatars
    The first Open Culture Tech pilot show was created in close collaboration with OATS. Merging punk and jazz, OATS are establishing themselves through compelling and powerful live shows. Their question was: “How can we translate the lead singer's expression into real-time visuals on stage?” To answer this question, we decided to build and test the first version of our own Avatar creation tool, with the help of Reblika. Unreal Engine is an industry standard for real-time 3D, used by many major 3D companies in the music, gaming and film industries. But its learning curve is steep and the prices for experts are high.
    Reblika is a Rotterdam-based 3D company with years of experience in creating hi-res avatars for the creative industry. They are currently developing their own avatar creator tool – built on Unreal Engine – called Reblium. For Open Culture Tech, Reblika is developing a free, stand-alone, open-source edition with an easy-to-use interface, aimed at helping live musicians. The master plan was to capture the body movement of the lead singer (Jacob Clausen) with a motion capture suit and link the signal to a 3D avatar in a 3D environment that could be projected live on stage. In this way, we could experience what it’s like to use avatars on stage and find out what functionalities our Avatar creation tool would need. In this case, the aesthetic had to be dystopian, alienating and glitchy. Our first step was to create a workflow for finding the right 3D avatar and 3D environment. OATS preferred a gloomy character in a hazmat suit, moving through an abandoned factory building. We decided to use the Unreal Engine Marketplace, a website that offers ready-made 3D models. To create the 3D environment, Jacob Clausen decided to use a tool called Polycam to scan an abandoned industrial area. Polycam is an easy-to-use app that uses LiDAR to scan any physical 3D object or space and render it into a 3D model. The 3D scan (factory) and avatar (hazmat suit) were imported into Unreal Engine, and the avatar was connected to a motion capture suit. This allowed Jacob Clausen to become the main character on screen and test the experience live on stage at Popronde in EKKO in Utrecht, on 19 October at 23:30. What followed was a show that taught us a lot. The venue provided us with a standard beamer/projector and a white screen behind the stage. Due to an overactive smoke machine, an unstable internet connection and a low-res projector, the avatar was not always visible on screen.
Nevertheless, there were certainly moments where everything came together. At those moments, the synchronization between Jacob and his avatar was fascinating, the storytelling was amazing and the technology showed a lot of potential.

The motion capture suit was very expensive and we had to borrow it from Reblika, which is neither sustainable, accessible nor inclusive. For our next prototype, we will look at motion capture AI technology, such as Rokoko Vision, instead of suits. The 3D avatar and environment were shown from different camera angles, which required someone to keep changing the camera angle in real time within the Unreal Engine software. Going forward, we should add predefined camera angles, so that no extra person is needed to control the visuals.

Ineffekt × AR

The second use case of Open Culture Tech was provided by Ineffekt. Through a blend of glistening vocal pieces, strings of dreamy melodies and distinctive rhythms, Ineffekt cleverly takes on a sound that feels both comfortable and elusive. Ineffekt’s question was: “how can I translate my album artwork into a virtual experience that could transform any location into an immersive videoclip?” To answer this question, we decided to build and test the first version of our own AR creation tool, with the help of Superposition, an innovative design studio for interactive experiences.

For his latest album artwork and music video, Ineffekt used a 3D model of a greenhouse in which yellow organisms are growing. This 3D model formed the basis for the AR experience we tested during the Amsterdam Dance Event. Our main goal was to create and test an intimate mobile AR experience built with open-source 3D technologies. This meant that we couldn’t use popular AR tools like Spark AR (Meta), Snap AR (Snapchat) or ARCore (Google).
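One open alternative to those proprietary AR kits is the WebXR browser API, an open web standard. As a rough sketch of how a mobile AR session can be bootstrapped from a web page (the function names are our own illustration, and the `xr` object is passed in only so the helpers can be exercised outside a real browser, where it would be `navigator.xr`):

```javascript
// Check whether the browser can offer an immersive AR session.
async function supportsImmersiveAR(xr) {
  if (!xr || typeof xr.isSessionSupported !== "function") return false;
  return xr.isSessionSupported("immersive-ar");
}

// Start an AR session over the camera feed. The "hit-test" feature lets
// virtual objects be anchored to real-world surfaces the camera detects.
async function startARExperience(xr) {
  const ok = await supportsImmersiveAR(xr);
  if (!ok) throw new Error("Immersive AR not available on this device");
  return xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
}
```

Because this runs in any WebXR-capable mobile browser, a QR code on a poster is enough to take the audience straight into the experience, with no app store in between.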
In our first experiment, Blender was used to create a hi-res 3D model and WebXR was used to turn this model into a mobile Augmented Reality application. Superposition also experimented with App Clips on iOS and Play Instant on Android, techniques that let you launch a lightweight version of an app – for example after scanning a QR code – without installing the full app.

On October 20 and 21, we tested our first AR prototype in front of Black & Gold in Amsterdam, during ADE. After scanning the QR code on a poster, the audience was taken to a mobile website that explained the project. Then the camera of your phone would switch on and you’d see the yellow plants/fungus grow around you. In the back, someone was sitting quietly, a silent avatar. The overall experience was poetic and intimate.

As with OATS, we learned a lot. It is possible to create an intimate and valuable experience with mobile Augmented Reality technology, and it is possible to build that experience entirely with open-source technology. However, the experience was static and did not react to the presence of the viewer. Going forward, we should look into the possibilities of adding interactive elements.

Our ultimate goal is to develop accessible AI, AR and Avatar creation tools that any musician can use without our support. In the examples above, this has not yet been the case: we have mainly tested the workflow of existing tools and not created our own tools yet. Going forward, we will start building and testing our own software interfaces and let artists create their own AI, AR and Avatar experiences from scratch. In this way, we hope to ensure that up-and-coming musicians also get the opportunities and resources to work with the latest technology.

This article was also published on MUSIC x on Thursday, 16 November 2023.

  • Aespa, avatars and live holograms

    As part of Open Culture Tech’s Avatar Series, we delve into the unique concept of Aespa, a South Korean girl group that has carved out a unique niche by blurring the boundaries between reality and the digital realm. We will look at their innovative use of technology and storytelling, but also at ways to apply these technologies yourself.

    Aespa made their debut in November 2020, during the height of the COVID-19 pandemic, with the single “Black Mamba”, a catchy K-pop track that combines elements of pop, hip-hop and electronic dance music. One of the most striking aspects of Aespa’s debut was the addition of a storyline that used digital avatars. The idol group consists of four human members – Karina, Giselle, Winter, and Ningning – who are accompanied by digital counterparts known as “æ”. There is æ-Karina, æ-Giselle, æ-Winter, and æ-Ningning, and they all live in a parallel virtual world called “æ-planet”. Aespa was introduced to the world as a group of hybrid action figures in a sci-fi story set in both the physical and the virtual world. Aespa and their digital counterparts had to fight against Black Mamba, a typical villain who wanted to destroy both worlds. The audience could follow the story in a three-part series on YouTube, and supporting content appeared on various social media channels for months.

    Fast forward to 2023, and you hardly see any avatars on Aespa's online channels anymore. The storyline about action heroes has been replaced by a staged storyline about four close friends who share a lot of backstage footage. Still, with Aespa, technology is never far away. Even though Aespa’s social media channels no longer show avatars, they are still prominently present at the live shows. Last summer, Joost de Boo, member of the Open Culture Tech team, was in Japan to see Aespa live at the Tokyo Dome, together with 60,000 excited Japanese fans.
“Before the show, while everyone was looking for their seats, the Black Mamba avatar video series was broadcast on a huge screen”, Joost recalls. “It really set the stage and took the audience into the world of Aespa. But not only that. It was also a natural build-up towards the start of the show, where the four members first entered the stage as dancing avatars, after which they were followed by their human versions.”

Joost found the live show at the Tokyo Dome both impressive and questionable. “There is a certain aesthetic and ideal of physical beauty that is being pursued by Aespa – and almost any other idol band I know – and I wonder if that is something we should promote”. Over the years, more and more (ex-)members of K-pop groups have spoken out about the dark side of K-pop culture, including sexism, abuse and mental health stigmas. We will get back to the subject of stereotyping and avatars in another article. “But without ignoring these concerns, the technology used in the Aespa show is something we can and should definitely learn from.”

To be fair, projecting a video on stage doesn’t sound very revolutionary. Still, combining a projection on stage with a storyline on social media and YouTube does not happen very often. Furthermore, this was not the only appearance of the avatars on stage. After the first 20 minutes of the show, a large screen was wheeled onto the stage. What happened next can be seen in the videos below. The rest of the show followed the same structure as the chronological content on YouTube and social media: the avatars disappeared and a group of human friends remained.

What have we seen on stage, and what can we learn from it? First, the Black Mamba storyline. It is important to note that Aespa is created and owned by SM Entertainment, a South Korean multinational entertainment agency that was one of the leading forces behind the global popularization of South Korean popular culture in the 1990s.
SM Entertainment is active throughout the entire entertainment industry and owns and operates record labels, talent agencies, production companies, concert production companies, music publishing houses, design agencies, and investment groups. So what we have seen is a multimillion-euro cross-media production in which dozens of talented designers, artists and developers have worked together. To create Aespa’s live show, SM Entertainment worked together with (among others) 37th Degree and Giantstep, two international award-winning agencies: from creating anime characters, modeling 3D avatars and designing merchandise to story writing, directing and filming.

But besides the impressive high-budget content production, the most innovative part is not the storyline or the avatars themselves, but the way these characters appeared on stage after about 20 minutes. According to LG, this was the first time ever that 12 brand-new “Transparent OLED” screens were combined into one large hologram screen on a stage – a new technology that we can expect to become much more common in the coming years. You can check out this link if you want to know more about these screens, or read our previous article about cheap alternatives. Source: https://www.giantstep.com/work/sm-ent-aespa-synk/

To wrap up: as an artist in the Netherlands you probably don't have the budgets of SM Entertainment. Nevertheless, it is not impossible to make up a storyline or to invent an alter ego – if you want to. It is also not impossible to translate that story into audiovisual content such as (moving) images. Maybe generative AI can help you there? But the most exciting thing is this: soon it will also be possible to translate your story into 3D with our very own Open Culture Tech “Avatar tool”. Last week we tested our first prototype live on stage and the results are more than promising. Curious about our progress? Then keep an eye on the newsletter for more updates. Want to know more about Aespa?
Read their latest interview with Grimes in Rolling Stone Magazine.

  • “Alexa, can you sing me a song?”

    Voice cloning technology has developed rapidly, offering artists new ways to explore their creative depths. Artists like Holly Herndon and Ghostwriter have managed to shake up the music industry, for better or for worse. Although this technology is best known for the robot voices of Alexa or Google Translate, it also has applications in music. Musicians can now add vocal textures to their compositions, experiment with harmonies and collaborate with virtual artists. But what exactly is this, and how do I use this technology as an artist?

    Voice clones, voice swaps and synthetic voices are collectively referred to as vocal synthesis. The difference between these three technologies is as follows:

    1. Voice clones
    A voice clone is a virtual copy of an existing voice. In order to make a voice clone, you need audio material of the voice you want to recreate, such as voice messages or music. By teaching an algorithm the characteristics of that voice, such as timbre, tone and pronunciation, the voice can be imitated.

    2. Voice swaps
    Voice swap technology allows you to change a voice in an audio recording or during a live conversation. It is mainly used to change or replace one person's voice with another's while preserving the original content of the speech. Voice swap technology is currently used, for example, to dub voices in films, for virtual assistants, or to anonymize conversations.

    3. Synthetic voices
    Synthetic voices are completely computer-generated voices, in which text is automatically converted into spoken words. These voices are often used in digital assistants, GPS navigation systems or audiobooks, and can be easily customized and personalized.

    How do I use vocal synthesis as an artist? There are 1001 possibilities when it comes to using vocal synthesis in music. It can offer good solutions for artists who cannot sing well but want to make full tracks, or for artists who are looking to expand their own voice.
Vocal synthesis software can be used to generate harmonies and backing vocals that complement your live performances or recordings. This allows you to add extra voices to your music without additional vocalists – for example, by cloning your own voice or by adding a synthetic voice as a second voice. Below are some examples you can experiment with:

1. Vocaloid: synthetic singing voices
Vocaloid is a synthetic voice creation tool that allows musicians to create customizable virtual singing voices. The tool synthesizes vocals using pre-recorded voice banks: users enter lyrics and melodies and then drag notes across a staff to create their own compositions. The software includes a variety of voice banks with different tones and styles. Although Vocaloid is easy to use, it is very difficult to get a good-sounding output; the software requires quite a bit of adjustment and subsequent trial and error. Musicians can experiment with different vocal timbres, making it a useful tool for creating distinctive vocal textures. New versions of the software also offer the ability to sing in different languages.

2. iZotope VocalSynth: vocal morphing
iZotope VocalSynth is a voice processing plugin that allows artists to manipulate their voices in real time. By combining live singing with altered vocal elements, musicians can bring depth and character to their music. A notable feature is VocalSynth’s ability to transform voices into robotic or alien textures, perfect for artists venturing into electronic, experimental or sci-fi genres. It also facilitates the creation of harmonies, vocal effects and subtle enhancements.

3. Alter/Ego: creating voices
Another option is Alter/Ego, which may not be the vocal synth you're used to, but can certainly be used to create voices. It offers a simple interface and a wide selection of vocal libraries, allowing users to easily create different singing voices.
The system is compatible with various DAWs, making it easy to integrate into your production workflow. Alter/Ego may lack the advanced features and customization options that some other vocal synthesis software offers; musicians who require highly customized vocal effects may find this somewhat limiting compared to more complex solutions.
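To get a feel for what happens under the hood of robotic vocal effects like those mentioned above, here is a toy version of the classic ring-modulation “robot voice” trick: multiplying the input signal by a sine carrier. This is only a sketch of the principle – plugins like iZotope VocalSynth combine far more sophisticated processing – and the function below is our own illustration, not part of any of these tools.

```javascript
// Toy "robot voice" via ring modulation: multiply each input sample by
// a sine carrier. Low carrier frequencies (roughly 30-100 Hz) give the
// classic metallic, Dalek-like timbre.
// samples: array of floats in [-1, 1]; sampleRate and carrierHz in Hz.
function ringModulate(samples, sampleRate, carrierHz) {
  return samples.map(
    (s, i) => s * Math.sin(2 * Math.PI * carrierHz * (i / sampleRate))
  );
}

// Example: a constant input at 4 samples/second with a 1 Hz carrier
// simply traces out one cycle of the carrier itself.
const out = ringModulate([1, 1, 1, 1], 4, 1);
```

In a real setup you would run this over audio buffers from your DAW or the Web Audio API; the maths stays the same.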

  • Meet our artists

    Open Culture Tech is working towards an open-source toolkit with existing and self-built tools for AI, AR and Avatar technology. To build this toolkit, we will organize several live shows with the collaborating musicians in the coming year to test the tools in practice. We are very proud to introduce the collaborating artists below. Keep an eye on our agenda so you don't miss any of the shows. If you can't wait, come tomorrow to the Popronde in EKKO (Utrecht), where OATS will experiment with live avatar technology. Or visit ADE, where Ineffekt presents an immersive live experience with AR technology.

    Ineffekt
    Ineffekt has proven himself to be a multi-genre producer and DJ who fuses all the sounds he adores. His never-ending energy is heard in the adventurous tracks selected during his sets. Having had his breakthrough year in a time when clubs were closed, festivals were forbidden and dancing happened in secret, this young artist is driven to let the world know who he is. Instagram

    OATS
    Melding the aggression of punk and metal with the technical intricacies of jazz, OATS are making their mark with an emotional and hard-hitting live act. Their refreshing blend of genres such as emo, math rock and experimental hip hop ensures a passionate performance not to be missed. They have performed at Complexity Fest 2023 and have been selected as NMTH talent for Popronde 2023. Instagram

    Nana Fofie
    Nana Fofie is a 28-year-old singer and songwriter born and raised in the Netherlands of Ghanaian descent. Nana grew up in a wholesome Ghanaian household that others might consider quite untraditional. Her late father was an experienced singer who showed her the various sides of music, from modern soul to traditional Ghanaian music to R&B. Nana had her first cosigns from the likes of Nicki Minaj in 2019, with a highlight being a performance at Amsterdam’s Ziggo Dome in front of 15,000 people.
Nana’s career has continued to grow, with over 60 million total streams, and 2023 marks the year of her breakthrough project. Instagram

Alex Figueira
Alex Figueira (pronounced Fee-Gay-Ra) is a versatile Venezuelan-Portuguese musician, producer, DJ, and record collector based in Brussels. He's known as the “hardest working man in Tropical music” and has founded successful music projects like Fumaça Preta, Conjunto Papa Upa, Vintage Voudou Records, Music With Soul Records, and the Heat Too Hot music studio. His solo debut album "Mentallogenic" received critical acclaim, and he's gained recognition from industry leaders like Kenny Dope and Gilles Peterson. Figueira blends lesser-known genres from Africa, the Caribbean, and Latin America with vintage soul, funk, and psychedelia, resulting in a unique and experimental musical style. Instagram

Melleni
Melleni is the moniker of songwriter and producer Melle Jutte. His musical odyssey weaves together an eclectic blend of influences, encompassing the pulsating rhythms of house and techno, the diverse world of global grooves, and the immersive soundscapes of experimental and ambient music. Melleni's distinctive sound is subtly enhanced by the inclusion of live elements and vocals, capturing the essence of Melle Jutte's artistic persona. Together, this musical fusion creates an enchanting narrative that resonates with audiences worldwide. Instagram

Eveline Ypma
Eveline Ypma is a film composer, multi-instrumentalist and sound artist from Amsterdam, The Netherlands. She specializes in compositions with a modest and playful character. Her love for nature can be heard through the soundscapes in her compositions, which bring an organic and quirky dimension to her music. As a studious musician, Eveline explores her surroundings for interesting sounds, from volcanic vibrations in Iceland to human and natural sounds at a beach entrance in the Dutch dunes.
With a combination of field recordings, soundscapes and musical instruments she tells a story in a masterly way. Her use of characteristic sounds and her authentic approach make her compositions unique. Website

Jan Modaal
Jan Modaal writes punk “smartlappen” (Dutch tearjerker ballads). Jan's lyrics reveal a great love for the Dutch language. The songs are somewhere between a cheerful indictment and an angry declaration of love. “Wil je dood ofzo?” (“Do you want to die or something?”), Jan Modaal's debut, was released in 2020. Jan explains his vision of the world in four cutting, searing tracks. The second EP, Dode Witte Man (Dead White Man), was released in 2023. Hard, catchy and straightforward. Jan Modaal is a man of the people, the strident voice of a generation. Instagram

Sophie Jurrjens
Sophie Jurrjens lives for music. Sophie is a composer and creative entrepreneur, interested in merging music with the environment. After completing her bachelor's degree at the Utrecht Conservatory, she graduated from the ArtScience interfaculty of the Royal Conservatoire and the Royal Academy of Art in The Hague. To let people experience the power of music, Sophie has developed the Off-Track app. Off-Track turns going outside into an experience by adding music to a walking route; the team writes the music themselves and adapts it to the route you walk. Instagram

Smitty
Smitty (27) is a multi-talent from Haarlem who masters both the art of rapping and the skills of a producer. With roots in the Dominican Republic and lyrics in Dutch and English, Smitty has serious international appeal. The rapper/producer has been a valuable member of hip hop/trap collective Black Acid since 2016 and also makes strong moves solo. Smitty is a passionate storyteller, and you can hear that in his music. His sound is unequivocally unique: tough and sincere, supported by razor-sharp lyrics. During his show at Creative by Nature, the rapper played an acoustic version of his songs for the first time – a step that proved his versatility as an artist.
Instagram

Casimir & Sofia
Sofia Maria and Casimir regularly perform together at festivals and clubs throughout the Netherlands. As a duo they have only been playing together for a few years, yet they already have a number of major festivals under their belt, from Amsterdam Dance Event to Best Kept Secret – always from behind a digital turntable and often in front of a broad audience. They play a variety of styles between house, electro and a touch of breakbeat here and there. Instagram (Casimir) Instagram (Sofia)

Vincent Höfte
Vincent Höfte is 30 years old, lives in The Hague and works in Amsterdam as an engineer. Music, and piano in particular, has been his hobby for 20 years. After briefly considering the conservatory, it remained a hobby. His situation is recognizable to many: "little time, I would like to play more again". Fortunately, there are public station pianos, and they always provide motivation for an unsolicited station concert. Especially for himself, but perhaps also for the casual spectator.
