  • “Alexa, can you sing me a song?”

    Voice cloning technology has developed rapidly, offering artists new ways to explore their creative depths. Artists like Holly Herndon and Ghostwriter have managed to shake up the music industry, for better or for worse. Although this technology is best known for the robot voices in Alexa or Google Translate, it also has applications in music. Musicians can now add vocal textures to their compositions, experiment with harmonies and collaborate with virtual artists. But what exactly is this, and how can you use it as an artist?

    The combination of voice clones, voice swaps and synthetic voices is called vocal synthesis. The difference between these three technologies is as follows:

    1. Voice clones. A voice clone is a virtual copy of an existing voice. To make a voice clone, you need audio material of the voice you want to recreate, such as voice messages or music. By teaching an algorithm the characteristics of that voice, such as timbre, tone and pronunciation, the voice can be imitated.

    2. Voice swaps. Voice swap technology allows you to change a voice in an audio recording or during a live conversation. It is mainly used to replace one person's voice with another's while preserving the original content of the speech. Voice swap technology is currently used, for example, to dub voices in films, for virtual assistants or to anonymize conversations.

    3. Synthetic voices. Synthetic voices are completely computer-generated voices, in which text is automatically converted into spoken words. These voices are often used in digital assistants, GPS navigation systems or audiobooks and can be easily customized and personalized.

    How do I use vocal synthesis as an artist? There are 1001 possibilities when it comes to using vocal synthesis in music. It can offer good solutions for artists who cannot sing well but want to make full tracks, or for artists who are looking to expand their own voice. Vocal synthesis software can be used to generate harmonies and backing vocals that complement your live performances or recordings. This allows you to add extra voices to your music without extra vocalists, for example by cloning your own voice or by adding a synthetic voice as a second voice. Below are some examples you can experiment with:

    1. Vocaloid: synthetic singing voices. Vocaloid is a synthetic voice creation tool that allows musicians to create customizable virtual singing voices. The tool synthesizes vocals using pre-recorded voice banks: users enter lyrics and melodies and drag them across a staff to create their own compositions. The software includes a variety of voice banks with different tones and styles. Although Vocaloid is easy to use, getting a good-sounding result is difficult; the software requires quite a bit of adjustment and subsequent trial and error. Musicians can experiment with different vocal timbres and languages, which makes it a useful tool for creating distinctive vocal textures. Newer versions of the software also offer the ability to sing in different languages.

    2. iZotope VocalSynth: vocal morphing. iZotope VocalSynth is a voice processing plugin that allows artists to manipulate their voices in real time. By combining live singing with altered vocal elements, musicians can bring depth and character to their music. An experimental feature is VocalSynth's ability to transform voices into robotic or alien textures, perfect for artists venturing into electronic, experimental or sci-fi genres. It also facilitates the creation of harmonies, vocal effects and subtle enhancements.

    3. Alter/Ego: creating voices. Another option is Alter/Ego, which may not be the vocal synth you're used to, but can certainly be used to create voices. It offers a simple interface and a wide selection of vocal libraries, allowing users to easily create different singing voices. The plugin is compatible with various DAWs, making it easy to integrate into your production workflow. Alter/Ego may lack the advanced features and customization options that some other vocal synthesis software offers; musicians who need highly complex, customized vocal effects may find it somewhat limiting.
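    To get a feel for what a synthetic voice is, you can try one directly in the browser. Below is a minimal sketch in TypeScript using the standard Web Speech API; it is not one of the tools above, just an illustration of text being converted into spoken words. Which voices are available depends on the browser and operating system, and the voice-name filter is a placeholder.

```typescript
// Minimal synthetic-voice sketch using the browser's built-in Web Speech API.
// Not a voice clone: it simply converts text into spoken words with a
// system-provided voice. Run it in a page or browser console.

function speak(text: string, preferredVoice?: string): void {
  const utterance = new SpeechSynthesisUtterance(text);

  // Pick a voice by (partial) name if one is available, otherwise use the default.
  const voices = window.speechSynthesis.getVoices();
  const match = preferredVoice
    ? voices.find((v) => v.name.toLowerCase().includes(preferredVoice.toLowerCase()))
    : undefined;
  if (match) utterance.voice = match;

  // Pitch and rate can be tweaked to make the voice sound more or less robotic.
  utterance.pitch = 1.0; // 0 to 2
  utterance.rate = 0.9;  // 0.1 to 10

  window.speechSynthesis.speak(utterance);
}

// Example: speak a line that could serve as a spoken intro in a demo.
speak("Alexa, can you sing me a song?", "english");
```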

  • Meet our artists

    Open Culture Tech is working towards an open-source toolkit with existing and self-built tools for AI, AR and Avatar technology. To build this toolkit, we will organize several live shows with collaborating musicians in the coming year to test tools in practice. We are very proud to introduce the collaborating artists below. Keep an eye on our agenda so you don't miss any of the shows. If you can't wait, come to the Popronde at EKKO (Utrecht) tomorrow, where OATS will experiment with live avatar technology. Or visit ADE, where Ineffekt presents an immersive live experience with AR technology.

    Ineffekt. Ineffekt has proven himself to be a multi-genre producer and DJ who fuses all the sounds he adores. His never-ending energy is heard in the adventurous tracks selected during his sets. Having his breakthrough year in a time when clubs were closed, festivals were forbidden and dancing happened in secret, this young artist is driven to let the world know who he is. Instagram

    OATS. Melding the aggression of punk and metal with the technical intricacies of jazz, OATS are making their mark with an emotional and hard-hitting live act. Their refreshing blend of genres such as emo, math rock and experimental hip hop ensures a passionate performance not to be missed. They have performed at Complexity Fest 2023 and have been selected as NMTH talent for Popronde 2023. Instagram

    Nana Fofie. Nana Fofie is a 28-year-old singer and songwriter of Ghanaian descent, born and raised in the Netherlands. Nana grew up in a wholesome Ghanaian household that others might consider quite untraditional. Her late father was an experienced singer who showed her the various sides of music, from modern soul to traditional Ghanaian music to R&B. Nana had her first cosigns from the likes of Nicki Minaj in 2019, with a highlight performing at Amsterdam's Ziggo Dome in front of 15,000 people. Nana's career has continued to grow with over 60 million total streams, and 2023 marks the year of her breakthrough project. Instagram

    Alex Figueira. Alex Figueira (pronounced Fee-Gay-Ra) is a versatile Venezuelan-Portuguese musician, producer, DJ and record collector based in Brussels. He's known as the "hardest working man in Tropical music" and has founded successful music projects like Fumaça Preta, Conjunto Papa Upa, Vintage Voudou Records, Music With Soul Records, and the Heat Too Hot music studio. His solo debut album "Mentallogenic" received critical acclaim, and he's gained recognition from industry leaders like Kenny Dope and Gilles Peterson. Figueira blends lesser-known genres from Africa, the Caribbean, and Latin America with vintage soul, funk, and psychedelia, resulting in a unique and experimental musical style. Instagram

    Melleni. Melleni is the moniker of songwriter and producer Melle Jutte. His musical odyssey weaves together an eclectic blend of influences, encompassing the pulsating rhythms of house and techno, the diverse world of global grooves, and the immersive soundscapes of experimental and ambient music. Melleni's distinctive sound is subtly enhanced by the inclusion of live elements and vocals, capturing the essence of Melle Jutte's artistic persona. Together, this musical fusion creates an enchanting narrative that resonates with audiences worldwide. Instagram

    Eveline Ypma. Eveline Ypma is a film composer, multi-instrumentalist and sound artist from Amsterdam, the Netherlands. She specializes in compositions with a modest and playful character. Her love for nature can be heard through the soundscapes in her compositions; this brings an organic and quirky dimension to her music. As a studious musician, Eveline explores her surroundings for interesting sounds, from volcanic vibrations in Iceland to human and natural sounds at a beach entrance in the Dutch dunes. With a combination of field recordings, soundscapes and musical instruments she tells a story in a masterly way. Her use of characteristic sounds and her authentic approach make her compositions unique. Website

    Jan Modaal. Jan Modaal writes punk "smartlappen" (Dutch tear-jerkers). Jan's lyrics reveal a great love for the Dutch language. The songs sit somewhere between a cheerful indictment and an angry declaration of love. "Wil je dood ofzo?", Jan Modaal's debut, was released in 2020. Jan explains his vision of the world in four cutting, tearing pieces. The second EP, Dode Witte Man (Dead White Man), was released in 2023. Hard, catchy and straightforward. Jan Modaal is a man of the people, the strident voice of a generation. Instagram

    Sophie Jurrjens. Sophie Jurrjens lives for music. Sophie is a composer and creative entrepreneur, interested in merging music with the environment. After completing her bachelor's degree at the Utrecht Conservatory, she graduated from the interfaculty of ArtScience at the Royal Conservatory/Royal Academy of Visual Arts in The Hague. To let people experience the power of music, Sophie has developed the Off-Track app. Off-Track transforms going outside into an experience by adding music to a walking route. They write the music themselves and adapt it to the route you walk. Instagram

    Smitty. Smitty (27) is a multi-talent from Haarlem who masters both the art of rapping and the skills of a producer. With roots in the Dominican Republic and lyrics in Dutch and English, Smitty has serious international appeal. The rapper/producer has been a valuable member of hip hop/trap collective Black Acid since 2016 and also makes strong moves solo. Smitty is a passionate storyteller and you can hear that in his music. His sound is unequivocally unique: tough and sincere, and supported by razor-sharp lyrics. During his show at Creative by Nature, the rapper played an acoustic version of his songs for the first time, a step that showed his versatility as an artist. Instagram

    Casimir & Sofia. Sofia Maria and Casimir regularly perform together at festivals and clubs throughout the Netherlands. As a duo they have only been playing together for a few years, yet they already have a number of major festivals under their belt, from Amsterdam Dance Event to Best Kept Secret, always from behind a digital turntable and often in front of a broad audience. They play a variety of styles between house, electro and a touch of breakbeat here and there. Instagram (Casimir) Instagram (Sofia)

    Vincent Höfte. Vincent Höfte is 30 years old, lives in The Hague and works in Amsterdam as an engineer. Music, and piano in particular, has been his hobby for 20 years. After briefly considering the conservatory, it remained a hobby. His situation is recognizable to many: "too little time, I would like to play more again". Fortunately, there are public station pianos, and they always provide motivation for an unsolicited station concert. Especially for himself, but perhaps also for the casual spectator.

  • Artist interview: Sofia Maria & Casimir

    In the coming weeks we will share interviews with the artists who will be experimenting live with the latest AI, AR and Avatar technology as part of Open Culture Tech's Test Program. Together with designers and programmers, they are given free rein to develop their own show in which they can explore the creative possibilities of technology. The series of interviews kicks off with Sofia Schuijt and Cas Mulder, better known as Sofia Maria and Casimir, an up-and-coming DJ duo from Amsterdam.

    Sofia Maria and Casimir regularly perform together at festivals and clubs throughout the Netherlands. As a duo they have only been playing together for a few years, yet they already have a number of major festivals under their belt, from Amsterdam Dance Event to Best Kept Secret, always from behind a digital turntable and often in front of a broad audience. They play a variety of styles between house, electro and a touch of breakbeat here and there.

    Why did Sofia Maria and Casimir sign up for Open Culture Tech's Test Program? If you ask Casimir, it is because you have to stay informed of the latest developments. "This ensures that you can stay in touch with a broad, younger target group." In addition to being a DJ, Casimir is also a filmmaker and has a great interest in theater. He sees that the theater sector has a lot of difficulty filling the halls and reaching a young new audience, which means that there are mainly older people in the seats. New technology can offer a solution here. Take augmented reality (AR), for example. This technology is perhaps best known to a younger target group in the form of AR filters on TikTok and Snapchat. You therefore see more and more major pop artists collaborating with these apps to release their own AR filters for use during live concerts. In this way, many pop artists bring the younger audience's way of experiencing things into their live show.

    But Sofia Maria and Casimir also notice that many artists are reluctant to adopt new technology, especially when it comes to artificial intelligence (AI). The idea that in the future there will be computers that can make exactly the same music as humans is of course frightening. Yet Sofia Maria and Casimir are not worried at all. "Music, and any other form of art, is about interpreting the human experience, regardless of the medium through which you deliver it. AI can never describe this human experience better than a human itself because it is simply not human."

    Despite their down-to-earth attitude, Sofia Maria and Casimir are extremely curious about the possibilities of artificial intelligence. For example, they are involved in the development of WAIVE Studio, a series of AI tools that can generate live drum beats and samples based on audio archive material from Sound and Vision in Hilversum. It is a unique way for them to add distinctive sounds to their music and explore how they can develop further in the rapidly changing music industry. But their interest in artificial intelligence does not stop there. Casimir, for example, would also find it interesting if a computer could help him match the pitches or BPMs of tracks and make suggestions. "You'd better become good friends with it." Even though Sofia Maria is less enthusiastic about this, Casimir would still be interested: "In my opinion, receiving help from an AI does not detract from the artistic freedom of the artist."

    Both DJs agree about using AI for visual elements in a live show. A lack of budget often forces them to create visual elements themselves, but they do not always have the time to do so. Casimir, for example, uses Midjourney, a simple AI tool that generates images based on text prompts, to generate header visuals for SoundCloud. But both DJs would benefit most from an AI that could process their music in real time into VJ visuals. Sofia Maria is especially curious about AI-generated projections on the walls of the venue during their shows, or on the outside wall of the club.

    When we ask Sofia Maria and Casimir what they think the DJ profession will look like in 10 years, we get a relaxed answer. Both agree that their own way of working, from a human experience, will not change much, and that the new technology will offer many new possibilities, just as the rise of digital audio gave many more people access to music. The vinyl record still exists, but the vast majority of people listen to music via streaming. "The DJ will probably be cut back at some point in commercial venues on Rembrandtplein. But I don't know if that's a bad thing in the end."

    They have a clear image in mind for their own Open Culture Tech live show. It must be a spectacle in which the boundaries between theater and club become blurred, in which theatrical elements push the boundaries and the DJ becomes a conductor who plays not only the music but also the visuals and the audience.

  • Is AI a threat to musicians?

    AI-generated music is on the rise. A large chunk of this music has not been produced by a human hand; anyone can generate a track at the touch of a button. On the other hand, there are also more and more AI tools that let you generate separate musical elements, such as chord progressions, melodies or drum rhythms, which you then have to work into a complete track yourself. What do these developments mean for you as a musician and your way of working? Should you worry about this new artificial competitor, or is there nothing to worry about?

    Holly Herndon is an American musician who experiments a lot with AI, voice cloning and music rights.

    As Nick Cave aptly said earlier this year: "Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder." Most people make music for themselves and for others, to express feelings or to convey a message. The question is whether AI will ever be able to express meaningful feelings or convey a personal message. A melody or text generated by an algorithm is impersonal by definition, because a computer is not a person who can be in love or suffer from heartbreak. But suppose a computer can learn what love songs are by listening to millions of them, and can recognize what is popular and what is not. Suppose it learns to understand genres and formats, learns which chords, instruments and lyrics have an effect, and can combine them into recognizable music. Is that bad?

    A parallel can be drawn with chess. Chess computers have been able to beat even the best grandmasters since the late 1990s. Nevertheless, in 2023 most people do not find it interesting to watch a match between two chess computers. In fact, the (human) sport of chess has only become more popular since then. The chess computer introduced a new way of playing, invented new moves and challenged people to look at the sport differently.

    The rise of AI similarly challenges us as music professionals. Making music is becoming accessible to an ever larger group of people thanks to the emergence of accessible AI tools. Programs such as Bandlab and Soundraw ensure that people with very limited musical knowledge can generate their own songs and use music as an outlet. It democratizes music-making and gives a large group of people access to a form of expression that was previously out of reach. Whether it's hip-hop, house or jazz, these AI tools know the simplest basics.

    The emergence of voice clones is already contributing to the impact of AI on the music industry. There is already a handful of voice-cloning programs available that allow artists to clone their own voices, but also allow others to clone the voices of their favorite artists. Apple recently announced that voice clones will become a standard part of iOS this year, making the technology even more accessible. But it is not only music creation that is influenced by AI; mixing and mastering are too. Rick Beato, producer, musician and YouTuber, believes it won't be long before producers' styles are mimicked by AI. There are already robotic microphone stands that are operated by technicians via an app; these could easily be controlled by an AI.

    Nobody & The Computer is a YouTube channel that explores and shares the possibilities of emerging AI.

    There are different ways to look at this development. American artist Grimes offers everyone the opportunity to use her voice through a public AI voice clone; if you make money with her voice, she gets a small share. And Sir Paul McCartney says he's using AI to finish the last Beatles song ever, with AI systems that have learned to emulate the styles of the late John Lennon and George Harrison. Grimes and McCartney explore how AI can be a unique addition to their work, rather than a replacement.

    Another way to look at it is to stop comparing ourselves to AI. Computer scientist Jaron Lanier calls the idea that AI threatens human capabilities 'foolish'. He points out that AI is made by humans but can never be human: 'Comparing ourselves to AI is like comparing ourselves to a car. It is like saying that a car can go faster than a human runner. Of course it can, but we don't then say that the car has become a better runner.'

    The music that AI makes is different from music made by humans. No one can predict how the musical landscape will change in the near future with the rise of AI-generated music, but it is certain that AI will play a significant role in the music industry. The industry is known for its innovative character, and we can therefore assume that artists will continue to embrace new technology and create beautiful new works with it. At Open Culture Tech, we are convinced that artists should not be afraid that their artistic freedom will disappear, because we will always need other people who can do something unique and share it. Most people make music for themselves and for other people, to express feelings or to convey a message. AI has proven that it can help you come up with surprising melodies, find unique sounds or mix your audio. But together we must continue to ensure that computers do not determine what you, as a musician, should do. The best way to do that is to keep looking critically at how these computers work, to experiment with the latest technology and to share experiences with each other and our audience.

  • Open Call / Artists wanted (closed)

    Are you curious about the possibilities of new technology? Are you open to experimenting, and do you want to apply the latest Artificial Intelligence (AI), Augmented Reality (AR) and Avatar technology during live shows? Then this is your chance to sign up for Open Culture Tech's unique Test Program. Artists from every genre are welcome.

    In the Test Program, selected musicians get unique access to the latest experimental AI, AR and Avatar technology. Over a period of 6 weeks you will work towards your own live show in which you can apply the most advanced technology of the moment, exactly the way you want it. In the preparation you will be guided by technical experts who will support you in realizing your own artistic vision.

    The program
    - Compensation of ± €1,500
    - Your own live show in October (2023), February (2024) or June (2024)
    - A concert location of your choice
    - The ability to experiment with AI, AR or Avatar technology
    - Preparation time and technical expert support
    - Musical and artistic freedom

    Explore innovative shapes and sounds using AI, build a virtual stage with floating scenery in AR, or use Avatars as body doubles, backup dancers or band members. You get complete freedom to enrich your music and performance in endless ways, exactly the way you want.

    Program structure

    Phase 1. In the first phase, you explore the possibilities of experimental AI, AR or Avatar technology. You test different techniques in the creative studio and, together with technical experts, build a live show in which your concert is enriched by the latest technology.

    Phase 2. The second phase consists of your unique live concert, in which you and your audience discover how the new technology works in practice. It is important that there is room to experiment during the live show. This also means that you are allowed to make mistakes and that the audience is involved in this process.

    Phase 3. In the third phase, the live show is discussed with the audience and technical experts, directly in the room after the show. What could we learn from your live concert? Did everything go according to plan, or did things on stage work differently than hoped? What went well, and what could we do differently next time? Will you continue to use the technology in subsequent shows, or would you like something different? After the show, the Test Program ends and you may continue to use the technology however you wish. You are part of the Open Culture Tech community and are automatically invited to all following events and programs.

    Toolkit. The ultimate goal of Open Culture Tech is to develop a public toolkit with accessible technology and best practices that every musician in the Netherlands can easily use. Your experiences during the Test Program feed into the development of this toolkit and are an important part of the development process.

    For every musician in the Netherlands. The Test Program of Open Culture Tech offers all musicians in the Netherlands access to technology that until now was only available to world-famous artists such as Beyoncé, Gorillaz or Travis Scott. Whether you're a seasoned professional or just starting out, Open Culture Tech's Test Program welcomes musicians of all skill levels.

    The Open Call is now closed.

  • Behind the scenes: Augmented Reality

    Imagine being in the middle of the audience at a concert by your favourite artist. You have a drink in your hand and are enjoying a great performance. At a certain point you find out that an extra, virtual layer is offered at the concert: via augmented reality, or AR.

    Source: www.nexusstudios.com/work/skinny-ape

    People around you grab their phones and hold the screens above their heads. The wildest visuals appear in the room through the screens. You are curious, but also hesitant. You want to enjoy the moment and experience the concert as it is. On the other hand, it's something the artist has added to the show, so it must be worth it. So you give it a chance. You grab your phone and scan the QR code that is projected on the screens at the side of the stage. Hopefully it works on your not-so-new phone. And hopefully it's not too much of a hassle to install. By the way, your data bundle is almost empty. Will this work at all? Over Wi-Fi?

    There are many technical questions at the heart of a well-functioning AR experience. In this article, we provide insight into our development process and the various technical steps we go through when developing the Open Culture Tech AR experiences.

    1. The problem. To make an augmented reality (AR) experience during a concert accessible to a large audience, a number of technical challenges must be overcome. These challenges include the diversity of mobile phones and operating systems, varying performance and hardware capabilities, and the availability and bandwidth of mobile internet or Wi-Fi. In addition, it is important that the user experience (UX) of the entire process is as simple as possible and contains as few steps as possible.

    2. The state of the art. To provide an AR experience on a mobile phone, there are generally two approaches: directly in the browser (on the web) or through a specific app that can be downloaded from the App Store (iOS) or Google Play (Android).

    AR on the web. With AR on the web, the AR experience opens immediately after opening a link (e.g. after scanning a QR code) in the browser (e.g. Chrome or Safari) on the phone. The advantage is that the AR experience is accessible, because users do not need to download an app. The downside is that the AR experience on iPhones is severely limited, because iOS does not have native support for WebXR.

    AR through an app. To offer AR via an app, the visitor must first download this app on their phone (for example after scanning a QR code). The advantage is that the app can be developed for both iOS and Android in one codebase (for example, by building the app in Unity). This allows us to create a native experience that is optimally tailored to the capabilities of both iOS and Android. The downside is that visitors must first download an app. This is a high threshold; experience shows that few people are willing to take this step.

    3. Our mindset. We want the best of both worlds: to offer the richest user experience without creating a barrier to actually using it. For Open Culture Tech we are working on a solution that uses App Clips on iOS. This is a technique that allows a small portion of an app to be launched from a link or QR code without downloading the entire app. An example is scanning a QR code with which you can immediately buy a cinema ticket, without having to install the cinema's app. One limitation of an App Clip is the maximum file size. Apple does not allow an App Clip to be larger than 15 MB, so that downloading its content does not take too long. We are now investigating how we can stream assets into an App Clip so that we can offer more than 15 MB of content. In this way we can offer material from different artists via one app: based on the scanned QR code, we load the right experience.

    In practice, this means that the user scans a QR code or clicks a link, which natively shows an introduction screen with a button to start the experience. The user then immediately enters the native AR experience.

    Android offers a similar system: Google Play Instant. If this setting is active, an 'Instant' option appears on the Google Play page of the app, allowing the app to be opened without installing it. The downside is that the user experience isn't as good as on iOS: this feature is not active by default, so the user must have enabled the setting themselves. In our experience, this 'instant' option replaces one friction with another and is not an accessible solution. To still offer a frictionless experience on Android, we take advantage of the fact that WebXR is excellently supported there. Because we can use ARCore on the web, this experience can feel native. The user experience then looks like this: the visitor scans a QR code or clicks a link, after which the AR experience opens directly in the browser.

    A disadvantage of the above solutions is that we have to develop for two different platforms: in Swift for iOS, and in JavaScript (using the WebXR API) for Android. Because the AR apps themselves only need to display a 3D object and will therefore not be too complex, we do not expect too much 'double' work.

    4. Conclusion. Offering an accessible AR experience during a concert in front of a large audience requires careful technical considerations. The choice between a web-based approach and a specific app has advantages and disadvantages that differ per platform. Our proposed approach for iOS users, with App Clips, allows us to deliver a seamless and rich AR experience without the barrier of downloading a full app. For Android users we provide high-quality web-based AR. These technical solutions open up new possibilities for artists and event organisers to surprise their audience and involve them in an interactive concert experience, without the audience dropping out because the threshold is too high.
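    To make this decision flow concrete, here is a minimal, hypothetical sketch (in TypeScript) of how a landing page reached via the QR code could route visitors: browsers that support WebXR immersive AR (typically Android with ARCore) go straight to the web experience, everything else (typically iPhones) is sent to the App Clip invocation link. The URLs are placeholders, not the actual Open Culture Tech implementation, and the WebXR rendering itself is out of scope here.

```typescript
// Hypothetical landing-page routing for a QR code scanned at a concert.
// Placeholder URLs; not the real Open Culture Tech endpoints.
const APP_CLIP_URL = "https://example.com/clip";   // App Clip invocation URL (iOS)
const WEB_AR_URL = "https://example.com/web-ar";   // WebXR experience (Android browsers)

async function routeVisitor(): Promise<void> {
  // Feature detection: WebXR typings are not in the default DOM lib,
  // so navigator.xr is accessed loosely here.
  const xr = (navigator as any).xr;

  let supportsImmersiveAR = false;
  if (xr && typeof xr.isSessionSupported === "function") {
    // Resolves to true on ARCore-capable Android browsers.
    supportsImmersiveAR = await xr.isSessionSupported("immersive-ar");
  }

  if (supportsImmersiveAR) {
    // Android / Chrome with ARCore: open the web-based AR experience directly.
    window.location.assign(WEB_AR_URL);
  } else {
    // Most iPhones end up here: hand over to the native App Clip.
    window.location.assign(APP_CLIP_URL);
  }
}

routeVisitor().catch((err) => console.error("Could not detect AR support:", err));
```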

  • New series: Avatar Artists

    Open Culture Tech is thrilled to present a unique new series that dives deep into the phenomenon that's reshaping the Asian music scene: Avatar Artists. As the lines between reality and technology continue to blur, musicians are embracing virtual avatars, holograms and deep-fake personas to captivate audiences in ways never seen before.

    Source: www.teenvogue.com/story/rising-k-pop-group-aespa-concept-explained

    In the upcoming series, we will focus on individual artists who are reshaping the music scene in South Korea and Japan using virtual characters. Each episode will dissect the technology driving their performances, from Aespa's holographic avatars to APOKI's viral dances.

    Aespa: The Band. Our journey will begin with Aespa, a girl idol band from South Korea that has ingeniously integrated avatars into their live performances. These avatars, ethereal holograms, grace the stage alongside their human counterparts, creating a stunning fusion of reality and virtual artistry. https://www.youtube.com/watch?v=i8fRCkq5tbw

    APOKI: The Bunny. Next up, we have the friendly sensation known as APOKI, a bunny-like 3D character that has taken TikTok and YouTube by storm. With a million-strong following, APOKI isn't just an entertainer; it's a digital sensation that dances, sings, and captures hearts one pixel at a time. https://www.youtube.com/watch?v=8Pw_bKZe1I8

    Hatsune Miku: The Idol. No exploration of virtual characters would be complete without mentioning the iconic Hatsune Miku. This Japanese sensation has transcended the virtual realm to become a holographic stage performer. Created through technologies like MikuMikuDance and Vocaloid, Hatsune Miku stands as a testament to the possibilities of collaborative creativity between humans and machines. https://www.youtube.com/watch?v=AG7d2KzYJmc

    Etern!ty: The Deep-Fake. As we explore the controversial yet intriguing world of deep-fake technology in the music industry, we'll dissect how Etern!ty's unique persona challenges our perceptions of authenticity and identity. https://www.youtube.com/watch?v=LErTSB2qdP8

    Teflon Sega: The Manga. Our journey concludes with Teflon Sega, a story that began in the pages of a 2D manga and evolved into a 3D music sensation. With millions of views on TikTok and YouTube, Teflon Sega exemplifies the transformative power of technology in bringing fictional characters to life and crafting musical narratives that resonate deeply. https://www.youtube.com/watch?v=P9VDMZmzZ7Q

    Join us in the forthcoming editions of the Avatar Artists series, as we delve deeper into each of these captivating stories. So, whether you're a musician, a technologist, or simply curious about the intersection of art and innovation, stay tuned to Open Culture Tech for an in-depth look into the exciting world of virtual characters.

  • Work with low-budget holograms

    ABBA recently made the world news with their new tour. Not so much because the band, all well into their 70s, would perform again after years, but mainly because they would do so as holograms. In reality, the band members' movements were pre-recorded and later shown on giant screens. The show had a budget of 175 million euros, far from feasible for 99% of all musicians. But are there any cheaper alternatives?

    1. Transparent holographic projection screen. The operation is similar to that of a normal projection: you project your visuals onto the screen with a projector, but you get a convincing 3D effect due to the transparency of the screen. For example, you can project an avatar that acts as a background dancer. This technique produces very cool effects and you can buy a transparent projection screen online for little money. An earlier show by Thunderboom Records failed miserably due to poor stage lighting coordination. The disadvantage of this technique is that it requires complete darkness, especially when you use a normal projector. So make sure that no lights are pointed at the screen while projecting, otherwise the hologram will not be clearly visible.

    2. Pepper's Ghost. This is a technique that was invented as far back as the 19th century but is still widely used in concerts and by illusionists. You place a glass or plexiglass screen at an angle of 45 degrees, place an object or a reflective foil behind the screen, and use a projector to project onto the screen from the side. This creates a holographic effect in which the virtual object seems to appear amid the real environment. For this illusion you only need a glass or plexiglass plate and a projector. This video explains how to make it yourself.

    3. Smartphone cone. This option is best seen as a gimmick and requires some tinkering, especially if you want to offer it to your audience. It is also a Pepper's Ghost illusion, but applied in a different way. With this technique you can make small-scale holograms that you project via your smartphone, by placing a plastic funnel on the screen. In this way you can, for example, have an avatar dance along to the music. This video explains in 2 minutes how to assemble it yourself.
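    As a starting point for the smartphone trick, here is a minimal sketch (in TypeScript, using only the standard canvas API) of the kind of video layout these DIY hologram clips use: the same clip is drawn four times around the centre of the screen, once for each side of the reflector, so every face reflects an upright image. The element IDs and clip are placeholders, and depending on how your funnel or pyramid sits on the screen you may need to flip or re-space the copies.

```typescript
// Minimal sketch: draw one video clip four times around the centre of the
// screen, the layout used by most DIY smartphone "hologram" reflectors.
// Assumes <video id="clip" src="avatar.webm"> and <canvas id="stage"> exist;
// both IDs and the clip are placeholders for your own material.

const video = document.getElementById("clip") as HTMLVideoElement;
const canvas = document.getElementById("stage") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function drawFrame(): void {
  const { width, height } = canvas;
  const size = Math.min(width, height) / 4; // size of each copy of the clip
  const offset = size / 2;                  // gap between the centre and each copy

  ctx.fillStyle = "black"; // a black background keeps the reflection clean
  ctx.fillRect(0, 0, width, height);

  // One copy per side of the reflector, each rotated 90 degrees further.
  for (let i = 0; i < 4; i++) {
    ctx.save();
    ctx.translate(width / 2, height / 2);
    ctx.rotate((i * Math.PI) / 2);
    ctx.drawImage(video, -size / 2, -offset - size, size, size);
    ctx.restore();
  }

  requestAnimationFrame(drawFrame);
}

// Browsers require a user gesture before video playback can start.
document.body.addEventListener(
  "click",
  () => {
    video.play().then(() => requestAnimationFrame(drawFrame));
  },
  { once: true }
);
```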

  • Easily create live show visuals with AI

    Most musicians are not VJs. But if you still want appealing visuals to enhance your live music, AI can help. In this article we discuss an example that was made for Kay Slice, a Dutch-Ghanaian afro-futurism artist. The visual accompanies a live song that gets wilder as it goes, which is why the visuals become more and more expressive. Below you can find the end result.

    This example uses DALL-E, a simple text-to-image system. With DALL-E you generate images from text, and you can also ask the system to extend the edges of an image or add elements to it. This is called outpainting. The screenshot above shows the DALL-E interface, in which an image of a sunrise in Accra, Ghana has been generated. The erase function was then used to remove a small strip on the right side of the image, and a new prompt (description) was entered to generate a new piece of the picture. This produces the following result: the image has become a lot larger and more futuristic.

    If your image is large enough, you can download it from DALL-E and import it into iMovie or another video editing tool. In the example below, an elongated image was generated by repeatedly adding a new frame to the right side. iMovie was then used to make the image move from left to right, so that it appears as if a video is playing. You can also choose to move the image in other ways, for example from top to bottom. Although the images are not perfectly high-res, and they contain a mistake here and there, with some trial and error you can still make cool visuals that come into their own on a projection screen.

    Be aware. Keep in mind that there are risks associated with using text-to-image systems such as DALL-E. These systems are often trained on copyrighted material and also include the material you create yourself in their dataset. In addition, these systems have been trained on data from the internet that is not representative of society; the results can therefore be sexist or racist.
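    If you prefer not to use iMovie, the same left-to-right pan can be done with a few lines of code. Below is a minimal sketch in TypeScript using only the standard canvas API; the file name, element ID and duration are placeholders, not part of the original workflow.

```typescript
// Minimal sketch: pan a wide (outpainted) image from left to right on a canvas,
// mimicking the iMovie movement described above. "visual.png" and "stage"
// are placeholders for your own image and canvas element.

const canvas = document.getElementById("stage") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const image = new Image();
image.src = "visual.png"; // an elongated image works best

const DURATION_MS = 30_000; // one full pass across the image takes 30 seconds

image.onload = () => {
  const start = performance.now();

  const draw = (now: number): void => {
    const t = ((now - start) % DURATION_MS) / DURATION_MS; // 0 -> 1, then loop

    // Crop a canvas-shaped window out of the source image and slide it to the right.
    const cropHeight = image.height;
    const cropWidth = cropHeight * (canvas.width / canvas.height);
    const offsetX = t * (image.width - cropWidth);

    ctx.drawImage(image, offsetX, 0, cropWidth, cropHeight, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  };

  requestAnimationFrame(draw);
};
```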

  • Sound & Vision - audiovisual exploration hub

    Sound & Vision is the national media archive and museum. Based in Hilversum, Sound & Vision serves most of the Dutch public broadcasters and holds millions of hours of audiovisual, text, game and other content. It therefore offers many opportunities to experiment in the world of audiovisual research and design. This is important, because heritage is meant to live and be reused. Open Culture Tech is a welcome addition to the growing portfolio of technology-driven projects with which the institute and its partners explore new possibilities for creative makers.

    Credits: Jorrit Lousberg

    As a public institution that exists to serve and facilitate the needs of the public, Sound & Vision places great value on the responsible and ethical application of new technology such as AI and AR. Ethical questions surrounding AI are endless and constantly evolving as the technology advances. Open Culture Tech is a perfect example of an artistic and technology-driven initiative that can enhance the potential of new technology for the creative industry. Gregory Markus, founder of the RE:VIVE project and project leader at Sound & Vision, states that "Open Culture Tech is a pioneering venture that can connect new tools with audiovisual electronic music performances. This lowers the barrier to entry to the point where any artist can explore this exciting new territory."

    Credits: Jorrit Lousberg

    Collaborating with Thunderboom Records is not new for Sound & Vision. The institute was a partner in the WAIVE project, and Open Culture Tech partner Superposition is also a long-term collaborator. As part of Open Culture Tech, Sound & Vision will activate its international network of creative, artistic and cultural partners to provide invaluable network collaboration, input from best practices and diverse user needs. This ensures that the results of Open Culture Tech end up in the hands of musicians far and wide.

  • What the f*ck is Thunderboom Records?

    Thunderboom Records is the world's first robot record label. It probably sounds like every musician's worst nightmare: what good are robots to musicians, who can also make music and stand on stage themselves? For Max Tiel and Joost de Boo, this question was exactly the reason to set up Thunderboom Records.

    WAIVE is an AI-driven DJ tool that uses sounds from Sound & Vision's audio archive material.

    Thunderboom Records was founded three years ago as a foundation to ensure that new technology always continues to add to our human creativity. As Max puts it: "We want to ensure that the latest technology, such as AI, enriches our creative expression as much as possible and strengthens the position of musicians in the music industry. We hope to prevent new technology from threatening the creative process of musicians." Thunderboom Records does this by developing and testing creative technology concepts together with musicians and their audience. These are, for example, virtual robot artists who release new music together with human artists, but also unique tools with which a DJ can play live back-to-back with an AI system and receive creative suggestions.

    Fi is an AI-powered virtual artist who collaborates with human musicians. Fi has a unique fluid appearance.

    Does this mean that every musician should start working with AI? Absolutely not. "New technology is going to play an increasingly important role in the music industry, and we especially want to help musicians use this technology properly and safely." Simply put, Thunderboom Records helps musicians better understand and take advantage of technology. This mission originated from the idea that technological developments, such as artificial intelligence and avatars, will keep accelerating and will change the music industry even more radically in the coming years. But these new developments come with risks, and it is important that artists are helped with them, says Joost.

    Joost (left) and Max (right) at the Audio Collaborative conference in London, 2022.

    "It is not always clear what data is used to build an AI system, and it is not always clear what happens to the intellectual property of the end users." It is also often unclear to users why an AI creates certain texts, pictures or melodies. "There are many cases where the AI even generates racist and sexist texts, images or music." To make musicians resilient in the rapidly changing music industry, Max and Joost give workshops and regularly speak at conferences and schools. "The Open Culture Tech project is a great way for us to further our mission and to work together with musicians on a sustainable music industry in which public values are central and in which every musician is given the opportunity to work with the latest technology in a safe and critical manner."

    Visit www.thunderboomrecords.com for more information.

    *This article is the first in a series of articles introducing the initiators of Open Culture Tech.

  • Open Culture Tech kicks off

    Open Culture Tech is an initiative to make the latest Artificial Intelligence (AI), Augmented Reality (AR) and Avatar technology more accessible to musicians in the Netherlands. It is all about sharing knowledge, experience and resources that musicians can immediately put to use in their live performances. Open Culture Tech does this by building a toolkit of accessible AI, AR and Avatar technology, together with musicians and their audience, and by publishing content including best practices, opinions and the experiences of experts by experience.

    Open Culture Tech is part of Innovationlabs, a program that stimulates new resilience in the cultural and creative sector. The Creative Industries Fund NL, on behalf of all national cultural funds, and CLICKNL are implementing the program on behalf of the Ministry of Education, Culture and Science. The Creative Industries Fund NL and CLICKNL issued the Open Call for Innovation Labs twice, in 2021 and 2022. This call was open to innovative and experimental projects that tackle current challenges in the cultural and creative sector and increase the sector's resilience. Many makers, cultural institutions and other creative parties responded. Sixteen projects were selected in the first edition and seventeen in the second. Together, the 33 initiatives represent more than 200 parties from diverse cultural and creative disciplines. Open Culture Tech is one of the seventeen projects selected in the second edition.
