
  • Is AI a threat to musicians?

    AI-generated music is on the rise, and a large chunk of it has never been touched by a human hand: anyone can generate a track at the touch of a button. At the same time, there are more and more AI tools that generate separate musical elements, such as chord progressions, melodies or drum rhythms, which you then work into a complete track yourself. What do these developments mean for you as a musician and for your way of working? Should you worry about this new artificial competitor, or is there nothing to worry about?

Holly Herndon is an American musician who experiments extensively with AI, voice cloning and music rights.

As Nick Cave aptly said early this year: “Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder.” Most people make music for themselves and for others, to express feelings or to convey a message. The question is whether AI will ever be able to express meaningful feelings or convey a personal message. A melody or lyric generated by an algorithm is impersonal by definition, because a computer is not a person who can be in love or suffer from heartbreak. But suppose a computer can learn what love songs are by listening to millions of them, and can recognize what is popular and what is not. Suppose it learns to understand genres and formats, learns which chords, instruments and lyrics have an effect, and can combine them into recognizable music. Is that bad?

A parallel can be drawn with chess. Chess computers have been able to beat even the best grandmasters since the late 1990s, when IBM's Deep Blue defeated world champion Garry Kasparov. Nevertheless, in 2023 most people do not find it interesting to watch two computers play chess against each other. In fact, the (human) sport of chess has only become more popular since then. The chess computer introduced a new way of playing, invented new moves and challenged people to look at the sport differently. The rise of AI challenges us as music professionals in a similar way.
Making music is becoming accessible to an ever larger group of people thanks to easy-to-use AI tools. Programs such as Bandlab and Soundraw let people with very limited musical knowledge generate their own songs and use music as an outlet. This democratizes music-making and gives a large group access to a form of expression that was previously out of reach. Whether it's hip-hop, house or jazz, these AI tools know the basics.

The emergence of voice clones is already adding to AI's impact on the music industry. A handful of voice-cloning programs are already available that allow artists to clone their own voices, but also allow others to clone the voices of their favorite artists. Apple recently announced that voice clones will become a standard part of iOS this year, making the technology even more accessible.

But AI is not only influencing music creation; it is also changing mixing and mastering. Rick Beato, producer, musician and YouTuber, believes it won't be long before producers' styles are mimicked by AI. There are already robotic microphone stands that technicians operate via an app; these could easily be controlled by an AI.

Nobody & The Computer is a YouTube channel that explores and shares the possibilities of emerging AI.

There are different ways to look at this development. American artist Grimes offers everyone the opportunity to use her voice through a public AI voice clone; if you make money with her voice, she gets a small share. And Sir Paul McCartney says he is using AI to finish the last Beatles song ever, using AI to lift the voice of the late John Lennon from an old demo recording. Grimes and McCartney explore how AI can be a unique addition to their work, rather than a replacement. Another way to look at it is to stop comparing ourselves to AI altogether. Computer scientist Jaron Lanier calls the idea that AI threatens human capabilities 'foolish'.
He points out that AI is made by humans but can never be human: 'Comparing ourselves to AI is like comparing ourselves to a car. A car can go faster than a human runner, but we don't say that the car has become a better runner.' The music that AI makes is different from music made by humans.

No one can predict how the musical landscape will change in the near future with the rise of AI-generated music, but it is certain that AI will play a significant role in the music industry. At Open Culture Tech, we are convinced that artists should not be afraid that their artistic freedom will disappear, because we will always need other people who can do something unique and share it. The music industry is known for its innovative character, and we can therefore assume that artists will continue to embrace new technology and create beautiful new works with it.

AI has proven that it can help you come up with surprising melodies, find unique sounds or mix your audio. But together we must continue to ensure that computers do not determine what you, as a musician, should do. The best way to do that is to keep looking critically at how these computers work, to experiment together with the latest technology, and to share experiences with each other and with our audience.

  • Behind the scenes: Augmented Reality

    Imagine being in the middle of the audience at a concert by your favourite artist. You have a drink in your hand and are enjoying a great performance. At a certain point you discover that the concert offers an extra, virtual layer: via augmented reality, or AR. People around you grab their phones and hold the screens above their heads. Through those screens, the wildest visuals appear in the space. You are curious, but also hesitant. You want to enjoy the moment and experience the concert as it is. On the other hand, it's something the artist has added to the show, so it must be worth it. So you give it a chance. You grab your phone and scan the QR code projected on the screens at the side of the stage. Hopefully it works on your not-exactly-new phone. And hopefully it isn't too much of a hassle to install. Come to think of it, your data bundle is almost empty. Will it even work over Wi-Fi?

There are many technical questions at the heart of a well-functioning AR experience. In this article, we provide insight into our development process and the various technical steps we go through when developing the Open Culture Tech AR experiences.

1. The problem

To make an augmented reality (AR) experience during a concert accessible to a large audience, a number of technical challenges must be overcome. These include the diversity of mobile phones and operating systems, varying performance and hardware capabilities, the availability of mobile internet or Wi-Fi, and the bandwidth of those connections. In addition, the user experience (UX) of the entire process must be as simple as possible and contain as few steps as possible.

2. The state of the art

To provide an AR experience on a mobile phone, there are generally two approaches: directly in the browser (on the web) or through a specific app that can be downloaded from the App Store (iOS) or Google Play (Android).
AR on the web

With AR on the web, the AR experience opens immediately after opening a link (e.g. after scanning a QR code) in the browser (e.g. Chrome or Safari) on the phone. The advantage is that the experience is accessible: users do not need to download an app. The downside is that the AR experience on iPhones is severely limited, because iOS does not have native support for WebXR.

AR through an app

To offer AR via an app, the visitor must first download that app (for example after scanning a QR code). The advantage is that the app can be developed for both iOS and Android in one codebase (for example by building it in Unity). This allows us to create a native experience that is optimally tailored to the capabilities of both platforms. The downside is that visitors must first download an app. This is a high threshold; experience shows that few people are willing to take this step.

3. Our mindset

We want the best of both worlds: to offer the richest user experience without creating a barrier to actually using it. For Open Culture Tech we are working on a solution that uses App Clips on iOS. This technique allows a small portion of an app to be served from the browser without downloading the entire app. An example is scanning a QR code with which you can immediately buy a cinema ticket, without having to install the cinema's app. One limitation of an App Clip is the maximum file size: Apple does not allow it to be larger than 15 MB, so that downloading the App Clip's content does not take too long. We are now investigating how we can stream assets into an App Clip so that we can offer more than 15 MB of content. That way we can offer material from different artists via one app: based on the scanned QR code, we load the right experience.
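As a rough illustration of the 15 MB App Clip budget, the idea can be sketched in a few lines of JavaScript. This is a minimal sketch under our own assumptions: the function and asset names are hypothetical and not code from the actual App Clip.

```javascript
// Hypothetical sketch of the App Clip size budget: bundle the most
// important assets until Apple's 15 MB limit is reached, and mark the
// rest for streaming once the experience is running.
const APP_CLIP_LIMIT_BYTES = 15 * 1024 * 1024;

function splitAssets(assets) {
  // `assets` is a list of { name, bytes }, sorted by priority
  // (most important first).
  const bundled = [];
  const streamed = [];
  let used = 0;
  for (const asset of assets) {
    if (used + asset.bytes <= APP_CLIP_LIMIT_BYTES) {
      bundled.push(asset.name);
      used += asset.bytes;
    } else {
      // Anything that no longer fits is fetched over the network
      // after the App Clip has launched.
      streamed.push(asset.name);
    }
  }
  return { bundled, streamed };
}
```

In the same spirit, the scanned QR code can carry an artist identifier, so the streamed portion differs per show while the bundled core stays the same.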
In practice, this means that the user scans a QR code or clicks on a link. This natively shows an introduction screen with a button to start the experience, after which the user immediately enters the native AR experience.

Android offers a similar system: Google Play Instant. If this setting is active, an 'Instant' option appears on the app's Google Play page, allowing the app to be opened without installing it. The downside is that the user experience isn't as good as on iOS: the feature is not active by default, so users must have enabled the setting themselves. In our experience, this 'instant' option replaces one friction with another and is not an accessible solution. To still offer a frictionless experience on Android, we take advantage of the fact that WebXR is excellently supported there. Because we can use ARCore on the web, this experience can feel native. The user experience then looks like this: the visitor scans a QR code or clicks on a link, after which the AR experience opens directly in the browser.

A disadvantage of the above solutions is that we have to develop for two different platforms: in Swift for iOS, and in JavaScript (using the WebXR API) for Android. Because the AR apps themselves only need to display a 3D object and will therefore not be too complex, we do not expect too much duplicated work.

4. Conclusion

Offering an accessible AR experience to a large concert audience requires careful technical considerations. The choice between a web-based approach and a specific app has advantages and disadvantages that differ per platform.
Our proposed approach for iOS users, with App Clips, allows us to deliver a seamless and rich AR experience without the barrier of downloading a full app. For Android users, we provide high-quality web-based AR. These technical solutions open up new possibilities for artists and event organisers to surprise their audience and involve them in an interactive concert experience, without the audience dropping out because the threshold is too high.
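The per-platform split described in this article can be sketched in a few lines of JavaScript. This is only an illustration: the function name and the exact user-agent checks are our own assumptions, not production code.

```javascript
// Minimal sketch: pick an AR delivery path from the visitor's
// user-agent string, following the iOS/Android split described above.
function chooseArDelivery(userAgent) {
  const ua = userAgent.toLowerCase();
  if (ua.includes("iphone") || ua.includes("ipad")) {
    // iOS Safari has no native WebXR support, so we hand the
    // visitor an App Clip instead of a web experience.
    return "app-clip";
  }
  if (ua.includes("android")) {
    // Android supports WebXR/ARCore well, so the experience can
    // open directly in the browser after scanning the QR code.
    return "webxr";
  }
  // Desktop or unknown device: fall back to a plain page.
  return "fallback";
}
```

A real implementation would likely use more robust device detection than substring checks, but the routing decision itself stays this simple.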

  • New series: Avatar Artists

    Open Culture Tech is thrilled to present a unique new series that dives deep into a phenomenon reshaping the Asian music scene: Avatar Artists. As the lines between reality and technology continue to blur, musicians are embracing virtual avatars, holograms and deep-fake personas to captivate audiences in ways never seen before.

In the upcoming series, we will focus on individual artists who are reshaping the music scene in South Korea and Japan using virtual characters. Each episode will dissect the technology driving their performances, from Aespa's holographic avatars to APOKI's viral dances.

Aespa: The Band

Our journey begins with Aespa, a girl group from South Korea that has ingeniously integrated avatars into their live performances. These avatars, ethereal holograms, grace the stage alongside their human counterparts, creating a stunning fusion of reality and virtual artistry.

APOKI: The Bunny

Next up is APOKI, a bunny-like 3D character that has taken TikTok and YouTube by storm. With a million-strong following, APOKI isn't just an entertainer; it's a digital sensation that dances, sings and captures hearts one pixel at a time.

Hatsune Miku: The Idol

No exploration of virtual characters would be complete without the iconic Hatsune Miku. This Japanese sensation has transcended the virtual realm to become a holographic stage performer. Created with technologies like Vocaloid and MikuMikuDance, Hatsune Miku stands as a testament to the possibilities of collaborative creativity between humans and machines.

Etern!ty: The Deep-Fake

As we explore the controversial yet intriguing world of deep-fake technology in the music industry, we'll dissect how Etern!ty's unique persona challenges our perceptions of authenticity and identity.

Teflon Sega: The Manga

Our journey concludes with Teflon Sega, a story that began in the pages of a 2D manga and evolved into a 3D music sensation.
With millions of views on TikTok and YouTube, Teflon Sega exemplifies the transformative power of technology in bringing fictional characters to life and crafting musical narratives that resonate deeply. Join us in the forthcoming editions of the Avatar Artists series, as we delve deeper into each of these captivating stories. So, whether you're a musician, a technologist, or simply curious about the intersection of art and innovation, stay tuned to Open Culture Tech for an in-depth look into the exciting world of virtual characters.

  • Work with low-budget holograms

    ABBA recently made the world news with their new tour. Not primarily because the band, all well into their 70s, would perform again after years, but mainly because they would do so as holograms. In reality, the band members' movements were pre-recorded and later shown on giant screens. The show had a budget of 175 million euros, far from feasible for 99% of all musicians. But are there cheaper alternatives?

1. Transparent holographic projection screen

The principle is similar to that of a normal projection: you project your visuals onto the screen with a beamer, but the transparency of the screen creates a great 3D effect. For example, you can project an avatar that acts as a background dancer. This technology produces very cool effects, and you can purchase a transparent projection screen online for little money. The disadvantage is that it must be completely dark, especially when you use a normal beamer; an earlier show by Thunderboom Records failed miserably due to poor stage lighting coordination. So make sure that no lights are pointed at the screen while projecting, otherwise the hologram will not be clearly visible.

2. Pepper's Ghost

This technique was invented as far back as the 19th century but is still widely used in concerts and by illusionists. You place a glass or plexiglass screen at an angle of 45 degrees, place an object or a reflective foil behind the screen, and use a beamer to project onto the screen from the side. This creates a holographic effect in which the virtual object seems to appear within the real environment. For this illusion you only need a glass or plexiglass plate and a beamer. This video explains how to make it yourself.

3. Smartphone cone

This option is best seen as a gimmick and requires some tinkering, especially if you want to offer it to your audience. It is also a Pepper's Ghost illusion, but applied in a different way.
With this technique you can make small-scale holograms that you project via your smartphone by placing a plastic cone on top of it. In this way you can, for example, have an avatar dance along to the music. This video explains in 2 minutes how to assemble it yourself.

  • Easily create live show visuals with AI

    Most musicians are not VJs. But if you still want appealing visuals to enhance your live music, AI can help. In this article we discuss an example made for Kay Slice, a Dutch-Ghanaian afro-futurism artist. The visuals accompany a live song that grows wilder as it goes on, which is why the visuals become more and more expressive. Below you can find the end result.

This example uses DALL-E, a simple text-to-image system. With DALL-E you generate images, and you can also ask the system to extend the edges of an image or add elements to it. This is called outpainting. The screenshot above shows the DALL-E interface with a generated image of a sunrise in Accra, Ghana. The erase function was then used to remove a small border on the right side of the image, and a new prompt (description) was entered to generate a new piece of the photo. This produces the following result: the image has become a lot bigger and more futuristic.

If your image is large enough, you can download it from DALL-E and import it into iMovie or another video editing tool. In the example below, an elongated image was generated with a frame added to the right side. iMovie was then used to pan the image from left to right, making it appear as if a video is playing. You can also choose to move the image in other ways, for example from top to bottom. Even though the images are not perfectly hi-res, and they contain a mistake here and there, with some trial and error you can still make cool visuals that come into their own on a projection screen.

Be aware

Keep in mind that there are risks associated with using text-to-image systems such as DALL-E. These systems are often trained on copyrighted material, and they may also add the material you create to their dataset. In addition, these systems have been trained on data from the internet that is not representative of society; the results are therefore often sexist or racist.

  • Sound & Vision - audiovisual exploration hub

    Sound & Vision is the Dutch national media archive and museum. Based in Hilversum, it serves most of the Dutch public broadcasters and holds millions of hours of audiovisual, text, game and other content. It therefore offers many opportunities for experimentation in audiovisual research and design. This is important, because heritage is meant to live and be reused. Open Culture Tech is a welcome addition to the growing portfolio of technology-driven projects with which the institute and its partners explore new possibilities for creative makers.

Credits: Jorrit Lousberg

As a public institution that exists to serve and facilitate the needs of the public, Sound & Vision places great value on the responsible and ethical application of new technology such as AI and AR. Ethical questions surrounding AI are endless and constantly evolving as the technology advances. Open Culture Tech is a perfect example of an artistic, technology-driven initiative that can unlock the potential of new technology for the creative industry. Gregory Markus, founder of the RE:VIVE project and project leader at Sound & Vision, states: “Open Culture Tech is a pioneering venture that can connect new tools with audiovisual electronic music performances. This lowers the barrier to entry to the point where any artist can explore this exciting new territory.”

Credits: Jorrit Lousberg

Collaborating with Thunderboom Records is not new for Sound & Vision. The institute was a partner in the WAIVE project, and Open Culture Tech partner Superposition is also a long-term collaborator. As part of Open Culture Tech, Sound & Vision will activate its international network of creative, artistic and cultural partners, contributing collaboration across that network, best practices and diverse user needs. This ensures that the results of Open Culture Tech end up in the hands of musicians from far and wide.

  • Open Call / Artists wanted (closed)

    Are you curious about the possibilities of new technology? Are you open to experimentation, and do you want to apply the latest Artificial Intelligence (AI), Augmented Reality (AR) and Avatar technology during live shows? Then this is your chance to sign up for Open Culture Tech's unique Testing Program. Artists from every genre are welcome.

In the Testing Program, selected musicians get unique access to the latest experimental AI, AR and Avatar technology. Over a period of 6 weeks you will work towards your own live show, in which you can apply the most advanced technology of the moment, exactly the way you want it. During the preparation you will be guided by technical experts who support you in realising your own artistic vision.

The program
- Compensation of ± €1,500
- Your own live show in October (2023), February (2024) or June (2024)
- A concert location of your choice
- The opportunity to experiment with AI, AR or Avatar technology
- Preparation time and technical expert support
- Musical and artistic freedom

Explore innovative shapes and sounds using AI, build a virtual stage with floating scenery in AR, or use Avatars as body doubles, backup dancers or band members. You get complete freedom to enrich your music and performance in endless ways, exactly the way you want.

Open Call is closed

Program structure

Phase 1
In the first phase, you explore the possibilities of experimental AI, AR or Avatar technology. You test different techniques in the creative studio and, together with technical experts, build a live show in which your concert is enriched by the latest technology.

Phase 2
The second phase consists of your unique live concert, in which you and your audience discover how the new technology works in practice. It is important that there is room to experiment during the live show. This also means that you are allowed to make mistakes and that the audience is involved in this process.
Phase 3
In the third phase, the live show is discussed with the audience and technical experts, directly in the room after the show. What could we learn from your live concert? Did everything go according to plan, or did things on stage work differently than hoped? What went well, and what could we do differently next time? Will you continue to use the technology in subsequent shows, or would you like to try something different? After the show, the Testing Program ends and you may continue to use the technology however you wish. You are part of the Open Culture Tech community and are automatically invited to all following events and programs.

Tool kit
The ultimate goal of Open Culture Tech is to develop a public toolkit with accessible technology and best practices that every musician in the Netherlands can easily use. Your experiences during the Testing Program feed into the development of this toolkit and are an important part of that process.

For every musician in the Netherlands
The Testing Program offers all musicians in the Netherlands access to technology that until now was only available to world-famous artists such as Beyoncé, Gorillaz or Travis Scott. Whether you're a seasoned professional or just starting out, Open Culture Tech's Testing Program welcomes musicians of all skill levels.

  • What the f*ck is Thunderboom Records?

    Thunderboom Records is the world's first robot record label. That probably sounds like every musician's worst nightmare: what good are robots to musicians, if those robots can also make music and stand on stage? For Max Tiel and Joost de Boo, this question was exactly the reason to set up Thunderboom Records.

WAIVE is an AI-driven DJ tool that uses sounds from Sound & Vision's audio archive.

Thunderboom Records was founded three years ago as a foundation to ensure that new technology always continues to add to our human creativity. As Max puts it: “We want to ensure that the latest technology, such as AI, enriches our creative expression as much as possible and strengthens the position of musicians in the music industry. We hope to prevent new technology from threatening the creative process of musicians.” Thunderboom Records does this by developing and testing creative technology concepts together with musicians and their audience. These include virtual robot artists who release new music together with human artists, but also unique tools with which a DJ can play live back-to-back with an AI system and receive creative suggestions.

Fi is an AI-powered virtual artist who collaborates with human musicians. Fi has a unique fluid appearance.

Does this mean that every musician should start working with AI? Absolutely not. “New technology is going to play an increasingly important role in the music industry, and we especially want to help musicians use this technology properly and safely.” Simply put, Thunderboom Records helps musicians better understand and take advantage of technology. This mission originated from the idea that technological developments, such as artificial intelligence and avatars, will keep accelerating and will change the music industry even more radically in the coming years. But these new developments come with risks, and it is important that artists are supported in navigating them, says Joost.
Joost (left) and Max (right) at the Audio Collaborative conference in London, 2022.

“It is not always clear what data is used to build an AI system, and it is not always clear what happens to the intellectual property of the end users.” It is also often unclear to users why an AI creates certain texts, pictures or melodies. “There are many cases where the AI even generates racist and sexist texts, images or music.” To make musicians resilient in the rapidly changing music industry, Max and Joost give workshops and regularly speak at conferences and schools. “The Open Culture Tech project is a great way for us to further our mission and to work together with musicians on a sustainable music industry in which public values are central and in which every musician is given the opportunity to work with the latest technology in a safe and critical manner.”

Visit for more information.

*This article is the first in a series of articles introducing the initiators of Open Culture Tech.

  • Open Culture Tech kicks off

    Open Culture Tech is an initiative to make the latest Artificial Intelligence (AI), Augmented Reality (AR) and Avatar technology more accessible to musicians in the Netherlands. It's all about sharing knowledge, experience and resources that musicians can immediately put to work in their live performances. Open Culture Tech does this by building a toolkit of accessible AI, AR and Avatar technology, together with musicians and their audience, and by publishing content including best practices, opinions and the experiences of experts by experience.

Open Culture Tech is part of Innovationlabs, a program that boosts resilience in the cultural and creative sector. The Creative Industries Fund NL, on behalf of all national cultural funds, and CLICKNL are implementing the program on behalf of the Ministry of Education, Culture and Science. The Creative Industries Fund NL and CLICKNL issued the Open Call for Innovation Labs twice, in 2021 and 2022. The call was open to innovative and experimental projects that tackle current challenges in the cultural and creative sector and increase the sector's resilience. Many makers, cultural institutions and other creative parties responded. Sixteen projects were selected in the first edition and seventeen in the second; together, the 33 initiatives represent more than 200 parties from diverse cultural and creative disciplines. Open Culture Tech is one of the seventeen projects selected in the second edition.
