
Bass + AI: Improvisation (OSC communication between Python Unity3D and Kyma) | Simon Hutchinson

An improvised duet(?) with an AI agent trained on the "Embodied Musicking Dataset" (linked below).

In this performance, Python listens to live audio input from the bass and, based on models trained with the dataset, sends data to Unity3D and @SymbolicSound Kyma. Unity3D creates the visuals (the firework), and Kyma processes the audio from the bass.

First, some background: the dataset used for training was collected from several pianists in the US and UK. As the pianists played, we recorded multiple aspects of their performance: audio, video of their hands, EEG, skeletal data, and galvanic skin response. After playing, the pianists listened back to their own performance and were asked to record their state of “flow” over its course. All of these dimensions of data are associated over time, so neural networks can be trained across them to make associations.
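As a minimal stand-in for those neural networks, the idea of learning an association between one recorded dimension (amplitude) and the flow label can be sketched as a least-squares fit. The data here is synthetic and the linear model is only an illustration; the actual project trains neural networks on many more dimensions.

```python
import numpy as np

# Hypothetical stand-in data: per-window RMS amplitude paired with a
# self-reported "flow" rating (the real dataset pairs many more dimensions).
rng = np.random.default_rng(0)
amplitude = rng.uniform(0.0, 1.0, size=200)
flow = 0.6 * amplitude + 0.2 + rng.normal(0.0, 0.02, size=200)

# Fit flow ≈ w * amplitude + b by least squares, a linear proxy for the
# trained models described above.
X = np.column_stack([amplitude, np.ones_like(amplitude)])
(w, b), *_ = np.linalg.lstsq(X, flow, rcond=None)

def predict_flow(a: float) -> float:
    """Predict a flow value from a single amplitude reading."""
    return float(w * a + b)
```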

This demonstration uses the trained models from Craig Vear's Jess+ project (• Jess+ Digital Score w/ robot arm and ...) to generate X/Y coordinates (from the skeletal data) and “flow” from the amplitude of the input. These XY coordinates, “flow”, and amplitude are sent out from Python as OSC data, which is received by both Unity3D (for visuals) and Kyma (for audio).
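The OSC messages themselves are simple to construct. Here is a sketch of the wire format and a UDP send using only the standard library; the address pattern and ports are placeholders, not the ones used in the performance.

```python
import socket
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments (per the OSC 1.0 spec)."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(args)).encode())
    for value in args:
        msg += struct.pack(">f", value)  # arguments are big-endian float32
    return msg

# Placeholder destinations for Unity3D and Kyma (assumed, not from the video).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for host, port in [("127.0.0.1", 9000), ("127.0.0.1", 8000)]:
    # x, y, flow, amplitude in one message
    sock.sendto(osc_message("/performer", 0.4, 0.7, 0.5, 0.3), (host, port))
```

In practice a library such as python-osc handles this encoding; the sketch only shows what actually goes over the wire to Unity3D and Kyma.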

In Unity, the XY data moves the “firework” around the screen. Flow data affects its color, and amplitude affects its size. Audio in Kyma is a bit more sophisticated, but X position is left/right pan, and the flow data affects the delay, reverb, and live granulation.
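These mappings are simple scalings; a hedged sketch of what they might look like follows. The exact curves inside the Unity and Kyma patches aren't shown in the video, so the color sweep, size range, and constant-power pan law here are assumptions.

```python
import colorsys
import math

def flow_to_rgb(flow: float) -> tuple:
    """Map flow in [0, 1] to a hue sweep (an assumed color mapping)."""
    return colorsys.hsv_to_rgb(flow * 0.8, 1.0, 1.0)

def amplitude_to_scale(amp: float, base: float = 0.2, gain: float = 2.0) -> float:
    """Map amplitude in [0, 1] to a firework size (assumed range)."""
    return base + gain * amp

def x_to_pan(x: float) -> tuple:
    """Constant-power left/right gains from an x position in [0, 1]."""
    theta = x * math.pi / 2
    return (math.cos(theta), math.sin(theta))
```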

As you can see, the amplitude-to-XY mapping is limited, with the firework moving along a kind of diagonal. A possible next step would be to extract more features from the audio (e.g. pitch, spectral complexity, or delta values) and train with those.
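Those extra features could be computed from the same input buffer. A sketch of two of them, spectral centroid (a rough "brightness"/complexity measure) and a frame-to-frame delta, using NumPy on an assumed mono float buffer:

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sr: int) -> float:
    """Magnitude-weighted mean frequency of one audio frame, in Hz."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

def delta(curr: float, prev: float) -> float:
    """Frame-to-frame change of any feature (e.g. amplitude or centroid)."""
    return curr - prev

# Example: one second of a 440 Hz sine at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 440 * t)
```

Each extra feature would become another input dimension for training, and could also be sent out alongside the existing OSC data.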

Applying models trained on pianists to a bass performance (in a different genre) does not have the same goals as music-generation AI such as MusicGen or MusicLM. Instead of automatically generating music, the AI becomes a partner in performance: sometimes unpredictable, but not random, since its behavior is based on learned rules.

Get the Embodied Musicking Dataset here: https://github.com/CreativeAIResear...
Dr. Jeffrey Stolet’s beginner book on Kyma (affiliate link): https://amzn.to/3SAhYei

LINKS:
Subscribe: https://www.youtube.com/user/SimonHut...
Official Website http://simonhutchinson.com/
bandcamp (lots of free music!): https://simonhutchinson.bandcamp.com/...
Sign up for my mailing list: http://eepurl.com/hVs7bT
Buy my old gear on Reverb (affiliate link): https://reverb.grsm.io/simon
Buy me a coffee: https://kofi.com/simonhutchinson

* I provide affiliate links for some products that I use and enjoy. If you end up buying something through these external links, I may earn a small commission (while the price for you remains the same).

#SoundSynthesis #SoundDesign #ExperimentalMusic #unity3d #symbolicsoundkyma
#kyma
