FINALIST
visible / hidden / re-visible
Category: STUDENT
By Signal compose (Hiroshi YAMATO, Yoshitaka OISHI, Kota TSUMAGARI, Ryo MORITA) (Japan)
Signal compose
Signal compose is a creative team whose members bring specialized skills, knowledge, and creativity. Through music, video, software, and hardware, we aim to enrich the world, little by little, in an exciting way.
This performance is an improvised collaboration between humans and AI. We incorporated the AI as a partner with which to co-create music. The AI uses a Recurrent Neural Network trained in advance on piano-and-guitar duo performances. During the performance, MIDI data is generated in real time in response to the human playing, and the AI uses that MIDI data to play the toy piano itself.
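The catalog does not include the implementation, but a note-level recurrent model of this kind could be sketched as follows. This is a minimal illustration only, assuming a PyTorch LSTM over MIDI note numbers; the names (NoteRNN, respond), the 128-pitch vocabulary, and the sampling scheme are our assumptions, not the artists' code.

    # Minimal sketch (not the artists' code): an LSTM that turns pitches
    # heard from the human performer into a responding pitch sequence.
    # In practice the weights would come from pre-training on
    # piano-and-guitar duo data, as described above.
    import torch
    import torch.nn as nn

    class NoteRNN(nn.Module):
        def __init__(self, n_pitches=128, embed=64, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(n_pitches, embed)
            self.lstm = nn.LSTM(embed, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_pitches)

        def forward(self, pitches, state=None):
            x = self.embed(pitches)           # (batch, time, embed)
            out, state = self.lstm(x, state)  # state carries across calls
            return self.head(out), state

    @torch.no_grad()
    def respond(model, heard_pitches, length=8, temperature=1.0):
        """Condition on the pitches just heard, then sample a reply."""
        model.eval()
        ctx = torch.tensor([heard_pitches])  # (1, time) of MIDI notes
        logits, state = model(ctx)
        reply = []
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            pitch = torch.multinomial(probs, 1).item()
            reply.append(pitch)
            logits, state = model(torch.tensor([[pitch]]), state)
        return reply  # MIDI note numbers for the toy piano to play

    model = NoteRNN()  # in practice: model.load_state_dict(torch.load(...))
    print(respond(model, heard_pitches=[60, 64, 67]))  # reply to a C-major phrase

Keeping the LSTM state between calls is what lets the reply stay conditioned on everything the model has heard so far in the set, which is essential for a call-and-response improvisation.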
Today we see the word “AI” everywhere. And yet we may still be using AI merely for convenience, as a tool to fulfill human desires, just as in the past. We therefore built the work around the following concept. The AI interprets the human performance and performs together with it (this part of the performance is “visible”). There is another musical communication that does not pass between humans (this part is “hidden”). That communication is entirely digital data, and as long as it is digital data, it can be reinterpreted. Reproducing the data as a sound field presents a contemporary form of communication that the audience can experience (this part is “re-visible”). This performance is an attempt to bring “AI as a partner” and “AI as a co-creator” into a transient phenomenon like music.
First, we needed a physical body for the AI to inhabit, a vessel it could depend on. We initially considered using a player piano that already exists as a product, but its sound was too polished for a performance by AI. We therefore decided to add an automatic-performance mechanism to a ready-made toy piano using solenoids. The solenoid assemblies were all fabricated with a 3D printer to fit the dimensions of the toy piano.

The AI is trained in advance on human piano-and-guitar duo data and on performance data from other pianos. In the actual performance, the human guitar playing is converted to MIDI so that the AI can “hear” it, and the AI plays in a form that responds to the human performance.
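On the actuation side, the text tells us only that MIDI notes from the AI end up as solenoid strikes on the toy-piano keys. A minimal sketch of such a bridge might look like the following, assuming a Raspberry Pi with one GPIO-driven solenoid per key and the mido library for MIDI input; the port name and the pitch-to-pin mapping are illustrative assumptions, not details from the work.

    # Minimal sketch (hardware details are assumptions): receive the AI's
    # MIDI notes and pulse the solenoid sitting over each toy-piano key.
    import time
    import mido
    import RPi.GPIO as GPIO

    KEY_PINS = {60: 17, 62: 27, 64: 22}  # MIDI pitch -> GPIO pin (example keys)
    PULSE_S = 0.03                        # strike duration; tuned per solenoid

    GPIO.setmode(GPIO.BCM)
    for pin in KEY_PINS.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def strike(pitch):
        """Briefly energize the solenoid above this key."""
        pin = KEY_PINS.get(pitch)
        if pin is None:
            return  # pitch outside the toy piano's range
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(PULSE_S)
        GPIO.output(pin, GPIO.LOW)

    try:
        # Port name is an assumption; list options with mido.get_input_names().
        with mido.open_input('AI Output') as port:
            for msg in port:
                if msg.type == 'note_on' and msg.velocity > 0:
                    strike(msg.note)
    finally:
        GPIO.cleanup()

The pulse duration matters musically: too short and the hammer never reaches the key, too long and the solenoid damps the tine, which is one reason a product-grade player piano would sound “too complete” compared with this deliberately imperfect mechanism.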