
Can vision generate sound? In my AI image-generation project, the sound-vision relationship ran in one direction: images were generated from audio input. Following this logic, I wanted to reverse the relationship: is it possible to use an image to generate sound? As a first step, I used TouchDesigner to convert the pixel data of different color channels into 32-bit digital signals. By resampling these signals, I could select which RGB pixels to use, and I successfully converted pixels into sound. However, the sound was harsh, and I had little control over its timbre. Because of this deficiency, I moved to Processing for finer control over the data: a sketch tracks a specific RGB value of a pixel in live streaming footage, and when the target pixel is detected, a corresponding MIDI file is triggered. By assigning thresholds to certain areas of the image, movement in the image generates a variety of sounds.
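As a rough illustration of this detect-and-trigger loop, a minimal Processing sketch (not the project's actual code) could watch a region of the live feed for a target color and send a MIDI note via The MidiBus library. The target color, region, threshold, and note number below are all assumed values for the example.

```java
// Minimal sketch of the idea, not the project's actual code.
// Assumes Processing's Video library and The MidiBus library are installed;
// the target color, region, threshold, and note are illustrative.
import processing.video.*;
import themidibus.*;

Capture cam;
MidiBus midi;

color target = color(255, 0, 0);  // hypothetical target RGB value: pure red
float threshold = 40;             // hypothetical color-distance threshold
boolean noteOn = false;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  midi = new MidiBus(this, -1, 0);  // send on the first available MIDI output
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Scan a small region of the frame for the target color.
  boolean found = false;
  cam.loadPixels();
  for (int y = 100; y < 200 && !found; y++) {
    for (int x = 100; x < 200; x++) {
      color c = cam.pixels[y * cam.width + x];
      float d = dist(red(c), green(c), blue(c),
                     red(target), green(target), blue(target));
      if (d < threshold) { found = true; break; }
    }
  }

  // Trigger a note when the color enters the region, release when it leaves.
  if (found && !noteOn) {
    midi.sendNoteOn(0, 60, 100);  // channel 0, middle C, velocity 100
    noteOn = true;
  } else if (!found && noteOn) {
    midi.sendNoteOff(0, 60, 0);
    noteOn = false;
  }
}
```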

Visual phonograph

Following the sound-generation logic of the Visual Sound project, I wanted more control over the sound that plays. I applied patterns to a rolling tape so that, like a music tape, each pattern serves as a musical note. When Processing detects a certain note, the corresponding audio plays. After this prototype, I realized that visual recording media could be used for audio recording; there is huge potential in mixing visual and acoustic experiences (instruments).
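A minimal sketch of this tape-reading loop, assuming each pattern is identified by the dominant color passing a fixed "read head" column and that one pre-recorded sample exists per pattern (filenames below are hypothetical), might look like this with Processing's Sound library:

```java
// Illustrative sketch: a fixed "read head" column samples the moving tape
// and plays one audio file per recognized pattern color.
// Assumes Processing's Video and Sound libraries; sample files are hypothetical.
import processing.video.*;
import processing.sound.*;

Capture cam;
SoundFile[] notes = new SoundFile[3];
int lastNote = -1;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  // One pre-recorded sample per tape pattern (hypothetical files in /data).
  notes[0] = new SoundFile(this, "note_red.wav");
  notes[1] = new SoundFile(this, "note_green.wav");
  notes[2] = new SoundFile(this, "note_blue.wav");
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Average the color along a narrow vertical strip: the "read head".
  cam.loadPixels();
  float r = 0, g = 0, b = 0;
  int x = cam.width / 2, count = 0;
  for (int y = 0; y < cam.height; y++) {
    color c = cam.pixels[y * cam.width + x];
    r += red(c); g += green(c); b += blue(c);
    count++;
  }
  r /= count; g /= count; b /= count;

  // Classify the dominant channel as the current note.
  int note = (r > g && r > b) ? 0 : (g > b) ? 1 : 2;

  // Play a sample only when a new pattern reaches the read head,
  // so a long pattern does not retrigger its sample every frame.
  if (note != lastNote) {
    notes[note].play();
    lastNote = note;
  }
}
```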
