First Set of Experiments
This week I continued exploring sound as a form of interaction, but instead of simply reacting to frequencies, I became more interested in how sound could map onto spatial gestures, and how memory, replay, and visual traces could create a feeling of agency. These were still early-stage tests without any fixed conceptual direction, but they let me notice which aspects of sound–visual interaction felt meaningful or worth pursuing. Both experiments also nudged me toward thinking about sound as something that can be captured, replayed, and spatially located (almost like drawing with audio).
Experiment 01
Idea
This experiment explored continuous gesture: drawing sound through movement. The behaviour:
Mouse movement triggers sound depending on:
(i) horizontal position → different synths
(ii) vertical position → pitch
(iii) speed → volume + circle size
It also records everything when “R” is pressed, and plays it back with “P.”
In effect, it is like painting with sound.
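As a sketch of the mapping rules above, the logic could look like the following. The number of synths, the pitch range, and the speed cap are my assumed values here, not the ones used in the actual p5.js sketch:

```javascript
// Sketch of the gesture-to-sound mapping: horizontal position picks a
// synth, vertical position sets pitch, mouse speed sets volume.
// NUM_SYNTHS, the 220–880 Hz range, and MAX_SPEED are assumptions.
const NUM_SYNTHS = 4;   // horizontal bands, one synth each
const MAX_SPEED = 50;   // px/frame treated as full volume

// Horizontal position picks a synth index (0..NUM_SYNTHS-1).
function synthIndex(x, width) {
  return Math.min(NUM_SYNTHS - 1, Math.floor((x / width) * NUM_SYNTHS));
}

// Vertical position maps to pitch: top of canvas = high, bottom = low.
function pitchFromY(y, height, lowHz = 220, highHz = 880) {
  return lowHz + ((height - y) / height) * (highHz - lowHz);
}

// Mouse speed maps to volume (and circle size) on a 0..1 scale.
function volumeFromSpeed(dx, dy) {
  const speed = Math.hypot(dx, dy);
  return Math.min(1, speed / MAX_SPEED);
}

// "R" toggles recording; each gesture event is stored with a timestamp
// so "P" can replay the sequence in order (audio scheduling omitted).
const recorded = [];
let isRecording = false;
function recordEvent(synth, pitch, vol, t) {
  if (isRecording) recorded.push({ synth, pitch, vol, t });
}
```

In the real sketch these values would feed a p5.sound or Tone.js synth each frame; here they are kept as pure functions so the mapping itself is easy to inspect.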
- Pressing and sliding the mouse across the sketch creates the circles.
- The circles leave a trace behind, creating its own visual.
- The video shows the experiment: how dragging the cursor creates its own music, and how pressing play replays it.
This experiment came from a question: what if sound was not placed at points, but created through movement? I wanted to see if sound could feel like a flowing, responsive material, something shaped by gesture, acceleration, and rhythm.
What I Observed / Insights
Speed as data adds emotion. Moving slowly created soft, small tones; faster movements produced louder, brighter notes. Sound started to feel like a form of handwriting. Crossing boundaries changed the voice of the instrument, which made movement feel intentional. Seeing the system replay the recorded gestures made me think about interaction as choreography: sound wasn't temporary, it could be captured and re-lived. The fading circles, line trails, and musical attack envelopes created a layered, almost painterly texture.
Experiment 02
Idea
I wanted to see what would happen if sound became a kind of cosmic drawing tool.
Each click produces a “star,” and that star plays a note based on where it is placed.
Over time, clicking multiple points builds a constellation — visually and musically.
Pressing ENTER “replays” the constellation in the exact order it was created, like a memory unfolding star by star.
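A minimal sketch of the constellation logic described above. The 220–880 Hz left-to-right pitch range and the `Constellation` class itself are my assumptions for illustration, not the actual sketch's code:

```javascript
// Minimal sketch of the star/constellation idea: each click stores a
// star whose pitch depends on horizontal position, and replay walks the
// stars in the exact order they were created.
// The 220–880 Hz pitch range is an assumed value.
class Constellation {
  constructor(width) {
    this.width = width;
    this.stars = [];   // stored in click order
  }

  // A click places a star; left edge = 220 Hz, right edge = 880 Hz.
  addStar(x, y) {
    const pitch = 220 + (x / this.width) * (880 - 220);
    this.stars.push({ x, y, pitch });
    return pitch;
  }

  // ENTER replays the constellation star by star, in creation order.
  replay(playNote) {
    for (const star of this.stars) playNote(star.pitch);
  }
}
```

In the real sketch, each `playNote` call would trigger an oscillator and draw the star plus a line connecting it to the previous one, which is what produces the network effect.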
- Visual created as I made my own music composition.
- Visual created by my friend as she made her own version.
- The experiment being replayed as the points are formed and the music composition is created.
Why I Tried This
I realised in previous experiments that sound was mostly reactive: the system listens, and visuals move.
But I wanted sound to become placed, stored, spatial, and intentional.
This experiment was based on two questions: what if sound had coordinates? What if interactions became events that accumulate instead of disappearing?
What I Observed / Insights
Spatial sound feels meaningful. The pitch mapping (left to right across the screen) made every point feel like a decision, like placing a note in a composition. Playback created a sense of memory: watching the constellation reconstruct itself both visually and sonically gave the feeling that the system "remembered" my actions. The connection lines made the drawing feel like a network, not just isolated points but a structure tied together by sound.
Sound's emotional quality changed when it became part of a sequence. A single note is just a tap; a sequence of notes starts to feel like a phrase. That shift made me realise how quickly interaction can move from random to expressive. This experiment introduced the idea that interaction can be temporal, spatial, and sonic, not just reactive animation. I didn't know it then, but this sense of "memory through interaction" would become important later.
What These Two Experiments Taught Me
Even though I’m still in the early weeks and my direction is not fixed, doing both experiments back-to-back made a few things much clearer:
i. Sound becomes more interesting when linked to physical behaviour
(placement, speed, gesture, sequence).
ii. Interaction feels deeper and more creative when the system remembers
(recording + playback introduced the idea of temporal layering).
iii. Mapping space to sound creates an embodied relationship.
I'm becoming more drawn to the idea of sound as a material: something you can "drop," "drag," "connect," "stretch," or "compose with," instead of something that purely controls visuals.
Learning and Challenges
The observations above came not only from me but also from friends, whom I asked to describe how they felt as they interacted with the sketches.
Developing this experiment taught me how sound can directly shape the rhythm and structure of a digital visual system. However, I encountered challenges:
Performance limitations: FFT processing and multiple animation layers caused occasional lag.
Interpretation: while the visuals respond accurately to sound, their conceptual reading still feels abstract. I need to refine what each mode communicates in relation to my research question, though I am not sure whether I will continue working on this sketch.
Scalability: The current sketch functions as a visual prototype, but I want to extend it into something more interactive or installation-based later.