Tolkyn: Turning Invisible Rhythm Into Living Architecture

by prithwisd

Tolkyn is a wearable, real-time system that transforms invisible motion and sound into living visual structures.

In this project, a motion sensor mounted on a pair of glasses captures subtle head and body movements as the user interacts with their environment. These sensor signals are streamed to a computer, where a custom Processing-based engine converts them into a dynamic network of nodes and edges displayed on screen. The result is a continuously evolving visual field that responds to both human motion and audio rhythm.

Unlike traditional audio visualizers that rely solely on continuous frequency amplitude, Tolkyn separates rhythm and spectral energy into distinct visual roles. Fast Fourier Transform (FFT) analysis drives the edge system, determining when connections form, how many edges appear, and how bright or thick they become. Beat detection, on the other hand, acts as a temporal accent—injecting short-lived pulses into the nodes themselves. Each detected beat causes the visible nodes to momentarily inflate and decay, creating a rhythmic breathing effect while the underlying network structure continues to evolve based on the audio spectrum.

By placing the sensor on a wearable form factor, the user becomes an active part of the visualization loop. Small movements of the head subtly reshape the digital space, blurring the boundary between physical motion and generative visuals.

Tolkyn explores how wearable sensing and real-time audiovisual feedback can make hidden signals—motion, rhythm, and energy—visible, intuitive, and expressive.

Supplies

Arduino MKR WiFi 1010

Serves as the wearable controller, handling sensor data acquisition and serial communication with the visualization system.

L3G4200D Triple-Axis Gyroscope Module

Captures real-time rotational motion (X, Y, Z axes) from head movements, enabling orientation-aware and motion-responsive visuals.

Passive Buzzer Module

Provides simple audio feedback for system states such as startup and termination, useful for debugging and interaction cues.

Indicator LEDs

Provide visual feedback on the gyroscope tilt readings and serve as a useful debugging aid.

Breadboard

Used to prototype and organize the sensor, buzzer, and microcontroller connections without permanent soldering.

Jumper Wires (Male–Male / Male–Female)

Connect the Arduino, gyroscope, and buzzer through the breadboard, allowing flexible wiring and quick iteration during development.

Ordinary Glasses (Frame)

Acts as the wearable mounting platform for the gyroscope, allowing natural head movement to directly influence the visual output.

Hot Glue Gun

Used to securely attach the electronics to the glasses frame while keeping the setup lightweight and wearable.

Concept and Systems Overview

Tolkyn is a wearable, motion- and sound-responsive visualization system that transforms human movement and ambient audio into a living, evolving visual structure.

Unlike traditional audio visualizers that focus only on sound amplitude, Tolkyn combines:

  1. Head motion (captured through a wearable gyroscope),
  2. Ambient audio analysis (Fast Fourier Transform and beat detection),
  3. Visual feedback loops (nodes, edges, and spatial motion).

The system is designed not only to turn "invisible" sound waves into generative art, but also to explore human–computer interaction by letting the visuals react continuously to subtle bodily gestures and sound, creating a feedback-driven experience that feels organic rather than scripted.

At a high level:

  1. A gyroscope mounted on glasses tracks head motion.
  2. An Arduino MKR WiFi 1010 streams motion data to a computer.
  3. A Processing sketch fuses motion, camera input, and audio to generate a 3D visual network.
  4. A buzzer provides audible feedback when the system starts or terminates.

Assemble the Hardware

To capture natural head movement, the gyroscope is directly attached to a pair of ordinary glasses.

  1. Insert the L3G4200D gyroscope module and the Arduino board into the breadboard.
  2. Place the buzzer module on one of the side edges of the breadboard and secure it firmly with the hot glue gun.
  3. Place the breadboard on one side of the glasses frame.
  4. Use a glue gun to secure the setup firmly while keeping it lightweight.
  5. Route the jumper wires along the glasses arm to reduce strain and discomfort.
  6. Ensure the sensor remains stable during head movement — even slight wobble can introduce noise.

This wearable placement allows the system to respond intuitively to nods, tilts, and rotations of the head.

Wiring the Hardware

The hardware setup connects the gyroscope, buzzer, and indicator LEDs to the Arduino using simple I2C and digital connections.

Gyroscope (I2C):

  1. VCC → 5V
  2. GND → GND
  3. SDA → SDA
  4. SCL → SCL

Buzzer:

  1. Signal → Digital pin 9
  2. GND → GND

Red LED (for X axis tilts):

  1. Signal → Digital pin 3
  2. GND → GND

Green LED (for Y axis tilts):

  1. Signal → Digital pin 6
  2. GND → GND

Blue LED (for Z axis tilts):

  1. Signal → Digital pin 5
  2. GND → GND

Yellow LED (for debugging):

  1. Signal → Digital pin 8
  2. GND → GND

Programming the Arduino (Motion + Feedback)

The Arduino is responsible for:

  1. Reading angular velocity from the gyroscope,
  2. Detecting sudden motion changes,
  3. Sending clean motion data over serial,
  4. Providing audio feedback on system state.

Rather than walking through the code line by line, the key functionality is summarized below.

How the Arduino Code Works (High-Level)

The Arduino acts as a real-time motion sensor and feedback controller.

  1. The L3G4200D gyroscope is configured over I2C and continuously measures angular velocity on the X, Y, and Z axes.
  2. Raw sensor values are converted into degrees per second, making the motion data easy to interpret and visualize.
  3. The Arduino streams this motion data to the computer over serial communication at a fixed rate.
  4. The system listens for simple text commands: "START" triggers an audible startup melody, while "TERMINATE" resets internal motion tracking and plays a shutdown tone.
  5. A passive buzzer provides immediate audible confirmation of these state changes, useful when the wearable is out of view.

This design keeps the Arduino lightweight and responsive, while allowing the Processing sketch to handle all heavy visual computation.
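
On the computer side, the receiving end of this serial link can be sketched in Processing roughly as follows. The comma-separated "gx,gy,gz" line format, the 115200 baud rate, and the newline-terminated commands are illustrative assumptions; the firmware in the repository defines the actual protocol.

// Illustrative Processing-side reader for the Arduino's motion stream.
// The "gx,gy,gz" line format and baud rate are assumptions, not the firmware's spec.
import processing.serial.*;

Serial arduino;
float gx, gy, gz;                   // latest angular velocities in deg/s

void setup() {
  arduino = new Serial(this, Serial.list()[0], 115200);
  arduino.bufferUntil('\n');        // fire serialEvent() once per full line
  arduino.write("START\n");         // ask the wearable to play its startup melody
}

void draw() { }                     // keep the sketch running so serial events arrive

void serialEvent(Serial port) {
  String line = port.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 3) {          // one reading per line: X, Y, Z in deg/s
    gx = float(parts[0]);
    gy = float(parts[1]);
    gz = float(parts[2]);
  }
}

void exit() {
  arduino.write("TERMINATE\n");     // shutdown tone + motion reset before closing
  super.exit();
}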

The Processing Interface (Turning Motion & Sound Into a Living Network)

The heart of this project lives in Processing, where motion, sound, and sensor data are transformed into a living visual structure.

Processing was chosen because it is specifically designed for real-time graphics, creative coding, and rapid prototyping. It provides native support for audio analysis, video input, 3D rendering, and serial communication, allowing all system components to interact smoothly in a single integrated environment.

This sketch acts as the central nervous system of the project.
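
As a rough outline of how those pieces fit together, the skeleton below wires the Sound, Video, and Serial libraries into one sketch. It is an illustrative sketch under assumed names and sizes, not the repository's actual source.

// Illustrative skeleton of the integrated environment -- not the project's exact code.
import processing.serial.*;   // serial link to the wearable Arduino
import processing.sound.*;    // microphone input, FFT, amplitude
import processing.video.*;    // webcam feed for environmental motion

Serial arduino;               // gyroscope stream from the glasses
AudioIn mic;                  // live audio input
FFT fft;                      // spectral analysis driving the edges
Amplitude loudness;           // overall energy used for beat accents
Capture cam;                  // camera feed for frame differencing

void setup() {
  size(1280, 720, P3D);                                   // 3D canvas for the network
  arduino = new Serial(this, Serial.list()[0], 115200);
  mic = new AudioIn(this, 0);
  mic.start();
  fft = new FFT(this, 512);
  fft.input(mic);
  loudness = new Amplitude(this);
  loudness.input(mic);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  background(0);
  // each frame: read motion, analyze audio, then update and render nodes/edges
  // (the individual pieces are sketched in the following steps)
}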

What it Represents

The visualization appears as a three-dimensional network of floating nodes and edges:

  1. Nodes are persistent points in space, forming a distributed field around the viewer.
  2. Edges are temporary connections that appear between distant nodes, creating momentary structures that fade over time.
  3. The entire network exists in 3D space and continuously evolves rather than resetting or looping.

Instead of displaying sound as a flat waveform or spectrum, the system visualizes relationships and events, making invisible forces perceptible.
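
One minimal way to model that structure is sketched below; the class names, fields, and decay constants are assumptions for illustration rather than the repository's actual classes.

// Illustrative data model -- names and constants are assumed, not the project's code.
class Node {
  PVector pos;                // persistent position in 3D space
  float pulse = 0;            // extra radius injected by beats, decays each frame

  Node(PVector p) { pos = p; }

  void display() {
    pushMatrix();
    translate(pos.x, pos.y, pos.z);
    sphere(4 + pulse);        // base radius plus beat-driven inflation
    popMatrix();
    pulse *= 0.92;            // eases back toward the resting size
  }
}

class Edge {
  Node a, b;
  float life = 255;           // brightness; the edge is removed once it fades out

  Edge(Node a, Node b) { this.a = a; this.b = b; }

  void display() {
    stroke(255, life);
    line(a.pos.x, a.pos.y, a.pos.z, b.pos.x, b.pos.y, b.pos.z);
    life -= 3;                // gradual fade keeps the structure fluid
  }
}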

How Motion Shapes the Space

Motion in this system is not driven by a single sensor. Instead, it emerges from a hybrid motion model that combines wearable inertial sensing with camera-based visual motion detection.

The gyroscope mounted on the glasses continuously streams rotational data from the user’s head. This data provides precise, low-latency orientation changes, making the visual field feel directly attached to the wearer’s movement.

At the same time, the live camera feed analyzes frame-to-frame brightness changes to detect motion in the surrounding environment. This allows the system to respond not only to the user’s movement, but also to motion happening around them—such as people passing by, changes in lighting, or shifts in the scene.

Both motion sources are blended together with equal weight:

  1. 50% gyroscope input for stability and intentional movement
  2. 50% camera-based motion for environmental awareness and liveliness

This fusion produces smooth, responsive rotations that feel grounded yet dynamic, avoiding the rigid feel of purely sensor-driven systems and the "jitter" of purely vision-based ones.
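
A hedged sketch of that 50/50 blend is shown below. It assumes gyroRate holds an angular velocity parsed from the serial stream (for example, the Y axis from the earlier reader) and that cam is the live Capture object from the skeleton sketch; the pixel-sampling stride and scaling factor are illustrative tuning values.

// Illustrative 50/50 motion fusion -- variable names and scaling are assumptions.
float gyroRate = 0;           // deg/s from the wearable, updated in serialEvent()
float sceneAngle = 0;         // accumulated rotation applied to the 3D scene
PImage prevFrame;             // previous camera frame for brightness differencing

float cameraMotion() {
  if (!cam.available()) return 0;
  cam.read();
  if (prevFrame == null) {                    // first frame: nothing to compare yet
    prevFrame = cam.copy();
    return 0;
  }
  cam.loadPixels();
  prevFrame.loadPixels();
  float diff = 0;
  for (int i = 0; i < cam.pixels.length; i += 50) {   // sample every 50th pixel
    diff += abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i]));
  }
  prevFrame = cam.copy();
  return diff / (cam.pixels.length / 50.0);   // average brightness change per sample
}

void applyMotion() {
  // equal weighting: wearer intention (gyro) + environmental activity (camera)
  float blended = 0.5 * gyroRate + 0.5 * cameraMotion();
  sceneAngle += radians(blended) * 0.01;      // small factor keeps rotation smooth
  rotateY(sceneAngle);                        // call inside draw(), before rendering
}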

Because motion is influenced by both the wearer and the environment, the visualization behaves less like a pre-programmed generative artwork and more like a living spatial system that reacts to presence, context, and movement in real time.

This dual-input approach is what makes the experience feel immersive rather than mechanical, setting it apart from traditional generative art installations that rely on a single control source.

How Sound Builds Structure

Live audio input is analyzed using a Fast Fourier Transform (FFT):

  1. Louder frequency components generate edges between nodes
  2. Stronger amplitudes create more connections
  3. Edges fade naturally, preventing visual clutter and allowing the structure to remain fluid

This means the visual network continuously reorganizes itself based on the sonic environment.
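
Those rules can be sketched roughly as follows, building on the Node and Edge classes and the FFT object from the earlier sketches; the band stride, amplitude threshold, and edge counts are assumed tuning values rather than the project's.

// Illustrative FFT-driven edge creation -- thresholds and counts are assumed tuning values.
float[] spectrum = new float[512];

void updateEdges(ArrayList<Node> nodes, ArrayList<Edge> edges) {
  fft.analyze(spectrum);                      // current amplitude of each frequency band
  for (int band = 0; band < spectrum.length; band += 32) {
    if (spectrum[band] > 0.05) {              // louder components spawn connections
      int count = int(map(spectrum[band], 0.05, 0.3, 1, 4));   // stronger = more edges
      for (int n = 0; n < count; n++) {
        Node a = nodes.get(int(random(nodes.size())));
        Node b = nodes.get(int(random(nodes.size())));
        if (a != b) edges.add(new Edge(a, b)); // momentary structure between distant nodes
      }
    }
  }
  for (int i = edges.size() - 1; i >= 0; i--) {
    if (edges.get(i).life <= 0) edges.remove(i);   // drop edges that have fully faded
  }
}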

How Beats Affect the System

Unlike traditional audio visualizers that use beats to create new shapes or flashes, this project treats beats as energy events.

When a beat is detected:

  1. No new edges are created
  2. Existing nodes briefly expand and pulse
  3. The pulse decays naturally over time, returning nodes to their original size

This creates a breathing, organic response where rhythm injects energy into the system.
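
A hedged sketch of that behavior follows, using a simple energy-threshold trigger as a stand-in for whatever beat detection the actual sketch uses; the 1.5x ratio and pulse size are illustrative.

// Illustrative beat response -- a simple energy-threshold detector stands in for
// the project's actual beat detection; constants are assumed tuning values.
float avgLevel = 0;                           // slowly adapting average loudness

void updateBeats(ArrayList<Node> nodes) {
  float level = loudness.analyze();           // Amplitude object from the skeleton sketch
  avgLevel = lerp(avgLevel, level, 0.05);     // running average of recent energy
  if (level > avgLevel * 1.5 && level > 0.02) {
    // beat detected: no new edges, just energy injected into the existing nodes
    for (Node n : nodes) {
      n.pulse = 6;                            // momentary inflation...
    }
  }
  // ...which decays inside Node.display() (pulse *= 0.92), producing the breathing effect
}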

Why This Matters

By separating continuous audio energy (FFT-based edges) from discrete musical events (beat-based node pulses), the interface avoids visual noise and maintains clarity.

The result is a system that feels:

  1. Reactive but not chaotic
  2. Expressive without being literal
  3. Alive rather than animated
  4. Modular, facilitating easy future developments

This Processing interface makes sound, motion, and rhythm visible as spatial behavior, transforming intangible signals into something you can perceive, navigate, and relate to.

Further Exploration / Source Code

The complete source code for this project—including the Processing visualization, Arduino firmware, and documentation—is available on GitHub: https://github.com/SUNSET-Sejong-University/Tolqyn/

The repository includes:

  1. The complete Processing sketch that drives the real-time visual system
  2. The Arduino code for the gyroscope and buzzer feedback
  3. Comments on tuning parameters such as motion sensitivity, beat response, and node behavior

This project can also serve as a stepping stone for explorations in wearable interfaces, spatial computing, and responsive generative art.