Patch
Audio visualizations for podcasters & musicians
Patch is a browser-based audio visualization tool. Upload a track, choose a generative style, customize the look, and export a video ready for social media. Every visual is driven by the math of your audio. No AI, no templates, no two exports look the same.
try patch
Role: Designer, Developer
Website: patchaudio.com
Sound you can see.
Every visual is driven directly by the elements of your audio track: frequency, amplitude, rhythm, and more. Each style is a different instrument, tuned by your hand and inherently unique. No two exports look the same.
Designed, developed, and launched
Multiple explorations translating audio into visuals
Free + Pro tiers, live with Stripe subscriptions
Everything renders in the browser, nothing leaves your device
Problem
Most podcasts don't have video.
Most musicians don't either.
So what do they post?
A static image with a waveform.
Maybe an audiogram with a bouncing subtitle.
Nothing that actually captures what their sound feels like.
There's a gap between having great audio and having something worth sharing visually. Patch fills that gap.
How it works
1. Upload
a track (or paste a link). The Web Audio API analyzes the audio in real time, pulling out frequency data, amplitude, and beat information.
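One piece of that analysis is grouping the raw FFT bins into a few meaningful bands. The sketch below is illustrative, not Patch's actual code: the `bandAverages` function and its three-band split are assumptions, and in the browser the `bins` array would come from `AnalyserNode.getByteFrequencyData` rather than being filled by hand.

```javascript
// Group FFT bin magnitudes (0-255, the range returned by
// AnalyserNode.getByteFrequencyData) into a few named bands.
// The band boundaries here are illustrative choices.
function bandAverages(bins) {
  const band = (lo, hi) => {
    let sum = 0;
    for (let i = lo; i < hi; i++) sum += bins[i];
    return sum / (hi - lo);
  };
  const n = bins.length;
  return {
    bass: band(0, Math.floor(n * 0.1)),
    mids: band(Math.floor(n * 0.1), Math.floor(n * 0.5)),
    treble: band(Math.floor(n * 0.5), n),
  };
}

// In the browser these values would come from an AnalyserNode:
//   analyser.getByteFrequencyData(bins);
const bins = new Uint8Array(1024).fill(0);
bins.fill(200, 0, 102); // simulate a strong low end
console.log(bandAverages(bins).bass > bandAverages(bins).treble); // true
```

A real analysis loop would recompute this every animation frame, which is what lets the visuals track the music in real time.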
2. Choose
Eleven visualization styles, each with its own visual logic. Some are minimal and precise. Others are dense and expressive. All of them respond to the actual content of the audio, not a random animation loop or an AI-generated video.
3. Customize
Adjust color palette, background, intensity, and blending. A WYSIWYG preview plays your track with the visualization live, so what you see is what you get.
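Conceptually, controls like intensity and scale act as simple transforms on the audio-driven signal before it reaches the renderer. This is a sketch of that idea only: the function name, parameters, and ranges are assumptions, not Patch's actual parameter model.

```javascript
// Illustrative sketch: user-facing controls as simple transforms on
// the audio-driven signal. Names and ranges are assumptions, not
// Patch's actual parameter model.
function applyControls(amplitude, { intensity = 1, scale = 1 } = {}) {
  // amplitude: 0..1 from the analyser. Intensity exaggerates or
  // softens the response; scale sets the base size of the motion.
  const driven = Math.min(1, amplitude * intensity);
  return scale * (0.5 + 0.5 * driven); // never collapses to zero
}

console.log(applyControls(0.5, { intensity: 2, scale: 100 })); // 100 (response saturates)
```

Because the same transform runs in the live preview and in the export, the WYSIWYG promise holds: tweak a slider, see the result immediately, and the rendered video matches.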
4. Export
Render to video at up to 4K resolution, with audio baked in, ready to post. The entire pipeline runs client-side in the browser, no upload to a server, no waiting for a render farm.
The stack
Each style reads the audio's frequency and volume data in real time and translates it into motion.
or: p5.js + custom shaders, driven by Web Audio API frequency and time-domain analysis
The browser breaks down the audio track into its component parts (which frequencies are present, how loud they are, where the beats land) and feeds that to the visualization engine.
or: Web Audio API (AnalyserNode FFT, beat detection via amplitude delta)
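"Beat detection via amplitude delta" can be sketched minimally: flag a frame as a beat when its amplitude jumps well above the recent average. The window size and threshold below are illustrative choices, not Patch's tuned values.

```javascript
// Minimal amplitude-delta beat detector: flag a frame as a beat when
// its amplitude exceeds the recent running average by a fixed factor.
// Window size and threshold are illustrative, not Patch's tuning.
function detectBeats(amplitudes, window = 4, threshold = 1.5) {
  const beats = [];
  for (let i = window; i < amplitudes.length; i++) {
    const recent = amplitudes.slice(i - window, i);
    const avg = recent.reduce((a, b) => a + b, 0) / window;
    if (avg > 0 && amplitudes[i] > avg * threshold) beats.push(i);
  }
  return beats;
}

// A steady signal with two spikes:
const amps = [1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 3, 1];
console.log(detectBeats(amps)); // [5, 10]
```

A production detector would typically add a refractory period so one drum hit isn't counted twice, but the core idea is just this delta against a rolling baseline.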
The final video is rendered and assembled entirely in the browser. No file ever leaves your computer until you choose to download it.
or: WebCodecs API via Mediabunny (client-side muxer, MP4 output with AAC audio)
Stripe handles subscriptions. The site is hosted on Vercel with automatic updates from GitHub.
or: Stripe Checkout, Vercel serverless functions, auto-deploy from GitHub
IMPORTANT
Patch runs entirely in the browser.
No backend processing, no accounts required for the free tier.
The decision to keep everything client-side was intentional. Audio files are personal. A podcaster shouldn't have to upload their unreleased episode to someone else's server just to make a 30-second clip look interesting.
It also means I don’t have to deal with privacy issues... on that front, at least.
Approach
Each visualization style in Patch is essentially a different mapping function. One responds to frequency bands, one traces amplitude across the spectrum, one morphs with bass energy, and another plots overtone relationships. The visual language changes, but the principle is the same: the audio is the input, math is the engine, and the output is unique to that specific piece of sound.
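In the spirit of "a different mapping function," here's a hypothetical example of what one style's mapping might look like. The feature names and formulas are invented for illustration; they are not Patch's actual code.

```javascript
// Hypothetical mapping function: one style might map bass energy to
// scale, spectral centroid to hue, and overall amplitude to opacity.
// Feature names and formulas are illustrative, not Patch's code.
function mapFeaturesToVisual({ bass, centroid, amplitude }) {
  return {
    scale: 1 + bass,                 // bass energy swells the form
    hue: Math.round(centroid * 360), // spectral brightness rotates the hue
    opacity: Math.min(1, 0.2 + amplitude),
  };
}

console.log(mapFeaturesToVisual({ bass: 0.5, centroid: 0.25, amplitude: 0.3 }));
// { scale: 1.5, hue: 90, opacity: 0.5 }
```

Swap the mapping and you get a different style; feed the same mapping a different track and you get a different result. That's the whole trick.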
The important design decision was what to expose to the user. Generative systems can have dozens of parameters, but most people don't want to tune oscillator damping, nor would they know what that means in the first place. Patch gives you a few simple, meaningful controls: color, intensity, speed, scale. Enough to make it yours, not so much that you're lost.
in conclusion
Feels like the right direction.