I’ve been working on these libraries for a few years now, so it’s probably past time I wrote up a few notes (heh) about what they are and how they work.
wasgen and its companion parameter library are a pair of projects for producing arbitrary WebAudio sound effects, without (and this is the important part) touching the WebAudio API directly. (The API is powerful but famously hairy, and even now that I understand it pretty thoroughly I try to abstract myself as far away from it as possible.)
The two projects stem from the desire to have a MIDI-like way of quickly and easily playing sounds. I don’t play with Web Audio for the joy of it; after all, I want to make music! So I wanted a library where I could say “play a piano-like sound, at 440Hz, for 0.35 seconds”, and it would magically take care of all the nodes and parameter events. Of course there are projects that do this for bare oscillators, but I wanted something abstract enough that the sounds could be glockenspiels or kick drums or whatever.
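Concretely, the call shape I wanted was something like this (pseudocode to illustrate the idea; check the wasgen README for the real signature):

```
play(pianoLikeSound, 440, now, now + 0.35)
```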
While looking for projects like this, I came across tinysynth, a browser synth and MIDI player, which had a built-in system for playing simple WebAudio approximations of all the default MIDI instruments. It centered around a bunch of recipes, which defined one or two oscillators for each instrument and some parameters for how to join them up.
I used that as a starting point and started hacking on the “recipe” format to make it more abstract: nestable to arbitrary depth, with support for filters, and so on. Many iterations later, wasgen now lets you define pretty much any audio graph you can think of.
var Generator = require('wasgen')
Once you call play(), the library creates the oscillators and filters your program describes, connects them up, and schedules the gain envelopes and frequency parameter events. It also remembers which parameters it will need to reschedule when the note is released, and after the release envelope finishes it disconnects all the nodes so they can be garbage-collected.
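For a sense of what that saves you, here’s a hand-rolled sketch of roughly what one note costs in raw WebAudio (my own illustration, not wasgen’s internals): one oscillator through a filter and a gain envelope, with cleanup when the note ends.

```javascript
// One note, the hard way: build nodes, wire them up, schedule the
// envelope, then tear everything down when the source finishes.
function playTone (ctx, freq, duration) {
  var t = ctx.currentTime
  var osc = ctx.createOscillator()
  var filter = ctx.createBiquadFilter()
  var gain = ctx.createGain()
  osc.frequency.setValueAtTime(freq, t)
  filter.type = 'lowpass'
  filter.frequency.setValueAtTime(freq * 4, t)
  gain.gain.setValueAtTime(0, t)                      // start silent
  gain.gain.linearRampToValueAtTime(1, t + 0.02)      // short attack
  gain.gain.linearRampToValueAtTime(0, t + duration)  // fade out to release
  osc.connect(filter)
  filter.connect(gain)
  gain.connect(ctx.destination)
  osc.start(t)
  osc.stop(t + duration)
  osc.onended = function () {
    // free the graph once the note finishes, so the nodes can be GCed
    osc.disconnect(); filter.disconnect(); gain.disconnect()
  }
  return osc
}
```

And that’s the trivial case, before you add a second oscillator, per-parameter sweeps, or early note release.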
As for how it actually works, the most complicated part is scheduling parameter values (envelopes, sweeps, and so on). I wanted wasgen programs to be able to schedule any arbitrary combination of ramps and sweeps, but the WebAudio parameter-scheduling API is so hairy that I wound up building a whole abstraction layer between it and wasgen. If you’re reading this hoping to learn anything about parameters, definitely go look there instead.
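To give a flavor of the bookkeeping involved: cancelScheduledValues() wipes a parameter’s future events, so when a note is released mid-envelope, a scheduling layer has to work out where the ramp currently sits before it can anchor the release. Here’s a minimal helper for that (my sketch, using the linear-ramp interpolation formula from the WebAudio spec, not wasgen’s actual code):

```javascript
// Value of an AudioParam part-way through a linear ramp from
// (t0, v0) to (t1, v1), per the spec's interpolation formula.
function valueDuringLinearRamp (v0, t0, v1, t1, t) {
  if (t <= t0) return v0
  if (t >= t1) return v1
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0))
}

// Releasing halfway through a 0-to-1 ramp: the release envelope
// should start from 0.5, not from wherever the param was last set.
// valueDuringLinearRamp(0, 0, 1, 2, 1)  → 0.5
```

Multiply that by exponential ramps, setTargetAtTime curves, and overlapping notes, and you can see why a dedicated abstraction layer was worth the trouble.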
Outside of parameters, the library’s main job is to know what kinds of nodes your audio graph will need, to create and connect them all, and (most crucially) to intelligently apply lots of sensible defaults for everything you didn’t expressly define. The hardest part of the project was working out a program format that lets you define almost any audio setup, without making you explicitly spell out every detail.
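The principle looks something like this (illustrative only: these property names are made up, not wasgen’s actual program format). The user writes a sparse recipe, and the library deep-merges defaults so that every node is fully specified before any WebAudio calls happen:

```javascript
// Hypothetical defaults for one node in a recipe
var defaults = {
  type: 'sine',                                // oscillator waveform
  freq: { p: 1 },                              // ratio of the requested frequency
  gain: { a: 0.01, d: 0.1, s: 0.8, r: 0.1 },   // ADSR-style envelope
}

// Fill in anything the user's sparse node definition left out
function withDefaults (node) {
  var out = Object.assign({}, defaults, node)
  out.freq = Object.assign({}, defaults.freq, node.freq)
  out.gain = Object.assign({}, defaults.gain, node.gain)
  return out
}

// A recipe that only says "square wave, slow attack" still comes out complete:
var full = withDefaults({ type: 'square', gain: { a: 0.5 } })
// full.gain → { a: 0.5, d: 0.1, s: 0.8, r: 0.1 }
```

The hard design question is choosing defaults so that sparse recipes still sound reasonable, which is exactly the “sensible defaults” work described above.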
If you’ve read this far, you might be thinking about performance, and wondering whether the library does object pooling to reuse audio nodes. I experimented with this, but took it out. For one thing, in naive tests the difference wasn’t measurable; for another, while most WebAudio resources seem to be reusable, I don’t think the spec actually requires them to be. (Source nodes like OscillatorNode certainly aren’t reusable: the spec allows start() to be called at most once per node.) So object pooling here looks to me like bad karma, with no concrete benefit.
Anyhoo, that’s the story. If it sounds like your cup of tea, then wasgen was made specifically for you.