This is the testbed app I showed at I/O 2012; it helps developers visualize how the graph nodes in the Web Audio API work.
This application (also shown at I/O 2012) implements a 28-band (actually variable) vocoder - a "robotic voice" processor. It's a pretty complex audio processing demo.
This application performs naive (zero-crossing-based) pitch detection in real time. It may eventually be suitable for a guitar tuner; for now, it mostly shows just how bad my pitch control is when whistling. :)
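The zero-crossing idea can be sketched in a few lines of plain JavaScript (a hypothetical standalone helper, not the demo's actual code): count positive-going zero crossings in a sample buffer, then divide the samples spanned by the number of full periods between the first and last crossing.

```javascript
// Naive zero-crossing pitch estimate (sketch; assumes a reasonably
// clean, periodic signal -- noisy input will fool it badly).
function estimatePitch(samples, sampleRate) {
  let crossings = 0, first = -1, last = -1;
  for (let i = 1; i < samples.length; i++) {
    // A positive-going zero crossing: previous sample <= 0, current > 0.
    if (samples[i - 1] <= 0 && samples[i] > 0) {
      if (first < 0) first = i;
      last = i;
      crossings++;
    }
  }
  if (crossings < 2) return 0; // not enough periods to estimate
  // (crossings - 1) full periods span (last - first) samples.
  return sampleRate * (crossings - 1) / (last - first);
}

// Synthetic check: a 440 Hz sine at a 44.1 kHz sample rate.
const rate = 44100, freq = 440;
const buf = new Float32Array(4096);
for (let i = 0; i < buf.length; i++) {
  buf[i] = Math.sin(2 * Math.PI * freq * i / rate);
}
console.log(estimatePitch(buf, rate)); // ≈ 440
```

In the real demo the buffer would come from an AnalyserNode fed by the live input, rather than being synthesized.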
This application implements a polyphonic "analog" synthesizer, with a classic voice architecture mostly copied from the Moog Prodigy (minus oscillator sync, and with reverb and drive thrown in). It's not an emulation of the Prodigy (I've never actually played one), just a generic synth example. Although a large part of the reason I wrote this was to test out the Web MIDI API (and the synth is playable and controllable through MIDI), you can also play it with the onscreen UI and computer keyboard - the ASDF row is the C major scale, with the black keys on the row above.
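When a synth like this receives a MIDI note-on, it has to map the note number to an oscillator frequency; the standard equal-temperament mapping (with A4, MIDI note 69, at 440 Hz) looks like this (the helper name is mine, not the demo's):

```javascript
// Standard equal-temperament MIDI-note-to-frequency conversion:
// each semitone is a factor of 2^(1/12); note 69 is A4 = 440 Hz.
function midiNoteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

console.log(midiNoteToFrequency(69)); // 440
console.log(midiNoteToFrequency(60)); // middle C, ≈ 261.63
```

The resulting value is what you'd assign to an OscillatorNode's frequency when starting a voice.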
This application implements a dual DJ deck, specifically intended to be driven by a Numark DJ2Go MIDI controller. This was both a test of some audio processing and playback rate manipulation (part of which is not currently exposed) and a test of MIDI control.
This is a pretty straightforward implementation of Conway's Game of Life, on a small (8x8) grid - except I wrote it to interface with a Novation Launchpad controller, to both show the grid on the Launchpad and use the Launchpad buttons to draw/erase cells on the grid.
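The update rule driving the grid is the usual Life rule; a minimal standalone sketch (my own illustration, not the demo's code - I'm assuming non-wrapping edges here):

```javascript
// One Game of Life step on a small square grid of 0/1 cells.
// Sketch only; assumes edges do not wrap.
function lifeStep(grid) {
  const N = grid.length;
  const next = grid.map(row => row.slice());
  for (let y = 0; y < N; y++) {
    for (let x = 0; x < N; x++) {
      let n = 0; // live-neighbor count
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < N && nx >= 0 && nx < N) n += grid[ny][nx];
        }
      }
      // Alive next step with exactly 3 neighbors, or 2 if already alive.
      next[y][x] = (n === 3 || (n === 2 && grid[y][x] === 1)) ? 1 : 0;
    }
  }
  return next;
}
```

In the demo, each step's grid would then be pushed to the Launchpad's LEDs, and button presses toggle cells before the next step.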
Quick addition of MIDI controller support (currently using a Livid Instruments CNTRLR) to the Shiny Drum Machine.
This demo lets you play with a few common effects on the audio inputs.
This is a simple template to get you up and running building live synthesizers. It gives you a polyphonic voice architecture, an on-screen keyboard (supporting touch input as well as mouse), computer-keyboard input, and even MIDI support (using the Web MIDI Polyfill if the Web MIDI API is not present). To build a synthesizer, all you have to do is plug in the sound creation and user interface! To demonstrate, a simple sine-wave voice with an ADSR envelope is built in. This is essentially the "Analog" Synth demo with the voice architecture and UI stripped out.
This is example code for recording audio from live input, displaying the buffers and downloading them as WAV files. It builds on Matt Diamond's excellent RecorderJS for recording and encoding the sound buffers, and my own Audio-Buffer-Draw library to display sound buffers on an HTML5 canvas.
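The WAV-encoding step boils down to writing a 44-byte RIFF header ahead of the 16-bit PCM samples; a minimal mono sketch of that idea (my own simplified version, not RecorderJS's actual code):

```javascript
// Pack mono Float32 samples into a 16-bit PCM WAV file (sketch).
function encodeWAV(samples, sampleRate) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk length
  view.setUint16(20, 1, true);              // format 1 = PCM
  view.setUint16(22, 1, true);              // 1 channel (mono)
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to [-1, 1] and scale to signed 16-bit.
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}
```

In the demo, the resulting ArrayBuffer would be wrapped in a Blob and offered as a download link.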