I’ve been stumped on the right way to do this for a while. Actually adding the method is pretty easy. What I’ve been trying to think through is how to deal with the linearity of audio, and the fact that, unlike a canvas (which also lets you do a get/set), the audio data isn’t necessarily there to get (e.g., it may not have been downloaded or decoded yet). I’m sure there are good solutions to these problems, but they’ve eluded me so far.
I added two new methods to audio/video:
- mozSetup(channels, rate, volume) // called to create the audio stream
- mozWriteAudio(buffer, bufferCount) // called to write buffer to the stream
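To make the shape of the API concrete, here is a minimal sketch of how I imagine calling it from a page: set up a mono stream and push a second of a 440 Hz sine wave into it. The sample format (floats in the range −1 to 1) and the use of a plain JS array for the buffer are my assumptions, not something fixed by the patch.

```js
// Hypothetical usage sketch of the two new methods.
var audio = new Audio();
var rate = 44100;                 // samples per second
audio.mozSetup(1, rate, 1);       // 1 channel, 44100 Hz, full volume

// One second of a 440 Hz sine wave (assumed: floats in [-1, 1]).
var samples = new Array(rate);
for (var i = 0; i < rate; i++) {
  samples[i] = Math.sin(2 * Math.PI * 440 * i / rate);
}

// Write the whole buffer to the stream.
audio.mozWriteAudio(samples, samples.length);
```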
Next time I hope to show some other demos that my colleagues have created or are creating now. It is ridiculously fun to iterate on this stuff. That’s partly because I’ve got such a talented group of people working with me, but it’s also due to how “hackable” this code is to begin with. The fact that we can take a back-of-the-napkin idea and turn it into a working (if somewhat evil) patch in a couple of hours speaks to how well this stuff was written in the first place.

There’s a good discussion going right now about View Source, and how important it is. What we’re doing in these experiments is View Source all the way down: we work in terms of HTML5 and the Open Web, but underneath is this amazing browser and platform, and underneath that the audio lib, etc. The only thing that enables this kind of experimentation and collaboration is the existence of View Source, and the kinds of communities that can form around it.