This post marks the beginning of what I expect to be a somewhat regular series of posts documenting my thinking, learning, and progress on some Firefox development. I say “Experiments” above because I am going to try a number of things. First, I’m going to push into a part of the Firefox source code where I haven’t worked before, namely the DOM implementation (it scares me, to be honest). Second, I’m going to do so in an open and pedagogical way, attempting to cast aside my ego and my hesitation at looking foolish (I don’t actually know how to do what I’m going to try, and will learn and fail as I go) on the way to producing an authentic model of open development for my students. Third, I’m going to work with a few others who are also interested in extending themselves and extending the web. The only thing I can assure you of at this early date is that these posts will be an honest account of the attempt.
First, I should say something about <audio> and what we’re planning to do with it. The <audio> element is part of HTML5, and it allows web developers to embed audio, music, sound effects, etc. directly in a web page. In the past, this was typically achieved with Flash. The <audio> element, like the <video> element, is still in flux, which makes it an excellent target for experimentation and play while the spec continues to wend its way toward completion.
I don’t know anything about audio, nor do I have any personal need for <audio>. However, I have spent the fall working with creative designers, visualizers, and developers as part of the Processing for the Web project. It has opened my eyes to what the browser and the web might become if these people are given the right tools. And while I can’t do anything more with audio than press ‘play’ (I sometimes struggle even with that), I can bring my knowledge of Mozilla, its code, and the web to this effort. So my role in these experiments will be to help get things working in Firefox, and to help my partners (who do understand and need audio and <audio>) learn how to do this work.
In my next posts I’ll show some of the work I’m doing to understand the process of exposing more data to content. Some existing DOM elements (e.g., canvas) already do similar things, so that’s where much of my reading is currently focused.