Re: Virtual Concerto


Subject: Re: Virtual Concerto
From: Richard Zvonar (zvonar@zvonar.com)
Date: Thu May 13 2004 - 02:01:20 EDT


At 11:37 PM -0400 5/9/04, Eliot Handelman wrote:

>There's still a question as to whether Lewis' new piece
>involved voyager or something newer.

Here's something from George that might help:

At 9:09 PM -0400 5/12/04, George E. Lewis wrote:
>>Some discussion about your work is going on at the CEC list. Any comments?

>Lewis, George. "Too Many Notes: Computers, Complexity and Culture in
>Voyager." Leonardo Music Journal 10 (2000). Reprinted in Everett,
>Anna, and John T. Caldwell, eds. 2003. New Media: Theories and
>Practices of Digitextuality. New York and London: Routledge.
>
>What I can say for now is that my co-programmer (Damon Holzborn,
>http://www.damonholzborn.com/) and I started with the old Voyager
>code, which I had written in Forth. We decided to switch to Max/MSP
>on OS X as a platform for the new piece. Basically I pseudocoded
>those algorithms that I wanted to use, and Damon created the Max
>code from that. Maybe he could add his comments as well.
>
>Certainly, a number of algorithms from the Voyager code were recoded
>in Max (in fact, some of the algorithms date back to the Ircam piece
>from 1984) but where Voyager was a virtual orchestra, the new piece
>was a pianist, which meant that a lot of the complexity of Voyager,
>with 64 single-voice "players" grouped in continually shifting
>combinations, able to use different microtonal pitchsets and timbres
>simultaneously, play in different tempi, etc., sounded a bit too
>uncorrelated to simply graft onto the piano. Also, the 9-foot
>Disklavier to be used in the concert could not really handle all
>that data anyway, and of course there were no microtones.
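
For readers following along, one might picture the Voyager "orchestra" George describes roughly like the Python sketch below: 64 single-voice players, periodically regrouped into ensembles that each share a pitch set, timbre, and tempo. The class and field names are invented for illustration and are not taken from the Forth or Max code.

    import random

    class Player:
        """One of the 64 single-voice 'players' (illustrative only)."""
        def __init__(self, index):
            self.index = index
            self.pitchset = None   # possibly microtonal pitch collection
            self.timbre = None     # e.g. a MIDI program number
            self.tempo = None      # beats per minute

    def regroup(players):
        """Split the players into a random number of ensembles; each ensemble
        gets its own pitch set, timbre, and tempo (placeholder choices)."""
        random.shuffle(players)
        n_groups = random.randint(1, 16)
        groups = [players[i::n_groups] for i in range(n_groups)]
        for group in groups:
            pitchset = random.choice(["chromatic", "pentatonic", "19-tet", "just"])
            timbre = random.randint(0, 127)
            tempo = random.uniform(40, 240)
            for p in group:
                p.pitchset, p.timbre, p.tempo = pitchset, timbre, tempo
        return groups

    orchestra = [Player(i) for i in range(64)]
    ensembles = regroup(orchestra)
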
>
>So we had to make a lot of changes. Also, as people who have used
>the Disklavier have experienced, to get the maximum dynamic range
>from the instrument, you have to pay a lot of attention to the
>relationship between duration and velocity. That is, a staccato
>note played at a velocity of 1 is very different from one played at 80
>or 90. That's mainly because of the physicality of the piano
>response.
>
>To smooth that out required some extra work, basically creating a
>velocity-to-duration curve at 50 ms intervals that translated
>Voyager duration data according to the velocity being played at that
>moment. That more or less eliminated the kind of thing you see where
>the Disklavier is playing "pantomime" notes during rapid, very soft
>passages.
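
A minimal Python sketch of that kind of correction, assuming a lookup table in 50 ms steps. The actual curve was built in Max/MSP and its values aren't given in the message, so the numbers below are placeholders.

    # Minimum duration (ms, in 50 ms steps) needed for a note to speak in a
    # given velocity band -- placeholder values, not the curve from the piece.
    MIN_DURATION_MS = {
        (1, 20):   250,
        (21, 40):  150,
        (41, 60):  100,
        (61, 127):  50,
    }

    def corrected_duration(duration_ms, velocity):
        """Stretch very short, very soft notes so the Disklavier hammer actually
        sounds instead of producing a silent 'pantomime' note."""
        for (lo, hi), min_ms in MIN_DURATION_MS.items():
            if lo <= velocity <= hi:
                return max(duration_ms, min_ms)
        return duration_ms

    # A 40 ms note at velocity 5 is stretched to 250 ms; at velocity 100 it is
    # left alone.
    print(corrected_duration(40, 5), corrected_duration(40, 100))
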
>
>About a week before we had to leave for New York I became a bit
>frustrated with some of the bonehead form decisions the program was
>making, so I went back to an old Rainbow Family idea (pre-Voyager,
>made at Ircam in 1984) and created a transition network of
>probabilities that biased the space of possibility according to what
>the program had been doing up to that point.
>
>The old Voyager code featured a set of "demons" that would create
>transitions, but these demons didn't really care what had been
>already going on. For instance, the duration demon basically
>started with what was going on and made it faster or slower for some
>period of time.
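
In Python terms, the memoryless demon George describes might be caricatured like this (purely illustrative; the scaling factors and note count are made up):

    import random

    def duration_demon(current_duration_ms, n_notes=16):
        """Look only at the current duration and push it faster or slower for a
        stretch of notes, with no memory of what came before."""
        factor = random.choice([0.5, 0.75, 1.5, 2.0])
        return [current_duration_ms * factor for _ in range(n_notes)]
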
>
>The newer transition network had a bit more memory, and made it
>unlikely, for example (though not impossible) for the program to
>move suddenly from a very slow passage to an extremely fast one.
>Rather, the transition would usually encourage more gradual movement
>from slow to fast, or do very gradual shifts up and down in tempo.
>At the same time, the transition program made it quite likely that
>if the program had reached an extremely fast moment, it could
>suddenly "break down" the tempo to a slow pace. Finally, every so
>often the transition network would be overruled and a completely new
>behavior would be instantiated.
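
A toy version of such a tempo transition network, in Python: gradual moves between neighboring states are likely, a sudden breakdown from very fast to slow is possible, and every so often the network is overruled entirely. The states, weights, and overrule probability are invented for illustration; they are not the values used in the piece.

    import random

    STATES = ["very slow", "slow", "medium", "fast", "very fast"]

    TRANSITIONS = {
        "very slow": {"very slow": 0.5, "slow": 0.4, "medium": 0.1},
        "slow":      {"very slow": 0.2, "slow": 0.4, "medium": 0.3, "fast": 0.1},
        "medium":    {"slow": 0.25, "medium": 0.4, "fast": 0.25, "very fast": 0.1},
        "fast":      {"medium": 0.3, "fast": 0.4, "very fast": 0.3},
        # from the extreme, a sudden "breakdown" to a slow pace is quite likely
        "very fast": {"very fast": 0.4, "fast": 0.2, "slow": 0.2, "very slow": 0.2},
    }

    OVERRULE_PROB = 0.05   # occasionally ignore the network and start fresh

    def next_state(current):
        if random.random() < OVERRULE_PROB:
            return random.choice(STATES)     # completely new behavior
        states, weights = zip(*TRANSITIONS[current].items())
        return random.choices(states, weights=weights)[0]

    state = "slow"
    for _ in range(8):
        state = next_state(state)
        print(state)
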
>
>Of course, as with any algorithm, the choices made by this
>transition network embodied aesthetic, cultural, and historical
>concerns, as well as the practical concerns involved with getting
>the program to improvise in performance with a large orchestra. I
>felt that having the program appear to be staying in one place and
>developing "logically" from there, rather than jumping around
>"randomly," might be more compatible with what the orchestra might
>be doing.
>
>The seventeen-minute score was pretty much conventionally notated,
>which required a scheme of presets in which the initial behavior for
>a given section was specified, including which of the four soloist
>mikes (clarinet, trumpet, trombone, violin) to pay attention to.
>The program would develop its improvisation from the initial
>behavior. Most of the cadenzas operated in this way, and you could
>constrain the ways in which the program would develop from the
>initial behavior set. Sometimes the preset just said essentially,
>"go for it," with no initial behavior specified. The behavior set
>idea comes from Voyager, and the Leonardo article explains the
>basics.
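
One way to picture that preset scheme, as a Python sketch: each section specifies which soloist microphone to listen to and, optionally, an initial behavior set to develop from. Field names and values here are invented for illustration.

    import random

    PRESETS = [
        {"section": 1, "listen_to": "clarinet",
         "behavior": {"tempo": "slow", "density": "sparse", "register": "high"}},
        {"section": 2, "listen_to": "violin",
         "behavior": {"tempo": "medium", "density": "dense", "register": "full"}},
        # "go for it": no initial behavior specified
        {"section": 3, "listen_to": "trombone", "behavior": None},
    ]

    def choose_random_behavior():
        return {"tempo": random.choice(["slow", "medium", "fast"]),
                "density": random.choice(["sparse", "dense"]),
                "register": random.choice(["low", "full", "high"])}

    def start_section(preset):
        """Initialize the improviser from a preset; a None behavior means the
        program picks its own starting behavior and develops from there."""
        behavior = preset["behavior"] or choose_random_behavior()
        return preset["listen_to"], behavior

    for preset in PRESETS:
        print(start_section(preset))
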
>
>Tomorrow's concert, unlike the Carnegie Hall event, will be
>open-ended, with no score or presets, and one of the great
>improvisors of our time as a participant. This should allow Muhal
>Richard Abrams to develop some interesting duo textures with the
>computer pianist. I'm going to play also.
>
>As I indicated above, every algorithm is culturally mediated--or to
>put it slightly differently, musical computer programs, like any
>texts, are not "objective" or "universal," but instead represent the
>particular ideas of their creators. As notions about the nature and
>function of music become embedded into the structure of
>software-based musical systems and compositions, interactions with
>these systems tend to reveal characteristics of the community of
>thought and culture that produced them.
>For example, we were using MSP fiddle with some of Tristan Jehan's
>modifications to do pitch following for Virtual Concerto, but for
>tomorrow I'm going to use my old IVL machine, which in my view
>makes it easier to parse discrete "notes." The two approaches
>come from different networks of practice and reflect different
>audiences and constituencies. I read a paper which compared fiddle
>with hardware followers, indicating that fiddle picks up the pitch
>as quickly as the hardware machines do. My experience indicates that
>this is true, but pitch isn't the only important dimension I need.
>You need to be able to reliably parse duration as well to foster a
>kind of rhythmic discursivity, and we seemed to have a harder time
>doing that with fiddle. On the other hand, we were somewhat pressed
>for time, and fiddle didn't do what we needed right out of the
>box. Perhaps if we worked on it a bit (and I
>managed to learn a bit more about DSP algorithms) I could do better.
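
To make the duration-parsing problem concrete, here is a Python sketch of turning a raw stream of pitch-tracker frames into discrete notes with durations. The thresholds and field names are invented for illustration and say nothing about how fiddle or the IVL hardware work internally.

    def segment_notes(frames, amp_threshold=0.05, pitch_tolerance=0.5):
        """frames: list of (time_sec, pitch_midi, amplitude) estimates.
        Returns a list of (pitch, onset, duration) note events."""
        notes, current = [], None
        for t, pitch, amp in frames:
            sounding = amp >= amp_threshold
            if current is None:
                if sounding:
                    current = [pitch, t, t]            # start a new note
            elif sounding and abs(pitch - current[0]) <= pitch_tolerance:
                current[2] = t                         # extend the current note
            else:
                notes.append((current[0], current[1], current[2] - current[1]))
                current = [pitch, t, t] if sounding else None
        if current is not None:
            notes.append((current[0], current[1], current[2] - current[1]))
        return notes

    # Example: two frames of a sounding note followed by silence.
    print(segment_notes([(0.00, 60.0, 0.2), (0.05, 60.1, 0.2), (0.10, 60.0, 0.0)]))
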
>
>Best--
>
>George
>
>World Music Institute & Thomas Buckner present
>Interpretations
>New York's Home for the Avant Garde--Now in its 15th season!
>
>Thursdays at 8 PM
>Merkin Concert Hall, 129 W. 67th St. 212/501-3330
>
>May 13, 2004 George Lewis & Muhal Richard Abrams
>George Lewis (<http://music.ucsd.edu/public/fm_music_directory.php?cmd=fm_music_directory_detail&query_Full_Name=%2BGeorge%2BLewis&query_Active_Status=Faculty>),
>trombonist, composer, professor, and distinguished member of
>the AACM, is known for his diverse body of work which spans
>electronic and computer music, multimedia installations, notated
>forms and improvisation. Tonight the 2002 MacArthur Fellow will
>present a new interactive computer work for pianist Muhal Richard
>Abrams, who will perform alongside a digitally driven Yamaha
>Disklavier concert grand, as well as a duet between Abrams and the
>composer on trombone.

-- 

______________________________________________________________
Richard Zvonar, PhD     (818) 788-2202
http://www.zvonar.com
http://RZCybernetics.com



This archive was generated by hypermail 2b27 : Sat Dec 22 2007 - 01:46:01 EST