Re: Le matériau Sonore, hidden place


Subject: Re: Le matériau Sonore, hidden place
From: Eliot Handelman (eliot@generation.net)
Date: Fri Sep 16 2005 - 13:36:18 EDT


Morgan Sutherland wrote:

>How so? In the realm of synthesis and processing? Or psychoacoustics?
>Do we need more synthesis techniques, more deeply modeled ones? Detail
>or new ideas?
>
>On 9/15/05, Michael Gogins <gogins@pipeline.com> wrote:
>
>
>>
>>We won't use the computer to its potential until we know a LOT more about music and music theory -- a lot lot more.
>>
>>

I guess I'll chirp in on this one.

Mike is right about the problem of the POTENTIAL of the computer. The question is how far you're willing to think ahead. Past query-by-humming? Make up an EA sound and search the web for music in which something that "sounds" like it occurs? Find a piece that is more or less made of the same type of sound, with highly similar structure? Maybe some classical Turkish piece with a similar kind of rhythmic energy? Maybe the pieces you hit on were generated by computer? We already have radio stations on the web that broadcast non-stop streams of algorithmically generated music, and this tendency is not likely to decrease.

Right now you make a piece and you want to find out if it's any good. So you write to Prof. X at Slugsville University, but he's too busy to answer. Let's say instead you use Google's "critacoocle" feature, a vast machine-learning network that knows all the music that exists and who listens to what. It can display the margin of interest your piece will generate and evaluate the community that listens to such music, too. You can find out what ideas they've been influenced by, based on an analysis of their online presence, relating it to a database of world thought and inferring what sorts of pharmaceuticals they're involved with, what worlds they hang out in, etc. It can also tell you how to tweak things so that your music can migrate to a different, possibly broader, community of interest. Of course, at this point everyone is producing music and the bots have so entwined their presence that chances are the only thing that listens to your music is a computer program. But that's ok, because over the past few weeks you've been in love with a presence that really understands you and does not bs in the matter of evaluating your claims to musical creativity.

Google doesn't have this feature yet, but I know they're interested in music-database-query people. My point is that the computer is not just a way of GENERATING noise but also a way of formulating critical responses to noises. The computer's potential is about LISTENING to music. There's huge interest in this, but there's a great deal of underlying theory that has yet to be developed. It may happen that the machines can develop these theories for us.

 
Problems, in no particular order:

1. A program that decides whether your EA piece has a publicly acceptable noise floor and no edit clicks (see the sketch after this list).
2. A program that uses different attentional models to decide whether enough or too much is happening.
3. A program that discovers new physically playable instruments: a new kind of string instrument that takes advantage of the bone structure of someone's hand.
4. A program that discovers new complex progressions with new tuning systems, and informs you only if it thinks you might be interested.
5. A program that invents dramatic scenarios and creates Hans Zimmer-like music that articulates the psychological tension in those scenarios.
6. A program that improvises diabolical music on solo violin without falling back into older styles.
7. A program that explains in technical language what is happening in a Beatles tune.
8. A program that creates a new dance sensation. The guy/gal who wrote it gets famous for 15 minutes (to 15 other people)!
9. A program that plays piano like Rachmaninoff. "Such liberties!" Good at improvising new Mozart concerto cadenzas.

So throw it all together. The computer will build new instruments, simulate new composers who play these instruments, give them a life history and have them express themselves in new tortured ways, provided that it finds the relevant interested communities, who could be other programs.

If you ask me this sounds like great fun and is worth working towards.

All of these things involve some basic understanding of what music is and how we listen to it.

In part, the answer to these things is a further question: what does the brain do when it enjoys music? Wittgenstein, always ahead of his time, asked this question somewhere. We now know that it's doing SOMETHING, i.e., listening to music does involve some set of physiological processes. Can we grow new ones?

Another big part is just getting a program to actually compose music,
where by "compose" I mean
"the program has some understanding of what it is doing from the point
of view of how this music
SOUNDS." When we speak of "algorithmic" music, we tend to mean a
process that produces a
signal that someone must evaluate as music. AI means that the computer
is participating in this evaluation.
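To make that distinction concrete, here's a toy sketch (the names and the scoring rule are invented for illustration): the "algorithmic" part is generate(), and the "AI" part is that the program's own listener model, not a human, does the evaluating.

    import random

    def generate():
        # Stand-in generator: a random phrase of MIDI pitches.
        return [random.randint(48, 84) for _ in range(16)]

    def evaluate(phrase):
        # Stand-in listener model: penalizes large melodic leaps. A real
        # system would embody an actual theory of how the phrase SOUNDS.
        leaps = [abs(a - b) for a, b in zip(phrase, phrase[1:])]
        return 1.0 / (1.0 + sum(l for l in leaps if l > 7))

    def compose(trials=200):
        # Generate many candidates; keep the one the evaluator prefers.
        return max((generate() for _ in range(trials)), key=evaluate)

    print(compose())

The interesting research is, of course, all hidden in evaluate().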

How the music sounds is also about making a model of what, emotionally,
this music might FEEL like. Does
the music narrate anything, given that our brains are big narration
machines?
 
Yet I spent about an hour last night wiping downloaded papers about different aspects of the problems above off my desktop. There's a lot of work out there suffering from a lack of basic ideas. One of the most basic problems is that too many computer scientists with no knowledge of music are working on these things. We need musically alive people to think about the hard problems.

-- eliot

 


