Subject: Re: CIMESP-Results Fwd:
From: Richard Wentk
Date: Sun Nov 06 2005 - 16:38:14 EST

At 19:34 06/11/2005, you wrote:
>Then prediction must be the wrong word. What you're talking about is
>something like a facilitation effect
>in which, eg, if I happen to be "in mind" of c then if I hear c# a certain
>inflection of c# takes place because of
>my memory. The c# gets "colored" in a (presumably) new way.

No, I think you're *still* confusing structural and perceptual prediction.
Saying c# is a certain kind of inflection is a structural analysis.

This is different to stopping a (hypothetical) CD at a certain point and
asking a group of listeners what they expect to hear next. When they hear
an ascending C major scale stopped at B, there's a clear prediction that
the next note will be the C above. A C# or an F# would be very surprising.
In fact it would be an unpleasant surprise because rather than extending
the predictive model - as an A might - it completely destroys it.

This can be tested empirically. If music has any internal consistency,
listeners will be able to make plausible guesses about what happens next.
Quite often they'll be right. How is this not predictive?
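As a rough illustration of how such a test could be modelled (this is a hypothetical sketch, not the actual experiment - the tiny corpus and pitch names are invented for the example), a simple bigram model trained on a few tonal fragments already assigns all its probability to C after hearing B, and none to C#:

```python
from collections import Counter, defaultdict

# Toy corpus of pitch sequences - a stand-in for a listener's
# prior exposure to tonal music (invented for illustration).
corpus = [
    ["C", "D", "E", "F", "G", "A", "B", "C"],  # ascending C major scale
    ["C", "E", "G", "C"],                      # arpeggio
    ["G", "A", "B", "C"],                      # cadential fragment
]

# Count bigrams: how often each pitch follows each other pitch.
bigrams = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict(prev):
    """Probability distribution over the next pitch, given the last one heard."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

print(predict("B"))  # → {'C': 1.0} - the model 'expects' C, never C#
```

A continuation the model has never seen (C# after B) gets probability zero, which is exactly the "unpleasant surprise" case: the prediction isn't extended, it's broken.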

There are different dimensions to this, because accuracy depends on both
the listener's and the composer's ability to build reliable models, which
depend very much on experience and training. Still, simplifying greatly
here, there seem to be three basic experiences involved -

1. Default development, where the music does exactly what a naive listener
might expect it to.
2. Extended development, where the model that's being built is extended in
an unexpected direction.
3. Destructive development, where the model is completely destroyed.

1 and 2 are experienced as composition - especially 2, which is highly
valued as an experience.

3 is usually experienced as unpleasant noise and labelled 'not music.' More
experienced listeners have more flexible and open-ended predictive ability
and may find their model making stretches enough to include developments
that more naive listeners would find painful. But it's always the same
process, happening at different levels of perceptual flexibility and
experience.

The other dimension is that predictability varies throughout a piece. After
the first note of Beethoven 5 you can't guess what's coming next. After the
end of the phrase the range of *likely* possibilities has narrowed
considerably, and you can be fairly sure you're not going to get a car horn
or a blues guitar solo. (Except in an EA cut-up piece.)

>That's the Leonard B. Meyer theory of yore.

And many others, who seem to have reached the same point independently.

The clincher here is the empirical obviousness. Clearly, with almost any
music, there will be expectations about the direction of future
development. Even when listeners can't articulate what they expect to hear
next, they will never have any trouble evaluating possible continuations as
more or less believable and 'right'. This can only happen if there's
some kind of predictive process happening.

>It's also hard to see what the theory is trying to explain. In my
>thinking, the shapes of music are
>analogues of other perceptions that relate to things in the real world. So
>it's not as though we're
>simply playing a game of "expect this" and "be surprised." In some way the
>Beethoven 5 theme
>does feel like some force "knocking at the door."

Well, arguably. Without wanting to be *too* tedious about the narrative
point again, these kinds of programmatic descriptions mainly seem to
perpetuate themselves through text. Would listeners associate the first
movement of The Planets with Mars if they hadn't been told that's what it
was?

Also, if you take away simple onomatopoeia in B5 bar 1 - it literally sounds
like a knock on a door - what's left?

>The theme of the Eroica begins with ease and concludes with a sense of
>trouble or concern. The demarche of the 7th is
>a complex conversation about dance, in which the players agree and develop
>a crazy elastic energy.
>I don't see where prediction plays any role in this.

I don't see where it doesn't, given that the key (?) point of tonal music
seems to be the creation of these kinds of extended metaphors by playing
with tension and release - which are really just synonyms for frustrated
and confirmed expectation.

I'm not suggesting that confounded expectation is the whole story. I'm also
not saying that tonality is the only tool these models use - it's just an
easy one to watch in action.

But the original point was about good composition vs bad composition, and
whether this has something to do with information density. And I really
don't see how it's possible to ignore the empirical evidence that some kind
of perceptual modelling process is involved, and that as information
density increases, the complexity of the models that are being built
follows suit, and that this seems to be considered a good and interesting
thing in music.
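The information-density point can be made concrete with Shannon surprisal. Assuming some made-up (purely illustrative) probabilities for what follows an ascending scale stopped at B, the expected continuation carries very little information, while the model-destroying one carries a lot:

```python
import math

# Hypothetical next-note probabilities after ...A, B in C major.
# These numbers are invented for illustration, not measured data.
p_next = {"C": 0.70, "A": 0.15, "G": 0.10, "F#": 0.04, "C#": 0.01}

def surprisal(note):
    """Information content in bits: -log2 P(note). Rarer notes carry more bits."""
    return -math.log2(p_next[note])

for note in ["C", "A", "C#"]:
    print(f"{note}: {surprisal(note):.2f} bits")
# C: 0.51 bits, A: 2.74 bits, C#: 6.64 bits
```

On this picture, "information density" is just average surprisal: music that only ever does the 0.51-bit thing is dull, music that is all 6.64-bit events reads as noise, and the interesting middle is exactly the "extended development" case above.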


This archive was generated by hypermail 2b27 : Sat Dec 22 2007 - 01:46:14 EST