Re: Fwd: from Eliot who ...


Subject: Re: Fwd: from Eliot who ...
From: Eliot Handelman (eliot@generation.net)
Date: Sun Feb 06 2005 - 23:43:47 EST


Kevin Austin wrote:

>
>
> While a computer model may be of interest, my recent experiences with
> victims of stroke and other forms of brain damage seem to reveal other,
> perhaps more direct methods. I have met people who ... stutter badly
> while speaking but can sing fluently; ... have lost (interest in)
> speech but sing along (in foreign languages!) with opera; ... who have
> lost the ability to "make sense of" music, etc.
>
> These are complex issues and are ripe for detailed research.

 Isabelle Peretz has written extensively and interestingly about
various bizarre music-related disorders. The last I heard was someone
with perfect pitch who was completely tone-deaf -- if you can figure
that one out. When he sang solfege he was dead on with each note. When
he sang a tune without solfege syllables, it didn't even remotely
resemble the tune, without his being aware of this.

This kind of work is invaluable in tracking the boundaries of the music
module, assuming there is one. In this case there may or may not be a
complex relation between speech and music. But she, and we, don't
really know what's going on there.

But what does this really get us, musically? Does it even start to trot
down a path leading to the road whereby
music may itself be elucidated?

My goal is to increase my knowledge and understanding of music. The
thing that seems to bear most directly on that knowledge is music
itself, not people with various disabilities.

It's hard for me to believe that music has any kind of
spatial inherence -- so studies about space and music are not directing
themselves to whatever may seem to inhere
in music. There are much more basic things that should be studied, eg
rhythm.

Computer studies aren't just "of interest" -- they're essential, because
there's no other way to begin to grasp what music is, both past and
present, except on an inarticulable individual level probably best
expressed as music, which gets the "science of music" no further. Of
course we have a long way to go until computer studies actually start
to be elucidating. This strongly implies, for me, the presentation of
new theoretical ideas about how music could be understood, since the
older framework is, explanatorily speaking, a complete failure. What's
called for here is a revolution in the technical conceptualization of
music that may come by testing our ideas before declaring them to be
theories of music. It's time to build things. So it's not merely that
"computer models are of interest"; rather, the expectation is that such
activities have revolutionary potential, because a whole new branch of
music theorization is immanent in this work.

 

>
>
>
> Spatialization has a pronounced impact on streaming and segregation in
> some circumstances. The ensemble performers experience is spatial by
> its nature. Today with fewer students learning music through
> ensembles, I have found that the "return of space" can assist in
> learning to hear and discriminate.

ok.

>
>
>>> My first application in this will be with four (or more) part
>>> harmonic dictation where each voice can be heard from a separate
>>> speaker ...
>>
>>
>> What service is being rendered by this?
>
>
>
>
> Anyone can move a fader, but the voice (as I propose this) would need
> to have a separate 'real' channel. I come to my positions from a
> couple of decades of teaching a couple of thousand people.

 Where's the benefit?

>
>
>> Anyone can turn up and down a slider on a midi sequencer and isolate
>> a voice. I don't see the advantage of placing harmony exercises in
>> space.
>
>
>
>
> (I haven't been in his seminar but wonder about the use of the word
> "hear" in this context.) How neurophysiology deals with the
> demodulation of a multiplexed signal supports the idea that 'sounds'
> do not carry "information" without the necessary perceptual decoders.

We guessed this around the time of John Locke.

>
> Modern theories of multiplexing grew out of Information Theory some 50
> years ago, but are well known in practice dating back centuries -- the
> well-known (and oft discussed) compound melody (Bach Prelude in C major).

I'm not sure what you're saying. All music is compound -- which
compoundness of the piece are you referring to?
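The compound-melody idea being debated here can at least be made concrete in code. Below is a minimal sketch, not anything proposed in this thread, of "demultiplexing" a single interleaved line into voices by pitch proximity, in the spirit of auditory stream segregation. The function name, the leap threshold, and the note list (a register-alternating figure loosely like the texture of the Bach C major Prelude) are all illustrative assumptions:

```python
# Minimal sketch: split a "compound melody" (one interleaved line of
# MIDI pitches) into streams, assigning each note to the stream whose
# last pitch is closest, provided it lies within a leap threshold.

def segregate(pitches, max_leap=7):
    """Greedy stream segregation by pitch proximity (illustrative)."""
    streams = []
    for p in pitches:
        best = None
        best_dist = max_leap + 1  # anything farther starts a new stream
        for s in streams:
            d = abs(s[-1] - p)
            if d < best_dist:
                best, best_dist = s, d
        if best is None:
            streams.append([p])   # no stream close enough: new voice
        else:
            best.append(p)        # continue the nearest voice
    return streams

# A line alternating between low and high registers:
line = [48, 76, 50, 77, 52, 79]
print(segregate(line))  # → [[48, 50, 52], [76, 77, 79]]
```

Even this toy version shows why "which compoundness?" is a fair question: the answer depends entirely on the threshold and the grouping rule one assumes.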

>
>
>> Al Bregman contended in his perception seminar that we never hear
>> more than one line at once -- we can shift our attention as rapidly
>> as you like but the operation is still serial.
>
>
> There are many implications here possibly meaning that while the
> incoming stream is multiplexed, is memory also multiplexed, or is it
> partitioned?

The probable word is "schematized." Note that we can't use fancy
sounding computer terminology to describe this part
of the mind, because computers don't know how to do this yet.

>
> One way of avoiding the 'partitioning' of memory into several
> simultaneous memory areas (which then introduce problems of keeping
> them sync'ed together -- but the loss of sychronization is well known
> within the study of streaming)

 You're talking about voice-leading or recognition, not streaming,
which implies a kind of reflex response over which you have no control.
Confucius thought it was important, or vital, to call things by their
right names. Why don't you talk about "hearing out the individual
voices," if that's what you mean?

> is to propose that memory itself is multiplexed ... possibly leading
> to (infinite?) regression (unless there are phase-locked markers -- an
> idea that could be supported in the parsing of verbal language).

"multiplexed" is probably much less that whatever the real answer is?

>
>
>
>> What happens if we happen to know exactly what's going on in the
>> inside of a 4-part chorale? The central problem for me is "what is a
>> line from a musical point of view," not "separate these things about
>> which I have nothing to say."
>
>
> For me, "line from a musical point of view" is a language-specific
> issue. My interest in the development of ear-training is at a more
> fundamental level than dealing with language constraint.

In brief, whatever a line is is that which coheres in time and with
respect to things that are no longer heard. It involves the induction
of a schematic framework that absorbs parts in the interest of an
emerging whole. The "whole" is a kind of pattern in which many other
patterns have been absorbed. I don't know what makes you think this is
about language.

Maybe you think musical point of view means "music quite jolly just
about now."

I think the problem here is that we think about music in fundamentally
different ways. To me you've put many things ahead that should be
behind, but maybe you're right if you're achieving desired effects with
students.

My view of how music should be taught is that our senses awaken to our
understanding, whereas it seems that you want some kind of VR setup,
which seems to me a very muscular approach.

>
>
>
>
>> Is this resentment, Kevin? Is the world not big enough for this other
>> and better piece?
>
>
>
> Well of course this is about the allocation of resources! It's been a
> major theme of mine for more than 5 decades, and it probably won't
> stop now. If the artist can feel free to be critical of society, can
> someone not be critical of the society that selects? Maybe there were
> no other projects at that time that would have benefited from an extra
> $25,000 (or more).
>

But I found that it was a very good project and it deserved the grant.
In a MoMA setup a work must have a very strong conceptual dimension.
Cardiff's piece had this in a way that regular EA, however technically
good, usually completely lacks.

If there's anything like ear-training in art school, I'd suggest it was
concept studio with crits.

-- eliot



This archive was generated by hypermail 2b27 : Sat Dec 22 2007 - 01:46:06 EST