Subject: Re: robot com-posers
From: Kevin Austin (kevin.austin@videotron.ca)
Date: Thu Feb 17 2005 - 10:48:35 EST


I think this is a slight simplification that obscures some basic
elements making this research difficult.

One could talk about the "stream of data", and how to deal with it.
Adopting the ASA terms, the first thing to be done is to somehow
'segment' the data -- it has to be taken in 'chunks'. With a message
like:

00111010010010111010010101001001110100101001010010101111011101010100
01001000101111010101101010101001010101001011010101010100101010001011
11111010011010010010110100101001001001001010111110100100010000001010

The individual digits reveal (almost) nothing. One needs to determine
where the 'meaningful segmentation' is. In written language, this is
shown by spaces. The segmentation can evan allouw errers tu cum inta
thur streem anits pozible te figur ou wots bein rittern, but without
the segmentation markers ...

thaholldamtingizsamushhmahdifelultergittaideeoffaztsgoawnaughtbeesaez.
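A toy sketch in Python makes the point (the chunk widths here are
arbitrary guesses; nothing in the stream itself tells us which, if
any, is 'meaningful'):

    # A minimal sketch: the same bit stream segmented at different
    # chunk widths yields entirely different symbol inventories.
    from collections import Counter

    stream = (
        "00111010010010111010010101001001110100101001010010101111011101010100"
        "01001000101111010101101010101001010101001011010101010100101010001011"
        "11111010011010010010110100101001001001001010111110100100010000001010"
    )

    def segment(bits, width):
        # Cut the stream into fixed-width chunks -- one guess at
        # where the segmentation lies.
        return [bits[i:i + width] for i in range(0, len(bits), width)]

    for width in (4, 8):
        print(width, Counter(segment(stream, width)).most_common(3))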

Eliot's work (like that of others who start from MIDI as being
'music') accepts that the segmentation occurs at the level of the
"note". Ea composers live in the sub-particle world as well.

Voice recognition has come a long way when the source is 'clean',
but if there is background noise, or another voice present, there is
the much more complex problem of sorting out whether a "chunk" of
sound is "integrated" or requires "segregation". A bell with a
complex harmonic spectrum 'functions' as an integrated sound, as do
the low notes of the piano.

Here is a little text to demonstrate the problem ... you need to
figure out whether there are two different texts (such as two
voices), or one text with noise ... (in which case one works to
eliminate the noise -- a kind of karaoke in reverse).

ThThiDissisisiz inisen fconfacact ttohred te teexxdets tht hat
afistrs b heeave betin cerst ton nl putgethhded ioed witser wierrers
aex siikento tth the texts nd its r chooo usons ofen sube melhe chch
thamoget esen tplit thenuf letxt we ter epeet s and tth da choz. reed
toein texopehe fexcarry lots citittsa repiaen nrry repeot ttitions of
letlettters. eets ers.

(* see bottom)

Given English-language frequency tables you can probably figure out
that this is not standard English, but is possibly something about
"text", due to the frequency of the "ex" and "xt" combinations, and
towards the end the ideas of "repetition" and "letters" begin to
emerge as one moves to third and fourth order Markov chains. The
first string seems to contain elements of "This" tied to "is"
followed by "in" and "fact", and at this stage, the Markov chains
work at the level of syntax ... something like ... "This is in fact
..." "text" ... "choose ..?" ... "text" ... "repeat" etc.
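A rough sketch of the first step (the reference text here is a
stand-in; a real frequency table would be built from a large
corpus):

    # Score a string against second-order (letter-pair) statistics;
    # low scores flag non-standard English.
    from collections import Counter

    # Stand-in reference corpus -- any large body of English would do.
    reference = ("this is in fact a text chosen such that the text "
                 "carries repetitions of letters ") * 20

    def bigrams(s):
        s = "".join(c for c in s.lower() if c.isalpha() or c == " ")
        return [s[i:i + 2] for i in range(len(s) - 1)]

    counts = Counter(bigrams(reference))
    total = sum(counts.values())

    def englishness(s):
        # Mean reference frequency of the string's letter pairs.
        grams = bigrams(s)
        return sum(counts[g] for g in grams) / (total * len(grams))

    print(englishness("this is in fact three texts"))   # higher
    print(englishness("ThThiDissisisiz inisen fconfacact"))  # lower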

But this depends upon prior knowledge of several aspects of the
language involved, at both the vocabulary and syntax levels. This has
required the development of a relatively robust theory ... one which
is able to describe large portions of the existing body of material,
and that can be somewhat predictive of what is 'possible' within the
'core' of the language.

A sense of this can be found in looking at English as a Second
Language dictionaries, where 10,000 English words are defined, but
the reader needs to know only 2,000 words: all of the entries are
defined with the restricted vocabulary of those 2,000 'core' words.

For tonal music, there are only a handful of 'core' rules related to
the Circle of Fifths and 'scale degree resolution'.
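One of those rules is simple enough to state in a few lines -- the
whole Circle of Fifths falls out of repeatedly stepping up a perfect
fifth (7 semitones) and reducing mod 12:

    # Generate the Circle of Fifths from one rule:
    # step up a perfect fifth (7 semitones), reduce mod 12.
    NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

    pc, circle = 0, []
    for _ in range(12):
        circle.append(NAMES[pc])
        pc = (pc + 7) % 12

    print(" -> ".join(circle))
    # C -> G -> D -> A -> E -> B -> F# -> C# -> Ab -> Eb -> Bb -> F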

Vocabulary segmentation (using the note and MIDI model) is not a
major issue. The (metric) model also proposes standardized ways of
generating vocabulary groupings (derived from concepts of harmonic
rhythm), and these concepts can start to generate basic 'core'
analyses that work for over 90% of western music from 1600 - 1870.
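As a sketch (the data is hypothetical -- notes as (onset-in-beats,
MIDI pitch) pairs), the note/metric model hands us the grouping
almost for free:

    # With notes as (onset_in_beats, midi_pitch) pairs, grouping by
    # bar is trivial -- the "note" model supplies the segmentation.
    notes = [(0.0, 60), (0.5, 64), (1.0, 67), (4.0, 65), (4.5, 69), (5.0, 72)]
    BEATS_PER_BAR = 4

    bars = {}
    for onset, pitch in notes:
        bars.setdefault(int(onset // BEATS_PER_BAR), []).append(pitch)

    print(bars)  # {0: [60, 64, 67], 1: [65, 69, 72]}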

But this doesn't work on the 'sound', which is a large part of the
work in ASA and psychoacoustics in general. Given the central role of
the individual's central processing (brain and/or mind) in ea, it
seems that a solid introduction to psychoacoustics is important in
the field of electroacoustic studies.

One does not seriously study the literature of western music without
a firm grounding in notation and concepts of meter, harmony (figured
bass, chords and progressions), and their interaction in melody and
counterpoint.

Often hypothetical or "what-if" scenarios are introduced, but the
question I raise (and delimit) is one of the practice as reflected in
the work of hundreds of thousands of people, not in the musings of an
individual (even if it is Cowell, Partch or Stockhausen).

My consideration here is not about "possible" musics, but about
existing practice.

Best

Kevin

(* texts)
(1) This is in fact three texts that have been put together with the
texts chosen such that the texts carry repetitions of letters.
(2) This second text has been chosen not too unlike the first to be
melded into the chosen text with repetitions of letters.
(3) Dis iz infac de firs tex simplied wit errers and its repeeted too
get enuf letters and da chozein tex carry lotsa repeets.
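For the curious, the melding itself takes only a few lines of logic:
randomly interleave the three streams character by character (texts
abbreviated here):

    # Randomly interleave three texts character by character,
    # roughly reproducing the garbled demonstration above.
    import random

    texts = [
        "This is in fact three texts that have been put together",
        "This second text has been chosen not too unlike the first",
        "Dis iz infac de firs tex simplied wit errers",
    ]
    streams = [list(t) for t in texts]

    out = []
    while any(streams):
        s = random.choice([s for s in streams if s])
        out.append(s.pop(0))

    print("".join(out))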

At 18:19 -0500 2005/02/16, n_kondon@alcor.concordia.ca wrote:
>Can you teach a computer to recognise patterns? It has been done for
>speech/writing to a certain degree.


