Subject: Some clarification (was Re: New wavefield synthesis system -- holosonics?)
From: Philippe-Aubert Gauthier (Philippe-Aubert.Gauthier@USherbrooke.ca)
Date: Mon Jul 26 2004 - 09:00:04 EDT


Hello,

Here are some clarifications about Wave Field Synthesis and the IOSONO products,
which I worked with in May of this year. See my comments in the text below.

Quoting Kevin Austin <kevin.austin@videotron.ca>:

> My uninformed guess is that it works on a principle something similar
> to holography (holosonics).

We usually speak of holophony (see M. Jessel, who introduced "holophonie" in
1973), a theoretical idea that holds for any wave phenomenon. Put simply: you
can replace any real acoustic place/scene/source with a single surrounding
acoustic source around the audience (which should be a continuous surface).
From the holophony idea you can derive, in a few lines, the Ambisonics
equations or the Wave Field Synthesis operators, which are >>BOTH<< specific
applied cases of holophony. Ambisonics covers a local (low order) to larger
(higher order) volume, while WFS is intended for a large, moving audience
surrounded by the loudspeaker layout.
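
For those who want the formal statement behind this (my wording, not Jessel's):
holophony is usually grounded in the Kirchhoff-Helmholtz integral, which says
that the pressure anywhere inside a closed surface S can be rebuilt from a
layer of monopoles and dipoles distributed on S:

P(\mathbf{x},\omega) = \oint_S \left[ G(\mathbf{x}|\mathbf{x}_0,\omega)\,\frac{\partial P(\mathbf{x}_0,\omega)}{\partial n} - P(\mathbf{x}_0,\omega)\,\frac{\partial G(\mathbf{x}|\mathbf{x}_0,\omega)}{\partial n} \right] \mathrm{d}S(\mathbf{x}_0)

(sign conventions vary with the choice of normal). Here G is the free-field
Green's function. Ambisonics and WFS follow from different simplifications of
this integral: a spherical-harmonics expansion in one case, a reduction to a
linear array of monopole-like loudspeakers in the other.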

> It is likely 'size dependent', with
> different 'mixes' needed for different size spaces. As holography
> does not 'replicate' the object, but rather the wave patterns that
> would be created if the object were to be there, the creation of
> instantaneous sound pressure patterns (rather than the waves
> themselves) could be the objective.

Exactly right about the objectives. But you don't need to change your mixes,
because there are no mixes, just a virtual scene description containing virtual
sources, each fed by a digital audio stream (live or prerecorded). Using the
Fraunhofer software interface, you can scale your virtual scene to adapt to ANY
loudspeaker layout (intended for WFS). You have a scene description (which you
can create with the interface or with your "notepad") and a file describing the
WFS loudspeaker layout. WFS computes everything between your scene description
and your loudspeaker layout.
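
To make that rendering step concrete, here is a minimal sketch of the idea in
Python (this is NOT the Fraunhofer/IOSONO software, just the textbook
delay-and-gain picture of WFS for a point source behind a linear array; the
function names and the 1/r gain are my own simplifications, and real WFS
driving functions also include a spectral pre-filter):

import numpy as np

def wfs_drive(signal, fs, speaker_xy, source_xy, c=343.0):
    """Return one delayed and attenuated copy of `signal` per loudspeaker."""
    speaker_xy = np.asarray(speaker_xy, dtype=float)   # (N, 2) positions in metres
    source_xy = np.asarray(source_xy, dtype=float)     # (2,) virtual source position
    dist = np.linalg.norm(speaker_xy - source_xy, axis=1)
    delays = np.round(dist / c * fs).astype(int)       # propagation delay in samples
    gains = 1.0 / np.maximum(dist, 0.1)                # crude 1/r attenuation
    out = np.zeros((len(speaker_xy), len(signal) + int(delays.max())))
    for n, (d, g) in enumerate(zip(delays, gains)):
        out[n, d:d + len(signal)] = g * signal         # shift and scale the source signal
    return out

# Example: 24 loudspeakers spaced 15 cm apart, virtual source 2 m behind the array.
speakers = [(0.15 * n, 0.0) for n in range(24)]
drive = wfs_drive(np.random.randn(4800), 48000, speakers, (1.7, -2.0))

The point is that the "mix" never mentions loudspeakers at all: you only give
the virtual source positions, and the per-speaker delays and gains are derived
from the layout file.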

>
> The example cited of a stone dropping into a pond may be a useful
> place to start. Drop the stone into a still pond, and take a picture
> of it. What exists is as wave pattern -- no velocity, simply
> displacement.

Hmm, you still create displacement, velocity, acceleration, pressure, density
and even temperature oscillations. The idea is to replicate the proper pattern
(including amplitude and phase) as a function of time and space.
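
To make "the proper pattern" concrete: for a virtual point source at x_s fed
with a signal s(t), the target field in the listening area is simply the
free-field monopole (standard textbook expression, in my notation):

p(\mathbf{x},t) = \frac{1}{4\pi\,|\mathbf{x}-\mathbf{x}_s|}\; s\!\left(t - \frac{|\mathbf{x}-\mathbf{x}_s|}{c}\right)

i.e. the correct amplitude decay and the correct arrival time at every point
and every instant, not a frozen snapshot.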

> As this surface is representative of the energy passing across it,
> this 'displacement' of a two-dimensional surface could be emulated by
> a surface that has small piston rods underneath it, and the rods
> would move up and down, creating the effect of the wave moving across
> the water (transverse wave motion if I recall). An animated
> explanation is at
> http://www.kettering.edu/~drussell/Demos/waves/wavemotion.html ...
> and this is taken from the excellent site:
> http://www.kettering.edu/~drussell/demos.html

Dr. Russell's animations are great. But that's not exactly what happens with
WFS: you want to create a wave field (which is not like water waves) in a
horizontal plane, bounded by a linear loudspeaker array that is driven to
recreate the field inside the plane defined by that array (I hope this is
clear?).
 
> In air, the soundwave moves by longitudinal vibration. If the system
> (by using 400 small drivers) is able to emulate the kind of
> displacement which causes the brain to interpret the information
> correctly (as in a hologram), 'how' it is done is less important (to
> the listener). A form of 'holosonics'.

WFS is really about physical reproduction of the wave field. It is surely not
perfect from a physical point of view (room coloration matters), but it gives
very, very nice results from a perceptual point of view. There is nothing
really perceptual in the basis of WFS; you make perceptual approximations when
you (as a composer or mixer) create a mix. As an example, you will use a
limited number of plane waves to recreate the sensation of a diffuse sound
field ... even if it is not physically diffuse.
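
A small toy illustration of that last trick (my own example, not anything from
the IOSONO software): you can approximate a diffuse field over a region by
summing a handful of plane waves from random directions with random phases.

import numpy as np

def pseudo_diffuse_pressure(xy_points, freq, n_waves=16, c=343.0, seed=0):
    """Sum n_waves random-incidence plane waves at the given field points."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * freq / c                        # wavenumber
    angles = rng.uniform(0, 2 * np.pi, n_waves)     # random incidence angles
    phases = rng.uniform(0, 2 * np.pi, n_waves)     # random phases
    xy = np.asarray(xy_points, dtype=float)         # (M, 2) listening points
    p = np.zeros(len(xy), dtype=complex)
    for a, ph in zip(angles, phases):
        kvec = k * np.array([np.cos(a), np.sin(a)])
        p += np.exp(1j * (xy @ kvec + ph))          # unit-amplitude plane wave
    return p / np.sqrt(n_waves)

# Example: complex pressure at two points in the listening plane at 500 Hz.
print(pseudo_diffuse_pressure([(0.0, 0.0), (0.3, 0.1)], 500.0))

With enough plane waves the statistics at the listener start to resemble a
diffuse field, which is all perception needs, even though the physical field is
still a finite sum of deterministic waves.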

>
> >Judging from what was said, each sound must play on a separate
> >channel which can be localized independently. I wonder how many
> >channels it supports ... if you could have a sound object created in
> >a physical space and walk around it, and emulate distant sounds from
> >outside the room. Or create heirarchal structures, for example, not
> >just the position of the horse, but the position of each of the
> >horse's hooves, the horse's breath, etc, and then do that with maybe
> >say, a hundred horses moving across a battlefield..

You can use an incredible number of virtual sources; this is only limited by
your CPU power. The Fraunhofer demo includes something between 50 and 100
virtual moving (focused and non-focused) sources. Things move so fast and there
are so many sources that you can't catch everything. From personal experience,
88 loudspeakers + 50 virtual sources gives you a very dense result.

Hope this helps,

%====================================================================%
% Philippe-Aubert Gauthier, B.Ing, M.Sc. %
% Étudiant au doctorat en reproduction de champs acoustiques %
% %
% GAUS (Groupe d'Acoustique et de vibrations de l'Université de %
% [ Sherbrooke) %
% CIRMMT (Centre for Interdisciplinary research in Music, Media %
% [ and Technology) %
% %
% http://www3.sympatico.ca/philippe_aubert_gauthier/acoustics.html %
%====================================================================%


