How does the AOM group of performers work together?
- hold rehearsals together in Second Life
- create new interfaces
- test the interfaces together
- discuss how to organise the scenography for a piece
- communicate on the AOM mailing list about ideas and the organisation of the performances
- publish documentation about AOM in different networks
During the rehearsals, which take about two hours per week, we both discuss and play. We talk about and play old and new pieces at different locations to try them out for performances in Second Life. We also discuss plans and elaborate together on future performances and on ideas brought in by a composer, whether a member of the AOM group or not.
(Image: a rehearsal)
To make the scenography for a piece we work together on choosing or building a specific landscape architecture. For a piece we may also devise particular relations between a place and the architectural objects inside it to play with, in order to obtain a specific choreography with avatars. Sometimes we create clothes too. We can also imagine a piece by testing it in different places in Second Life. We test all these parameters depending on whether we want the environment and the playing to interlock, or whether we are looking to create a surprise between them. Many parameters are chosen to be included in our playing, but others have to be integrated by necessity, for instance the lag (time delay) or other script restrictions in different places. We work with this lag (streaming audio and 3D interaction delays) and compose pieces to play with it.
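As a hedged illustration, LSL exposes a region's time dilation and frame rate, which a script could report before a performance to gauge the local lag. This is a minimal sketch of such a probe, not an actual AOM tool:

// Minimal lag probe (illustrative sketch, not an AOM script).
// Reports the simulator's health every ten seconds.
default
{
    state_entry()
    {
        llSetTimerEvent(10.0);
    }

    timer()
    {
        // Time dilation of 1.0 means no slowdown; lower values mean more lag.
        llOwnerSay("Time dilation: " + (string)llGetRegionTimeDilation()
                   + "  Region FPS: " + (string)llGetRegionFPS());
    }
}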
During performances AOM follows scores or a conductor avatar, and in some pieces there is also improvisation. Some pieces are performed as a series of investigations in new surroundings (called 'Orchestral investigations'). We also prepare suggestions for choosing camera points of view during a performance, to show the piece to the audience.
An example of one of these camera scripts (written by a member called wirxli flimflam, posted to the AOM mailing list on 14 November 2007):
This post here is just for whoever happens to be
the RL camera avatar for Wien Modern on the 24th. It might be
Maximillian or someone else who is there locally. Anyways, here is the
best way for the RL camera in Vienna to get the most out of Riesenrad:
1- Ensure that the camera avatar is wearing a flight feather (I can
pass one of these around) as the wheel has been installed in a space
with strong gravity.
2- The best place to position the cam avatar would be somewhere
floating near the centre of the wheel but not so close as to knock the
conductor off the perch ;-)
3- Using the alt+arrow keys, the camera can zoom around to different
chairs on the wheel and zoom into the local sounds of the various
performers...zooming out gives the audience the total orchestrational
mix of the ensemble and so even though it sounds the most dense, any
sort of structural detail or tonal development will not be heard unless
zooming in on the flutes or cellos. I suggest zooming back and forth
between the macro view and the local view of each performer over the
course of the composition's duration. It is ok to enable the doppler
effect on the camera avatar's sound preferences so it will make the
zooming actions produce cool warping sounds :-)
4- The conductor also plays sounds so that drone sound you hear is what
the conductor is playing... if the drone timbre sounds too repetitive and
minimalist after prolonged exposure, it is recommended that the camera
zoom away from the conductor and towards each
individual player.
5- The conductor occasionally stops conducting and the performers will
try to stop performing until the conductor continues conducting
again... The composition does not end until the conductor says /THE END.
How does AOM create a HUD interface to play a piece?
With AOM, the creator of an interface communicates the ideas behind it with the others, composing at the same time with architecture and possible avatar choreography. It is a collective process, recognisable in the lengthy discussions during rehearsals and on the AOM mailing list about how to implement different things technically, aesthetically and also conceptually.
To create an interface, a member proposes an idea that also includes:
- short samples of sounds,
- different avatar movements for the various sounds (one way to pair these is sketched below)
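As a minimal sketch (not an actual AOM script), such a pairing of samples and movements could be kept in an LSL script as parallel lists. The sample and animation names here are invented placeholders:

// Illustrative sketch: parallel lists pairing sound samples with
// avatar movements, as a proposal might specify them.
list gSounds = ["sample_a", "sample_b", "sample_c"];
list gAnims  = ["anim_wave", "anim_spin", "anim_bow"];

default
{
    state_entry()
    {
        // Report each pairing so the ensemble can review it.
        integer i;
        for (i = 0; i < llGetListLength(gSounds); ++i)
        {
            llOwnerSay(llList2String(gSounds, i) + " -> " + llList2String(gAnims, i));
        }
    }
}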
When the interface begins to take shape, at least in the head of its creator, the next step takes place: the technical programming of the so-called HUDs (Heads Up Displays). Usually this includes filling in specifications for how the HUD shall control sounds and avatar movements.
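A minimal sketch of that kind of HUD logic might look like the following, assuming the sound clips sit in the HUD prim's inventory. This illustrates the idea rather than reproducing an actual AOM HUD:

// Illustrative single-button HUD: each touch steps to the next
// sound sample in the prim's inventory and plays it.
integer gIndex;

default
{
    touch_start(integer num)
    {
        integer count = llGetInventoryNumber(INVENTORY_SOUND);
        if (count == 0) return;
        string sample = llGetInventoryName(INVENTORY_SOUND, gIndex % count);
        llPlaySound(sample, 1.0);
        ++gIndex;
    }
}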
(Images: two modes of one of our HUD interfaces, the default mode and the playing mode)
The HUDs are programmed in LSL (Linden Scripting Language). The first HUDs made for AOM were simple and used only a single user interface, with playing buttons (a screen interface) visible only to the avatar orchestra player. Later HUDs also include avatar animations and a wearable soundsack visible on the avatar's back.
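A hedged sketch of how such a later-style HUD could also animate its wearer: the script requests animation permission when attached, then triggers a named animation on touch. The animation name "aom_gesture" is an invented placeholder:

// Illustrative sketch: a worn HUD that animates its wearer.
default
{
    attach(key id)
    {
        // Ask for animation rights when the HUD is put on.
        if (id != NULL_KEY)
        {
            llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
        }
    }

    run_time_permissions(integer perms)
    {
        if (perms & PERMISSION_TRIGGER_ANIMATION)
        {
            llOwnerSay("HUD ready: animations enabled.");
        }
    }

    touch_start(integer num)
    {
        if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
        {
            llStartAnimation("aom_gesture");
        }
    }
}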
(Images: two screen interfaces, the Fragula screen interface and the Fadheit screen interface)
These interfaces make it possible to play with avatars in the virtual world like aviaphones or onomatophones (see glossary). We also want to create other pieces for mixed reality live performances (see in progress).