An auralized soundfield, whether rendered over loudspeakers or headphones, usually presents a static aural picture of how a reproduced space sounds for a pre-recorded sound source captured in a controlled, anechoic environment. One key aspect missing from this acoustic picture is how the source reacts to the aural feedback it receives from the space itself.
Although this is not critical when recreating the experience from a passive audience member’s perspective, it becomes much more important when the listener is also the performer – an active sound source responding to what they hear as the sound they make is projected into the acoustic environment in which they have been virtually placed. Research here focuses on both vocal and instrumental performers, investigating how stage acoustics might be rendered and presented to the performer in a more subjectively and objectively accurate manner, as well as examining how singers perform in both real and virtual spaces. The results have immediate application in the development of virtual performance and rehearsal spaces for musicians, but also extend to the development of more immersive and believable virtual reality and game audio systems.
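As an illustration of the basic rendering operation underlying such systems, the sketch below convolves a dry (anechoic) source signal with a room impulse response to produce the auralized signal a performer would hear in the virtual space. This is a minimal example only: the sample rate, decay time, and synthetic impulse response are all assumptions, not the group’s actual rendering pipeline, which would operate in real time on the performer’s live sound.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000  # sample rate in Hz (assumed)

# Synthetic stand-ins: a 1 s dry "performance" signal and an exponentially
# decaying noise burst as a placeholder room impulse response (RIR).
rng = np.random.default_rng(0)
dry = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)        # anechoic source
rt60 = 1.2                                                # decay time in s (assumed)
t = np.arange(int(rt60 * fs)) / fs
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / rt60)  # synthetic RIR

# The auralized (wet) signal: the dry source as heard in the virtual room.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping on playback
```

In a real virtual rehearsal space this convolution would run block by block with low latency, so that the performer hears the room respond to their own sound as they produce it.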
Members
- Damian Murphy
- Jude Brereton
- Helena Daffern
- Gavin Kearney