DARKNESS VR 

In this world, the sense of eyesight does not exist… it is completely dark, and you must make your way through space using only your hearing…

Darkness VR is a Virtual Reality experience with no visuals to rely upon - players are not given the affordance of sight. The piece centers on specific sound cues and directions that the player must home in on to find their way through the darkness to their destination.

The experience begins with the user hearing that they are on a New York City subway train approaching the West 4th Street Station. They must then focus on the spatialized, localized audio to navigate off the train onto the station platform, call and take an elevator to street level, find a street corner, listen for when it is safe to cross the street, and ultimately reach their destination next to the fountain in Washington Square Park.

Created in collaboration with Nick Tanic. Designed in Unity, and built for the Oculus Rift S.

INSPIRATION

I wanted to capitalize on the sensory affordances of the virtual reality medium and create a stark, audio-centric exploration of accessibility. My desire was to let users experience a full world through a single sense, with all the others withheld.

I also wanted to dive deep into spatial audio and learn as much as I could about its nuances through research and practice - this was my first work in the area.

Another inspiration was to create an audio-centric piece that lets audiences take a moment and listen to the everyday New York City sounds that often get taken for granted. Through all the bustle, devices, and external stimuli, the sounds of the subway platforms, streets, and parks can pass us by - this is an artifact for that remembrance.

THE PROCESS

I was responsible for conceptualizing the blueprint and the recording plan for all the necessary real-world audio, creating the spatialized audio journey of this world, and building the path and mapped structures for the experience, while my partner, Nick, worked on all the animations for the triggered events.

I began by first studying the path of this journey in reality, and then built its corresponding world in Unity for virtual reality.

 
 
 
 

The path for the journey is:

a) starting on an MTA Subway A Train approaching the West 4th Street Station,
b) exiting the train and walking along the station platform until reaching the elevator,
c) calling and taking the elevator to street level,
d) exiting the elevator and walking along a street sidewalk,
e) walking until reaching an intersection,
f) waiting for traffic to yield before crossing the street, and
g) entering and walking through Washington Square Park until reaching the large fountain.

Following that, I charted and executed the recording of the audio for this world. Nick and I positioned ourselves 15 feet apart on the subway, subway platform, street, park, and so on, and began recording mono tracks at exactly the same time for the same duration. We would then each move 15 feet forward along parallel lines and repeat the recording until we had captured the entire path.

 
 

I then devoted a lot of effort to the guts of this project - spatializing the audio. As I dove into the mechanics, I uncovered a whole trove of additional details to learn: human hearing localizes sound through a complex interplay of timing and level differences between the left and right ears, modeled by the head-related transfer function (HRTF). Spatializing audio to mimic that is a really exciting area.

I used the Steam Audio Unity plugin to achieve this, and dove into attenuation radius, rolloff model and volume, and reverb zones for each audio source. As best I could, I tuned how all the recorded audio complemented one another to ensure the mix approximated reality when moving through this virtual world.
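As a rough illustration (not the project's exact code), each recorded clip can be set up as a 3D spatialized source in Unity like this. The fields shown are Unity's standard AudioSource API; the attenuation radius value is an assumption chosen to echo the 15-foot recording grid, and Steam Audio is assumed to be selected as the spatializer plugin in Project Settings > Audio:

```csharp
using UnityEngine;

// Sketch: configure one recorded clip as a spatialized 3D audio source.
[RequireComponent(typeof(AudioSource))]
public class SpatialCueSetup : MonoBehaviour
{
    public float attenuationRadius = 15f; // assumed value, mirroring the 15-foot recording spacing

    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;            // route through the chosen spatializer plugin (Steam Audio)
        source.spatialBlend = 1f;            // fully 3D, not 2D stereo
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural-sounding distance falloff
        source.minDistance = 1f;             // full volume within 1 unit of the source
        source.maxDistance = attenuationRadius; // effectively inaudible beyond the radius
        source.loop = true;                  // ambient cues play continuously
        source.Play();
    }
}
```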

 
 
 
 

After some user testing, I discovered it was important to build a barrier around the circumference of the path. Since the player is in complete darkness and guided only by their hearing, it was pivotal to ensure they couldn't veer far from the areas of the world with audio - otherwise they would drift out of earshot and be left directionless, with no trigger able to bring them back.

This world would be difficult enough to navigate with only hearing.
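One simple way to sketch such a barrier in code (the actual project may have used invisible colliders instead; the corridor bounds here are hypothetical) is to clamp the player rig to the audible region every frame:

```csharp
using UnityEngine;

// Sketch: an invisible barrier that keeps the player within the corridor
// of the path where audio sources are audible. Bounds are placeholder values.
public class PathBarrier : MonoBehaviour
{
    public Transform playerRig;                      // the VR camera rig
    public Vector2 xBounds = new Vector2(-2f, 2f);   // corridor width (assumed)
    public Vector2 zBounds = new Vector2(0f, 120f);  // path length (assumed)

    void LateUpdate()
    {
        // Runs after locomotion each frame, so the player can never
        // end the frame outside the audible corridor.
        Vector3 p = playerRig.position;
        p.x = Mathf.Clamp(p.x, xBounds.x, xBounds.y);
        p.z = Mathf.Clamp(p.z, zBounds.x, zBounds.y);
        playerRig.position = p;
    }
}
```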

 
 

In addition, as the player moves through the world (embodying the “camera” in Unity), I scripted the locomotion so that they can only move in the direction the headset is facing, and locked the Y axis so they cannot fly up into the sky or sink beneath the ground.

The player is then free to move along the X and Z axes while the headset remains at the same “height”, giving them a real feeling of gravity, much like moving about in the real world! With hearing as their only tool, it was important to minimize every other variable so audiences never have to question whether the world behaves like reality.
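A minimal sketch of that locomotion scheme (names and the walking speed are assumptions, and the input axis stands in for whatever controller binding the project used): the gaze direction is flattened onto the ground plane, and the rig's Y position is re-pinned after every move.

```csharp
using UnityEngine;

// Sketch: move only in the direction the headset faces, with the
// vertical axis locked to a fixed standing height.
public class DarknessLocomotion : MonoBehaviour
{
    public Transform head;       // the HMD camera transform
    public float speed = 1.2f;   // metres per second (assumed walking pace)
    private float fixedHeight;

    void Start()
    {
        fixedHeight = transform.position.y; // lock Y to the starting height
    }

    void Update()
    {
        // Project the gaze direction onto the ground plane so looking
        // up or down never produces vertical movement.
        Vector3 forward = head.forward;
        forward.y = 0f;
        forward.Normalize();

        float input = Input.GetAxis("Vertical"); // e.g. thumbstick forward (assumed binding)
        transform.position += forward * input * speed * Time.deltaTime;

        // Re-pin the Y axis in case anything nudged it this frame.
        Vector3 p = transform.position;
        p.y = fixedHeight;
        transform.position = p;
    }
}
```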

Below on the left is the world the user is navigating with all its sounds and triggered events. And below on the right is what the user is seeing while moving through - complete darkness!

 
 

Live Demos of players navigating through the world!