Mobile: 07539886775
Email: squanchaudio@samuelscott-tsd.co.uk
Dialogue System (FMOD for Unity):
This project was the main component of my MSc dissertation, in which I developed a series of audio tools aimed at helping junior sound designers implement FMOD audio in Unity. The dialogue system is XML-based and allows users to author complex dialogue interactions through a standalone authoring application developed alongside the Unity system. Using LINQ to XML (via the file browser developed for the tool), authored XML files can be deserialised into Unity, where the system handles all aspects of dialogue management. The tool also supports unlimited conditions and player response authoring, giving users the capability to create complex dialogue sequences for their games. The dissertation was awarded a first-class mark. Please note the video is quite long, as the dialogue system has many complexities and features. These are the timestamps for the sections: feature overview: 0-8:15 mins; demo overview: 8:15-11 mins; demo: 11 mins to end.
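As a rough illustration of the approach (a simplified sketch rather than the dissertation code, using a hypothetical XML schema), dialogue nodes might be deserialised with LINQ to XML along these lines:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical node shape; the real tool's schema and condition
// handling are more involved.
public class DialogueNode
{
    public string Id;
    public string Text;
    public List<string> Responses = new List<string>();
}

public static class DialogueLoader
{
    // Parse an authored XML file into dialogue nodes using LINQ to XML.
    public static List<DialogueNode> Load(string path)
    {
        XDocument doc = XDocument.Load(path);
        return doc.Descendants("Node")
            .Select(n => new DialogueNode
            {
                Id = (string)n.Attribute("id"),
                Text = (string)n.Element("Text"),
                Responses = n.Elements("Response")
                             .Select(r => (string)r)
                             .ToList()
            })
            .ToList();
    }
}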
Audio Occlusion and Obstruction system (FMOD for Unity):
This system was another component of my MSc dissertation and aimed to extend the capabilities of the FMOD spatialiser. As FMOD's default spatialiser supports neither occlusion nor obstruction, a system was developed that allows for the easy inclusion of these acoustic phenomena. The system's complexity is well abstracted: sound designers can set it up entirely from the Unity inspector, and widening parameters account for acoustic differences between large and small sound sources.
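The core technique can be sketched as follows: a minimal single-ray version with a hypothetical "Occlusion" event parameter (the dissertation system additionally exposes widening parameters for source size, and smoothing is omitted here):

using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Minimal occlusion sketch: one raycast between emitter and listener
// drives an FMOD parameter. Parameter name and layout are illustrative.
public class SimpleOcclusion : MonoBehaviour
{
    [SerializeField] private EventReference eventRef;
    [SerializeField] private Transform listener;      // usually the camera
    [SerializeField] private LayerMask occluderMask;

    private EventInstance instance;

    private void Start()
    {
        instance = RuntimeManager.CreateInstance(eventRef);
        instance.start();
    }

    private void Update()
    {
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(transform.position));

        Vector3 toListener = listener.position - transform.position;
        bool occluded = Physics.Raycast(transform.position, toListener.normalized,
                                        toListener.magnitude, occluderMask);
        // 0 = clear line of sight, 1 = fully occluded.
        instance.setParameterByName("Occlusion", occluded ? 1f : 0f);
    }

    private void OnDestroy()
    {
        instance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        instance.release();
    }
}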
FFT analysis and BPM detection (FMOD low level API and Unity):
This project was unique in that it let players not only add their own audio tracks and artwork, but also attempt to calculate each added track's BPM. As well as the BPM detection and the track and artwork importing, I developed the playback mechanism that lets users create their own beat maps. Furthermore, as the main game had already been built (playing a set of pre-defined tracks), I also replaced all of the built-in Unity audio with audio implemented via FMOD. This was part of an MSc coursework in which we made three games (alpha, beta, and gold); this was our beta game.
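BPM detection can be approached in several ways; the sketch below shows a simplified time-domain, energy-based scheme (illustrative thresholds, and not the FFT-based analysis used in the project itself), which flags beats where a window's energy spikes above a rolling average and estimates BPM from the beat spacing:

using System.Collections.Generic;
using System.Linq;

public static class BpmEstimator
{
    // Simplified energy-based BPM estimate over mono PCM samples.
    public static float Estimate(float[] samples, int sampleRate)
    {
        const int window = 1024;          // ~23 ms at 44.1 kHz
        const int historyLength = 43;     // ~1 s of windows
        var beatTimes = new List<float>();
        var history = new Queue<float>();

        for (int i = 0; i + window <= samples.Length; i += window)
        {
            float energy = 0f;
            for (int j = 0; j < window; j++)
                energy += samples[i + j] * samples[i + j];

            float time = i / (float)sampleRate;
            bool pastRefractory = beatTimes.Count == 0 ||
                                  time - beatTimes[beatTimes.Count - 1] > 0.25f;

            // A beat: energy well above the recent average.
            if (history.Count == historyLength && pastRefractory &&
                energy > 1.4f * history.Average())
                beatTimes.Add(time);

            history.Enqueue(energy);
            if (history.Count > historyLength) history.Dequeue();
        }

        if (beatTimes.Count < 2) return 0f;
        float avgInterval = (beatTimes[beatTimes.Count - 1] - beatTimes[0])
                            / (beatTimes.Count - 1);
        return 60f / avgInterval;
    }
}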
Pykrete Simulation (Music System):
This is the music system created for the Pykrete simulation. While game development was my most substantial role on that project, music and sound effects were essential for giving life to the simulation and reinforcing player actions. The music was implemented to provide an ambient feel that progresses as the player advances through the different stages of the simulation. The music adapts based on a parameter tracking the current year, and the system showcases both audio editing techniques (such as crossfade loops and transitions) and programmed parameter changes that drive the music system forward.
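Progression of this kind typically needs only a single parameter update from gameplay code; a minimal sketch (the parameter name "Year" is illustrative, not necessarily the project's actual parameter) might look like:

using UnityEngine;
using FMODUnity;

// Minimal sketch: push the simulation's current year into FMOD so the
// music system can crossfade and transition between stages.
public class MusicYearDriver : MonoBehaviour
{
    [SerializeField] private StudioEventEmitter musicEmitter;

    // Called by the simulation whenever the year advances.
    public void OnYearChanged(int year)
    {
        musicEmitter.SetParameter("Year", year);
    }
}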
Grid-based movement with the command design pattern:
As part of Global Game Jam 2023 I took on the role of game designer. I was responsible for developing the game's grid-based movement system, which was built using the command design pattern. As the only audio engineer on the team, I also designed and implemented the audio using FMOD for Unity; this was done in the final stages of the jam, as game design was my main priority on this project.
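A minimal version of the pattern (illustrative, not the jam code) encapsulates each move as a command with execute/undo, which makes queuing and rewinding grid moves straightforward:

using System.Collections.Generic;
using UnityEngine;

public interface ICommand
{
    void Execute();
    void Undo();
}

// One grid step, reversible by storing its offset.
public class MoveCommand : ICommand
{
    private readonly Transform actor;
    private readonly Vector3 delta;   // one cell, e.g. Vector3.right

    public MoveCommand(Transform actor, Vector3 delta)
    {
        this.actor = actor;
        this.delta = delta;
    }

    public void Execute() => actor.position += delta;
    public void Undo()    => actor.position -= delta;
}

// Runs commands and keeps a history so moves can be undone.
public class CommandInvoker
{
    private readonly Stack<ICommand> history = new Stack<ICommand>();

    public void Run(ICommand command)
    {
        command.Execute();
        history.Push(command);
    }

    public void UndoLast()
    {
        if (history.Count > 0) history.Pop().Undo();
    }
}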
Project Apollo (Manhattan Internship):
This project was part of an internship I undertook. While the internship was primarily audio-focused, aiming to test the capabilities of an audio tracker software, I landed the role of game designer and that was my primary focus. During the project I also acted as a mentor to the other participants and taught them key programming concepts, such as design patterns (I undertook the internship during my MSc in games development, while the other participants were at various stages of the Audio Technology BSc). This meant that, although my main role was game design, I wrote core code that could easily be extended (for example through interfaces) and delegated tasks to the other team members to extend that core code with further functionality. Audio implementation was done through the Manhattan music tracker, and the whole purpose of the internship was to test the software's generative audio implementation capabilities within Unity.
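As an example of the kind of extension point I mean (a hypothetical sketch, not the project's actual code), core code can depend only on an interface so that team members add behaviour without modifying the core:

// Core systems depend only on this interface; teammates extend the
// game by adding implementations rather than editing core code.
public interface IInteractable
{
    void Interact();
}

// Hypothetical extension supplied by another team member.
public class AirlockDoor : IInteractable
{
    public void Interact()
    {
        // Door-specific behaviour goes here.
    }
}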
Sonar (Manhattan Internship):
This project was another game developed as part of the Manhattan internship. My main responsibility was creating the intersection shader for the sonar, which was made using the Unity Shader Graph in conjunction with the VFX Graph. As with the Apollo game, I taught key VFX and shader concepts to the other participants, which I had learned during my MSc in commercial video game development.
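The shader itself is node-based, but a sweep effect like this is typically driven from script; a minimal sketch (the global properties _SonarOrigin and _SonarRadius are hypothetical names that would be exposed by the graph) might look like:

using UnityEngine;

// Sketch of driving a Shader Graph sonar effect from C#. The graph
// reads the globals below to draw the expanding intersection ring.
public class SonarPulse : MonoBehaviour
{
    [SerializeField] private float speed = 10f;      // expansion in units/sec
    [SerializeField] private float maxRadius = 50f;

    private float radius;

    private void Update()
    {
        radius = (radius + speed * Time.deltaTime) % maxRadius;
        Shader.SetGlobalVector("_SonarOrigin", transform.position);
        Shader.SetGlobalFloat("_SonarRadius", radius);
    }
}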
Pykrete Simulation:
This was a client-based project in which I developed a simulation of a proposed global-warming solution. The client was happy with the simulation and credited me on the website they have launched to promote the concept, in the hope of gaining funding from bodies such as the UN. While I did implement the audio using FMOD, game development was the most substantial undertaking on this project. My credit is available at: https://freezingglobalwarming.org/team/ The full unedited video is available here: https://drive.google.com/file/d/1Wbh35cOO_P7iSZskFa8h9eYJV2LTWdHR/view?usp=sharing
Please note that I have many other examples of audio editing to video content; however, these were produced under educational use (such as editing audio for BBC Springwatch clips). It would therefore be a breach of copyright to show them on my portfolio, as both the audio and the video are copyrighted, and the editing involved track-laying and editing copyrighted audio within an educational setting. The World of Warcraft audio rework is different, as all of the audio was created or recorded by myself and the cinematic was not used in full.