Sunday, 25 June 2017

Soundtrack for a Moving Image

Devising a Soundtrack

Public Domain
Being in the public domain means being owned by the public: the intellectual property is free from copyright law and its restrictions. In the UK, music enters the public domain 70 years after the death of the last surviving author.
Internet Downloading
Sound effects and music can be downloaded from the internet; however, monetised use of copyrighted songs breaks the law. A wide range of recordings are in the public domain, though.

We used licensed/public domain sound effects downloaded from the internet in our moving image project, as it would be hard to recreate similar sound effects with Foley using readily available tools.
Licensed Music 
Licensed music is music that is subject to copyright law but can be legally played in public because the author has granted permission. Licensed music for our soundtrack can be bought online.
Licensed SFX 
Sound effects can also be bought online.
Mechanical Copyright Protection Society, Performing Rights Society Alliance (MCPS-PRS)
PRS and MCPS are the major copyright organisations in the UK, and they distribute licences for music. PRS distributes licences that allow music to be played in public, live or through a recording, broadcast on TV or radio, and streamed or downloaded via the internet. It then distributes a portion of the revenue to the copyright holders of the song.

MCPS, however, handles the "mechanical rights": it entitles you to earn money when a song or recording is reproduced, whether it is distributed on CD, streamed or downloaded via the internet, or broadcast on TV/radio. In some cases, money is collected by both organisations and distributed evenly.

PRS for Music is the umbrella body that runs both of these organisations.

Recording Audio for Moving Image


Monitor and Control
Volume can be monitored and controlled via a volume units (VU) meter, which measures the average level of a signal, while a peak programme meter (PPM) measures the peak level of a signal. Both meters work by measuring the strength of the signal passing through them, and signal strength is expressed in decibels (dB).
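To make the dB idea concrete, here's a small Python sketch (illustrative only, not part of our coursework) showing the difference between a peak reading like a PPM's and an average (RMS) reading like a VU meter's, for samples in the -1.0 to 1.0 range:

```python
import math

def peak_db(samples):
    """Peak level in dBFS, as a PPM-style reading."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_db(samples):
    """Average (RMS) level in dBFS, closer to what a VU meter shows."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A signal peaking at half of full scale reads about -6 dBFS.
signal = [0.5, -0.5, 0.5, -0.5]
print(round(peak_db(signal), 2))  # -6.02
print(round(rms_db(signal), 2))   # -6.02
```

For a steady square-like signal the two readings agree; for real programme material the peak reading sits well above the RMS one, which is exactly why both meters exist.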
Synchronisation
Synchronising an audio track can be easy with preparation. In professional settings a clapper board is used: an audio-visual cue designed to sync audio to video. To start a take, you slam the clapper board down; the board is visible in the footage and the loud clap shows up in the audio track, giving a common reference point. Clapper boards also have the scene and take numbers written on them to help editors navigate their footage.

SMPTE, the Society of Motion Picture and Television Engineers, developed the standard for labelling frames of film with a timecode. Video with SMPTE timecodes can be used to synchronise audio and video together.
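An SMPTE timecode is just hours:minutes:seconds:frames counted up from a frame number. A quick Python sketch (assuming 25 fps, the UK PAL frame rate, and ignoring the drop-frame variants used for NTSC):

```python
def frames_to_smpte(total_frames, fps=25):
    """Convert a frame count to an HH:MM:SS:FF SMPTE-style timecode
    (non-drop-frame; 25 fps is the PAL rate used in the UK)."""
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_smpte(0))      # 00:00:00:00
print(frames_to_smpte(90125))  # 01:00:05:00 at 25 fps
```

A DAW reading this timecode from the video can then place audio events against exact frames rather than guessing by eye.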
Producing a Soundtrack


Marking
Using the marker tool in Cubase you can compose your work around the video you're working on. If you mark, for example, the moment where a chase scene begins or a dramatic event happens, your composition can build into that intense moment with plenty of warning.
Storing and Archiving
When working on making sound effects or music for a soundtrack, it makes sense to store and archive your work properly for future use. By labelling work correctly, accessing work becomes much easier and more efficient. Backing up your files also means that you won't be in a position where you've lost files that you're either working on or might need to recycle for a future project.
Logging Soundtracks
By logging your soundtracks, you can navigate through your previous work more easily. If you log where you've used each sound effect for your soundtrack in a Word document, you won't waste time looking for it later on.


Task 1:
Be able to devise a soundtrack for a moving image project
Professional practice: working with a director; working to a brief; working with a studio crew; working with a location crew; meeting audience requirements in relation to issues of taste and decency
Components: dialogue; recorded music; pre-recorded music; SFX, eg pre-recorded, public domain,
licensed, own; library, eg, audio CD, CD ROM, internet, public domain, licensed material

Planning: capabilities of the available locations; recording equipment; software; recognition of various audio formats and their compatibility; copyrights; documentation

Intellectual property: public domain; internet downloading; licensed music; licensed SFX; Mechanical Copyright Protection Society-Performing Rights Society Alliance (MCPS-PRS)


Task 2:
Be able to record audio for moving image
Environments: studio and location sound formats; mixing live sound; acoustic interference

Equipment: selection; configuration and operation (studio, inside, outside, on location); video; digital; from single sources; from multiple sources

Microphones: selection; handling; positioning for different environments (indoor, outdoor and studio)

Connecting audio: awareness of talk-back; headphones; recognising and applying cabling connections

Monitor and control: monitoring and controlling of recording levels via peak program meters (PPMs) and volume units meters (VUMs); fundamentals of decibels (dBs)

Synchronisation: timecode use; SMPTE

Content: dialogue, eg individuals, groups, crowds; music, eg solo, ensemble, vocal, instrumental; location, eg background animate, background inanimate, wildtrack; SFX

Documentation and storage: marking; storing and archiving of all types of sound recording media; logging tracks and timing; log soundtracks from video and audio rushes using time-code and control track

Task 3:
Be able to produce a soundtrack for a moving image project
Professional practice: working with a director; requirements of client; requirements of audience
Creativity: using audio track to complement the visual content of a production (speech, music, ambient sound, SFX)

Edit sound to picture: locking sound and vision (synchronisation); lip synchronising; split edits; use of  timecode; adding music or background atmosphere; laying off and laying back tracks
Sound processing and enhancement: use of digital effect generators or synthesisers

Mixing and dubbing sound sources: level setting; equalisation; mixing dialogue; music and effects; using appropriate compression


Friday, 23 June 2017

Audio Production Processes and Techniques

Spy - Dusty Fingers

I assembled the tracks by adding them into Cubase. After that, I recreated the drum track from the original song using the samples of the drum set. Once that was done, I arranged the drums into the track and duplicated the pattern several times. I found the track was at 170 bpm, which I used to arrange the drums. I then mixed the track a little. I liked the vocals, so I wanted to make them the most important part of the track. I duplicated the lead vocals and split the audio channels, although not completely: I set one at L75 and one at R75. I added chorus to both tracks to make them sound a little different and add some variation, and took away some of the volume from both of these tracks to compensate. Then I took the main vocal FX track and added some reverb using the REVerence plugin.
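As a side note, finding the bpm matters because it tells you how long each beat lasts when you're lining drums up on the grid. A quick Python sketch (assuming a 44.1 kHz sample rate, which is just the common default, not something specific to this project):

```python
def beat_length(bpm, sample_rate=44100):
    """Length of one beat in seconds, and in samples at the given rate."""
    seconds = 60.0 / bpm
    return seconds, round(seconds * sample_rate)

secs, smps = beat_length(170)  # this track's tempo
print(round(secs, 3), smps)    # 0.353 15565
```

So at 170 bpm each beat is roughly a third of a second, which is the spacing the duplicated drum pattern has to respect.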

I wasn't happy with the bass after this so I used an equaliser to boost the lower frequencies of the jungle bass track.

Chase and Status - Time

Straight away I found that I would have to lower the overall volume of the track, as it was getting very close to clipping. I assembled the tracks, then added in the drum break. I found the track was 140 bpm. After this I used a couple of effects on a more subtle track that I liked, "vox time time", to make it a little more interesting. I used distortion and an octaver, and the end result had an interesting sound.

The arpeggiated synths track was one of my favourites, so I wanted to make it stand out. I used the equaliser to cut the bass and treble and boost the mid frequencies. This raised the overall volume of the track and gave it a muffled effect which I liked.

The two vox leads were very similar, I found, so I thought it would sound good to move one to the left channel and one to the right channel, which made the audio a little more interesting. I also turned the vox hard lead channel up slightly, as I found it too quiet.

Mixing Audio

Radio
All radio stations use a multi-band compressor on the songs they play. This results in a flat dynamic range, which is bad for listening: it sounds squashed. To counteract this, radio-exclusive mixes exaggerate the dynamic range to compensate.

Music
Mixing music is about creating 3D depth. Sound needs to come out of both channels in a balanced way: a mono recording sounds uninteresting, and an unbalanced mix sounds strange. Samples that are most important to your mix should be brought forward; the sounds you notice first are clear, dry and loud. To send something to the back, add some blurry, low-detail reverb to reduce its clarity, lower its volume, and reduce the treble slightly to emulate the way sound is dampened by surroundings, since everyday objects absorb treble far more readily than bass.

Sound for Games
Sound for games is unique because games often need smooth transitions to match the tone of the events taking place; otherwise the atmosphere is lost. Side-chaining (explained under Editing - Speech) can be used to lower the volume of music as sound effects are played.

Mixing for record release
Mixing for record release means assembling every recorded sample ready for the mastering process, and it also includes adjusting volume. Six decibels of headroom is ideal, because the track is then unlikely to clip during mastering unless extreme audio processing is applied. Also ensure there is no compression on the master track: it would lower the dynamic range, making the sound flat.
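Checking headroom is simple arithmetic: it's the gap in dB between the mix's loudest peak and full scale (0 dBFS). A small illustrative Python sketch:

```python
import math

def headroom_db(samples):
    """Headroom in dB between the mix's peak and 0 dBFS (full scale)."""
    peak = max(abs(s) for s in samples)
    return -20 * math.log10(peak)

mix = [0.4, -0.45, 0.5, -0.3]      # peak of 0.5, about -6 dBFS
print(round(headroom_db(mix), 1))  # 6.0 dB of headroom
```

A peak at half of full scale works out to roughly 6 dB of headroom, which is why aiming for peaks around that level leaves the mastering engineer room to work.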

Production Possibilities
During production, creating multiple variations of one mix allows a final mix to be made that improves on all of them. Having several mixes side by side also gives you a point of comparison, which is invaluable when making subtle changes to a track.

Audio Post Production
Audio post production depends on the type of audio. Most audio requires EQ, applied using digital audio workstations and mixing desks: for example, the low end of vocals often needs removing, bass can need boosting, and problem frequencies need removing. Audio can be made into a stereo recording by panning across the left and right channels, which makes it more interesting to hear; the left and right channels must be balanced, however. Once this is done, audio tracks can be processed further.

Live Sound
Live sound is subject to the acoustic properties of the room it is recorded in. An empty room will create reverberance, a large room will create echo, other sound sources like people talking will create noise, and a certain level of distortion is expected.

Recordings
Mixing can be done live during a recording using a mixing desk: EQ and volume can be adjusted while the recording is monitored. Recordings must have plenty of decibels of headroom before they clip; 6 dB should be enough. Non-studio recordings, where the frequency response cannot be flat, can be mixed during a sound test to balance levels according to the acoustic properties of the room.

Analogue
Analogue editing is very different from digital editing. Analogue audio for cassettes is stored on magnetic tape, and one of the things that makes analogue editing so different is the order in which tape must be edited. Tape must be edited in a linear way: parts of a tape cannot be skipped, and the only way to move through a tape is forwards or backwards. Tape must also be edited manually, by hand, similar to the way in which old films were cut.

Computer-based Software
The introduction of digital audio workstations like Cubase allowed for sound to be edited, processed, mixed and mastered in ways previously impossible. Audio no longer has to be edited in a linear way.

Compression and Equalisation
Compression squashes the dynamic range of audio. It does this by bringing the loudest parts of the signal down closer to the quieter parts, then raising the overall volume of the track.
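A heavily simplified Python sketch of that idea (per-sample, with no attack/release smoothing, so only illustrative): levels above a threshold are reduced according to a ratio, then make-up gain lifts the whole track.

```python
import math

def compress(samples, threshold_db=-10.0, ratio=4.0, makeup_db=0.0):
    """Toy compressor: levels above the threshold are reduced by the
    ratio, then optional make-up gain raises everything."""
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db > threshold_db:
            # e.g. 4:1 ratio: every 4 dB over the threshold becomes 1 dB
            level_db = threshold_db + (level_db - threshold_db) / ratio
        level_db += makeup_db
        out.append(math.copysign(10 ** (level_db / 20), s))
    return out

# A 0 dBFS peak over a -10 dB threshold at 4:1 comes out at -7.5 dBFS.
print(compress([1.0], threshold_db=-10, ratio=4))
```

Real compressors track a smoothed envelope of the signal rather than acting on each sample, but the gain maths is the same.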

Equalisation also manipulates the frequencies of sound waves. Using EQ you can define which frequencies to boost and which to cut, which when used correctly can enhance recordings and remove noise.

Use of Reverberation and Effects
Reverberation is another tool available in DAWs like Cubase; you may also find reverb dials on guitar amps. Reverb is an effect similar to echo, in which sound waves bounce off multiple surfaces and decay over time. Artificial reverb is created by mixing in delayed, decaying copies of the sound.
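A very crude Python sketch of that principle (a single delay line with decaying repeats, nothing like a real reverb algorithm, just to show the delayed-copies idea):

```python
def feedback_delay(samples, delay, decay=0.5, repeats=4):
    """Crude artificial reverb: mix in progressively quieter, delayed
    copies of the signal, imitating reflections dying away."""
    out = list(samples) + [0.0] * (delay * repeats)
    for r in range(1, repeats + 1):
        gain = decay ** r           # each repeat is quieter than the last
        offset = delay * r          # and arrives later
        for i, s in enumerate(samples):
            out[i + offset] += s * gain
    return out

dry = [1.0, 0.0, 0.0, 0.0]          # a single impulse ("clap")
wet = feedback_delay(dry, delay=2, decay=0.5, repeats=2)
print(wet)  # [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.0, 0.0]
```

Plugins like Cubase's reverbs use far denser networks of delays (or recorded impulse responses), but the core trick is this same trail of fading copies.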
Synchronisation
Cubase and many other DAWs have tools that help you synchronise sound to video, such as hit markers that can mark important parts of the video.

Recording and Sequencing Software
Cubase is recording and sequencing software, also known as a digital audio workstation, or DAW. Ableton is another piece of recording and sequencing software with similar functions.

MIDI
MIDI stands for Musical Instrument Digital Interface. MIDI note data can be used when mixing in DAWs like Cubase. MIDI offers complete control, as it allows you to change notes without having to record again. MIDI is similar to direct input (DI) in that MIDI data is not subject to the acoustic properties of the room where the sound was captured; acoustic character can be added later using audio processing.
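One nice consequence of MIDI storing notes as numbers is that pitch becomes simple arithmetic. The standard mapping from a MIDI note number to a frequency in equal temperament (a general formula, not anything specific to Cubase) is:

```python
def midi_to_hz(note):
    """Frequency of a MIDI note number (note 69 = A4 = 440 Hz);
    each step of 12 doubles or halves the frequency (one octave)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(midi_to_hz(69))            # 440.0 (concert A)
print(round(midi_to_hz(60), 2))  # 261.63 (middle C)
print(midi_to_hz(81))            # 880.0 (A one octave up)
```

This is why a DAW can transpose a MIDI part instantly: it just shifts note numbers and recalculates, with no re-recording.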

Synthesisers
Synthesisers generate and manipulate sounds to create music. They can be recorded in a couple of ways: an analogue synthesiser's audio output can be plugged directly into a mixing desk or computer ready for recording in Cubase, or synthesiser sounds can be generated from MIDI data.

Sampling Software
While Cubase and other digital audio workstations now make some sampling software irrelevant, sampling software is still used to grab samples, whether from digital files or from physical media like cassette tape, vinyl and CD. Some specialised sampling programs can be superior to DAWs at this job, offering more tools for manipulating the samples. WaveLab is one Steinberg application that can be used alongside Cubase to extend its functionality towards that of specialised sampling software.

Editing

Speech
Ducking is a technique often used in radio when a DJ needs to talk over music. If audio ducks, it gets quieter temporarily. This is often done using side-chain compression, and is sometimes used in music too. With side-chain compression, two tracks are picked: as track 1 gets louder, track 2 "ducks", allowing track 1 to be heard clearly.
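A toy Python version of the idea (a real side-chain compressor works on a smoothed envelope with attack and release times; this just switches the gain per sample to show the principle):

```python
def duck(music, trigger, threshold=0.1, duck_gain=0.3):
    """Side-chain-style ducking: wherever the trigger track (e.g. a
    DJ's voice) is loud, the music track is turned down."""
    out = []
    for m, t in zip(music, trigger):
        out.append(m * duck_gain if abs(t) > threshold else m)
    return out

music = [0.8, 0.8, 0.8, 0.8]
voice = [0.0, 0.5, 0.6, 0.0]   # DJ speaks on the middle two samples
print([round(x, 2) for x in duck(music, voice)])  # [0.8, 0.24, 0.24, 0.8]
```

In a DAW you get the same effect by routing the voice into the compressor's side-chain input on the music channel, rather than writing any code.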

Music


Background Noise and Ambience
Noise can be controlled using EQ and filters. EQ can reduce problematic frequencies: if you can hear birds in a recording, for example, the chirping can be reduced by lowering the volume of the high-end frequencies that contain it. An alternative is a low-pass filter, which lets low frequencies through and attenuates higher frequencies.
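A first-order low-pass filter is only a few lines of Python. The sketch below (illustrative, using a made-up smoothing factor rather than a real cutoff frequency) shows how it smooths fast changes, i.e. high frequencies like chirping, while letting slow changes through:

```python
def low_pass(samples, alpha=0.2):
    """First-order low-pass: each output leans on the previous one,
    smoothing away rapid changes while passing slow ones."""
    out = []
    prev = 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

# A rapidly alternating (high-frequency) signal is heavily attenuated:
print(low_pass([1.0, -1.0, 1.0, -1.0], alpha=0.2))
```

Feed it a steady (low-frequency) signal instead and the output settles towards the input almost untouched, which is exactly the "pass low, cut high" behaviour described above.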

Ambient sound however is different to noise. Noise is any kind of unwanted sound, whereas ambient sound is just background sound, and can create atmosphere.

Content and Corrections
Content can be filtered out by chopping up recordings. If a voice actor makes a mistake in the middle of a sentence, that part of the recording can be cut out. Profanity can also be cut out, and often is, using radio stations' broadcast delays.

Linear Editing
Linear editing is the only way to edit cassette tapes: they can only move backwards and forwards, they can't skip around. Linear editing must be done manually.
Non-linear Editing
Almost all modern media now uses non-linear editing, as it makes it much easier to drastically change sound in ways that were once impossible.
Edit Lists
Edit lists are used to keep track of edits that have been made to recordings. By having an edit list in front of you, you can be sure of exactly what you've done to the song, which allows you to undo any mistakes you might make during the mastering process.

Play lists
Radio DJs often use playlists. A playlist is a list of recordings/songs ready to play. By creating a playlist in advance, DJs don't have to worry about choosing songs and getting them ready on the day, and can instead focus on their other work.

Streaming services like Apple Music and Spotify offer custom playlists as a feature to attract customers.