
Monday, 29 March 2010

MS (middle & side) mic technique



Date:
Duration: 5 hours

To increase my knowledge of mic techniques, I decided to research and implement the MS recording technique. I had had very little exposure to it and knew very little about how best to implement it.
I recalled reading an article in Future Music magazine which explained the MS technique, so I began by digging up the old issue to re-digest the information. The article (August 2008, p40~46) discussed in depth the workings of the technique and how to correctly implement and decode the signal.

I read around the subject more on the internet and also had the opportunity to use the technique whilst recording some classical guitar after borrowing a microphone with a switchable polar pattern (Audio-Technica AT 2050).



A little background on the MS technique

The MS technique is a coincident technique (the mics are placed close together) that employs a bidirectional (figure-of-eight) microphone facing sideways and another microphone at 90° to it, facing the sound source. The technique is completely mono compatible and is widely used in film and audio recording.
The signal from the figure-of-eight mic is duplicated and the duplicate's phase (polarity) is reversed; the two copies are panned hard left and hard right. The middle component (as its name suggests) is panned to the centre.


The huge advantage of this technique is its unique ability to adjust the stereo width of the recording after it has been made; in other words, the effective polar pattern can be altered. The technique allows for the capture of very wide-sounding recordings suited to many applications: guitar, piano, drums etc.
It captures the room ambience via the bidirectional microphone, while the cardioid middle microphone picks up the direct sound.
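To make the decoding concrete, here is a minimal sketch in Python (an illustration only, not taken from the article; the array names and the width parameter are assumed):

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode coincident M/S signals into left/right.

    mid   : signal from the forward-facing cardioid
    side  : signal from the sideways figure-of-eight
    width : 0.0 = mono (mid only), 1.0 = full recorded width
    """
    left = mid + width * side    # side panned hard left
    right = mid - width * side   # phase-reversed side panned hard right
    return left, right

# Folding the decoded stereo back to mono cancels the side component entirely
mid = np.random.randn(48000)    # placeholder mid signal
side = np.random.randn(48000)   # placeholder side signal
left, right = ms_decode(mid, side)
mono = 0.5 * (left + right)     # equals mid: +side and -side cancel
print(np.allclose(mono, mid))   # True -> complete mono compatibility
```

Turning the width parameter down narrows the image towards mono, which is exactly the after-the-fact width control described above.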

After hearing samples of the technique on sites such as YouTube, I wanted to have a go at it myself, so after borrowing the Audio-Technica AT 2050 from a friend, I used the technique on a folk duo who came in to do some recording.



Recording

Setup: I used the Audio-Technica AT 2050 as the bidirectional mic and the AKG C214 as the directional cardioid microphone. The microphones' diaphragms were at an angle of 90 degrees to each other, with the cardioid facing directly towards the source (the folk duo) and the bidirectional mic facing the sides, hopefully capturing the ambience.


(setup in the same configuration as the picture above)



(above are the positions of the vocalist and guitarist in the MS field)

I plugged the AKG C214 (cardioid) and the Audio-Technica AT 2050 into a Focusrite pre-amp, duplicated the output of the bidirectional mic using the patchbay (Behringer PX 2000) and fed the signals into the soundcard.
After the recording was made inside Logic, I bussed the side components to a stereo bus. This allowed me to control both side components and fade them in and out for comparison. There is also an MS decoder inside Logic which needs only one side component and duplicates the other automatically, with similar results.
The results became apparent instantly when I mixed the side components in with the middle component: the stereo field became very wide and the imaging very precise. When I turn the sides all the way down, the track becomes mono. When the sides are panned to the centre they cancel each other out perfectly, because they are 180 degrees out of phase, again leaving just the mono signal.

I was very impressed with the results and started to understand the technique's significance and its potential for stereo recording. After the recordings and experimentation, I was sure I would be using this versatile technique for many recordings in the future. After returning the Audio-Technica AT 2050 to my friend, I began looking online for a multi-pattern mic to buy for future recordings.


Here is the recording with the MS technique deployed. The side components are first brought up slowly, then turned on and off for easy comparison:

Thursday, 25 March 2010

Recording in Pro Tools & mixing in Logic


Date: Recording 20/3/10 & Post-Production 25/3/10
Duration: Recording 5 hours + Post Production 4 hours

I recorded a band named Enable Submarine along with two other friends at a project studio (not mine) which had a Pro Tools LE system. No click track was used for the recording because the drummer was not confident she could play along with it. This made mixing outside Pro Tools harder, but since all tracking started from the beginning of the session it was not too difficult to import into Logic.


Recording


Acoustic guitar and vocals:
  • Vocals were recorded using a large-diaphragm condenser microphone
  • Acoustic guitar was recorded using 2 AKG C1000s, one at the front and another at the back
  • Vocals and guitar also used 2 ambient microphones in an AB configuration (AKG C214s)
Bass guitar
  • Played over the drums, with a headphone mix created inside Pro Tools
  • Bass was recorded via DI and the cab was also miked; the two signals give more options when mixing.
Drums

  • Recorded over the vox/guitar track.
  • 2 overheads (C214s) in XY configuration, a D112 on the kick and an SM57 on the snare. No tom miking was necessary as the drummer didn't use the toms for this particular song.
  • Re-recorded several times to get the timing right. The drummer was advised to play the hi-hat open, with more stick, etc.
Electric guitar
  • Rhythm/lead guitar was recorded using 2 mics (C214 & Rode NT-1a)
  • The mics were compared side by side; I ended up using both together for a fuller tone.
  • The solo guitar was recorded using the same configuration


 Post-Production

After the session, all the necessary data was transferred to my external hard drive and transported to my project studio. As there is no way to save a Pro Tools project file so that it can be opened inside Logic, I had to import the audio files manually, then name, colour-code and pan each track to the necessary positions.

  (individual files had to be tested and compared against the bounced track from the Pro Tools session)


After importing all the necessary files into Logic, I went about sorting out the levels. One problem that was spotted immediately was the overall low level of the recordings. During recording the levels seemed reasonable, but after being transferred they seemed to have dropped drastically in volume. I inserted a gain plug-in on most tracks to solve the problem.

(as you can see, many of the channels have a gain insert in them to solve this low level problem)
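As a rough illustration of how much gain such an insert needs (my own sketch, not part of the session; the -10 dBFS target is an assumed example), the make-up gain in dB can be worked out from a track's current peak:

```python
import numpy as np

def makeup_gain_db(audio, target_peak_db=-10.0):
    """Gain (dB) needed to raise a track's peak to the target level."""
    peak_db = 20 * np.log10(np.max(np.abs(audio)))
    return target_peak_db - peak_db

quiet_track = np.array([0.04, -0.03, 0.025])   # placeholder samples peaking around -28 dBFS
print(round(makeup_gain_db(quiet_track), 1))   # -> 18.0 dB of gain needed
```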




I treated the drums, bass guitar and acoustic guitar with parallel compression. This boosted their presence in the mix and made them sound punchier. The compressor inserted on the parallel bus is dialled in with what would normally be extreme settings, and the bus is gradually mixed in with the original signal until the desired result is achieved.
(above shows the drum parallel compressor with hard-ratio dialed in. There is also a HF shelf boost to make the hi-hats shimmer more in the mix)
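As a sketch of the idea (an illustration only, not the actual Liquid Mix settings; the threshold, ratio and blend values are assumed), parallel compression just means summing the dry signal with a heavily compressed copy:

```python
import numpy as np

def hard_compress(x, threshold_db=-30.0, ratio=10.0):
    """Crude peak compressor: anything above the threshold is squashed by the ratio."""
    thresh = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    over = mag > thresh
    out = x.copy()
    out[over] = np.sign(x[over]) * thresh * (mag[over] / thresh) ** (1 / ratio)
    return out

def parallel_compress(dry, blend=0.4):
    """New York style: mix the heavily compressed copy in under the dry signal."""
    return dry + blend * hard_compress(dry)

drums = 0.3 * np.random.randn(48000)   # placeholder drum bus audio
mix = parallel_compress(drums, blend=0.4)
```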

The vocals were treated using two compressors, one after the other: the first to tame the high-energy transients and the second to balance the overall body of the voice.

All reverb featured in the mix was handled by the external Lexicon MX300 rack unit with custom settings dialled in.
All compression and most EQ were handled using the Focusrite Liquid Mix 16.

There were problematic areas in the song where the signal was running too hot and going into digital clipping (with the limiter in place). I manually drew in automation on the areas concerned and tweaked the levels (mostly the vocals).
After finishing the mix I reviewed it several times to make sure the balance was right, then discovered a couple of areas where the drums suddenly dropped out (a problem with the automation). This was fixed quickly and the mix was done.

Sunday, 21 March 2010

Rewire (Reason 4 with Logic 8)


Date: 21/3/10
Duration: 4.5 hours

Having used Reason to create drum sounds in the past, I thought that to make the most of Reason it needed to be Rewired to Logic. The idea is that MIDI notes generated inside Logic trigger the drum machines/synths inside Reason, whose audio is fed back into Logic. To accomplish this, several steps must be taken to ensure Reason enters Rewire mode as the slave with Logic as its host/master.

To understand how to set up Rewire properly I read and watched tutorials online. They explained in depth the dos and don'ts of Rewire, its capabilities as well as its limitations.


Setting up Rewire

First, the host program (Logic) has to be opened before Reason; this step is important, otherwise it won't work. Once Reason is opened, a tab at the top indicates it has entered slave mode. It is now relying on Logic for its input and output.

To get any sound out of Reason, an External MIDI track must be created. Then an auxiliary channel must be created inside the mixer and the appropriate input chosen.



(When creating the aux channel, it is important to make sure the input corresponds to the instrument's output in Reason's hardware interface)



(above shows the Reason rack from behind: the Malstrom & Subtractor connected to the Combinator, with the Redrum drum sequencer and Reason's sampler connected to individual outputs)

After creating all the necessary inputs and outputs, I began playing some notes on the controller. After deciding on a particular beat, I hit the record button and played it in. I added a few extras then quantised the result. The advantage of using Reason's drum sounds is the ability to customise them: each drum sound in the kit can easily be swapped out from the extensive library. I didn't like the original snare sound in the kit, so I loaded a new sound by clicking on the folder button and chose another one I felt worked better.
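For reference, quantising simply snaps each recorded MIDI note onto the nearest grid division. Here is a minimal sketch of the idea (not Logic's or Reason's actual algorithm; the note times and 1/16-note grid are assumed):

```python
def quantise(note_starts, grid=0.25):
    """Snap note start times (in beats) to the nearest grid line.

    grid=0.25 beats corresponds to a 1/16-note grid in 4/4.
    """
    return [round(start / grid) * grid for start in note_starts]

# A slightly loose hi-hat pattern pulled onto the grid
played = [0.02, 0.27, 0.51, 0.74, 1.05]
print(quantise(played))   # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```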

I loaded up two separate sounds inside the Combinator and tweaked them until I was happy with the result. Inserting a line mixer inside the Combinator enables the user to balance the different sounds; if you want to bring one particular instrument out over the other, it can be turned up as well as panned.

(picture above shows levels can be adjusted to get the right balance at the outputs)


After I had all the sounds I was looking for, I began playing them over the drums I recorded earlier. Small sections were played and copied, much like Apple Loops inside Logic, but these could be altered as they are MIDI data and not sound files.

(above shows the automation data, the volume changes of the drums and the panning of the strings)

After a brief mix it became apparent that I needed to enhance the drums, as their presence in the mix was not great. I would normally use parallel compression on drums (New York compression), but I was unsure of the results inside Reason, so I decided to do this inside Logic using the Focusrite Liquid Mix emulations.

(the picture shows the Empirical Labs Distressor emulation with a Neve desk EQ)

Hard compression was used to really squash the dynamic range, with a slight shelf boost to help the hi-hat and cymbals cut through.


Here is the finished product:

Thursday, 18 March 2010

Room acoustics & treatment

Date: 17 & 18 March 2010
Duration: 6 hours


Having recently changed the layout of my project studio, I decided to look at the acoustics of the room to achieve better results in mixing.
Limited space means extra care must be taken over the layout, practicality and the final result. I knew compromises had to be made because of the less-than-ideal space, but I needed to learn more acoustic theory so that I could improve the acoustics of the room using treatment.


Above is the current layout of the room. There is a fixed bookshelf at the bottom right of the room which is acting as a diffuser at the moment (with different depths of books and a generally uneven surface) and which could be moved if necessary.
There is a drum kit inside the room which cannot be placed anywhere other than this particular room. Although it is far from ideal to have the drums in the same room as the control room, no alternative exists at the moment.
All the rack-mounted gear is tucked under the table to save space, so that's not an issue. Two guitar amps are placed in front of the table, in the top centre of the diagram above; one of them is usually tucked under the table when not in use. There is also a bass guitar cabinet which is stored in another room and only brought in when tracking, so it's not included in the plan.
There is a sofa for artists to sit on while tracking (bottom left).


Background Research

After spending several hours browsing websites and articles in Sound On Sound magazine, I got a basic grasp of the fundamentals of acoustics.
Here are some areas I need to consider to improve the acoustics in my room:

  • The listening position has to be symmetrical relative to the dimensions of the room. Currently I am tucked away in the top left-hand side of the room, so I'm going to have to find a more symmetrical position inside the room.
  • Try to reduce reverberation inside the room. The room has carpeting and thick curtains, so there is very little reverberation already: not enough for it to be a problem and about right for critical listening and mixing.
  • Prevent standing waves & acoustic interference. Because of the bare wall behind the listening position, in all likelihood waves reflected from that wall are interfering with the sound coming from the monitors. This causes peaks and dips in the frequency response of the room and needs to be sorted (see the sketch after this list).
  • Reduce modal ringing. This is more difficult to deal with and I'll probably look further into it on another occasion.
  • Prevent flutter echoes. This is usually treated using absorption and diffusers. Dealing with echoes can improve stereo imaging.
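To put some rough numbers on the standing-wave point above, the axial room-mode frequencies come from f = c·n/(2L) for each dimension. Here is a small sketch (the room dimensions below are assumed examples, not measurements of my room):

```python
# Axial room modes: f = c * n / (2 * L) for each room dimension L
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def axial_modes(length_m, count=4):
    """First few axial standing-wave frequencies (Hz) for one dimension."""
    return [round(SPEED_OF_SOUND * n / (2 * length_m), 1) for n in range(1, count + 1)]

room = {"length": 4.0, "width": 3.0, "height": 2.4}   # assumed dimensions in metres
for name, dim in room.items():
    print(name, axial_modes(dim))
# length [42.9, 85.8, 128.6, 171.5]
# width [57.2, 114.3, 171.5, 228.7]
# height [71.5, 142.9, 214.4, 285.8]
```

Frequencies in this range are exactly where thin foam does very little, which is why bass trapping matters.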
Recently I was given around 20 panels of acoustic foam by a friend who had acoustically treated his vocal booth for recording. After doing research online and reading articles in magazines, it became apparent that acoustic foam panels are not very effective compared with other materials such as rigid fibreglass. These foam panels are okay at absorbing high frequencies but do next to nothing for bass frequencies.
What I decided to do in the end was to source other products that are more effective at lower frequencies, such as bass traps, and use them in conjunction with the donated acoustic panels. I will also attempt to find creative ways to use existing furniture etc. to further improve the acoustics of the room.

Here is a plan for the room showing the new positions for the furniture and the absorption placement:


Red indicates a bass trap, blue indicates acoustic panels and green indicates a diffuser.
The bass traps are going to have to be purchased, and the diffuser I'll attempt to construct myself. The foam acoustic panels will be placed on the walls not only as single panels but also in two layers on top of each other. This increases the thickness of the foam and its absorption coefficient, extending its effective absorption range down into the mids as well.
After everything is done I should end up with a room more suitable for critical listening and mixing, less coloured by the room and with enhanced stereo imaging.

Sunday, 14 March 2010

Soundtrack for a short film



Date:14/3/10
Duration: 4 hours

I was approached by a friend who is creating a short film for his studies at Derby College. Originally I was asked to create a totally original track for the short film, but the concept of the film changed and he wanted a cover of an existing track.
He wanted an acoustic version of the song 'Heartbeats' by the Swedish band The Knife. But after realising that a famous cover of the song by José González already existed (used for a Sony Bravia advert), we decided to do an electronic version (similar to the original) while also incorporating elements of acoustic guitar.


(above is the guitar tab the track was referenced from)

Using the tab above, I entered the corresponding notes into Logic's piano roll (sequencer), trying to keep it as accurate as possible.
I entered the notes for the acoustic guitar and created accompanying string parts using Logic's EXS24 sampler.
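As a side note on translating tab into piano-roll notes, each string/fret position maps to a MIDI note number (open-string note plus fret). A minimal sketch, assuming standard EADGBE tuning (the example position is made up, since the tab itself isn't reproduced here):

```python
# MIDI note numbers of the open strings in standard tuning (low E to high e)
OPEN_STRINGS = {"E": 40, "A": 45, "D": 50, "G": 55, "B": 59, "e": 64}

def tab_to_midi(string, fret):
    """Convert a string/fret position from the tab into a MIDI note number."""
    return OPEN_STRINGS[string] + fret

print(tab_to_midi("A", 7))   # 7th fret on the A string -> MIDI 52 (E3)
```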




Care was taken to ensure the strings' envelope (ADSR) was appropriate for the piece and fitted properly into the context of the song.
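For context, the ADSR envelope just describes how a sound's level rises and falls over time. Below is a rough sketch of building such an envelope (an illustration with assumed values, not the EXS24's actual parameters):

```python
import numpy as np

def adsr(attack, decay, sustain, release, note_len, sr=48000):
    """Build an ADSR amplitude envelope (times in seconds, sustain as a 0-1 level)."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(int((note_len - attack - decay) * sr), sustain)
    r = np.linspace(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])

# A slow attack and long release suit sustained string pads
env = adsr(attack=0.4, decay=0.2, sustain=0.8, release=1.0, note_len=2.0)
```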

Other aspects of the song, such as the drums, used loops inside Logic. I normally don't use drum loops, but because of the urgency of the project I tried the loops and was generally happy with the result.

Synth sounds also utilised the EXS24. I opened a preset and tweaked it until I achieved the sound I was looking for.


Extra drum sounds (overdubs) were created using Ultrabeat inside Logic.

After all the sounds I needed were created and recorded, I quickly mixed the work for my friend to take with him.



Here is the recording my friend took with him (NOTE: this is pre-mixing; the song has since been mixed properly and more instruments added):

Thursday, 11 March 2010

iPod compositions


Date: 11th March 2010
Duration: 4 hours


Having seen people on websites such as YouTube perform music together on their iPod Touches and iPhones, I thought it would be interesting to create music using just the iPod itself and record it to a recorder.


Procedure

I began by finding suitable apps (applications) for the iPod Touch to create music with. I browsed through reviews in magazines and on websites (e.g. Future Music) to find, if possible, free music apps. Many app creators release 'lite' versions of their apps (like demos) so customers can sample some of the features before committing to buying the full version. I found some interesting apps, downloaded them and began messing around, tweaking their sounds and finding some I could use.

Here are the apps I used on the recording:





Having selected which apps to use, I went about setting up the Boss BR-600 recorder. I used a mini-TRS connector from the headphone output of the iPod to the Line (audio sub mix) input on the recorder.
I originally set up a click track on the recorder using the onboard drum machine, but decided against it afterwards and laid down the tracks click-free. I tracked the drum beats first using two separate apps. I set the levels using the Boss's onboard metering, adjusted the Rec Level wheel on the side and started recording.

 (as you can see above, all apps utilised the iPod touch's touch capabilities for expressive feel)


After the drums came the synthesizers and the piano app. NLog (a free synth) boasts many features including expression wheels, an XY pad and adjustable controls (e.g. oscillator cutoff), and I spent a long time tweaking things. The piano app is very simple by comparison, with fixed key sizes and only the sound of a piano, which it performs to a tolerable standard.

Afterwards I added sound effects in the form of scratching and gunshot sounds. I found a free disc-scratching app while surfing the App Store and wanted to add it to the song in progress somehow. Having never scratched before, I hastily practised until I was satisfied I could 'perform' over the existing track, then hit the record button. I also played the piano app as an accompaniment to the synth.

  
Result

Although this idea was hastily put together and far from professional quality, with the ever-increasing number of music apps created for hobbyists as well as serious musicians, it may soon be possible to emulate existing professional gear well enough for performance and, to a lesser extent, recording.


Here is the recording :

Thursday, 4 March 2010

Reason 4




Date: 4/3/10
Duration: 7 hours

Having heard many accolades for the Reason software in the past, I decided to buy it and make use of its excellent features. Having never used the program properly, I browsed through the manual and then went on YouTube and other websites (including the official Propellerhead website) for some tutorials on how to use the program.




After opening Reason I first placed a mixer, which automatically connects its output to the hardware interface. I got interested in Reason because people told me it was an excellent program for drums and general beat-making, so I decided to insert the Redrum drum machine into the rack.



After inserting it into the rack I loaded up some more drum samples: factory presets as well as custom patches and ReFill packages that came bundled with the software (bought on eBay). I played around with the sounds using the controller keyboard, tweaked a few things and programmed the onboard sequencer with some drum beats.

I then added reverb and delay units as aux sends, both on the Redrum's sends and on the mixer sends. All the returns were fed back into the mixer where they could be "mixed" with the rest of the audio.


After deciding on the particular drum sounds that I liked, I went about processing them and reshaping their sounds for creative purposes. I had 3 snare samples with varying degrees of reverb added, and one of the snare samples triggered another sample using the gate patching found on the back of the rack (accessed by pressing the Tab key). I added delay to one of the snares and reverb to the overall drum mix, but I didn't want reverb on the lower frequencies (kick drum), so I placed a parametric EQ after the auxiliary send and before the reverb unit to remove the low frequencies.
I also used a distortion unit on the hi-hat and one of the snares for more 'bite', so they cut through better in the drum mix.



(above shows all the routing except the reverb unit which couldn't fit in the window)
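To illustrate the idea of keeping the kick out of the reverb (a sketch only, not Reason's actual EQ; the 150 Hz cutoff is an assumed value), a high-pass filter on the send before the reverb removes the low frequencies:

```python
import numpy as np
from scipy import signal

SR = 48000
CUTOFF_HZ = 150.0   # assumed cutoff, low enough to strip the kick from the send

# 2nd-order Butterworth high-pass applied to the aux send feeding the reverb
b, a = signal.butter(2, CUTOFF_HZ / (SR / 2), btype="highpass")

drum_bus = np.random.randn(SR)                 # placeholder drum audio
reverb_send = signal.lfilter(b, a, drum_bus)   # only mids/highs reach the reverb
```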

After I was happy with the sound of the drum kit I had created, I programmed the sequencer to play some varying drum beats and then hit the record button. While recording I toggled between the different banks so it switched seamlessly between the drum beats. After I had some beats down I decided to add some piano on top (it sounded too bland as drums on their own). I loaded up the sampler and placed it in the rack. I tweaked the levels a bit, hit play, played over it several times and decided on what I was going to play.
Having not planned from the start to play anything over the drums, the beats were all over the place, but I managed (hopefully) to play something over them; since I couldn't find the quantise button inside Reason, it's left as I played it.

Here is the finished product (not really finished):