OK, I scrapped my original reply - it was rambling all over the place (and this one is possibly even worse!)
I’ve done a LOT of thinking about this over the past few days. Partly because I’m FASCINATED by how and why others make their recordings. Now that I’ve just switched to a DAW, this world of MIDI and wotnot is suddenly open to me (I found out the other day that I now have the facilities to record a guitar part, convert it to MIDI, and play it on a virtual instrument, ANY virtual instrument - WHAT?!?!?! LOL I might start investigating that sooner or later - but artistically, with what I’m trying to create at the moment, that side of things doesn’t really appeal to me except as an “ooh that’s clever!” thing).
I ought to say up front - I am interested in the technicalities of stuff, ESPECIALLY guitars and amps. But also, sound in general, filtering on synths, etc… I am a techie in my day job… but really, musically, I’m just a guitar playing singer-songwriter who really wanted to be Robin Gibb, Freddie Mercury, Janis Joplin, etc… so when I’m recording, I’m after a way of presenting my noodlings in that light as effectively and easily as possible. In other words, I like experimenting with the tools and the techniques, I love it, but NOT when I’m making music.
Basically, most of the time, I create songs with one instrument - probably a guitar - a pen and an A4 pad to write the lyrics on. Recording doesn’t often come into the songwriting process for me.
That’s not always true. Sometimes I might use the recording machine a little to kick a song off. I’d start with a drum machine groove and then jam along a bit. If I get a part I like, I might record it and then jam along with that. This could be electric guitar, bass or sometimes piano. It won’t get ANYWHERE, though, unless I start hearing and singing some stuff over it. And as soon as that happens, I’m back to the sheet of A4. Usually as far from the recording machine as I can get.
Most of the time for me a song is either a thing to play or a thing to listen to when performed. A recording is a record of a specific performance of a specific arrangement of a song.
Songwriting to me is a performing/thinking process. And as solutions appear the process has to move FAST for me, faster than I can set up sounds, or edit MIDI. Far faster than I can rerecord parts to change them, or cut and paste sections, and so on. I don’t want to be fiddling with any recording gear at all when writing a song.
Also, arranging always happens in my head as I write. When I perform/write these things, I can ALWAYS hear the Andrew Russe Band in my head doing its thing - that helps me feel it and therefore write it. When I was in real bands years ago, I had awful trouble because the buggers would never play what I could hear in my head!
So, anyway, I find it loads easier to write songs without recording.
Equally, I find it easier to record if I know what I’m recording. Of course, during recording, things can change when I hear stuff I hadn’t thought of - I think of that as “production”, what a producer might do. But if it amounts to much more than a lyrical change or an extra or different guitar part, it often involves scrapping the recording so far and starting again (or deciding to abandon the new idea because there’s too much invested in what I have already - recording is a kind of a pragmatic “get it done” thing for me, like a piece of work to do in the day job).
I think the above “write songs then record the ones I like” has a lot of bearing on me liking the hardware approach.
I think of recording like painting (oil mainly, but sometimes it’s like pastels, or watercolour). I’m making a thing out of different colours and textures to tell a story or create an emotion that’s in my head. I’m doing it to and for myself, assuming that some others will have the same needs and desires, and accepting that many others won’t.
I think of mixing in a visual way too. Theatrical, actually, looking down on the stage from the balcony. I’m organising the actors, scenery, and lighting so that the person sat in the middle of the first row of the circle gets the best experience of the overall play: the whole thing itself, and the individual performances of the various characters, and the various plotlines making up the whole.
For all of this, I’m thinking AUDIO, sounds to blend. Notes and chords and rhythm obviously are part of all that, but I’m really thinking in terms of successfully captured multiple audio tracks. I’m thinking of guitar parts, and organ parts, and vocal parts, and so on.
I’m after the FASTEST and EASIEST way for me to get those parts captured and reproducible. I’m guessing that’s the same goal for any of us, no matter how we do it and no matter what genres/sounds we’re working in.
For me personally, at the moment, working in MIDI/software is the slowest way of me getting certain things done. In fact, all I use MIDI for at the moment is the drummer.
Vocals and acoustic guitar, they’re all done with microphones.
If I want a choir, yes, I have to sing every part. I have recordings here on alonetone where there might be upwards of 50 voices singing simultaneously - I sang every one of them. It took a while, but not as long as you’d think.
There are shortcuts like duplicating and detuning and delaying, etc - you can apply them to voices and guitar parts, and I have, even in “professional” studios years ago. I will use that sometimes, but there’s nothing quite like separate individual performances, some spotless, some less so, some deliberately ragged - especially if you have to sing them all yourself.
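For anyone wondering what the duplicate/detune/delay trick boils down to, here’s a minimal numpy sketch of the general idea - this is my own illustration, not anyone’s actual plugin or settings, and the function name, cent amount and delay time are all made-up defaults:

```python
import numpy as np

def detune_delay_double(signal, sr=44100, cents=15, delay_ms=20, mix=0.5):
    """Thicken a mono part by mixing in a slightly detuned, delayed copy.

    A rough sketch of the classic 'duplicate, detune, delay' move.
    All parameter values here are illustrative guesses, not real settings.
    """
    # express the detune (in cents) as a resampling ratio
    ratio = 2 ** (cents / 1200)
    n = len(signal)
    # naive detune: resample via linear interpolation (ignores formants etc.)
    src_pos = np.clip(np.arange(n) * ratio, 0, n - 1)
    lo = src_pos.astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = src_pos - lo
    detuned = signal[lo] * (1 - frac) + signal[hi] * frac
    # delay the copy by a fixed number of samples (pad with silence)
    d = int(sr * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(d), detuned])[:n]
    # blend the dry part with its doubled copy
    return (1 - mix) * signal + mix * delayed

# quick demo on a one-second sine standing in for a vocal
sr = 44100
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 220 * t)
wet = detune_delay_double(dry, sr)
```

As the post says, though, this never quite sounds like genuinely separate performances - the copies are too correlated with the original.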
I’ll also copy and paste something like backing vocals from one chorus to another like we used to in the old days in 8 or 16 track studios (you only had 2 tracks left for backing vox, so you’d bounce the band to a stereo tape, bounce that back to a clean piece of multitrack, do the backing vox, mix them to stereo, then fly them back in on the multitrack master in the right place for each chorus).
Electric guitars, I might mic up an amplifier, but not so much nowadays. Most of the time I use an outboard modelling amp DI’d straight into the desk (Yamaha THR100HD for the last x years now). Since the switch to the DAW I have tried plugin amp modelling - I actually have some groovy stuff on this PC - but I can’t yet get results as good as my trusty outboard gear.
For piano and organ, again it’s outboard gear for me. I have a Korg stage piano that I also use to play a Roland organ module with drawbars and chorus and leslie stuff. At the moment that setup does everything I want - acoustic pianos, electric pianos, rock/pop/jazz organ, harpsichord, basic strings. Again, I record audio via the line outs - usually MONO. I find mono keyboard stuff usually makes for much better stereo mixes on the sort of music I do.
Yes… this hardware approach involves recording loads of tracks. But that’s the way it’s always been to do the sorts of recordings I want to do.
My last one (In The Nick Of Time) has LOADS of tracks - I think 70 to 80 audio tracks in all.
I knew the sound I wanted from “my band”. It’s a simple band, probably around 7 people if we went on stage and we could all sing (drums, bass, 2 or 3 guitarists, 2 keyboard players). I recorded what was needed to make those parts for a studio recording, cleaned and bounced them into stereo submixes, then I could use them as “parts” in the mix.
For example, the keyboards were multiple tracks to build up the three parts I’d want recorded if I could play properly. The first was the main electric piano that goes all the way through and interacts with the electric guitar in the verses. The second was an organ that complements this. This one was initially experimental, I just imagined it was needed as well as the main piano. But I was prepared to ditch this organ if it was surplus to requirements. I just thought I needed those frequencies in the mix, and early listening showed it was working, so I recorded it all. Those first two were recorded mono and are panned opposite each other, I don’t think fully, can’t remember. Then, third, there’s another organ part for the bridge/chorus bits. I think the top of that was recorded stereo for the leslie/chorus effect. Those three “parts” ended up as three stereo bounces that I used in the main mix, but they were each made up of about 4 tracks (I still can’t really play left and right hand at the same time well enough all the time for a single take).
The same applied to the acoustic and electric guitars. A set of stereo bounces, built up of individual takes, some double-tracked, to achieve the parts I wanted on the recording.
Now, those keyboards, I know I could now record them as MIDI, and possibly fix or change them much more easily later. But that would take me much longer at the moment because I’d have to learn how to do it effectively.
Also, I want those sounds coming out of the hardware, I like them - and like Sudara says, I’d need the hardware on and linked during mix down if I’d recorded just the MIDI. (I believe Sudara’s saying he records both? That is a target for me when I get more familiar with the DAW setup, record both midi and audio on keyboards, record both amp output and instrument DI on guitars and basses - that way I’ve got the option of re-amping or re-instrumenting if selections were wrong but the performance was OK)
But there’s something else for me. And this is REALLY important for my approach to creating recordings. If I’d recorded MIDI, and had those “options”, I still wouldn’t have the final “part” I needed. Whatever I have is still up for later negotiation if I don’t like the sound or playing. And that would paralyse me when it came to mixing.
It’s partly why I don’t like the guitar amplifier plugin approach either. The “keeping options open for later” approach when recording really does not seem to work for me. It seems to take me longer, causes me stress during mixing, and seems to yield inferior results in my hands.
Personally, I’d much rather use the “commit early” approach to recording. So I kind of like that using hardware almost forces you down this path. I know what I’m trying to achieve, so get the piano player on the stage. OK, he’s gone stage right, cool, that means the organist goes the other side. Now, get him to play the piano he’s going to play… are you really gonna do that mate? It ain’t gonna fit with my telecaster part… ah, that’s better… yeah, OK, you’re using THAT piano for this, please don’t change it, no I don’t want to hear anything else, I like that one… Now, what notes are you playing… er… you do realise that gets in the way of the lead vocal? etc…
I suppose, because what I’m really doing is building the accompaniment for the song I’m going to sing, that the “commit early” approach works really well for me. I know what I’m trying to achieve, I have a very clear idea of that, though not necessarily the “how”. The earlier I can clear up all the “how” decisions, the faster it all goes, and the better chance I’ll have of actually completing something I want to publish.
And that’s what makes the “hardware” approach so attractive to me. As the player, you turn your instrument on, rehearse your part(s), play it/them, then go grab a brew while the artistic bloke gets on with his widdling about with the engineer and the knobs on the desk.
As I’m the “artistic bloke” too, I can see the attraction of being able to edit the part, even change the instrument, when the hamfisted player is gone … but, at the moment, that seems like an awful lot of extra effort for not a lot of return. I’d rather the player gives me the part I want before I let him slope off down the pub.
This may change for me in the next few months now that I’ve got a DAW set up starting to work for me. Especially as I have access to notation software and huge libraries of samples for orchestral stuff.
But… right at the moment I have hundreds of Andrew Russe songs I’ve never recorded or played live, and the Andrew Russe Band is raring to go to see whether any of them are any good. Some of them, now I’ve switched to a DAW, are looking more achievable than they ever have before in my home studio environment. And that stuff is what I want to do for the foreseeable future.
For the last x years I’ve been using a Boss BR1600 to do this. It’s a standalone 16 track machine (8 mono, 4 stereo, 16 layers of virtual tracks) with a built in drum machine. I used it for all tracking, mixing and mastering. I didn’t use any of its looping facilities. I didn’t use any of its onboard guitar effects (they weren’t as good as the hardware I had). I used it as a recording studio with a programmable drummer in it.
I’ve just switched back to using a DAW. My previous DAW experience x years ago didn’t go well for me. When I switched to standalone recorders, the restrictions and reliance on hardware imposed by the BR1600 turned out to be really good for what I wanted to learn.
I was a bit scared when I could see that standalones were becoming obsolete and I thought I didn’t want to go near DAW/software again. But my BR1600 was dying and I realised I wanted to move before it was forced on me.
I decided I’d treat the new DAW in the same way as the BR1600 to start with - like the 8, 16, and 24 track tape-based studios I’d recorded demos in for various bands between 1985 and 2000. Then, as I’m learning the DAW, I’ll find more stuff I can learn/embrace.
All I needed was something to replace the BR1600’s drummer - and I found that in EZ Drummer. The approach in EZ Drummer is almost exactly what I was doing in the BR1600 but SO much easier and faster (and more stuff that I can learn/embrace when I’m ready) - it’s also helped me decode my DAW’s approach to MIDI so it’s helping with that too.
I did loads of experimenting (some on the new song, most on little 16 bar jams and things like that). I know how the MIDI stuff works, I’ve even learnt how to edit it. I converted a guitar part to MIDI and played it back on some sort of spaceship’s hooter. I tried auto-tune - interesting(!). I tried loops. I even made loops!! Related to that, I learnt (and used) “pocketing” by editing and slipping audio. Processing-wise, I’d already used parallel compression etc on the BR1600, but found it was LOADS easier in the DAW. I started using saturation too. I ended up taking it all off again cos it sounded better without all the compression and saturation tricks LOL. Things got out of hand for a week or two, until I remembered the restrictions I’d promised to apply to myself.
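For anyone curious what “pocketing” by slipping audio amounts to at its most basic, here’s a minimal sketch - my own illustration, assuming a track is just a numpy array of samples (real DAW slip-editing works on individual regions and crossfades the joins, not whole tracks in one go):

```python
import numpy as np

def slip(track, samples):
    """Slide a track later (positive) or earlier (negative) by whole samples,
    padding the exposed end with silence and keeping the length the same.

    This is the crude core of 'pocketing': nudging a part so its hits
    land in the pocket of the drums.
    """
    if samples == 0:
        return track.copy()
    if samples > 0:
        # push the part later: silence in front, trim the tail
        return np.concatenate([np.zeros(samples), track])[:len(track)]
    # pull the part earlier: drop the front, silence at the tail
    return np.concatenate([track[-samples:], np.zeros(-samples)])

# demo: a 'track' of ten numbered samples
track = np.arange(1.0, 11.0)
late = slip(track, 3)    # part now starts 3 samples later
early = slip(track, -2)  # part now starts 2 samples earlier
```

In practice you’d slip by a few milliseconds’ worth of samples at a time and judge the result by ear against the drum track.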
I’ve done one song on the new setup now (and I’m remastering all the old BR1600 mixes). I’m happy, and the BR1600 is on its way to the attic. I feel the new one’s a step forwards from the recording perspective. But I’m terrified it was a fluke, so I’m going to keep imposing the restrictions I set out with for a few more songs. Then we’ll see.
One last thing about the hardware approach: I’m a guitarist, I think in terms of an instrument in my hands and a backline amp behind or beside me. My purchases over the years have been driven by this thinking. I even purchased the new stage piano just a year or two ago based on “I want cool action and sounds, and it MUST have line outs and MIDI sockets”. I was not interested in USB connectivity, and I rejected anything that had USB only (scarily, that’s a lot of stuff).
So I have a bunch of hardware that makes REALLY GOOD sounds for me once I’ve got past the presets etc. If I didn’t have all that, or if I’d got into computers before I got into making music, and purchasing any of that hardware, then I suspect I’d be asking the “How in the heck do you actually make songs with hardware?” question myself.
I suspect it all depends on what direction you’re approaching from. I also understand that the world is heading in the software-based direction… that’s cool, whatever, hope my hardware outlasts me!!!
I personally don’t think any approach is “wrong” - that’s one of the fabulous things about creativity.
I’m VERY interested in hearing how other folks do stuff, even if I can’t see how to fit it into my workflow at the moment. And anything I’ve got, that can help other folks’ approach, I’m really happy to share.