I prefer software to hardware when it comes to editing and recording. You can do so many things on-the-fly without splicing 2" tape, which is tedious, especially when you botch the cut six times because the coffee is making your hands shake. But honestly, when I cut one of my songs on 2", it sounded better than anything I've ever done in Pro Tools. It's just that Pro Tools is so damn efficient; I can wrap up a session in less than half the time it takes tape to record, play back, find memory locations, and edit.
If you mix "in-the-box", which I do most of the time because it's practical and affordable, remember that you ARE limiting yourself. The mixes I've done on the SSL are FAR better than my in-the-box ones. Besides giving you the headroom to tweak your levels, EQs, and compression, channeling your signal through analog components and tubes just does something to your sound that no software can compete with.
Now there are tons of arguments on both sides. One side says:
"The digital domain is so clean, and you should stay in it as long as possible. Analog adds color to your sound that you don't want."
Then the other side usually says:
"There's this <i>warmth</i> in analog that you just <i>can't get</i> in the digital domain. Analog adds color to your sound that you <i>do want!</i>"
That's why I think it's awesome how analog and digital, along with hardware and software, come together. Today, the best mixing engineers will take the digital Pro Tools sessions and mix them down on an <i>analog</i> board, onto <i>analog</i> 1/2" tape. Then they get mastered to CD, back into the digital domain. What's funny is that the CDs are <i>replicated</i> (not duplicated), and the process is physical: the zeroes and ones get "stamped" by the master into molten polycarbonate on the copies. I find it strange that the process of copying digital data is done so mechanically.
You can't avoid pairing hardware with software somewhere along the line, though. It all starts with a microphone anyway. Sound is <i>analogous</i> to whatever it moves or vibrates through. Our ears perceive changes in air pressure, in what's called <i>compression</i> and <i>rarefaction</i>. A microphone perceives the same thing. Down the cord, you now have options: you can convert this into math, or you can keep using components <i>analogous</i> to the air pressure changes to record your sound. Transistors and diodes and tubes and capacitors will take your electrical energy and manipulate the voltage. Tape will saturate your electrical energy into magnetic particles. And then it all goes backwards, back to the original transducing element, but now a speaker, which pushes the air molecules again. If you stay in the software domain, you lose the <i>analogy</i>. Math is calculated. Waves are <i>redrawn</i>, and manmade algorithms simulate what the analog components do.
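That "redrawing" is literal: once the signal is numbers, anything a tube or a reel of tape does to it has to be approximated with arithmetic. Here's a minimal sketch of the idea, assuming CD-style sampling and using a tanh curve as a stand-in for saturation (a common textbook soft-clipper, not any particular plugin's actual algorithm):

```python
import math

SAMPLE_RATE = 44100  # CD-quality samples per second

def sample_sine(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Redraw a continuous tone as a list of numbers: the digital domain."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

def soft_clip(samples, drive=2.0):
    """A manmade stand-in for analog saturation: tanh gently squashes peaks,
    roughly the way tape or tubes round off loud transients."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

tone = sample_sine(440, 0.01)   # 10 ms of A440 -> 441 samples
warmed = soft_clip(tone)

print(len(tone))                       # 441
print(round(soft_clip([0.5])[0], 2))   # 0.79 -- mid-level samples get pushed up
```

The point isn't that this sounds like tape (it doesn't); it's that in software, "warmth" is a formula somebody chose, while in the analog chain it falls out of the physics of the components.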
Technically, if you sample music and piece together things that were already recorded, you could do every single thing in-the-box, convert it to an mp3, put it out on the internet, and let it stay digital forever. But when it hits someone's speakers, you can tell where it was done.
With that said, forget everything in this post and go back to piecing together that 64kbps mp3 that you downloaded of the song from Rocky.