As technology changes, workflows change along with it, and I have a question about workflow and mixing.
I work mostly in Reason 8, using an MPK49 to trigger various synths, and I occasionally use a software sampler to flip some of my own material.
As far as mixing goes... I kind of mix as I go. What I mean is that a lot of instruments already have processing applied to create a certain sound (Combinator patches, etc.), and after I record MIDI data, I'll often go in and apply plugins or automation right away. I like to build piece by piece and make sure things are sitting properly before I add another layer. If I were recording audio, I'd record as flat as possible and process after the fact, but with MIDI I can switch any processing on or off easily, and all the information is right there.
From what I understand, this isn't the "right" way to go about it. My question is, why?
Once I record the MIDI, can I keep processing it the way I have been, or should I bus the stems and process the audio? If I'm supposed to do the latter, why? What am I gaining as opposed to processing the MIDI data?
I hope that makes sense... Any input would be a huge help!