Search the Community

Showing results for tags 'mixing'.

Found 7 results

  1. I recently got into an interesting topic concerning mixing. There's a big difference between using integrated VSTi effects or insert effects (which blend the original signal and the effect together into a new sound) and using AUX/effect sends (which add an additional signal, e.g. just the reverb, on a separate AUX bus track alongside the original signal).

1) Integrated VSTi effects and insert effects

For my tracks, I used to create some MIDI parts, add a virtual instrument or synthesizer to the track, and mostly use the VSTi's integrated reverb and delay effects, or external reverb/delay VST plugins as separate inserts in the track's plugin slots. The problem with this approach is that the original, pure sound of the VSTi/synthesizer loses its former power and blends together with the reverb/delay (reverb being the most problematic effect here) into a new, less powerful, less assertive sound. So it's not "instrument + effect" - it's much more like "instrument * effect". With my new 3-way studio speaker system I can perceive this issue much more clearly than before, and I notice much sooner when the sound of an instrument or synthesizer gets too thin, gets lost in the reverb, or shifts too far into the background/depth of the room. You can still work this way if you want some reverb in your tracks, but it doesn't seem to be the best way to create clean, assertive mixes at a professional production level. That said, reverb as an insert effect can still be useful if you want to push a sound further back into the depth of the mix. It's a bit strange that I only got into the obviously very important topic of using AUX/effect sends - which give you reverberant and highly assertive sounds at the same time - now, after almost 5 years of music production.

But after looking up a few things in my DAW manual some time ago, I stumbled over this topic and tried it out.

2) AUX/effect sends

To use AUX/effect sends, you first create a new, separate AUX bus track (just like creating an additional MIDI track in your mix, except you choose an AUX bus instead). On this AUX bus track you load only the desired effect (or even several at once - let's take a good reverb in this case) into one of the plugin slots and set it up the way you want it to sound in your track. Several producers recommend setting the plugin on the AUX track to 100 % wet, because the drier the effect gets, the more the send merely raises the volume of the combined instrument-plus-send signal. Next, you pick the instrument or synthesizer track you want to connect to the AUX/effect send and set the instrument up as pure and raw as possible (in particular, turn off all reverb and delay effects, additional VST plugins, and anything else that makes the VSTi/synthesizer sound thinner, less assertive, or pushes the raw sound out into the room). Then you open your mixer, find this instrument track, look for "AUX" within it, and activate the prepared AUX track with the desired effect in one of the free AUX slots. In my DAW I can drag a bar below each AUX slot in the mixer view to regulate the volume of the effect send (don't worry - it doesn't change the volume of the instrument track itself, it only controls the level of the effect send going to the AUX bus track). If you then play just the instrument track in solo mode, you will hear the raw, unprocessed, highly assertive instrument sound.

And if you play just the AUX bus track in solo mode, you will hear only the separate effect for that instrument (so just the reverb in this case). (If you turn up this AUX send on other instrument tracks in the mixer as well, you will hear the effects of several instruments - reverb from different instruments in this case - on the same AUX bus track.) With this method you can create really strong reverb effects without losing the power and assertiveness of the raw source instrument/synthesizer. I'm not quite sure how to handle the panning of the AUX bus track - I guess it would make sense to pan it the same way as the instrument. Maybe you can be a little creative there (for example, if the reverbs of two instruments that sit close together in the mix interfere too much with each other, you could pan the reverb send of one instrument further to the left/right). If you plan to use one AUX/effect send for several instruments at the same time, it can be problematic that effects from different instruments share the same panning on one AUX bus track. On the other hand, creating individual AUX/effect sends for every instrument/MIDI track is laborious, confusing, and CPU/DSP-intensive. It also seemed that I could only put 10 different AUX/effect sends into the slots of the instrument tracks in my mixer. So it might make sense to use AUX/effect sends only for instruments that really have to shine with effects (like reverb in this case) while staying highly assertive (for example drums or leads). (EDIT: I managed to enable an unlimited number of AUX/effect sends per project in my DAW settings - so technically I could create an effect send for each instrument/MIDI track.) What's your opinion on this topic, and what experiences have you had with it?
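The routing difference described above can be sketched numerically. This is a minimal NumPy sketch under stated assumptions, not how any particular DAW is wired internally: `toy_reverb_wet` is an invented stand-in for a real reverb plugin set to 100 % wet, and the "instrument" is just noise. The point it illustrates is that an insert blends dry and wet (attenuating the source), while a send leaves the dry track at full level and adds the wet bus on top:

```python
import numpy as np

def toy_reverb_wet(dry, decay=0.4, delay=2000):
    """Crude feedback delay returning only the wet (reverb) signal --
    a stand-in for a real reverb plugin set to 100 % wet."""
    wet = np.zeros(len(dry))
    for i in range(delay, len(dry)):
        wet[i] = decay * (dry[i - delay] + wet[i - delay])
    return wet

rng = np.random.default_rng(0)
dry = rng.standard_normal(48000)  # 1 second of noise as a stand-in instrument
wet = toy_reverb_wet(dry)

# Insert effect: the plugin replaces the track's signal with a dry/wet
# blend, so the source itself is attenuated ("instrument * effect").
mix = 0.5
insert_out = (1 - mix) * dry + mix * wet

# AUX send: the instrument track passes through untouched, and the
# 100 % wet bus is added on top at the send level ("instrument + effect").
send_level = 0.5
send_out = dry + send_level * wet

# How much of the original dry signal survives in each output.
# Delayed noise is essentially uncorrelated with the original, so this
# projection isolates the direct-signal gain.
gain_insert = (insert_out @ dry) / (dry @ dry)  # ~ 1 - mix = 0.5
gain_send = (send_out @ dry) / (dry @ dry)      # ~ 1.0
```

Measured this way, the insert output retains only about half of the direct signal, while the send output keeps it at full level - which matches the loss of assertiveness described above.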
  2. Hey everyone, I'm currently working on a track that starts off with a solo piano and slowly builds from there, adding a whole lot of new instruments as it progresses. Currently I'm having two problems with this that I'm finding difficult to fix: 1. The piano gets 'too loud'. What I mean by this is that, at the beginning, the piano starts really quiet, playing at around -18 dB, while in the most intense parts, which may get even more intense later on, it goes up to +5 dB. These values make sense given how much dynamic range the piano part has, but I was wondering whether this is a problem, since it will cause clipping. I've tried compressing it, but I feel like that ruins the dynamics entirely and makes the part lose its emotional power. 2. My violins are clashing with the piano. There's a part where the piano and the violins rise and get louder together, and I can hear the violins being completely smothered and drowned out by the piano, which gets much louder than them. I've read online that these instruments play in very similar frequency ranges, which is probably the reason, so I was wondering what the best way is to deal with this without sacrificing the sound of either of them. If my explanation isn't clear, I can provide a sample of the track so it's easier to understand. Thank you for the help :)
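The level arithmetic in the first problem can be sketched. Assuming the figures are peak dBFS readings (an assumption; the post just says dB), a static gain trim of the whole track removes the clipping while keeping the dynamics intact, because it shifts the quiet and loud peaks by the same amount. The 1 dB of extra headroom here is an arbitrary choice for illustration:

```python
import math

def db_to_gain(db):
    """Convert a dB change to a linear amplitude factor."""
    return 10 ** (db / 20)

# Peak levels described in the post
quiet_peak_db = -18.0  # opening solo-piano passage
loud_peak_db = 5.0     # most intense passage -- clips, since 0 dBFS is the ceiling

# A static gain trim lowers the whole track by a fixed amount, so the
# relative dynamics are untouched (unlike compression).
headroom_db = 1.0
trim_db = -(loud_peak_db + headroom_db)     # -6.0 dB of trim

new_loud_peak = loud_peak_db + trim_db      # -1.0 dBFS: no more clipping
new_quiet_peak = quiet_peak_db + trim_db    # -24.0 dBFS

# The dynamic range between the passages is preserved either way:
span_before = loud_peak_db - quiet_peak_db  # 23.0 dB
span_after = new_loud_peak - new_quiet_peak # still 23.0 dB
```

A -6 dB trim corresponds to roughly halving the amplitude (`db_to_gain(-6.0)` is about 0.50), which is why the full 23 dB sweep survives untouched.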
  3. Hi, I wanted to let everyone know what kind of rig I've been using and ask other members of the community for some advice/feedback regarding iOS recording and production. Does anyone have any experience with this? I'd love to hear about mobile music production from people other than YouTube. lol As far as my rig goes, I have an iPad mini running through a Focusrite iTrack Dock (anyone familiar with Focusrite might like to know that it's a cousin of the Scarlett, as it uses the same mic preamps) with a Lightning connector, allowing me to record at 24-bit/96kHz and get some really good sounds. If there are any other users on the forums who produce with iOS tech, I'd love to hear about it! I'll list my setup (with links) as it stands below so you can get a better idea of what I'm working with.

Tracking/Audio Capture: 2 Samson CL8 Studio Condenser Microphones (I use other mics too, but these are the ones that get used the most), Focusrite iTrack Dock, iPad mini, Harmonicdog MultiTrack DAW, Final Touch
Guitar Amp Simulation: BIAS Amp, BIAS FX
Synthesizers: SoundPrism, Caustic
Drum Machine/Loops: Caustic

Now, at this point I could get pretty redundant, as there are many different ways to use each of these apps to get different results, but the main thing I wanted to share is that I use the inter-app audio support of the Positive Grid apps (BIAS FX and BIAS Amp) within MultiTrack DAW to get my amp sounds, and I often use the effects bus in the DAW for reverb/delay, as they are relatively high quality. I can use BIAS for guitar and bass guitar. SoundPrism has neither inter-app audio support nor AudioBus support (I don't know about SoundPrism PRO, though... haven't used it), so I use an external device (iPhone 4, baby!) via a 3.5mm-to-1/4" cable into the line-in port of the iTrack Dock. This comes in handy for a lot of things, like my chaos pad (SynthPad, which also doesn't have AudioBus support).

Those of you familiar with recording via iOS will know how valuable AudioShare can be; it's the main way I take drum loops/samples from Caustic or other DAWs (like GarageBand) and import them into MultiTrack, which is a much better DAW with good EQ, reverb, delay, compression, and editing options in a very simple, user-friendly package. It's not the most complex or feature-packed DAW on iOS (not like Cubasis or Auria), but it does a good job for very cheap. Once my project is done, I bounce the track, export it to Final Touch, start from one of their mastering presets, and tweak it to my specs. I can export the whole track in many different file types and bitrates, and upload directly to Dropbox, SoundCloud, email, etc... BUT Final Touch is also great for mastering tracks BEFORE they hit the DAW. Sometimes I'll send over my drum track from Caustic (which may include live automation, synths, drum samples, etc...) because I want it to sound a certain way before I mix in my live tracks (guitars, vocals, saxophones, hand percussion, etc...); that gives me a better idea of what to listen for and how to edit properly. THEN I'll bounce that and master the whole track, making less drastic edits to the master, which gives me a more coherent-sounding result. This is just scratching the surface of what's available to artists via iOS, and I may have repeated myself a bit, but I wanted to share what I do with others who haven't experimented with iOS music production before, and get some ideas from those who have been doing it longer or differently than I have. Thanks!
  4. Hello. I have uploaded a short excerpt from an original song which I am attempting to mix... Link: test.mp3 Here's a slightly different version: test - ver 2.mp3 I plan on eventually having the full song done by a professional, but currently I would like to know if my mix is of an acceptable quality for demonstration/preview purposes. Is there anything in particular that should be changed in terms of EQ or volume levels? Does it sound noticeably amateurish? All the instruments are programmed (including the guitars), since I'm a keyboard player and not a guitarist. I used Toontrack's Meshuggah EZdrummer demos as reference mixes. Edit: I compared it to some more tracks. I'm not really satisfied with the sound. I think the tone of the guitar and drums could still be improved a lot. Currently I am busy with another music-related project (not metal), but I may return to this later.
  5. Hello, I'm Ronald Poe and I write electronic music and remixes. I use FL Studio and Audacity for my music and MuseScore for writing/editing MIDI. I mix/produce the music myself, and it seems to hold the music back. Do you have any tips on mixing/production? Here are a couple of examples of my work: my character theme for Axel (KH), and my remix of "King Ghidorah" (Godzilla NES) from that contest. Please give both your opinion and some mixing advice. Thanks
  6. Hey peeps, I barely know anything about mixing, so this might be a bit of an easy question, but I truly don't know what to do about it, so I'll just drop it here anyway. I like to record different instruments and put them all together, but whenever there are more than 2 instruments everything gets muddy and just sounds unclear. When I separate things by panning them to the left or the right it sounds better, but I can't imagine that being the whole solution. Does anyone here know how this sort of thing works?
  7. Hi guys, I'm working on an original piece at the moment. It's a sad solo piano track, only about 1 minute in length, and it has to work as a seamless loop. I'm not at all confident in my own abilities, so what I want to know is: what do you think, and what can I do to make it better? I use Logic Pro 9, and this was created with the Ivy Piano in 162 soundfont. I wish I could get a live player to perform it, but alas, I lack both a piano and the ability to play one.