Showing results for tags 'mixing'.

Found 10 results

  1. Hey, thanks for taking the time to stop by. I'm making this cover of Forest Interlude from DKC2 as a sort of "self benchmark" to see how well I can emulate David Wise's signature tone in my own music. In a lot of ways it's pretty close to the original, but all of the sounds (minus one FLEX instrument and my percussion samples) are made from scratch with Serum and stock FL plugins. I'm only a few years into this hobby and mostly self-taught. I'm looking for feedback on the quality of my mixing, as well as my writing, since I wrote it all by ear. Honestly, I think my mix is kind of quiet, but I'd like to think I did well on the overall leveling. What do you think?
  2. Hey folks, I'm having some trouble mixing a piano duo, especially when it comes to panning. Normally a piano is panned to correspond with where the higher and lower keys sit on the keyboard, so you hear everything from the player's perspective. But how do you do that with two pianos? Leaving them both centered is weird, because it basically feels like the two pianos are in the same spot. Simply panning them a bit left and right is also weird, because then you get some kind of unnatural overlap and it's basically the same as leaving them centered. Should I put them on two mono channels and pan them a bit left and right? Would that be a realistic thing to do, if we imagine the listener sitting in front of a stage with the two pianos on it? Or are there any other solutions you can think of? Thanks for your help! (A rough constant-power panning sketch for this idea is included after these results.)
  3. High-end headphone amps for optimal use of high-impedance studio headphones like the Beyerdynamic DT 880 Pro, and for accurate mixing/mastering via headphones
----------------------------------------------------------------------------------------------------
This one might be interesting for everyone who lives in an ordinary flat and wants to compose, or just listen to music and other media, as accurately as possible at the times of day when you can't use your studio monitors without annoying your neighbourhood.

When I started mixing music and listening to all sorts of audio material with my first pair of high-impedance headphones, the Beyerdynamic DT 880 Pro, I still had my very first audio interface, a Steinberg UR22 USB interface, and that's what the headphones were connected to. The listening experience with this combination was really not that bad. But for accurate mixing and mastering I was afraid the DT 880 Pro wouldn't get enough power from the USB connection alone to work at their full potential. The first sign that I might be right was that the legendary DT 880 Pro obviously couldn't handle the bass and lower mids that well on this USB interface. The bass and lower mids lacked a bit of definition, as if there were damping curtains hanging over the lower frequencies, while the reproduction of the mids and highs seemed pretty good compared to my studio monitor system or my Sony MDR-7506 studio headphones on the same UR22. And since I had read comments from Beyerdynamic DT Pro series users saying that these high-impedance headphones might not get enough power from most ordinary audio interfaces (I guess they especially meant all those interfaces that don't have one of those larger IEC/C19 power socket connections for feeding much more electricity into the device), I thought about getting a dedicated high-end headphone amp. (A rough back-of-the-envelope power calculation for this question is sketched after these results.)

After gathering information about good headphone amps for a few months, I finally decided on the G109-P from Lake People, a German company for professional high-end audio equipment; it's also the company behind the legendary Violectric and Nimbus gear.
>>> https://www.lake-people.com/product-page/phone-amp-g109-p
It's one of those headphone amps that can reproduce a very wide frequency range, from around 0 Hz to 150 kHz, while many other headphone amps seem to start their frequency response at around 10 or 20 Hz. So it should be a great device for reproducing the deepest sub-bass frequencies. The Beyerdynamic DT 880 Pro can technically handle a frequency range from 5 Hz to 35,000 Hz, so it would be a pretty nice combo for the full listening experience, with an even wider frequency range than ordinary listeners might perceive. And with its stylish black colour and sturdy design it felt like it was made for my small but totally decent home studio, which already has a pretty stylish black look (I don't like it when the studio looks like a happy rainbow; I already have a beautiful and colourful little forest in front of my flat, but for the studio gear I prefer a modest, spartan and uniform style). Don't worry too much about the "0 Hz - 150 kHz (-3 dB)" frequency response specification.

I phoned an employee at Lake People and he told me that the deviation within the range from about 0 Hz to 50 kHz is only around 0.5 dB (just like the HPA series, the F series and the other Lake People products), so it's a pretty linear frequency response. At first I wanted to go for the G103-P. But the employee told me that the G109-P delivers slightly better sound quality, with a bit more definition, than the G models below it, and the G109-P (not the G109-S!) is one of the only G models with a safety relay with power-up delay (so you can leave your studio headphones permanently connected to the amp without risking damage when switching it on, although he mentioned that even the other Lake People amps wouldn't damage permanently connected headphones). So I decided to spend a bit more money and bought the G109-P, along with some TRS-to-XLR cables for balanced audio. According to the employee, this headphone amp doesn't need much more than 6 W, so you could call it a fairly energy-saving device as well.

But to connect the headphone amp to my home studio in the best possible way, I also had to buy a larger audio interface with more line outputs than my good ol' Steinberg UR22 had to offer. Before buying the amp, it took me a few more weeks to settle on an appropriate interface. It was a pretty close battle between several interfaces from Roland, Tascam and Steinberg. But since Steinberg (part of Yamaha these days) has an excellent reputation and a long history of German and Japanese cooperation in developing professional audio interfaces and software, I decided to stick with Steinberg and went straight for the bigger brother, the Steinberg UR44. It has more than enough connection options, a completely separate power supply, two independently volume-adjustable headphone outputs, and it's well known for its flat frequency response and ultra-robust build quality.

Of course I also tested the Beyerdynamic DT 880 Pro with the UR44, especially in direct comparison to the UR22. Although there shouldn't be too many differences between the two (both are supposed to use the same converters), I could perceive a few. Besides the fact that you have to turn the UR44's headphone volume knobs up a bit further to get a similar volume as on the UR22, the DT 880 Pro seem to perform slightly better on the UR44: the bass and lower mids sound a bit more defined and cleaner, and the highs seem to have a perceivably wider range and sound crisper as well. Even my studio monitor system sounds a bit clearer connected to the UR44. I'm not quite sure, but it could really be down to the UR44's better power supply. But after connecting the Lake People G109-P to a pair of the UR44's line outputs and finally plugging the DT 880 Pro into the new headphone amp, the desired audio miracles started to happen.

In the worst case I had expected the headphone amp would merely boost the signal coming from the UR44, which is what an employee at a nearby music store told me it would do. But that doesn't seem to be the whole story; there's obviously much more going on. With the G109-P, the previously perceived damping-curtain effect over the lows and low mids vanished completely. The bass reproduced by the DT 880 Pro doesn't just sound crystal clear now, it also seems to reach much deeper into the sub-bass. The mids and highs are even crisper and better presented on the DT 880 Pro with the G109-P, and the stereo panorama now feels like a freshly polished stage. All of this could really be down to the generous power supply of the high-end headphone amp driving the DT 880 Pro close to optimally.

So, if you don't already own a professional headphone amp or one of those really expensive high-end audio interfaces with built-in high-end headphone outputs (as interfaces like the Antelope Goliath HD or several RME interfaces are supposed to have), and you want (or are forced) to compose a lot on high-impedance studio headphones, or you simply want to enjoy audio on those headphones at a new, clearer and more detailed level, don't hesitate to inform yourself thoroughly about high-end headphone amps and get one. For composing on headphones, I'm confident I'll get excellent mixing results in the future with my new trinity of professional audio equipment:
1) Steinberg UR44 audio interface
>>> https://www.thomann.de/gb/steinberg_ur44.htm
2) Lake People G109-P high-end headphone amp
>>> https://www.thomann.de/gb/lake_people_g109_p_highend_phoneamp.htm
3) Beyerdynamic DT 880 Pro studio headphones
>>> https://www.thomann.de/gb/beyerdynamic_dt_880_pro_black_edition.htm
>>> https://www.thomann.de/gb/beyerdynamic_dt880_pro.htm

If you additionally want to pair your studio monitor speaker system with an external high-end monitor controller as well, have a look at the products from SPL (Sound Performance Lab), for example the SPL 2Control:
>>> https://www.thomann.de/gb/spl_2control_black.htm

If you are unsure which audio equipment to buy, find a professional music store where you are allowed to order, try out and compare different gear in a quiet atmosphere by yourself.

PS: Always make sure to invest in vital food and life force before investing in good audio equipment; there is absolutely no reason to starve for high-end audio gear. But you can sell your expensive car and get a nice bike or a season ticket for public transport instead. And of course you can flee the mostly unnatural, noisy and unhealthy environments of big city centres, which paradoxically have some of the highest rents, and pay much less for a cosy flat on the more natural outskirts, in smaller towns or in romantic villages instead. Anyway, most of those shady rent sharks, racketeers and profit-over-life investors who usually make your easy 'n' carefree life harder without any meaningful reason seem to shun the areas close to forests and wild nature; you might even increase the effect by spreading urban legends about all the dangerous wild animals, nasty monsters and cannibalistic tribes that might dwell in those lovely natural surroundings. In that case you might be able to enjoy high-end audio equipment as well as a modest, healthy and joyful life close to vital nature (including free tickets to high-quality and truly realistic philharmonic bird-song orchestras, which might become your enhanced and naturally well-timed alarm clock). Good luck. ))
  4. I got into a pretty interesting topic concerning mixing lately. There's obviously a big difference between using VSTi-integrated effects and insert effects (which blend the original signal and the effect together into a new sound) and using AUX/effect sends (which add an additional signal, like just the reverb, on a separate AUX bus track alongside the original signal). https://l2pnet.com/insert-effects-vs-send-effects-2/

1) Integrated VSTi effects and insert effects
-------------------------------------------------------
For my tracks I was used to creating some MIDI parts, adding a virtual instrument or synthesizer to the track, and mostly using some VSTi-integrated reverb and delay effects, or external reverb and delay VST plugins as separate inserts in the plugin slots of that track. The problem with this approach is that the original, pure sound of the VSTi/synthesizer loses its former power and blends together with the reverb/delay (reverb probably being the most problematic effect here) into a new, and in this case less powerful and less assertive, sound. So it's not "instrument + effect"; it's much more like "instrument x effect". With my new 3-way studio speaker system I can perceive this issue much more clearly than before, and I notice much sooner when the sound of an instrument or synthesizer gets too thin, gets lost in the reverb, or shifts too far into the background/depth of the room. It's not that you can't work this way if you want some reverb in your tracks, but it doesn't seem to be the best way to create clean, assertive mixes at a professional production level. Nevertheless, using reverb as an insert effect can be useful if you want to push a sound further back in depth in your soundtrack. It's a bit strange that I hadn't run into the obviously very important topic of using AUX/effect sends, for sounds that are reverberant and highly assertive at the same time, until now, after almost 5 years of music production. But after looking up a few things in my DAW manual some time ago I stumbled over this topic and tried it out. (A toy signal-flow sketch of the insert-versus-send difference is included after these results.)

2) AUX/effect sends
-------------------------
If you want to use AUX/effect sends, you first create a new, separate AUX bus track (much like creating an additional MIDI track in your project, except you choose to create an AUX bus instead). On this AUX bus track you load only the desired effect (or even several effects at once; let's take a good reverb in this case) into one of the plugin slots and set the plugin up the way you want to use it. Several producers recommend setting the plugin on the AUX track to 100 % wet, because the drier the effect gets, the more it mostly just raises the volume in the combined interaction of the VSTi/synthesizer and the effect send. Now you choose the track with the instrument or synthesizer you want to connect to the AUX/effect send and set the instrument up as pure and raw as possible (in particular, turn off all reverb and delay effects, additional VST plugins and everything else that makes the VSTi/synthesizer sound thinner, less assertive, or pushes the raw sound out into the room). Then you open your mixer, find this instrument track, look for "AUX" within it, and there you choose/activate the prepared AUX track with the desired effect in one of the free AUX slots.

In my DAW I can drag a bar with the mouse below each of the AUX slots within the instrument tracks in the mixer view to regulate the level of the effect send (don't worry, it doesn't change the volume of the instrument track itself; it only controls how much of that track is sent to the effect on the AUX bus). So if you then play just the instrument track in solo mode, you will hear the raw, unprocessed and highly assertive instrument sound. And if you play just the AUX bus track in solo mode, you will hear only the separate effect for that instrument (so just the reverb in this case). (If you turn up this AUX send on other instrument tracks in the mixer as well, you will hear the effects of several instruments, reverb from different instruments in this case, on the same AUX bus track.) With this method you can create really strong reverb effects without losing the power and assertiveness of the raw source instrument/synthesizer. I'm not quite sure how to handle the panorama setting on the AUX bus track; I guess it would make sense to pan it the same way as the instrument. Maybe you can get a little creative there (for example, if the reverb sends of two instruments that sit pretty close together in the mix interfere too much with each other, you could pan the reverb send of one of them a bit further to the left or right). If you plan to use one AUX/effect send for more than one instrument at the same time, it could be problematic to deal with effects from different instruments on one AUX bus track with the same panorama. On the other hand, it would be pretty laborious, confusing and CPU/DSP-intensive to create individual AUX/effect sends for every instrument/MIDI track. And it seems I can only place 10 different AUX/effect sends in the slots of the instrument tracks in my mixer. So it might be best to use AUX/effect sends only for the instruments that really have to shine with effects (like reverb in this case) and stay highly assertive at the same time (for example drums or leads). (EDIT: I managed to allow an unlimited number of AUX/effect sends per project in my DAW settings, so technically I could create an effect send for each instrument/MIDI track.) What is your opinion on this topic, and what experiences have you had with it?
  5. Hey everyone, I'm currently working on a track that starts off with a solo piano and slowly builds from there, adding a whole lot of new instruments as it progresses. Currently I'm having two problems with this that I'm finding difficult to fix: 1. The piano gets 'too loud'. What I mean is that at the beginning the piano starts out really quiet, playing at around -18 dB, and in the most intense parts, which may get even more intense later on, it goes up to +5 dB. These values make sense, since there's a lot of dynamics going on in the piano part, but I was wondering if this could be a problem, since it will cause clipping. I've tried compressing it, but I feel like that ruins the dynamics entirely and makes it lose its emotional power. (A small gain-staging sketch for this is included after these results.) 2. My violins are clashing with the piano. There's a part where the piano and the violins rise together and get louder and louder, and I can hear the violins being completely smothered and drowned out by the piano, which gets much louder than them. I've read online that these instruments play in very similar frequency ranges, which is probably the reason for this, so I was wondering what the best way to handle it is without sacrificing the sound of either of them. If my explanation isn't clear enough, I can provide a sample of the track so it's easier to understand. Thank you for the help :)
  6. Hi, I wanted to let everyone know what kind of rig I've been using and ask other members of the community for some advice/feedback regarding iOS recording and production. Does anyone have any experience with this? I'd love to hear about mobile music production from people other than YouTube. lol As far as my rig goes, I have an iPad mini running through a Focusrite iTrack Dock (anyone familiar with Focusrite might like to know that it's a cousin of the Scarlett range, as it uses the same mic preamps) with a Lightning connector, allowing me to record at 24-bit/96kHz and get some really good sounds. If there are any other users on the forums producing with iOS tech, I'd love to hear about it! I'll list my setup (with links) as it stands below so you can get a better idea of what I'm working with.

Tracking/Audio Capture:
  • 2 Samson CL8 Studio Condenser Microphones (I use other mics too, but these are the ones that get used the most)
  • Focusrite iTrack Dock
  • iPad mini
  • Harmonicdog MultiTrack DAW
  • Final Touch

Guitar Amp Simulation:
  • BIAS Amp
  • BIAS FX

Synthesizers:
  • SoundPrism
  • Caustic

Drum Machine/Loops:
  • Caustic

Now, at this point I could get pretty redundant, as there are many different ways to use each of these apps to get different results, but the main thing I wanted to share is that I use the inter-app audio support of the Positive Grid apps (BIAS FX and BIAS Amp) within MultiTrack DAW to get my amp sounds, and I often use the effects bus in the DAW for reverb/delay, as they are relatively high quality. I can use BIAS for guitar and bass guitar. SoundPrism has neither inter-app audio support nor Audiobus support (I don't know about SoundPrism PRO, though; haven't used it), so I'm using an external device (iPhone 4, baby!) via a 3.5mm-to-1/4" cable into the line-in port of the iTrack Dock. This comes in handy for a lot of things, like my chaos pad (SynthPad, which also doesn't have Audiobus support). Those of you familiar with recording on iOS will know how valuable AudioShare can be, and that's the main way I take drum loops/samples from Caustic or other DAWs (like GarageBand) and import them into MultiTrack, as it's a much better DAW with good EQ, reverb, delay, compression and editing options in a very simple, user-friendly package. It's not the most complex or feature-packed DAW on iOS (not like Cubasis or Auria), but it does a good job for very little money. Once my project is done, I bounce the track, export it to Final Touch, start with one of their mastering presets and tweak it to my specs. I can export the whole track in many different file types and bitrates, and upload directly to Dropbox, SoundCloud, email, etc. BUT Final Touch is also great for mastering tracks BEFORE they hit the DAW. Sometimes I'll send over my drum track from Caustic (which may include live automation, synths, drum samples, etc.) because I want it to sound a certain way before I mix in my live tracks (guitars, vocals, saxophones, hand percussion, etc.), and that gives me a better idea of what to listen for and how to edit properly. THEN I'll bounce that and master the whole track, making less drastic edits to my master, and end up with a more coherent-sounding track. This is just scratching the surface of what's available to artists on iOS, and I may have said a lot of redundant things, but I wanted to share what I do with others who haven't experimented with iOS music production before and get some ideas from people who have been doing it longer or differently than me. Thanks!
  7. Hello. I have uploaded a short excerpt from an original song which I am attempting to mix... Link: https://dl.dropboxusercontent.com/s/ftbejny6abxvvfh/S02T05 test.mp3 Here's a slightly different version: https://dl.dropboxusercontent.com/s/v96hp02tq9zaip0/S02T05 test - ver 2.mp3 I plan on eventually having the full song done by a professional, but currently I would like to know if my mix is of an acceptable quality for demonstration/preview purposes. Is there anything in particular that should be changed in terms of EQ or volume levels? Does it sound noticeably amateurish? All the instruments are programmed (including the guitars), since I'm a keyboard player and not a guitarist. I used Toontrack's Meshuggah EZdrummer demos as reference mixes. Edit: I compared it to some more tracks. I'm not really satisfied with the sound. I think the tone of the guitar and drums could still be improved a lot. Currently I am busy with another music-related project (not metal), but I may return to this later.
  8. Hello, I'm Ronald Poe and I write electronic music and remixes. I use FL Studio and Audacity for my music and MuseScore for writing and editing MIDI. I mix and produce the music myself, and it seems to be holding the music back. Do you have any tips on mixing/production? Here are a couple of examples of my work: my character theme for Axel (KH), and my remix of "King Ghidorah" (Godzilla NES) from that contest. Please give both your opinion and some mixing advice. Thanks
  9. Hey peeps, I barely know anything about mixing, so this might be a bit of an easy question, but I truly don't know what to do about it, so I'll just drop it here anyway. I like to record different instruments and put them all together. But whenever there are more than two instruments, everything gets muddy and just sounds unclear. When I pull things apart by panning them to the left or the right it sounds better, but I can't imagine that being the whole solution. Does someone here know how this sort of thing works?
  10. Hi guys, I'm working on an original piece at the moment. It's a sad solo piano track, only a minute or so in length, and it has to work as a seamless loop. I'm not at all confident in my own abilities, so what I want to know is: what do you think, and what can I do to make it better? I use Logic Pro 9, and this was created with the Ivy Piano in 162 soundfont. I wish I could get a live player to perform it, but alas, I lack both a piano and the ability to play one. https://www.dropbox.com/sh/np1ahu61yc034f4/AABQn7x16LRh6_aIrtsVl2vSa?dl=0
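A few rough sketches related to the threads above follow; all of them are illustrative only. First, for the two-piano panning question in result 2: one plausible option is exactly what the poster suggests, keeping each piano as a mono source and nudging it slightly off-centre with a constant-power pan law, as if the listener were facing the stage. This is a minimal sketch assuming Python with NumPy; the pan positions and the placeholder "piano" signals are made up for illustration.

# Sum two mono piano recordings into one stereo image with a constant-power pan law,
# placing one piano slightly left of centre and the other slightly right.
import numpy as np

def constant_power_pan(mono: np.ndarray, pan: float) -> np.ndarray:
    """Pan a mono signal; pan = -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right)."""
    angle = (pan + 1.0) * np.pi / 4.0           # map [-1, 1] -> [0, pi/2]
    left, right = np.cos(angle), np.sin(angle)  # gains keep total power constant
    return np.stack([mono * left, mono * right], axis=1)

sr = 44100
piano_a = np.random.randn(sr) * 0.1   # 1 s of noise standing in for a real mono recording
piano_b = np.random.randn(sr) * 0.1

# Place them a bit left and a bit right, as seen from the audience's perspective.
stereo = constant_power_pan(piano_a, -0.3) + constant_power_pan(piano_b, +0.3)
print(stereo.shape)  # (44100, 2)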
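For the "not enough power from a USB interface" worry in result 3, a quick back-of-the-envelope calculation shows roughly what it takes to drive a 250-ohm headphone. The sensitivity figure (about 96 dB SPL per 1 mW) and the 110 dB SPL peak target below are assumptions based on commonly quoted DT 880 Pro specs, not values from the thread, so check the actual datasheet before drawing conclusions.

# Estimate the power and RMS voltage needed to hit a target peak SPL on 250-ohm headphones.
import math

def required_drive(target_spl_db, sensitivity_db_per_mw, impedance_ohm):
    """Return (power in mW, RMS voltage in V) needed to reach target_spl_db."""
    power_mw = 10 ** ((target_spl_db - sensitivity_db_per_mw) / 10.0)
    volts_rms = math.sqrt(power_mw / 1000.0 * impedance_ohm)
    return power_mw, volts_rms

p_mw, v_rms = required_drive(target_spl_db=110, sensitivity_db_per_mw=96, impedance_ohm=250)
print(f"~{p_mw:.1f} mW and ~{v_rms:.2f} V RMS for 110 dB SPL peaks")
# -> roughly 25 mW and 2.5 V RMS; a headphone output that can only swing around 1 V RMS
#    (assumed here as an example of a modest bus-powered output) would run out of headroom
#    well before that, which is the kind of shortfall a dedicated headphone amp removes.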
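For the insert-versus-send discussion in result 4, here is a toy signal-flow sketch using plain NumPy arrays instead of a real DAW. The "reverb" is just a placeholder single echo; the only point being illustrated is where the wet signal gets summed: blended inside each channel (insert) versus on one shared, 100 % wet bus fed by per-track send levels (send).

import numpy as np

def reverb(x: np.ndarray) -> np.ndarray:
    """Stand-in 'reverb': one delayed copy added back at a lower level, purely illustrative."""
    out = np.copy(x)
    delay = 2000                          # ~45 ms at 44.1 kHz
    out[delay:] += 0.4 * out[:-delay]
    return out

sr = 44100
lead  = np.random.randn(sr) * 0.1         # pretend these are rendered instrument tracks
drums = np.random.randn(sr) * 0.1

# 1) Insert-style: each track becomes a dry/wet blend, so the dry sound is diluted.
mix_insert = (0.6 * lead + 0.4 * reverb(lead)) + (0.6 * drums + 0.4 * reverb(drums))

# 2) Send-style: the tracks stay fully dry; one shared AUX bus runs the reverb 100 % wet,
#    and each track only chooses how much it feeds into that bus (its send level).
aux_input = 0.3 * lead + 0.5 * drums          # per-track send amounts
mix_send  = lead + drums + reverb(aux_input)  # dry tracks + one wet bus

print(mix_insert.shape, mix_send.shape)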
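For the clipping piano in result 5, one option that keeps the dynamics intact is a static trim instead of compression: pull the whole track down so its hottest peak lands just under 0 dBFS. The -1 dBFS target below is an arbitrary headroom choice, and the +5 dB and -18 dB figures are simply the ones quoted in the post.

# Compute a single static trim that removes the clipping without touching the dynamics.
loudest_peak_db = 5.0    # the piano's hottest moment, as reported in the post
target_peak_db  = -1.0   # leave ~1 dB of headroom below full scale
quietest_db     = -18.0  # the quiet opening, as reported in the post

trim_db = target_peak_db - loudest_peak_db   # -6 dB, applied to the WHOLE track
print(f"apply a static trim of {trim_db:.1f} dB")
print(f"quiet passages move from {quietest_db:.1f} dB to {quietest_db + trim_db:.1f} dB")
# Every part of the performance moves down by the same 6 dB, so the 23 dB span between
# the quiet intro and the climax stays exactly as written, unlike with compression.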