Everything posted by SnappleMan

  1. Sounds overly EQed to me; the low-mids are lacking, and it sounds like you're not using the samples in their most appropriate ranges. In this scenario I'd remove all the EQs and start getting the sequencing itself to a place where it sounds correct. Proper orchestral mixing is impossible when your instrumentation needs so much improvement. The original had a rock beat and bass guitar; you can't just remove those elements and expect the track to work without a lot of rearranging for an orchestra.
- Sounds like you're using violins/violas/celli to play lead parts. Those samples aren't really intended for long sustained lead lines like that; use something like a flute or oboe, which usually handles the more expressive soloing. The long sustained string lines produce heavy droning overtones that get very ugly (like at 1:17).
- The percussion section sounds rough overall (the sequencing, I mean, not the samples). It's just a constant wash of brittle high end that sounds too bombastic for a percussion section. This song was originally written for a drum kit, and you can't map a drum kit 1:1 onto a percussion section. Either layer in a drum kit to play the beat and accent it with the percussion, or rewrite what the brass and low strings are doing to create movement and imply the disco-ish beat of the original (and again, be smart and dynamic with the percussion). You won't be able to EQ anything correctly while the percussion section is going hog wild, saturating the track with all kinds of high-mid and high overtones. Chill with the tubular bells too.
Overall, the problems you have with the EQs/frequencies are mostly due to the lack of proper arrangement for the style. There's a lot of work to be done, but I think you can get there with practice.
It's extremely important (especially with orchestral stuff) that you learn your samples and study proper orchestration as much as you can. Improper use of orchestral samples will always yield terrible results; trust me, I know from extensive experience :\
  2. Yeah, the only ways I know of to "eliminate" room reflections are to either absorb or diffuse them, but I'm relatively inexperienced. I've never heard of phase-cancelling reflections, and I'm not sure how that would be physically viable, but I assume that's what "... filter to invert the problems in your room" means. @Mister Mi There aren't any monitors I'm wanting to get; I've gone through so many different ones in my studio at many price ranges. Right now the main ones I'm using are just a simple pair of HS7s, and I have a set of A7Xs waiting in storage to be installed on my 2nd rig, but that's as expensive as I'm willing to go in this environment. Anything beyond the low-mid range would just be going to waste here: my studio is a 20x13x8 room, so the low ceiling limits how much use any expensive speaker would give me, but I have some strategically placed gear racks and bookshelves that diffuse the sound a good bit, so I don't hear too many reflections or much fluttering. If you're used to the sound of your room and have enough experience, you'll eventually be able to mix in it just fine regardless of your acoustics (to a reasonable degree). I spend 19 hours a day in here working on mixes and recording music, so I need decor, windows, sunlight, and an overall aesthetically pleasing atmosphere to keep me from going insane. I made that compromise between acoustics and aesthetics; it only took about a month to get used to the new sound of the room, and the confidence in my mixes came back 100%.
  3. Yeah, an untreated room will be off, but there are basic "treatments" you can do via geometry and speaker placement that can make a dramatic difference. You don't have to cover your walls with insulation panels and put bass traps in your corners; as long as you place your speakers in a symmetrical position (avoid corners if you can) and have some sort of diffusion around and behind you (bookshelves or any other furniture that breaks up incoming sound waves into a relatively random/uneven dispersion pattern), you won't get too much flutter and reflection. Having accurate bass representation in a home studio will be impossible 99% of the time, and while a couch does help a little, you'll still find yourself compensating in every mix; just be wary of the crossover frequencies between your sub and speakers. As PRYZM said, regardless of how flat a speaker is, your room will unflatten it, so follow some basic setup rules to get the most out of them:
- Have your speakers pointing down the length of the room if possible (the most important rule for home studios, I think).
- Maintain an equilateral triangle between your ears and the speakers: have them pointing at your ears, and the same distance from each other as each one is from your head.
- Try not to have bare flat walls to your left or right (at your immediate location plus 1-2 feet behind you, wherever you approximate the sound from the speakers hitting the wall first).
- Don't have your back up against an immediate wall; the longer the space between your back and the wall behind it, the better bass response you'll get.
There are debates about the best way to set up speakers, but what there's no real debate about is that haphazardly placing your desk and speakers in any room is not the best idea. Try to follow as many setup guidelines as you can; even if you can only do one of those, it'll make a huge difference compared to following none. The Presonus Eris 3.5 in that graph are not flat in the least.
There's a pretty large 8 dB resonant peak at about 110 Hz and a large 9 dB boost between 1 kHz-1.7 kHz; they're designed more for listening than mixing, so you'll have to watch your bass mixing and the very important 1 kHz area (1 kHz-2 kHz is where a lot of speakers at all price levels tend to vary a bit). If you want to test, load a simple sine wave patch and play a B2, then a D4; you should hear a difference in level.
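To put numbers on that sine test, here's a quick sketch of my own (not from the post) using the standard equal-temperament formula f = 440 * 2^((n - 69) / 12), where n is the MIDI note number:

```python
def note_freq(midi_note: int) -> float:
    """Frequency in Hz of a MIDI note in equal temperament, A4 (69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

b2 = note_freq(47)  # B2 is MIDI note 47
d4 = note_freq(62)  # D4 is MIDI note 62

print(f"B2 = {b2:.1f} Hz, D4 = {d4:.1f} Hz")  # B2 = 123.5 Hz, D4 = 293.7 Hz
```

B2 lands around 123 Hz, close to the claimed ~110 Hz resonance, while D4 sits near 294 Hz, well clear of both the low-end peak and the 1 kHz area, so an obvious level jump between the two notes is a hint that the bump is real.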
  4. With multimedia speakers like those you should watch for resonant peaks at the lower end of the frequency range; a peak at 80-120 Hz is a typical problem with speakers like that, and after a while it can drive you insane, so test for that. Also run some tests to determine the mid-high response at the crossover points so you can compensate in your mixes. I think room treatment as a whole is a bit overrated (even though it's necessary to some degree); what's most important is speaker placement in your immediate listening environment, so as long as you have the equilateral triangle you're halfway there. Buzzwords like "flat" and "clear" are trivial because, regardless of how flat a speaker is, the room will change the frequency response of what you're actually hearing. Having some form of diffusion behind you and some kind of absorption (like a heavy couch) can help you more than making sure your speakers are flat (which they won't be under the $2k-per-speaker range anyway, regardless of what the manufacturer tells you). Flat speakers need a precisely treated/designed room, otherwise you're defeating the purpose; that's why buying expensive speakers for home use is not recommended.
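One low-tech way to run the tests described above is to generate short sine tones stepping through the suspect 80-120 Hz region (and around the crossover) and listen for level jumps. A minimal sketch using only the Python standard library; the filenames and the exact frequency list are my own choices, not anything from the post:

```python
import math
import struct
import wave

def write_sine(path: str, freq_hz: float, seconds: float = 2.0,
               rate: int = 44100, amp: float = 0.5) -> None:
    """Write a mono 16-bit WAV file containing a single sine tone."""
    n = int(seconds * rate)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)   # 16-bit samples
        wf.setframerate(rate)
        frames = bytearray()
        for i in range(n):
            sample = amp * math.sin(2 * math.pi * freq_hz * i / rate)
            frames += struct.pack("<h", int(sample * 32767))
        wf.writeframes(bytes(frames))

# Tones through the suspect low end, plus a couple in a typical crossover area.
for f in (80, 100, 120, 2000, 3000):
    write_sine(f"tone_{f}hz.wav", f)
```

Play the files back at a fixed volume; if one of the low tones is noticeably louder than its neighbors, you've likely found the resonant peak to compensate for.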
  5. zyko knows what's up tho, I never have, nor will I ever question his dedication, I'm still on the fence about him deciding to start tuning his instruments, tho
  6. If only you guys put this much effort, emotion and dedication into practicing music...
  7. I like doing all three, and understanding that all three are derivative. The only "originality" we can achieve these days is in interpretation, and even then we're just combining influences.
  8. Yeah, it can all boil down to the impostor syndrome we all have. It's OK to express yourself openly about someone else's music because it means something to you, but when making your own music your expression has to be limited to something that's impossibly non-derivative and unbiased? Makes no sense to me.
  9. I love coming back to check the OCR forums once every 6 years and still the same debates are going on. Just enjoy making music, no point in trying to justify it, or find some kind of deeper meaning or value in it, just have fun.
  10. Hell yeah I still remix B) B) B) B) B) B) B) B) B) B) B) B) B) B) B) B) B) B)
  11. There's nothing wrong with hentai games. There is something wrong with a game that sexualizes explicit and very violent rape. A person who gets sexual gratification from watching soldiers beat a woman and have sex with her at gunpoint is not someone I want to associate with, and I assume that goes for most of the users on OCR. It doesn't matter that you're depicting it via silly looking sprites, it's still a very grotesque thing for you to consider pornographic. I'm sure there's a place for that kind of material somewhere on the internet, but I don't think that fits in here.
  12. So you want someone to write music for this game, which is basically girls being raped at gunpoint? What's wrong with you?
  13. The thing is that you have to make sure your system is running smoothly as a whole so you don't get offline bouncing issues. Cubase is very good about it, and there are very few issues with Kontakt if you're not maxing out your CPU meter. It comes down to how the DAW utilizes the CPU during the export process; Cubase usually takes a minute longer than Reaper does, but I get way more issues with Reaper, so I have to re-render often. If you hit F12 in Cubase you get the VST Real Time Performance and CPU meters. There are two bars here: the top bar is the CPU meter, which shows how well your CPU is handling the load overall; the second bar is the Real Time Performance bar, which shows how efficiently the entire system is running. If your CPU meter is maxing out, then you simply cannot run that project well, and you can get export/bounce problems both real time and offline, because offline bouncing depends almost entirely on your CPU (and offline bouncing will take a lot longer). If the real time performance meter is maxing out or being erratic while the CPU meter is relatively low (anywhere between 10-70%), it means your OS is managing too many things (resource allocation for drivers, LAN, wifi, all kinds of programs) and the CPU has to handle too many other tasks before it can return to Cubase to process the audio. This is where the sample buffer comes in: if you increase the buffer size, you let the CPU handle bigger chunks of Cubase's audio data, which gives you better real time performance, but at the cost of higher latency. Offline bouncing is fast because Cubase doesn't need to play the project back to you in real time; it can send huge chunks of the project to the CPU for processing and then wait to send more chunks as the CPU becomes available again, making it much faster than real time, so the real time peaks don't matter.
So the general rule is: if your CPU meter is OK, then you can do offline bouncing 99% of the time without issues, regardless of the real time meter. If your real time performance meter is spiking, then increase your buffer size, be careful about doing real time export, and double-check that everything exported properly. The best way to handle external instruments has always been to record them into the project individually, and in the case of Kontakt and other AWFUL samplers like PLAY, you can use the new "Bounce In Place" feature in Cubase 8, which does a fast spot bounce of that particular track (in virtually any output configuration you want; it's very, very powerful). That way you convert those MIDI/instrument tracks into audio tracks automatically, and your offline bounce process doesn't need to worry about them and can go quickly and smoothly.
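The buffer-size trade-off above is simple arithmetic: the latency a buffer adds is its length in samples divided by the sample rate. A quick sketch of my own (the buffer sizes are typical audio-interface options, not anything specific to Cubase):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """One-way latency added by an audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

# Doubling the buffer gives the CPU bigger, less frequent chunks to process
# (better real time stability) at the cost of proportionally higher latency.
for size in (64, 256, 1024):
    print(f"{size:>5} samples -> {buffer_latency_ms(size):.1f} ms")
```

At 44.1 kHz, 64 samples is about 1.5 ms while 1024 samples is about 23 ms, which is why large buffers are fine for mixing and bouncing but feel sluggish when playing parts in live.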
  14. In terms of the guitar analogy, think of your guitar as the DAW. You get a guitar that you like to hold and play; it feels comfortable and looks the way you want. The amps and pedals you use are the plugins you use with your DAW; they give character, depth, sound, quality, timbre and life to the notes you extract from your guitar. The playing techniques you use all depend on your experience and skill, so just as any advanced guitarist can do great things with a novice guitarist's gear, so can an experienced composer/arranger/engineer with a novice's DAW/plugins. So try a few DAWs out and get the one you're most comfortable with, then get to work learning everything you can about music and working in any DAW.
  15. Hell yeah!!!! This is mad exciting. I really hope everyone from OCR who's gonna be at magfest enters at the very least. Ideally I would really love to see a bigger OCRemix presence in DoD but that's just wishful thinking. Anyway, I can't wait to hear what my OCRemixer comrades come up with, and can't wait to see you all at magfest and at the DoD listening party. Love you! <3333333333
  16. Also keep in mind that orchestration and proper sample use is key. Listen to Final Fantasy Tactics and remind yourself that you're listening to what amounts to a MIDI sequence through a very basic soundfont from the mid 90s. You can spend $100,000 on sample libraries but if you can't orchestrate a simple MIDI file well then you're wasting money. Also I'm just gonna come out and give you some good advice that's kind of scummy but consider pirating a huge sample library to try it out and learn it, then buy it. (And I do mean BUY it because pirated versions are almost never up to date, and even then companies will release very buggy versions of their libraries as version 1, knowing that they'll be pirated like crazy and then shortly afterwards send a fixed updated version to the actual customers via email). So yeah, use whatever means you have to obtain a copy of LASS2 and learn it, it'll give you a basic understanding of working with legatos, crossfades and divisi.
  17. Yeah exactly, and even then I've heard some stories about VGL (though unconfirmed so I wont share them here). The goal is to break even so your valuable time is not wasted. If you can spare the time and find value in the process of making music itself then that's payment enough (and has been payment enough for all of us here at OCR for over a decade).
  18. I use MIDI files every time I remix something. But that's because I transcribe the song and make the MIDI myself first. The rules and stigma against using existing MIDIs stem from people who do nothing with the MIDI except play some synths with it, layer a drumbeat over it, and call it a remix. You can and should use MIDIs whenever you want, as long as you're actually arranging the material and transforming it into your own musical vision. That doesn't mean just changing the sounds, but changing the MIDI itself into something that's original and your own. If you do use an existing MIDI, you should also consider giving credit to the person who made it.
  19. Most VGM bands (even most of the biggest ones) do not earn a viable living doing VGM. Your best hope with a VGM band or charging for VGM arrangements is to have it pay for itself in terms of CD manufacturing, shipping/distribution and production fees. At best.
  20. Logic and most other DAWs are FANTASTIC for orchestral music. So fantastic that the world's most talented and esteemed composers use them for scoring. What you want to look into these days are libraries like Spitfire's Albion series, East West Hollywood Strings/Brass, Berlin Woodwinds/Strings, 8Dio's Adagio series, LASS2, and Cinesamples' offerings (especially CinePerc). There are all kinds of awful, misinformed opinions floating around about using specific DAWs or notation software like Sibelius, but if we listened to those kinds of opinions we'd all be using Pro Tools and buying $5,000 hardware compressors and EQs. In the end, all a DAW does with regard to orchestral sounds is give you a way to enter notes into VSTs. Even if you go the notation software route, you're going to need the same VSTs (in Logic's case, AUs), and you'll have to painstakingly edit legato transitions, note start times, mod wheel transitions, decays, keyswitches, arpeggiators, and faulty Kontakt scripts (or, god forbid, you go with EastWest and have to deal with the PLAY engine...) in the same exact way. Any DAW out there that supports VST/AU will work pretty much identically in that regard. So yeah, composing the song may be at best 40% of the work. The rest comes down to you and your DAW, and spending many, many hours trying to make your thousands of dollars of samples sound decent. So if you're used to GarageBand and can work well in it, go with Logic and you'll be set. Always remember that writing your music is step 1 and orchestrating it is step 2, and in the digital music realm you're orchestrating a second time for the samples, not for an orchestra, and that requires ridiculous amounts of editing because every sample library is recorded differently, with its own legato timings, keyswitches, crossfades, patch structure, and all-around quirks.
  21. Dynamic range does more for fidelity than bit/sample rate these days. Harmonic distortion due to peak normalization causes very similar artifacts to file compression, so the first step is getting music that's mixed/mastered properly, that way you're guaranteed to have pure music coming at your ears without any damaging distortion.
  22. Like I said, every major DAW is pretty much great these days. Even Logic which I personally dislike working in. You can't go wrong. They all borrow so many features from each other that it really is a subtle flavor difference.
  23. Logic is the only DAW I actively dislike using. On OSX I use Cubase, Reaper and Pro Tools, and all three work great. On Windows I use Cubase, Reaper, Pro Tools and Studio One and they are also all great. It's hard to find a truly bad DAW these days, but I really dislike working in Logic.