Everything posted by timaeus222

  1. Well, you aren't crazy when it comes to doing that. If I'm thinking of the same thing you are, sometimes I make music containing synths that don't reset their phase upon rendering (because I didn't set them to do so), so I get versions of the piece that don't quite sound how I want them, and I render until I get exactly the phase offset I want. It's almost like a specific set of "round robins" is cycled through, in a sense. I would hesitate to say that there is an actual difference between perfect non-laggy playback and an actual render in FL, though.
  2. The album is so BadAss that something is promised for LEAP MONDAY (which, if you think about it, only happens once every 28 years, right?).
  3. If anything, I would not hard-pan guitars that alternate. I don't use Reaper, so I can't tell if you did that or not, but try about 30~50% L/R and I think that'll be less awkward. Also, since panned instruments stick out more than centered instruments, make sure that their EQs are approximately the same. You don't want a bunch of low-mids on one lead but little on the other (e.g. you might have some low-mid frequencies in the picking noise), because the imbalance in the frequency distribution will be more noticeable on those panned instruments.
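
Here's a rough sketch in Python of what a 30~50% pan setting does to the channel gains, assuming a constant-power pan law (different DAWs, Reaper included, let you choose the pan law, so treat these numbers as approximate):

```python
import math

# Constant-power pan law sketch: pan = -1.0 is hard left, +1.0 is hard right.
# The mapping and example values are illustrative, not any DAW's exact law.
def pan_gains(pan):
    angle = (pan + 1) * math.pi / 4           # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

for p in (0.0, 0.3, 0.5):
    left, right = pan_gains(p)
    print(f"pan {p:+.1f}: L {20 * math.log10(left):5.1f} dB, R {20 * math.log10(right):5.1f} dB")
```

Notice that even at 50%, the quieter side is only about 8 dB down, which is part of why 30~50% tends to sound less awkward than hard-panning.
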
  4. Actually, FL 12 does have a 64-bit version, so it's all fine and dandy there.
  5. Now that I think about it, at 0:00 - 1:25, what I think is your lead guitar is far back, and what seems like the rhythm guitar is pretty upfront. Is that intentional? For that entire time interval, it feels like there's not much leading going on. At 1:25 - 1:59, though, that organ works as a practical lead despite being an arpeggiated pattern. At 2:10 - 2:21, what sounds like the lead guitar seems too far back. At 2:22 - 3:00, it sounds more like a lead. The rest sounds more reasonable to me in terms of what's leading. Besides the production, I think this arrangement might come off as a bit directionless at times, because at 0:00 - 1:25 and 2:10 - 2:21, for instance, what seem like leads to me are not upfront enough to suggest that they are leading, and other instruments that are more upfront are sometimes playing notes that don't sound like melodies to me (they sound more like accompaniment). It may just be a result of the source not having much of a melody to latch onto, but don't be afraid to put an original melody on top of source arpeggios or something like that, as long as the source is audible. Having arpeggiated patterns upfront can feel "longer" than having a melody upfront to latch onto.
  6. The mixing seems boomy in the low-mids (around 120~200 Hz), and I think it's mostly coming from the bass guitar, which plays at about the same time as the syncopated rhythm guitar (acoustic, I believe). It's most noticeable at 1:24 - 2:10, where you have clutter near 130 Hz whenever the toms try to play while the bass is also playing. It IS a problem in other similarly dense places too, though, like at 2:35 - 2:37. If you didn't cut down on the low-mids for the rhythm guitar, check that as well. The (Shreddage drums?) toms can be heard clearly enough at 1:05 that I can tell they are a bit more bassy/rumbly than would be comfortable for me. Maybe low-shelf them a few dB below 200 Hz. Most of their energy is near 130 Hz, which will clash with the bass guitar and make the result muddy. The toms are audible enough, but they are adding low-mids clutter. Those are really the main issues: the bass guitar being indistinct, the toms being a bit too boomy, and potentially the rhythm guitar (acoustic, I think) having too much low-mids and clashing somewhat with the bass guitar and toms. I would suggest sidechaining the toms with the bass a bit, in addition to low-shelving the toms and rhythm guitar, to decrease the amount of boominess overall. You might want to compare with this for the rigorous bass mixing I'm suggesting that you do, because this, admittedly, is not an easy fix for many people.
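
For anyone curious what that low-shelf move looks like outside a DAW, here's a minimal Python sketch using the standard RBJ "Audio EQ Cookbook" low-shelf biquad (the 200 Hz / -3 dB values are placeholders for the kind of cut I mean, not a preset I'm prescribing):

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf(audio, fs, freq_hz=200.0, gain_db=-3.0):
    # RBJ cookbook low-shelf coefficients, shelf slope S = 1.
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq_hz / fs
    alpha = np.sin(w0) / 2 * np.sqrt(2.0)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = np.array([
        A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha),
        2 * A * ((A - 1) - (A + 1) * cosw),
        A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha),
    ])
    a = np.array([
        (A + 1) + (A - 1) * cosw + 2 * sqA * alpha,
        -2 * ((A - 1) + (A + 1) * cosw),
        (A + 1) + (A - 1) * cosw - 2 * sqA * alpha,
    ])
    return lfilter(b / a[0], a / a[0], audio)
```

In practice you'd just grab the low-shelf band in your EQ plugin; this is only to show what the curve is doing.
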
  7. It's not a problem with FL Studio in and of itself. Clearly, if you have a problem in a practically empty project file and I don't (and never will), it's not a problem with FL Studio as a DAW. My computer is good, but it's not like it's incredibly strong and fast. More likely than not, it's a problem with your settings IN FL Studio. You even said that apparently Cubase worked fine, but if you then assert that it's automatically because it's not FL Studio, you've implicitly assumed coincidental correlation (post hoc, ergo propter hoc). Seriously, compare your FL Studio settings again with Flexstyle's configuration and experiment some more before shelling out money to avoid trying to solve your own problems.
  8. Really, just about any chord with very low notes (e.g. roughly bass/sub-bass range) will sound bad; the frequencies will clash. For instance, you might expect a major third interval in the midrange (500~5000 Hz, let's say) to sound good, and it should. Start moving that dyad down an octave at a time and listen to how the harmonics play along. The harmonics of each note sit closer and closer together in frequency the lower you go, so they clash more and more easily, until it just sounds like mush. That's why I wouldn't suggest writing chords on very low notes. On the other hand, fourths and fifths can still sound decent on very low notes, and octaves still play pretty well down low.
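
If you want to see the spacing argument with actual numbers, here's a quick Python sketch (equal-tempered major third; the note choices are arbitrary):

```python
# Print how close the nearest pair of partials gets when a major third is
# played at lower and lower octaves. Closely spaced partials beat against
# each other and smear together, which is the "mush" I'm describing.
def harmonics(f0, count=6):
    return [f0 * n for n in range(1, count + 1)]

THIRD = 2 ** (4 / 12)                       # equal-tempered major third ratio

for root in (440.0, 220.0, 110.0, 55.0):    # A4 down to A1 as the lower note
    partials = sorted(harmonics(root) + harmonics(root * THIRD))
    gaps = [hi - lo for lo, hi in zip(partials, partials[1:])]
    print(f"lower note {root:5.1f} Hz -> closest partials {min(gaps):.1f} Hz apart")
```

At A4 the closest partials are about 17 Hz apart; at A1 they're barely 2 Hz apart, which you hear as slow beating and mud rather than a clear third.
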
  9. Yeah, pretty much all of the drums I have are individual samples, not soundfonts. http://goldbaby.co.nz has some nice free sample packs.
  10. I'm so glad this made it onto the site (was there ever any doubt?). The piano (and guitar) solos are killer as always. I particularly liked 1:53 - 2:18, where the piano and guitar trade off solos! I think you also added some subtle toms at 2:18 later. Awesome!
  11. I always like how ambitious your arrangements can be. This still retains the general mood of Metroid, but adapts it into an orchestration that utilizes a great deal of dynamic range and lets the brass players have some fun too, instead of mainly the string players. The percussion work was also notable and well-executed, and I thought it helped anchor the track pretty often.
  12. I agree, that would do it, if "true staccato" samples aren't available (if they were, the notes should feel really short). I think this is getting to the point where we'd have a hard time describing things more specifically (like how I might say "adjust the second note on measure 192" or something). I would suggest comparing back to the examples I gave to see if there are any notes of yours that are being "played" particularly hard when you meant to have them "played" more softly (like at 0:59 - 1:00 in the left hand). Another thing I'd say is that if all the notes in a chord have the same intensity, it doesn't make sense; it would imply that your pinkie is as strong as your thumb, when it's almost always weaker. This is why it would help to get a velocity-sensitive MIDI keyboard if you haven't already, so you don't have to simply imagine how hard each finger would likely hit the keys. Every finger will hit its key slightly differently in intensity AND timing on each chord, which is what gives each chord its own dimension and textural richness.
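
If you ever want to rough this in by script instead of with a velocity-sensitive keyboard, the idea is just to give each note of a chord its own small velocity and timing offset. A minimal Python sketch (the spread amounts are made-up starting points, not anything standard):

```python
import random

# Nudge each note of a chord slightly in velocity and start time, so no two
# notes of the chord land identically. Notes are (start_seconds, midi_pitch,
# velocity) tuples; the example numbers are invented.
def humanize_chord(notes, vel_spread=6, time_spread=0.012, seed=None):
    rng = random.Random(seed)
    out = []
    for start, pitch, vel in notes:
        new_vel = max(1, min(127, vel + rng.randint(-vel_spread, vel_spread)))
        new_start = start + rng.uniform(-time_spread, time_spread)
        out.append((new_start, pitch, new_vel))
    return out

# A quantized C major triad, every note identical...
robotic = [(1.000, 60, 80), (1.000, 64, 80), (1.000, 67, 80)]
print(humanize_chord(robotic, seed=1))  # ...becomes three slightly different hits
```
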
  13. Yeah, that sounds better, though still noticeably plunky. Do you think you could try an even more substantial exponential velocity response? So maybe "Convex 60%" or whatever it's called. Getting there!
  14. Either way, the hand placement would be the same; you would just leave your middle and index fingers in place and move your thumb for the high note. Hence, the intensity pattern is similar.
  15. The velocity response is essentially a way to control how low/high velocities access the samples in the sample library. Generally, low velocities access low-intensity samples, and high velocities access high-intensity samples. Changing the velocity response can, for example, make lower velocities access lower-intensity samples than they did before (an exponential curve). EDIT: Yes, consider overlapping the notes as well when setting each note's duration. The way you wrote the right hand, the pianist plays staccato almost the whole time.
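
To be concrete about what an "exponential" velocity response does, here's a tiny Python sketch that remaps an incoming velocity through a power curve (this is my own illustration, not how any particular sample library implements it):

```python
# gamma > 1 gives a convex/"exponential" response: low and medium velocities
# get pulled down, so they reach quieter sample layers than a linear mapping.
def remap_velocity(vel_in, gamma=1.6):
    x = max(0, min(127, vel_in)) / 127.0
    return max(1, round(127 * (x ** gamma)))

for v in (32, 64, 96, 127):
    print(v, "->", remap_velocity(v))   # e.g. 64 -> 42 with gamma = 1.6
```

With gamma above 1, a velocity of 64 comes out around 42, so mid-strength playing digs into quieter sample layers than it would on a linear curve.
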
  16. That's updated? For some reason I still hear similar velocities in the right hand. Maybe it'll work better after checking the velocity response.
  17. Can you provide an example? Essentially, muddy sounds generally arise from too much frequency clash in the low-midrange (100~500 Hz or so), so you should check how much low-mid content each instrument has and consider which ones actually need those frequencies. Or, alternatively, check the octaves that each instrument is occupying. When you say "record", do you mean with a microphone, or through MIDI, or something else? Generally you could work towards fixing muddiness in multiple ways:
     - Playing in different octaves.
     - Cutting the low end out of certain instruments that don't need bass frequencies (e.g. a high pass or low shelf could work); sometimes you may have bassy ambient noise that you don't need.
     - Raising the low-cut frequency of any reverb you apply to the instruments until you start hearing a difference, then settling right at that borderline.
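
As a quick illustration of the "cut the low end you don't need" point, here's a sketch with a plain Butterworth high-pass (the 120 Hz cutoff and 2nd-order slope are arbitrary choices for the example, not a recommendation for any specific instrument):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio, sample_rate, cutoff_hz=120.0, order=2):
    # Standard Butterworth high-pass applied as second-order sections.
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example with one second of noise standing in for a recorded track:
sr = 44100
track = np.random.randn(sr).astype(np.float32)
cleaned = highpass(track, sr)   # rumble below ~120 Hz is attenuated
```
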
  18. Better, but yeah, still sounds like "MIDI". The faster phrases are a bit too perfect in terms of the evenness of the velocity magnitudes. Is it possible to adjust your sample library's velocity response? It might help to smooth out the hardness of the tone overall by making the slope more exponential than logarithmic.
  19. Rozo and Slimy are right; the piano chords are noticeably stiff. I can tell that every note in each chord has zero timing offset with respect to the others and/or is quantized to the left edge of each grid subdivision (i.e. it's too robotic). I had made an audio example a while ago that illustrates the general difference between completely stiff rhythm and humanized rhythm, but I can't find it. If I get the time tonight I can make a new example though, by quantizing something that's humanized.

     I like the transformation into a slow waltz. I played piano for about 8 years, so one thing I'd say is that on the left hand, on each [single-note]-[chord]-[chord] 3/4 pattern (or 6/4, depending on how you group the measures), the single note, generally played by your left pinkie, should be strongest, and the two chords, generally played by your left hand's middle finger + index finger + thumb, should be weaker. Those two chords should also not be exactly the same intensity. So, if we suppose velocities on a scale from 1 to 10, I would do something like [8]-[4]-[5] or [8]-[5]-[4] for the [single-note]-[chord]-[chord] velocity magnitudes. For the right hand, suppose the main, waltz-y parts of this (i.e. not the quieter outro) are in 6/4 and we count the beats as 1 2 3 || 2 2 3. Under those circumstances, I would emphasize the 1 of measure 1 and the 2 of measure 2, and have the velocities of the other notes in the pair of measures be lower. That's how I would prefer to play it naturally.

     EDIT:
     Example (Humanized) ~ Humanized rhythm and velocities
     Example (Robotic Rhythm) ~ No effort to humanize rhythm (except on grace notes, triplets, and other irregular/non-4/4 tuplets)
     Example (Robotic Velocities) ~ No effort to humanize velocities
     Example (Robotic Rhythm AND Velocities) ~ Literally no effort to humanize (except on grace notes, triplets, and other irregular/non-4/4 tuplets)

     You may notice that, generally, humanized rhythm can be more subtle, but it's still significant, especially on chords. I would say that the humanization on your remix is roughly between the third and fourth examples, though closer to the fourth than the third.
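
If it helps, here's a toy Python version of that left-hand shaping, with the 1-10 scale from above mapped to MIDI velocities and a little randomness so the two chords never come out identical (the scaling and wobble amounts are just my guesses):

```python
import random

# One bar of the [single-note]-[chord]-[chord] pattern: beat 1 strongest,
# beats 2 and 3 weaker and deliberately unequal. The base weights follow the
# [8]-[4]-[5] idea above; the mapping to 0-127 is a made-up choice.
def waltz_left_hand_velocities(base=(8, 4, 5), scale=12, wobble=4):
    return [max(1, min(127, b * scale + random.randint(-wobble, wobble))) for b in base]

for bar in range(4):
    print(waltz_left_hand_velocities())   # e.g. [94, 50, 58]
```
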
  20. Are you sure? Can you give us an example of something you've done with guitars?
  21. The vocals could be more upfront. It's hard to hear them above the boomy bass, which is also too loud. You could try scooping the midrange on the bass and not boosting so much on the low-mids.
  22. Pretty much agreed; the only thing I would question is what I bolded. I'm on board that anyone who has not trained their ears to listen for dynamic range, loudness, and the like would have trouble detecting pertinent issues, and that communicating via language like "dB RMS" is an effective way of conveying how loud something is without knowing how loud their system is. But it seems like you're also saying that taking a systematic, numeric approach makes it easier to learn how to write loud music properly. It might, if one doesn't get overly focused on what the numbers "ought" to be.

     For instance, it sounds like you prefer a borderline of -10 dB RMS, and Brandon does as well. So, based on numbers and numbers alone, I should expect that either of you would find the MP3 of this too loud. I seem to recall it being -6 dB RMS when I looked at it with Sonalksis FreeG Stereo, but correct me if I'm wrong. Personally, I intended it to be that loud, and I was aware that it was that loud. Based not on numbers but on my ears, I don't believe it is too loud with respect to my vision (high-energy, guitar-toting drum & bass). If by your ears you believe it is not too loud, then I don't think mere reference numbers, like dB RMS, are sufficient to assert "too loud" or "too quiet" (perhaps the volume settings on our computers are different; always a possibility, as "too loud" can mean either actual volume or a small crest factor), or sufficient to guide a "newbie" faster towards proper dynamic compression. They can be helpful, though. Granted, newbies have to constantly train their ears on such material too; I think we agree on that. But if a "newbie" gets caught up on what the numbers "ought" to be, couldn't it hinder them to be shooting for a goal that encourages conforming to specific loudness standards? I'm all for numerical measures of 'loudness', but IMO, there comes a time where being less systematic might be more conducive to writing more creatively.

     Come to think of it (and I don't mean to belittle; this is just for discussion), Master Mi is one example that comes to mind when I consider someone who adheres to "EBU R128" loudness standards, which he has continued to expressly praise, and yet I find his music rather quiet (EX: https://soundcloud.com/master-mi/lufia-2-tyrant-breaker-master-mi-remix-version-15). Either he did not adhere to those standards properly, or something's wrong with those standards. Whatever the case, how loud his music sounds correlates with how much he adheres to those standards, no? That's my point. If anything, I would let your ears be your guide (or even someone else's whom you trust), and check numbers if you know that they aren't going to limit how you express yourself through music.
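
For anyone following along who hasn't used a meter like FreeG: dB RMS is just the average signal level expressed in decibels relative to full scale. A quick Python sketch of the basic math (real meters window the signal over time and handle stereo channels; this is deliberately bare-bones):

```python
import numpy as np

def rms_dbfs(samples):
    """samples: float audio in the range [-1.0, 1.0]."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

# Example: a full-scale sine wave measures about -3 dB RMS (dBFS).
t = np.linspace(0, 1, 44100, endpoint=False)
print(round(rms_dbfs(np.sin(2 * np.pi * 440 * t)), 2))
```

A full-scale sine comes out around -3 dB RMS; a dense, heavily limited mix creeps up toward numbers like the -10 or -6 dB RMS figures above.
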
  23. Neblix and APZX, you were technically correct on each point, but yeah, you guys could have been more polite (APZX, you were a bit more rambling, but nevertheless I get what you were going for, and Neblix, at least you removed some undesirable words). Thank you, Dan, for being that guy. Regardless, some good information here. Thanks!
  24. I think you've inspired me to keep working on my book. Time to write about music production!
  25. Can't remember if I still own Rayman Advance, but I'm sure I had a blast playing it.