Everything posted by timaeus222

  1. The first sentence wasn't directed at your samples, but at free samples in general. The second sentence was a very general statement and could apply to anyone. The third sentence was my personal opinion (hence the "IMO"), but the comment about the harp was a suggestion of a particular sampled instrument for you. So no, I'm not saying it sounded like you were using soundfonts; I am saying that I very much like the quality of FluidR3's harp because it already has strong treble content that lets it pierce through a thick mix. In my experience, a typical harp cuts through a mix less readily than FluidR3's harp, because certain harmonics are more present in FluidR3's harp than in the harps of the orchestral libraries I've heard. It's not about the sample quality you start with; it's about how you process those samples to make them sound realistic and better than they were before. Applying tasteful reverb, EQing carefully, sequencing meticulously, and adding automation to humanize expression/volume/articulations is important whether you're using high OR low quality samples. Elevate low quality samples with production tricks and, although the result won't sound as good as a well-sequenced orchestral library, it'll sound good for a soundfont-heavy piece specifically---in context. If you were rejected based on sample "quality", that doesn't necessarily mean your samples were literally low quality; it means they were perceived as low quality. Genuinely high quality libraries, sequenced and reverbed in an average way, can sometimes sound like General MIDI soundfonts that haven't been touched up well enough in terms of humanization and reverb, depending on the comparative tonal qualities of each instrument between a particular library and a particular soundfont. As usual though, it's case-by-case. The OCR judges are not inconsistent. 
tl;dr: It's quite possible to write low "quality" music using literally high quality sounds. Taken to extremes, dry, mechanical $1000 libraries can actually sound less realistic than well-reverbed, humanized soundfonts, simply because proper reverb and proper humanization tricks make the soundfonts more believable. Literally higher quality does not equal more realism/believability. Yeeep, I did just say that. By the way, the cross-fades comparison shows some difference, but not much. There's a difference in the reverb, expansiveness, panning, and other stereo image aspects, but it's not quite a fair comparison. Soundfonts need more effort in humanization to match up to adequately sequenced sample libraries (which is another general statement, not aimed directly at you). Right now, the mixing sounds pretty good, and it's better than before, though I feel like the acoustic drum kit sounds out of place in an orchestra. Just an opinion, though. Also, Theophany has some great points.
  2. One of the important abilities you should have is simply knowing how to identify what key you're in. You don't even have to name the key---you could just internalize the feel of it and try to write melodic content within it. When you internalize a key, you have a general idea of what fits and what doesn't. Staying within some confines of a tonal center can help you stick to writing things that don't sound awkward (a bunch of completely random accidentals in a row, for example; meaningful accidentals may work).
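To make the "internalize the key" idea concrete, here's a minimal Python sketch (mine, not from the post; it assumes standard pitch-class numbering, C=0 through B=11, and only handles major keys) that checks whether a note is diatonic or an accidental:

```python
# Pitch classes: C=0, C#=1, ..., B=11 (standard MIDI convention).
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def scale_pitches(tonic):
    """The set of pitch classes belonging to the major key on `tonic`."""
    return {(tonic + step) % 12 for step in MAJOR_STEPS}

def fits_key(pitch, tonic):
    """True if `pitch` is diatonic (no accidental) in that major key."""
    return pitch % 12 in scale_pitches(tonic)

# In C major (tonic 0): G (7) fits, F# (6) would be an accidental.
print(fits_key(7, 0))   # True
print(fits_key(6, 0))   # False
```

This is only the "what fits" half, of course---whether an accidental is meaningful is a judgment call no membership test can make.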
  3. I use NastyDLA MKII and Density MKIII the most. Other than that, FerricTDS is a dynamic compressor, and TesslaPro MKII is a saturator. These last two do seem pretty subtle, but I'll have to work with FerricTDS some more later.
  4. Yeah, JPG "added" a weird tint (the blue is the most noticeable) and some fuzziness earlier. Looks better now; try taking the smoothing off the bottom text. It's a pixel-centric font, so leaving the pixel rendering raw should give it the rendering quality of the big Pokemon-font text up above. If you're not sure what pixel-centric means (it's just a description I made up), it's like the URL on the graphic in my signature below.
  5. Variety of Sound makes some great free stuff too, and I use it, but that's just adding more plugins to the mix.
  6. Oh yeah, this remix. Fun stuff. I wasn't really bothered by the harmonica at all.
  7. Personally, I would have liked more progressive elements to keep the flow going and to liven up the dynamics, and a stronger ending would have helped. 1:26, 1:34, and similar spots had some sort of cut-out cymbal for some reason. Also, the accompaniment has more treble than the leads. Other than that, this is pretty good.
  8. Gotcha; at the same time, though, that post was meant as a piece of info for everyone.
  9. Oh, I'm not saying I expect you to do it well, I just know it's an available feature. All I'm suggesting is that you know the visual aid aspect exists. Everyone learns at their own rate, but I love that feature.
  10. #include <iostream>
      using namespace std;

      int main() {
          char hbdArray[] = "Happy Birthday, CHz!";
          cout << hbdArray << endl;
      }
  11. The sustains sound flat. They need some automation to "swell" the volumes and emulate a human's tendency to constantly vary the volume at which they play. Harmonies help the depth of the sound, but they also add the need to vary the note start timings: the more harmonies you add, the more stiffness becomes an issue. Reverb helps, but using only reverb in hopes of "hiding" the mechanical parts is just covering up the problem. It's like solving a math problem the wrong way, getting the right answer, and assuming the work was correct, when really one mistake covered up the other.
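As a rough illustration of the volume-swell idea, here's a hypothetical Python sketch that generates automation points rising into the middle of a sustained note and relaxing back out; the half-sine shape and the 0.6 floor are my own arbitrary choices, not anything the post specifies:

```python
import math

def swell(n_steps, floor=0.6):
    """Volume automation points for one sustained note: start below
    full volume, lean in toward the middle, then relax back out.
    The half-sine shape and the 0.6 floor are arbitrary choices."""
    return [floor + (1 - floor) * math.sin(math.pi * i / (n_steps - 1))
            for i in range(n_steps)]

curve = swell(9)  # 9 automation points spread across the note's duration
# starts at 0.6, peaks at 1.0 mid-note, eases back down to ~0.6
```

In a DAW you'd draw this with an automation clip rather than compute it, but the shape is the point: no sustained note should sit at one constant level.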
  12. I'll have something longer for you soon (about 2 minutes' worth)! It's sounding really cool! I had it done up to there about two months ago, but I didn't want to seem like I worked too quickly! I just love the fact that this is my first full-on cinematic track.
  13. I thought they were the same piano. In that case, yeah, the lower one is less human. When the time slider reaches in between the T and r of "Track", the Ti-Do-Re-Mi (or Mi-Fa-So-La) notes sounded mechanical. Might as well check both, then! The lower one sounds more low-quality than the other. By quality, I mean the depth of the sampling in whatever you're using (a General MIDI soundfont?), not simply what I happen to like better. To conserve space, many soundfonts sample at the C of every octave, and the more realistic ones sample at those C's plus each G, at least. Each recorded pitch is then shifted up and down the keyboard to produce all the note timbres, and as you may guess, drastic pitch shifts (C<->G or larger) can warp the tone a lot, especially on the lowest and highest notes of a piano. So, I'm not saying that a real piano can't sound better than a soundfont, but if the soundfont sampled a piano that's supposed to sound higher quality in timbre than the real piano it's compared to (e.g. Steinway & Sons vs. Yamaha S Series), then I'd say the soundfont sounds higher quality, while the real piano, of course, sounds more realistic. I'd opt for realism over quality any day, but if you can humanize the soundfont, then you've accomplished both.
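The pitch shifting described above comes down to simple resampling math: in equal temperament, moving a recorded note by n semitones means playing the sample back at 2^(n/12) times its original rate. A small Python sketch (the function name is mine; the every-C or every-C-and-G sampling spacing is as described in the post, not any particular soundfont's spec):

```python
def playback_rate(semitones):
    """Rate multiplier for shifting a recorded sample by `semitones`.
    Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return 2 ** (semitones / 12)

# Shifting a sampled C up to the G above it (7 semitones):
print(round(playback_rate(7), 3))   # 1.498
# An octave shift doubles the rate. The timbre (formants, attack speed)
# warps along with the pitch, which is why big shifts sound unnatural.
```

This is why denser sampling (every C and G instead of just every C) sounds more realistic: no note is ever more than a few semitones away from a real recording.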
  14. Can you timestamp what you're talking about with the loop? This track gets really muddy later on, so the dynamics are pretty flat right now and I can't hear where it's supposed to be. 0:38-0:58 is where I'd say it starts getting too muddy, 0:58-1:17 gets muddier, and 1:41-2:12 is even muddier than that. You'll need to mute some instruments and pick only the ones you want to keep; you may simply have too many things going on at once to EQ effectively. Know what's there, keep track of what frequencies each part is taking up, and organize yourself. You're using FL Studio, so you should be able to see the frequencies your instruments are occupying.
  15. The strings are missing some humanization, and more reverb would help. Their phrasings aren't smooth, so that breaks their melodic flow. Some automation on the volumes to add swells on sustains would really help the realism. I'm unsure how off the rhythm could be in the intro since there's no rhythmic reference. The piano sounds rather mechanical, especially near the moment the slider reaches the left of the T in "Track". Each note sounds like it's literally starting over. Its sample quality is kind of low, too. Also, it would really help to adjust the velocities to how a real pianist would play it, overlap notes a bit, and offset the timings a little bit for more realism. Since the texture of this track is pretty much just piano+strings, it's that much more important to fix those issues. Good start. This could be a good read.
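A hedged sketch of the timing/velocity humanization suggested above: this assumes notes stored as (start_time, pitch, velocity) tuples, which is purely an illustration, not any DAW's actual format, and the jitter amounts are my own guesses at "a little bit":

```python
import random

def humanize(notes, time_jitter=0.01, vel_jitter=6, seed=0):
    """Nudge note start times and velocities slightly so stacked
    harmonies don't all land at exactly the same instant."""
    rng = random.Random(seed)  # seeded so the result is repeatable
    out = []
    for start, pitch, vel in notes:
        out.append((
            max(0.0, start + rng.uniform(-time_jitter, time_jitter)),
            pitch,  # pitches stay put; only timing and velocity vary
            max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter))),
        ))
    return out

# A C major chord where every note starts at beat 1.0, velocity 100:
chord = [(1.0, 60, 100), (1.0, 64, 100), (1.0, 67, 100)]
humanized = humanize(chord)
```

Random jitter is only a crude stand-in for what a real pianist does (deliberate voicing, rolled chords), but even this much breaks up the machine-gun uniformity.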
  16. I agree that people shouldn't have to compare a breakdown with a remix to get the source usage, but I think it's nice to have one for the fans. Like Larry said, if you write melodic content or otherwise that sounds like it could have come from the source but is actually original, you're going to give the judges a harder time than if you actually pointed out what was supposed to be source usage. Sometimes original content works so seamlessly that it could accidentally be counted as actual source, and that's happened a few times before, I believe. Now, if you don't trust that your source breakdowns are accurate, that seems to be what Shariq's against---using a breakdown to justify source usage that doesn't actually count; as in, is truly way off and much too liberal. If you think it counts, then it should be recognizable, and when it's borderline, well, that's something for the j00jes to pore over. In the end... I just provide one with every sub, and if I really like the sub, I plop it into my actual comments.
  17. 0_0 Happy... Birthday? O_O
  18. As chill as I remember it. Some spots were a little jarring, but other than that, good stuff.
  19. Oh, yeah, I *was* a little vague. Yeah, I meant to say to try taking out the last notes of every *other* chunk. Great! =) That's good! I really love FL's workflow and convenience. I can't get enough of its automation clips (I tend to use more layers for my automation clips than for my musical content!).
  20. I'd recommend that video too. I think anosou makes good sense and what he said in that video applies in the general case (though not in exactly the same way with everything, of course).
  21. Pappy nerfday!
  22. Not bad. I'm noticing the guitars are rather thin-sounding though. The tone is lacking some low-mids, and the bass seems to be EQed to make up for that. As for the drums, the snare is good, but the kick is buried. I can kinda hear it, but it's not that present. Needs more click and a little stronger sidechaining with the bass. Good drum playing though. Things get really muddy at 1:34, 2:06, and other places with fast kicks. It's kinda muddy in other places, but not to the extent that it obscures the clarity too much, IMO. As for the arrangement, all of the transitions seem awkward to me. I can instantly tell when you've shifted to the next source tune or a different one, so this is pretty close to a literal medley. Also, each source tune was treated in a pretty straightforward, cover-ish manner. Good start. Needs more production polish, but more importantly, much more expansion on the themes if this is supposed to be a sub for OCR. This post I made might help with the metal production.
  23. Either use the "FL.exe" file for FL 11 instead of the "FL (extended memory).exe" file from FL 10 (both of these let you use more than 4 GB RAM), or get more RAM for your computer.
  24. The cases I've seen are:
      - VGM that wasn't originally written for the game (e.g. TV show themes that were adapted to chiptune)
      - VGM that was used in a trailer for the game but not in the actual game
      - VGM that sounds like classical music or other non-VGM (e.g. the Might & Magic theme song)
      So I think that matches up with the third case.
  25. What's up? I like how this is so far, but I agree that the strings are off. I don't think they're annoying as they are, but they sound stiff, and sometimes the timings are late. The choice so far to use all spiccato articulations makes it seem like you ran out of time, I think. It would really help to incorporate some legato and/or tremolo in there with some volume swells via automation. I think 0:31 - 0:40 gets pretty quiet compared to the rest of the track. It's totally fine if you want to create dynamic contrast by doing that, but I think it's a bit too quiet. My bad headphones have a pretty low impedance, so I should be hearing things more loudly than with my actual mixing headphones, yet that section was barely audible on these (aside from some of the trebly sound effects). At 0:45, the guitar that fades in is pretty narrow; it may help to double-track it. That aside, I think the drum and bass loop doesn't help to lead into a half-time section---that fast tempo didn't foreshadow a very slow tempo for me. It may help to just put a few cohesive risers there, and maybe even trilling violins. I also just realized that the section at 0:53 has some rather lofi hi-hats. I like the modal shifts up to 1:56, though it seems a little cliche to me with the triplets. Cool trebly FX at 2:22; it kinda sounds like something from Evolve Mutations, or Evolve, or something of that sort. I would have liked to hear those details earlier, but that's up to you. Also, 3:18 - 3:20 felt the weakest to me in terms of realism. Trying shorter/tighter articulations on the brass there could help. Overall, I think this is pretty good. It just needs some production love and some refining of the sequencing, and the low end support could be stronger and clearer.