Everything posted by timaeus222

  1. Are you okay? @ShadowRaz
  2. For sure, this is your best work! Loved it! You seem to love the harp as much as I do.
  3. Thank you, I really appreciate that! Actually, a Shantae remix ("On Fire") already went through the queue!
  4. Depends on how long you say you have waited... I'm assuming it's a few days, not a few years. That does happen to me: I think I have something going, and a few days later I listen to it and think "nah, scrap it". I think it's natural, because a few days later you SHOULD have a fresh perspective on what you wrote, and if you don't like it then, I would believe that reaction. A fresher perspective is more realistic as to what a general audience would think than the moment you wrote the WIP. If you are concerned about that, ask a friend to listen and see what he/she thinks.

     If you think it is because you listen to too much of the same music, then I'll suggest a few playlists of music I like to see if it helps; I play it for my chemistry students each week, and maybe it'll help inspire you (by that I mean that you presumably will not have heard it before). Let me know if something changes for you.
     Joshua Morse's Greatest Hits (imo) - https://soundcloud.com/timaeus222/sets/joshua-morse-greatest-hits
     Exciting Music - https://soundcloud.com/timaeus222/sets/exciting-music
     Energetic/Smooth Music - https://soundcloud.com/timaeus222/sets/energetic-smooth-music

     It happens. It happens to me as well, even though I have plenty of resources that would satisfy my visions. To resolve that, I would examine another song that is similar to yours. There is no harm in imitating the style of someone else's ending. For example, I wrote this remix that was inspired by Weather Report's "Birdland", and the ending was based on the ending of "Havona". I'm not embarrassed about imitating someone else, because so many other people do it as well.

     I think you shouldn't restrict yourself to free material. I'm not saying splurge on expensive stuff, but don't limit yourself to free stuff. Look for relatively inexpensive libraries, and definitely consider Native Instruments' Kontakt ($400) as a sampler engine---I know it's expensive, but if you want to expand your resources, it's the most flexible one you can get, because most of the sample libraries out there are made for Kontakt. Or, if you want to start out slower, try to find libraries that run on the free Kontakt Player, such as Super Audio Cart PC or Leonid Bass. Or, if you want to continue using free stuff, I might suggest FluidR3 (especially the harp [see the sfArk decompressor]), Squidfont Orchestral (see the sfpack decompressor), and jRhodes3. DarkeSword has some other good suggestions here.

     I did write a learning log of what I worked on over the years. Maybe it can help give you some direction. But you'll have to decide what you want to work on next.
  5. Very true to the original, but I liked how personalized the rhythm guitar was. I think the performance is what sold it.
  6. Yeah, I can see where you're coming from with your vision. At this point I think we've addressed the major non-mixing concerns. Now I wonder, did you use a limiter? Any EQ? (When I first started... get this... I did not use a mixer.)

     There are a lot of bass sounds going on, and their frequency ranges are clashing (for most people this occurs around 300 - 500 Hz, the "low midrange"). What I would do is isolate groups of instruments that should have similar frequency ranges and examine those. Solo each instrument in that group and look at your EQ plugin to see where its frequency range tapers off. It will depend on each sound, but common ranges to look for are roughly: sub-bass (20 - 60 Hz), bass (60 - 250 Hz), low midrange (250 - 500 Hz), midrange (500 Hz - 2 kHz), upper midrange (2 - 4 kHz), presence (4 - 6 kHz), and brilliance (6 - 20 kHz).

     Then, try cutting down (usually 2 - 5 dB is fine) at a frequency range that two instruments share, so that the feature in the tone of the instrument you actually want to hear is more distinct. Another thing that helps: after you identify the frequency range spanned by an instrument, try using a high pass filter (HPF) to cut out the frequencies below the bottom of that range. [You can control the slope so that it doesn't cut frequencies too sharply.] Sometimes a low shelf filter is a good alternative if you don't want to make something sound too thin.
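     To make the HPF idea concrete, here's a minimal Python sketch of the same move outside a DAW, using scipy's Butterworth filter (the 120 Hz cutoff and the test signal are hypothetical; in practice you'd do this in your EQ plugin, where the filter order corresponds to the slope in dB/octave):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100  # sample rate in Hz

# Hypothetical test signal: a 50 Hz rumble plus a 440 Hz tone.
t = np.arange(fs) / fs
audio = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

# 2nd-order Butterworth high-pass at 120 Hz (~12 dB/octave).
# A higher order gives a steeper slope; a lower order cuts more gently.
sos = butter(2, 120, btype="highpass", fs=fs, output="sos")
filtered = sosfilt(sos, audio)

# The 50 Hz component is now heavily attenuated; 440 Hz passes through.
print("RMS before:", np.sqrt(np.mean(audio**2)))
print("RMS after: ", np.sqrt(np.mean(filtered**2)))
```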
  7. Yeah, the re-panning is definitely helping! I know I haven't really given much mixing advice yet, but mixing really accompanies arrangement and composition, so we should be addressing all three.

     The next thing I'm hearing is some harmony clashes; mainly, it's coming from what the woody plucked instrument (first heard at 0:14; is it a cello pizzicato?) is playing compared to the lead. It sometimes seems to be playing in a different key, landing a major 2nd or minor 2nd away from the main scale. To make this relatable, look for spots like those where you have simultaneous notes that sound kind of like you played the chord [C C# F G# A]; an example is at 1:30 - 1:37. I don't usually say this, but try figuring out what key you are in and what notes are in that scale, so that you mainly use the notes that harmonize with the rest of your instrumentation. Then, if you get stuck, highlight the text below this to find what key you are in. For the most part, you are in D minor, which has the notes D, E, F, G, A, A#, and C (repeating in each octave), except for the few times you play D, A, G#, A, G#, D; that G# is out of the key.

     On another note, you might like this walkthrough video of a Vampire Killer remix; maybe it'll help you expand your creativity. In particular, I think the 10:53 and 16:00 marks really apply to you, but if you get the time, I think the whole thing is worth watching.
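     If it helps, here's a tiny self-contained Python sketch of that key check (the note list is hypothetical; D natural minor's pitch classes are D, E, F, G, A, A#/Bb, C):

```python
# Pitch classes (0 = C, 1 = C#, ..., 11 = B) for D natural minor:
# D, E, F, G, A, A# (Bb), C
D_MINOR = {2, 4, 5, 7, 9, 10, 0}

NOTE_TO_PC = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def out_of_key(notes, scale=D_MINOR):
    """Return the notes whose pitch class falls outside the scale."""
    return [n for n in notes if NOTE_TO_PC[n] not in scale]

# Hypothetical phrase from the post above: the G# sticks out of D minor.
print(out_of_key(["D", "A", "G#", "A", "G#", "D"]))  # -> ['G#', 'G#']
```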
  8. Welcome to OCR! Something to get you started is instrument roles: which instrument do you want playing what? At 0:24, you have a low bass and a phase-y synth playing the melody in octaves. Try to imagine which instruments could suit the melody sequence you want to write, and where each lies in the frequency spectrum. For instance, you generally don't want a bass playing lead, or a lead playing bass. You also seem to have most tonal instruments panned either left or right 'just because'. Typically, chordal instruments might be panned wide, while leads and basses stay front and center. Something to think about.
  9. It's a good thing we did an album on it then... http://soniccd.ocremix.org/
  10. I listened to the clyp version, which said "new version". I really don't hear that much vibrato (if any) in the sustains, particularly at 1:29 - 1:41 and 1:56 - 2:07. Are you sure it's there? Did you solo the track and check? Vibrato CAN be added via pitch bending, and that tends to be more noticeable, but... I hardly hear any in those time stamps I mentioned. Note that I am not talking about the pinch squeals.
  11. Really smart partwriting. Engaging every step of the way.
  12. Just from a quick listen, I thought the drums stayed fairly static... they keep the same kind of half-time rhythm, with relatively few fills. Try using toms sometimes to make the pacing more engaging. The lead guitars could use more attention to detail for added realism. It seems like you may be using Shreddage? Some of the sustains are lacking life, especially at 1:56 - 2:07 with those straight eighth notes and limited vibrato. Adding pitch bends would help (i.e. bending up, or bending up/down for vibrato; but always bend up first, not down first), as well as visualizing when the guitarist would slide his hand on the neck, pick extra hard, etc. Zircon has a great tips video on that for Shreddage, if it helps at least in principle, and there's another example where pitch bends really bring the guitar to life, depending on what you want to write.
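      As a rough illustration of "bend up into the note, then vibrato", here's a small self-contained Python sketch that generates a pitch-bend curve like the one a MIDI editor would draw (the timing and depth values are hypothetical, and it assumes a whole-tone (+/-2 semitone) bend range; 14-bit MIDI pitch bend spans -8192 to +8191, with 0 = no bend):

```python
import math

BEND_MAX = 8191           # full upward bend in 14-bit MIDI
SEMITONE = BEND_MAX // 2  # one semitone, assuming a +/-2 semitone bend range

def bend_curve(n_steps=100, attack_frac=0.2, vibrato_depth=0.3, cycles=4):
    """Bend up into the note, then oscillate around it (vibrato).

    Returns a list of pitch-bend values, one per time step.
    """
    curve = []
    attack_steps = int(n_steps * attack_frac)
    for i in range(n_steps):
        if i < attack_steps:
            # Rise from a semitone below up to the target pitch.
            frac = i / attack_steps
            curve.append(int(-SEMITONE * (1 - frac)))
        else:
            # Vibrato: sine oscillation around the target pitch,
            # starting upward (bend up first, not down first).
            frac = (i - attack_steps) / (n_steps - attack_steps)
            curve.append(int(SEMITONE * vibrato_depth
                             * math.sin(2 * math.pi * cycles * frac)))
    return curve

print(bend_curve(n_steps=20))  # small preview of the bend shape
```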
  13. Hey guys, it's been a long time, but here's an atmospheric Final Fantasy mashup of FF4's Prelude and FF6's Terra with dubstep and light glitch elements! This was submitted to OCR on March 18, 2018. [EDIT: To show the timeline, it was approved August 31, 2018 for a direct post, so about 5 months in this case.] This was inspired by Stephen Anderson (stephen-anderson on soundcloud); it primarily uses bell and pad textures from Spectrasonics' Omnisphere 2, a Chapman Stick from Trilian, 4Front's TruePianos, and various FM sounds from my Zebra2 soundbank "FM Variations". Glitching was via Illformed's Glitch 2, and some of the remaining sounds were from LA Scoring Strings, Rhapsody Orchestral Percussion, and Serum. The solo violin was Embertone's Friedlander; highly recommend it!
  14. I dunno, I think the balance was fine; instead, I had a hard time understanding the vocals... I think the surrounding instrumentation was solid though.
  15. Starts out pretty conservative, but eh, I never really get tired of Dire Dire Docks remixes. There are plenty that I think you might want to hear to get inspired; here are a few:
      http://ocremix.org/remix/OCR00952
      http://ocremix.org/remix/OCR02909
      http://ocremix.org/remix/OCR03183
      What I think you could still work on, if you still like your arrangement as it is, is the humanization. Every note sounds quantized (rigid, robotic, computer-perfect rhythm), with very similar (if not identical) velocities/intensities. Try to imagine how it would be played on the piano, and emulate that by manually varying the velocities and rhythm (even slight rhythm adjustments help). In a syncopated 4/4 time signature like this one, you probably want to emphasize the "1", the "and" of "2", and the "and" of "3", followed by the "1" and the "and" of "2", of the main arpeggio, while the rest of the notes stay quieter in comparison, to create a typical phrasing. As an example, compare these to hear the difference:
      https://app.box.com/s/pmwybgad4who5679p9xvuxdnnmqshas5 - Completely quantized, identical velocities
      https://app.box.com/s/lr9nxha1zbg5vfcxufqqvuiz9vjnliyu - Quantized, but not identical velocities
      https://app.box.com/s/jjapuupib9zfypwoew1ecr31nlq8siw4 - Humanized
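      For what it's worth, here's a minimal Python sketch of that kind of humanization on a hypothetical note list (times in beats, velocities 0 - 127); the accent positions follow the "1", "and of 2", "and of 3" pattern described above, and the jitter amounts are arbitrary assumptions:

```python
import random

# Beat positions (within a 4/4 bar) to emphasize: the "1",
# the "and" of "2", and the "and" of "3".
ACCENTS = {0.0, 1.5, 2.5}

def humanize(notes, vel_jitter=6, time_jitter=0.01):
    """notes: list of (onset_in_beats, pitch, velocity) tuples."""
    out = []
    for onset, pitch, vel in notes:
        beat_in_bar = onset % 4.0
        # Accented positions get a boost; everything else sits quieter.
        vel += 12 if beat_in_bar in ACCENTS else -8
        # Small random variations so no two notes are identical.
        vel += random.randint(-vel_jitter, vel_jitter)
        onset += random.uniform(-time_jitter, time_jitter)
        out.append((round(onset, 4), pitch, max(1, min(127, vel))))
    return out

# Hypothetical quantized eighth-note arpeggio, all at velocity 100:
arp = [(i * 0.5, 60 + (i % 4) * 5, 100) for i in range(8)]
for note in humanize(arp):
    print(note)
```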
  16. Good luck! We don't get a lot of psytrance, last I recall, so it'll be nice to have some more!
  17. Not exactly offtopic, so I moved it to Community. @bLiNd has been a significant part of our community. Let's help him get back on his feet!
  18. Maybe just a minor gripe, but in the breakdown section you did, I personally found that the supersaw lead you used brought me out of the calm atmosphere that you introduced. How about switching that out for a tamer lead? It would also add variation in the choice of leads, which only helps your case. I don't have much to say otherwise, other than to second @Gario that the source usage isn't totally obvious, even though I know the source pretty well. It's in a minor key most of the time, and is also used in a segmented manner, so I would have to listen to the source side-by-side to be able to compare sometimes.
  19. For another perspective: whenever I collab, I ask upfront which of us will host things in a primary DAW (if in fact we are comfortable in different DAWs), depending on who feels better about mixing and project organization. That person handles the MIDI and plugin work, and "commits" to less when it comes to set ideas. I also find it makes it easier, at least for one of us, to keep the big picture of the project in view, since it isn't scattered between two DAWs.

      In terms of what happens in between, I openly recommend that we often send rendered WIPs each other's way for feedback before finally sending the newest versions of the tracks (rather than sending a track out of a lack of inspiration and having the other person fix it). That way, we minimize fudge factors in mixing, reworking a performance, etc., because finding an error in the middle of working on the track would be frustrating to fix. It also keeps the shared vision as clear as possible, so that we can both see what we both want.

      On the other hand, if two people in different DAWs are each writing something and sending WAVs to each other, it can eventually get confusing whose ideas are newer (say, if one of them decides to work ahead because they felt inspired); that's why I prefer to have one person driving a primary DAW between the collaborators. Furthermore, if it is done that first way (well-defined roles), ideas shouldn't feel as "set", even if you're bouncing WAVs to send to the other person: because one person writes in regular MIDI and the other bounces WAVs, that other person can tell pretty easily how new their ideas are (although admittedly they might feel they have less control), and can update what they have at any time, as long as the primary mixer doesn't mind slight adjustments based on the updates.

      -----

      You do what works for you, but that's how I tend to do it.
  20. Were you actually looking for a mod review? (If not, just pretend this is regular critique.)

      ARRANGEMENT

      I'm not very familiar with these sources, but after listening to this back and forth, it seems to be a fairly conservative arrangement. That's not necessarily an issue in and of itself, but in terms of OCR guidelines, a ReMix that sounds too close to the original generally doesn't demonstrate as much interpretation as it could. What I would suggest is deviation in (i) structure, (ii) rhythm, (iii) harmonies, and so on. I have a feeling you listened to the original and tried to be faithful to it, and ended up with very similar note sequences on each instrument; from what I can tell, the result isn't far from an instrument switch on a nearly exact transcription.

      A nice analogy is plagiarism (sans the connotation): if you look at someone else's work and just try to reword it, chances are you won't reword it much if you really like their wording. But if you don't have that person's work in front of you, and you try to come up with the same ideas in your own words, you have a much better chance of making a more original work that still satisfies the assignment.

      OVERALL

      I like the result personally, and I can appreciate the melodic variation you tried later in the track. The pacing is alright, and I didn't find it all that repetitive. [In terms of "would this be accepted on OCR?", though, probably not.]

      -----

      ADVICE?

      If I were remixing these two sources, my approach would be to listen to each source and internalize the melodies over a few days or weeks. From the melody, I would imagine my own harmonies, or perhaps try to hum one melody on top while the other source is playing. When I come up with something neat, then I start writing from scratch with no pre-loaded MIDI. As an easy example, try humming the "chorus" from the Super Mario 64 credits theme on top of the Pokemon GSC Goldenrod City theme. That's one I found recently!
  21. So... you're saying that people's ears got... worse... over time? o_o Even a person like me, who can't really stand pop music, can admit that whoever does the production for each pop artist generally does quite a pristine job. Anyway, I get your point that orchestration didn't require EQ for performances to the general public in the past, but we're talking about a digital context here. We can (and do!) use orchestral instruments in a digital setting now, and in that setting, EQ (along with volume and panning tweaks) is necessary to even out the orchestral performances, which have to be recorded before they can be produced into a song in the first place. An example that immediately comes to mind: maybe a contrabass recording picked up too much bass and would make the mix muddy (bass + reverb = mud!); then it would require some cutting to clear out the low end in the context of the mix.
  22. Or, you could actually try the advice first and make judgment calls afterwards... If you don't want the critique, then don't ask for it. If you never cut frequencies and only boost, I can almost guarantee that your music will sound resonant somewhere in the midrange. If you neither cut nor boost, you'll likely get midrange clutter that makes your lead compete for attention. I get that you can adjust volumes, velocities, and panning, but those are not the prominent issues here. In fact, even real orchestras need EQ to sound right. It's often not because the instruments were faulty... it's generally because the microphone(s) were, and EQ is often necessary to even out their frequency pickup.
  23. I always recommend cutting rather than boosting, if possible. Boosting tends to lead to clutter if you don't know where you actually want to boost and are just spitballing. If a sound becomes too hollow from cutting, slightly undo what you just did (with your mouse, not Ctrl+Z) until you find a balance. The point is to make room for other instruments in the context of the mix. It doesn't matter what they sound like by themselves, because instruments in a song are generally playing with other instruments. The harp does sound rather dull. I don't think EQ would fix that (unless for some reason you gave it a low pass or a 10 dB cut in the midrange or something... THAT can be undone), though a small dip in the low-midrange (2 - 4 dB) would help reduce its resonance. Or try a different soundfont. If you can find FluidR3 GM out there, that harp actually comes through mixes quite well.
  24. You would add your external hard drive as another directory for FL to search when you want to install a new plugin. When it finds the plugin, checkmark it and it should show up.