
How to achieve Feeling in your music.



I was going to try to post a more general overview of techniques, but that turned out to be way too involved for tonight, so I'm just going to talk about one specific thing and try to post more as time allows if this is at all helpful.

Ascending melodic leaps:

Generally speaking, upward leaps are frequently used in music that is intended to evoke an emotional response. This is probably because leaps stand out in melodies: stepwise motion is the norm. (If you compose a melody entirely out of steps, you might end up with a perfectly usable melody. If you compose it entirely out of leaps, it will likely sound disjointed.) This is maybe a subjective assessment on my part, but in melodies such as the Star Wars theme or "Somewhere Over the Rainbow," it's the melodic leaps that stand out to me.

So ...

There are some formulaic ways to deal with upward leaps. The first thing to consider is the interval of the leap. I don't have a lot to say about this other than to note that leaps of greater than an octave tend to sound very disconnected. Useful if you're trying to fragment the melody, not so useful otherwise.

More important is note behavior after the leap. Very often, the melody will step back down after the leap -- the leap is resolved down, in a sense. Some theories of music assert that the listener wants the gap that was created in the melody to be filled in, which is done by moving the melody back into the space that was leapt over. Knowing this, you can play with listener expectations by delaying this resolution or avoiding it altogether. Here are some examples of how this can work out.
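To make the gap-fill idea concrete, here's a rough sketch (my own invention, not from the post) that treats a melody as a list of MIDI note numbers and checks whether an upward leap is later filled in by motion back through the leapt-over space. The function names and the three-semitone leap threshold are assumptions for illustration.

```python
# Illustrative sketch: a melody as MIDI note numbers, checking whether
# each upward leap is later "filled in" by motion back through the
# leapt-over space. Names and thresholds here are my own invention.

def find_upward_leaps(melody, leap_size=3):
    """Return (index, interval) pairs for upward leaps of leap_size semitones or more."""
    return [(i, b - a)
            for i, (a, b) in enumerate(zip(melody, melody[1:]))
            if b - a >= leap_size]

def leap_is_filled(melody, leap_index):
    """True if, after the leap at leap_index, the melody moves back
    into the space between the two leap notes."""
    low, high = melody[leap_index], melody[leap_index + 1]
    return any(low < note < high for note in melody[leap_index + 2:])

# Opening of "Somewhere Over the Rainbow" in C: an octave leap (C4 to C5)
# that is then resolved downward through the leapt-over space.
melody = [60, 72, 71, 67, 69, 71, 72]
leaps = find_upward_leaps(melody)
print(leaps)                      # [(0, 12)] -- the octave leap
print(leap_is_filled(melody, 0))  # True -- the B, G, A fill the gap
```

A melody like `[60, 67, 60, 67]`, which leaps up and then drops straight back to its starting note, would return `False` from `leap_is_filled` -- the unresolved, hanging quality discussed further down.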


Lot of upward leaps, but they aren't really resolved down in any conclusive way until the ends of sections, and in some cases the melody continues upward after the leap. The emotional effect here is one of openness, adventure, and boldness in the face of the unknown. If the leaps had been clearly resolved down, I think the music would sound too stable. Compare this with another John Williams theme --


Much more assertive-sounding than the Indy theme. Part of this comes from the downward-resolved leaps. Because the leaps are resolved, the music doesn't have the upward momentum of the Indy theme, and it lacks Indy's sense of the unknown.

(Zelda):

Similar to the Indy theme. An upward leap is played twice and then continues up and doodles around a bit before resolving back into the leapt-over space. Again, the music conveys a sense of the unknown.

(Secret of Mana):

Both of these feature upward leaps that leap back down to where they started without resolving into the leapt-over space. This leaves the leapt-to note hanging in the air -- the leap is superfluous and unfulfilled -- and this is especially effective for sad or wistful music, or anywhere that a sense of unfulfillment is appropriate.

A particular form of the upward leap -- the appoggiatura -- occurs when the leapt-to note occurs on the beat, isn't a member of the present harmony, and resolves down by step into the harmony. The listener's expectation that a leapt-to non-chord tone will resolve down by step into the harmony as an appoggiatura is very strong, so if the leap you're dealing with is a potential appoggiatura, it may be an especially good candidate for the sort of playing-with-expectations treatment discussed above.
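The appoggiatura conditions above can be formalized as a small checklist. This is my own sketch (pitches as MIDI numbers, the harmony as a set of pitch classes 0-11); the function name and signature are invented for illustration.

```python
# A rough formalization of the appoggiatura conditions described above:
# the leapt-to note falls on the beat, is not a member of the current
# harmony, and resolves down by step onto a chord tone.

def is_appoggiatura(leapt_to, next_note, chord_pitch_classes, on_beat):
    """Check the three appoggiatura conditions for a leapt-to note."""
    non_chord = leapt_to % 12 not in chord_pitch_classes
    steps_down = 1 <= leapt_to - next_note <= 2      # down by half or whole step
    resolves_into_harmony = next_note % 12 in chord_pitch_classes
    return on_beat and non_chord and steps_down and resolves_into_harmony

# Over a C major triad (pitch classes C=0, E=4, G=7):
# leap up to A on the beat, then resolve down by step to G.
print(is_appoggiatura(69, 67, {0, 4, 7}, on_beat=True))   # True
# The same A left by leap (down to E) instead does not qualify.
print(is_appoggiatura(69, 64, {0, 4, 7}, on_beat=True))   # False
```

If a leapt-to note passes all three checks, the listener's pull toward the stepwise resolution is strong, which is exactly what makes delaying or denying it expressive.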


Realism in VIs is about programming.

Programming means not merely automation, which is important, but also articulation.

Realism with samples is about blending together several articulations or patches to create a total performance.

It is about employing automation not just with volume curves, but with patches that respond to dynamic cross-fading via MIDI CC1.

A trombone sounds warm and round at lower dynamic levels (p, mp) and sounds forceful, aggressive, and even flat and farty at higher dynamic levels (mf, f, ff+).

A realistic phrase with a trombone requires you to employ a greater range of dynamic performance than CC7 (Volume) alone can provide.

You need to use Modulation (MIDI CC1); in EWQLSO, the patches that respond to MIDI CC1 are the XFD and DXF patches.
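A dynamic crossfade is ultimately just a stream of CC1 values drawn over the phrase. As a minimal sketch (pure Python, not tied to any DAW or sample library -- the function name and the rough p/ff values are my own assumptions), here's what a linear crescendo curve looks like as controller data:

```python
# Sketch of a CC1 crescendo curve for a dynamic-crossfade (XFD/DXF-style)
# patch. This only computes the controller values; how you get them into
# your sequencer (drawing them in, or via a MIDI library) is up to you.

def cc1_ramp(start, end, steps):
    """Evenly spaced MIDI CC1 values from start to end, clamped to 0-127."""
    values = []
    for i in range(steps):
        v = start + (end - start) * i / (steps - 1)
        values.append(max(0, min(127, round(v))))
    return values

# A trombone swell from roughly p (~40) to ff (~120) over 16 control points:
swell = cc1_ramp(40, 120, 16)
print(swell[0], swell[-1])   # 40 120
```

In practice you'd shape the curve by ear rather than use a straight line, but the point stands: riding CC1 changes the timbre of an XFD patch (warm and round toward forceful and brassy), whereas riding CC7 only changes the loudness.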

When notes in a phrase are short, switching from a long, sustained patch to a short staccato patch can be more appropriate sounding.
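The long-vs-short patch choice can be thought of as a simple routing rule. This toy illustration (names and the half-beat threshold are my own, not from any particular library) just makes the decision explicit:

```python
# Toy illustration of the sustain-vs-staccato patch choice:
# route each note to a patch based on its duration.

def choose_patch(duration_beats, threshold=0.5):
    """Notes shorter than the threshold go to the staccato patch."""
    return "staccato" if duration_beats < threshold else "sustain"

phrase = [2.0, 0.25, 0.25, 1.0]   # note lengths in beats
print([choose_patch(d) for d in phrase])
# ['sustain', 'staccato', 'staccato', 'sustain']
```

In a real template this routing is usually done with keyswitches or separate tracks per articulation, and the threshold depends on tempo and the instrument, but thinking of it as a rule like this keeps the programming consistent across a phrase.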

To program realistic VI performances, you must THINK like a player and IMAGINE what a player is doing, what a player CAN and CANNOT do.

Feeling and emotion in creation requires feeling that emotion and articulating that feeling through music.

If there is music that MAKES you feel that emotion, then that is how your brain associates that feeling to music and that is what will work for you--to make you feel something.

If your brain does not associate feeling and music, then you probably will not be able to effectively articulate feeling in music.

Critical listening is the only effective way to engage these two concepts -- feeling and realism -- otherwise you will be relegated to trial and error, which will take a long, long time. The reason is that most of what makes feeling and realism possible in VI programming cannot be easily quantified.


I didn't see these last two posts until just now. I had something else I was going to try here and I'll need to get to those after my vacation tomorrow and Friday.

This is something I threw together in roughly 20-30 minutes using EWQLSO and FL Studio's volume automation. Apart from the mixing, which I didn't pay that much attention to, how close am I to this being somewhat convincing or something you'd hear in a generic JRPG?


I mean like flashing pictures. Different flash rates will produce different alpha waves and such, and if the flash rates are irregular they can produce even more emotions. Now, how do you apply this stuff to music?

Also, my cousin and I downloaded a frequency generator one day. We discovered certain frequency intervals that were extremely calming and others that were jarring. So this involves the timbre of an instrument, except the instrument we were using was basically a simple beep (I think you could make it a sine wave or something). So even though you may have two instruments playing the same note, you can still drastically alter the mood by changing the instrument's frequency.

Neither of these is pertinent to Meteo's question. You can try to apply the former to your music, but aside from screwing with people's perception of rhythm I don't think it'll do much.

The latter is related to timbre and basic harmony. Look up harmonics and overtones.


I don't get what the big deal is, I can't find anything wrong with it. Although if I were to remix it, I'd add some contrasting overtones after the second repeat.


I agree with the above; that doesn't sound bad or fake. Part of it could be that you know it's a fake, a synth, so you'll always hear it as such. The same goes for other people who've used or heard a particular synth: they know from working with it that it's fake, so no matter how well it's done it will always sound fake to them.

Ascending melodic leaps

I LOVE this kind of stuff, this is exactly what I want. Please do more as you mentioned.

I've found some things in the same vein online; they inspire me to write melodies that are interesting, at least to me.

http://ase.tufts.edu/psychology/music-cognition/emotion2009.html

http://www.wmich.edu/mus-theo/courses/keys.html


I think part of the problem there, Meteo, is that all of the string sustains sound exactly the same in articulation. EXACTLY the same.

It's not bad by any means and I don't even think OCR judges would bite on that if it were in a remix submission. It's the kind of thing that's fake if you look into it but real enough so that you shouldn't care.


Awesome - now we're getting somewhere! Granted, that was just a 20-30 minute little file, and I was just using the same double bass, cello, and violin samples alongside the harp and flute. Mostly my goal there was to figure out whether I was doing the volume automation correctly to make it less stiff; I hadn't graduated to the line of thinking (or computer resources) to use and blend several articulations of the same instrument in a song.

Also, special thanks to WXVAVA for providing EXACTLY what I was looking for (still not sure why that was so hard, but beggars can't be grumblers) and to Moseph for a much better and more tangible explanation of leaps. I will put these to use.

