
Humanize the Cello More



Hey everyone.

In all the remixes I've worked on in the past couple years, one recurring comment I've received is that the instrument sounds don't sound realistic. (For instance, my current remix in progress, http://ocremix.org/forums/showthread.php?t=49125 , where the cello is the issue.) I can sort of see what they're talking about, insofar as I wouldn't think "oh, that cello sounds like a live performer!", but I also can't put my finger on anything in particular that sounds wrong with it.

My first question is this. OC ReMix has a policy (at least, people on the forums will cite this) that you don't have to pay for expensive virtual instruments to make a posted ReMix. I tried that a few years ago; the first draft of Melodies of Mabe Village (http://ocremix.org/remix/OCR02951) was played on a bunch of free SoundFonts, and it didn't even make it to the judges panel. So I did buy some expensive virtual instruments--the EastWest Complete Composers Collection (the first one). Eventually I did get that ReMix posted, but I've struggled this whole time. And I'm not expecting the price tag on the samples to excuse me from doing any work on humanization: I do put in a large amount of humanization--for that cello above, for instance, each note has a custom articulation, with volume CCs throughout. But evidently this isn't good enough; I just get "humanize the cello more". My question is, if free SoundFonts with little-to-no humanization are "supposed to" be good enough, how are professional cinema-quality samples with extensive humanization not good enough? Instead of "humanize the cello more", they must mean "humanize the cello differently"--but I don't know how to do that. For that matter, how do the pros do it? I can't imagine someone who writes film scores 8 hours a day sitting there fiddling with CCs and scrolling through articulations trying to find one that doesn't sound awful. They must just write in the notes, the computer plays them, and that's that. Do they have some sort of middleware that "plays" the virtual instrument, sending it articulation and dynamics data? Or do they just keep buying more and more expensive virtual instruments until their parts sound good?

My second question is, since I don't think I can do much better with that cello on that piece, and since the Mod Review recommended I try to get a live performer, where would I find one of those?


I'm no expert by any means but I can share some thoughts with you. Sorry for the long post :P

First, your second question because it's easier to answer. What I do when I need a live performer is click on the 'WORKSHOP' menu item on this very site. On that page you'll see a list of skills on the left. Click 'cello' there and you'll get a list of people that can play the cello on the right. This list is ordered by the collab option you can set in your profile; the ones that indicate they want to collab are at the top (marked in green), the ones that are a 'maybe' are after that (orange), and the ones that give a clear 'no' are at the bottom.

What works for me: go to that list, check the people marked in green, see if they've been active in the last weeks/months, try to find some of their work to get an idea of whether it matches what you're going for, and just contact them through a PM. This approach has worked well for me so far ;)

Now, your first question. In general, expressive and lively instruments like this (cello, violin, lead guitar, sax, and mouth harps come to mind) are tricky beasts to get right with just virtual instruments. That's why people suggest a live performer for this realistic, human feel.

You can get that realistic and human feel with a virtual instrument, but it depends on a few things. Sample quality is one of them and can be a big help: good, expressive samples with decent velocity scaling, envelopes, round robins, and controls really can do wonders. But beyond good samples you also need quite a bit of knowledge of how the instrument works and sounds and what you can and can't do with it. A simple example is a woodwind or brass instrument: those people need to breathe every once in a while, and if you don't have any natural breathing pauses in your writing, it won't feel right. This also involves lots of fine-grained, detailed MIDI automation to control dynamics, expressiveness, velocity changes, transitions, slides, and whatnot. It's a lot of work, and you need a good understanding of the instrument itself.
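To make the breathing point concrete, here's a toy sketch that flags wind phrases too long to play in one breath. The 8-beat limit and the (start_beat, end_beat) format are my own assumptions for the example, not from any real tool:

```python
def flag_breathless_phrases(phrases, max_breath_beats=8.0):
    """Return indices of phrases too long to play in a single breath.

    phrases: list of (start_beat, end_beat) tuples for one wind part.
    max_breath_beats is an assumed comfortable limit; in reality it
    depends on tempo, dynamics, and the player."""
    return [i for i, (start, end) in enumerate(phrases)
            if end - start > max_breath_beats]

# A 12-beat phrase with no rest gets flagged:
problems = flag_breathless_phrases([(0, 4), (4.5, 16.5), (17, 20)])
```

If this returns anything, the fix is usually in the writing, not the samples: shorten the phrase or add a rest where a player would breathe.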

You can probably still get a remix posted with just free Soundfonts, but I think that the style you pick makes the difference here. Orchestral stuff with just free Soundfonts? Gonna be very tricky and a whole lot of extra work. A more electronic genre will be much more likely to pass.


From personal experience, when you want to get a solo instrument - specifically strings - to sound good (as in realistically good), it's incredibly important to do more than just articulations and CCs. It's about the timing, the swells of expression, the up-bows and down-bows, the human qualities a string player would inject. Doing all that at a computer, facing a screen of options and hundreds of possible bad choices, is obviously pretty tough. I still tend to avoid solo string instruments other than violin because of this difficulty.

There's a researcher in the music department of my university who is currently working on physically modelling string instruments, specifically to create a more realistic, human feel in the sound. But the basic gist is that an actual performer does a lot of things subconsciously, and modelling those is very, very difficult.


Just did a quick listen to your track and have some things to add that I hope you find useful.

First of all, the cello sounds a bit thin; in my n00b mind a cello has more body.

Also, as far as I understand the strings section, violinists and cellists tend to have a natural swell and drop in dynamics when they play a note: ramping up from soft to louder at the beginning of a note and dropping back down from louder to softer at the end. Of course that also causes tonal differences, and if your samples don't support that, it can feel uncanny.
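That swell-and-decay shape can be roughed out as a CC11 (expression) curve per note. A minimal plain-Python sketch (the 480-tick note length, 25% ramps, and the 60-100 value range are arbitrary assumptions, just to show the shape):

```python
def swell_cc11(note_len_ticks, peak=100, floor=60, ramp_frac=0.25):
    """Build (tick, value) pairs for a CC11 swell: rise, hold, fall."""
    ramp = max(1, int(note_len_ticks * ramp_frac))
    events = []
    # rise from floor to peak over the first ramp_frac of the note
    for t in range(ramp):
        events.append((t, floor + (peak - floor) * t // ramp))
    # hold at peak, then fall back down over the final ramp_frac
    events.append((ramp, peak))
    for t in range(ramp):
        events.append((note_len_ticks - ramp + t, peak - (peak - floor) * t // ramp))
    return events

curve = swell_cc11(480)  # one note, 480 ticks long (e.g. a quarter at 480 PPQ)
```

In a real DAW you'd draw this with the pencil tool or ride the mod wheel; the point is just that every sustained note gets its own little arc rather than a flat level.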

Not saying I can do a realistic virtual cello, but I'm willing to give it a shot. Feel free to PM me the MIDI part for a try with a decent sample library (https://soundcloud.com/embertone/blakus-blakus-cello-improv).


...if free SoundFonts with little-to-no humanization are "supposed to" be good enough, how are professional cinema-quality samples with extensive humanization not good enough? ... For that matter, how do the pros do it?...

I will answer this in two parts:

1) Understand what "humanization" actually means. Humanization is not articulations, velocities, and CC data. Humanization is the process of *using* those tools for an end goal, which is to create a performance which ceases to be fake in the perception of the listener. It doesn't matter how "much" or how "little" you use articulations, velocities, and CC. It matters how "human" it is.

I can create the most complex MIDI performance using keyswitches, velocity ranges, mod CC data, expression, vibrato control, etc... and still have it sound like shit. What matters is not VARIETY but QUALITY. If you have sample libraries at your disposal, and you're still not making human performances, it's because you don't know what it *should* sound like, not because you're not working hard enough.

There are easy ways to fix this problem. The obvious one: go listen to real cello performers playing music. Try looking for solo, virtuosic stuff; for cello, a performance of Bach's Prelude in G Major will pretty much tell you all the basics of what a cello sounds like at its most expressive. Another: look on YouTube for composers who stream their orchestral production work. You'll be able to get glimpses of their MIDI curves and keyswitch habits.

2) As for how do the pros do it, you are 100% off. It's the complete opposite of what you say!

Professionals are professionals because they've mastered using their tools in order to create realistic mock-ups (a mock-up is basically what we all do on OCR, which is creating computer generated performances). You say professionals might have special shortcuts that do the work for them, and that couldn't be further from the truth.

It's *BECAUSE* they work so hard, spending hours on CC data and articulation selection, that they are very good and very successful at what they do. There are no professionals in the film/TV/game scene who can't produce good mock-ups; they don't exist, because they don't get jobs.

So unfortunately, there are no shortcuts. But on the bright side, this should be encouraging to you; it means all your frustration and practice is not in vain. You are indeed learning the "correct" method of humanization, so to speak. As for your undesirable ratio of work to results, refer to my point number 1. You're working too hard on creating performances that are subpar, which means you need to realign how you think the performance should sound. It starts with your musical mind, and ends in the computer, not the other way around.

Now that I've addressed the part of the humanization that occurs before you touch your computer, I'll address what actually happens in the DAW.

Here are the basics:

Articulations - Staccato, pizzicato, legato, etc. These will be keyswitchable in modern libraries, so you can change them all on the fly in one MIDI track.

CC Data - Mod (CC1) is usually a velocity crossfade in modern libraries. That means the mod wheel doesn't just make the note louder; the instrument itself changes dynamic. On a French horn, for example, it would get buzzier.

Legato scripting - Believe it or not, this is actually more important than all the other stuff combined, in terms of humanization. Legato scripting is when the instrument plays actual note transitions as your MIDI notes cross into each other. Good legato scripting is the cornerstone of a realistic solo performance, and that Blakus cello Jorito mentioned does it very well. An alternative is "Tina Guo Cello Legato" from CineSamples. The reason for it is simply that when you have long notes, human performers will always connect them. Hammer-ons, pull-offs, glissandos, slurs, slides, bow changes, re-tonguing, etc.: all of these are legato types in sample libraries. Top-of-the-line legato scripts will have selectable types, switched by keyswitch, velocity, CC, etc. For instance, EW QL Solo Violin's legato patch changes its legato type on the mod wheel: it starts normal, then changes to quick bow changes (for faster notes and scale runs), and finally big slurs (which sound *gorgeous* on melodic leaps, like up a 6th).
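To make the keyswitch mechanics above concrete, here's a rough sketch of how a sequenced track interleaves short keyswitch notes with the musical notes. The note numbers in KEYSWITCHES are made up for the example; every library defines its own map:

```python
# Hypothetical keyswitch map; check your library's manual for the real one.
KEYSWITCHES = {"sustain": 24, "staccato": 25, "legato": 26}  # low MIDI notes

def with_keyswitch(articulation, notes):
    """Prepend a short keyswitch note so the library changes articulation
    before the musical notes sound. Events are (tick, pitch, length) tuples."""
    ks = KEYSWITCHES[articulation]
    first_tick = notes[0][0]
    # fire the keyswitch slightly before the first note (clamped at tick 0)
    return [(max(0, first_tick - 10), ks, 5)] + list(notes)

phrase = with_keyswitch("legato", [(0, 60, 480), (480, 62, 480)])
```

Because the keyswitches live on the same track as the notes, you can change articulation mid-phrase just by dropping in another keyswitch note.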

In essence, it's hard to explain, but good legato will usually mask other flaws of a mock-up (the exception being glaring issues, like repetitive brass stabs, but those are easy to spot as flaws). This is why sometimes when people have fake choirs, they'll just get one soprano to sing over it, and it suddenly sounds like a full real choir. Play a good solo violin over a fake ensemble and it sounds amazing. Etc. With better solo sample libraries, we can even just layer a good solo library over a so-so ensemble one; the same trick works. :)

If you don't have good (or any) legato scripting in your sample library and you're trying to create an expressive performance, then you need to do it the old-fashioned way: blend notes into each other and use clever volume fades and keyswitches to make it sound decent. This approach is archaic, and though people got good at it, it's no match for true legato scripting, which professionals use nowadays except in rare cases where they need something specific from an older library.
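The old-fashioned approach can be sketched as overlapping each note's tail slightly into the next note's onset so the samples blend, which is a crude approximation of what a true legato script does for you. A minimal sketch (tick-based (start, pitch, length) tuples and the 30-tick overlap are assumptions):

```python
def fake_legato(notes, overlap=30):
    """Extend each note past the next note's onset so the samples blend,
    roughly imitating a scripted legato transition.
    notes: list of (start_tick, pitch, length_ticks), in time order."""
    out = []
    for i, (start, pitch, length) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            # overlap this note's release into the next note's attack
            length = max(length, next_start - start + overlap)
        out.append((start, pitch, length))
    return out

line = fake_legato([(0, 60, 480), (480, 62, 480), (960, 64, 480)])
```

In practice you'd pair this with small volume fades across each overlap so the crossfade isn't audible as two separate samples.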

*FINALLY*, the bar for humanization is a lot lower for OverClocked ReMix than it is in professional settings, so take all this stuff as very idealistic prattle for the best possible, state-of-the-art mock-up production. If you get a handle on it, you'll pass OCR's standards with flying colors. If you don't, you still have room to maneuver; for example, if you don't have legato scripting it's not the end of the world. But you still need to pretend you do using CC.

Or find a live performer and never worry about a thing.

Edited by Neblix


This is something I get feedback on a lot. Some helpful advice I got that I am now using: slow down the piece - wayyy down, if you're not a keyboard player - and play the parts by hand. That will help the humanization from the timing aspect at least, i.e. so you don't end up having really fast, super perfect violin runs (which is something I had to fix by using the aforementioned method). Hopefully that helps some. I also agree with what Neblix said though - you have to think about it musically before you think about it digitally. Even then it's really hard to do, but if you compare what you come up with with what you WANT it to sound like, it should give you at least a rough idea of the direction you should head in. Good luck!
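If you can't play the part in at all, a crude stand-in for hand-played looseness is to nudge quantized notes slightly off the grid and vary their velocities. This is only a rough sketch (the tick/velocity jitter amounts are arbitrary), and it's no substitute for actually playing the part as suggested above:

```python
import random

def loosen(notes, timing_jitter=8, vel_jitter=6, seed=42):
    """Nudge quantized notes off the grid and vary velocity slightly.
    notes: list of (start_tick, pitch, velocity) tuples."""
    rng = random.Random(seed)  # fixed seed so renders are repeatable
    out = []
    for start, pitch, vel in notes:
        start = max(0, start + rng.randint(-timing_jitter, timing_jitter))
        vel = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((start, pitch, vel))
    return out

grid = [(0, 60, 90), (480, 62, 90), (960, 64, 90)]
loose = loosen(grid)
```

Note that pure randomness only removes the robotic perfection; real players drift in musically motivated ways (rushing runs, leaning on downbeats), which is why playing it in by hand at a slow tempo works so much better.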


I forgot to mention this, because it was common sense to me, but I realized it isn't to everyone.

If you're writing a cello part, WRITE FOR A CELLO. :tomatoface:

Don't exceed the range of the cello. Don't use a lot of notes that cello players would be uncomfortable playing (like don't write double or triple stops, which are bows across multiple strings, for chords that a cello performer literally can't finger).

An aspect of humanization is human limitation. If you're not limiting your computer cello the way a real cellist is hindered by physics, anatomy, etc. it's going to sound fake. Humanization starts in the notes and durations.
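A simple sanity check along these lines is to test every pitch in the part against the instrument's playable range. The bounds below are assumptions for the sketch: C2 (MIDI 36) is the cello's open C string, and I've used roughly C6 (MIDI 84) as a practical ceiling, though the true upper limit varies by player and context:

```python
CELLO_LOW = 36   # C2, the cello's open C string
CELLO_HIGH = 84  # ~C6; a rough practical ceiling, not a hard limit

def check_cello_part(pitches):
    """Return the MIDI pitches that fall outside a rough cello range."""
    return [p for p in pitches if not (CELLO_LOW <= p <= CELLO_HIGH)]

bad = check_cello_part([36, 48, 60, 90, 30])  # 90 and 30 are out of range
```

Range is only the first filter, of course: a passage can sit entirely inside the range and still be unplayable (impossible double stops, no time to shift), which is where the orchestration knowledge comes in.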

This is an orchestration book that will explain to you the role, tone, dynamic range, articulation set, etc. of most classical instruments. When I say it starts with the musical mind, I actually mean it literally does. You need to have a solid understanding of instrumentation if you want to humanize.

Edited by Neblix

Neblix has some good points. Some further explanation:

The "archaic" way he's talking about may also be called a form of a "stitching" method of sequencing, where you layer a bunch of suitable articulations that work together to make a more realistic performance overall. What I mean is, if you have a solo cello part playing by itself (btw, this isn't redundant!), it'll be fully exposed.

Similar to what Neblix mentioned, while layering a real solo violin performance over a fake ensemble performance covers up the fakeness somewhat, you can also thicken up the overall sound by layering in various suitable solo cello articulations, and the sum total "hides" the fakeness of the performance a bit. It doesn't have to sound like an ensemble as a result; in fact it should still sound like your intention---a solo cello sound. The only difference is, it's a layered solo cello sound, which is in a sense "smoothing out" the fakeness. This is something Chimpazilla mentioned here, as well: http://ocremix.org/forums/showthread.php?t=46437

...The solo cello is sequenced quite well which is not easy to do (I can hear many different articulations being used/layered)...

Also, "at OCR", the realism we shoot for is just so the average person can't tell it's fake. But the realism of the performance itself, while ideal, is not absolutely crucial. If the instrument isn't exposed enough, there is a little leeway as to how real it truly sounds. Something can sound slightly fake exposed, but real enough in context that it's 'passable' by OCR standards. Because we don't want it to be impossible. :<

---

Yup, the 'professionals' do it fast merely because they have gotten used to doing it a certain way that happens to sound good (and they're aware that it does). But, it's no surprise to us that it's just careful manipulation of MIDI CC1 and 11 (among others), keyswitches potentially, and other automation.

if free SoundFonts with little-to-no humanization are "supposed to" be good enough, how are professional cinema-quality samples with extensive humanization not good enough?

Well, that's the issue here: there isn't "little-to-no humanization". It all depends on the context. With poor samples, if the arrangement is exceptional, production isn't quite as emphasized (but is still quite important). You aren't expected to sound as good as real samples (tonally) by any means with free soundfonts; someone good enough can tell the difference pretty quickly. What you'd have to work on here is the tonal realism through layering, and the performance realism, also through layering.

With great samples, the expectation is that you can get them to sound great and realistic enough because you have greater resources than those free soundfonts and because it's already tonally realistic. What you'd have to work on is the performance realism (and tonal realism, to a lesser extent), so you're halfway there. It's a little harder to tell what issues there are with good samples because they have realistic tones in themselves, but nevertheless, they should be treated with enough care that they convey a realistic performance, ideally.

I'm not saying it's necessarily super-duper easy with any good samples, nor am I saying there are fewer expectations for soundfonts, but you can't expect to apply the same approach to soundfonts as to full sample libraries.

Edited by timaeus222

I'm not saying it's easier with good samples,

But... you can sort of say that, though, because sample libraries are starting to be just a tad more foolproof in their usage, favoring tailored single-patch default behavior instead of completely granular, meticulous multi-patch behavior.

For example, I have to do almost nothing except mod wheel for my CineWinds stuff. The legato scripting is just that good. :tomatoface:


...sample libraries are starting to be just a tad more foolproof in their usage... The legato scripting is just that good...

developers and their drive for efficiency

:roll:

Edited by timaeus222

Thanks everyone for your extensive replies! I will try to address each of the points, but I might miss something. Since some of the points were restated in different ways, instead of quoting a particular person I will just summarize.

Getting solo expressive instruments to sound good is hard.

Yeah, I noticed! :-P Not gonna give up though.

It's not about quantity of humanization, but how well it gets the instrument to sound realistic. Listen to an actual cello once in a while to remind yourself how they sound. Don't write parts that a real person couldn't play.

In a general way, this isn't too much of a problem for me; I was classically trained on oboe, and I've been in a variety of bands and orchestras over the years. I've also written a few string quartets, two of which were performed by live musicians at my previous college. However, it wouldn't hurt to spend some more time listening to this kind of music.

OCR standards are not as high as in the real world of game/film music

I'm not so sure about this. I've heard arrangements of game music by people who do make a living from arranging music, which didn't sound as good as most recent OC ReMixes. There's a handful of ReMixers whose music I know outside OCR as well; only their very best tracks are posted here. Finally, there's people like Jake Kaufman who do excellent work professionally, and then come here to drop a bombshell like The Impresario. My impression is that if someone can consistently pass the judges panel here, they can make it in the real world.

Pros do all the tedious work too, they're just more practiced at it

I suppose. I would wonder how a professional would remember all the best settings for hundreds of instrument plugins--but I guess there's only a handful they use on a regular basis, and they'd be used to figuring out the others. But I am happy to see, as Neblix pointed out, that there's movement in the right direction with this.

The details: legato scripting, etc.

I do have the EWQL Solo Violin (the version in the Gypsy pack), and I've used it--it is very nice. However, all the other instruments in the collection I have don't work like that. You get a choice of the "Keyswitch Master" configuration, where each articulation is switchable; the "Elements" configuration, where you can mix different articulations (but that mixture is unchangeable!); and the "DXF Mod Wheel" configuration, where the mod wheel controls expression (but there are no articulations). I usually use the "Master" configuration, since that gives me the most flexibility; but that means the mod wheel is unavailable, and it's impossible to combine articulations.

This configuration does include several legato scripted articulations, but they're all just a different starting note followed by the same legato notes. For instance using "Exp Leg" the first note is the "Exp Long" articulation and all the rest are the legato notes; using "Lyr Leg" the first note is the "Lyrical" articulation and all the rest are the legato notes (same as the others).

Most of the time what happens is I write in a part, pick the articulations that would seem to fit, throw in some CC 11, listen to it. It doesn't sound like a real person; I pick a note that sticks out, and scroll through articulations until I find one that doesn't sound as bad, and repeat. The result is something that doesn't sound as bad as it did to start, but it also doesn't sound really good.


...My impression is that if someone can consistently pass the judges panel here, they can make it in the real world...

While OCR is great as a community and holds reasonably high standards, I don't consider them higher than film music standards. For example, OCR doesn't discriminate between stereo music and surround-sound music (though I suppose surround sound might have some stereo phase cancellations on regular setups that may pose issues...), but film music has within its standards both a surround-sound mixdown (for theaters) AND a regular stereo mixdown (for DVD, Blu-ray, etc.). Also, I have hardly ever seen OCR judges discuss sub-bass as a major point when it comes to passing a mix (if it has happened, feel free to post it here), but sub-bass is well-considered in film music.

Not to sound like an 'elitist snob' or anything, but here are some examples of film music that I think transcend OCR standards (in other words, if they were valid, source/original-balanced ReMixes of actual VGM, they would pass with flying colors):

https://soundcloud.com/stephen-anderson/unity

https://soundcloud.com/stephen-anderson/the-black-flag

https://soundcloud.com/stephen-anderson/i-am-the-sentinel-credits

The point of this is, you don't have to push yourself extremely hard to pass something on OCR---but if you happen to push yourself to make "film-quality" music, passing on OCR production-wise may not be as hard, because you'd be pushing yourself further than you need to.

Edited by timaeus222

Everyone's standards are different. OCR tends to be less picky than a AAA game studio (which might well just get a live orchestra) but much more picky than a "professional" who makes a living from indie game soundtracks and/or YouTube ad revenue. They seem to be much less picky about actual live instrumental playing and singing (although they're likely to be picky about the recording quality or soundscape of live performances).

There are posted remixes done entirely with free orchestral samples, but the remixers who did them are really, really good at what they do.

The next time I have time to remix, I'm considering trying an entirely 16-bit orchestra piece. Far too fake for anyone to expect humanization, but close enough to real instruments that one should be able to emulate the same kind of music and get the intended feel out of it. I haven't seen any remixes like that before, but if pure chiptunes can pass, maybe this might work.


Regarding the question of OC ReMix standards:

surround-sound... sub bass

If those are the only two factors you can think of offhand that separate OCR from film music, that's plenty high-quality for me! Correct me if I'm wrong, but those seem to be relatively minor factors that someone would be able to pick up quickly if they had experience and skill with everything else.

OCR tends to be less picky than a AAA game studio (which might well just get a live orchestra) but much more picky than a "professional" who makes a living from indie game soundtracks and/or YouTube ad revenue.

Well said. I think I'm almost at the latter level, of course nowhere near the former--but I want to be as good as I can be. The only way I can see myself "pushing myself too hard" would be giving up because my (or OCR's) standards were too high and I got frustrated.

some examples of film music that I think transcends the OCR standards

I can't listen to these right now, but of course there will be--just like there will be original music of all kinds that goes way beyond OCR standards, and game music that goes beyond OCR standards--and occasionally even OC ReMixes that do! (To avoid namedropping "The Impresario" again, I'll go with "Lullaby of the Sky".)

There are posted remixes done entirely with free orchestral samples

There are posted remixes (from a long time ago) that are played directly on free soundfonts with no post-production, and even a few that use samples from Microsoft GS Wavetable SW Synth (the default MIDI instrument in Windows). I don't by any means blame those ReMixers; in many cases I like the ReMix anyway (I have the very harp soundfont that DarkeSword used for the opening of "Legendary Hero", which I think is the #2 or #3 most played OC ReMix counting YouTube reuploads), and better technology was not as available (nor as good, of course) back then. But nowadays things have changed.

I'm considering trying an entirely 16-bit orchestra piece

I've been doing a lot of 16-bit arranging over the last few months, since I've been doing music for Ocarina of Time 2D. I don't think a remix made that way will get you posted, but it's a lot of fun.

Regarding the question of how to humanize:

Basically, I feel like I'm usually in a position where I've worked on a part and gotten it to sound as least-bad as I can, but it still doesn't really sound good. Like, I'm picking articulations, they all sound awful in context except for one, and that one sounds okay but not great; so I pick it, sweeten it with a little CC dynamics, and move on, because I don't have much other choice. Has anyone else felt similarly about a virtual instrument they're using? I think at this point I'm just going to look into a live performer, but a few months down the line I might be open to buying some more virtual instruments (not more than a couple hundred dollars, though), if anyone has suggestions.


What matters is if the people you are trying to write music for like your music and think it's suitable for the project or not. If they're planning on recording a live orchestra in a world class studio with mixing and mastering engineers of 30 years experience anyway, they're probably going to be a bit more lenient on how realistic that oboe in your mock up sounds.


I'm not so sure about this. I've heard arrangements of game music by people who make a living from arranging music, and they didn't sound as good as most recent OC ReMixes. There are a handful of ReMixers whose music I know outside OCR as well; only their very best tracks are posted here. Finally, there are people like Jake Kaufman who do excellent work professionally, and then come here and drop a bombshell like The Impresario. My impression is that if someone can consistently pass the judges panel here, they can make it in the real world.

I am completely sure about it. OCR standards are not nearly as high as production in the real world. Just because OCR greats have good production doesn't mean the bar is set for their level.

This kind of thing is all about perspective. Whether you're an artist or someone who studies music, something will always seem "high" or "the best" until you find something better. And trust me, there's a long road in music beyond OCR. The people who go down that road come back and submit stuff that completely shatters the standards (Jake Kaufman, bLiNd, zircon, etc.), not because they work toward OCR standards, but because they work toward their own standards (which are *way* beyond the minimum for OCR). When I was younger, I aimed to get a posted remix. I did. I even got consistent remix posts (five now, I think). But that stuff? Ultra low quality: bad balance, skewed frequency balance, mechanical sequencing, frequency clutter, voicing issues. My debut remix? Just listening to it makes me shudder. And it was 2011; the standards weren't that much lower.

Seriously, there's stuff that gets posted on this site with glaring dissonances, bad counterpoint, murky production, etc. It's high quality compared to hobbyists, but professionals out there work on their craft for decades and employ a level of artistic nuance you won't find on OCR except in those greats (who, by the way, are usually professionals or people who spent decades crafting to their own standards as well).

As you improve your ear, though, you'll come to realize this as well. You just need more exposure to music and, more importantly, the ability to better distinguish quality. "Good" and "bad" are about contrast in perspective.

What matters is whether the people you are trying to write music for like your music and think it's suitable for the project. If they're planning on recording a live orchestra in a world-class studio with mixing and mastering engineers with 30 years' experience anyway, they're probably going to be a bit more lenient about how realistic that oboe in your mock-up sounds.

Yes and no. You need it to be in the ballpark (dynamics and frequency space) and to clearly demonstrate articulation (is it supposed to sound SLURRED, or is it just connected?). The sad reality is that many clients don't have a command of music in their heads, and it's difficult for them to "imagine what it would sound like!" That's why pros are good at mock-ups: clients don't like spending money on leaps of faith.

Edited by Neblix

There are posted remixes (from a long time ago) that are played directly on free soundfonts with no post-production, and even a few that use samples from Microsoft GS Wavetable SW Synth (the default MIDI instrument in Windows). I don't by any means blame those ReMixers; in many cases I like the ReMix anyway (I have the very harp soundfont that DarkeSword used for the opening of "Legendary Hero", which I think is the #2 or #3 most played OC ReMix counting YouTube reuploads), and better technology was not as available (nor as good, of course) back then. But nowadays things have changed.
That may be true for some really, really old remixes, but, for example, http://ocremix.org/remix/OCR01911 was done using entirely free samples and soundfonts, and, while I'm no judge, my guess is that it's good enough to pass even today.
I am completely sure about it. OCR standards are not nearly as high as production in the real world. Just because OCR greats have good production doesn't mean the bar is set for their level.
Again, context. For instance, the indie gaming scene is huge these days; there are a lot of indie OSTs which are nowhere near the OCR bar on several grounds. Whether those were profitable ventures for the composers, I have no idea.
Edited by MindWanderer

If those are the only two factors you can think of offhand that separate OCR from film music, that's plenty high-quality for me! Correct me if I'm wrong, but those seem to be relatively minor factors that someone would be able to pick up quickly if they had experience and skill with everything else.

Well, those are the primary factors I can think of. Another minor factor is the uppermost treble, because you hear less of it as you get older. Most of the OCR judges (besides WillRock, for example) are over 24, I believe. At some point they'll be less able to hear that uppermost treble, to the point where it becomes a minor issue for their ears. If they can't hear it, they can't judge it (usually). *shrug*

Well, not everyone has subwoofers at home (maybe it bothers the neighbors), so in that sense it can count as a minor factor. Similarly, far fewer people have surround-sound home setups than stereo or mono ones, and a lot of people just use headphones or earbuds to listen to music. Factor that in, and you've got two possible things OCR judges don't often need to discuss (IMO)! :)

Edited by timaeus222

Well, those are the primary factors I can think of. Another minor factor is the uppermost treble, because you hear less of it as you get older. Most of the OCR judges (besides WillRock, for example) are over 24, I believe. At some point they'll be less able to hear that uppermost treble, to the point where it becomes a minor issue for their ears. If they can't hear it, they can't judge it (usually). *shrug*

Well, not everyone has subwoofers at home (maybe it bothers the neighbors), so in that sense it can count as a minor factor. Similarly, far fewer people have surround-sound home setups than stereo or mono ones, and a lot of people just use headphones or earbuds to listen to music. Factor that in, and you've got two possible things OCR judges don't often need to discuss (IMO)! :)

OCR judges also rarely get into things like counterpoint. There was a Final Fantasy mix a few months back that people were gushing over; I checked it out, because orchestra and Final Fantasy!

I got some way into the intro and heard a sloppy harmony voicing that spelled a wrong chord (it could've been fixed easily, too). I have to admit bias, though, because around the same time I was enjoying the FF6 Symphonic Poem, which is a true work of art in all facets: orchestration, part-writing, and source treatment.

*No one* save for a few people on OCR recognizes those kinds of things without looking for them, because it's something so detailed that there's no reason to even care (unless your ear is used to appreciating that aspect of composition). A handful of judges are not classically trained, so they naturally don't delve into esoteric discussions about composition.

Edited by Neblix

I primarily compose orchestral pieces, and write a lot of solo cello. Like others have said, it's important to know the physicalities of the instrument and to "think like a cellist", both when writing and programming. But it's more than that. It's about emotion. Making a cello sound real is not the same as making it sound good. The great cellists out there know how to access that emotion with each swell, where to add vibrato and when to back off, when to portamento, etc.

A great VST that gives you all that control is the Embertone Blakus Cello. But it won't sound good all on its own; you have to control the expression, vibrato, legato types, and so on. I've heard countless demos with the Blakus cello where the composer is controlling all those things, varying the vibrato and expression and legato, and thinks it sounds great, but it really sounds bad, because they assume that merely moving those parameters is all they need to do. They're not thinking about exactly where to add that swell, how to trail off that vibrato, when to bring the dynamics down to a whisper and when to bring them up to soar.
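To make "how to trail off that vibrato" concrete, here's a small plain-Python sketch. The specific timings, and the idea of driving vibrato depth from a CC lane, are illustrative assumptions on my part (how Blakus Cello actually maps vibrato depends on the patch): the point is delayed onset, a ramp up, then a decay toward the note's end, instead of constant full-depth "sewing machine" vibrato.

```python
import math  # imported for completeness; curve shaping below is piecewise linear

def vibrato_depth(dur_s, onset_s=0.4, peak=70, release_s=0.5, rate_hz=20):
    """Vibrato-depth CC values sampled at rate_hz over a note of dur_s seconds.

    Delayed onset, ease in to a peak, then trail off at the end of the note.
    Returns (time_s, cc_value) pairs.
    """
    points = []
    n = int(dur_s * rate_hz)
    for i in range(n + 1):
        t = i / rate_hz
        if t < onset_s:
            depth = 0.0                              # straight tone at the attack
        elif t < dur_s - release_s:
            ramp = min(1.0, (t - onset_s) / 0.6)     # ease in over ~0.6 s
            depth = peak * ramp
        else:
            fade = (dur_s - t) / release_s           # trail off at the end
            depth = peak * max(0.0, fade)
        points.append((round(t, 3), round(depth)))
    return points
```

For a three-second note this gives roughly half a second of straight tone, vibrato blooming through the middle, and a fade at the release — a crude approximation of what a player does by feel, and of course a real performance varies these shapes from note to note.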

I'm sure you've heard this piece before:

Couple of pieces I did with the Blakus cello:

https://soundcloud.com/kekopro/the-gum-tree

https://soundcloud.com/kekopro/shadows-solace

There's no special software that can play a cello well. Computer composers have to know how a real cello is played, how it feels, its ability to convey emotion, and then transmit that knowledge through the often-awkward interface of a keyboard, knobs, and faders.

Making a cello sound good requires a strong sense of feeling and emotion. Think about what you want to say, what feeling you want to evoke, and hone in on that with both the writing and the performance. Make that your singular purpose at first. Practice that a lot, and the realism you want will develop.

Edited by Neifion
