Nabeel Ansari

Members · Posts: 5,797 · Days Won: 31

Everything posted by Nabeel Ansari

  1. This topic has been a subject of contemplation recently as I upgraded off of the horror that is PLAY and acquired some new high quality Kontakt libraries for my orchestra. I noticed I can choose to either have the dynamic levels of the orchestral instruments controlled by velocity-sensitivity or by MIDI CC. It seems like a great number of people like CC automation as a macroscopic "sculpting" tool for dynamics and such, while others like me prefer velocity-sensitivity for a more note-by-note dynamic approach to humanization. What do you prefer, and why? Keep this question in the context of orchestral instruments. Obviously CC is inappropriate for jazz-like articulations, for example... unless you disagree, then you're welcome to contest that statement as well. I'm curious.
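To make the two approaches concrete, here's a minimal sketch of a crescendo built both ways. The event tuples are hypothetical (not any particular MIDI library's API); CC11 (Expression) is used as a typical dynamics controller:

```python
# Hypothetical (type, data1, data2) event tuples standing in for MIDI messages.

def crescendo_velocity(notes, v_start=40, v_end=100):
    """Note-by-note dynamics: each note carries its own velocity."""
    step = (v_end - v_start) / max(len(notes) - 1, 1)
    return [("note_on", n, round(v_start + i * step))
            for i, n in enumerate(notes)]

def crescendo_cc(notes, cc=11, v_fixed=100, c_start=40, c_end=127):
    """Macroscopic dynamics: fixed velocity, a CC ramp sculpts the swell."""
    step = (c_end - c_start) / max(len(notes) - 1, 1)
    events = []
    for i, n in enumerate(notes):
        events.append(("control_change", cc, round(c_start + i * step)))
        events.append(("note_on", n, v_fixed))
    return events

melody = [60, 62, 64, 65]
print(crescendo_velocity(melody))
print(crescendo_cc(melody))
```

The velocity version bakes the dynamic into each note; the CC version keeps the notes uniform and lets a controller lane shape them after the fact, which is why it works well as a "sculpting" tool.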
  2. EDIT: Looks like EastWest is a little bitch and I can't actually do that. I'll find a local buyer and just give them the accounts and hand them the iLok since they're isolated. Sorry guys.
  3. I love FL Studio, for everything except automation. The automation is pretty garbage, and I'm saying this as a 7-year user. It's unorganized and can very easily clutter the workspace. For common-sense things like volume and pan, you have to generate clips (and if you want one per track, you have to do it for every track; that's a lot of clicking), drag them, resize them, and get them to sit nicely in the playlist view. It's a hassle, and it really doesn't need to be, considering other DAWs do it for you just fine: you just select the parameter you want to automate from a list, and it has a special lane switch-view that doesn't clutter the playlist. You can do makeshift lanes in FL too, but again, all manually, and the clip window will stack them in an unorganized fashion that you have to fix up yourself. I would suggest making a template with your lanes premade, so you don't have to deal with this bullshit every time you start a project. That being said, the mixer is great, the MIDI editing is wonderful, and I really jive with the step sequencer for drum programming. I use Studio One as my main DAW now, and I ReWire FL in to get access to my favorite tools and techniques only possible in FL. It's a perfect marriage: I don't get bogged down by FL's flaws, and I can still utilize the things I love about it.
  4. One problem is that a lot of people aren't really qualified to make meaningful musical judgments like these tags. A community member may hear a saxophone and tag the track "jazz", for example, even though his or her judgment is hasty and inaccurate. This is why Pandora selectively hires people to do the tagging instead of letting everyone do it; Last.fm is the opposite, where the users do the tagging. Still, I have no idea what the OCR staff are up to or how they plan to update the system, but rest assured, it is coming, as staff members have said.
  5. "Exposure" lol Couldn't have said it better. Also, Taucer got it. Free work is for charity and non-profit stuff.
  6. Nintendo is heading Smash, one of the most competitive fighting games out there right now. These "Nintendo isn't gonna last" comments are consistently proved wrong, year after year after year...
  7. By simple exercise with physics, a spotlight is no longer visible if the entire stage has an equal luminosity to it.
  8. I feel taking issue with what you say is more educational to readers than not taking issue with it at all.
  9. Well, yes, but your wording is careless and you're giving a skewed idea of how sound actually works. Mastering louder allows you to hear the quieter details in your mix, *yes*. However, this has no different effect from a person turning up the volume knob on a quiet master. You have no more room for detail in a quiet mix than you do in a loud mix; the difference is whether your playback is loud enough to hear it. The *reason* you master loudly is not for "more detail"; rather, it is to sit at the standard loudness people will listen at, so they don't have to turn their playback system up or down to hear that detail. The purpose of mastering is not to modify the sound (outside of spectral balancing, perhaps) but rather to unify and normalize. I'm speaking in general terms: making all songs on an album have a universal loudness, one coincident with modern-day production standards, and a consistent frequency balance. If you're using mastering as a tool to bring details into your mix that weren't there before, it means your mixdown stage needs improvement. If the production standard were to master at -3 dBFS instead of -0.2 dBFS, and you ignored that and mastered at -0.2 dBFS anyway, then I would say your mastering is flawed, because you're making your music much louder than everybody else's and audiences will have to adjust their systems for you, especially if you're on shuffle with other libraries. But there's no mathematical reason to master loud (other than that it's potentially easier to play cleanly on playback systems with low power output), nothing to do with detail or such. You just master loud because everything else is loud, and you need to be consistent.
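The "loudness is just a scalar" point can be shown in a few lines. This is a sketch, not a mastering tool: normalization is one constant gain applied to every sample, so no detail is created or destroyed (the -0.2 dBFS target here is just the example figure from above):

```python
import math

def normalization_gain_db(peak_dbfs, target_dbfs=-0.2):
    """Gain (in dB) needed to bring a master's peak to the target level."""
    return target_dbfs - peak_dbfs

def apply_gain(samples, gain_db):
    """Every sample is scaled by one constant; nothing else changes."""
    g = 10 ** (gain_db / 20)
    return [s * g for s in samples]

# A master peaking at -6 dBFS needs about +5.8 dB to hit -0.2 dBFS.
gain = normalization_gain_db(-6.0)
print(round(gain, 1))  # 5.8
```

Turning the listener's volume knob applies exactly the same kind of constant, which is why a quiet master contains no less detail than a loud one.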
  10. This doesn't really make any mathematical sense.
  11. From what I understand, he wants to verify the effectiveness of his algorithm. The way he is planning to do this is by operating under the assumption that people's personal libraries are sort of the "optimal" case, because people will only have similar songs in their own library. @Zygoth Unfortunately, this assumption fails in many cases, because people may be storing the entirety of the OCR catalog, or nearly all of it, because they don't want only the music that sounds similar. Many people just download the whole catalog to have it (I have it all, but I've listened to maybe fewer than 200 remixes, and I'm definitely not the only one).
  12. It's not an uncommon line of thinking, in fact it means that you're on the right track. However, keep in mind that while you may be *comparing* features correctly, what these features *actually are* are a different story. At the very most, you'd be pairing songs with similar timbres, and that's about the extent of it. However, that's not quite a useful recommendation system, as users look for a lot more in similar music than just the spectral make-up (mood, tempo, lots of melodies or lots of solos, simple tonality or complex tonality, types of instruments, etc.). I'm not saying your system won't work at all, I'm just saying be wary of its limitations. I'm recommending avoiding DSP because tagging would give you a system with far more pleasing results, and something that could be legitimately used by OCR goers.
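For what "pairing songs with similar timbres" typically means in practice: you reduce each track to a feature vector (averaged spectral features such as MFCC means) and compare vectors, e.g. with cosine similarity. The feature values below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical averaged spectral features (e.g. MFCC means) per track.
track_a = [1.2, 0.4, 3.1, 0.8]
track_b = [1.1, 0.5, 2.9, 0.7]   # similar timbre to track_a
track_c = [0.1, 2.8, 0.2, 1.9]   # different timbre

print(cosine_similarity(track_a, track_b) > cosine_similarity(track_a, track_c))  # True
```

Note what this does and doesn't capture: two tracks with similar spectra rank as "close" even if their mood, tempo, and tonality are completely different, which is exactly the limitation described above.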
  13. You have to understand the data a paper is drawing conclusions from when you read it. Obviously, if you test your algorithm on data of 4 *distinct* musical styles (classical, jazz, rock, country), you'll see better results because of such strong differences. However, OCR artists are more hobbyists than professionals, a lot of whom blend styles and don't follow production conventions. Additionally, most electronic music has varying timbre even when it shares a style with another track, so that would further confuse the algorithm. There's no such thing as a practical genre DSP classification system yet; that is why Pandora uses professional taggers. In fact, one grad from a music tech lab I worked at is now working at Pandora, I'm assuming trying to implement strategies similar to the one you're trying to use but more optimized and complex, but we're not going to see it in use for a long while. If you took advantage of OCR's tag system that Larry maintains and built your recommendation system out of that, that would still be an impressive data project of its own. You can do whatever you want; all I'm saying is you may not get the results you were hoping for if you use signal analysis methods like feature extraction and such. I'm providing this insight because I was exposed to MIR when I worked for this lab; I just wanted to let you know the other side of the coin so maybe you could redefine your project if you wanted to.
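The tag-based alternative is also simple to prototype: treat each mix as a set of tags and score pairs by set overlap (Jaccard similarity). The tags below are invented examples; a real version would pull from OCR's actual tag system:

```python
def jaccard(tags_a, tags_b):
    """Overlap of two tag sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical OCR-style tags for three mixes.
mix_a = {"jazz", "piano", "mellow"}
mix_b = {"jazz", "sax", "mellow"}
mix_c = {"metal", "guitar"}

print(jaccard(mix_a, mix_b))  # 0.5
print(jaccard(mix_a, mix_c))  # 0.0
```

Because the tags encode human judgments (genre, mood, instrumentation), this sidesteps the DSP problem entirely, at the cost of depending on the quality of the tagging.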
  14. Your assumption is that MIR DSP is advanced enough these days to be able to do this, and, well, it's not. Unless you're not using DSP and you're using user-defined tags, or internally defined tags (see last.fm, Pandora), in which case, you may want to take advantage of OCR's existing tag system, it'll make your job easier.
  15. I'm finally starting to not look like a frail, thin-but-with-a-belly weakling. I also just pushed myself to 3x10 sets of 40 lb shoulder press, up from two weeks ago when I did 1x10 of 20 and 2x10 of 30. I'm starting to understand how willpower factors in.
  16. Thirding Studio One. I've been using FL Studio for 7 years and just made the switch. I still ReWire FL in because audio and MIDI routing is so god damn easy, but anyways, yes, it is a super fast and simple DAW that does not compromise on functionality at all. Very easy to do complicated music production techniques and never once does it get in your way of anything. It also automates a lot of things for you, like clicking a button to set up 8 MIDI tracks sending to your favorite sampler.
  17. Thinking about getting back into WoW. What do you think, guys? Worth it? I stopped right before Siege of Orgrimmar only because of academic concerns, but I play a lot of League anyway and still do well, so it's not like games are taking over my success.
  18. Trackers work on different internal protocols than the MIDI protocol. It could be that no standardized translations between these systems and MIDI have been established yet, or that translation is too time-consuming and difficult a software design task given that the tracker has already been built. It's always harder to modify your software to do X than to build it from the ground up to do X; that's why trackers aren't good at it. They're not built for it, and no developers consider implementing a translation to have a good enough benefit to pour effort into. Also, GXSCC is a MIDI player; that's what it is, it's built for MIDI (easier to build something for X than to modify it to do X). Also, it's not a tracker.
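One tiny example of the protocol mismatch: many trackers use a 0-64 volume column, while MIDI velocity is 0-127. Even this trivial mapping (a sketch, assuming those common ranges) is lossy in one direction, and a full translation has to handle dozens of effect commands with no MIDI equivalent at all:

```python
def tracker_vol_to_velocity(vol):
    """Map a tracker-style 0-64 volume column onto MIDI's 0-127 velocity range.
    The reverse mapping is lossy: two distinct velocities can collapse to
    the same tracker volume, one small hint at why translation is hard."""
    if not 0 <= vol <= 64:
        raise ValueError("tracker volume column is 0-64")
    return round(vol * 127 / 64)

print(tracker_vol_to_velocity(64))  # 127
print(tracker_vol_to_velocity(0))   # 0
```

Scale that one-column problem up to pitch slides, retriggers, and sample offsets, and it's clear why developers rarely judge a full MIDI bridge worth the effort.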
  19. Finally fully transitioned to 20 lbs for my curls! Did 3 full sets, then another one because my friend thought it would be funny to challenge me. Good thing I was feeling good today.
  20. If you read what he said, he said the vote was for "us" Americans, which is correct; the vote was open to everyone.
  21. Glad to see some people appreciate modern game music, and I'm not talking about nostalgia soundtracks like Shovel Knight. Right on, Brandon.
  22. Basically what you're saying is "it does work when people make it work", but the thing is, people don't, so your argument falls apart. Just, you know, look on the internet (maybe take a walk through a city) and look at all of the cultural problems (like RACISM?) that have persisted through generational gaps, disproving your "people over time will sort it out" mentality. Yeah, YouTube videos and blogs are pretty ineffective. But your proposition is just as ineffective. :/