Nabeel Ansari


Posts posted by Nabeel Ansari

  1. On 8/12/2019 at 11:35 AM, BloomingLate said:

    Okay, well, I'm not that familiar with syncopation yet, so that may explain my initial difficulty with it. I tried playing some parts on the piano and recording the MIDI, but unless you have the right tempo and time signature set up, that doesn't help all too much. I honestly couldn't figure out how to count beats during the intro.

    I am familiar with pickup notes, only I didn't know you called them pickup notes (in English) :) Sometimes I'll know a term in Dutch but not English, and sometimes it is the other way around.

    So what do you think of this next track then? Is it as straightforward as the first one?

     

    Pretty much; I can't hear anything it does that the first track doesn't already cover. The tempo and time signature are the same throughout, standard 4/4, and the syncopations used here are the same as in the other track.

    You should become familiar with syncopation, because I don't see another way to help you wrap your head around what's happening in these rhythms. In other words, without it you're not going to get very far analyzing Ikaruga, or most interesting music for that matter. https://en.wikipedia.org/wiki/Syncopation

    https://www.dropbox.com/s/b3i23bwa021nlhy/2019-08-14_10-00-26.mp4?dl=0

    I've recorded a crude video showing the basic thing happening in the Ikaruga tracks. My right hand is just playing basic 8th notes in 4/4. The left hand plays the 1st 8th note and the 4th 8th note.

    If you draw it out it looks like this:



    [Image: diagram of the four beats, the eight 8th notes, and the two accented hits]

     

    The top is the 4 beats in 4/4. The middle is the eight 8th notes I was playing. The bottom is the main driving beat that happens in the song, showing up on the down of beat 1 and the up of beat 2. Or in other words, right on the first beat, and halfway between the second and third beat.
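    If it helps to see the counting written out another way, here's a tiny illustrative Python sketch (my own toy example, not anything pulled from the track) that lists the eight 8th-note positions in one bar of 4/4 and marks the two that the left hand plays:

    ```python
    # Toy example: where the eight 8th notes of one 4/4 bar fall, counted "1 & 2 & 3 & 4 &".
    EIGHTHS_PER_BAR = 8
    LEFT_HAND_HITS = {0, 3}  # the 1st and 4th 8th notes, as in the video

    for i in range(EIGHTHS_PER_BAR):
        beat = 1 + i * 0.5                                   # beats 1.0, 1.5, 2.0, 2.5, ...
        count = str(int(beat)) if beat.is_integer() else "&"
        marker = "  <- left hand" if i in LEFT_HAND_HITS else ""
        print(f"8th note {i + 1}: beat {beat} ({count}){marker}")
    ```

    The 4th 8th note prints as beat 2.5, i.e. the "and" of beat 2, halfway between beats 2 and 3.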

  2. 15 hours ago, Zubaru said:

    This is incorrect, the boxed version just gives you a key to register on your account and a cool usb drive with the files for installing. But you still get lifetime free upgrades, since the license will work with all versions. I bought it boxed myself just because the box looked cool, so if you want that voucher it's not a bad idea, but shipping does take time compared to downloading.

     

    That answer was from 8 years ago.

  3. Additionally, it's worth mentioning that software combos like Notion and Studio One let you write in notation and then import directly into the DAW for mockup and mixing.

    It's not an all-in-one solution, but if you want a way to smooth productivity from traditional composition methods into the production phase, that would be the way to go.

  4. If you're only using the headphones in your studio, buy the most comfortable pair and hit it with Sonarworks to get the flattest headphone response possible.

    If you plan to use the headphones elsewhere, Sonarworks obviously can't follow you around; in that case, the best of the options you listed is the K-702, just based on the chart.

    However, based on friends' testimonials, ubiquity, and an even better chart, I'd say you should probably go with the Sennheiser HD 280. This is the 280 chart I pulled off of Google:

    [Image: Sennheiser HD 280 frequency response chart]

     

    I have used the DT 880 for a long time, but to be perfectly honest, it's just as bad as the frequency response graph suggests; it has incredibly shrill spikes in the treble range. It's honestly an eye-opener to switch between a flat response and the DT 880's natural response (which I can do, since I use DT 880s + Sonarworks) and hear just how bad the DT 880s actually sound. Compared A/B in that fashion, their natural sound honestly comes across like audio playing out of a phone speaker.

    Of course, it doesn't matter too much at the end of the day. Headphone responses are easy to get used to and compensate for because their problems are very broad features (unlike a bad room, where you can get a random 9 dB spike at 130 Hz and nowhere else due to room geometry). What really matters is that most of the frequency range is represented adequately and that they're comfortable to wear. Every other consideration can be addressed with practice and experience. You're never going to get truly good bass response on headphones (unless you have Nuraphones), and you're never going to get good stereo imaging unless you put some crosstalk simulation on your master chain. Unlike studio monitors, there's a pretty low ceiling to how good headphones can sound, and if you're in the $100-150 range, any of the popular ones will do once you get some experience on them.

  5. With SSDs, you can decrease the RAM usage of patches by lowering the DFD buffer setting in Kontakt. A typical orchestral patch for me is around 75-125 MB or so.

    Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and using reverb processing in the DAW instead. I recommend this workflow regardless of what era of orchestral library you're using, because it just leads to much better mixes and lets you blend libraries from different developers.
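    To put rough numbers on the DFD point, here's a back-of-the-envelope sketch in Python. The sample count and buffer sizes are made-up assumptions for illustration, not figures from any particular library:

    ```python
    # Kontakt preloads the start of every sample into RAM and streams the rest from disk,
    # so a patch's RAM footprint scales roughly with (sample count) x (DFD preload buffer).
    # All numbers below are illustrative assumptions.

    def patch_ram_mb(num_samples: int, preload_kb: float) -> float:
        return num_samples * preload_kb / 1024.0  # KB -> MB

    num_samples = 2000                       # hypothetical sample count for one patch
    print(patch_ram_mb(num_samples, 60))     # ~117 MB with a larger, default-style buffer
    print(patch_ram_mb(num_samples, 6))      # ~12 MB with a tiny buffer on a fast SSD
    ```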

  6. On 11/12/2018 at 11:19 AM, Master Mi said:

    ... unless you have a source track with the original frequency content to make a better comparison between different speakers in relation to the source track itself - like in one of the videos I've posted above.
     

     

    That'd be true if human loudness perception were linear and frequency-invariant; it is neither (hence the existence of the dB scale and the Fletcher-Munson curves).

    If you're listening on a colored system with any dramatic deficiencies, your perception of the comparative difference between the source and the monitor being auditioned will be worse in exactly those deficient ranges.

    It's the same reason you can't just "compensate" if your headphones lack bass. If the bass is way too quiet, you're literally worse at telling the difference between +/- 3 dB in the signal than you would be if it were coming out of the headphones at the proper loudness, and that's terrible for mixing (why do you think you're supposed to mix at a constant monitoring level?). There's a quick dB-to-ratio sketch at the end of this post. Only with lots of experience can you compensate for that level of nuance at such a reduced monitoring level, like Zircon can with his DT 880s; his sub and low end is freaking monstrous but controlled, just the right amount of everything, but he had to learn how to do that over years and years of working with the exact same pair of headphones.

    Again, it's the worst way to judge anything. For example, listening on this end, I don't really agree with half of the assessments you made of those speakers. The HS7 is actually the worst-sounding speaker in this video; it sounds like it was run through a bandpass, and there's no life in any of the transients. Taking the HS7 as a close reference would not be a "fairly good decision" on your part; it would be a pretty bad one.

    Also, note the disclaimer at the end where he said "these speaker sounds contain room acoustics".
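    Since the +/- 3 dB figure came up, here's the quick dB-to-ratio arithmetic (standard conversions, nothing product-specific):

    ```python
    # Decibels are logarithmic: 20*log10 for amplitude ratios, 10*log10 for power ratios.

    def db_to_amplitude_ratio(db: float) -> float:
        return 10 ** (db / 20)

    def db_to_power_ratio(db: float) -> float:
        return 10 ** (db / 10)

    print(db_to_amplitude_ratio(3.0))    # ~1.41x amplitude -- already a subtle change
    print(db_to_power_ratio(3.0))        # ~2.0x power
    print(db_to_amplitude_ratio(-9.0))   # ~0.35x -- a 9 dB dip in the playback chain buries that nuance
    ```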

  7. Quote
    So far the M.2 SSDs for sample libraries have been absolute bliss. It really showed off while I was developing Shreddage 3. At a 6 KB disk stream buffer our main patch is still 2 GB of RAM, and yet whenever I'd load new instances of the patch I feel like I could never even notice a "load time". The time between "I want to open this instrument" and hearing the sound respond from my MIDI controller is just insane. For the promo trailer score I was just cloning instances left and right (I think there's 11 total??) to get different tones, different performance configs, etc., and the PC kept up as if I was just making new .txt files on my desktop.
     
    And the tiny disk stream buffer has reduced the RAM load of my large projects by *over 60%*. For example, one of the older Sole projects (indie game I'm scoring) was a 29 GB RAM load on my last computer. On this one it's 11 GB. Of course it depends how many libs are using DFD and how big their sample start buffers are, but still, it was a wide spread of orchestral stuff from different devs, so pretty averaged out in my use case.
     
    With the fast processor, the Sole project also went from bleeding and sputtering at over 90% CPU usage (at a FULL 2048smp ASIO buffer) to floating around 45% (at a tiny 128smp ASIO buffer).
     
    As far as dev is concerned, combining these drive speeds with FIOS gigabit is a killer combo. I can compress Shreddage 3 into an archive and drop it on our FTP in just a few minutes. Also, all of Kontakt's resaving and audio compression utilities are much faster, so it makes release prep and distros in general a really painless process (previously it was something you had to set aside time out of your day to take care of).
     
    So far 110% happy with this rig.

    My message to Brad earlier this week.

  8. On 11/8/2018 at 2:17 AM, SnappleMan said:

    Yeah the only ways I know of to "eliminate" room reflections are to either absorb or diffuse them, but I am relatively inexperienced. I never heard of doing something like phase cancelling reflections, not sure how that would physically be viable but I assume that's what "... filter to invert the problems in your room" means.

    So I should amend my statement to be more technically accurate: Sonarworks cannot remove reflections from the room; they are still bouncing around, and no amount of DSP can stop them from propagating.

    However, the effect is "cancelled" at the exact measured listening position. Sonarworks is an FIR approach, which is another name for convolution-style filtering. Deconvolving reflections is totally and absolutely in the wheelhouse of FIR filtering, because reverb is "linear" and "time-invariant" at a fixed listening position, a (relatively) fixed monitoring level, and fixed positions of the objects and materials in the room. That's also why it absolutely necessitates re-running the calibration process if you change stuff around in the room, change the gain structure of your system output, etc. There's a rough sketch of the inverse-filter idea at the end of this post.

    I can't comment much on standing waves, but it seems in their paper they noted that they aren't covered by the filter approach, so they recommend treatment for that. Just from my peanut-gallery background in studying EE, I think it makes sense that standing waves aren't linear and time-invariant, so trying to reverse them through a filter wouldn't go well. Same goes for nulls: if a band is just dying at your sitting position, trying to reverse that via filter is just not smart at all.

    Regardless, if you were to move your head or walk around, you would again clearly notice how horrible the room sounds (though the fixed general frequency response is still an improvement), because now you've violated the "math assumption" by introducing sound differences created by changing your spatial position. This is the disadvantage of relying on DSP calibration (along with latency and slight pre-ring from the linear-phase filtering), and it's a compelling reason not to choose it over proper acoustic design in a more commercial/professional studio (you don't want crap sound for the people sitting next to you in a session!).

    I think it's a pretty decent trade for home producers, and it produces much better results than putting cheap speakers in a minimally treated room and still having to learn how to compensate for the issues. I just see that route as more expensive and time-consuming. Compensating isn't fun; it's easy on headphones, where problems are usually broad, general tonal shifts in frequency ranges. But in a room (and this shows up in the measurement curve) the differences are not broad and predictable; they're pretty random and localized in small bands. In my opinion it's difficult to really build a mental compensation map unless you listen to a metric ton of different-sounding music in your room. It is traditional to learn your setup, but I think the tech is there now to make the process way simpler.

    To be scientifically thorough, I would love to run a measurement test and show the "after" curve of my setup, but sadly I don't think it's really possible: SW has a stingy requirement that the computer's I/O for measurement all run on the same audio interface, while the calibrated system output is a different virtual out, so there's no way to run the existing calibration and then also measure it in series. All I can do is offer my personal, anecdotal experience of how it has improved the sound. I'm not trying to literally sell it to you guys, and no, I don't get a kickback; I just think it's one of the best investments people can make in their audio before saving up for expensive plugins or anything else. Especially because its results are relatively transferable to any new environment without spending any more money, no matter how many times you move, whereas room treatment would have to be re-done, and maybe more money spent, depending on the circumstance.

    And given the topic of this thread, it shouldn't be overlooked that SW calibration can drastically improve the viability of using cheaper sound systems for professional audio work. I've run calibration at my friend's house with incredibly shitty, tiny $100 M-Audio speakers, placed and oriented in just about the worst possible way, and I'd say the end result really was in the ballpark of the sound quality I get at home with more expensive monitors and a more symmetrical setup. It wasn't the same, but it was a lot more accurate (sans any decent sub response) than you'd expect. The stereo field fixing is dope too.

    @Master Mi I'm not sure what's to be accomplished by linking YouTube videos of the sound of other monitors. They're all being colored by whatever you're watching the YouTube video on. At best, a "flat response" speaker will sound as bad as the speakers you're using to watch the video, and a speaker set whose problems are the opposite of yours will sound flat when it isn't flat at all. Listening to recordings of other sound systems is just about the worst possible way to tell what they sound like.
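    Not Sonarworks' actual algorithm (that's proprietary), but here's a minimal NumPy sketch of the general FIR-inversion idea described above: take an impulse response measured at the listening position, build a regularized inverse filter in the frequency domain, and convolve it with the program material.

    ```python
    import numpy as np

    def design_inverse_fir(impulse_response: np.ndarray, n_taps: int = 4096,
                           regularization: float = 1e-3) -> np.ndarray:
        """Textbook-style regularized inversion of a measured impulse response.

        The regularization keeps deep nulls from turning into huge, unstable boosts,
        which is one reason nulls/standing waves still want physical treatment.
        """
        H = np.fft.rfft(impulse_response, n_taps)
        H_inv = np.conj(H) / (np.abs(H) ** 2 + regularization)
        return np.fft.irfft(H_inv, n_taps)

    def correct(audio: np.ndarray, inverse_fir: np.ndarray) -> np.ndarray:
        # In a real setup this convolution runs in realtime on the monitor output.
        return np.convolve(audio, inverse_fir, mode="full")[: len(audio)]

    # Fake "room": direct sound plus one strong early reflection 300 samples later.
    room_ir = np.zeros(2048)
    room_ir[0], room_ir[300] = 1.0, 0.5
    inv = design_inverse_fir(room_ir)
    # Convolving room_ir with inv now lands much closer to a single clean impulse.
    ```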

  9. On 11/5/2018 at 4:55 PM, Master Mi said:

    @PRYZM

    What I've read about Sonarworks and its pretty complex measuring method sounds kinda impressive.

    But tell me one thing.
    The two graphs in your picture...

    1) Are these the two graphs of the frequency response at the sweet-spot listening position vs. the average frequency response across the different measurement points in the room?
    2) Or are those the two graphs of the frequency response before and after calibration?

    I guess it's rather 1), because I've seen after-calibration results that were much more impressive in terms of a flat frequency response.
    But just to be sure about the meaning of the graphs in the picture...

    And just another question.
    After measuring the frequency response at the roughly 20 different points in the room with the special Sonarworks microphone - what happens afterwards?
    Is there some kind of software equalizer within Sonarworks that adjusts the whole sound output of your PC towards a flatter frequency response, by taking the peaks and adding a mirrored counter-curve around an imagined axis (the intended "flat" response) to flatten the exceeding peaks?

    The two graphs are both "before" measurements, one for the left speaker and one for the right.

    That's a crude way of understanding the calibration, but it's really not just about EQ. It develops a filter to invert the problems in your room: not just "frequency" the way musicians think of it (musicians associate frequency with pitch, from bass to treble), but the true extent of what frequency represents in electrical engineering, which includes eliminating things like room reflections.

  10. It doesn't matter what the graph of the speaker says, because your room colors it beyond recognition. Also, I really doubt you can just hear what a flat response truly sounds like, especially if you're only testing by listening to music, where resonant peaks and standing waves may not even get excited depending on the song you're playing. You measure this stuff, in real life, with a sound pressure meter as you sweep up the range to detect deviations (there's a quick sketch of a measurement sweep at the end of this post).

    The Sonarworks product will tell you the total response of your room + speakers at your listening position, and I guarantee you, especially because you remarked that your room is untreated, that your system is most certainly way off.

    Here was my system pre-calibration, in a completely untreated room:

    [Image: pre-calibration frequency response measurement of my system]

    Those peaks are 9 dB. That would make mixing snares basically impossible; when swapping tones I'd get a completely different thump from them depending on how they're tuned.

    After calibration, the response has been flattened and the extra reflections from my walls were silenced so I could also perceive reverb mixing much better.
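    For anyone curious what "sweeping up the range" looks like in practice, here's a minimal sketch of generating a measurement sweep with SciPy. The sample rate and length are arbitrary choices for the sketch, and a real measurement also involves a calibrated mic and deconvolving the recording:

    ```python
    import numpy as np
    from scipy.signal import chirp

    SAMPLE_RATE = 48_000        # arbitrary choice for this sketch
    DURATION_S = 10.0           # longer sweeps improve signal-to-noise

    t = np.linspace(0.0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)
    # Logarithmic sine sweep from 20 Hz to 20 kHz: play it through the speakers,
    # record it at the listening position, then deconvolve to get the room's response.
    sweep = chirp(t, f0=20.0, f1=20_000.0, t1=DURATION_S, method="logarithmic")
    ```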

  11. Most big-time composers work with a separate patch (and track) per articulation instead of keyswitching.

    Yes, it's dumb UX not to offer any switching (Vienna knew this a long time ago and built great mapping software), but to say that a sample library like HO is incapable of getting a good result even after spending "hours" compared to "minutes" (a lot of hyperbole here) is just plain wrong. A lot of people use the several-track-articulation workflow to great effect. It gives better mix control, because you can control articulation volumes independently.

    Keep in mind also, just because the VST doesn't offer switching doesn't mean that's the end of the story. Logic composers get to map things however they want externally. There are also a handful of free ways to create articulation maps that intercept MIDI, and this use case is actually superior to the switching systems built into libraries, because your output just goes to separate MIDI channels. You can thus create one MIDI map that works for several different orchestral libraries from different companies (by loading the same MIDI channels with the same artics, like "1 leg, 2 stacc, 3 spicc, 4 pizz") instead of having to configure each of them individually inside their tiny GUIs.
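    As a toy illustration of the "intercept MIDI and fan it out to channels" idea, here's a sketch using the mido library. The port names and the keyswitch-to-channel mapping are made-up assumptions; Logic and Cubase users would use their built-in articulation/expression maps instead:

    ```python
    import mido

    # Hypothetical mapping: low keyswitch notes select which MIDI channel everything
    # else gets forwarded to. Channels 0-3 would hold legato, staccato, spiccato, and
    # pizzicato patches, in the same order across every library you load.
    KEYSWITCH_TO_CHANNEL = {24: 0, 25: 1, 26: 2, 27: 3}

    current_channel = 0
    with mido.open_input("Keyboard") as inport, mido.open_output("To DAW") as outport:
        for msg in inport:
            if msg.type in ("note_on", "note_off") and msg.note in KEYSWITCH_TO_CHANNEL:
                if msg.type == "note_on" and msg.velocity > 0:
                    current_channel = KEYSWITCH_TO_CHANNEL[msg.note]
                continue  # swallow the keyswitch notes themselves
            if hasattr(msg, "channel"):
                outport.send(msg.copy(channel=current_channel))
            else:
                outport.send(msg)
    ```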

  12. Sends are for groups of signals going through one effect chain. They save a lot of CPU compared to loading effects units on individual channels (and came about because, with hardware, you physically cannot duplicate a $3000 rack reverb on like 50 tracks). Send levels also make it easy to control how much of each signal goes through the effect, and whether that amount is pre-fader (an absolute send, no matter where the track's volume fader sits) or post-fader (the track volume directly scales what is sent).

    Yoozer pointed out several use cases for sends. It's really just a better way to work and allows more mixing possibilities than sticking to inserts does. It's literally less effort to manage one reverb send (or a few, if you're experienced) and just control what's going through it. You get a 100% wet signal on the send track and can do stuff to it: EQ or filter it, adjust the *entire* reverb level of the song at once on one fader, apply mid/side techniques if you're into that, and more. The consistency of routing all of your tracks into just a few plugins creates much better mixes, much more easily.
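    A tiny arithmetic sketch of the pre-/post-fader distinction, with gains as plain linear multipliers (real mixers show dB, but the relationship is the same):

    ```python
    def post_fader_send(signal: float, fader_gain: float, send_level: float) -> float:
        # Post-fader: what reaches the effect scales with the channel fader.
        return signal * fader_gain * send_level

    def pre_fader_send(signal: float, _fader_gain: float, send_level: float) -> float:
        # Pre-fader: the send amount ignores the channel fader entirely.
        return signal * send_level

    signal, send_level = 1.0, 0.5
    for fader in (1.0, 0.25, 0.0):
        print(fader,
              post_fader_send(signal, fader, send_level),
              pre_fader_send(signal, fader, send_level))
    # Pulling the fader down dries up a post-fader send but leaves a pre-fader send untouched.
    ```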

  13. I think (because this is how it was for me) they're useful up to the point where the problems with the music are objective and everyone can agree on them. Once things get subjective (which happens a lot earlier than people would think), the feedback might keep shaping how the person writes and produces, but it isn't at all necessary for that person to keep improving.

    Like you said, people who have the drive for this stuff will keep going, and I particularly relate to not coming to WIP forums anymore and just bouncing tracks off a few peers instead. When I do it, it's more one-sided. I'm not asking what's wrong with the track or what I should fix; I'm gauging the reaction of a person who isn't already familiar with it, just as a fresh set of ears. Just whether they might say something like "this ____ part feels like _____". I might've clearly perceived that already and been fine with it for my taste, but it comes down to how listeners will react, so I'll try to compromise a little. It's like when a game dev polls players about a control scheme: the dev might be perfectly fine with it (and not just because they created it, it really does work for them), but they want to make sure other people can enjoy it too, so they send out a survey.

    Even with my most skilled peers, I'll send stuff over and on rare occasions they're like "wow, this sounds really good", but I'll come back to it a year later and I can still clearly perceive that I've been improving and see flaws in the sound I used to have. And those flaws are subjective to my own taste, because clearly other people with experienced tastes still enjoy those tracks.

    My improvement, personally, is pretty self-driven at this point. I don't think many of my friends really share my influences anymore, and I chase production techniques I don't see my friends using. I think at a certain point you can absolutely trust yourself to be your own critic (without causing self-esteem issues or creative paralysis).
