
Nabeel Ansari

Members
  • Posts

    5,797
  • Joined

  • Last visited

  • Days Won

    31

Reputation Activity

  1. Like
    Nabeel Ansari got a reaction from Geoffrey Taucer in Faithful high definition studio headphones with flat frequency response   
    If you're only using the headphones in your studio, buy the most comfortable pair and hit them with Sonarworks for the best headphone response possible.

    If you plan to use the headphones elsewhere (Sonarworks obviously can't follow you around), I'd say the best of the options you listed is the K-702, just based on the chart.

    However, based on friends' testimonials, ubiquity, and an even better chart, I'd say you should probably go with the Sennheiser HD 280. This is the 280 chart I pulled off of Google:

     
    I have used the DT 880 for a long time, but to be perfectly honest, it's just as bad as the frequency response graph suggests; it has incredibly shrill spikes in the treble range. It's honestly an eye-opener to switch between a flat response and the DT 880's natural response (which I can do, since I use DT 880s + Sonarworks) and hear just how bad the DT 880s actually sound. Compared A/B like that, the uncorrected DT 880s honestly sound like the audio is coming out of a phone speaker.

    Of course, it doesn't matter too much at the end of the day. Headphone responses are easy to get used to and compensate for because their problems are very broad (unlike a bad room, where you can get a random 9 dB spike at 130 Hz and nowhere else due to room geometry). What really matters is that most of the frequency range is represented adequately, and that they're comfortable to wear. Every other consideration can be addressed with practice and experience. You're never going to get truly good bass response on headphones (unless you have Nuraphones), and you're never going to get good stereo imaging without putting some crosstalk simulation on your master chain. Unlike studio monitors, there's a pretty low ceiling to how good headphones can sound, and if you're in the $100-150 range, any of the popular ones will do once you get some experience on them.
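    Since crosstalk simulation came up: here's a bare-bones sketch of the crossfeed idea, assuming stereo audio in a NumPy array. This is a toy illustration of the concept, not any particular plugin's algorithm (real ones also lowpass the bleed and model head shadowing).

```python
import numpy as np

def crossfeed(stereo: np.ndarray, fs: int, amount_db: float = -6.0,
              delay_ms: float = 0.3) -> np.ndarray:
    """Bleed an attenuated, slightly delayed copy of each channel into the
    opposite ear, roughly like speaker listening does naturally.
    `stereo` is an array shaped (n_samples, 2)."""
    gain = 10 ** (amount_db / 20)
    delay = max(1, int(fs * delay_ms / 1000))
    dry = stereo.astype(float)
    delayed = np.zeros_like(dry)
    delayed[delay:] = dry[:-delay]
    out = dry.copy()
    out[:, 0] += gain * delayed[:, 1]   # right channel bleeds into the left ear
    out[:, 1] += gain * delayed[:, 0]   # left channel bleeds into the right ear
    return out

# e.g. check your stereo image on headphones through it:
# monitored = crossfeed(master_mix, fs=48000)
```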
  2. Haha
    Nabeel Ansari got a reaction from HoboKa in Soundtrack Analysis: what is going on in Ikaruga (Gamecube)?   
    I think you're way overthinking it. The song is in a basic 4/4 at the same tempo (somewhere around 145-150 BPM) the whole way through. It's not really doing anything special either, just some syncopation.
    The "melody begins on the 15th of the previous measure" thing is just called a pickup note. I would be stunned if you told me you've never heard a melody do that before.
  3. Like
    Nabeel Ansari reacted to JohnStacy in DAW based on sheet music?   
    Get the hell out of my thread.
    I have no idea what your problem is or what the hell is wrong with you, but I was actually seriously pursuing this idea. If you read my first post, and the discussion that followed, you would see that we were seriously discussing this concept, which was much different from the other thread, the point of which you also missed.

    If somebody is asking a question on a forum, saying "hurr durr just google it noob" doesn't actually do anything. It's 2019. No shit people can google things. I can google "DAW based on sheet music" to see if there's anything out there. Do you know what comes up? Not much. Stuff on Reaper's notation feature (which is not a DAW based on sheet music), and other related things that aren't actually helpful to what I am looking for.

    I'm a professional musician (a studio musician) with a degree in music composition. I am most comfortable working with music notation. A DAW based on notation would save me a lot of time by cutting out the middleman of having to use both a notation software and a DAW. Somebody asking for a DAW based on notation probably knows how to read sheet music.
    Do you know what is helpful? @Dextastic mentioning Overture 5, which most of us had never heard of, and which seems to fit the bill closer than anything else. Asking a question on a forum brings a human element that interprets the question and answers it in ways that Google simply doesn't. Do you know what wasn't helpful? You, at all.

    "Please use google if you want to use a DAW based on sheet music instead of a piano roll. It is too complicated to explain here, of all places. "

    I fail to see the relevance of this to the original thread. Nowhere, ANYWHERE, did the OP ask about a DAW based around sheet music. I saw this comment and was curious whether such a thing existed, so I started a new thread (see how the topic was different, so I started a new thread?).
    Keep your bullshit away from these discussions.
  4. Like
    Nabeel Ansari got a reaction from Malcos in OCR Cribs (the "Post Pics of your Studio Area" thread!)   
    Did some rearranging.

  5. Thanks
    Nabeel Ansari reacted to DarkeSword in Having Trouble with FLStudio   
    Hey, don't do this. Don't call out other replies as "patronizing." Give your advice without the snide commentary.
  6. Like
    Nabeel Ansari got a reaction from Master Mi in Faithful high definition studio headphones with flat frequency response   
    If you're only using the headphones in your studio, buy the most comfortable pair and hit them with Sonarworks for the best headphone response possible.

    If you plan to use the headphones elsewhere (Sonarworks obviously can't follow you around), I'd say the best of the options you listed is the K-702, just based on the chart.

    However, based on friends' testimonials, ubiquity, and an even better chart, I'd say you should probably go with the Sennheiser HD 280. This is the 280 chart I pulled off of Google:

     
    I have used the DT 880 for a long time, but to be perfectly honest, it's just as bad as the frequency response graph suggests; it has incredibly shrill spikes in the treble range. It's honestly an eye-opener to switch between a flat response and the DT 880's natural response (which I can do, since I use DT 880s + Sonarworks) and hear just how bad the DT 880s actually sound. Compared A/B like that, the uncorrected DT 880s honestly sound like the audio is coming out of a phone speaker.

    Of course, it doesn't matter too much at the end of the day. Headphone responses are easy to get used to and compensate for because their problems are very broad (unlike a bad room, where you can get a random 9 dB spike at 130 Hz and nowhere else due to room geometry). What really matters is that most of the frequency range is represented adequately, and that they're comfortable to wear. Every other consideration can be addressed with practice and experience. You're never going to get truly good bass response on headphones (unless you have Nuraphones), and you're never going to get good stereo imaging without putting some crosstalk simulation on your master chain. Unlike studio monitors, there's a pretty low ceiling to how good headphones can sound, and if you're in the $100-150 range, any of the popular ones will do once you get some experience on them.
  7. Like
    Nabeel Ansari got a reaction from djpretzel in ProjectSAM Orchestral Essentials 1 or 2?   
    With SSDs, you can lower the DFD buffer setting in Kontakt to cut the RAM usage of patches. A typical orchestral patch for me is around 75-125 MB or so.
    Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and using reverb processing in the DAW instead. I recommend this workflow regardless of what era of orchestral library you're using, because it leads to much better mixes and lets you blend libraries from different developers.
  8. Like
    Nabeel Ansari got a reaction from timaeus222 in ProjectSAM Orchestral Essentials 1 or 2?   
    With SSDs, you can lower the DFD buffer setting in Kontakt to cut the RAM usage of patches. A typical orchestral patch for me is around 75-125 MB or so.
    Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and using reverb processing in the DAW instead. I recommend this workflow regardless of what era of orchestral library you're using, because it leads to much better mixes and lets you blend libraries from different developers.
  9. Thanks
    Nabeel Ansari got a reaction from SnappleMan in Faithful studio monitor speakers with flat frequency response and truthful high definition sound   
    So I should amend my statement to be more technically accurate: Sonarworks cannot remove reflections from the room; they are still bouncing around, and no amount of DSP can stop them from propagating.

    However, the effect is "cancelled" at the exact measured listening position. Sonarworks is an FIR approach, which is another name for convolution-style filtering. Deconvolving reflections is absolutely in the wheelhouse of FIR filtering, because the reverb is linear and time-invariant at a fixed listening position, a (relatively) fixed monitoring level, and fixed positions of objects and materials in the room. That's also why it absolutely requires re-running the calibration process if you move things around in the room, change the gain structure of your system output, etc.
    I can't comment much on standing waves, but it seems their paper noted they aren't covered by the filter approach, so they recommended acoustic treatment for that. Just from my peanut-gallery background in EE, that makes sense to me. Same goes for nulls: if a band is just dying at your sitting position, trying to boost it back with a filter is not smart at all.
    Regardless, if you were to move your head or walk around, you would again clearly notice how horrible the room sounds (though the fixed general frequency response is still an improvement), because now you've violated the math assumptions by changing your spatial position. This is the disadvantage of relying on DSP calibration (along with latency and slight pre-ring from the linear-phase filter), and it's a compelling reason not to choose it over proper acoustic design in a more commercial/professional studio (you don't want crap sound for the people sitting next to you in a session!).
    I think it's a pretty decent trade for home producers, and it produces much better results than putting cheap speakers in a minimally treated room and still having to learn how to compensate for its issues; I just see that route as more expensive and time-consuming in the end. Compensating isn't fun. It's easy on headphones, where problems are usually broad, general tonal shifts across frequency ranges. But in a room, and this is shown in the measurement curve, the differences are not broad and predictable; they're fairly random and localized in narrow bands. In my opinion it's difficult to build a mental compensation map unless you listen to a metric ton of different-sounding music in your room. It's traditional to learn your setup, but I think the tech is there to make the process way simpler nowadays.
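    To make the FIR/deconvolution point concrete, here's a minimal sketch of the textbook idea (not Sonarworks' actual algorithm): build a correction FIR from a measured impulse response by regularized inversion in the frequency domain. The impulse response `h` is hand-built here; in practice it would come from a sweep measurement at the listening position.

```python
import numpy as np

n_fft = 8192

# Toy "speaker + room" impulse response at the listening position: a direct
# path plus one early reflection 240 samples (~5 ms at 48 kHz) later. In a
# real calibration, h would come from a measurement, not be hand-built.
h = np.zeros(1024)
h[0] = 1.0
h[240] = 0.4

# Regularized frequency-domain inversion: H_inv = conj(H) / (|H|^2 + eps).
# The small eps keeps deep nulls from turning into huge boosts, which is the
# same "don't try to fix nulls with a filter" caveat as above.
H = np.fft.rfft(h, n_fft)
eps = 1e-3 * np.max(np.abs(H)) ** 2
H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
h_inv = np.fft.irfft(H_inv, n_fft)   # the correction FIR

# The room response convolved with the correction FIR collapses back to
# roughly a single impulse (a flat response), but only for this exact h,
# which is why moving your head or rearranging the room breaks the correction.
corrected = np.convolve(h, h_inv)
residual = np.max(np.abs(corrected[100:]))
print(f"largest residual after the direct sound: {20 * np.log10(residual):.1f} dB")
```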

    To be scientifically thorough, I would love to run a measurement test and show the "after" curve of my setup, but sadly I don't think it's really possible: SW has a stingy requirement that the measurement I/O run on the same audio interface, while the calibrated system output is a separate virtual output, so there's no way to run the existing calibration and then measure it in series. All I can do is offer my personal, anecdotal experience of how it has improved the sound. I'm not trying to literally sell it to you guys, and no, I don't get a kickback; I just think it's one of the best investments people can make in their audio before saving up for expensive plugins or anything else. Especially because its results are relatively transferable to any new environment without spending any more money, no matter how many times you move, whereas room treatment would have to be redone, and maybe more money spent, depending on the circumstances.

    And given the topic of this thread, it shouldn't be overlooked that SW calibration can drastically improve the viability of using cheaper sound systems for professional audio work. I've run calibration at my friend's house with incredibly shitty, tiny $100 M-Audio speakers, placed and oriented in just about the worst way possible, and I'd say the end result really was in the ballpark of the sound quality I get in my home room with more expensive monitors and a more symmetrical setup. It wasn't the same, but it was far too accurate (sans any decent sub response) to roll your eyes at. The stereo field fixing is dope too.

    @Master Mi I'm not sure what's to be accomplished by linking YouTube videos of the sound of other monitors. They're all being colored by whatever you're watching the YouTube video on. At best, a "flat response" speaker will sound only as good as the speakers you're using to watch the video, and furthermore, a speaker set with problems opposite to yours will sound flat when it isn't flat at all. Listening to recordings of other sound systems is just about the worst possible way to tell what they sound like.
  10. Haha
    Nabeel Ansari got a reaction from Master Mi in Faithful studio monitor speakers with flat frequency response and truthful high definition sound   
    The two graphs are both "before" measurements, one each for the left and right speakers.

    That's a crude way of understanding the calibration, but it's really not just about EQ. It develops a filter to invert the problems in your room: not just "frequency" the way musicians think of it (pitch, from bass to treble), but the full frequency response in the electrical engineering sense, magnitude and phase, which is what lets it address things like room reflections.
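    A toy illustration of the "not just EQ" point (my own sketch, not Sonarworks' processing): a single wall reflection turns the response into a comb filter, and undoing it takes a filter with taps spaced at the reflection delay, i.e. time-domain structure that a handful of broad EQ bands can't provide.

```python
import numpy as np

# One reflection arriving 240 samples after the direct sound (a comb filter).
delay, gain = 240, 0.4
h = np.zeros(delay + 1)
h[0], h[delay] = 1.0, gain

# The exact inverse of (1 + gain*z^-delay) is the infinite series
# 1 - gain*z^-delay + gain^2*z^-2delay - ...; truncate it to an FIR.
taps = 20
h_inv = np.zeros(taps * delay + 1)
h_inv[::delay] = (-gain) ** np.arange(taps + 1)

# The room response convolved with the inverse FIR collapses back to roughly
# an impulse: the reflection is cancelled, which no magnitude-only EQ could do.
corrected = np.convolve(h, h_inv)
print("largest leftover tap:", np.max(np.abs(corrected[1:])))   # ~gain**(taps + 1)
```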
  11. Like
    Nabeel Ansari got a reaction from Master Mi in Faithful studio monitor speakers with flat frequency response and truthful high definition sound   
    It doesn't matter what the graph of the speaker says, because your room colors it beyond recognition. Also, I really doubt you can just hear what a flat response truly sounds like, especially if you're only testing it by listening to music, where resonant peaks and standing waves may not even be excited depending on the song you're playing. You measure this stuff, in real life, by sweeping a tone up the range and watching a sound pressure meter for deviations.

    The Sonarworks product will tell you the total response of your room + speakers at your listening position, and I guarantee you, especially because you remarked that your room is untreated, that your system is most certainly way off.

    Here was my system pre-calibration, in a completely untreated room:



    Those peaks are 9 dB. That would make mixing snares basically impossible: when swapping snare sounds, I'd get a completely different thump from them depending on how they're tuned.

    After calibration, the response was flattened and the extra reflections from my walls were suppressed, so I could also judge reverb in my mixes much better.
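    For anyone who wants to produce a curve like this themselves, here's a rough sketch of the measurement idea, assuming you can play a test signal through the speakers and record it back at the listening position. This is a generic transfer-function estimate, not Sonarworks' measurement routine.

```python
import numpy as np
from scipy import signal

fs = 48000

# Test signal: ~10 s of white noise played through the speakers.
rng = np.random.default_rng(0)
x = rng.standard_normal(10 * fs)

# Stand-in for the microphone capture at the listening position. In a real
# measurement, y would be the recorded signal time-aligned with x; here we
# fake a room with one strong reflection so the script runs on its own.
room_ir = np.zeros(2048)
room_ir[0], room_ir[300] = 1.0, 0.5
y = np.convolve(x, room_ir)[: len(x)]

# H1 transfer-function estimate: H(f) = Pxy(f) / Pxx(f), averaged over segments.
f, Pxy = signal.csd(x, y, fs=fs, nperseg=8192)
_, Pxx = signal.welch(x, fs=fs, nperseg=8192)
mag_db = 20 * np.log10(np.abs(Pxy / Pxx) + 1e-12)

# Even this single reflection swings the response several dB up and down in
# the low end, the same kind of localized peaks and dips shown in the curve.
for f0 in (80, 120, 160, 240, 1000):
    idx = np.argmin(np.abs(f - f0))
    print(f"{f[idx]:7.1f} Hz: {mag_db[idx]:+5.1f} dB")
```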
  12. Like
    Nabeel Ansari reacted to zykO in Do You Still ReMix — Why Or Why Not?   
    OOHHHHHH SNAAAPPPPPPPPPPPPPP (leman)
  13. Thanks
    Nabeel Ansari reacted to SnappleMan in Do You Still ReMix — Why Or Why Not?   
    If only you guys put this much effort, emotion and dedication into practicing music...
  14. Thanks
    Nabeel Ansari reacted to JohnStacy in Do You Still ReMix — Why Or Why Not?   
    My last thought, which I'm giving, leaving, and not coming back to:

    One of my remixes is so far removed from the original that if you take away the original melody, because of the altered harmony and counterpoint, it sounds like a completely different piece. I've actually performed said remix without the original melody as an original composition for a graduate composition recital. An analysis of it shows that without the melody, the style, harmony, and counterpoint are so far removed from the original that it qualifies as a completely different piece no matter how you look at it. I mean, it's now less than 10% of the speed of the original, and the melody was almost completely reharmonized. The harmony doesn't even qualify as tonal anymore at this point. It has a loose key center, so it's key-centric, but the chords don't function in a traditionally tonal sense. If you speed it up 10x, the groups of measures together suggest a tonal progression, but the actual phrases in the piece are not tonal. Basically, I wrote a contrafact of the Underwater theme from Super Mario Bros.

    I'm a jazz musician, so the idea of taking several tunes with the exact same content and changing the melody is normal. The concept of the contrafact is kind of a center point of the genre. Sometimes when people write a tune, the original writers fade into obscurity while the performers of said piece get credited. Donna Lee is credited to Charlie Parker, but whether he or Miles Davis actually wrote it is heavily debated. It is a contrafact of the tune Indiana, and is treated as such. The chord changes to I Got Rhythm are so iconic that we basically just call them rhythm changes. There is no effort at all to hide the fact that it's basically the exact content of the song minus the melody. There are other times where tunes are arranged in DRASTICALLY different styles, and although they are the original song, they contributed to the development of the genre, or in some cases multiple genres, in a significant way. Many musicians do arrangements literally all the time to develop their compositional and arrangement technique. More often than not, doing an arrangement of a VG tune in the style of a composer teaches me more about that composer's writing than writing an original tune in that style would. It takes less time, so I can get more out of it really quickly.

    Brahms wrote the Variations on a Theme by Haydn, but Brahms is credited as the composer, not the arranger. In the classical canon, a theme-and-variations form virtually always results in a new piece, even though the melodic content was written by somebody else. Brahms, Mozart, Beethoven, and Strauss are some of the major composers in the classical canon, and they all wrote variations on themes by somebody else, yet are credited as the composers. The Variations on a Theme by Haydn was basically a remix of a piece by Haydn, but Brahms is credited as the composer.

    I mean, if by "added some seasoning" you mean I dumped so much seasoning into it that it's basically a mountain of rainbow powder with no liquid left, then yes, I just added some seasoning of my own. That is a simplification of what goes on and you know it, so please drop the condescending attitude toward the matter, thank you very much.

    And please, for the love of God, don't do the thing where you quote each individual sentence of this post and make me defend it line by line, because I have better things to do with my time. People get tired of that REALLY quickly, because more often than not you simplify what they said in your response, which just adds fuel to the fire rather than continuing the discussion. People waste so much more time correcting your simplifications than actually continuing the discussion, because you "don't give a shit."
  15. Like
    Nabeel Ansari reacted to prophetik music in Do You Still ReMix — Why Or Why Not?   
    how do i turn off reply notifications? this thread is a pit of misery and i want to stop getting dinged every time angelcityoutlaw decides to respond aggressively despite not caring.
    edit: figured it out =D
  16. Like
    Nabeel Ansari reacted to SnappleMan in Do You Still ReMix — Why Or Why Not?   
    I love coming back to check the OCR forums once every 6 years and still the same debates are going on. Just enjoy making music, no point in trying to justify it, or find some kind of deeper meaning or value in it, just have fun.
  17. Thanks
    Nabeel Ansari got a reaction from Jorito in How to get the most out of mediocre instrument samples?   
    Most big-time composers work with a separate patch per articulation instead of keyswitching.
    Yes, it's dumb UX not to offer any switching (and Vienna figured this out a long time ago and built a great mapping software), but saying that a sample library like HO is incapable of getting a good result without spending "hours" compared to "minutes" (a lot of hyperbole there) is just plain wrong. A lot of people use the several-tracks-per-articulation workflow to great effect. It gives better mix control, because you can control articulation volumes independently.
    Keep in mind also that just because the VST doesn't offer switching doesn't mean that's the end of the road. Logic composers get to map things however they want externally. There's also a handful of free ways to create articulation maps by intercepting MIDI, and this approach is actually superior to the switching systems built into libraries, because your output just goes to separate MIDI channels. You can therefore create one MIDI map that works for several orchestral libraries from different companies (by loading the same articulations on the same MIDI channels, like "1 leg, 2 stacc, 3 spicc, 4 pizz") instead of having to configure each of them individually inside their tiny GUIs.
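    As a rough sketch of what such an external articulation map can look like (a hypothetical example using the mido library; the keyswitch note numbers and channel layout are made up, not taken from any particular library): low keyswitch notes select the articulation, and every following note is rewritten onto that articulation's fixed channel, so one map works for any library loaded with the same channel layout.

```python
import mido

# One shared channel layout, reused for every orchestral library:
# channel 0 = legato, 1 = staccato, 2 = spiccato, 3 = pizzicato
# (mido channels are 0-based; "1 leg, 2 stacc, ..." in DAW terms).
KEYSWITCHES = {24: 0, 25: 1, 26: 2, 27: 3}   # hypothetical C0..D#0 keyswitch notes

def split_articulations(in_path: str, out_path: str) -> None:
    """Rewrite a keyswitched performance into per-articulation MIDI channels."""
    mid = mido.MidiFile(in_path)
    for track in mid.tracks:
        current_channel = 0                       # default articulation: legato
        for i, msg in enumerate(track):
            if msg.type in ("note_on", "note_off") and msg.note in KEYSWITCHES:
                # Keyswitch: remember the articulation and mute the switch note
                # itself (velocity 0) so the track's timing is unchanged.
                if msg.type == "note_on" and msg.velocity > 0:
                    current_channel = KEYSWITCHES[msg.note]
                track[i] = msg.copy(velocity=0)
            elif msg.type in ("note_on", "note_off"):
                # Ordinary note: route it to the current articulation's channel.
                # (A fuller version would remember which channel each held note
                # started on, so its note_off follows it across a switch.)
                track[i] = msg.copy(channel=current_channel)
    mid.save(out_path)

# split_articulations("strings_keyswitched.mid", "strings_split.mid")
```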
  18. Like
    Nabeel Ansari got a reaction from Phonetic Hero in Do You Still ReMix — Why Or Why Not?   
    Zircon's fanbase started when he was making OC ReMixes, and he transferred it pretty healthily into a fanbase for his original electronic music.

    I'm really not sure why there's so much emphasis on reasoning it out with logic. Just look at actual real-life examples and see what happens.
  19. Like
    Nabeel Ansari got a reaction from zykO in Do You Still ReMix — Why Or Why Not?   
    Zircon's fanbase started when he was making OC ReMixes, and he transferred it pretty healthily into a fanbase for his original electronic music.

    I'm really not sure why there's so much emphasis on reasoning it out with logic. Just look at actual real-life examples and see what happens.
  20. Like
    Nabeel Ansari got a reaction from timaeus222 in Direct effect inserts vs. aux effect sends   
    Sends are for groups of signals going through one effect chain. That saves a lot of CPU compared to loading effect units on individual channels (and the concept came about because, with hardware, you physically cannot duplicate a $3000 rack reverb on 50 tracks). Send levels also make it easy to control how much of each signal goes through the effect, and whether the send is pre-fader (a fixed send amount, no matter where the track's volume fader sits) or post-fader (the track fader directly scales what gets sent).
    Yoozer pointed out several use cases for sends. It's really just a better way to work and allows more mixing possibilities than sticking to inserts does. It's literally less effort to manage one reverb send (or a few, if you're experienced) and just control what's going through it. You get a 100% wet signal on the send track and can do things to it: EQ or filter it, adjust the entire reverb level of the song at once on one fader, apply mid/side techniques if you're into that, and more. The consistency afforded by routing all of your tracks into just a few plugins creates much better mixes, much more easily.
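    To make the pre/post-fader distinction concrete, here's a tiny sketch of the gain math (my own illustration, not any DAW's actual code): the post-fader send follows the track fader, while the pre-fader send doesn't.

```python
def send_contribution(sample: float, fader: float, send: float, pre_fader: bool) -> float:
    """What one track contributes to the send bus for one sample.
    fader and send are linear gains (1.0 = unity). A pre-fader send ignores
    the track fader; a post-fader send scales with it, so pulling the fader
    down also pulls down that track's reverb."""
    return sample * send if pre_fader else sample * fader * send

# Pull the track fader from unity down to silence and compare the behaviours:
for fader in (1.0, 0.5, 0.0):
    post = send_contribution(1.0, fader, send=0.3, pre_fader=False)
    pre = send_contribution(1.0, fader, send=0.3, pre_fader=True)
    print(f"fader={fader:.1f}  post-fader send={post:.2f}  pre-fader send={pre:.2f}")
```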
  21. Like
    Nabeel Ansari got a reaction from AngelCityOutlaw in How Significant Is Forum Feedback In Improvement?   
    I think (because this is how it was for me) they're useful up to the point where the problems with the music are more objective and agreed on by everyone. Once things get subjective (which happens a lot earlier than people would think), the feedback might keep shaping how the person writes and produces, but it isn't at all necessary for that person to keep improving.
    Like you said, people who have the drive for this stuff will keep going, and I particularly relate to not coming to WIP forums anymore and just bouncing tracks off a few peers instead. When I do it, it's more one-sided. I'm not asking what's wrong with the track or what I should fix; I'm gauging the reaction of a person who isn't already familiar with it, just as a fresh set of ears, to see if they say something like "this ____ part feels like _____". I might have clearly perceived that and been fine with it in my own taste, but it comes down to how listeners will react, so I'll try to compromise a little. It's like when a game dev polls players about a control scheme or something: the dev might be perfectly fine with it, and not just because they created it (it really does work for them), but they want to make sure other people can enjoy it too, so they send out a survey.
    Even with my most skilled peers, I'll send them stuff and on rare occasions they'll say "wow, this sounds really good," but I'll come back to that stuff a year later and can still clearly perceive that I've been improving and see flaws in the sound I used to have. And those flaws are subjective to my own taste, because other people with experienced tastes have shown they still enjoy those tracks.
    My improvement, personally, is pretty self-driven at this point. I don't think many of my friends really share my influences anymore, and I chase production techniques I don't see my friends doing. I think at a certain point you can absolutely trust yourself to be your own critic (without causing self-esteem issues or creative paralysis).
  22. Like
    Nabeel Ansari got a reaction from Dextastic in What's a Decent Price for a MIDI Piano & String VSTi?   
    I worked a little on Palette; it's really good for its price tier, so the free version is a no-brainer for sure.
  23. Like
    Nabeel Ansari reacted to Dextastic in What's a Decent Price for a MIDI Piano & String VSTi?   
    If you have the full version of Kontakt, Palette Primary Colors is a nice freebie for strings.
    https://redroomaudio.com/product/palette-primary-colors/
  24. Like
    Nabeel Ansari got a reaction from WiFiSunset in What's a Decent Price for a MIDI Piano & String VSTi?   
    Inspire is pretty good. Inspire 2 should also be good.

    I personally would recommend Albion, because I think it's wonderful, but you'd do pretty well with either one, and they cover the bases. They're also easier to use, since they're organized as consolidated sections instead of individual instruments.
  25. Like
    Nabeel Ansari reacted to MinnieMoog in NEW ALBUM - COLOURS by PRYZM (Electro Organic Prog)   
    Hey PRYZM
    Just wanted to pop in here real quick to say I am really loving your release. It's wonderful work. I particularly enjoyed Astral Genesis and Constellations. Looking forward to hearing more!