Everything posted by Nabeel Ansari

  1. Pretty much, I can't hear anything it does that the first track doesn't already cover. The tempo and time signature are the same throughout, standard 4/4, and the syncopations used here are the same as in the other track. You should become familiar with syncopation, because I don't see another way to help you wrap your head around what's happening in these rhythms; otherwise, you're not going to get very far analyzing Ikaruga, or most interesting music for that matter. https://en.wikipedia.org/wiki/Syncopation https://www.dropbox.com/s/b3i23bwa021nlhy/2019-08-14_10-00-26.mp4?dl=0 I've recorded a crude video showing the basic thing happening in the Ikaruga tracks. My right hand is just playing straight 8th notes in 4/4. My left hand plays the 1st 8th note and the 4th 8th note. If you draw it out (see the grid at the end of this post), the top is the 4 beats in 4/4, the middle is the eight 8th notes I was playing, and the bottom is the main driving beat that happens in the song, showing up on the down of beat 1 and the up of beat 2. Or in other words, right on the first beat, and halfway between the second and third beats.
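     A rough text version of that grid (beats on top, the eight 8th notes in the middle, the driving accents on the bottom):

         Beats:   1 . 2 . 3 . 4 .
         8ths:    1 2 3 4 5 6 7 8
         Accents: X     X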
  2. I think you're way overthinking it. The song is in a basic 4/4 at the same tempo (somewhere around 145-150 BPM) the whole way through. It's not really doing anything special either, just some syncopation. The "melody begins on the 15th of the previous measure" thing is just called a pickup note. I would be stunned if you told me you've never heard a melody do that before.
  3. Holy shit what? I mean even just on the notation front it looks way less cluttered than traditional software. Thanks for the mention. EDIT: Online research suggests it's very unstable and crashes a lot.
  4. Additionally, it's worth mentioning that software combos like Notion and Studio One let you write in notation and then import directly into the DAW for mockup and mixing. It's not an all-in-one solution, but if you want a way to smooth the path from traditional composition methods into the production phase, that would be the way to go.
  5. I think notation input in Logic and REAPER stuff is the best you're gonna get.
  6. You can get nice digipaks from here: https://www.discmakers.com/products/digipaks.asp
  7. If you're only using the headphones in your studio, buy the most comfortable pair and hit it with Sonarworks for the most ideal headphone response possible. If you plan to use the headphones elsewhere, Sonarworks obviously can't follow you around; in that case I'd say the best out of what you listed is the K-702, just based on the chart. However, based on friends' testimonials, ubiquity, and an even better chart, I'd say you should probably go with the Sennheiser HD 280 (the 280 chart I'm going by is one I pulled off of Google). I have used the DT 880 for a long time, but to be perfectly honest, it's just as bad as the frequency response graph suggests; it has incredibly shrill spikes in the treble range. It's honestly an eye-opener, when you switch between a flat response and the DT 880's natural response (which I can do, since I use DT 880s + Sonarworks), just how bad the DT 880s actually sound. Compared A/B in that fashion, the DT 880's natural sound honestly comes across like audio coming out of a phone speaker. Of course, it doesn't matter too much at the end of the day. Headphone responses are easy to get used to and compensate for because they have very broad features (unlike a bad room, where you can get a random 9 dB spike at 130 Hz and nowhere else due to room geometry). What really matters is that most of the frequency range is represented adequately, and that they're comfortable to wear. Every other consideration can be addressed by practice and experience. You're never going to get truly good bass response on headphones (unless you have Nuraphones), and you're never going to get good stereo imaging without putting some crosstalk simulation on your master chain. Unlike studio monitors, there's a pretty low ceiling on how good headphones can sound, and if you're in the $100-150 range, any of the popular ones will do once you get some experience on them.
  8. With SSDs, you can lower the DFD buffer setting in Kontakt, which decreases the RAM usage of patches; a typical orchestral patch for me is around 75-125 MB. Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and using reverb processing in the DAW instead. I recommend this workflow regardless of what era of orchestral library you're using, because it leads to much better mixes and allows for blending libraries from different developers.
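     As a rough back-of-the-envelope illustration (the patch count and per-mic sizes below are made-up round numbers, not measurements of any particular library):

         # hypothetical template: 40 Kontakt patches, 4 mic positions loaded per patch
         patches = 40
         mics_loaded = 4
         mb_per_mic = 100   # assumed ~100 MB of samples per mic position per patch

         print(patches * mics_loaded * mb_per_mic / 1024)  # ~15.6 GB with every mic loaded
         print(patches * 1 * mb_per_mic / 1024)            # ~3.9 GB with only the close mic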
  9. That'd be true if human loudness perception were linear and frequency-invariant; it is neither (hence the existence of the dB scale and the Fletcher-Munson curves). If you're listening on a colored system with any dramatic deficiencies, your perception of the difference between the source and the chosen monitor will be worse in exactly those deficient ranges. It's the same reason you cannot just "compensate" if your headphones lack bass: if the bass is way too quiet, you are literally worse at telling the difference between +/- 3 dB in the signal than if it were at the proper loudness coming out of the headphones, and that's terrible for mixing (why do you think you're supposed to mix at a constant monitoring level?). Only through lots of experience can you compensate for that level of nuance at such a reduced monitoring level, like Zircon can with his DT 880s; his sub and low end is freaking monstrous but controlled, just the right amount of everything, but he had to learn how to do that over years and years of working with the exact same pair of headphones. Again, it's the worst way to judge anything. For example, I don't really agree with half of the assessments you made on those speakers, listening on this end. The HS7 is actually the worst-sounding speaker in this video; it sounds like it was run through a bandpass, and there's no life in any of the transients. Taking the HS7 as a close reference would not be a "fairly good decision" on your part, it would be a pretty bad one. Also, note the disclaimer at the end where he says "these speaker sounds contain room acoustics".
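     For reference, the dB scale mentioned above is logarithmic; a quick sketch of what a few dB means as a linear amplitude ratio:

         def db_to_amplitude_ratio(db):
             # 20*log10 convention: level change in dB -> linear amplitude ratio
             return 10 ** (db / 20)

         print(db_to_amplitude_ratio(3))    # ~1.41x amplitude (about double the power)
         print(db_to_amplitude_ratio(-3))   # ~0.71x amplitude (about half the power)
         print(db_to_amplitude_ratio(-10))  # ~0.32x, e.g. a hypothetical 10 dB bass roll-off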
  10. So I should amend my statement to be more technically accurate: Sonarworks cannot remove reflections from the room; they are still bouncing around, and no amount of DSP can stop them from propagating. However, their effect is cancelled at the exact measured listening position. Sonarworks is an FIR approach, which is another name for convolution-style filtering. Deconvolving reflections is totally and absolutely in the wheelhouse of FIR filtering, since reverb is "linear" and "time-invariant" at a fixed listening position, a (relatively) fixed monitoring level, and fixed positions of objects and materials in the room. That's also why it absolutely necessitates re-running the calibration process if you change things around in the room, change the gain structure of your system output, etc. (A rough sketch of the general FIR idea is at the end of this post.)

     I can't comment on standing waves, but it seems in their paper they noted that those aren't covered by the filter approach, so they recommended treatment for that. Just from my peanut-gallery background in studying EE, I think it makes sense that standing waves aren't linear and time-invariant, so trying to reverse them through a filter wouldn't go well. Same goes for nulls: if a band is just dying at your sitting position, trying to reverse that via filter is just not smart at all. Regardless, if you were to move your head or walk around, you would again clearly notice how horrible the room sounds (though the fixed general frequency response is still an improvement), because now you've violated the "math assumption" by introducing sound differences created by changing your spatial position. This is the disadvantage of relying on DSP calibration (along with latency and slight pre-ring from the linear phase), and it's a compelling reason why you wouldn't want to choose it over proper acoustic design in a more commercial/professional studio (you don't want crap sound for the people sitting next to you in a session!).

     I think it's a pretty decent trade for home producers, and it produces much better results than trying to put cheap speakers in a minimally treated room and still having to learn how to compensate for issues; I just see that route as more expensive and time-consuming. Compensating isn't fun. It's easy on headphones, where problems are usually broad, general tonal shifts in frequency ranges. But in a room, and this is shown in the measurement curve, the differences are not broad and predictable; they're pretty random and localized in small bands. In my opinion it's difficult to really build a mental compensation map unless you listen to a metric ton of different-sounding music in your room. It is traditional to learn your setup, but I think the tech is there to make the process way simpler nowadays.

     To be scientifically thorough, I would love to run a measurement test and show the "after" curve of my setup, but sadly I don't think that's really possible: SW has a stingy requirement that for measurement the computer's I/O has to run on the same audio interface, while the calibrated system output is a different virtual out, so there's no way I could run the existing calibration and then also measure it in series. All I can do is offer my personal, anecdotal experience of how it has improved the sound. I'm not trying to literally sell it to you guys, and no, I don't get a kickback; I just think it's one of the best investments people can make in their audio before saving up for expensive plugins or anything else.

     Especially because its results are relatively transferrable to any new environment without spending more money, no matter how many times you move, whereas room treatment would have to be re-done and maybe more money spent depending on the circumstance. And given the topic of this thread, it shouldn't be overlooked that SW calibration can drastically improve the viability of using cheaper sound systems to do professional audio work. I've run calibration at my friend's house with incredibly shitty, tiny $100 M-Audio speakers, placed and oriented in just about the worst way possible, and I'd say the end result really was within the ballpark of the sound quality I get in my home room with more expensive monitors and a more symmetrical setup. It wasn't the same, but it was a lot more accurate (sans any decent sub response) than you'd expect. Stereo field fixing is dope too.

     @Master Mi I'm not sure what's to be accomplished by linking YouTube videos of the sound of other monitors. They're all being colored by whatever you're watching the YouTube video on. At best, a "flat response" speaker will sound as bad as the speakers you're using to watch the video; and furthermore, a speaker set with problems opposite to yours will sound flat when it isn't flat at all. Listening to recordings of other sound systems is just about the worst possible way to tell what they sound like.
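     To be clear about what "FIR approach" means in practice: the snippet below is not Sonarworks' actual algorithm (that's proprietary), just a minimal numpy sketch of the general idea of inverting a measured impulse response with some regularization and applying the result as a linear-phase correction filter.

         import numpy as np

         def inverse_fir(measured_ir, n_taps=4096, reg=1e-3):
             # Build a crude linear-phase correction filter from an impulse response
             # measured at the listening position. 'reg' regularizes the inversion so
             # deep nulls don't get boosted toward infinity.
             H = np.fft.rfft(measured_ir, n_taps)
             H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)   # regularized 1/H
             ir_inv = np.fft.irfft(H_inv, n_taps)
             # centering the filter makes it roughly linear phase; this is where the
             # latency and slight pre-ring mentioned above come from
             return np.roll(ir_inv, n_taps // 2)

         # corrected_monitor_feed = np.convolve(program_material, inverse_fir(measured_ir))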
  11. The two graphics are both "before" measurements, one each for the left and right speakers. Thinking of it as EQ is a crude way of understanding the calibration; it's really not just about EQ. It develops a filter to invert the problems in your room: not just "frequency" the way musicians think of it (musicians associate frequency with pitch, from bass to treble), but the full extent of what frequency response represents in electrical engineering, which includes eliminating things like room reflections.
  12. It doesn't matter what the graph of the speaker says, because your room colors it beyond recognition. Also, I really doubt you can just hear what a flat response truly sounds like, especially if you're only testing it by listening to music, where resonant peaks and standing waves may not even be excited depending on the song you're playing. You measure this stuff, in real life, with a sound pressure meter as you sweep up the range to detect deviations. The Sonarworks product will tell you the total response of your room + speakers at your listening position, and I guarantee you, especially because you remarked that your room is untreated, that your system is most certainly way off. My system pre-calibration, in a completely untreated room, had peaks of 9 dB. That would make mixing snares basically impossible, as when swapping tones I'd get a completely different thump to them depending on how they're tuned. After calibration, the response was flattened and the extra reflections from my walls were silenced, so I could also perceive reverb mixing much better.
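     A minimal sketch of generating that kind of sweep (just the test signal; the actual measuring is the SPL meter or a calibrated mic at the listening position), assuming scipy is available:

         import numpy as np
         from scipy.signal import chirp

         sr = 48000
         duration = 20.0   # seconds, sweeping 20 Hz up to 20 kHz
         t = np.linspace(0, duration, int(sr * duration), endpoint=False)
         sweep = 0.5 * chirp(t, f0=20, t1=duration, f1=20000, method='logarithmic')
         # play 'sweep' through the monitors while watching the SPL meter, or record it
         # back with a measurement mic to compute the room + speaker response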
  13. Most big-time composers work using separate patches per articulation instead of keyswitching. Yes, it's dumb UX not to offer any switching (and Vienna knew this a long time ago and built great mapping software), but to say that a sample library like HO is incapable of getting a good result even after spending "hours" compared to "minutes" (a lot of hyperbole here) is just plain wrong. A lot of people use the several-tracks-per-articulation workflow to great effect. It gives better mix control, because you can control articulation volumes independently. Keep in mind also, just because the VST doesn't offer switching doesn't mean that's the end of the road. Logic composers get to map things however they want externally. There's also a handful of free ways to create articulation maps by intercepting MIDI, and this use case is actually superior to the built-in articulation switching systems inside libraries, because your output just goes to separate MIDI channels, so you can create one MIDI map that works for several different orchestral libraries from different companies (by just loading the same MIDI channels with the same artics, like "1 leg, 2 stacc, 3 spicc, 4 pizz") instead of having to go and configure all of them individually inside their tiny GUIs.
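     A minimal sketch of that kind of MIDI interception in Python with mido (the port names and the keyswitch-to-channel map here are placeholders, not anything a specific library defines):

         import mido

         # placeholder map: low keyswitch notes -> MIDI channel of each articulation track
         # channel 0 = legato, 1 = staccato, 2 = spiccato, 3 = pizzicato
         KEYSWITCHES = {24: 0, 25: 1, 26: 2, 27: 3}
         current_channel = 0

         with mido.open_input('Keyboard') as inport, mido.open_output('To DAW') as outport:
             for msg in inport:
                 if msg.type == 'note_on' and msg.note in KEYSWITCHES:
                     current_channel = KEYSWITCHES[msg.note]   # switch articulation, swallow the keyswitch
                     continue
                 if msg.type in ('note_on', 'note_off'):
                     msg = msg.copy(channel=current_channel)   # route the note to the active articulation
                 outport.send(msg)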
  14. Zircon's fanbase started when he made ocremixes and pretty healthily transferred it into a fanbase for his original electronic music. I'm really not sure why there's so much emphasis on reasoning it out with logic. Just look at actual real life examples to see what happens or not.
  15. Some people don't care how the legal and business circumstances pan out. In fact, a lot of people don't.
  16. Sends are for groups of signals going through one effect chain. It saves a lot of CPU compared to loading effect units on individual channels (and came about because, with hardware, you physically cannot duplicate a $3000 rack reverb on like 50 tracks). With send levels, it's also pretty easy to control how much of each signal goes through the effect, and whether the send is pre-fader (an absolute send, no matter where the volume fader of the track sits) or post-fader (the track volume directly scales what is sent). Yoozer pointed out several use cases of sends. It's really just a better way to work and allows more mixing possibilities than sticking to inserts does. It's literally less effort to manage one reverb send (or a few if you're experienced) and just control what's going through it. You get a 100% wet signal in the send track and can do stuff to it, like EQing or filtering it, adjusting the *entire* reverb level of the song at once on one fader, applying mid/side techniques if you're into that, and more. The consistency afforded by routing all of your tracks into just a few plugins creates much better mixes, much more easily.
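     A tiny sketch of the pre-fader vs. post-fader distinction in plain numbers (gains as linear multipliers rather than dB):

         def level_at_send_bus(source, track_fader, send_level, pre_fader=False):
             # what one track actually contributes to the send (e.g. reverb) bus
             if pre_fader:
                 return source * send_level                 # ignores the track fader entirely
             return source * track_fader * send_level       # scales with the track fader

         # pulling the track fader to zero kills a post-fader send, but not a pre-fader one
         print(level_at_send_bus(1.0, track_fader=0.0, send_level=0.5, pre_fader=True))   # 0.5
         print(level_at_send_bus(1.0, track_fader=0.0, send_level=0.5, pre_fader=False))  # 0.0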
  17. I think (because this is how it was for me) they're useful up to the point where problems with the music are more objective and agreed on by everyone. Once things get subjective (which happens a lot earlier than people would think), the feedback might continue contributing to shaping how the person writes and produces, but it isn't at all necessary for that person to keep improving. Like you said, people who have the drive for this stuff will keep going, and I particularly relate to not coming to WIP forums anymore and just bouncing tracks off a few peers instead. When I do it, it's more one-sided. I'm not asking what's wrong with the track or what I should fix; rather, I'm gauging the reaction of a person who isn't already familiar with it, just as a fresh set of ears. Just whether they might say something like "this ____ part feels like _____". I might've clearly perceived that and been fine with it in my taste, but it comes down to how listeners will react to it, so I'll try to compromise a little. It's like when a game dev polls about a control scheme or something. The dev might be perfectly fine with it, and not just because they created it (it really does just work for them), but they want to make sure other people can enjoy it too, so they send out a survey. Even with my most skilled peers, I'll send stuff to them and on rare occasions they're like "wow, this sounds really good", but I'll come back to the stuff a year later and I can still clearly perceive that I've been improving, and I can see flaws in the sound I used to have. And those flaws are subjective to my own taste, because it's been shown that other people with experienced tastes still enjoy them. My improvement, personally, is pretty self-driven at this point. I don't think a lot of friends really share much of my influences anymore, and I chase production techniques I don't see my friends doing. I think at a certain point you can absolutely trust yourself to be your own critic (without causing self-esteem issues or creative paralysis).
  18. I worked a little on Palette, it's really good for its price tier, so the free version is a no-brainer for sure.
  19. The Impact Soundworks Ventus Ethnic Winds collection is on sale for 70% off right now! https://audioplugin.deals/ There's 5 wind products here, the Tin Whistle, Bansuri, Shakuhachi, Panflutes, and Ocarinas. They all have true legato and tons of ornaments to get some really articulate performances. Don't sleep on it. It'll be over in 2 weeks (until 8/22 11:59PM).
  20. Thanks Minnie. Astral Genesis was really fun and I wish I had time to make it a longer piece.