  • Rank
    Pikachu (+5000)
  • Birthday 11/02/1995

Profile Information

  • Location
    Philadelphia, PA
  • Interests
    Music, Mathematics, Physics, Video Games, Storytelling

Artist Settings

  • Collaboration Status
    1. Not Interested or Available
  • Software - Digital Audio Workstation (DAW)
    Studio One
  • Software - Preferred Plugins/Libraries
    Spitfire, Orchestral Tools, Impact Soundworks, Embertone, u-he, Xfer Records, Spectrasonics
  • Composition & Production Skills
    Arrangement & Orchestration
    Drum Programming
    Mixing & Mastering
    Recording Facilities
    Synthesis & Sound Design
  • Real Name
    Nabeel Ansari
  • Occupation
    Impact Soundworks Developer, Video Game Composer

  1. Most big-time composers work using separate articulations per patch instead of keyswitching. Yes, it's dumb UX not to offer any switching (Vienna knew this a long time ago and built great mapping software), but saying that a sample library like HO is incapable of getting a good result even after spending "hours" compared to "minutes" (lots of hyperbole here) is just plain wrong. A lot of people use the several-track-articulation workflow to great effect. It gives better mix control because you can adjust articulation volumes independently. Keep in mind also, just because the VST doesn't offer switching doesn't mean that's the end of the road. Logic composers get to map things however they want externally. There are also a handful of free ways to create articulation maps by intercepting MIDI, and this use case is actually superior to the articulation-switching systems built into libraries: your output just goes to separate MIDI channels, so you can create one MIDI map that works for several orchestral libraries from different companies (by loading the same MIDI channels with the same articulations, like "1 leg, 2 stacc, 3 spicc, 4 pizz") instead of configuring each one individually inside its tiny GUI.
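    The channel-per-articulation idea described above can be sketched in a few lines. This is purely illustrative: the channel assignments and articulation names are hypothetical, not tied to any real library's layout or API.

    ```python
    # Map MIDI channel -> articulation, in the "1 leg, 2 stacc, 3 spicc, 4 pizz"
    # style described above. Because the map keys on channel number only, the
    # same map works for any library whose patches are loaded on those channels.
    ARTICULATION_CHANNELS = {
        1: "legato",
        2: "staccato",
        3: "spiccato",
        4: "pizzicato",
    }

    def route_note(channel: int, note: int, velocity: int) -> tuple[str, int, int]:
        """Resolve an incoming MIDI note event to an articulation name."""
        # Fall back to legato for unmapped channels (an arbitrary choice here).
        articulation = ARTICULATION_CHANNELS.get(channel, "legato")
        return (articulation, note, velocity)
    ```

    The point is that the mapping lives outside any one plugin's GUI, so swapping the underlying library changes nothing about the MIDI you write.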
  2. PRYZM

    Do You Still ReMix — Why Or Why Not?

    Zircon's fanbase started when he made ocremixes, and he transferred it pretty healthily into a fanbase for his original electronic music. I'm really not sure why there's so much emphasis on reasoning it out with logic. Just look at actual real-life examples to see what happens.
  3. PRYZM

    Do You Still ReMix — Why Or Why Not?

    Some people don't care how the legal and business circumstances pan out. In fact, a lot of people don't.
  4. Sends are for groups of signals going through one effect chain. It saves a lot of CPU compared to loading effect units on individual channels (and came about because, with hardware, you physically cannot duplicate a $3000 rack reverb on 50 tracks). With send levels, it's easy to control how much of each signal goes through the effect, and whether the send is pre-fader (an absolute send, no matter where the track's volume fader sits) or post-fader (the track volume directly scales what is sent). Yoozer pointed out several use cases for sends. It's really just a better way to work and allows more mixing possibilities than sticking to inserts does. It's literally less effort to manage one reverb send (or a few, if you're experienced) and just control what's going through it. You get a 100% wet signal in the send track and can do stuff to it: EQ or filter it, adjust the *entire* reverb level of the song at once on one fader, apply mid/side techniques if you're into that, and more. The consistency of routing all of your tracks into just a few plugins creates much better mixes, much more easily.
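    The pre-fader/post-fader distinction above boils down to a single multiplication. A minimal sketch, with hypothetical linear gain values (not dB) and one float standing in for a sample:

    ```python
    def send_signal(sample: float, fader_gain: float,
                    send_level: float, pre_fader: bool) -> float:
        """How much of a channel's signal reaches an effect send.

        Pre-fader: the channel fader is ignored entirely, so the send
        amount stays constant even if you pull the track down.
        Post-fader: the channel fader scales what is sent.
        """
        if pre_fader:
            return sample * send_level
        return sample * fader_gain * send_level
    ```

    So with the fader at 50% and the send at 80%, a pre-fader send still delivers 80% of the signal, while a post-fader send delivers 40%.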
  5. PRYZM

    How Significant Is Forum Feedback In Improvement?

    I think (because this is how it was for me) forums are useful up to the point where the problems with the music are objective and agreed on by everyone. Once things get subjective (which happens a lot earlier than people would think), the feedback might keep contributing to how the person writes and produces, but it isn't at all necessary for that person to keep improving. Like you said, people who have the drive for this stuff will keep going, and I particularly relate to not coming to WIP forums anymore and just bouncing tracks off a few peers instead. When I do it, it's more one-sided: I'm not asking what's wrong with the track or what I should fix, but gauging the reaction of someone who isn't already familiar with it, just as a fresh set of ears. If they say something like "this ____ part feels like _____", I might have clearly perceived that already and been fine with it in my own taste, but it comes down to how listeners will react, so I'll try to compromise a little. It's like when a game dev polls players about a control scheme. The dev might be perfectly fine with it, and not just because they created it (it really does work for them), but they want to make sure other people can enjoy it too, so they send out a survey. Even with my most skilled peers, I'll send them stuff and on rare occasions they'll say "wow, this sounds really good," but I'll come back to it a year later and can still clearly see that I've been improving and can spot flaws in the sound I used to have. And those flaws are subjective to my own taste, because other people with experienced tastes have shown they still enjoy those tracks. My improvement, personally, is pretty self-driven at this point. I don't think many of my friends really share my influences anymore, and I chase production techniques I don't see my friends doing.
I think at a certain point you can absolutely trust yourself to be your own critic (without causing self-esteem issues or creative paralysis).
  6. I worked a little on Palette; it's really good for its price tier, so the free version is a no-brainer for sure.
  7. The Impact Soundworks Ventus Ethnic Winds collection is on sale for 70% off right now! There are five wind products here: the Tin Whistle, Bansuri, Shakuhachi, Panflutes, and Ocarinas. They all have true legato and tons of ornaments for really articulate performances. Don't sleep on it. The sale ends in two weeks (8/22, 11:59 PM).
  8. Thanks Minnie. Astral Genesis was really fun and I wish I had time to make it a longer piece.
  9. Inspire is pretty good, and Inspire 2 should also be good. I personally would recommend Albion, because I think it's wonderful, but you'd do pretty well with either one, and they cover the same bases. They're also easier to use, since they're organized in consolidated sections instead of individual instrument types.
  10. It's not really fair to compare a string library to a full ensemble sketching library. Albion is a pretty amazing sample library; it stands by itself really well and can create an emotive full spread. It's not as agile with line-writing as a full string library, but again, not really a fair comparison. What you get with Albion is tone and ease of mixing. That said, I think you're greatly overusing the word "need". You "want" gear and you "want" to get started on a music journey. You don't "need" to. "Need" means your life circumstances will severely diminish without it. Unless you've somehow bet your life finances on a music career you haven't even started yet, that's probably not the case.
  11. Here are some answers in plainer English:
      1. Clipping sounds like crackling distortion. Just keep raising the volume and eventually you'll hear it. That's the rule in digital audio: when a signal goes past 0 dB, it clips, because the computer can't represent anything louder.
      2. dB is a measure of amplitude (loudness); Hz is completely different. A pure tone is a wave oscillating some number of times per second: 1 Hz is one cycle per second, 2 Hz is two, and so on. The range of human hearing is 20 Hz to 20,000 Hz (that's generous; it varies per person, and my ears stop at 17,000). Go to this site and you will quickly understand what Hz means in the context of what you hear: low Hz is bass, high Hz is treble. Sweep from the lowest number to the highest and you go from the lowest pitch to the highest.
      3. Normalize finds the loudest point in your song and raises the entire song equally, all at once, so that that particular peak sits at 0 dB. Most programs let you choose the target (-3 dB, -1 dB, 0 dB, etc.).
      4. The overall loudness of your song is whatever passes through the master channel. That's what you're hearing, and the master channel's loudness meter tells you what it is.
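    Points 2 and 3 above are just standard math, and can be sketched in a few lines. This is an illustration only, with floats in the range -1.0 to 1.0 standing in for audio samples:

    ```python
    import math

    def amplitude_to_db(amplitude: float) -> float:
        """Convert a linear amplitude (1.0 = digital full scale) to dBFS."""
        return 20.0 * math.log10(amplitude)

    def normalize(samples: list[float], target_db: float = 0.0) -> list[float]:
        """Raise (or lower) the whole signal equally so its loudest
        peak lands exactly at target_db, as described in point 3."""
        peak = max(abs(s) for s in samples)
        gain = 10.0 ** (target_db / 20.0) / peak
        return [s * gain for s in samples]
    ```

    Note that normalizing scales everything by the same gain, so the balance between loud and quiet moments is untouched; only the peak moves.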
  12. Ah, I see. In that case, I get what you're saying. You definitely never want to touch the internal volume. Mine's always at 100%, but that's because I have separate volume knobs for my headphones and speakers. I have zero consistency in where my interface's physical output volume is set; the output knob to my monitors moves at least ten times a day just from listening to random stuff (other music, YT vids, etc.). But when I'm making music and doing a final check on perceived loudness, I definitely pull up something professionally done and released on iTunes to reset the knob to where I'm comfortable listening to mastered music. At this point, I can visually see where that knob position should be for professionally done music to sound "powerful and comfortable": around 30% on the dial for speakers, and 100% for the headphones (with the Sonarworks calibration giving -7.9 dB, and the 250-ohm DT 880s, my interface really needs to try hard). Then I take that familiar volume comfort zone and mix my desired perceived loudness there; this way, I know exactly how my music will contrast with other albums people might stream alongside mine, because I picked that volume level while listening to other stuff. And I know that other stuff peaks at 0 dB, so I put mine at 0 dB too. If both peak at 0 dB and both sound just as loud, then they are just as loud, absolutely, on all devices. Timaeus hit the nail on the head: before you mix anything, set your listening volume while listening to a reference track, preferably something professionally done and commercially released.
  13. @timaeus222 I think you're going in a perceived-loudness direction, which is a little more advanced than the kind of issue BloomingLate has. The issue here is simply that the OP doesn't understand the dB scale, which is the "absolute loudness" measurement he's looking for. BloomingLate, you can raise the master track of your song up to 0 dBFS, the digital limit for clipping. You should always mix up to 0 dB because that's the standard for mastering; 0 dB is marked at the top of the loudness meter in your DAW. The dB number has absolutely no bearing on perceived sound energy without a consideration of dynamic range (you can still have soft music whose loudest peak is 0 dB). If you don't like a high amount of sound energy, mix to 0 dB but avoid any master compression or limiting so that nothing goes over. In other words, avoiding 0 dB doesn't mean you're avoiding making the music sound too loud; you're just annoyingly making people raise their volume knobs relative to all the other music they listen to. To explain your own example: trance music isn't loud because it's at 0 dB (the "red" part), it's loud because it's very compressed with little dynamic range, so the sound energy over time is packed and you feel it harder in your ears. For a practical solution, you can also render your mix so it never hits 0 dB (to truly avoid the need for any master compression and limiting) and then just normalize it. This will make your music at least hit the same peak that other music does, and shouldn't require listeners to vastly pump up the volume to hear it. However, I'd wager that without any compression whatsoever, people will still be raising their volumes. Most music is compressed in some form nowadays, and I can't remember the last album I saw with full dynamic range (besides classical music, which is impossible to listen to in environments like the car because of said dynamic range).
As for the volume levels of your devices (headphones, laptops, stereo), none of that matters at all. If someone's listening device is quiet and they need to dial it to 70% to hear anything, that's their problem. If your music is mixed to the same standards as everyone else's, then it will sound the same on their system as any other music they listen to, and that's what you shoot for. This is the 0 dB thing I was talking about before. How loud it *sounds* is a matter of handling dynamic range using tools like compression, and that's what Timaeus is talking about with referencing a track to match perceived loudness. That stuff is its own rabbit hole and takes a lot of learning and experience to do properly. tl;dr: if you mix so that you go up to but never cross 0 dB, you will never blow out speakers/headphones and your signal won't distort. This is one of those things that should just be automatic for every piece of music you create.
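The peak-vs-perceived-loudness distinction in the post above can be shown numerically. In this sketch, RMS is used as a crude stand-in for sound energy over time (real loudness metering, like LUFS, is more involved), and the two example signals are made up for illustration:

```python
import math

def peak(samples: list[float]) -> float:
    """Loudest single sample: what a peak meter (and clipping) cares about."""
    return max(abs(s) for s in samples)

def rms(samples: list[float]) -> float:
    """Root-mean-square: a rough proxy for sound energy over time."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One loud transient in mostly-quiet material, vs. densely packed,
# heavily compressed-sounding material. Both peak near full scale,
# but their energy over time is wildly different.
sparse = [1.0] + [0.05] * 99
dense = [0.9, -0.9] * 50
```

Here `peak(sparse)` and `peak(dense)` are nearly identical, but `rms(dense)` is several times `rms(sparse)`: the same peak level, very different perceived loudness, which is the trance example above in miniature.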
  14. PRYZM

    Advice on Channeling Creativity from Anxiety

    The most important thing that prolific artists and composers will tell you is that this stuff becomes incredibly easy if you just do it all the time, consistently. Anxiety about being creative is self-fulfilling, since the issues you talk about (not having ideas, not knowing what to do) come from being unpracticed. Would you ask someone to run a five-minute mile if they've been a couch potato for the last three years? Being a creative is like being an athlete: if you don't keep those muscles in shape, they'll never, ever work when you ask them to. Just make stuff, and stop worrying about whether it's bad. Bad art can improve; non-existent art cannot. And remember:
  15. What's the issue pertaining to Shreddage?