PRYZM

Members
  • Content Count

    5,785
  • Joined

  • Last visited

  • Days Won

    31

About PRYZM

  • Rank
    Pikachu (+5000)
  • Birthday 11/02/1995

Profile Information

  • Gender
    Male
  • Location
    Philadelphia, PA
  • Interests
    Music, Mathematics, Physics, Video Games, Storytelling

Artist Settings

  • Collaboration Status
    Not Interested or Available
  • Software - Digital Audio Workstation (DAW)
    Studio One
  • Software - Preferred Plugins/Libraries
    Spitfire, Orchestral Tools, Impact Soundworks, Embertone, u-he, Xfer Records, Spectrasonics
  • Composition & Production Skills
    Arrangement & Orchestration
    Drum Programming
    Lyrics
    Mixing & Mastering
    Recording Facilities
    Synthesis & Sound Design
  • Instrumental & Vocal Skills (List)
    Piano

Converted

  • Real Name
    Nabeel Ansari
  • Occupation
    Impact Soundworks Developer, Video Game Composer
  • Facebook ID
    100000959510682
  • Twitter Username
    _nabeelansari_


  1. With SSDs, the RAM usage of patches decreases by lowering the DFD buffer setting in Kontakt. A typical orchestral patch for me is around 75-125 MB. Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and doing the reverb processing in the DAW instead. I recommend this workflow regardless of what era of orchestral library you're using, because it leads to much better mixes and lets you blend libraries from different developers.
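     A minimal Python sketch of that back-of-envelope RAM arithmetic. The zone counts and preload sizes below are illustrative assumptions, not actual Kontakt internals:

        # Rough estimate of a sampled patch's resident RAM under disk streaming (DFD).
        # Each sample zone keeps only a short preload buffer in RAM; the rest streams
        # from disk, so fast SSDs tolerate much smaller buffers.

        def patch_ram_mb(zones, preload_kb, mic_positions=1):
            """Estimated resident RAM for one patch, in MB (hypothetical numbers)."""
            return zones * preload_kb * mic_positions / 1024.0

        # Hypothetical 2,000-zone string patch:
        print(patch_ram_mb(2000, 60.0, mic_positions=4))  # large buffer, 4 mics: ~469 MB
        print(patch_ram_mb(2000, 18.0, mic_positions=1))  # small buffer, close mic only: ~35 MB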
  2. That'd be true if human loudness perception were linear and frequency-invariant; it is neither (hence the existence of the dB scale and the Fletcher-Munson curves). If you're listening on a colored system with any dramatic deficiencies, you'll be worse at perceiving the comparative difference between the source and the monitor in exactly those deficient ranges. It's the same reason you cannot just "compensate" if your headphones lack bass. If the bass is far too quiet, you are literally worse at telling apart +/- 3 dB changes in the signal than if it were reproduced at the proper loudness, and that's terrible for mixing (why do you think you're supposed to mix at a constant monitoring level?). Only through lots of experience can you compensate for that level of nuance at such a reduced monitoring level, like Zircon can with his DT 880s; his sub and low end are freaking monstrous but controlled, just the right amount of everything, but he had to learn how to do that over years and years of working with the exact same pair of headphones.

     Again, it's the worst way to judge anything. For example, listening on this end, I don't really agree with half of the assessments you made of those speakers. The HS7 is actually the worst-sounding speaker in this video; it sounds like it was run through a bandpass, with no life in any of the transients. Taking the HS7 as a close reference would not be a "fairly good decision" on your part; it would be a pretty bad one. Also, note the disclaimer at the end where he says "these speaker sounds contain room acoustics".
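     For reference, a quick Python sketch of the logarithmic dB arithmetic behind that argument; nothing here is product-specific, and the numbers are illustrative:

        import math

        def db_from_amplitude_ratio(ratio):
            # Amplitude (pressure/voltage) ratio -> decibels.
            return 20.0 * math.log10(ratio)

        def amplitude_ratio_from_db(db):
            return 10.0 ** (db / 20.0)

        print(db_from_amplitude_ratio(2.0))    # doubling amplitude is only ~ +6 dB
        print(amplitude_ratio_from_db(3.0))    # +3 dB is ~1.41x amplitude (2x power)
        print(amplitude_ratio_from_db(-12.0))  # a -12 dB deficiency leaves ~0.25x amplitude,
                                               # so +/-3 dB changes ride on a much quieter signal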
  3. PRYZM

    I want to build you a computer

    My message to Brad earlier this week.
  4. So I should amend my statement to be more technically accurate: Sonarworks cannot remove reflections from the room. They are still bouncing around, and no amount of DSP can stop them from propagating. However, their effect is "cancelled" at the exact measured listening position. Sonarworks is an FIR approach, which is another name for convolution-style filtering. Deconvolving reflections is totally and absolutely in the wheelhouse of FIR filtering, because reverb is linear and time-invariant at a fixed listening position, a (relatively) fixed monitoring level, and fixed positions of objects and materials in the room. That's also why it absolutely necessitates re-running the calibration process if you rearrange things in the room, change the gain structure of your system output, etc.

     I can't comment on standing waves, but their paper notes that they aren't covered by the filter approach, so they recommend acoustic treatment for that. From my peanut-gallery background in studying EE, I think it makes sense: standing waves aren't linear and time-invariant, so trying to reverse them through a filter wouldn't go well. Same goes for nulls; if a band is simply dying at your sitting position, trying to reverse that with a filter is not smart at all. Regardless, if you move your head or walk around, you'll again clearly notice how horrible the room sounds (though the fixed general frequency response is still an improvement), because you've now violated the "math assumption" by introducing sound differences created by changing your spatial position. This is the disadvantage of relying on DSP calibration (along with latency and slight pre-ring from the linear-phase filter), and it's a compelling reason not to choose it over proper acoustic design in a more commercial/professional studio (you don't want crap sound for the people sitting next to you in a session!).

     I think it's a pretty decent trade for home producers, and it produces much better results than putting cheap speakers in a minimally treated room and still having to learn how to compensate for the issues; I just see that route as more expensive and time-consuming. Compensating isn't fun. It's easy on headphones, where problems are usually broad, general tonal shifts across frequency ranges. But in a room, and this shows in the measurement curve, the differences are not broad and predictable; they're fairly random and localized in small bands. In my opinion, it's difficult to build a mental compensation map unless you listen to a metric ton of different-sounding music in your room. Learning your setup is the traditional way, but I think the tech is there to make the process far simpler nowadays.

     To be scientifically thorough, I would love to run a measurement test and show the "after" curve of my setup. Sadly, I don't think that's possible: SW has a stingy requirement that the measurement I/O run on the same audio interface, while the calibrated system output is a separate virtual output, so there's no way to run the existing calibration and then also measure it in series. All I can do is volunteer my personal, anecdotal experience of how it has improved the sound. I'm not trying to literally sell it to you guys, and no, I don't get a kickback; I just think it's one of the best investments people can make in their audio before saving up for expensive plugins or anything else.

     Especially because its results transfer to any new environment without spending more money, no matter how many times you move, whereas room treatment would have to be redone and possibly paid for again depending on the circumstances. And given the topic of this thread, it shouldn't be overlooked that SW calibration can drastically improve the viability of using cheaper sound systems for professional audio work. I've run calibration at a friend's house on incredibly shitty, tiny $100 M-Audio speakers, placed and oriented in just about the worst possible way, and I'd say the end result really was within the ballpark of the sound quality I get in my home room with more expensive monitors and a more symmetrical setup. It wasn't the same, but it was far more accurate (sans any decent sub response) than you'd have any right to expect. The stereo-field fixing is dope too.

     @Master Mi I'm not sure what's to be accomplished by linking YouTube videos of the sound of other monitors. They're all being colored by whatever you're watching the YouTube video on. At best, a "flat response" speaker will sound as bad as the speakers you're using to watch the video; worse, a speaker set with problems opposite to yours will sound flat when it isn't flat at all. Listening to recordings of other sound systems is just about the worst possible way to tell what they sound like.
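     A minimal NumPy sketch of the regularized FIR-inversion idea described above. This is a generic textbook approach, not Sonarworks' actual algorithm; the toy impulse response and the regularization constant are assumptions:

        import numpy as np

        def inverse_fir(h, n_taps=4096, eps=1e-3):
            """Regularized inverse of a measured speaker/room impulse response h.
            G(f) = conj(H) / (|H|^2 + eps) keeps deep nulls from blowing up."""
            H = np.fft.rfft(h, n_taps)
            G = np.conj(H) / (np.abs(H) ** 2 + eps)
            return np.fft.irfft(G, n_taps)

        # Toy "room": direct sound plus one delayed reflection.
        h = np.zeros(512)
        h[0] = 1.0
        h[100] = 0.5
        g = inverse_fir(h)

        # Convolving the room with its inverse should be close to a pure impulse
        # at the measured position; exactly the LTI assumption from the post.
        corrected = np.convolve(h, g)[:512]
        print(np.argmax(np.abs(corrected)), round(corrected.max(), 3))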
  5. The two graphics are both "before" measurements, one each for the left and right speakers. Thinking of it as EQ is a crude way of understanding the calibration; it's really not just EQ. It develops a filter that inverts the problems in your room: not just "frequency" the way musicians think of it (associating frequency with pitch, from bass to treble), but the full extent of what frequency response represents in electrical engineering, which includes eliminating things like room reflections.
  6. It doesn't matter what the speaker's graph says, because your room colors it beyond recognition. Also, I really doubt you can just hear what a flat response truly sounds like, especially if you're only testing it by listening to music, where resonant peaks and standing waves may not even be excited depending on the song you're playing. In real life, you measure this stuff with a sound pressure level meter as you sweep a tone up the range to detect deviations. The Sonarworks product will tell you the total response of your room plus speakers at your listening position, and I guarantee you, especially since you remarked that your room is untreated, that your system is most certainly way off. Here was my system pre-calibration, in a completely untreated room (measurement graph not shown here): those peaks are 9 dB. That would make mixing snares basically impossible, since when swapping snare tones I'd get a completely different thump depending on how they're tuned. After calibration, the response was flattened and the extra reflections from my walls were silenced, so I could also perceive reverb mixing much better.
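     A small sketch of the kind of measurement sweep involved, assuming the standard exponential (log) sine sweep formula; the frequency range, duration, and sample rate are arbitrary choices:

        import numpy as np

        def log_sweep(f1=20.0, f2=20000.0, duration=10.0, sr=48000):
            """Exponential sine sweep from f1 to f2 Hz (Farina-style)."""
            t = np.arange(int(duration * sr)) / sr
            k = np.log(f2 / f1)
            return np.sin(2 * np.pi * f1 * duration / k * (np.exp(t * k / duration) - 1.0))

        sweep = log_sweep()
        # Play this through the speakers while watching an SPL meter (or record it);
        # peaks and dips at the listening position reveal the room's deviations.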
  7. Most big-time composers work with separate articulations per patch instead of keyswitching. Yes, it's dumb UX not to offer any switching (Vienna realized this long ago and built great mapping software), but to say that a sample library like HO is incapable of getting a good result even after spending "hours" versus "minutes" (a lot of hyperbole there) is just plain wrong. A lot of people use the several-tracks-per-articulation workflow to great effect; it gives better mix control, because you can adjust articulation volumes independently. Keep in mind also that just because the VST doesn't offer switching doesn't mean that's the end of the road. Logic composers get to map things however they want externally, and there are a handful of free ways to create articulation maps that intercept MIDI. This use case is actually superior to the built-in articulation-switching systems inside libraries, because your output just goes to separate MIDI channels; you can thus create one MIDI map that works for several orchestral libraries from different companies (by loading the same MIDI channels with the same artics, like "1 leg, 2 stacc, 3 spicc, 4 pizz") instead of having to configure each of them individually inside their tiny GUIs.
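     A tiny pure-Python sketch of that one-map-for-many-libraries idea; the keyswitch note numbers and channel assignments are made up for illustration:

        # Route keyswitch notes to fixed per-articulation MIDI channels, so one map
        # ("1 leg, 2 stacc, 3 spicc, 4 pizz") can serve several different libraries.
        ARTIC_CHANNELS = {"legato": 0, "staccato": 1, "spiccato": 2, "pizzicato": 3}

        # Hypothetical keyswitch note numbers, used purely for this example.
        KEYSWITCH_TO_ARTIC = {24: "legato", 25: "staccato", 26: "spiccato", 27: "pizzicato"}

        def remap(events):
            """events: (note, velocity) tuples; yields (channel, note, velocity).
            Keyswitch notes change the active channel instead of sounding."""
            channel = ARTIC_CHANNELS["legato"]
            for note, velocity in events:
                if note in KEYSWITCH_TO_ARTIC:
                    channel = ARTIC_CHANNELS[KEYSWITCH_TO_ARTIC[note]]
                else:
                    yield (channel, note, velocity)

        # C4 played legato, then a keyswitch to spiccato before D4:
        print(list(remap([(60, 100), (26, 127), (62, 100)])))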
  8. PRYZM

    Do You Still ReMix — Why Or Why Not?

    Zircon's fanbase started when he was making OC ReMixes, and he transferred it pretty healthily into a fanbase for his original electronic music. I'm really not sure why there's so much emphasis on reasoning this out with logic; just look at actual real-life examples and see what happens.
  9. PRYZM

    Do You Still ReMix — Why Or Why Not?

    Some people don't care how the legal and business circumstances pan out. In fact, a lot of people don't.
  10. Sends are for groups of signals going through one effect chain. They save a lot of CPU compared to loading effect units on individual channels (and came about because, with hardware, you physically cannot duplicate a $3000 rack reverb on 50 tracks). Send levels also make it easy to control how much of each signal goes through the effect, and whether each send is pre-fader (a fixed send amount, no matter where the track's volume fader sits) or post-fader (the track volume directly scales what is sent). Yoozer pointed out several use cases for sends. It's really just a better way to work, and it allows more mixing possibilities than sticking to inserts does. It's literally less effort to manage one reverb send (or a few, once you're experienced) and just control what's going through it. You get a 100% wet signal on the send track and can process it: EQ or filter it, adjust the *entire* reverb level of the song at once on one fader, apply mid/side techniques if you're into that, and more. The consistency of routing all of your tracks into just a few plugins creates much better mixes, much more easily.
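     A minimal sketch of the pre-/post-fader distinction in plain Python. The gain values are arbitrary, and real DAWs work per sample or per block, but the routing math is the same:

        def send_amount(sample, fader_gain, send_level, pre_fader):
            """What one track contributes to a shared effect (e.g. reverb) bus."""
            if pre_fader:
                return sample * send_level              # ignores the track fader
            return sample * fader_gain * send_level     # scales with the track fader

        # One shared reverb bus fed by many tracks: the hardware-era economy.
        # (sample, fader_gain, send_level, pre_fader) per track:
        tracks = [(0.8, 0.5, 0.3, False), (0.6, 1.0, 0.7, True)]
        bus_input = sum(send_amount(s, f, lvl, pre) for s, f, lvl, pre in tracks)
        print(bus_input)  # 0.8*0.5*0.3 + 0.6*0.7 = 0.54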
  11. PRYZM

    How Significant Is Forum Feedback In Improvement?

    I think (because this is how it was for me) they're useful up to the point where the problems with the music are more objective and agreed on by everyone. Once things get subjective (which happens a lot earlier than people would think), the feedback might continue contributing to shaping how the person writes and produces, but it isn't at all necessary for that person to keep improving. Like you said, people who have the drive for this stuff will keep going, and I particularly relate to not coming to WIP forums anymore and just bouncing tracks off a few peers instead.

    When I do that, it's more one-sided. I'm not asking what's wrong with the track or what I should fix; I'm gauging the reaction of someone who isn't already familiar with it, as a fresh set of ears, to see if they say something like "this ____ part feels like ____". I might have clearly perceived that myself and been fine with it in my taste, but it comes down to how listeners will react, so I'll try to compromise a little. It's like when a game dev polls players about a control scheme: the dev might be perfectly fine with it, and not just because they created it (it really does work for them), but they want to make sure other people can enjoy it too, so they send out a survey.

    Even with my most skilled peers, I'll send them stuff, and on rare occasions they'll say "wow, this sounds really good"; yet I'll come back to the same material a year later and still clearly perceive that I've been improving, and I can see flaws in the sound I used to have. Those flaws are subjective to my own taste, because other people with experienced tastes demonstrably still enjoy the tracks. My improvement, personally, is pretty self-driven at this point. I don't think many of my friends share much of my influences anymore, and I chase production techniques I don't see my friends using. I think at a certain point you can absolutely trust yourself to be your own critic (without causing self-esteem issues or creative paralysis).
  12. I worked a little on Palette; it's really good for its price tier, so the free version is a no-brainer for sure.
  13. The Impact Soundworks Ventus Ethnic Winds collection is on sale for 70% off right now! https://audioplugin.deals/ There are five wind products: the Tin Whistle, Bansuri, Shakuhachi, Panflutes, and Ocarinas. They all have true legato and tons of ornaments for really articulate performances. Don't sleep on it; the sale ends in two weeks (8/22 at 11:59 PM).
  14. Thanks, Minnie. Astral Genesis was really fun, and I wish I'd had time to make it a longer piece.
  15. Inspire is pretty good. Inspire 2 also should be good. I personally would recommend Albion, because I think it's wonderful, but you'd do pretty well with either one, and they cover bases. They're also easier to use since they're in consolidated sections instead of individual instrument types.