
Nabeel Ansari

Contributors
  • Content Count

    5,797
  • Joined

  • Last visited

  • Days Won

    31

About Nabeel Ansari

  • Rank
    Pikachu (+5000)

Profile Information

  • Gender
    Male
  • Location
    Philadelphia, PA
  • Interests
    Music, Mathematics, Physics, Video Games, Storytelling

Artist Settings

  • Collaboration Status
Not Interested or Available
  • Software - Digital Audio Workstation (DAW)
    Studio One
  • Software - Preferred Plugins/Libraries
    Spitfire, Orchestral Tools, Impact Soundworks, Embertone, u-he, Xfer Records, Spectrasonics
  • Composition & Production Skills
    Arrangement & Orchestration
    Drum Programming
    Lyrics
    Mixing & Mastering
    Recording Facilities
    Synthesis & Sound Design
  • Instrumental & Vocal Skills (List)
    Piano

Converted

  • Real Name
    Nabeel Ansari
  • Occupation
    Impact Soundworks Developer, Video Game Composer
  • Twitter Username
    _nabeelansari_


  1. Pretty much. I can't hear anything it does that the first track doesn't already cover. The tempo and time signature are the same throughout, standard 4/4, and the syncopations used here are the same as in the other track. You should become familiar with syncopation, because I don't see another way to help you wrap your head around what's happening in the rhythms. In other words, without it you're not going to get very far at all analyzing Ikaruga, or most interesting music for that matter. https://en.wikipedia.org/wiki/Syncopation https://www.dropbox.com/s/b3i23bwa021nlhy/2019-08-14_10-00-26.mp4?dl=0
  2. I think you're way overthinking it. The song is in a basic 4/4 at the same tempo (somewhere around 145-150 BPM) the whole way through. It's not really doing anything special either, just some syncopation. The "melody begins on the 15th of the previous measure" thing is just called a pickup note. I would be stunned if you told me you've never heard a melody do that before.
  3. Holy shit what? I mean even just on the notation front it looks way less cluttered than traditional software. Thanks for the mention. EDIT: Online research suggests it's very unstable and crashes a lot.
  4. Additionally, it's worth mentioning that software combos like Notion and Studio One let you write in notation and then import directly into the DAW for mockup and mixing. It's not an all-in-one solution, but if you want a way to smooth the transition from traditional composition methods into the production phase, that would be the way to go.
  5. I think the notation input in Logic and REAPER is about the best you're gonna get.
  6. You can get nice digipaks from here: https://www.discmakers.com/products/digipaks.asp
  7. If you're only using the headphones in your studio, buy the most comfortable pair and hit it with Sonarworks for the most ideal headphone response possible. If you plan to use the headphones elsewhere, Sonarworks obviously can't follow you around, so out of what you listed I'd say the best is the K-702, just based on the chart. However, based on testimonials from friends, ubiquity, and an even better chart, I'd say you should probably go with the Sennheiser HD 280. This is the 280 chart I pulled off of Google: I have used the DT 880 for a long time, but to be perfectly honest,
  8. With SSDs, the RAM usage of patches decreases because you can lower the DFD buffer setting in Kontakt. A typical orchestral patch for me is around 75-125 MB or so. Additionally, you can save a ton of RAM by unloading all mic positions besides the close mic in any patch and using reverb processing in the DAW instead (see the first sketch after this list). I recommend this workflow regardless of what era of orchestral library is being used, because it just leads to much better mixes and allows for blending libraries from different developers.
  9. That'd be true if human loudness perception were linear and frequency-invariant; it is neither (hence the existence of the dB scale and the Fletcher-Munson curves; see the second sketch after this list). If you're listening on a colored system that has any dramatic deficiencies, your perception of the comparative difference between the source and the chosen monitor will be worse in those deficient ranges. It's the same reason you can't just "compensate" if your headphones lack bass. If the bass is way too quiet, you are literally worse off telling the difference between +/- 3 dB in the signal compared to if it were reproduced at the correct level.
  10. So I should amend my statement to be more technically accurate: Sonarworks cannot remove reflections from the room; they are still bouncing around, and no amount of DSP can just stop them from propagating. However, their effect is "cancelled" at the exact measured listening position. Sonarworks is an FIR approach, which is another name for convolution-style filtering. Deconvolving reflections is totally and absolutely in the wheelhouse of FIR filtering, as reverb is "linear" and "time-invariant" at a fixed listening position, a (relatively) fixed monitoring level, and fixed positions of objects in the room (see the third sketch after this list).
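
First sketch: a rough back-of-the-envelope estimate of the RAM savings from the close-mic workflow in post 8. Every number here (template size, mic count, per-mic footprint) is an illustrative assumption, not a measurement:

```python
# Illustrative estimate of Kontakt RAM savings from unloading mic positions.
# All figures are assumptions chosen for the arithmetic, not measured values.
PATCH_COUNT = 40            # assumed orchestral template size
MIC_POSITIONS = 4           # e.g. close / tree / outriggers / ambient
MB_PER_MIC = 100            # assumed ~75-125 MB per loaded mic, per patch

all_mics = PATCH_COUNT * MIC_POSITIONS * MB_PER_MIC
close_only = PATCH_COUNT * 1 * MB_PER_MIC

print(f"All mics loaded: {all_mics / 1024:.1f} GB")    # ~15.6 GB
print(f"Close mic only:  {close_only / 1024:.1f} GB")  # ~3.9 GB
print(f"Saved:           {(all_mics - close_only) / 1024:.1f} GB")
```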
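
Second sketch: a minimal look at the dB arithmetic behind post 9. The decibel scale is logarithmic because hearing roughly tracks ratios rather than absolute differences; the 12 dB bass deficit below is an assumed figure for illustration only:

```python
import math

def amp_to_db(ratio: float) -> float:
    """Amplitude ratio -> decibels."""
    return 20.0 * math.log10(ratio)

def db_to_amp(db: float) -> float:
    """Decibels -> amplitude ratio."""
    return 10.0 ** (db / 20.0)

print(amp_to_db(2.0))                   # doubling amplitude ~= +6 dB
print(db_to_amp(3.0), db_to_amp(-3.0))  # +/- 3 dB ~= x1.41 / x0.71 in amplitude

# The ratio is identical at every frequency, but per the equal-loudness
# (Fletcher-Munson) curves, how well you *hear* it depends on frequency and
# playback level. If headphones reproduce the low end, say, 12 dB too quiet
# (assumed figure), a +/- 3 dB difference there is auditioned at a level where
# the ear resolves changes far more poorly than at the correct level.
bass_deficit_db = -12.0
print(f"+3 dB in the signal lands {3.0 + bass_deficit_db:+.0f} dB "
      "relative to where it should sit")
```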
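
Third sketch: a toy illustration of the FIR/deconvolution point in post 10, not Sonarworks' actual algorithm. Assuming Python with NumPy/SciPy, it builds a regularized inverse of a made-up impulse response (direct sound plus two early reflections) and shows that applying the correction before the "room" collapses the energy back toward a single tap:

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy "measured" impulse response at the listening position: direct sound plus
# two early reflections. Entirely made up for illustration (48 kHz assumed).
h = np.zeros(2048)
h[0] = 1.0      # direct sound
h[240] = 0.5    # reflection ~5 ms later
h[600] = 0.3    # reflection ~12.5 ms later

# Regularized frequency-domain inversion: H_inv = conj(H) / (|H|^2 + eps).
# This is valid precisely because the system is (approximately) LTI at a fixed
# listening position, level, and room configuration, as the post says.
n = 4096
H = np.fft.rfft(h, n)
eps = 1e-3      # regularization so near-zero bins don't blow up the inverse
h_inv = np.fft.irfft(np.conj(H) / (np.abs(H) ** 2 + eps), n)

# Correction followed by the room should approximate a clean impulse:
residual = fftconvolve(h_inv, h)
k = int(np.argmax(np.abs(residual)))
print(k, residual[k])   # peak back at tap 0, reflections largely cancelled
```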