Everything posted by Nabeel Ansari

  1. Super Mario RPG Save stated at the last boss battle with two dead companions and a poisoned 1-hit-from-death Geno. Last save state was at just getting Bowser. Little kids aren't smart okay
  2. http://en.wikipedia.org/wiki/3dB-point Also, it doesn't matter what scale you're on. The definition of dB is constant; it's the reference point you use that gives rise to scales like dBFS, dBSPL, etc. It is worth mentioning that when I say "power", I am referring to the physics definition of power (basically work/time, or energy/time). This has nothing to do with loudness perception. So no, I didn't say "twice the volume", I said "twice the power".
  3. 3dB is a doubling of power; though generally with beginner ear training, yes, 5dB is barely distinguishable, being a bit over 3x the power. You have to know what you're referring to when you say "difference", though. Are you talking about adjustments? Back-to-back comparisons? In mixing you can hear down to a difference of a quarter of a dB when doing EQ work, but with back-to-back chunks (like two songs), it's a little harder because your brain can't distinguish power levels that small through memory. (A quick sketch of the dB-to-power-ratio math is below.)
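To make the dB/power relationship in the last two posts concrete, here is a minimal sketch (plain Python; the function names are mine, not from the posts):

```python
import math

def db_to_power_ratio(db: float) -> float:
    """A dB difference expressed as a power ratio: ratio = 10^(dB / 10)."""
    return 10 ** (db / 10)

def power_ratio_to_db(ratio: float) -> float:
    """A power ratio expressed in dB: dB = 10 * log10(ratio)."""
    return 10 * math.log10(ratio)

print(db_to_power_ratio(3))   # ~1.995 -> +3 dB is roughly a doubling of power
print(db_to_power_ratio(5))   # ~3.162 -> +5 dB is a bit over 3x the power
print(power_ratio_to_db(2))   # ~3.01  -> doubling the power adds about 3 dB
```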
  4. That makes sense. I was specifically pinpointing -0.2 (I don't see an issue with -0.1 in this regard). What DAW are you using? Also, I would suggest opening the Windows mixer and making sure all programs are outputting the same level.
  5. http://www.sgconvention.com/ Looks like Keiji Inafune is gonna be there. This sounds like MAG for western people. Maybe the OCR guys over there should start hanging out here?
  6. Crest factor is referred to in dB as a convention. Don't think about it too hard. ;_;

     Keep SPAN after your limiter. It doesn't do you any good for mastering if you put it before; you want to analyze your result, not your input. (Though you can add a second one before for comparison.)

     Crest factor is the ratio of your peak to your RMS. It's not something you follow along every step of the way; you look at the over-arching picture of the song itself. Max crest factor is what you're going for, because unless you're doing something terribly wrong where you're automating the dynamic range of your song with a gain-limit control, the max should tell you all you need. So again, I amended the numbers to be a little more realistic: you want to try to get around 10dB or above for your max crest factor. Anything less is gonna start sounding smushed or saturated (or both). RMS, I usually keep the max around -12dB (max meaning it's -12, so -11 and -10 go above the max). (A quick sketch of the peak/RMS/crest-factor math is below.)

     And... I'm not sure who's telling you to set your master limiter at -0.2dB. When you're mixing, you want headroom, yes, but do you know why you're told to leave headroom? You want headroom because maximizing your music for printing and distribution is not something you're focused on in the mixing stage. That's mastering. You leave headroom when you're mixing so that the mastering process can then say "I have all this headroom, let's fill it up now."

     0dBFS is the equivalent of a maximum floating-point waveform (decimals ranging from -1.0 to .999...). This is why 0dBFS is the maximum for digital signal processing. Any values outside the range don't exhibit the behavior of numbers inside the range; if you multiply decimals within -1 to .999..., you get an absolute value smaller than your factors (.5 * .5 = .25), while outside of -1 to .999... they get bigger (3 * 5 = 15). This is important in DSP techniques, trust me. There's no formal reasoning for a limiter at -0.2 dBFS, afaik, other than habit on the part of whoever's telling you. If you want to put TLs at -0.2, put a gain effect after TLs and lower it by .2.

     Can you explain what you mean? This doesn't quite make sense; you're further adding dB to your master after you've determined it sounds full and loud? Why?
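A minimal sketch of the peak/RMS/crest-factor arithmetic referenced above (plain Python with NumPy; the function name and the example signal are mine, not from the post or from SPAN):

```python
import numpy as np

def peak_rms_crest(samples: np.ndarray) -> tuple[float, float, float]:
    """Return (peak dBFS, RMS dBFS, crest factor in dB) for a float signal in [-1.0, 1.0)."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    peak_db = 20 * np.log10(peak)
    rms_db = 20 * np.log10(rms)
    return peak_db, rms_db, peak_db - rms_db   # crest factor (dB) = peak (dB) - RMS (dB)

# A full-scale sine wave has a crest factor of ~3 dB, since peak / RMS = sqrt(2).
t = np.arange(44100) / 44100
sine = np.sin(2 * np.pi * 440 * t)
print(peak_rms_crest(sine))   # roughly (0.0, -3.01, 3.01)
```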
  7. Here: http://downloads.izotope.com/guides/iZotope-Mixing-Guide-Principles-Tips-Techniques.pdf This is a nice guide which explains not only how these tools work but why you use them. Having people tell you what EQ settings to use for certain things isn't really helpful if you're not clear on why the equalizer exists in the first place. Remember that technology comes as the result of necessity. The best way to learn how to use it is to use it only when you feel it's necessary. Don't ever do things just because they make the project more complex. Your mastering chain should be an input and a desired result, not an input and a different-sounding result.

     Knowing when to use effects comes from practice and feedback. For example, once you learn what crowded mids sound like because people tell you your mids are crowded, you'll know to fix the problem on your own every time you hear crowded mids, before you show it to anyone.

     Also, in this context, it is worth pointing out that mixing and mastering are indeed very different things, and you should learn the principles of a good mix before trying to master your stuff. Good mixing can survive without mastering, but mastering cannot survive without good mixing. In the industry, the mastering engineer would send a bad mix back to the studio that mixed it, because mastering can't solve instrument balance issues or problems with the individual elements of a track.

     If you want general loudness advice, though, start with a good visualizer, like Voxengo SPAN. SPAN will tell you the RMS and the crest factor. RMS is basically the mathematical "perceived loudness", or the actual energy of the sound. Crest factor is just the mathematical ratio between the peak amplitude value and the peak RMS value. Crest factors have been getting really small these days (about 6 dB, which sounds kinda crunched and terrible), but for good loudness without in-your-face brickwalling, you should try to shoot for a crest factor around 10-12 dB. (Once again, Crest Factor = Peak / RMS.)
  8. However, this is only true for exported audio. Your DAW project file might grow if you're using more instruments, plugins, settings, mappings, etc., but that's usually on the order of mere KB; I wouldn't worry about it.
  9. Interesting. I would attribute it to headers personally, but I can't be sure what else is being rendered.
  10. Not even; it'll be exactly the same. 0000000000000000 takes up the same amount of space as 1011010010100011 and every other 16-bit binary number: it takes up 16 bits. So by induction, if the signal length is the same, the size will be the same.

      And think about this: instead of 0000000000000000, we can say something like "16, 0", meaning "there are sixteen 0's". We have captured 16 0's while using fewer than 16 symbols to say it. This is the simplest, crudest form of compression. (A quick sketch of this idea, run-length encoding, is below.)

      This is not how mp3 compression works, though. mp3 centers the lower frequencies in a mono channel (yes, mp3 is actually a very destructive compression because it destroys panning for the low end) and cuts out high frequencies. In that case, you've taken all the stereo low end and turned it to mono, making the low end HALF THE SIZE. It's very effective, but also very destructive if you crafted a stereo field down there.

      EDIT: My explanation was slightly inaccurate. To read more about what actually happens with the stereo image, read about joint stereo coding.
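A minimal sketch of the "say how many zeros there are" idea, i.e. run-length encoding (plain Python; the function names are mine, and this only illustrates the concept from the post, not how mp3 actually works):

```python
def rle_encode(bits: str) -> list[tuple[int, str]]:
    """Collapse runs of repeated symbols into (count, symbol) pairs."""
    runs: list[tuple[int, str]] = []
    for bit in bits:
        if runs and runs[-1][1] == bit:
            runs[-1] = (runs[-1][0] + 1, bit)
        else:
            runs.append((1, bit))
    return runs

def rle_decode(runs: list[tuple[int, str]]) -> str:
    """Expand (count, symbol) pairs back into the original string."""
    return "".join(symbol * count for count, symbol in runs)

encoded = rle_encode("0000000000000000")   # sixteen 0's
print(encoded)                             # [(16, '0')] -- far fewer symbols than the input
assert rle_decode(encoded) == "0" * 16
```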
  11. These things are only present in your DAW. There's no MIDI in a wave file. There are no plugins or effects in a wave file. There are no layers. It's just a string of numbers. If you import the file into something like Audacity, you can zoom in super close and see each number for every index in time (in my previous example, every 44100 positions is 1 second).

      What happens for output is that your soundcard takes these numbers and generates a smooth voltage signal where the voltage matches each sample value and oscillates the speaker in that fashion. Imagine that over time the speaker cone takes the position of every next y-value (amplitude) as your x position (time) increases. In this example, it wobbles a little bit forward and backward, and then starts pushing really hard forward and backward. In other words, think of the signal (waveform) like the path of the speaker, where a sample value of 1.0 is the speaker cone pushing all the way forward, and -1.0 is pushing all the way back. All the numbers in between are all of the speaker cone positions between all the way out and all the way in.

      If you're confused about why you can output a bunch of different instruments and frequencies with just one signal, read about Fourier's theorem. Basically, it's the same as superposition. A bunch of things adding together can be expressed as one thing (like how 2 + 4 + 6 can be expressed as 12). A signal can be represented as a sum of a bunch of sine waves at different frequencies, so if you were to play multiple instruments through individual speakers, one for each, and then play everything as a summed signal through one speaker, you'd hear more or less the same thing. This is because our ears automatically analyze the frequencies for us. There's a rolled-up sheet in there which basically has a little vibrator for every frequency; when it vibrates, it tells our brain that we hear that frequency. Whether we manually add a bunch of sine waves together or just draw the square wave by hand, it's a mathematically equivalent result; it doesn't matter how it's done beforehand. (A quick sketch of summing layers into one signal is below.)

      If you've done anything in Photoshop, just think of it as flattening the image. You're taking all the settings and generating just a raw image out of it. You can't go back to find all the blending options, the pen tool paths, the smart objects, etc. It's just raw pixels. It's the same with rendering a wave file. Samples are like pixels, but for audio.
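A minimal sketch of the superposition idea above: several sine-wave "layers" summed sample-by-sample into one signal (plain Python with NumPy; the frequencies and variable names are mine, just for illustration):

```python
import numpy as np

sample_rate = 44100                        # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of sample times

# Three separate "instruments": sine waves at different frequencies.
layers = [
    0.3 * np.sin(2 * np.pi * 220 * t),
    0.3 * np.sin(2 * np.pi * 440 * t),
    0.3 * np.sin(2 * np.pi * 660 * t),
]

# Mixing is just adding the layers together at every sample index.
mix = np.sum(layers, axis=0)

print(mix.shape)          # (44100,) -- still one signal, one value per sample
print(float(mix[1000]))   # the summed amplitude at sample index 1000
```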
  12. Going off of this and the other thread, I should clear something up with you: plugins don't really change the quality of the sound, they just modify the sound itself. The quality of the audio will still be matched to the internal processing quality of the DAW itself. Yes, there are poorly designed plugins that may do things like degrade the sound as a side effect of a bad way of processing audio, but these are very uncommon and certainly not the defaults you find in the DAW.

      So anyways, what a limiter does is simply compress down all of the amplitude data that goes above the threshold (set the ceiling to 0 and let it go). This doesn't change the quality of the sound so much as how it sounds. With a barebones decent limiter, you can have a limiter that does absolutely nothing; the input will be identical to the output, which shows that it does not change the sound quality. If you have a limiter with a ceiling at 0 and a lot of the audio goes above the threshold, you're going to have a lot of "pumping" in the audio and it'll sound really weird and fatiguing. For this reason, you want to lower the volume before it hits the ceiling, so the ceiling is only catching extraneous peaks. (A quick sketch of that ceiling behavior is below.)

      In the old days of the music industry, they never really liked limiters. You would just turn the volume down and mix in a more controlled fashion, keeping track of your peaks, RMS, etc. manually. It's more work, but the results are better, as you can have as much dynamic range as you want and as few unintended side effects (like pumping) as you want. Nowadays electronic artists just pump limiters as hard as they can and it sounds arguably disgusting.
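A minimal sketch of the "ceiling only catches extraneous peaks" behavior described above (plain Python with NumPy; real limiters use gain envelopes, attack/release and look-ahead, so this naive per-sample clamp is only the threshold idea, not a production limiter):

```python
import numpy as np

def hard_ceiling(samples: np.ndarray, ceiling_db: float = 0.0) -> np.ndarray:
    """Clamp any sample whose magnitude exceeds the ceiling; everything below passes untouched."""
    ceiling = 10 ** (ceiling_db / 20)          # convert a dBFS ceiling to a linear amplitude
    return np.clip(samples, -ceiling, ceiling)

signal = np.array([0.2, -0.4, 0.95, 1.3, -1.1, 0.6])   # two samples are over full scale
print(hard_ceiling(signal))   # only the two over-full-scale samples are clamped to +/-1.0
```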
  13. Yes, the crackling is called clipping and it's coming from internal workings of your mp3 player. A lot of players come with built-in effects and you might be amplifying your song in the player without knowing it. Check the equalizer settings, and poke around and see if there are other effects options. My recommendation: turn everything off.
  14. So here's how audio works: a WAVE file is simply a gigantic list of amplitude vs. time values. Every successive value is the next sample of audio. So if I have 3 seconds of audio at a 44100 Hz sampling rate, that means that 44100 of these values represent the output signal for one second. The next 44100 samples are the next second, and so on.

      When you mix signals together to create layered sounds, all that's happening (think about waves in physics) is that you're simply adding all of the values together. If this is challenging for you, read up on the concept of superposition. It basically means that if you have smaller "things" contributing to a system, you can express a net thing. Crude example: you can spend $40 on gas, spend $100 on games, and make $500, all in a month. That's 3 separate things, but I can simply express it as one thing, a net gain of $360. No matter how many things happened to my finances that month, it reduces down to one number every month. This is the backbone of physics as well.

      All of the dotted lines (in the attached image) are separate waveforms (layers). Adding them all together gets you the solid waveform, and that's represented in a digital signal as a string of values on the y-axis (in computer calculations these will be decimals from -1.0 to ~.999). This is true for simple waveforms, and it's also true for every one of your instruments and mixer tracks. At each sample point in time, everything is getting added up into a single value. Doesn't matter if it's 3 instruments or 30, it still sums into one signal.

      In 16-bit audio, each sample will be an integer from -32768 to 32767 (2^16, or 65536, values in total). Since there are 8 bits in a byte, that means each sample is 2 bytes in size. Give it a moment of thought, and you'll quickly realize that no matter how many plugins, how many crazy effects, instruments, imported audio, recorded, processed, whatever... the size of your wave file will still only ever be the sampling rate * the time length * the bytes per sample (bit depth / 8) * the number of channels (so 2 if stereo), not including some miscellaneous bytes of headers and such that media players read before they get to the audio data in a wave file. So if my song is mono, 5 minutes long, at 44100 Hz and 16-bit, it is most certainly going to be around 300 seconds * 44100 samples per second * 2 bytes per sample = 26460000 bytes, or ~25 MB. If my final output is in stereo, there are two channels with that many samples in time, so it doubles to roughly ~50MB. (A quick sketch of this arithmetic is below.)

      The only thing that differs with your project complexity is how long the computer takes to sum everything up into one signal. Your computer has to pump harder to do all the math when there's more math to do. However, this will only slow down the rendering process in the DAW's exporting. The final file is still just one signal, and the only determinant of size is the length in time of that signal.

      To answer your side thought, yes, this is common to all DAWs. It doesn't matter if one DAW renders the silence of the muted track and the other one doesn't render it at all; the final output is still one signal. Going back to the finances example, rendering a muted track is just adding $0 to my net gain every month... and nothing changes.
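A minimal sketch of the file-size arithmetic above (plain Python; the function name is mine, and the header note is approximate):

```python
def wav_data_size_bytes(seconds: float, sample_rate: int = 44100,
                        bit_depth: int = 16, channels: int = 1) -> int:
    """Raw PCM size: seconds * samples/second * bytes/sample * channels."""
    bytes_per_sample = bit_depth // 8
    return int(seconds * sample_rate * bytes_per_sample * channels)

mono = wav_data_size_bytes(300)                  # 5 minutes, mono, 16-bit, 44100 Hz
stereo = wav_data_size_bytes(300, channels=2)
print(mono, round(mono / 2**20, 1), "MB")        # 26460000 ~25.2 MB
print(stereo, round(stereo / 2**20, 1), "MB")    # 52920000 ~50.5 MB
# The actual file adds a small fixed header (44 bytes for a canonical WAV), which is negligible.
```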
  15. Here is the main issue: distribution. How do you make it so that your average listener has access to your remix and the ability to interface with it? Do you program a simple non-visual game in which key commands trigger things in your remixes? You're not off the mark here; interactive music is a hot topic for PhD research at universities that do music tech. I'm just trying to get you to think about this in a practical manner so you can better understand the pitfalls. Additionally, I personally think that interactive music is its own medium, and since Elias is designed for game integration, using Elias for this task seems like a broken approach. In other words, the idea of interactive music listening (music in general; a remix is no different, musically, than an original composition) is pretty cool; achieving it using Elias is probably not the greatest approach.
  16. Even if you were to create multi-variant, dynamically composed remixes using this engine, OCR would only accept and post a single rendered export of just one of the "outcomes" you would get from the software. And that's provided it even sounded good; the problem with random dynamic arrangement is that it lacks direction and sacrifices musical progression in favor of ambiguity. By ambiguity, I mean music written this way is purposefully "directionless" so that it can seamlessly transition into any of its other parts. There's no "single song", it's just a bunch of blocks you're stringing together. Not saying it can't be done, because that's basically electronic music performance in a nutshell, but even that is usually guided by a musician at the controls who has a grander idea in mind. This type of workflow is better suited to writing stuff like level music. You're not scoring a scene, so there's no need for tight cohesion with the visuals. Thus, you can write using this and have your score dynamically arranged so that it doesn't sound like it keeps looping all the time. It can be randomly arranged, or arranged according to player actions, but yes, as Moseph said, listening to music is a completely non-interactive experience. You'd essentially only be able to randomly generate the arrangement and record one of the outcomes for submission, and that could turn pretty stale pretty fast.
  17. If you want excellent MIDI and an efficient piano roll, FL needs to be something you try. From personal experience, I've found no piano roll easier to work with than FL's. It uses a Paint-style "draw with left click, erase with right click" approach instead of the traditional "draw with one tool and erase with another tool" style you find in things like Adobe products.

      FL manages your song parts as patterns instead of entries in a track. Your instruments are loaded in a channel list (which is also tied to the step sequencer) and you can write MIDI for multiple channels in the same pattern clip. The arrangement window is where you arrange your pattern clips however you'd like. The tracks are just containers; they have nothing to do with the mixer or the channels. It supports VSTs and ghost channel writing; you can write more than one instrument in a pattern and see faded notes of the other instruments in the "background" as you write.

      The mixer is very simple and, unlike other DAWs, is not modeled like a recording studio console (so there's no additional learning curve). You just have tracks. You can send tracks to other tracks with the push of a button, you can send multiple channels to the same track, etc. The fact that it's not modeled like a console lets them do these things in a simpler manner than emulating how you would do them in real life (there are no busses, insert/returns, sends, auxiliaries, etc.).

      Downsides of FL Studio:
      - Audio recording is pretty basic and leaves a lot to be desired
      - The automation system is pretty clunky and annoying
      - As of right now, you can't ride faders or apply batch functions to multiple mixer tracks; that's going to change soon with the new overhaul coming in FL 12

      So yeah, if you want a good piano roll, download the demo and try it out.
  18. OLIOLIOLIOLIOLIOLIOOO wait what Oh yeah, the Seaboard. I can't wait to get my hands on one. In a few years I'll probably be able to muster up the cash for one.
  19. ^ It's really not BS, it's just how this day and age works. There's a time and place for "nice guy"; getting work is neither. If you do get work, however, make sure you maintain charm (respectful, funny, etc.). People have to like working with you or they won't ask you again. I was going to work with someone on a game OST and he was just... behaving like a child, making demands and dodging my questions about proof of concept and such. Try to avoid these types; I know I will, and I just saved myself a lot of wasted time and work on something that never really got off the ground. Also, it may be worthwhile trying to nudge yourself into GDC; I managed to do it by marketing myself as a composer, even though I'm actually a programmer in game technology (I didn't have this job at the time of registration, so I couldn't really put that down yet). If you get the Audio pass (super $$$ I know) you can attend a lot of really cool panels, some of which are designed to point you in several valid directions for trying to get work. There are table areas filled with small-time game devs hanging out; you could go around and make some friends, and they might ask for your help on their next project if they like you (not as a composer, but as a friend). Don't expect to build $$$ here, expect to build relationships. The money will come with patience and nurturing/expanding those relationships until one day, someone who needs a composer gets you recommended by one of your friends, like "What about that cool ass MikeViper motherfucker?".
  20. YUP YUP YUP YUP I might start playing more often after this comes out.
  21. Thanks bro! I've seen this class before, but I avoided it because of its Linear Algebra recommendation (I haven't taken Linear yet and I don't really want to be stumbling around in the dark). I will probably end up taking this after I DO take Linear, which may be at some point next year. I ended up ordering: http://www.amazon.com/Designing-Audio-Effect-Plug-Ins-Processing/dp/0240825152 It's a book with both a bunch of audio-specific exercises and a decent amount of theory behind it.