
Harmony


Posts posted by Harmony

  1. When you assign things on the fly, do you just go (as a general procedure) to the software while you're doing some track and use the "Learn MIDI assignment" function for whatever you think is cool to fiddle around with? Logic Pro 7 has that (as well as every other sequencing software I've used) but I just want to check that that's an efficient way to "do stuff" so to speak.
    That's essentially how I do it. SONAR has a listing of all assignable parameters for a given effect or instrument, so I usually go that route rather than use whatever built-in learn functionality the plugin may have. But learn is learn, whatever way you look at it I guess.
  2. No need to be nervous, welcome to the boards :)

    Audacity is an audio editor, which means it takes in or records audio files (for example, .wav, .mp3, .aif) and allows you to manipulate them (for example, add effects, change the pitch/volume/speed). Those effects are performed by what are loosely called “plugins”. Some plugins come with Audacity and others (usually in a format called VST) can be bought or freely downloaded.

    Now, while you can use Audacity for sequencing entire songs, it's not the best tool for the job and it doesn't offer all of the features that a full multi-track sequencer (sometimes referred to as a DAW) will have. Reaper and Mixcraft are two good options that offer free unlimited trials and cost about $60 if you decide to buy. There are more expensive and potentially better options, but these are absolutely fine to get started with. I'd still get Audacity to use as a great audio editor, but for your sequencing, go with a full sequencer. So I'll try to answer your questions with that advice in mind.

    1. How easy is it to master the program?
    2. Is there any kind of tutorial literature that can help me study the program?
    3. What is "soundfont"? (Have mercy! Please don't hit me!)
    4. How do I connect my keyboard to the program?
    5. Do I need Finale?
    6. Are effects like modulation, echo, and that-machine-gun-speed-sound-repetition supposed to be created with a different program?

    1) Audacity is cake to use. Reaper takes a little getting used to if you've never used a sequencer before, but once you get the hang of it (or any sequencer), it's very straightforward. Everything is nicely laid out and it didn't give me any problems when I tried it a while ago. I actually just found out about Mixcraft yesterday when I saw a friend of mine using it, but it looked really good. Mixcraft is modeled after GarageBand, so its goal is ease of use.

    2) Reaper has a user's guide and a forum that can be helpful. A YouTube search also pulled up a tutorial video that looks helpful for newbies.

    3) A soundfont is a collection of audio samples bundled with instructions that tell a program how to play them (volume/length/pitch/effects) when they're triggered with MIDI data (see the next question if you don't understand MIDI). Most sequencers, Reaper included, don't natively read soundfonts, but they can load a soundfont player plugin, which in turn loads the soundfont, reads the MIDI data you send to it, and triggers the sounds. They're incredibly useful and I'm a huge fan of them.

    4) There are a few ways in general, but first you have to understand the difference between MIDI and audio data. Audio is a digital or analog representation of sound; your keyboard produces audio. MIDI is a digital instruction that tells hardware or software how and when to play a certain piece of audio, so MIDI itself has no sound. Your keyboard can also send and receive MIDI data. Knowing this, there are 3 basic ways to use your keyboard in any given sequencer.

    • You can connect the audio output of the keyboard (e.g. a headphone jack or line-out port) to the audio input of your computer (e.g. a line-in port) and record whatever audio is coming out of your keyboard directly into Reaper/Audacity/Mixcraft as an audio track. You play the performance live, or use one of your recorded sequences stored in the keyboard. Problem is, if you play live and mess up, there's no easy editing. If you use one of your recorded sequences, you could edit the sequence, but then you're not really taking advantage of the sequencer on your computer, are you? :)
    • You could connect the MIDI out from your keyboard to the MIDI in of your soundcard. Even if you don't have a MIDI in on your soundcard, you're fortunate enough to have a keyboard that transmits MIDI through USB in addition to the standard dedicated MIDI lines, so take your pick of which to connect. Once it's connected, you can send MIDI messages directly to Reaper/Mixcraft (not Audacity though, it's an *audio* editor!). Reaper/Mixcraft are "hosts", which means they can load lots of other programs ("plugins") within them. Some of these plugins are instruments and some are effects (both commonly in the VST format). If you have a VST instrument, for example a VST electric piano, you can load it in Reaper/Mixcraft and then use your keyboard to play that virtual instrument over MIDI. There are lots of advantages to this method. First, your sequencer will record MIDI, not audio, which means that if you make a mistake you can immediately go in and correct the bad note without having to replay the entire thing. You can also quantize your performances, which means all of the notes start perfectly in time with the tempo. In general, there's just a lot of flexibility in editing the performance once it's recorded. Also, there are tens of thousands (maybe more) of virtual instruments out there that can be loaded into Reaper/Mixcraft. Many of them are free and many sound better than the sounds that came with your keyboard! The possibilities are endless.
    • The final connection method is for when you want to use the computer sequencer to sequence the sounds that come with your keyboard. To do so, you hook up the MIDI in/out of your keyboard to the MIDI out/in of your soundcard (again, your keyboard lets you do this with just the USB cable). You also hook up the audio out of the keyboard to the audio in of the soundcard, as in the first method. Now Reaper/Mixcraft are receiving AND sending MIDI data, and receiving audio. With this setup you can use a Piano Roll View (see below) to program in MIDI "notes". That MIDI sequence is then sent to your keyboard, where it is used to play one of the keyboard's sounds. That sound is then sent back to Reaper/Mixcraft and recorded as an audio track. You could also play the keyboard into Reaper/Mixcraft, record the MIDI sequence, have that sequence sent back out to the keyboard, where it will control one of the keyboard's sounds, and then send that audio back into Reaper/Mixcraft and record it as audio.
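
    Since the MIDI-vs-audio distinction trips up a lot of newcomers, here's a tiny sketch of what actually travels down the wire when you press and release a key: a few instruction bytes, no audio. This is a Python illustration with made-up function names; the byte layout itself comes from the MIDI 1.0 spec.

```python
# A MIDI "note" is a few instruction bytes, not sound.
# These are the two messages a keyboard sends when you press and
# release middle C (note number 60) on channel 1.

def note_on(channel, note, velocity):
    # Status byte 0x90 plus the channel (0-15), then note and velocity.
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    # Status byte 0x80 plus the channel; release velocity is usually 0.
    return bytes([0x80 | channel, note, 0])

press = note_on(0, 60, 100)   # key down: 3 bytes, silent by themselves
release = note_off(0, 60)     # key up

print(press.hex())    # -> 903c64
print(release.hex())  # -> 803c00
```

    The host (or the keyboard's own sound engine) is what turns those bytes into audio, which is why the same MIDI sequence can drive any instrument you point it at.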

    5) No, that's a separate program for a separate purpose. Essentially, the functionality of Finale is replaced by the Piano Roll View (PRV) in most sequencers. With the PRV you can use your mouse to directly input MIDI notes, which tell the virtual instruments you have loaded (or external instruments, like your keyboard) what to play.

    6) As I've said, effects and instruments popularly come in a format called VST (sometimes called VSTi if it's an instrument). Reaper/Mixcraft can load these VSTs onto your audio tracks. So let's say you go to this huge archive of VSTs (and other stuff) and pick up a free modulation VST effect. Let's also say that you used the first method above to record yourself playing the piano into an audio track. Once that modulation VST is installed, you can go to Reaper/Mixcraft's list of plugins, select your new modulation effect, and place it on the piano audio track. BAM! You've got a modulated piano.

    Isn't music fun!

  3. It's actually pretty easy. There are a couple of buttons that let you page left and right, moving you over to the next group of channels if there are more than 8. It would only become a pain if I were mixing a project with like 64 tracks, but that never happens for me.
    Oh cool, I'm going to have to see if I can set something like that up when I get home. I average around 30 tracks for a full song, so I've never bothered trying to set up my 9 faders for levels.

    On a sidenote, if SONAR would get its ACT together (Active Controller Technology), then it would be a snap. As it is, ACT never works as it should for me.

  4. It does, but I still don't find it to work well enough to make me want to use them. There's just something about the feedback you get from the BCF that makes it a lot more fun to use. I'm not sure I could go back to something without motorized faders, at least as far as surface controllers go.
    I actually looked at that controller since it's SOOOO cheap, but it would be a luxury item for me since I'm pretty comfortable with my current setup. What I'd be concerned with is how effectively you can use the faders with more than 8 tracks. With your setup, what do you have to do to change which tracks each fader/rotary is assigned to?
  5. The MIDI keyboard I use is an Edirol PCR-M50, and I can honestly say that I never use any of the surface controls on it. I find it more of a pain in the ass than using a mouse, since the controls aren't always in sync with the software, as it were.
    My Axiom 61 has a "null" function that prevents parameter jumps and it works 90% of the time. The Edirol seems to be in the same class of controllers, but it doesn't have something similar?
  6. For me, it completely depends on what I'm doing. That is, I don't have any painstakingly constructed master assignment scheme -- I assign everything on the fly.

    If I'm tracking, I will usually assign the volume of the armed recording track to a fader. That way I can adjust the volume without the mic picking up the mouse clicking. I also do volume automation, even simple moves, using the faders.

    If I'm in dBlue Glitch, I have a standard setup where faders control amounts of a parameter, like modulation speed, or bit crusher amount. Knobs control filter/cutoff of each effect, and one knob controls the global Glitch mix so I can easily turn off the effect.

    For other effects I usually just set up a single fader for automation, like setting one as a Wah amount.

    For synths, I have two faders that I almost always use for cut/res, but my favorite thing to do is assign aftertouch or modulation to a pad. You get a different style of control over the effect than you could ever get with a knob/wheel/fader.

  7. I've given up on corner desks. They're such a pain in the butt and they suck to move. I'm all about the modular desk stuff at Ikea.
    My decision to get a corner desk was purely practical. I haven't had a studio space that wasn't in my bedroom since high school, and when I bought this desk it was placed as shown here. There were literally 3 inches between it and the bed and no other desk would fit in that space, so it was actually a perfect choice (a choice made after a trip to WalMart with a measuring tape).

    Corners have their pros and cons. I don't like how difficult it is to set up dual monitors, but I love the accessibility behind the desk. Because of the triangular space between the back of the desk and the wall corner, I can go under and look behind my computer or hardware without moving anything. I'm going to miss that when I finally get a room big enough for a great desk.

    I wish that was the case. This poor desk has been through 5 moves and is over 6 years old. :(

    I'll be moving twice in the next 3 months, so it would have to make it through 2 moves. Alas I shall have a parting drink with my desk though.

    Oh. I see. You've had faith, and now it's time to put her to rest. Well ♫ This one's for my homies ♪♫ RIP WalMart desk.
    I have hammered so many nails into the desk, and that's the only remedy I foresee it needing to last much longer... I have broken the thing so many times, especially where the keyboard tray slides in.
    OMG, that thing breaks constantly on me. Actually the worst part for me is that the screws for the metal rails rip out of the particle board of the keyboard tray if I put too much pressure on the tray (say, if I slam my fist against the tray after dying the 12,000th time in Call of Duty). I've had to re-drill holes in the rails 5 times now :)
  8. Haha, speed-of-sound based mic placement for optimal phase alignment on a djembe? Please, for my sake, let's leave that idea on the drawing board. My job is fluid dynamics, I do music to get AWAY from it :)

    Brandon, if you don't already have one, you should def look into finding a good spectrum analyzer for phase etc... for the recording.
    Yeah I've got one that I like, but I really wouldn't know how to use it to determine phase cancellation effects. It's something I'll look into though. And as you all have said, I'll play with the mic placement and make sure I'm getting the fullest sound possible.
  9. oooooo brandon - seems like someone is getting ready for their sample library :D
    Haha, yeah I did think about sampling my djembe, but I already piss my neighbors off enough with my yelling (a.k.a. singing) and occasional drum playing :)
    just make sure you check the phase on that bottom mic - you're probably gonna have to invert it
    I thought about that, but I'm not so sure it's the best way to go for a djembe in general, primarily because the frequencies you pick up at the top and the bottom are very different. For example, when a low frequency (and likely high amplitude) wave is recorded at the bottom, the same wave recorded at the top will have a much lower amplitude and will be heavily contaminated with the high frequency content of the initial hand strike, so the phase cancellation should be minimal. It would still be a significant issue if the mics weren't separated by almost 3 feet, though. With that distance, like you alluded to, I think the two signals are far enough out of phase that inverting one of them may end up hurting the summed audio. Take a look at this:

    Capture.PNG

    The top wave is the top mic, and its view has been magnified vertically 10X. This is the low strike on the djembe that you hear at the beginning of the loop. You can see the much greater high-freq content in the top mic and, judging by the 10X zoom, you can also see how big a difference in amplitude there is between the two signals. Finally, you can also see the slight phase lag on the bottom mic.

    Put these together, and as said, I'm not sure that inverting the bottom mic is the right thing to do. Any thoughts?
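
    For what it's worth, you can ballpark that lag with a quick back-of-the-envelope calculation. Here's a Python sketch, assuming a speed of sound of ~343 m/s and a 44.1 kHz sample rate:

```python
# Rough estimate of the inter-mic delay from ~3 feet of separation.
SPEED_OF_SOUND = 343.0       # m/s, dry air at ~20 C (assumed)
SEPARATION_M = 3 * 0.3048    # 3 feet in meters
SAMPLE_RATE = 44100          # Hz (assumed)

delay_s = SEPARATION_M / SPEED_OF_SOUND
delay_samples = delay_s * SAMPLE_RATE

# The first full cancellation for a pure tone lands where the delay
# equals half a cycle: f = 1 / (2 * delay).
worst_case_hz = 1 / (2 * delay_s)

print(f"delay: {delay_s * 1000:.2f} ms ~ {delay_samples:.0f} samples")
print(f"first comb-filter notch near {worst_case_hz:.0f} Hz")
```

    Under those assumptions the delay comes out around 2.7 ms (~118 samples), with the first notch for a pure tone near 188 Hz, which supports the hunch that a polarity flip at this spacing won't behave the way it would with closely spaced mics.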

  10. Let me guess...the MidiLFO? :)
    While that one does look interesting, I saw this challenge earlier because I had come to KVR for a freebie. I was actually in the market for a free, simple audio file viewer, like "eigenmode".

    Also, last week I woke up with an idea for a guitar vst that was VERY similar to "Armchair Guitarist". Can I claim that my intellectual property has been plagiarized? <_<

  11. OMGawd that bass is ridiculous. It takes up the ENTIRE soundstage...which, of course, is awesome. So awesome, in fact, that when the main theme first gets hinted at around 0:52 I get worried that no drop-to-lead in the world is going to have enough punch to follow that bass. I was, sadly, right. The drop at 1:09 seemed anti-climactic to me. The lead just isn't strong enough at that point to keep the energy high imo, especially since the bass scales WAY back.

    By 1:43, when the bass and the lead (an octave higher) come in, I'm feeling it though. Great move going from the subdued section to a much more effective drop at 2:15. Love the added count in there to shake things up a little from the 4-count omm-tss.

    Bass solo section. Yes. Yes. Yes.

    I'm not sad at the ending. I'm a little confused though. I thought everything was going so well, we were getting along great. Our relationship was built on a solid foundation of trust, understanding and a phat bassline, and then you just drop me like that :puppyeyes: Was it something I said?

  12. New Mic :-o

    Just picked up an Audix D4 for recording the low end of my djembe, and maybe the bongos. I also picked up a D-Vice mic clamp which I thought would attach nicely to the bottom rim of the djembe, but thanks to a finicky design by Audix, no dice on that one. Although the clamp claims versatility, that thing won't attach to anything except metal rimmed toms or snares, and with how poorly designed it is I'd bet that even some of those are difficult to attach to :-/ Anyway, I ended up having to construct a bracket (ugh) that would accommodate the d-vice, then drill into the djembe (ugh) to attach the bracket. Not happy with the d-vice at all, but I am happy with what I was able to do with it (second pic).

    D4.JPG D4_ClippedIn.JPG

    The mic sounds great! I just did a little test run (hope I didn't wake the neighbors banging on my drum at 1am) using the D4 on the bottom and my Rode NT1-A on the top:

    The sample plays through 4 times:

    1) top mic (NT1-A) only. raw audio, no processing.

    2) bottom mic (D4) only. raw audio, no processing.

    3) top+bottom. raw audio, no processing.

    4) top+bottom. compression on each mic, EQ'd out the high end on the D4, EQ'd out the low end on the NT1-A, compressed below 80Hz on the D4 to keep rumble in check, limited and further compressed the whole thing.

  13. The term "aux" doesn't say anything about what type of cable you need or can use. It just means the port is an auxiliary input or output, as the case may be.

    The aux input on your car stereo is probably a 1/8" stereo connection (a.k.a. stereo mini-jack) like you'd use on an mp3 player. I'm not sure, but I'd think that your violin would use a mono connection like most instrument connectors. If that is the case, then any guitar/instrument cable will do. Since most of those are 1/4" mono, to connect to the violin you'd need a male 1/8" mono to female 1/4" mono adapter. If the connection is a stereo mini-jack, then you can use a male 1/8" stereo to female 1/4" mono adapter. Of course using a mono instrument cable (a.k.a. a TS cable) gives you a mono signal. If the signal from the violin is actually stereo then instead of a mono instrument cable you could use a stereo 1/4" (a.k.a. a 1/4" TRS) cable with the appropriate adapters.
