Everything posted by Kanthos

  1. I wouldn't say improv is the genesis of composition. They're similar, yes, and some composers do improv, but for some people, like me, it's the other way around. When I improv, I'm playing what I already hear in my head; were I to compose and arrange a piece instead, I'd be writing down what I've heard, not what I've played. A good composer doesn't necessarily play anything or improv well; they're just gifted with exceptional creativity, however they derive it. Interpretation becomes a much bigger thing in music after the classical period. A good interpreter breathes new life into the original piece. If we don't see that as a creative process, it's possibly because we don't have the ear to pick up the subtle differences. I know that I can't hear significant differences in interpretation between recordings of the same piece, and I wouldn't call myself a seasoned listener of classical music. I'm still at the stage where I listen for the composition, not for the interpretation. I think interpretation here is much like mixing: a small and subtle change can make a big difference if you can notice it but is lost on the majority.
  2. Music as an art form always has been and always will be about creation. The most basic level of that creation is interpretation: playing someone else's music with your own feeling, which is significantly different from playing someone else's music in the style of some other performer. Beyond interpretation, there's improvisation and composition. Which of the two is more valuable is subjective: a performer typically creates up to 10 notes at a time on a single instrument, while a composer can write for any instrumentation he or she desires, but doesn't do so in real time as the music is being played. While I believe improvisation and composition are more powerful skills than interpretation, musicians who interpret music well and in their own style are still a rarity and deserve recognition above the average musician.
  3. Threads run in the same memory space as the process that spawned them, regardless of OS. That's the only way this could work: if a thread ran in its own memory space, how would it communicate with other threads in the same process? You'd have to have some kind of protocol (shared memory, sockets, pipes, or the like) to share data between threads, which would drastically complicate the process of writing threaded code. (See the sketch after this list.)
  4. My soundcard is an Edirol UA-25, and I've been happy with it so far. I guess all I'd want to do is make sure that the signal going back into Cubase isn't sent out a second time to avoid a loop where the audio out became the audio in. Would multiple outputs be important or necessary here? I only have one on the UA-25. EDIT: Well, I'm stupid. The Pod models I'm looking at have USB output and act as a second input source, so I can record off the device directly.
  5. My only complaint is that this could use a bass. Shnabubula definitely has a future in jazz performance.
  6. I'm considering getting a Line6 Pod (probably the XT model) to add effects to my keyboards and sax without taking up CPU. In a live situation, no problem: run my MIDI controllers through VSTs to create the sounds, run that into the Line6 for effects, and then out into an amp or DI. Ideally, I'd also like to use this for recording and mixing. There are two scenarios that I'm considering. 1) Using MIDI controllers. I'd play the keyboards, have Cubase record the MIDI, send the resulting audio through the soundcard to the POD. I'd route the POD's output back through the soundcard to Cubase and record the processed audio. Basically, Cubase would record processed audio and MIDI. Besides being the only way to do this, the benefit would be that I could swap the effects and rerecord, much like studios that re-amp guitar parts (record both a clean and processed signal). 2) Using my sax: I'd play into a mic which would run through my soundcard into Cubase. The output audio from the soundcard would go to the POD, which would go back into the computer to be recorded. Would I be introducing any weird loops doing that? It seems like a bad idea since I'd be using the soundcard output for both the unprocessed and processed signals. Is there any other way that I could do this, given that I'd have a sound card and effects box, short of getting a second soundcard or other fairly expensive piece of hardware?
  7. I doubt they'd do it, at least in their version, because their current method of transferring data from one place to the other undoubtedly uses sockets (the one common way of doing network programming; really, all other ways of doing it just wrap sockets). Their first thought would probably be to tell you to just use sockets on a local machine, which is what zircon's already tried; whatever overhead there is in using sockets would still be present. (See the loopback sketch after this list.) I'm not sure there's a whole lot you can do to improve on this, although I've started looking at the options. Probably the only significant value over FX Teleport would be the ability to load 64-bit plugins, assuming FX Teleport doesn't do that already; getting the latency down would probably be hard. That doesn't mean this isn't worth pursuing, but it needs some good design, good coding, and careful thought.
  8. I have a bunch more questions about this, but it's probably best not to work out all the details of proprietary software on a public forum.
  9. I'm curious to know where you're going with this. I don't have access to a Mac environment (although designing code to be easily cross-platform compatible isn't that hard; you just have to recognize the dependencies on one system and isolate them; see the sketch after this list), but I have experience with, or at least an interest in, everything else. Of course, I'm working full-time, but depending on your timeframe and what you're envisioning, I might be able to get on board with this. Post some more details or let me know what you're thinking. As for other places, university message boards or newsgroups might be a good resource. uw.cs.general is the place to go if you want to ask around UWaterloo (the school I graduated from, and I believe Emura is in engineering there as well); other schools will have something similar.
  10. Unless you're planning to use only soundfonts (and even then, arranging for a whole orchestra might be pretty CPU-intensive), there's no reason to limit yourself further. Even running plugins natively, with no VM involved, you *will* hit limitations if you use decent plugins; I don't know why you'd want to hit those limits sooner by adding VM overhead on top.
  11. Congrats! I only had time for about 100 votes this month, so not as much bumping up of OCR people as I would've liked. Is there a page that shows the overall standings? I can only seem to find the #1 song in each category or standing by category, not standing in the semifinals. I'm curious to see where other songs like Red July and Furious George ended up.
  12. Could you find out who was in charge of the LAN room and ask them what they used and where it came from?
  13. I'm going crazy at work at the moment (I just became team lead on one of my team's two projects, and the other lead is on vacation, so I'm acting in his place as well). I probably won't have any time to judge the quarterfinals, but when the semis come up, I should be okay to do a solid block of judging. Good luck to all the OCR people!
  14. Needs more Rhodes! Just kidding, of course; I haven't gotten my copy yet. Although it's funny how you guys have corrupted me; I read the title of the thread as "Pixietricks ALBLUM RELEASE"
  15. http://twitter.com/mjchase
  16. Congrats Jill! I'll probably get my copy in a couple weeks, shipping to Canada being what it is (at least, it took about two weeks for the VGDJ pin).
  17. Note to self: don't post technical information first thing in the morning. Wake up first. zircon's right. Smaller buffers increase CPU load, and larger buffers increase latency. You have to figure it out on a case-by-case basis for your CPU and sound card combination; you'll want to get the buffer as small as possible, and the latency as low as possible, without causing static, because you're trying to play in real time, and a large latency will likely throw you off. Any more than 10-15 milliseconds and I can't play along with anything pre-recorded. (There's a quick buffer-to-latency calculation after this list.)
  18. I recently got Kore 2. BIG WARNING: If you plan on using plugins that don't load patches on their own (i.e. they rely on the host interface to load patches for them, in fxp or fxb format), Kore WILL NOT let you load them, and those plugins are basically useless other than whatever default presets the plugins might have unless you program all the sounds yourself. I've also asked NI directly and they have no intention of adding this functionality anytime soon. That said, I'm quite happy with it. I can live with the plugin patch loading problem since I only have one plugin that doesn't load any presets within its own interface (and it's a smaller one, so the author is considering adding that functionality because I mentioned the incompatibility with Kore). I don't really do a lot of sound tweaking; I mainly got Kore so I could have an easy way to organize the sounds I'd use in a live performance situation and for the searching features. Soundbanks really haven't been an issue to me. I've been using them solely as a way of indicating where the sound came from. There's nothing forcing you to do this though; you could put everything in one soundbank if you wanted and essentially ignore the idea of soundbanks altogether. I find it helpful to have some organization. Searching is also done across soundbanks, so I haven't felt limited in any way.
  19. Odds are good that you've got one or more of these things going on: 1) You aren't using ASIO drivers for your soundcard, which causes lag. 2) You have a lot of stuff loaded in the background that individually takes up little CPU time but combined is using a fair chunk of it, so your CPU can't keep up. 3) Your sound card's buffer size is too big (taking up more memory and making the CPU work harder to fill it) or too small (more data is being generated than the buffer can hold, causing crackles). 4) Guitar Rig 3, with all the effects you're trying to use, is just too demanding for your CPU. Recording audio isn't terribly CPU-intensive; the problem is that you're processing the audio and recording it at the same time. If you have a DAW (FL Studio, Cubase, Sonar, Logic, Garageband, Live, etc.), you might want to try recording a clean signal and sending it through Guitar Rig once it's recorded, although depending on your guitar skills, that might affect your playing. Something else to check is whether or not you can play fine in Guitar Rig *without* recording. By that, I mean play your guitar and make sure you have Guitar Rig and your sound card configured so that you hear the processed audio output from Guitar Rig. (Apologies that I don't know how to do this myself; I use Guitar Rig within Cubase or Native Instruments Kore to add effects to other virtual instruments like Native Instruments Elektrik Pianos, so I don't know how the signal flow would work for playing with a live instrument.) If you can't play without static when you're not recording, you certainly won't be able to when you are. More than likely, though, this is a problem with the way your sound card is configured. M-Audio soundcards usually have a Control Panel application that lets you set latency and buffer size; try playing around with the settings there and make sure you've updated to the latest sound card drivers (go to M-Audio's site to get them).
  20. Nah, TMNT2 was B A B A Up Down B A Left Right B A Start (or replace start with select start if you wanted to play with two players).
  21. I've got the code done to generate the images. Still need to work out how to cache them, though; waiting on djpretzel's feedback for that. (One possible approach is sketched after this list.)
  22. Don't know enough to answer your other questions, but the Silver XP needs the normal version to function.
  23. Wishing I was still a student, doing some kind of AI/music PhD (something like melody detection would be really cool).
  24. Another note with Cubase: sometimes you'll have to tell it to rescan your plugin folder. I haven't had a 100% success rate with it auto-detecting new plugins (although that might have to do with the installation procedures for the plugins themselves). Also, it *is* a VST. It isn't a VSTi (the i being instrument). Most types of plugins (RTAS, AU, VST, etc.) can be instruments or effects, and most people call them the same thing to keep it simple.
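For post 3 above, a minimal sketch in Python of the shared-memory point (the names are illustrative, not from any project mentioned here): two threads spawned by the same process read and write one object directly, and a lock is the only coordination they need, precisely because they already share an address space.

```python
import threading

shared = {"count": 0}   # one object in the process's single address space
lock = threading.Lock()

def worker():
    # Both threads touch the same dict directly; no sockets, pipes,
    # or other IPC protocol is involved.
    for _ in range(100_000):
        with lock:
            shared["count"] += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])  # 200000
```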
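For post 7, a rough stand-in (not FX Teleport's code, and not zircon's) for "sockets on a local machine": it times a TCP round trip over the loopback interface, which is the kind of per-block overhead that would still be there even with the host and the plugins on one box. The port number, payload size, and round count are arbitrary.

```python
import socket
import threading
import time

PORT = 50007           # arbitrary local port for this test
ROUNDS = 1000
PAYLOAD = b"x" * 4096  # stand-in for one block of audio data

def echo_server():
    # Minimal loopback echo server: whatever comes in goes straight back out.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # crude way to make sure the server is listening first

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(ROUNDS):
        cli.sendall(PAYLOAD)
        received = 0
        while received < len(PAYLOAD):
            received += len(cli.recv(4096))
    elapsed = time.perf_counter() - start
    print(f"average loopback round trip: {elapsed / ROUNDS * 1e3:.3f} ms")
```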
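For post 9, a tiny sketch of what "recognize the dependencies on one system and isolate them" can look like: every OS-specific detail sits behind one function, and the rest of the program never checks the platform itself. The paths here are only illustrative defaults, not authoritative locations.

```python
import sys

def default_plugin_dir() -> str:
    # The rest of the program only ever calls this function; the
    # OS-specific details live in this one spot and nowhere else.
    if sys.platform == "win32":
        return r"C:\Program Files\VstPlugins"
    if sys.platform == "darwin":
        return "/Library/Audio/Plug-Ins/VST"
    return "/usr/lib/vst"  # a plausible default elsewhere

print(default_plugin_dir())
```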
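For post 17, a back-of-the-envelope calculation of how buffer size maps to latency, ignoring whatever extra delay the driver and converters add: divide the buffer length in samples by the sample rate.

```python
# Buffer duration in milliseconds = samples in the buffer / sample rate.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44_100) -> float:
    return buffer_samples / sample_rate_hz * 1000

for size in (128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):5.1f} ms")
# 128 -> 2.9 ms, 256 -> 5.8 ms, 512 -> 11.6 ms, 1024 -> 23.2 ms:
# somewhere between 512 and 1024 samples you cross the 10-15 ms range
# mentioned in the post.
```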
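For post 21, one possible shape for the caching, purely hypothetical and independent of whatever djpretzel suggests: key each rendered image on a hash of its inputs and regenerate only when the cached file is missing or stale. `cached_image`, `render`, and the cache directory are made-up names.

```python
import hashlib
import time
from pathlib import Path

CACHE_DIR = Path("image_cache")  # hypothetical cache location
CACHE_DIR.mkdir(exist_ok=True)

def cached_image(key: str, render, max_age_seconds: int = 3600) -> bytes:
    """Return the cached image for `key`, calling render() only on a miss."""
    path = CACHE_DIR / (hashlib.sha1(key.encode()).hexdigest() + ".png")
    if path.exists() and time.time() - path.stat().st_mtime < max_age_seconds:
        return path.read_bytes()  # fresh enough: serve the stored bytes
    data = render()               # missing or stale: regenerate and store
    path.write_bytes(data)
    return data
```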