
100_PERCENT ROEMER

Members
  • Posts

    358
  • Joined

  • Last visited

  • Days Won

    7

Profile Information

  • Real Name
    Derek Roemer
  • Occupation
    Teacher
  • Interests
    Metalworker. Musician. Modern Renaissance man.

Contact

Artist Settings

  • Collaboration Status
    1. Not Interested or Available
  • Software - Digital Audio Workstation (DAW)
    FL Studio
  • Software - Preferred Plugins/Libraries
    famisynth
  • Composition & Production Skills
    Arrangement & Orchestration
    Drum Programming
    Lyrics
    Mixing & Mastering
    Recording Facilities
    Synthesis & Sound Design
  • Instrumental & Vocal Skills (List)
    Accordion
    Acoustic Guitar
    Electric Bass
    Electric Guitar: Lead
    Electric Guitar: Rhythm
    Organ
    Piano
    Vocals: Male

Recent Profile Visitors

3,195 profile views

100_PERCENT ROEMER's Achievements

  1. (Full disclosure: I've been studying and playing music for nearly 30 years and producing for fun for half of that, and I've never once used a loop or sample in my tracks, let alone AI generation.) You're not wrong that "if it's good, then good". That's how art is, and I'm 100% in agreement that if the music sounds good, it's good. The problem I have (and others have) with AI is twofold:

     1. With generative AI, it's just a matter of time before AI algorithms utilize AI generation to self-produce "music" without human prompting. This "music" will then flood the internet faster and farther than any human could possibly produce, even with an army of spambots at their disposal. Who is going to filter all of the crap from the good stuff if the crap is self-replicating? Even today, with more humans than ever before producing good music (and total garbage) with human hands, there is still an upper limit, a human limit, on the amount of material that is produced on a daily basis. When the streaming sites/algorithms start self-producing AI music, everything made by human hands, good and garbage alike, will be drowned out forever by AI-regurgitated tonal spam.

     And as a side note, it also completely undermines the concept of commissioning artwork. Why pay someone and wait when you can get something "good enough" with auto fill? A real bummer for those who rely on commissions to make ends meet. When people talk about "AI stealing their livelihoods", it's a legitimate concern. If the amount of material exceeds the amount of time that every human on earth can dedicate to listening, then of course people will never be exposed to "genuine" and "original" music. It's just a numbers game at that point, and whichever AI can produce the biggest number of tracks and capture the largest audience will win, even if the AI-generated "music" is just a non-original rehash of the material it was trained on.

     AI music is essentially a race to the bottom, and do you really want to spend your limited time as a conscious being consuming the lowest-quality, non-original art? If you do, that's all fine and well, but it will also financially reward and therefore promote AI music above and beyond human artists, because AI doesn't cost anything more than a little bit of electricity. With the profit incentive in place, human artists will be further disincentivized to pursue their craft and will inevitably be priced out of the game even if they can manage to catch a handful of fans. I'm fine with AI replacing monotonous labor in a factory. I'm NOT fine with AI replacing human artists. It's simply too 1984 for my tastes.

     2. The current AI models like SunoAI are only able to generate convincing "music" because they were trained on material without the consent of the creating artists, which is a violation of copyright law. I'm not a lawyer, but I guarantee you that SunoAI will get sued into oblivion in relatively short order, or at least ordered to retrain their model on material that has no copyright (classical music, free-licensed music, etc.). It's really a slap in the face of the artists, and there will be some form of retaliation in response. In the meantime, I'm sure there will be legions of "influencers" punching generate on whatever AI model is the flavor of the week and advertising themselves as "musicians" to their subscriber base even though they have zero talent or skill, simply because the models produce music that is "good".
     Do you really want an entire generation of "artists" clicking Ctrl+A -> Generative Fill? Is a future devoid of original thought and creativity in art really the future we deserve?

     That's not to say that AI doesn't have its place in music production! For example, I came across this Ocarina of Time arrangement in the NES soundfont, and while it sounds "good" and it's cool and I like it and it's music and, and, and... there's nothing original in the notes being played, nor were there notable changes to the soundfonts used. It probably took some time to produce, but it doesn't require musical skill, just monotonous button pushing until the desired result is compiled. AI could easily reduce the monotony of rehashing MIDI files (or generating them from a track wholesale) with new sounds, and I don't see any issue with that because it's simply improving the productivity of a producer, as long as credit is given to the original artists (and in this case it was).

     Anyway, just my $0.02. I'm not opposed to the tech, but I am opposed to people using the tech to steal from artists and fill the world with spam.
  2. Thanks for the clarification, I should read the announcements sometime, hah. I don't think there is any form of reverse search for AI-generated audio yet. I played with the Red Brinstar theme some more and fed a few results from one model into a different model, and used a mic with some filters to record back into another model to get a new result, and... yeah, there are some steps involved, but none of it requires any form of musical skill or theory. I was just clicking buttons in a browser and waiting for it to compile, and after about an hour of playing around I got the source material notation to play in a specific "genre" and it sounded "good". It's pretty scary, all things considered. If you take a few extra steps to obfuscate the AI generation, at what point is it "authentic" music (or impossible to tell otherwise), at what point is it just "sampling", at what point is it...?
  3. Howdy, OCR. If you haven't played with SunoAI or at least heard of it, it's a gamechanger! However, please don't assume that when I say it's a "gamechanger" that it's changing the game for the better. A quick example: I generated these four "Super Metroid Remixes" in less than 5 minutes with just a couple of genre tags, and the "problem" is that they're actually "good" for the most part. The first two even auto-generated a fitting title without any input, using only the prompt "supermetroid redbrinstar remix".

     Shadows of Zebes - Iteration 1
     Shadows of Zebes - Iteration 2
     Sample Ambient Track - Iteration 1
     Sample Ambient Track - Iteration 2
     *links removed to follow forum policy, play with the AI yourself*

     Of course, the bitrate is low, and the tracks abruptly cut off, and there's occasionally some unwarranted dissonance and, and, and... all of that will get refined within a few short months, to the point where AI-generated music is indistinguishable from human-created music. I personally think that this is largely due to SunoAI unlawfully using the music of countless artists to train their model, and I can only assume that they will get sued eventually, but the genie is out of the bottle regardless.

     What does OCR think of AI-generated music, and what mechanisms (if any) does it intend to put into place to ensure that submitted tracks are written/built/performed by human hands? I guarantee you that with a couple of beers and a lazy afternoon I could generate a track that meets all the submission standards and is comparable in quality to the "genuine" music produced and posted here, which scares the shit out of me, if I can be frank about it. Any thoughts?
  4. Oshit, it's dj carbunk1e! I had your Tales of Phantasia Thor remix on repeat on my summer workout playlist some 20 years ago, haha. Good stuff!

     The idea of this track is that someone wakes up to their alarm clock on turbo mode and has a total mental breakdown from having to wake up to the same obnoxious sound every day of their wretched life so they can waste their existence grinding away at a soul-sucking job. So today, they make a very rational decision to pull out a gun to shoot their clock, but miss every shot and have to deal with the ear-splitting repercussions of their insomniacal actions as the alarm clock continues to taunt them while they're half-deaf, until they finally kill it with the final bullet (inspired by the first release of the Turok comic, but with a gun instead of an ax, lol).

     The main "synth" in my track is the 808 cowbell sample. I was really minimalist on this track, and besides a sax synth most everything is just FL drumkit samples. Not sure how much more range I can get out of a cowbell, but you were right in the comparison that it's lacking midrange umph. Maybe some more EQ with reverb could pull it off? Also, I doubt I tuned any of the drums. That's something I totally neglected. Thanks for the reminder.

     The extreme rapid panning on some of the samples is deliberate, and I made automation pulse waves that more or less matched the waveforms of the samples that were panning to make the listener feel somewhat disoriented while the main track continued unchanged (gotta take any excuse I can get to deliberately screw with people, 'cause that was the whole point of the original LISA game, lol). I don't think the main drums and percussion move much in the stereo field, though; I'll have to double-check. The idea for a rumble in the beginning is fire, I'll have to take your advice on that. Thanks for the inspiration!

     I've been living under a rock for the last decade and a half, so I don't have a Discord, let alone even know what it is, but maybe I'll get with the times and check into it. Thanks again for the input!
  5. Thank you! It's an obscure game with an even more obscure soundtrack, but you can find the full OST here:
     P.S. If you happen to find the original vinyl release of the soundtrack, be sure to buy it. It's worth its weight in gold (no joke).
  6. GOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDERGOHARDER
  7. I have a WIP that is finished as far as arrangement goes (happy to play with the mixing/mastering if necessary; it's somewhat loud at -9.08 average LUFS). My remix goes a little further than originally intended and includes source material from three tracks of the LISA game (Forever Turbo Heat Dance, Pebble Man, and Men's Hair Club). I will not be pursuing a Knytt Underground remix at this point. Here's a terrible music video of my LISA WIP for your en"joy"ment! For those of you who haven't heard the LISA soundtrack before, it's... different, to say the least.
  8. This wretched travesty is now running on 100% extra sax. Get your terrible sax fix today.
  9. He certainly seems like the kinda person who would be down for a little clowning around on the side. My only regret is not buying the clown van for $2999.99 when it was still up for sale...
  10. This is some of the best music I've heard in at least a decade. Seriously.
  11. I gave all three tracks a listen. My background is classical piano, so that's where I'm coming from regarding my take on the tracks.

     Lucid Dreams sounds like a MIDI file, which is not a good thing. The piano is monotone and obviously digital, and not in a good way. It's possible to make a digital piano sound realistic, but you'll need to add some amount of reverb, sustain, stereo imaging, and perhaps delay to mimic the sound of a real piano and give it depth. Variations in the velocity of the notes are critical, and generally pianists will emphasize the 1st and 3rd beats in 4/4 time, the 1st beat in 3/4, and so on depending on the time signature. Some slight offsets in the note lengths, passing notes, trills, crescendo and decrescendo, and other human touches are generally necessary (a rough code sketch of this kind of velocity/timing humanization appears after the post list below). Real piano recording for reference:

     If you're recording a solo piano track, then it makes sense to simply record a real piano, and it's surprisingly cheap and easy to find a good piano and a couple of condenser mics these days. However, if you want to include other instruments like you did for your tracks, then getting the recording to be in time and in tune with your other instrumentation can be very difficult, not to mention having to denoise the background to create more headroom for the rest of the mix. I'm not sure what DAW you are using, but in FL Studio, even the stock FL Keys plugin is surprisingly good if you take the time to work with it. There are other free piano plugins that are realistic as well (I used to use Tascam CVPiano back in the day).

     The other tracks were... different. I noticed that there was a lot of dissonance as well. Dissonance can be an incredibly powerful tool, but you have to return to some sort of root or resolution to prevent it from simply sounding like random noise. I don't know if you've heard of Clown Core, but I recently started enjoying their "music", and they are a perfect example of how to use dissonance effectively to convey a musical intention (don't be fooled by the clowns! These two guys are extremely talented musicians).

     Take some time to find some sheet music and pick apart your favorite songs to analyze the key signatures and chord progressions. From there, read into the main melody and the accompanying harmony and see how it fits into a cohesive whole with the intention of a specific expression. Music is hard to understand and even harder to compose in a manner that is artistic, because there are rules that you should follow except for when you shouldn't follow them. Quite the conundrum. And hey, you can also try learning an instrument! Once you know how to play an instrument or two, it becomes much easier to understand how to write music for any instrumentation. Best of luck and don't give up!

     Edit: I took a look at your YouTube uploads and just wanted to say that I really enjoyed Recusion - Black. Very nice ambient feel and a perfect fit for a menu!
  12. Thanks for the info and recommendations (and warnings)! I already have a good analog mixer and it doesn't cost me anything to keep using it, so I'll probably just go with an inexpensive interface to add in between the mixer and PC for now to see how it performs. Thanks again for your help!
  13. The most I record simultaneously is three input lines when I play my hollowbody electric guitar, so I don't think I can get rid of my mixer for recording. I use 2 mics for the acoustic space and one line for the electric pickup, and then combine both the acoustic and electric signals, which results in what sounds like two different guitars playing together simultaneously. Maybe this is a silly approach, but since the three inputs are ultimately reduced down to an L and R signal and fed into the DAW, I pan the acoustic signal entirely to one side with the electric signal panned to the other. Once I have the two independent signals on the L and R channels respectively (now both in mono), I set up two completely separate tracks/FX for them in the DAW and then invert the panning so they're centered (more or less) and in stereo, but with each "stage" having its own unique processing chain (a small code sketch of this channel split follows below).

     Looking at some of the less expensive interfaces with two channels (like this one https://www.sweetwater.com/store/detail/OnyxProd22--mackie-onyx-producer-2-2-usb-audio-interface), I'm left wondering: will the signal going into the interface get processed by the computer before going to the outputs for listening? Or will the signal coming in from the mixer just go directly to the outs regardless of the USB connection?
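For the three-input routing described in post 13 above, here is a minimal sketch (in Python, not from the original posts) of the DAW-side step where the stereo recording is split back into two mono stems. The library (soundfile) and the file names are assumptions for illustration only.

```python
# Sketch only: split the mixer's stereo recording into two mono stems,
# assuming the acoustic mics were hard-panned left and the electric
# pickup hard-panned right, as described in post 13. The file names
# 'guitar_take.wav', 'acoustic_stem.wav', and 'electric_stem.wav' are
# hypothetical.
import soundfile as sf

data, rate = sf.read("guitar_take.wav")  # shape (frames, 2) for a stereo file
acoustic = data[:, 0]                    # left channel  = acoustic mic pair
electric = data[:, 1]                    # right channel = electric pickup

# Each mono stem then gets its own track, FX chain, and panning in the DAW.
sf.write("acoustic_stem.wav", acoustic, rate)
sf.write("electric_stem.wav", electric, rate)
```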
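Post 11 above recommends varying note velocities, accenting beats 1 and 3 in 4/4, and slightly offsetting note timings to make a programmed piano part sound less mechanical. As a rough, illustrative sketch of that idea only (the mido library, the file name 'piano.mid', and the accent/jitter amounts are assumptions, not anything used in the original posts):

```python
# Sketch: humanize a MIDI piano part by accenting beats 1 and 3 of a 4/4 bar,
# adding small random velocity variation, and nudging note onsets by a few ticks.
import random
import mido

mid = mido.MidiFile("piano.mid")        # hypothetical input file
tpb = mid.ticks_per_beat                # ticks per quarter note

for track in mid.tracks:
    abs_tick = 0
    for msg in track:
        abs_tick += msg.time            # delta time -> absolute tick position
        if msg.type == "note_on" and msg.velocity > 0:
            beat_in_bar = (abs_tick // tpb) % 4          # 0..3, assuming 4/4
            accent = 12 if beat_in_bar in (0, 2) else 0  # stress beats 1 and 3
            jitter = random.randint(-8, 8)               # small human variation
            msg.velocity = max(1, min(127, msg.velocity + accent + jitter))
            msg.time = max(0, msg.time + random.randint(-3, 3))  # timing nudge

mid.save("piano_humanized.mid")
```

This only scratches the surface (no sustain pedal, phrasing, or dynamic arcs), but it shows the kind of per-note variation the post is pointing at.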