
Master Mi

Members
  • Posts

    391
  • Joined

  • Last visited

  • Days Won

    4

1 Follower

Profile Information

  • Location
    Germany
  • Occupation
    landscape gardener
  • Interests
    martial arts, training, nature, philosophy, music, composing remixes and own soundtracks, video games, movies/animes, exceptional literature, pescetarian raw food diet, cozy naps in the sunlight

Artist Settings

  • Collaboration Status
    2. Maybe; Depends on Circumstances
  • Software - Preferred Plugins/Libraries
    Independence Pro Premium Suite, Revolta² & DN-e1 synthesizers, Magix Vita instruments, Vandal: Virtual bass and guitar amplifier, Titan 2, ERA II: Vocal Codex, Shevannai: The Voice Of Elves, Native Instruments
  • Composition & Production Skills
    Arrangement & Orchestration
    Drum Programming
    Mixing & Mastering
    Synthesis & Sound Design

Recent Profile Visitors

13,165 profile views

Master Mi's Achievements

  1. I've found new trailer and preview content for Visions of Mana that you might have missed in the last few days, weeks and months.
...
1) The official Japanese trailer
--------------------------------------
I think the Japanese developers really know how to appeal to fans of the Mana series through the magic of childhood memories. And I have to admit that I kinda like it.
...
2) The official gameplay trailer
--------------------------------------
This short trailer primarily provides further impressions of the battle system.
...
3) The launch date trailer
--------------------------------
This slightly longer trailer mainly gives some impressions of the story and the characters.
...
4) The official tourism guide trailer
--------------------------------------------
It shows some locations and regions in this game.
...
5) The demo announce trailer
--------------------------------------
This trailer gives some impressions of the recently released demo version of Visions of Mana.
...
6) A full demo playthrough video
-----------------------------------------
This is a little extra video content for those who want to dive a little deeper into the recently released demo version of Visions of Mana and be passively guided through the entire demo.
...
Have fun and good night for now. ))
  2. Made some progress with the electric guitar part in my Crisis Core remix, by the way. As a guitar player, what would you say about guitar playing techniques, composition and mixing for this part? CC - FF7 (Excerpt) - Clean Electric Guitar (Improved Version).mp3
  3. Thanks, @paradiddlesjosh. Man, that was an even more detailed in-depth answer than I had expected. And somehow it triggered a sense of déjà vu in me, as if we'd had a similar conversation some time ago. It would certainly be more interesting to see to what extent such plug-in settings in delay and reverb can be used to acoustically recreate even more complex room structures, different types of surfaces or even temperature differences. But my main concern was primarily how to set the stereo delay in such a way that it has a realistic effect in terms of panorama and depth. ... So let's say you have the stereo filter delay settings as in my picture in the main post - the activated stereo delay with the 250 ms delay on the left and the 500 ms delay on the right (as well as the other parameters in this filter delay). Is this basically a stereo delay setting that would rather be panned to the left or to the right side of the panorama (so that it would conform to the laws of physics and not come across as unnatural)? And how would you change all the parameters in the stereo delay accordingly if you then moved the signal to the other side of the panorama in mirror image (for the same room reverb, of course)? Or how would you change the parameters in the stereo delay if you wanted to move the sound source more into the foreground or more into the background of the room (I'm not quite sure whether you can work with the integrated low-cut and high-cut filter in the stereo delay in the same way as with the reverb, or whether the psychoacoustic effect would be different in a pure delay plug-in)? ... I mainly ask these questions because I often use similar stereo delay settings with similar parameters without really considering the panorama (panned more to the left or more to the right side) or the depth level (sound source put more in the front or more in the back of the room) of the sound source. And I guess that's not the way to use a stereo delay in the best possible and most effective way (especially not if you want to create a realistic and natural room ambiance).
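As a purely illustrative aside on the "mirror image" idea from the post above - a tiny sketch in Python with made-up parameter names, assuming a stereo filter delay with independent left/right delay time, feedback and pan controls:
```python
# Hypothetical stereo filter delay settings, loosely modeled on the ones
# described above (250 ms tap on the left, 500 ms tap on the right).
settings = {
    "delay_ms": {"left": 250.0, "right": 500.0},
    "feedback": {"left": 0.35,  "right": 0.35},   # 0.0 .. 1.0
    "pan":      {"left": -1.0,  "right": +1.0},   # -1 = hard left, +1 = hard right
}

def mirror_stereo_delay(s):
    """Reflect the delay across the center of the panorama: the left and right
    tap parameters swap places, and pan positions flip sign."""
    swap = lambda d: {"left": d["right"], "right": d["left"]}
    return {
        "delay_ms": swap(s["delay_ms"]),
        "feedback": swap(s["feedback"]),
        "pan": {"left": -s["pan"]["right"], "right": -s["pan"]["left"]},
    }

print(mirror_stereo_delay(settings))
# The shorter 250 ms tap now sits on the right and the longer 500 ms tap on the left.
# This only shows what "mirroring" means on the parameter level; it says nothing
# about the deeper room-acoustics question asked above.
```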
  4. Guess I had a good nose for some of the most illuminating 15 minutes of the video, skipping through the whole video content like a passionate and fiercely investigating hobby detective. ;D
  5. Nice one. )) Without seeing the whole video with the length of over 4 hours, I would say the following things are crucial to bring some VSTi/VST-based electric guitar magic into the soundtrack:
1) a realistic electric guitar VSTi with a good amount of faithful articulations/playing techniques
2) a nice electric guitar and bass amp VST simulation that features lots of settings and effects
3) a good understanding of electric guitars and performing special guitar techniques in the DAW (so, everything about the technical things going on when playing a real electric guitar and how to translate it with all your DAW tools like the electric guitar VSTi interface, the MIDI editor and further plugins)
4) and lots of mixing experience
On the other hand, even a real electric guitar in the mix can sound like a Goomba stuck in a sewage pipe if the guitar has bad pickups, you stole the fishy guitar amp from a grumpy octopus at the bottom of the shore or your playing and mixing skills just passed the toddler difficulty.
...
During this week in the first part of my summer holiday, I also tried to implement a nice clean electric guitar into my coming Crisis Core remix composition - and I got pretty inspired by the song "Everytime We Touch" by Maggie Reilly:
For the clean electric guitar that I've played more or less via MIDI keyboard in my DAW, I might have to do some work concerning timing and articulation. But it already sounds like something I'd definitely go for in the coming remix version. Here's a short audio sample of my early results:
CC - FF7 Remix (Excerpt) - Clean Electric Guitar.mp3
You might turn up the volume a bit because I uniformly master my soundtracks to the EBU R 128 loudness standard at around -23 LUFS.
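For anyone who wants to sanity-check that EBU R 128 / -23 LUFS target outside the DAW, here is a minimal sketch using the Python libraries soundfile and pyloudnorm - just an illustrative assumption about the toolchain, not the workflow used for the sample above:
```python
import soundfile as sf          # reads the exported WAV as float samples
import pyloudnorm as pyln       # ITU-R BS.1770 / EBU R 128 loudness meter

data, rate = sf.read("CC-FF7-remix-excerpt.wav")    # hypothetical export file name

meter = pyln.Meter(rate)                            # integrated loudness meter
loudness = meter.integrated_loudness(data)
print(f"integrated loudness: {loudness:.1f} LUFS")

# Apply a plain gain offset so the integrated loudness lands at -23 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -23.0)
sf.write("CC-FF7-remix-excerpt_-23LUFS.wav", normalized, rate)
```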
  6. This question has been on my mind for some time now whenever I use stereo delay effects, for example when I want to work mixing-wise with a filter stereo delay in combination with a cathedral reverb as in this visualized example:
But first to the basics of delay effects. You often have the choice between synchronized or BPM-related (BPM = beats per minute) delay times and non-synchronized delay times, where you can set the delay precisely in milliseconds. It's possible to convert the values between both delay modes - just take a look at this helpful link:
https://sengpielaudio.com/calculator-bpmtempotime.htm
The synchronized delay time can be more interesting for electronic music (or maybe also for avoiding too many phase issues), while the non-synchronized delay time can be more interesting for a natural and organic soundscape due to some irregularities (for example, by using settings such as 195 ms instead of 250 ms). To have more freedom, precision and a better understanding of the delay time I'm using when setting the values, I often use delay effects I can set in milliseconds. So, in my case with the non-synchronized delay settings of this image, a played note of the track with this delay plugin will get its echo(es) on the left side after every 250 ms and on the right side after every 500 ms. (There's a small worked example of these delay time and feedback basics at the end of this post.)
...
The feedback value, on the other hand, indicates how strongly or how loudly a sound event carries over into its subsequent delay or echo, or how quickly it loses intensity/volume over time. With a feedback value of 50%, for example, the subsequent echo will be half as loud as the previous sound event and the delay will flatten out relatively quickly as a result. A value of 100%, on the other hand, would create an endless echo with the same loudness as the original sound event. With a stereo delay, the whole thing is obviously a little more complex, which is why you would have to set the feedback values on both sides to 100% for such an endless echo (and the signal must also not be absolutely dry).
...
The pan value is easy to understand and of course only sets the panning of the delay effects (the panning of the source signal won't be affected by this setting). Just like for reverb sends, I would also recommend hard pannings for delay effects to avoid an accumulation of sound mud in the center area of the mix.
...
The dry/wet value is the ratio between the loudness of the source signal and the loudness of its echoes/delays. So, extremely wet settings with very quiet source signals and much louder echoes might sound pretty wild - or weird.
...
And since I used a really sophisticated filter delay with a low-cut and high-cut filter in this case, I can also set the frequency range for the delay effects. In my case with the settings in the picture, the delay effects below the frequency range of 500 Hz will be filtered out or radically reduced loudness-wise.
...
But now my core question: How do you use a stereo delay effect plugin like this in a mix to create a natural and realistic spatial impression for a certain purpose? Let's say that the source signal is located in a big cathedral (cathedral reverb activated) and might be used in 7 different situations, after you (the listener) have just entered the large cathedral:
1) In the first situation, the source signal should play pretty much in the center of the cathedral (no special heights involved here - the sound source should be at ground level or at the height of the human ears).
2) In the second situation, the source signal is still in the horizontal center of the cathedral, but it's shifted much more to the front side and towards you standing in the entrance area.
3) Similar case, but this time the horizontally centered source signal gets shifted towards the back of the cathedral.
4) In another situation, the source signal should play on the left side in the back of the cathedral.
5) In the next situation, the source signal should play much more on the right side in the back of the cathedral.
6) In the following situation, the source signal should play just a little less far to the right side and a little less in the back of the cathedral.
7) In the last situation, the source signal should play on the right side in front of you, around the entrance area of the cathedral you just entered.
How would you set and change the delay values (especially the delay times for the left and right side, the feedback and the dry/wet parameters for the left and right side) for these situations in order to create a realistic and natural spatial impression (maybe also in connection with shifting some of the reverb parameters)? Maybe somebody also knows how to create a further impression of height above ground level for a sound source in a cathedral just with delay effects like these. Might be a tough question even for professional audio engineers and die-hard physicists, but maybe someone has already geeked out on topics like these.
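To make the two delay basics from the post above concrete (the BPM-to-milliseconds conversion behind the linked calculator and the way a feedback percentage decays), here is the small worked example mentioned earlier, in plain Python and nothing DAW-specific:
```python
def quarter_note_delay_ms(bpm):
    """Tempo-synced delay time: one quarter note in milliseconds (60,000 ms per minute / BPM)."""
    return 60000.0 / bpm

# At 120 BPM a quarter-note delay is 500 ms and an eighth note is 250 ms -
# the same values as the right and left taps discussed above.
print(quarter_note_delay_ms(120))        # 500.0
print(quarter_note_delay_ms(120) / 2.0)  # 250.0

def echo_levels(feedback, n_echoes):
    """Relative level of successive echoes for a given feedback ratio (0.0 .. 1.0)."""
    return [feedback ** k for k in range(1, n_echoes + 1)]

# 50 % feedback: every echo is half as loud as the one before it and dies away quickly.
# 100 % feedback would repeat forever at constant level (the endless echo mentioned above).
print(echo_levels(0.5, 5))   # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```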
  7. Dude, this sounds more like a broken 8-bit lawnmower engine than a NES soundtrack. But I can well imagine how this "track" came about: --------------------------------------------------------------- CEO in the game development team: "Guys, the release of the game is tomorrow, but we still have to compose an extraordinary boss theme." Staff: "Boss, our schedule is filled to the brim - we have no capacity." CEO: "Heya, intern - we're relying on you. We need you to create something special in less than 30 minutes that even posterity will be talking about." Something special that even posterity will still be talking about: ...
  8. Heyo, this one might be a more difficult mission for the almighty IT mage @DarkeSword... The problem is this: If I want to reuse an audio file I've already uploaded at OC Remix (and which is stored as an attachment under "Other Media" >>> "Insert existing attachment" at the lower right corner of the comment field when you are writing a comment) in a new comment, the audio file will be inserted in the comment, but you can't play it. It totally works with other uploaded attachments such as images and even movie files, but not with audio files. ... Maybe you can use some sort of IT Esuna magic to fix this issue. You can check out the specific problem in this thread at the audio file number 6 towards the end of my very last comment from May 05, 2024 (I've tagged you there): https://ocremix.org/community/topic/52614-cleaning-up-the-low-end-and-low-mid-sections-in-a-mix-with-single-track-eq-master-track-eq-eqed-aux-effect-sends-and-other-methods/
  9. A few major further steps to improve the clarity and spatiality of the sound by cleaning up the center area of a stereo mix from less relevant or counterproductive audio information
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Although I haven't been able to continue working on the composition of my Crisis Core remix for the last few months due to a whole lot of work, both in my private and professional life, I was able to achieve a real breakthrough in my mixing concept shortly after my last post in this thread. The result was a kind of improved LCR mixing method (LCR for left-center-right), which not only brings more clarity, more spaciousness and width to the mix, but also allows much finer panning for the spaces in between (instead of just simple LCR), with considerably fewer steps compared to doubling the tracks for processing a hard-left and hard-right side of a single instrument or sound source, and with the aim of a mixing result where the sound is also completely convincing on a normal stereo system (and not only on studio monitors and/or studio headphones), ...
... which was an important core concern of mine in this thread, as I was previously able to perceive audible qualitative differences in comparison to a really professional mix and I didn't know why this was the case or why it was so clearly apparent on a normal stereo system. I've also put in some audio samples around the end of this comment to show the further improvements in the sound quality of the mix.
…
I would almost call this audible problem, which I've especially experienced on standard stereo systems, the "overloaded center effect". The center in the mix (please correct me if I'm not quite right in my definition) is probably something like the crosstalk between the left and right sides of a stereo system (it is said that the center is the sum of the right and left channels divided by 2 - there's a small mid/side sketch of this at the end of this post). If you remove a large part of this crosstalk in the most sensible way possible (i.e. you remove as much irrelevant or counterproductive sound information as possible from the center area - especially concerning the reverb), you can achieve considerably more clarity in the mix and a better assertiveness of the individual instruments.
...
I guess in professional terms it's called "mid-side processing", where you remove certain audio information like instruments, synths and effects from the center to bring it to the sides. A good traditional way to do it is by hard left or hard right panned signals and a counter signal on the other side (for a realistic stereo image without involvement of the center area), like in the LCR mixing method. As soon as you use a softer panning (let's say, the pan knob is only at -10 dB on the right side), you also get crosstalk of the sides with the center - which isn't the evil of mixing (it can also provide some interesting spatial information), but you should not overdo it in order to get a clean mix. If just a few sound signals like drums and vocals/lead instruments fill the gap between center and sides, while the bass plays only in the center and all the other instruments are fully panned to the sides, it might be really beneficial for the mix.
Of course you can also mix the bass a bit more from the center to the sides and put the drums fully to the center and/or the sides depending on the music genre and your personal taste, but always keep in mind that you should not produce too much mid-side crosstalk, especially not with many instruments or other sound signals in the same frequency range, because it can cloud and clog up your mix much faster (especially in soundtracks with lots of instruments and various audio information - in comparison, this stuff won't matter too much for a solo piano part in terms of mixing quality).
…
In my DAW Samplitude Pro X4 Suite I use the integrated 2-channel surround feature (which simply encodes spatial information in a standard stereo signal in connection with a visual interface) not only for getting some more depth in the mix, but also for the regulation of the center/side ratio of the instruments, effects and all the other audio information. It looks like this in my DAW:
In this screenshot you can see the spatial settings and measurements of a harp playing together with its very own aux reverb send. In the right part you can see my setting of the instrument within my 2-channel surround panner (just the left and right channels are involved, the center is completely left out); the aux reverb send of this instrument has a similar panning. And in the lower left part with the vectorscope, you can see once again that the harp with its aux reverb send is out of the center and widely panned. Just note that this metering device is loudness-based or volume-based - this means that the louder the measured signal gets, the bigger the expanse of the graph will be. So, if you want to check how far the signal is panned to the sides or how much it is in the center, you have to examine the ratio between width and height of the graph (big height and low width means the signal is more in the center, while equal height and width means that the signal is on the sides - and a bigger width than height means you are probably cheating with a stereo enhancer - but don't worry, tools like these don't sound good 'n' natural either).
…
Here's another screenshot which shows the panning of the new trumpets (including the measurements of the aux reverb send of the trumpets, which is panned exactly like the trumpets) in my Crisis Core remix:
As you can see, the center is not involved again (same thing goes for the aux reverb send, of course), and I've lowered the volume on the right channel by 7 dB. This means that
- with the help of the 2-channel surround editor I can set up a pretty fine stereo panorama without any center involvement and in only one track (plus one more track for the aux reverb send, of course)
- with the conventional LCR mixing method I might have needed one track for the left side and one track for the right side (with different volumes and/or delay effects for each side) to create a similar stereo panorama (plus one or even two more tracks for the also hard panned aux reverb sends).
…
If you don't have something like a 2-channel surround editor in your DAW, just stick with the mentioned LCR mixing method to clean up your mix (especially its center). There might also be another solution for easily reducing the center volume of a track in your DAW.
Your DAW might also have some kind of stereo editor for each track (in my DAW, I get into the stereo editor if I do a right mouse click on the virtual pan knob in the track editor on the very left side - after this, the window with the stereo editor for this track opens up, and there I can reduce the center volume under "Kanalabsenkung Mitte (dB)" (center channel attenuation in dB), but only by 6.02 dB at the maximum - don't ask about this kind of precision in a volume value, I really dunno why):
...
In the next screenshot, you can see how I panned the electric bass in this track (the measurements only show the electric bass with deactivated aux reverb send, where the aux reverb send is fully panned to the sides, but at the same depth level as the instrument):
Here you can see that the electric bass plays almost fully in the center without the sides ("almost", because I like it much more if the bass hops a bit around the center without being a too stiff mono center thing - so I open the bass just a very little bit towards the sides, but only towards the lower surround-information-encoding stereo channels, which can be used to put the signal source more in the back of the mix). The vectorscope and the directionmeter next to it show once again how the electric bass with the deactivated aux reverb send behaves in the stereo panorama.
…
And in this additional screenshot, you can see how the whole track (in the final showdown part with lots of different instruments playing together) behaves in the stereo panorama:
By playing and measuring the whole soundtrack, the vectorscope shows a really wide panning (and this already at my preferred EBU R 128 target loudness of only around -23 LUFS, while with a more common mastering loudness of around -15 LUFS the metering graph would even shoot over the edge of the vectorscope and even over the "L" and "R" letters) with a very clean center (where mainly the electric bass plays and only a few other instrument tracks like drums, power chords and some sort of leading piano arpeggios are panned between the center and sides, affecting both with a bit of crosstalk, giving the mix a bit more spatial feel - something that could be instantly ruined if you put too much crosstalking audio information in the center or between the center and sides). All other instruments in the track are panned out of the center fully to the sides, but they still have their unique stereo pannings between the left and right side. The aux reverb sends for all instruments, on the other hand, are completely panned out of the center fully to the sides without any exceptions, which cleaned up the whole soundtrack a lot. Either the aux reverb sends are panned in the same (or a similar) way as the instruments (same depth, similar left/right behaviour in the stereo panorama - but always without involvement of the center area, even as aux reverb sends for the few instruments which affect the center). Or, for example, if an electric guitar is panned with a volume of -5 dB on the right side (so that the instrument is 5 dB louder on the left side), I've panned the aux reverb send for this electric guitar hard to the right side.
…
So, just a final summary of the mentioned (and some additional) changes I made in the mix to radically clean it up and to improve it further:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A) I got all the aux reverb effect sends out of the center fully to the sides (maybe one of the most important things I did in this stage of mixing for improving the clarity). No matter how you do it - whether by hard pannings like in the LCR mixing method or with visual tools like a 2-channel surround editor, where you can remove the volume of the center area - just do it.
B) I also got most of the instruments (except the bass and the few instruments that are panned between center and sides) out of the center area.
C) I split the track with the acoustic drums into several tracks for the individual drum elements (bass drums, snare drums, tom drums, cymbals) - just to put different EQs on the different drum elements for improving the clarity within the drums section and between the drums and the other instruments, and for additionally cleaning up the area between the center and the sides.
D) I reduced a few annoying or slightly clashing frequencies from certain instruments with peak filters. For example, I made a little peak cut for the bass drums at around 200 Hz by 5 dB: Nothing really earth-shattering - but it transforms the kick drum from a former "bop, bop" a bit more into a smacking "bip, bip" and makes the kick drum a bit more assertive against the electric bass in the mix. I also made some peak cuts on the acoustic guitar playing the rhythmic chords: It makes the guitar sound less blocky, much lighter and more stylish, letting a few other instruments "breathe" more in the mix.
E) I removed some overloaded, gimmicky and unnecessary effect plugin chains from my mix. For example, in the previous mixing version I used two different reverb plugins on the acoustic drum kit - one as a preceding direct insert and another one as a low-cut-filtered aux reverb send - just to let the kick and snare drums sound mightier. I completely removed the direct insert and only adjusted the low-cut-filtered aux reverb send for all drum elements in a way where the kick 'n' snare drums still sound powerfully reverberating - but way cleaner, without unnecessary low-end reverb mud throughout the mix.
F) True to my previous motto "It's all in the mix" or "Complicated masterings are just missed opportunities in mixing", I removed the master EQ plugin again. I somehow had the feeling that it was sucking a bit of the punch out of the mix in the low-end range because it was taking the range and power out of all the instruments in the lower frequency range - unfortunately also those that are supposed to play right there. So, I went back to single-track EQing and made a few more adjustments to the relevant tracks.
…
And lastly, I want to show ya the final audio samples, where you can finally listen to the mixing progress I've made with all these steps and compare the last version of this audio sample (the results before the mixing approaches mentioned in this post) with the new version (the results after the newest mixing approaches)...
6) Latest update of the remix section shown in the previous audio samples (former version)
--------------------------------------------------------------------------------------------------------------------
6) Latest update of the remix section showed in the previous audio samples.mp3
(Somehow, the former audio sample doesn't seem to work in the new posting - gotta find somebody who will fix it, maybe the almighty IT janitor @DarkeSword. But not today anymore - until stuff gets fixed, just listen to audio sample number 6 in my previous post.)
...
7) New mixing update of the remix section after radically cleaning up the center area (new version)
---------------------------------------------------------------------------------------------------------------------------
7) New Mixing Update Of The Remix Section After Radically Cleaning Up The Center Area.mp3
...
Feel free to join the discussion and tell me how you like the new approaches in my mixing concept. ))
For the next big update, I want to compose the last few things I still have in my vision for this remix and maybe even deliver a finished new remix version of the whole soundtrack for a much bigger comparison.
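Since the post above leans on the idea that the center is roughly the sum of the left and right channels divided by 2, here is the small mid/side sketch referenced earlier - a numpy illustration of that formula and of what a center cut does, not the actual implementation in Samplitude's stereo editor:
```python
import numpy as np

def reduce_center(left, right, cut_db):
    """Mid/side view of a stereo signal: mid = (L + R) / 2, side = (L - R) / 2.
    Attenuate the mid (center) component by cut_db and rebuild left/right."""
    mid  = (left + right) / 2.0
    side = (left - right) / 2.0
    gain = 10.0 ** (-cut_db / 20.0)      # 6.02 dB corresponds to a gain of almost exactly 0.5
    mid = mid * gain
    return mid + side, mid - side        # new left, new right

# Toy example: a fully centered signal (identical on both channels)
# takes the full cut, i.e. its peak level is roughly halved at 6.02 dB.
t = np.linspace(0.0, 1.0, 48000)
centered = np.sin(2.0 * np.pi * 440.0 * t)
new_left, new_right = reduce_center(centered, centered, 6.02)
print(round(float(np.max(np.abs(new_left))), 3))   # ~0.5 instead of 1.0
```
As a side note, that oddly specific 6.02 dB maximum in the stereo editor is exactly a factor of 2 in amplitude (20·log10(2) ≈ 6.02 dB), so the limit presumably just means the mid component can at most be halved.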
  10. I found a really impressive Terranigma remix and a radically vibin' Maniac Mansion remix on YouTube lately:
  11. Just from the point of view of the artistic level and the joy and intensity of a creative journey, AI is even worse than using premade loops. You might be able to get something that sounds good 'n' ready for the masses of listeners - but you'll never be able to put all the compositional details, thoughts and feelings from inside your imagination into the realization of the soundtrack. And if you don't have the knowledge and experience in music theory, composition, mixing and sound design, you won't even have an idea about what's even possible in the soundtrack you create. For the most part, AI draws on things that already exist, on things that are known or have been grasped by the human mind. A fine consciousness of a vital life form in combination with a high level of creativity, on the other hand, might be able to recognize things, energies and phenomena that are still unknown in this world, and to create some really new 'n' unique stuff.
...
Or to put it in some more romantic words of video game poetry: Creating video game music or remixes with AI technology is like feeding the plastic/wax fruit to the hungry, music-loving Green Tentacle in Maniac Mansion. Even if the Green Tentacle likes the artificial stuff and already feels stuffed after eating it, as a hungry composer fueled and inspired by true life force within and around you, you wouldn't feel vital, nourished and satisfied if you ate the stuff yourself. ))
  12. Good visual mixing tutorial for beginners and pros to better understand the possibilities and tools for mixing and creating specific spatial images
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A few days ago, I stumbled across a really interesting and useful tutorial video from David Gibson that explains some of the really important tools of mixing in a very sound, understandable, in-depth and most importantly visual way (which really helps to understand the process, tools, possibilities and little details of mixing):
The video is quite long, a bit trippy at times (and quite funny in places) and looks kinda dated (like an instructional video from the 80s or 90s) - but it's full of grounded logic and sophisticated, yet easily accessible mixing knowledge. Interestingly, the video not only provides basic knowledge about the process of mixing and imaging itself, but also goes into some physical background and the "Why am I actually doing it this way?". What I particularly like about the video is that it doesn't show you a specific way to mix, but rather gives you a fundamental understanding of the mixing basics, which opens up a multitude of possibilities to realize the mix according to the concrete idea in your head or your inner feeling. It should therefore not only be of interest to beginners who want to approach the subject slowly, but also to advanced users who have already gained years or even decades of in-depth experience in the field of mixing.
...
In addition, I was also able to implement a few groundbreaking improvements in my specially developed mixing concept through a large number of experiments some time ago, which not only greatly improved the clarity and spatial range of the mix, but also finally improved the translation of the mix on ordinary hi-fi stereo systems in a way that's pretty close to how professional mixes sound on these devices. Thanks to my work life obviously turning back to my regular 4 working days per week after a longer stretch of 5 or even really nasty 6 working days per week (due to a pretty high sickness absence rate in my company I didn't want to leave the remaining colleagues high and dry - and so, I could already save up some extra hours for the next winter) and the soothing fact that I could also settle the major shitload of tasks 'n' stuff in my private life within the last months, I might have some time again to catch a recovering breath and present my results soon. Maybe I'll already write some smaller parts during the coming weeks after work and upload the whole content soon after. Since I haven't really been able to work on my music projects for around 3 months now, I'm kinda on fire to finally do some composition and mixing stuff once again. I will upload it in a new post - including some further audio samples of the latest results - in my thread "Cleaning up the low-end and low-mid sections in a mix - with single track EQ, master track EQ, EQed aux effect sends and other methods" as soon as possible. You'll also find the thread under this link at OCRemix:
https://ocremix.org/community/topic/52614-cleaning-up-the-low-end-and-low-mid-sections-in-a-mix-with-single-track-eq-master-track-eq-eqed-aux-effect-sends-and-other-methods/
  13. In Celtic Era 2 from Eduardo Tarilonte - one of my most appreciated VSTi developers - you have some really well-sampled Celtic instruments, including the Irish bodhrán. It costs quite some money - but with the high-quality VSTis in this collection it's definitely an investment in your future as a composer:
https://www.bestservice.com/en/celtic_era_2.html
If you wanna listen to some specific instruments from Celtic Era 2 (in addition, the instruments from the first Celtic Era should also be included in the package), check out this link (bodhrán samples from around minute 10:35 to 12:50):
I hope I was able to help you in this case. ))
  14. That's right. That's why I try to avoid listening to soundtracks and other audio programs on devices with extreme frequency boosts (especially treble and bass boosts) and rather prefer listening devices with a more linear frequency response. Another way to keep your ears alive is by mixing and listening to audio programs without dynamic compression and by listening to them at lower volume levels on a regular basis. Make sure to keep this volume level. As soon as you get the impression that the volume level isn't enough anymore, that's the time your ears should get a longer break of silence.
I tried Sonarworks for my studio headphones a long time ago - but I didn't like it. I didn't like having to change the presets every time I switched between headphones and studio monitors. And I didn't like the resulting sound. It sounded more exciting after calibration, but also in a kinda weird and artificial way. I would have expected it to sound more boring and less spectacular after calibration, and to show me much more of the weaknesses of my mixes. But it sounded more explosive and polished. I don't trust tools that make my mixes sound instantly better without me having done anything in the mix.
Now to my own experience with mixing - not specifically mixing with headphones, but mixing with studio monitors. Let me explain what I'm talking about...
...
My second pair of kinda useful studio monitors - the Presonus Eris E3.5 - had a bigger bump in the bass and higher mid/treble section:
Back then, my mixes sounded like this:
Not too bad - but still far away from a professional mix.
...
Then I got the Yamaha MSP3 with a much more linear frequency response and a really good midrange:
Just some time later, after getting used to my new Yamaha MSP3, my first mix created on these studio monitors sounded like this:
Much better and cleaner. Even the bass frequencies, which you might expect to hear less of in the mix on the Yamaha MSP3 with its much more linear frequency response, are way cleaner and much more assertive in this mix compared to the previous track.
...
Just wait for my upcoming Crisis Core: Final Fantasy 7 Remix (my second big mix on the MSP3). In view of my increased mixing experience and a few new mixing possibilities that opened up to me some time ago, it will really exploit the potential of the Yamaha MSP3 and, apart from the compositional creativity, even surpass the mixing quality of the original track. I'm really looking forward to finally finishing the remix (I'd say it's already 90 to 95% finished). Unfortunately, my private and professional life has been taking a toll on me with lots of work and additional hurdles, so I haven't really had the time and peace of mind to continue working on the soundtrack for almost 2 months. But I hope that my somewhat extended Easter will give me the necessary time, peace and creativity to continue working on the track and to present the final results some time later. ))
  15. You might be looking for an audio interface (which is some kind of professional sound card unit for music production). The big advantages of an audio interface over a standard sound card are:
1) high sound quality and accurate, truthful sound and frequency response
2) high functionality and many more connection possibilities depending on the interface and your budget (like several connections for instruments, microphones, headphones, studio monitor speakers, USB and MIDI connections)
3) improved latency (no big delays at live recordings of voice and instruments - see the little latency sketch below) and much better DAW performance
4) Since audio interfaces are improved sound cards, you won't need to buy and upgrade sound cards anymore. A current audio interface might still be good for music production (or just for listening to music, gaming and cinematic experiences) in 20 years.
5) With external audio interfaces you're kinda independent. New system software, new PC - usually no problem. Just install the audio interface drivers, connect the audio interface via plug & play and it will usually work with most other PC systems.
I would rather save some money for your future audio interface, because it will have a big impact on your possibilities as a musician or maybe even as a whole band. And I would always recommend an audio interface with a good separate power supply, one that doesn't come from the standard USB connection, 'cause it might also affect the sound quality and audio definition - I experienced this phenomenon when using headphones with two different audio interface models of the same series: the Steinberg UR22 (USB power supply) and the Steinberg UR44 (separate power supply), where the Steinberg UR44 provides a better bass, mid-range and high frequency definition (and I still think it's because of the better power supply).
If you just need some connections for instruments or microphones as a solo musician or a smaller band, I'd go for the Steinberg UR44:
https://www.musicstore.com/en_US/USD/Steinberg-UR44-/art-PCM0012773-000
If you need even more connections for a bigger band for just a few more bucks, have a look at the Tascam US-16x08:
https://www.thomann.de/gb/tascam_us_16x08.htm?shp=eyJjb3VudHJ5IjoiZ2IiLCJjdXJyZW5jeSI6NCwibGFuZ3VhZ2UiOjJ9&reload=1
Both models are excellent Japanese technology - you can't go wrong with them and they might last for some decades.
...
PS: I wouldn't bother too much with separate hardware mixing consoles these days. They take up lots of space for work you can easily do with your DAW software or with a nice multifunctional MIDI keyboard that has a separate mixer unit and lots of other stuff like programmable drum pads, buttons, knobs, faders and a transport section. Nowadays there are many good multifunctional all-round MIDI keyboards that aren't really expensive:
https://www.thomann.de/gb/m_audio_oxygen_49_mk5.htm
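As a rough illustration of the latency point 3) above: the delay added by the audio buffer is simply the buffer size divided by the sample rate, which is why an interface with a driver that stays stable at small buffer sizes matters - a quick sketch:
```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by the audio buffer, in milliseconds
    (converter and driver overhead come on top of this)."""
    return buffer_samples / sample_rate_hz * 1000.0

# A 256-sample buffer at 48 kHz adds about 5.3 ms per direction,
# while a 1024-sample buffer already adds about 21.3 ms.
print(round(buffer_latency_ms(256, 48000), 1))    # 5.3
print(round(buffer_latency_ms(1024, 48000), 1))   # 21.3
```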