Everything posted by Master Mi

  1. In Celtic Era 2 by Eduardo Tarilonte - one of my most appreciated VSTi developers - you get some really well-sampled Celtic instruments, including the Irish bodhrán. It costs quite some money - but with the high-quality VSTis in this collection it's definitely an investment in your future as a composer: https://www.bestservice.com/en/celtic_era_2.html

If you wanna listen to some specific instruments from Celtic Era 2 (in addition, the instruments from the first Celtic Era should also be included in the package), check out this link (bodhrán samples from around minute 10:35 to 12:50):

I hope I was able to help you in this case. ))
  2. That's right. That's why I try to avoid listening to soundtracks and other audio programs on devices with extreme frequency boosts (especially treble and bass boosts) and rather prefer listening devices with a more linear frequency response.

Another way to keep your ears alive is by mixing and listening to audio programs without dynamic compression and by listening to them at lower volume levels on a regular basis. Make sure to keep this volume level. As soon as you get the impression that the volume level isn't enough anymore, that's the right time to give your ears a longer break of silence.

I tried Sonarworks for my studio headphones a long time ago - but I didn't like it. I didn't like having to change the presets every time when switching between headphones and studio monitors. And I didn't like the resulting sound. It sounded more exciting after calibration, but also in a kinda weird and artificial way. I would have expected it to sound more boring and less spectacular after calibration, and to show me much more of the weaknesses of my mixes. But it sounded more explosive and polished. I don't trust tools that make my mixes sound instantly better without my having done anything in the mix.

Here's my own experience with mixing - not specifically with headphones, but with studio monitors. Let me explain what I'm talking about...

...

My second pair of kinda useful studio monitors - the Presonus Eris E3.5 - had a bigger bump in the bass and higher mid/treble section:

Back then, my mixes sounded like this:

Not too bad - but still far away from a professional mix.

...

Then I got the Yamaha MSP3 with a much more linear frequency response and a really good midrange:

Just some time later, after getting used to my new Yamaha MSP3, my first mix created on these studio monitors sounded like this:

Much better and cleaner.
Even the bass frequencies, which you might expect to hear less of in the mix on the Yamaha MSP3 with its much more linear frequency response, are way cleaner and much more assertive in this mix compared to the previous track.

...

Just wait for my upcoming Crisis Core: Final Fantasy 7 remix (my second big mix on the MSP3). In view of my increased mixing experience and a few new mixing possibilities that opened up to me some time ago, it will really exploit the potential of the Yamaha MSP3 and, apart from the compositional creativity, even surpass the mixing quality of the original track. I'm really looking forward to finally finishing the remix (I'd say it's already 90 to 95 % finished). Unfortunately, my private and professional life has been taking a toll on me with lots of work and additional hurdles, so I haven't really had the time and peace of mind to continue working on the soundtrack for almost 2 months. But I hope that my somewhat extended Easter will give me the necessary time, peace and creativity to continue working on the track and to present the final results some time later. ))
  3. You might be looking for an audio interface (which is some kind of professional sound card unit for music production). The big advantages of an audio interface over a standard sound card are:

1) high sound quality and an accurate, truthful sound and frequency response

2) high functionality and many more connection possibilities depending on the interface and your budget (like several connections for instruments, microphones, headphones, studio monitor speakers, USB and MIDI connections)

3) improved latency (no big delays in live recordings of voice and instruments) and much better DAW performance

4) Since audio interfaces are improved sound cards, you won't need to buy and upgrade sound cards anymore. A current audio interface might still be good for music production (or just for listening to music, gaming and cinematic experiences) in 20 years.

5) With external audio interfaces you're kinda independent. New system software, new PC - usually no problem. Just install the audio interface drivers, connect the audio interface via plug & play, and it will usually work with most other PC systems.

I would rather save up some money for your future audio interface, because it will have a big impact on your possibilities as a musician or maybe even as a whole band. And I would always recommend an audio interface with a good separate power supply, rather than one powered over a standard USB connection, 'cause that might also affect the sound quality and audio definition - I experienced this phenomenon when using headphones with two different audio interface models of the same series: the Steinberg UR22 (USB power) and the Steinberg UR44 (separate power supply), where the Steinberg UR44 provides better bass, mid-range and high frequency definition (and I still think it's because of the better power supply).
If you just need some connections for instruments or microphones as a solo musician or a smaller band, I'd go for the Steinberg UR44: https://www.musicstore.com/en_US/USD/Steinberg-UR44-/art-PCM0012773-000

If you need even more connections for a bigger band for just a few more bucks, have a look at the Tascam US-16x08: https://www.thomann.de/gb/tascam_us_16x08.htm?shp=eyJjb3VudHJ5IjoiZ2IiLCJjdXJyZW5jeSI6NCwibGFuZ3VhZ2UiOjJ9&reload=1

Both models are excellent Japanese technology - you can't go wrong with them, and they might last for some decades.

...

PS: I wouldn't bother too much with separate hardware mixing consoles these days. They take up lots of space for work you can easily do with your DAW software or with a nice multifunctional MIDI keyboard that has a separate mixer unit and lots of other stuff like programmable drum pads, buttons, knobs, faders and a transport console. Nowadays there are many good multifunctional all-round MIDI keyboards that aren't really expensive: https://www.thomann.de/gb/m_audio_oxygen_49_mk5.htm
  4. Something final
--------------------

It might already be a bit late. But I wanted to share a few nice surprises about the Final Fantasy 7 remake project before the release of Final Fantasy 7: Rebirth and provide the final trailer and gameplay material from the official side. For the sake of suspense, let's start from the smallest to the biggest surprise.

...

1) The Story So Far trailer
---------------------------------

This is a rough summary of the first part of the Final Fantasy 7 remake project.

...

2) The last launch trailer
-------------------------------

Nothing overly outstanding - I just wanted to add this final launch trailer for the sake of completeness, as it was released shortly after I wrote this comment, just in time for the game's release.

...

3) The final trailer
-----------------------

So, this is really the true final trailer for Final Fantasy 7: Rebirth.

...

4) The State of Play trailer
---------------------------------

This is a pretty detailed presentation of Final Fantasy 7: Rebirth with a strong focus on the different regions you travel to in the game, different gameplay mechanics (like graphics mode vs. performance mode, unique character abilities or the new party level feature), mini-games, side quests and music (since we are members of OCRemix: get ready for over 400 (!) music tracks in Final Fantasy 7: Rebirth). It also includes the final trailer from before.

...

5) Full Nibelheim demo
-----------------------------

I was really wondering if I wanted to post this already, because it's a bit of a spoiler. But I just found it so fascinating how they realized the Nibelheim flashback in the Final Fantasy 7 remake project that I wanted to show you how well the storytelling contributes to the atmosphere of the game, apart from the top-notch graphics, sound and gameplay.
Since the Nibelheim flashback will take place in the city of Kalm pretty much at the very beginning of the second part of the remake, I don't think it's too bad to post a more detailed and rather long demo video, which is part of an official playable demo of the game. There is also a similar demo for a section of the Junon region. But since it takes place a bit later in the game, I won't post it here.

...

Enjoy the new content. )) And don't make the mistake of spending your days blindly watching YouTube videos about Final Fantasy 7: Rebirth, as some people have already had the privilege of playing the game all the way through before this week's release, and there are already a lot of scenes up to the end of the game in circulation. The era of the internet is the great information age, but unfortunately also a time of countless leaks of information and spoiled surprises.
  5. According to my information, the Harman curve has nothing to do with a flat frequency response. It rather represents a bathtub-like frequency response (overrepresented bass and treble, cut-back mids) for pleasing ordinary consumer hi-fi listeners. And it was created by the Harman company, which holds famous audio technology brands (or parts thereof) like AKG, JBL or Harman Kardon. The explanation seems a bit too uncritical (especially for things like mixing/mastering purposes) for my taste - but the background information on the creation of the Harman curve is kinda interesting: https://headphonesaddict.com/harman-curve/

I'm really looking for studio headphones with a really flat frequency response, without any sound-optimizing tools. They might sound "dull" or "boring" to consumer listeners - I prefer the terms "less overexcited" and "more truthful". But if you manage to create a mix that sounds exciting on studio headphones (or studio monitors) with a flat frequency response, the mix will sound much more explosive, exciting and clean on all the other hi-fi systems preferred by consumer listeners.

A good representation of the mids is also important for clean mixes when doing mixing and mastering stuff. If the mids sound really clean in your mix on a listening device with a flat frequency response, you have already done one of your biggest tasks in the whole mixing process (resulting in a pleasing listening experience when playing the mix on consumer hi-fi systems or Harman-curve-related audio devices). "It's all in the mix, especially in the mids."

So, besides looking for studio headphones with a flat frequency response, it might also be useful to look for studio headphones with rather overrepresented mids (and reduced bass/treble) instead of studio headphones with overrepresented bass/treble... similar to the legendary Yamaha NS-10 studio monitor phenomenon.
The Yamaha NS-10s were/are kinda midrange-heavy studio monitors which carry the legacy that some of the best mixes in the world since the 80s have been done on these legendary speakers.

The legacy:

The sound (in comparison to newer high-end studio monitors):

The frequency response measurement:

It kinda reminds me of the frequency response of my Yamaha MSP3 studio monitors:

(And I was also really impressed with my first mixing results on these little precision tools.)

...

So if you can find professional studio headphones with a similar frequency response, you are guaranteed to be on the safe side as a mixing engineer in a similar or perhaps even better way than with studio headphones that have a neutral sound or a flat frequency response. And you probably won't suffer as much from ear fatigue (unfortunately a rather common problem that occurs when mixing with most headphones over a longer period of time).
  6. The huge influence of the ear pads on the sound of studio headphones
------------------------------------------------------------------------------------------

It was quite a while ago that the original ear pads of my Sony MDR-7506 studio headphones - consisting of an exaggeratedly thin layer of extremely soft artificial leather - literally fell off with crumbling greetings after only a few years of use. The Sony MDR-7506, which I still appreciate for their extremely good audio resolution (even if the stereo panorama they offer is rather narrow than wide and the frequency response tends to have slightly emphasized bass and somewhat sharper treble), were my first better studio headphones at the time, and they really impressed me with their sound compared to my previous consumer headphones, so I naturally wanted to continue using them from time to time and therefore decided to simply replace the ear pads.

However, I didn't want the somewhat short-lived original ear pads again. This time I wanted some really long-lasting ones that also didn't generate so much heat in the ear area. So I bought some fluffier velour replacement ear pads for the Sony MDR-7506. And they weren't bad either - apart from the fact that they were a bit itchy when worn for long periods of time, they severely limited the bass response (which meant that the sense of spatiality was lost somewhat), the isolation got way worse (I could metaphorically hear the pigeons fart, so I guess velour pads are generally not the best choice for closed-back studio headphones), and they were even tighter around the ears than the original ear pads. And, dude, I don't have some kind of huge rabbit ears - I swear on the mystical legacy of Big Chungus.

So, the Sony MDR-7506 ended up being less used or even unused for a while, especially after I got my Beyerdynamic DT 880 Pro studio headphones, which are still the best studio headphones I've experienced over all the years, especially for mixing purposes.
But during the last weeks - after checking my newest mix with them - I had the idea to give 'em another chance and look for some bigger, better and more comfortable ear pads. I looked for hours, finally came across these, and bought them after recognizing that they might satisfy all of my expectations: https://www.amazon.com/dp/B0C9T7QFKT/?th=1 (I just recognized that they are about twice as expensive here in Germany compared to the US)

And, yeah, these cooling gel ear pads even surpassed my expectations:

1) They kinda brought back the pretty tight sound I experienced with the original ear pads (maybe the bass response is just a little bit stronger now).

2) They are much more comfortable than the original ear pads, much bigger and thicker (no nasty feeling of tightness or pressure around the ears), you don't sweat under these ear pads, and your ears don't get hot like with the original ear pads (at least not during winter days).

3) And even the soundstage feels much bigger and wider now! (This really surprised me, but it might have to do with the larger size and thickness of these ear pads, which gives the sound more space to roam between the headphone drivers/membranes and my ears.)

I still won't use these headphones for regular, critical mixing stuff, but definitely for temporary and final checks of my mixes, for listening to music or maybe watching some terrifying horror movies. So, I'm really glad I could save, repair and even improve these awesome headphones.

...

Another good example of how the ear pads can radically affect the sound of studio headphones are my Beyerdynamic DT 880 Pro. Back then, I ordered them as the Beyerdynamic DT 880 Pro Black Edition (completely black design with black ear pads), simply because I thought they looked way cooler than the original silver edition with silver ear pads, and they really matched my black home studio design.
And yeah, they sounded really great - high audio resolution, great depth and soundstage, fair frequency response with a kinda full, very deep bass (at least I thought that this frequency response was as flat as the frequency response graphs for the Beyerdynamic DT 880 Pro I'd seen on the internet showed me). Time passed and I used them for my first mixes. But I still wondered why I wasn't able to create such powerful mixes as other composers had done with them. Maybe it was also due to my mixing skills at the time, but I don't think that was the only problem.

Years later, I read important customer feedback from somebody who had also bought the Beyerdynamic DT 880 Pro Black Edition, where, after consulting Beyerdynamic, the customer stated that the Black Edition has a different sound (much bassier - the bass also seems to cover the midrange) than the original silver model. However, the customer also noted that after attaching the silver ear pads to the black DT 880 Pro model, it sounded like his silver DT 880 Pro model - with a much more analytical sound and a much better presentation of the midrange.

But you should be careful, because there are at least 2 different types of silver velour ear pads that are used for the DT 770/880/990 series and for similar Beyerdynamic headphone models:

- the EDT 770 V (mostly for the closed-back models like the DT 770 Pro)

- the EDT 990 V (especially for the semi-open and open-back models like the DT 880 Pro and the DT 990 Pro)

Since there is nothing like an "EDT 880 V", the EDT 990 V ear pads are the right choice if you want to replace the ear pads of the DT 880 Pro: https://north-america.beyerdynamic.com/edt-990-v.html (I just recognized that they seem to be around twice as expensive in North America compared to Germany)

Lucky Mi...
I didn't need to buy the silver ear pads at the time because I already owned a third pair of studio headphones - the open DT 990 Pro with silver velour pads (so with these three studio headphones I finally had a closed, a semi-open and an open pair of studio headphones). Since I didn't like the sound of the DT 990 Pro that much (the treble was much too harsh, and the soundstage and the representation of depth and the 3-dimensional stage felt less impressive and less realistic than with the DT 880 Pro), I swapped the silver ear pads of the DT 990 Pro for the black ear pads of the DT 880 Pro. So, the DT 990 Pro got the black ear pads (and sounded even worse with them), and the DT 880 Pro finally got the silver EDT 990 V velour ear pads. And I was really impressed with the sound of the DT 880 Pro afterwards - a very relaxed and natural sound, great midrange presentation, a relaxed but very defined bass (it almost sounded like better audio resolution), and a very analytical, detailed sound with a very accurate, razor-sharp representation of depth and the panoramic stage. After that I fell even more in love with the Beyerdynamic DT 880 Pro (especially in combination with my Lake People G109-P headphone amplifier, which can drive these high-impedance headphones even better and unleash the higher potential of these studio headphones) - probably one of the best studio headphones for mixing purposes, but also for simply listening to music and watching movies. Even the newer DT Pro X studio headphone series (I tested the DT 900 Pro X in a music store a while back) couldn't really match the insightful, analytical and linear, truly natural sound of the legendary DT 880 Pro - maybe I'll write about my big studio headphone test marathons another time. And I'm still impressed by how big the difference in sound and listening experience can be just by swapping the black velour ear pads for the silver velour ear pads from the same manufacturer.
I mean, I don't think it's mainly due to the color of these headphones; it might rather have to do with a slightly different velour surface/texture and a different softness of these ear pads, which make a big difference in sound and listening experience on the same pair of studio headphones.

...

As a result, with professional studio headphones that already have good audio resolution, it sometimes seems to be enough to simply replace the existing ear pads with more suitable ones that soften the prominent frequency ranges, instead of replacing the headphones as a whole. So if the bass is pumping way too much, the treble is way too overemphasized and shrill, or the soundstage lacks width, it might help, for example, to use much softer, much larger (a slightly larger diameter for more space around the ears) and thicker (more distance between the drivers/diaphragms and the ears) velour ear pads suitable for this headphone model.
  7. There's a new trailer for the highly anticipated Final Fantasy 7: Rebirth, the second part of the official FF7 remake. I guess it's the final launch trailer before the planned release of the game at the end of February. The trailer is only about a minute long and doesn't show much that's new, apart from:

- a familiar Nibelheim scene

- a brief glimpse of the Midgar Zolom (the giant snake in the swamp area between the Chocobo Farm and the Mythril Mine)

- and a giant creature, which reminds me a little of the Weapon at the bottom of the sea, the Emerald Weapon (it's quite likely that there will be further Weapons as additional challenges in the remake of FF7, especially since the spin-offs in the Final Fantasy 7 universe sometimes had their own unique Weapons, for example Omega Weapon from Dirge of Cerberus or Jade Weapon from Before Crisis)...

And, dude, put an ear to the "Those Who Fight" battle theme in the Rebirth version - it really blew me away. (I was just reminded these days that I wanted to continue working on my battle theme remix "Fighting Fantasies", but the Crisis Core remix has top priority for now, apart from 1 or 2 other projects.)

So, here's the trailer:

I think that this game will be a gigantic audiovisual experience, embedded in a masterfully told story.

...

Besides, if anyone has always wanted one of these atmospheric snow globes with a Gold Saucer design, here's your big chance:

I'm really not a fan of all this decorative dust catcher stuff, but I still toyed with the idea of a snow globe for the winter season some time ago. )) Nevertheless, I think I've seen enough snow for now after the last few winter service side missions.
  8. How do you pan aux reverb sends in the context of the panning of the instrument or main signal source?
-----------------------------------------------------------------------------------------------------------------------------------

After finally getting aux effect sends working in my DAW some time ago, I've increasingly integrated them into my mixing approach due to the improved clarity, sound quality and enhanced sound design possibilities they can bring to a mix. But there's one thing I'm still not quite sure about - how to arrange them in the stereo panorama (especially in the context of the panorama of the instrument or source signal) in the best possible way. I've had various thoughts on this and am currently drawing on different approaches, for example:

A) Panning the aux reverb send just like the instrument/source signal (so that the relationship between the instrument/source signal and its reverb send is not torn apart too much - could be useful if too much is already happening in the mix outside the panning of the instrument/source signal, but also disadvantageous if there is already too high a density of musical events in the area of the panning of the instrument/source signal)

B) Bringing the aux reverb sends away from the center of the panorama to the sides (might increase the clarity of the track a lot - since I rarely pan instruments fully to one side due to the loss of spatial information of instruments panned like this, it could be useful to rather pan reverb effects more or even fully to the sides)

C) Panning the aux reverb send a little bit more to the side (on the same side as the source signal - e.g. if the source signal is panned +3 dB to the right, the reverb send could be panned +10 dB to the right - can be useful if the instrument in its panorama area needs a little more punch/less reverb and there is still some room for the reverb send in the panorama a little bit further to the right)

D) Panning the aux reverb send fully to the side (on the same side as the source signal - amplifies the effect of variant C somewhat, but the spatial relationship between the source signal and its reverb send is lost to a greater extent)

E) Panning the aux reverb send fully to the opposite side (the far extreme on the opposite side of the source signal - for example, if the instrument is panned +3 dB to the right, the aux reverb send of the corresponding instrument is panned hard left - this does not seem to disturb the spatial relationship between source signal and reverb to such an extreme extent, while at the same time the assertiveness of the instrument/source signal drastically increases in the mix)

F) Panning the aux reverb sends just where you've got lots of free space, how it fits your needs as a sound creator and where it sounds best (guess this sounds like some kind of a textbook answer)

So, how do you handle the panning of aux reverb sends? Do you have a general solution for this, or do you rather use an individual approach depending on the instrument/signal source, music genre or the specific intention of the sound design?

…

To show some practical stuff, let's go to some further audio samples of the Crisis Core remix I'm currently working on. During the last weeks I could make some huge progress with this track - not just with the mixing, but also by composing lots of new stuff after playing and recording some further melodies for old and new instruments via MIDI keyboard and finally editing the content with the MIDI editor of my DAW. Although the audio samples are still a few steps behind the actual mixing and compositional state of my remix, the mixing of most instruments was already done at this point. But I still had an issue with the electric guitar, where I wasn't sure how to pan the guitar reverb send in the best possible way.
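To make the variants concrete, here's a minimal constant-power panning sketch in Python. This is my own generic illustration of the underlying pan-law math, not my DAW's actual implementation; pan positions here are normalized from -1.0 (hard left) to +1.0 (hard right) rather than the dB-style pan values mentioned above:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law: pan in [-1.0, 1.0], -1 = hard left, +1 = hard right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1, so perceived
    loudness stays roughly constant as the signal moves across the stereo field."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

def place(dry_pan: float, wet_pan: float, dry: float, wet: float) -> tuple[float, float]:
    """Pan the dry (instrument) signal and its aux reverb send independently -
    the freedom behind variants A-F - and sum them into one stereo sample pair."""
    dl, dr = pan_gains(dry_pan)
    wl, wr = pan_gains(wet_pan)
    return dry * dl + wet * wl, dry * dr + wet * wr

# Variant E: guitar slightly right, its reverb send panned hard left.
left, right = place(dry_pan=0.2, wet_pan=-1.0, dry=1.0, wet=0.3)
```

The point of the sketch: because the wet signal gets its own `pan_gains` call, moving only `wet_pan` leaves the dry instrument untouched - exactly the flexibility an aux send gives you over a reverb insert baked into the instrument channel.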
Just to give you a fundamental idea of the mixed instruments which appear in the following audio samples without any further changes (if I remember correctly):

- electric bass (plays almost fully in the center with a slight stereo width of around 2 % in order to make the bass sound a bit broader and with a less stiff spatial impression) >>> the aux reverb send of the electric bass (a minimal scoring stage convolution reverb with a low-cut filter) is panned like the instrument itself (I'm still not sure if the mix sounds better when bringing the subtle bass reverb more to the sides)

- acoustic drums (play a bit more in the background between center and sides - the stereo width should be around 50 % - the source signal also has a reverb insert with a subtle EQed concert hall convolution reverb, which also makes the kick drum more powerful) >>> the aux reverb send of the acoustic drums (a subtle cathedral convolution reverb with a heavy low-cut filter to add some airy vibes to the drum kit) is panned to the sides, leaving out the center (you will hear much more of this great effect shortly after the intro of my remix - it's not in the following audio samples)

- viola (panned around +3 dB to the left side, the source signal contains a delay effect and also a smaller low-cut filter) >>> the aux reverb send of the viola (a subtle cathedral convolution reverb with a smaller low-cut filter) is panned like the instrument itself

- acoustic guitar chords (fully panned to the sides, leaving out the center; a smaller treble boost from a vintage EQ is used on them) >>> the aux reverb send of the acoustic guitar chords (a delayed hall reverb with a moderate low-cut filter) is panned in a similar way to the instrument

- trumpets (one of the new sections I've composed for this remix, panned around +7 dB to the left side, the source signal contains a heavy low-cut filter and a subtle, already low-shelved cathedral convolution reverb) >>> no aux reverb send is used for this instrument

...
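Since low-cut filters show up on almost every send above, here's a minimal sketch of what such a filter does, using the standard RBJ Audio EQ Cookbook high-pass biquad in Python. This is a generic illustration of the filter type, not the EQ built into my DAW's reverbs, and the cutoff/Q values are just example numbers:

```python
import math

def lowcut(samples, cutoff_hz=120.0, sample_rate=48000.0, q=0.707):
    """Biquad high-pass ("low-cut") filter, RBJ Audio EQ Cookbook coefficients.
    Content below cutoff_hz is attenuated - e.g. to keep a reverb send from
    muddying the low end of a mix while the dry bass stays untouched."""
    w0 = 2.0 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cosw0 = math.cos(w0)
    b0, b1, b2 = (1 + cosw0) / 2, -(1 + cosw0), (1 + cosw0) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw0, 1 - alpha
    x1 = x2 = y1 = y2 = 0.0          # filter state (direct form I)
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# A constant (0 Hz / DC) signal is passed at first but fully rejected
# once the filter settles - that's the "cut" in "low-cut":
settled = lowcut([1.0] * 48000)[-1]
```

In a mixer you'd run only the aux send bus through such a filter, which is why an EQed send can clean up the low end without changing the instrument's own sound.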
And now comes the critical part with the raw clean electric guitar, where I'm still looking for the best solution concerning the mixing. In all audio samples the electric guitar source signal (panned around +3 dB to the right side) goes through my guitar amp plugin Vandal (with a smaller overdrive stomp box, a special Alnico cabinet simulation and a stronger ping-pong delay effect) and a moderate low-cut filter. The aux reverb sends used for the electric guitar after audio sample 1 contain a subtle cathedral convolution reverb with a heavy low-cut filter. The differences in the electric guitar section are shown in the following audio examples with different mixing approaches for this instrument:

1) No aux reverb send - guitar reverb comes via direct plugin slot insert from guitar amp plugin
-----------------------------------------------------------------------------------------------------------------------

1) No aux reverb send - guitar reverb comes via direct plugin slot insert from guitar amp plugin.mp3

...

2) Aux guitar reverb send panned like the guitar
------------------------------------------------------------

2) Aux guitar reverb send panned like the guitar.mp3

…

3) Aux guitar reverb send panned about 7 dB more to the right side than the guitar
--------------------------------------------------------------------------------------------------------

3) Aux guitar reverb send panned about 7 dB more to the right side than the guitar.mp3

....
4) Aux guitar reverb send fully panned to the right
---------------------------------------------------------------

4) Aux guitar reverb send fully panned to the right.mp3

…

5) Aux guitar reverb send fully panned to the left (the far extreme on the opposite side of the source signal)
---------------------------------------------------------------------------------------------------------------------------------------

5) Aux guitar reverb send fully panned to the left (the far extreme on the opposite side of the source signal).mp3

…

Since I didn't like the sound results I got with a standard method for mixing electric guitars (like panning the source signal fully to one side and panning the reverb send fully to the opposite side - as seems to have been done with the electric guitars at some points in this really awesome Maniac Mansion remix composition: https://www.youtube.com/watch?v=v-6Le36mlDA - but somehow I can't stand the sound of mixing approaches where instruments - especially lead instruments - are fully put to the sides), I almost think the mixing approach from audio sample 5 works best in this case (especially at the point where the trumpets kick in).

…

But let me know your opinion about this topic and my different mixing approaches.

…

Besides, I just thought it couldn't hurt to additionally upload the...

6) Latest update of the remix section shown in the previous audio samples
-----------------------------------------------------------------------------------------------

6) Latest update of the remix section showed in the previous audio samples.mp3

In this version (which is based on audio sample 5) I slightly enhanced the treble and brilliance of the trumpets and the electric guitar (so they can shine a bit more in the mix and are also pushed more to the front as the lead instruments of this part), and I spiced up the drums section with a few variations.
  9. After a long time, I finally found out why aux effect sends such as aux reverb sends regularly pushed the DSP (digital signal processor - the internal CPU of my DAW, so to speak) in my DAW Samplitude to its limits, including dropout-related scratching noises (although they should actually require fewer computer and program resources compared to direct effect inserts), so that I simply couldn't really work with the aux reverb sends (which are extremely important for a clean mix).

It was solely due to a general setting in my DAW, namely the setting that determines how many processor cores your DAW should have available. I had set 8 cores at the time - but my Intel Core i7-6700 processor only has a maximum of 4 cores (I had probably confused the number of cores with the number of threads). In any case, after I had fixed the problem purely by chance at some point by changing this setting, the aux effect sends suddenly worked perfectly and stably, and with a much more efficient use of the resources related to the DSP.

...

Maybe this can also help somebody else. So, if anybody has some trouble like this in their DAW, make sure to check all the program settings. Just one single setting like this can make a huge difference.

...

This thread can actually be closed. The main topic of this thread will be continued on a larger scale in the thread "Cleaning up the low-end und low-mid sections in a mix - with single track EQ, master track EQ, EQed aux effect sends and other methods" - a thread you can find much faster with the help of this link: https://ocremix.org/community/topic/52614-cleaning-up-the-low-end-und-low-mid-sections-in-a-mix-with-single-track-eq-master-track-eq-eqed-aux-effect-sends-and-other-methods/

...
In this new thread I also provide (and will still provide) some (further) audio samples in order to show how the mixing quality improved after finally (and primarily) using separate aux reverb send channels for the instruments/source signal tracks (where you can change just the reverb with things like an EQ/low-cut filter, different pannings and other stuff - without touching/changing the sound of the instrument source signal) instead of using direct reverb effect inserts in the plugin slots of your instrument tracks (where the instrument and the effect/reverb get processed together in the order of the inserts in the plugin slots).
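The core/thread mix-up described in this post is easy to run into, because most tools report logical processors (hardware threads) rather than physical cores. A small sketch of how to check it from a script (the psutil call in the comment is a third-party assumption, not part of the Python standard library):

```python
import os

# os.cpu_count() reports LOGICAL processors (hardware threads), not
# physical cores. On a 4-core/8-thread CPU like the Intel Core i7-6700
# it returns 8 - exactly the number that is easy to mistake for cores.
logical = os.cpu_count()
print(f"Logical processors (threads): {logical}")

# The physical core count needs a third-party library such as psutil
# (an assumption - it is not part of the standard library):
#   import psutil
#   physical = psutil.cpu_count(logical=False)  # e.g. 4 on an i7-6700
```

So if a DAW setting asks for "cores", the number from the operating system's task manager or from `os.cpu_count()` may be twice what should actually go in there.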
  10. @Sengin I finally came up with a pretty bulletproof method for figuring out the key of this (or any) soundtrack - I think I'll write an extra post because it might interest others as well. So, your soundtrack has all the notes of D major and its relative key of B minor in common (my other two guesses for A major or E minor were wrong because at least one note in those keys sounded wrong in the context of the soundtrack). Since the piece sounds much better and more harmonious with a D major chord than with a B minor chord, the actual key of the piece should be D major. D major means that the following notes are completely safe to play: D, E, F#, G, A, B, C#. With the help of this, you should be able to compose your own alternative lead melodies, countermelodies, basses, chords, etc. for your remix.
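The elimination approach above can be sketched in a few lines of Python: collect the pitch classes you hear and keep only the major scales that contain all of them. This is just an illustrative sketch, not a tool mentioned in the post - and since the relative minor shares the same notes, the major/minor decision still has to be made by ear, as described above.

```python
# Pitch classes in semitone order; sharps only, for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # interval pattern of a major scale

def major_scale(root: str) -> set:
    start = NOTES.index(root)
    return {NOTES[(start + step) % 12] for step in MAJOR_STEPS}

def matching_major_keys(heard_notes: set) -> list:
    """All major keys whose scale contains every note that was heard."""
    return [root for root in NOTES if heard_notes <= major_scale(root)]

# The seven notes listed above for D major:
heard = {"D", "E", "F#", "G", "A", "B", "C#"}
print(matching_major_keys(heard))  # ['D'] - only D major fits all seven notes
```

With fewer heard notes the result list gets longer, which mirrors the ambiguity between A major, E minor and D major described in the earlier posts.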
  11. Awesome, dude. )) Really amazing working spirit, even at the possible vacation times at the end of the year. Problem's totally solved - no further request incoming for 2023 (at least not from me). Guess it's always wise to have a passionate white IT mage in the party. ... ----------------------------------- "Well... that was impressive. You have earned your reward... and your freedom." ----------------------------------------------------------------- As a reward you radically deserve a decent 7-course dinner. But since I've almost finished mine, I'd give you the radically legendary "Joy 'n' Fluffiness Gold Award" instead, which has been successfully treasured in the depths of the internet: Carrying it like a daily accessory in your life, it should be some kind of a lucky charm which might lower your attack and defence stats or even block your anger (limit break) bar, but it will reduce your stress level, drastically boost the kinda mystical "luck" stat or even unlock some good ol' memories music-wise. ... But now you need to take a well-deserved break from work and have a good nap (just the special Final Fantasy regulations in employment law), have an interesting video game evening with friends or family, put on some good music or even continue composing your own passionate music project. ... Happy New Year to the OCRemix community. ))
  12. I had recently uploaded a few audio files of mine (4 MP3 files at 192 kbit/s) in the thread "Cleaning up the low-end and low-mid sections in a mix - with single track EQ, master track EQ, EQed aux effect sends and other methods" within the "Music Composition & Production" section of the forum for the purpose of presenting different mixing approaches in my comment field. The uploaded audio samples worked perfectly at first, but then suddenly stopped playing - until I found out that they only stopped playing after I had logged out of my OCRemix account. After I logged in, however, they worked again and could be played. For a quick search of the audio uploads - these can be found under this link in my comment from December 27, 2023: https://ocremix.org/community/topic/52614-cleaning-up-the-low-end-und-low-mid-sections-in-a-mix-with-single-track-eq-master-track-eq-eqed-aux-effect-sends-and-other-methods/ ... Then I tested the whole thing with my own image uploads and a video upload of mine (to be found in the thread "Creating a realistic impression of depth in stereo mixes" in the forum section "Music Composition & Production"). And there - even when I was logged out - I was able to view the uploaded images and watch the uploaded video myself. For a quick search of the video - it can be found under this link below my comment with the small underlined subheading "Visual tools for creating a realistic impression of depth in stereo mixes" from April 17, 2023: https://ocremix.org/community/topic/49135-creating-a-realistic-impression-of-depth-in-stereo-mixes/ ... I don't know if it's intentional that uploaded content like audio samples in a comment field can't be played when logged out. But maybe it's just some unintentional behavior that can be fixed.
  13. Some first steps of improving the mixing quality - using EQ filters and using aux effect/reverb sends instead of direct effect/reverb plugin inserts ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ So, finally I can start with the promised audio samples of my Crisis Core: Final Fantasy 7 remix at 4 different mixing approaches or mixing stages (only excerpts, but in which you can hear some crucial parts regarding the mixing quality - the final remix might still get some changes in the composition)... Here they go: 1) The old version ----------------------- 1) CC - FF7 Remix (Excerpt) - Old Version.mp3 This version is about 6 to 7 years old and was mixed on my former studio monitors (Presonus Eris E3.5 studio reference speakers in combination with my still existing Fostex PM-SUBmini 2 subwoofer) and on my Beyerdynamic DT880 Pro studio headphones connected to my Steinberg UR44 audio interface. I didn't have the biggest mixing experience back then and didn't have as fine an ear as I do now (which may have something to do with the fact that my studio monitors in particular at the time couldn't show me the necessary details and the critical things in the mix - they had a very pleasant and powerful sound, but they rather varnished my tracks and made them sound kind of finished at a still unfinished stage). I didn't even use dedicated EQ VST plugins back then (just a couple of 3-way vintage EQs and timbre controls in the VSTi and synth interfaces) because I thought that EQing (especially if you overdo it) turns the natural sound of an acoustic instrument into something very artificial and that the more EQ you use, the more your hearing constantly adapts to the new timbre until you can't tell when the bass is getting too heavy and trebles are getting too shrill - and so on. 
I was thinking more about how it is still possible for so many different instruments to play together with the whole room reverb in large concert halls without the overall listening impression sounding muddy (I just wanted to understand this concept on a fundamental level and try to implement it in my DAW). I rather suspected that it might be due to the well-placed positioning of the instruments or the size and spaciousness of the concert hall allowing a more relaxed expansion of the audio waves or acoustic signals, or that it might also be related to certain differences between real acoustic, analog sound signals and digital sound signals. But back then I didn't even get to the kinda obvious thought that great concert halls also often get an enormous acoustic treatment on a highly professional level. And if you talk about large bass traps, acoustic panels for the walls, floor and ceiling, or even about special materials for the seating, you could also say that these acoustic tools work in a similar way to the low-cut filter of an EQ VST plugin, for example (only with the big difference that with a good EQ plugin you can react much better, much faster and without huge expenditure to changes in the soundscape). ... But despite my hesitant moves at mixing, I still had the basis of a mixing concept back then - a concept which should bring at least some great possibilities for dynamics, untouched/undestroyed signal peaks for better sound quality and a natural sound of the instruments, and a concept for sophisticated loudness regulation of soundtracks. 
I'm talking about mixing at EBU R 128 standards (a recommendation of the European Broadcasting Union) developed by really farsighted audio engineers, who wanted to bring back lots of dynamics, sound quality and a very good loudness regulation to audio and audiovisual content (no more annoying loudness jumps between different soundtracks and other audio programs) in broadcasting as a reaction to the ongoing phenomenon called "loudness war", mainly caused by a growing use of compressors in the ad and music industry to sound louder than the competitors. With the help of this, every single soundtrack and audio program will be mastered to a target level of -23 dB +/- 1 dB in the context of the full scale (for a better idea: 0 dB is the point no sound signal can exceed and where a sound signal turns into clipping). Loudness is something like the perceived sound pressure level measured over a certain amount of time. So in order to get a mix towards a loudness target level of -23 dB, you need a loudness meter and you have to measure your soundtrack from the very beginning to the very end with it (because the target level of -23 dB is always an average value - so, your soundtrack might start at -33 dB or even be at -20 dB in the middle of the track - if it is at -23 dB at the end, it's fine so far). Of course there are also limits for the maximum dynamics at EBU R 128 mixing (so, an untamed gunshot after a soft piano melody won't blast your ears after mixing at EBU standards), defined in terms like "Maximum Short-term Loudness Level" (should not exceed -18 dB), "Maximum Momentary Loudness Level" (turns red in my EBU-adjusted loudness meter as soon as it reaches -15 dB) or "Maximum True Peak Level" (should not exceed -1 dB), but I usually just keep a fleeting eye on these parameters (because I mainly create music and not heavily dynamic cinematic special effects or stuff like that). 
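For illustration, programme loudness as an average over the whole track can be sketched with a deliberately simplified calculation (real EBU R 128 / ITU-R BS.1770 metering additionally applies a K-weighting filter and gating, which are omitted here), together with the gain offset needed to reach the -23 dB target:

```python
import math

def simple_loudness_db(samples: list) -> float:
    """Very simplified programme loudness: mean-square level in dB relative
    to full scale. Real EBU R 128 / BS.1770 metering also applies a
    K-weighting filter and gating before averaging - omitted here."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

def gain_to_target(measured_db: float, target_db: float = -23.0) -> float:
    """Gain in dB to apply so the programme hits the target loudness -
    a plain gain change shifts the measured loudness by the same amount."""
    return target_db - measured_db

# One second of a full-scale 440 Hz sine at 48 kHz averages to about -3 dB:
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(simple_loudness_db(sine), 1))  # -3.0
print(gain_to_target(-18.0))               # -5.0 -> turn the mix down by 5 dB
print(gain_to_target(-33.0))               # 10.0 -> turn it up by 10 dB
```

The second function is the whole secret of loudness normalization: once the meter has measured the programme, only a constant gain offset is needed, no compression.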
Many modern soundtracks are already mixed at target levels of -15 dB to -12 dB or something like that, leaving no big headroom for the signal peaks above (guess that was the time when sound surgery like peak compression or brickwall limiting started, and when the dynamics of soundtracks declined more and more). So, soundtracks and audio programs mastered at EBU standards are around 50% to 60% as loud as lots of modern music - a loudness similar to that of original sound mixes from the 80s. The really cool thing is that you don't have to care about the signal peaks when mixing at EBU R 128 loudness standards because there is always more than enough headroom. Even in the master track the signal peaks will barely scratch the -5 dB mark (and I don't use compressors or limiters in my soundtracks). This saves a lot of time at mixing/mastering and provides a good uncompressed signal. ... But enough small talk about the early foundations of my mixing concept. Let's go to the next audio sample showing the next stage of mixing I'm currently working on. ... 2) The new version with my new mixing concept developed over the last years -------------------------------------------------------------------------------------------------- 2) CC - FF7 Remix (Excerpt) - New Mixing Concept.mp3 This is the new version I'm currently working on. 
It's based on my newly developed mixing concept, which I already used for my Goldfinger remix called "Safe 'N' Sane Skater Heaven Superman", and it is mixed on my new professional Yamaha MSP3 studio monitors (the Yamaha MSP series is the professional product line of the Yamaha HS studio monitor series, so it comes with an even better audio definition and a flatter, more natural and relaxed sound) in combination with my Fostex PM-SUBmini 2 subwoofer as well as with my Beyerdynamic DT 880 Pro studio headphones (finally with the silver ear cups, giving me a much more neutral and natural listening experience) connected to my new Lake People G109-P headphone amp (which can finally drive these high-impedance headphones with ease). Because of the really outstanding audio resolution and truthfulness of the Yamaha MSP3 studio monitors (they also have no annoying hissing or humming noises at the tweeters or woofers, so they are perfect as a reliable near-field studio monitor solution), I can hear much more detail in my tracks and I can play much more with the reverb without being afraid of messing up the mix. In addition to the EBU loudness standards as the early foundation of my mixing concept for better dynamics, peak signals and sound quality, I also used EQs with low-cut filters in the single tracks of this mix for the first time. Another big core of my new mixing concept is the use of a really helpful 2-channel surround feature of my DAW, which encodes surround information into a stereo signal and where you can place instruments and other sound signals in a graphical interface as you wish. Whether shifting a signal more to one side or to the front or back, getting a signal only to the sides and sparing out the center completely, or reducing the stereo width by dragging the two stereo objects closer together and towards the center - all no problem within a short amount of time with this useful tool. 
So, it's not only good for creating a great impression of depth and spaciousness in your mix - it's also good for separating the competing frequencies of your instruments by placing them differently, and for radically cleaning up the mix in this way. If you want to read more or even get a little audiovisual impression of this feature, I'd recommend the thread "Creating a realistic impression of depth in stereo mixes" (especially under my comment with the title "Visual tools for creating a realistic impression of depth in stereo mixes") in the Music Composition & Production section of this forum. ... I guess this was a nice and really succinct summary of the main elements of my new mixing concept. ... 3) The new version + master track EQ with low-cut filter --------------------------------------------------------------------- 3) CC - FF7 Remix (Excerpt) - New Mixing Concept + Master LCF.mp3 This version is almost the same as the second audio sample, but in addition I also used a steep-edged low-cut filter on the master track this time (a low-cut filter with 36 dB per octave - it starts slowly around 50 Hz, at 20 Hz it lowers the frequencies by about 10 dB, and at 15 Hz it already lowers them by about 20 dB). You'll maybe hear a little difference between audio samples 2) and 3) - but it's not like the far bigger jump from 1) to 2). I'm not sure if I will use a master track low-cut filter as a general device in the future of my mixing concept - I'm still afraid of losing some crucial audio information between 30 and 50 Hz (especially the kinda earthy power of the bass and kick drum - not the reverb). ... 
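If a 36 dB per octave low-cut is assumed to behave like a 6th-order Butterworth high-pass (an assumption for illustration only - the actual EQ plugin may use a different filter shape with a softer knee, which would explain the gentler numbers quoted above), its attenuation at any frequency can be estimated from the standard magnitude formula:

```python
import math

def highpass_attenuation_db(freq_hz: float, cutoff_hz: float, order: int = 6) -> float:
    """Attenuation of an Nth-order Butterworth high-pass in dB.
    order=6 corresponds to a 36 dB/octave asymptotic slope
    (6 dB/octave per filter order)."""
    ratio = cutoff_hz / freq_hz
    return 10 * math.log10(1 + ratio ** (2 * order))

# One octave below the cutoff the filter is already close to its nominal slope:
print(round(highpass_attenuation_db(25.0, 50.0), 1))  # 36.1 dB down
# At the cutoff frequency itself a Butterworth filter is always about -3 dB:
print(round(highpass_attenuation_db(50.0, 50.0), 1))  # 3.0
```

Plugging in the frequencies of interest (20 Hz, 30 Hz, 50 Hz) gives a quick feel for how much of the "earthy" 30-50 Hz region a given slope and cutoff really takes away.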
4) The new version + master track EQ with low-cut filter + aux reverb sends with low-cut filters for specific instruments --------------------------------------------------------------------------------------------------------------------------------------------------- 4) CC - FF7 Remix (Excerpt) - New Mixing Concept + Master LCF + Aux Reverb Sends With LCF.mp3 This final audio sample is based on the version from the previous audio sample. But it has one big difference. Instead of using EQ and reverb plugins as inserts for all instruments, I picked the 4 instruments with the most critical (lower) frequency ranges, especially regarding possible reverb mud, and used EQed aux reverb sends with decent low-cut filters on the reverb effects for these instruments. The 4 instruments I used aux reverb sends on are: - the drums (whole drum kit) - the electric bass - the viola - and the acoustic guitar playing the chords I think that this made a bigger perceivable difference and cleaned up the bottom of the mix with the low-end and low-mid reverberation really well. ... So, now I'm really curious about your opinions regarding the different mixing approaches (or the possible stages of mixing) and the sound quality. And I would like to know which mixing results you like best or which might be the most promising ones.
  14. PS: Since the topic became more extensive and more useful than expected, I changed the title from "Steep-edged low-cut filter on the master track as a general solution for solving low-end clutter issues in the mix?" to "Cleaning up the low-end und low-mid sections in a mix - with single track EQ, master track EQ, EQed aux effect sends and other methods". ... And since the upload feature below the comment field is working again (big thanks to DarkeSword at this point for restoring the upload function), I can provide some audio samples to show different mixing approaches or possible stages of mixing in my next comment.
  15. Big thanks, man. )) The useful "Drag files here to attach, choose files" feature below the text field seems to work again. ))
  16. Maybe it has to do with maintenance on the OCRemix site some time ago - but somehow the option to upload content from your computer (such as images, audio material and videos) below the text field (as it was there before if I remember correctly) seems to have disappeared completely. It now only seems possible to insert existing attachments from previous uploads into the text, or to insert images from a URL via "Other Media". ... The thing is... I actually only wanted to upload 4 audio samples to show different mixing approaches - but no chance with that. Maybe someone here can help me or has a useful tip on how I can solve the problem.
  17. Ah, I guess we were talking about two different things in this case. You are talking about the direct insert plugin effect order: Y1) EQ before reverb... will make a different sound result (maybe even a cleaner one as well - it also might save some processing power of the CPU or the internal DSP of the DAW) than... Y2) reverb before EQ... So I finally get what YOU are talking about - and thanks for the reminder at this point, because (if I remember correctly) I usually took the Y2 route. This might have to do with my work habits when composing, arranging and mixing, where I often take a suitable instrument, then try to fit it into the ambience of my imagination with reverb, delay, chorus and other stereo/pan/room effects, and often do the fine mixing with the EQ stuff last. That's probably the main reason for my plugin insert order with the EQ at the end. So if the source signal is "A", the EQ is "B" and the reverb is "C", the two ways of signal processing would result in different equations... I'm neither a math geek nor a signal chain processing expert, but in this case the equations for the two ways could be kinda close to the following (guess it's still not the best and most accurate way to transcribe the signal processing chains into abstract terms - but perhaps it is enough for a rough idea of the different results in the two different versions of processing the signal): Y1 = sound result 1 (EQ before reverb) Y2 = sound result 2 (reverb before EQ) A = source signal B = EQ C = reverb Y1 = AB + C*(AB) = AB + ABC = A (B + BC) Y2 = AC + B*(AC) = AC + ABC = A (C + BC) Let's take numeric values instead of the variables, something like: A = 2, B = 3, C = 5 Y1 = 2*3 + 5*(2*3) Y2 = 2*5 + 3*(2*5) Y1 = 36 Y2 = 40 Different numbers, different sound results on both ways. 
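The toy equations can be mirrored in a few lines of Python (this follows the post's own multiplicative model, where the wet signal is added to the processed dry signal - it's an abstraction, not real audio DSP, since real EQ and reverb are filters rather than scalar multiplications):

```python
def eq_then_reverb(a: float, eq: float, rev: float) -> float:
    """Y1 = A*B + (A*B)*C - EQ the signal first, then add reverb of the EQed signal."""
    x = a * eq
    return x + x * rev

def reverb_then_eq(a: float, eq: float, rev: float) -> float:
    """Y2 = A*C + (A*C)*B - reverb the signal first, then add EQ of the reverbed signal."""
    x = a * rev
    return x + x * eq

# Same numeric values as in the post: A = 2, B = 3, C = 5
print(eq_then_reverb(2, 3, 5))   # 36
print(reverb_then_eq(2, 3, 5))   # 40
```

The two functions only differ in which factor is applied first, yet they return different values - which is the whole point about insert order.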
"quod erat demonstrandum" :D (Dude, I really hope I won't radically fool and disgrace myself with the math stuff here - if a math wizard 'n' tech sage reads this, feel free to correct, improve and transcend my light-footed pigeon-level equations.) So, this was about the stuff you were talking about. ... But I was talking about a different thing when I wrote: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- "But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids. >>> only aux effect reverb sends For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb). >>> only direct plugin inserts for the instrument track (I didn't have the plugin order in mind here when writing about processing the sum signal - I could have also written "I will certainly continue to put a reverb plugin insert after the sum signal (source signal + EQ)" instead - my main focus here was just about using the plugins as direct plugin inserts in the instrument tracks.) The main reason for EQing the entire sum signal for the higher frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - then why not cut the entire sum signal right away?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher frequency reverb doesn't blur the soundtrack like low frequency reverb does)." 
>>> In this case, direct plugin inserts for these instrument tracks would save a lot of time compared to creating additional aux effect send tracks for each individual instrument track (unless you want to work with entire instrument groups where you use one aux effect send for the entire group). ------------------------------------------------------------------------------------------------------------------- ... I don't know how much knowledge and experience you have with aux effect sends. But when you work with aux effect sends, you really have 2 different tracks there - the instrument track (let's say track number 1) and another aux effect send track (could be track number 40, for example)... Both tracks (and this is the great feature of working with aux effect sends) can be processed completely differently - different pannings, different inserts etc. You just need to activate the aux effect send in the aux slots of your instrument track to route the effect to the instrument (otherwise the aux effect send track doesn't "know" to which instrument track it should apply the aux send reverb effect or other effects) - and of course you need to set the relative ratio or intensity/level strength (in dB) of the aux track in relation to the level strength of the instrument track directly in the aux slot of the instrument track (once the level strength ratio between both tracks is set, the routed aux effect sends will get louder or quieter as the instrument track gets louder or quieter). And here comes the big one - in case you want to radically clean up your mix (especially the blurring reverb) without losing the original character and frequency range of your instrument. Just as an example... 
You're composing a complex ambient soundtrack with an acoustic guitar that has a really cozy, warm tone (you definitely want to keep the full frequency spectrum of this particular instrument) - but as soon as you apply reverb to this instrument, it completely messes up the other instruments (bass, drums and a few other instruments that play in the lower frequency range). So, the dry acoustic guitar sounds great in the mix - the acoustic guitar with reverb makes the mix messy and muddy. A problem which could be solved with the magic of aux effect sends. Remember? Two completely different tracks - the instrument track (which we will leave as it is - without any plugin inserts in this case) and the other aux effect send track (into which we will insert the reverb and EQ only this reverb). We want to keep the full frequency range, warm tone and clean sound of the guitar in the mix - so, we don't use any EQ plugin insert or reverb insert on this instrument track. Yep - sounds warm and clean, but still dry as hell. So we still need some reverb (but a radically cleaned up reverb without the problematic low-frequency reverberation) for the ambience. And of course we'll only put the reverb plugin in the plugin slots of the separate aux effect send track, because if we EQ the reverb in the separate aux effect send track, it won't affect the source signal of the instrument (two separate signal chains - one for the instrument track, one for the aux effect send track). It is not of primary importance here whether the reverb is placed before the EQ in the signal chain of the aux effect send track or whether the EQ is processed before the reverb. The important and really helpful feature here is that you can EQ only the reverb for the instrument without EQing/touching or changing the instrument itself. So... 
If you do it wisely, you can get an instrument with its full frequency range and original sound character together with a decently low-cut-filtered reverb by using aux effect sends. In our case, we can have a nice, warm and cozy acoustic guitar with an untouched frequency range in combination with an ambient but clean guitar reverb, where just the low frequencies of the separately processed guitar reverb have been heavily low-cut-filtered. And as a result, the guitar reverb shines much brighter and won't mess up the mix anymore. ... I hope that I was able to make it a little clearer what I was referring to in my previous comments.
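A minimal sketch of that routing idea, with deliberately crude stand-ins for the real plugins (the single feedback delay and the one-pole high-pass are illustrative assumptions - a real reverb is far denser and a real low-cut would be steeper): the dry guitar goes straight to the mix, while only the aux send copy gets reverb plus a low cut.

```python
import math

def toy_reverb(signal: list, delay: int = 240, feedback: float = 0.4) -> list:
    """Crude reverb stand-in: a single feedback delay line."""
    out = list(signal)
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return out

def one_pole_highpass(signal: list, cutoff_hz: float, rate: int = 48000) -> list:
    """Crude one-pole high-pass (6 dB/octave) - stands in for the low-cut
    EQ sitting in the aux reverb send's plugin chain."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / rate)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

def mix_with_aux_reverb(dry, reverb, send_level: float, cutoff_hz: float):
    """The dry instrument track stays completely untouched; only the aux
    send copy is reverbed, low-cut-filtered and summed back into the mix."""
    wet = one_pole_highpass(reverb(dry), cutoff_hz)
    return [d + send_level * w for d, w in zip(dry, wet)]

# Stand-in for a dry guitar signal: a single impulse followed by silence.
guitar = [1.0] + [0.0] * 499
mix = mix_with_aux_reverb(guitar, toy_reverb, send_level=0.3, cutoff_hz=150.0)
```

The key property is visible in the code: `dry` is never modified, so the instrument keeps its full frequency range while only the send bus gets filtered.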
  18. Here you have at least some visible note material (lead/melodies, chords, bass, drums) you could start with in order to create your remix: https://onlinesequencer.net/1639136# I still have some trouble figuring out the key of this composition (because of the many possible semitone steps within the huge amount of note material). Could it be something like A major - or D major - or maybe E minor - or is it a special mode? How would you proceed to determine the key in this case?
  19. Nah, I'll definitely use separately EQed aux reverb sends - but not for every instrument... I think I will handle it just as I wrote before: The main reason for EQing the entire sum signal for the higher frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - then why not cut the entire sum signal right away?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher frequency reverb doesn't blur the soundtrack like low frequency reverb does). ... Besides... After the last week of work with the crappiest weather conditions (lots of rain, mud and almost a storm) on the building site, many hours of Christmas preparations, a merciless workout, the final cleaning of my cozy palace, a somehow relaxed and interesting Christmas Eve (as a big surprise my uncle visited my mother and talked about his trip to Japan and his experiences with the Japanese culture and the people there) and also a lot of boring small talk (I even took a big break from further family events and could finally enjoy working on my music projects), I managed to finish the 4 audio samples. Maybe I'll already upload them in the next few hours or tomorrow morning. ))
  20. If I understand it correctly, it's more or less a change of the order of the plugin insert signal chain. Usually you use the reverb first on the signal and then the EQ on the signal + reverb - but in your case it's first the EQ that hits the signal, and after this, the EQed signal gets the reverb. Depending on the order of the plugin inserts, the sound result will be a different one. ... But it could be really useful if the DAW developers create a system with primary plugin slots (maybe 7 per track) and secondary support plugin inserts (maybe 3 per primary plugin slot). So, you could put a reverb in the primary plugin slot and an EQ in the connected secondary plugin slot that will only affect the plugin effect in the primary slot (and not the source signal itself). Since I usually treat each instrument/track individually, I'd prefer a system like this over creating several aux send tracks for each instrument track. But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids. For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb). ... Got the 4 audio samples almost done in between work and weekends full of sprawling Christmas preparations. Just gimme a few more days (already working on the 4th one, where I still try to find out which further instruments besides drums, viola, acoustic guitar, and maybe also the rather dry bass in the mix would benefit by using EQed aux reverb sends on them - and especially how much EQ/low-cut filter on the reverb effect is optimal to clean up the mix without destroying its ambience).
  21. Might be some really good news for Secret of Mana fans. There will be a completely new game with a new story for the Mana series - have a good look at it: The real title should actually be "Visions of Mana" instead of "Vision of Mana", as many other trailer uploads show. And the story so far seems to revolve around a boy who accompanies a girl on her pilgrimage to the myth-enshrouded Mana Tree. Although I think that hardly any Mana game or remake can surpass the charm of the original Secret of Mana for the Super NES with its unique graphic style and truly beautiful, mesmerizing soundtracks (except perhaps "Sword of Mana", which is a rather advanced and story-wise much more elaborate reinterpretation of the chronologically first Mana game called "Seiken Densetsu - Final Fantasy Gaiden", or "Mystic Quest", as it was called on the Game Boy), this new Mana game may well have the potential to win me over. Although I'd really like to see a new story approach that starts in a pretty dark, kinda dystopian modern day city with only a few traces of life and mana left, like a few wild herbs trying to break through a pavement (and where people are much more concerned about money, status, influence and more inanimate things instead of life, spirit and nature), the new game could also have an interesting storyline. Despite the fact that I'm still a big fan of the old Secret of Mana SNES graphic style, the graphics of the upcoming Mana game seem to be really beautiful and very detailed - they might even surpass the really well done remake "Trials of Mana". I'm not worried about the soundtracks because the music in the trailer, which is easily recognizable to fans of the Mana series, already sounds really great. Let's be surprised by the further development of this title. ... Besides... 
I'm also working on a Secret of Mana remix, which will start with a real whale song as mana's response to a little flute melody for the remix intro - in combination with a self-written introductory poem about mana (life force). The musical genres revolve around ambient, orchestra (especially for the calm and atmospheric introductory section) as well as electronics with a small touch of rock ballad (especially for the much more upbeat main section). Although I'm pretty far along with this remix, it will still take some time (especially for the video clips from the original SNES game - so I'll have to play the game again), because I want this music project to be really good - precious childhood memories 'n' stuff like that, you know. )) But before I dive back into this remix project more, I have at least two other priorities in music projects that come first (a drum composition and a new mixing for a Crisis Core: Final Fantasy 7 remix, which should also help me move forward with my new mixing concept). Let's see what the future brings - especially in the last cozy days of this year and the coming year. ))
  22. A new, very interesting trailer for the second Final Fantasy 7 remake project - Final Fantasy 7: Rebirth - has surfaced from the depths of the internet. Among other things, it contains:
- a new theme song for the second part of the remake
- the introduction of further characters like Dio (the owner of the Gold Saucer amusement park), the good ol' Bugenhagen, Dyne (Barret's best friend, worker buddy and the biological father of Marlene), Vincent Valentine and Cid Highwind
- the reintroduction of a few famous Avalanche members who were thought to be dead - maybe they still are and are just wandering through the lifestream, as hinted at in the novel "The Maiden Who Travels the Planet" by Benny Matsuyama, published in the ultimate Final Fantasy 7 game guide "Final Fantasy 7: Ultimania Omega" - whoever wants to listen to an audiobook version of this novel can check out this link for a complete, good-quality version originally provided by thelifestream.net: https://www.youtube.com/watch?v=fm3pymQPaU8
- some further summons like Titan, Phoenix and maybe some kind of smaller Bahamut
- the introduction of the sometimes really weird and humorous theater performance in the Gold Saucer, which will probably be staged a little more seriously, bigger and more professionally in the remake, plus a few more romantic moments between Cloud and Aerith
So, have fun and enjoy the newest trailer material: ... Beyond that, there was also an official TGS presentation of Final Fantasy 7: Rebirth, where some familiar and new mini-games for the upcoming remake were presented, e.g.:
- an improved version of the fluffy Mog House game (around 1:01:20)
- a new card game, which is apparently called "Queen's Blood" (around 1:02:06)
- and - for all the ambitious composers here - an enhanced version of the piano mini-game from the original Final Fantasy 7 (at 1:03:32 and even more around 1:06:34)
Here's the big TGS presentation video:
  23. I do it in a similar way. I keep the bass in mono or very close to mono, opening it up around 1 to 5% towards full stereo - that way the bass sounds less stiff, static and monotone, and it has at least some room to move, especially if I'm using a really dry bass. And for acoustic drums, I often use a greater stereo width of between around 20 and 50%.

That's the big question. Guess I'm still afraid of losing some low-end audio information instead of embracing high-end sound quality in my mixes. But listening to some soundtracks from the 50s calms me down a little bit, because the soundtracks back then didn't contain much low-end content at all: That probably has more to do with the recording technology and the consumer audio playback devices of those days. But even if you listen to the soundtracks of the 50s with today's audio equipment, they still sound really fresh, clean, well-mixed, highly dynamic, very controlled in the low and mid ranges and still pretty complete across the frequency spectrum.

... In my remix of the Baywatch opening, for example, I really wanted some mighty low-end rumble in the industrial percussion section at the beginning and towards the end of the soundtrack. Unfortunately, the latest remix version of this track, which I uploaded many years ago, was still mixed on my old studio monitors (where I couldn't really hear or evaluate what was going on in the low-end and low-mid sections) and without the application of the new mixing concept I've developed over the last two years: I've already started working on an improved mix that meets my current standards and mixing skills, where I even use a low-cut filter on the mighty industrial percussion and other instruments in order to clean up the low-end and mid sections of the track. But it might take a while, because I also want to enhance the piano composition in this track and I'm mainly working on 2 or 3 other music projects at the moment. 
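By the way, those width percentages boil down to simple mid/side math: you split the stereo signal into what both channels share (mid) and what differs between them (side), then scale the side part. Here's a toy Python sketch of that idea - the function name and NumPy test signal are just my own illustration, not code from any DAW:

```python
import numpy as np

def set_stereo_width(left, right, width):
    """Scale stereo width via mid/side encoding.

    width = 0.0 -> pure mono, 1.0 -> unchanged;
    values in between narrow the image (e.g. 0.05 for a
    nearly-mono bass, 0.2-0.5 for acoustic drums).
    """
    mid = (left + right) / 2.0   # what both channels share
    side = (left - right) / 2.0  # what differs between them
    side = side * width          # shrink the stereo component
    return mid + side, mid - side

# Example: a hard-panned 60 Hz bass tone narrowed to 5% width
t = np.linspace(0.0, 1.0, 48000)
left = np.sin(2 * np.pi * 60 * t)   # bass on the left only
right = np.zeros_like(t)            # nothing on the right
l2, r2 = set_stereo_width(left, right, 0.05)
```

At width 0.05 the two output channels end up almost identical (about 52.5% vs. 47.5% of the source), so the bass sits nearly centered but still breathes a little.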
At the moment, I might be lucky and able to continue working on my mixing and composing projects during the week between winter service assignments and some work on the construction sites.

Yeah, lower frequencies like those from bass and kick drum often sound most effective and powerful in the center of the stereo panorama. But there are also interesting exceptions to this rule. Just have a look at the famous track "Stand by Me" by Ben E. King: In this track, you have the bass far on the left side, together with a shaker and triangle as a nice contrast in the higher frequency section. On the right side of the stereo panorama, you have a lower strings section, a humming choir and a higher strings section with a violin (if I hear it correctly) as a contrast in the higher frequency section again. The center of the stereo panorama seems to be mainly reserved for the singer's voice in this case. Kinda unusual mixing concept with the bass panned to the side - but pretty effective in this track.

I guess you're talking about working with aux sends for VST plugin effects such as reverb, which allow you to process the plugin effect separately from the main signal (e.g. EQing only the reverb without affecting the main signal of the instrument, synthesizer, voice, etc.) - in contrast to working with direct VST-based plugin inserts, where the entire signal chain including the source signal gets processed. The thing is, I'm really used to working with direct plugin inserts or the integrated effects of the VST instrument itself (where I also like the much better, faster and more precise control over settings via clearly defined parameters and values) 'cause I never really got aux sends to work properly in my DAW. 
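The routing difference between those two approaches can be sketched in a few lines of Python. This is purely a toy illustration - the `eq_cut` and `reverb` stand-ins are made-up placeholders for real plugins:

```python
import numpy as np

def eq_cut(signal):
    """Stand-in for an EQ plugin (toy: just attenuates)."""
    return signal * 0.5

def reverb(signal):
    """Stand-in for a reverb plugin (toy: short smearing)."""
    return np.convolve(signal, np.ones(4) / 4.0, mode="same")

dry = np.random.randn(1024)  # the source track

# Insert routing: the WHOLE chain (source + reverb) passes
# through the EQ, so the EQ also colors the dry source.
insert_out = eq_cut(dry + reverb(dry))

# Aux-send routing: a copy of the source feeds the reverb on
# its own bus; only that wet bus is EQed, then summed back -
# the dry source stays untouched.
send_out = dry + eq_cut(reverb(dry))
```

The two outputs differ by exactly half the dry signal here, which is the whole point: with a send, processing on the effect bus never touches the source itself.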
Whenever I tried to create an aux send and turn it on in a track where I wanted to use that effect, the DSP (the internal digital signal processor of my DAW) would suddenly go over 100% and cause huge instabilities, nasty sound artefacts or even crash-like dropouts. This was very unusual, because with direct plugin inserts the DSP barely reaches the 50% performance mark even in my biggest, most complex music projects - and I usually work with raw, unbounced/unfrozen MIDI tracks (which needs much more DSP/CPU performance, but makes it totally uncomplicated to change something in the composition while listening, mixing and editing the track).

Until a few weeks ago, I had never found out what was causing this issue - and I own a really good computer with an Intel i7 6700 processor, 32 GB of DDR4 RAM, a decent UR44 audio interface, a top-notch DAW (Samplitude Pro X4 Suite) and more than enough free disk space. But then I found out that I had messed up one single setting in my DAW - the number of processor cores that should be used to process my DAW tasks. I had set 8 cores in my DAW because I thought that my i7 6700 really had 8 cores - but it only has 4 cores (which I may have confused with the 8 threads). After changing the setting to 4 cores, I was finally able to use my first aux sends for separately processed effects plugins with smooth DSP performance and no further issues in my DAW.

I could also have used the Origami convolution reverb from my Independence Pro FX plugin library in Samplitude. This Origami reverb plugin also includes a 4-band parametric EQ that affects just the reverb - unfortunately, it only comes with a low-shelf filter, two band-pass filters and a high-shelf filter without a clear graphic display, instead of providing a nice low-cut filter, several peak filters and a high-cut filter with a clear graphical interface. 
It looks like this: But with the finally functioning possibility of working with plugin-based aux effect sends, I may be able to enhance the sound quality of my mixing concept even further. I probably won't use EQed aux sends on the main instruments in the upper frequency range (if the frequencies of an instrument including reverb clash there with those of another instrument including reverb, it could make more sense to EQ the whole signal chain directly in order to get a cleaner mix, or - if just the reverb is the problem - to drastically reduce the reverb or replace it with some nice ping-pong delay effects). But for the instruments with the lowest frequencies in the track - like bass and bass-heavy drum elements with stronger reverb - that don't have to compete with other instruments from even lower frequency ranges, it could really be useful to filter out just the long-reverberating low-end reverb clouds (which often sound like dull, undefined mud on ordinary consumer speaker systems), while maintaining the power and assertiveness of the main signals of the bass and lower drum elements.

... Since I'm currently working on a new mix (based on my new mixing concept) for my Crisis Core: Final Fantasy 7 remix called "Wings Of Freedom", I could try out a few things and provide you with sound clips from different mixing approaches - especially the old version, the new version (based on my new mixing concept), the new version with an additional master low-cut filter, and the new version with an additional master low-cut filter plus some aux reverb sends with low-cut filters for crucial instruments. As long as winter doesn't give me its legendary white-out ultra finisher with unexpected masses of snow these days (as I've already mentioned, I also work in winter maintenance during the cold season), I'll upload a few audio samples for you soon. ))
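For anyone curious what "low-cutting only the reverb return" means in signal terms, here's a minimal Python sketch. The one-pole filter and both function names are my own toy assumptions - not Samplitude or Origami code, and a real mixing low-cut would be a steeper filter:

```python
import numpy as np

def one_pole_highpass(signal, cutoff_hz, sample_rate=48000):
    """Very simple one-pole high-pass (6 dB/oct) - enough to
    illustrate thinning out low-end rumble on a reverb return."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = np.zeros_like(signal, dtype=float)
    prev_in = 0.0
    prev_out = 0.0
    for i, x in enumerate(signal):
        prev_out = alpha * (prev_out + x - prev_in)
        prev_in = x
        out[i] = prev_out
    return out

def mix_with_filtered_send(dry, reverb_wet, send_level=0.3,
                           lowcut_hz=150):
    """Keep the dry bass/kick signal untouched, but low-cut
    only the reverb return before summing it back in."""
    cleaned_wet = one_pole_highpass(reverb_wet, lowcut_hz)
    return dry + send_level * cleaned_wet
```

The key design point is that the dry signal never passes through the filter, so the punch of the bass and kick stays intact while the rumbling low-end reverb tail gets cleaned up.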
  24. @Nase Thanks for sharing your information and experiences. )) I guess I've already made a decision. If I take all the essentials into consideration - especially the assumed quality, the sound and sound stability across different playing styles and articulations, the pickups and possibilities of internal sound control, and then the really stylish design - my choice will most likely be the Ibanez GRG140. I really fell in love with the sound signature of this Japanese masterpiece: But I will still wait a few months to save some money without scratching my minimum reserve. I've just spent another 300 bucks (pretty annoying, I know) on a huge set of robust, durable, water-repellent workwear for winter service 'cause I don't want to freeze my ass off and want to have at least some fun in the snowy nights. But with the hefty winter service surcharges of just under 20% to around 50% (depending on whether night work or work on Sundays or public holidays is involved in addition to the basic winter service surcharge) on top of the already decent hourly wage, the money for the electric guitar should be recouped quite quickly. ... Besides, no superiors were harmed or used as punching bags in this company. Maybe I just pissed them off so much - or those nutty gossips were just scared that they'd catch a good knockout punch if they didn't behave - that they simply sent me back to the building sites. Luckily, that was actually totally in my interest, because the workers and supervisors there aren't quite so bitchy 'n' scratchy - and I can snore the night away for a good hour longer. Guess the magical trio of nasty supervisors consisting of the cold witch, the prickly besom and the way too warm gossip gay lord won't be bothering me for a while. ))
  25. At the moment I'm listening to some soundtracks from the 50s, 80s and - of course - lots of cool video game remixes. I just found a pretty rad remix of Gambit's theme from the X-Men series: