
Master Mi

Everything posted by Master Mi

  1. Maybe it has to do with maintenance on the OCRemix site some time ago - but somehow the option to upload content from your computer (such as images, audio material and videos) below the text field (as it was there before, if I remember correctly) seems to have disappeared completely. It now only seems possible to insert existing attachments from previous uploads into the text, or to insert images from a URL via "Other Media". ... The thing is... I actually only wanted to upload 4 audio samples to show different mixing approaches - but there's currently no way to do that. Maybe someone here can help me or has a useful tip on how I can solve the problem.
  2. Ah, I guess we were talking about two different things in this case. You are talking about the direct plugin insert effect order: Y1) EQ before reverb... will give a different sound result (maybe even a cleaner one - and it might also save some processing power of the CPU or the internal DSP of the DAW) than... Y2) reverb before EQ. So I finally get what YOU are talking about - and thanks for the reminder at this point, because (if I remember correctly) I usually took the Y2 route. This might have to do with my work habits when composing, arranging and mixing: I often take a suitable instrument, then try to fit it into the ambience of my imagination with reverb, delay, chorus and other stereo/pan/room effects, and often do the fine mixing with the EQ stuff last. That's probably the main reason for my plugin insert order with the EQ at the end. So if the source signal is "A", the EQ is "B" and the reverb is "C", the two ways of signal processing would result in different equations... I'm neither a math geek nor a signal processing expert, but the equations for the two ways could look roughly like my following attempt (it's probably still not the best and most accurate way to transcribe the signal processing chains into abstract terms - but perhaps it's enough for a rough idea of the different results of the two processing orders):

Y1 = sound result 1 (EQ before reverb)
Y2 = sound result 2 (reverb before EQ)
A = source signal
B = EQ
C = reverb

Y1 = AB + C*(AB) = AB + ABC = A*(B + BC)
Y2 = AC + B*(AC) = AC + ABC = A*(C + BC)

Let's take numeric values instead of the variables, something like A = 2, B = 3, C = 5:

Y1 = 2*3 + 5*(2*3) = 6 + 30 = 36
Y2 = 2*5 + 3*(2*5) = 10 + 30 = 40

Different numbers, different sound results for the two orders.
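The toy equations above can be checked with a few lines of Python. To be clear, this only mirrors the abstract multiply-and-add model from this post - it is not how real DSP chains behave, just a quick arithmetic self-check:

```python
# Toy model from the post: treat the source signal (A), the EQ (B) and the
# reverb (C) as plain numbers, and each processing step as a multiply-and-add.
A, B, C = 2, 3, 5  # source, EQ, reverb (arbitrary example values)

# Y1: EQ first, then reverb applied on top of the EQed signal
Y1 = A * B + C * (A * B)   # = AB + ABC = A * (B + B*C)

# Y2: reverb first, then EQ applied on top of the reverberated signal
Y2 = A * C + B * (A * C)   # = AC + ABC = A * (C + B*C)

print(Y1, Y2)  # 36 40 -- different order, different result in this model
assert Y1 != Y2
```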
"quod erat demonstrandum" :D (Dude, I really hope I won't radically fool and disgrace myself with the math stuff here - if a math wizard 'n' tech sage reads this, feel free to correct, improve and transcend my light-footed pigeon-level equations.) So, this was about the stuff you were talking about. ... But I was talking about a different thing when I wrote: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- "But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids. >>> only aux effect reverb sends For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb). >>> only direct plugin inserts for the instrument track (I didn't have the plugin order in mind here when writing about processing the sum signal - I could have also written "I will certainly continue to put a reverb plugin insert after the sum signal (source signal + EQ)" instead - my main focus here was just on using the plugins as direct plugin inserts in the instrument tracks.) The main reason for EQing the entire sum signal for the higher-frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - so why not cut the entire sum signal directly?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher-frequency reverb doesn't blur the soundtrack like low-frequency reverb does)."
>>> In this case, direct plugin inserts for these instrument tracks would save a lot of time compared to creating additional aux effect send tracks for each individual instrument track (unless you want to work with entire instrument groups, where you use one aux effect send for the whole group). ------------------------------------------------------------------------------------------------------------------- ... I don't know how much knowledge and experience you have with aux effect sends. But when you work with aux effect sends, you really have 2 different tracks there - the instrument track (let's say track number 1) and another aux effect send track (could be track number 40, for example)... Both tracks (and this is the great feature of working with aux effect sends) can be processed completely differently - different panning, different inserts etc. You just need to activate the aux effect send in the aux slots of your instrument track to route the effect to the instrument (otherwise the aux effect send track doesn't "know" which instrument track it should apply the aux send reverb or other effects to) - and of course you need to set the relative ratio or intensity/level (in dB) of the aux track in relation to the level of the instrument track directly in the aux slot of the instrument track (once the level ratio between both tracks is set, the routed aux effect sends will get louder or quieter as the instrument track gets louder or quieter). And here comes the big one - in case you want to radically clean up your mix (especially the blurring reverb) without losing the original character and frequency range of your instrument. Just as an example...
You're composing a complex ambient soundtrack with an acoustic guitar that has a really cozy, warm tone (you definitely want to keep the full frequency spectrum of this particular instrument) - but as soon as you apply reverb to this instrument, it completely messes up the other instruments (bass, drums and a few other instruments that play in the lower frequency range). So, the dry acoustic guitar sounds great in the mix - the acoustic guitar with reverb makes the mix messy and muddy. A problem which could be solved with the magic of aux effect sends. Remember? Two completely different tracks - the instrument track (which we will leave as it is - without any plug-in inserts in this case) and the other aux effect send track (into which we will insert the reverb, and EQ only this reverb). We want to keep the full frequency range, warm tone and clean sound of the guitar in the mix - so we don't use any EQ plugin insert or reverb insert on this instrument track. Yep - sounds warm and clean, but still dry as hell. So we still need some reverb (but a radically cleaned-up reverb without the problematic low-frequency reverberation) for the ambience. And of course we'll only put the reverb plugin in the plugin slots of the separate aux effect send track, because if we EQ the reverb in the separate aux effect send track, it won't affect the source signal of the instrument (two separate signal chains - one for the instrument track, one for the aux effect send track). It is not of primary importance here whether the reverb comes before the EQ in the signal chain of the aux effect send track or whether the EQ is processed before the reverb. The important and really helpful feature here is that you can EQ only the reverb for the instrument without EQing/touching or changing the instrument itself. So...
If you do it wisely, you can get an instrument with its full frequency range and original sound character together with a decently low-cut-filtered reverb by using aux effect sends. In our case, we can have a nice, warm and cozy acoustic guitar with an untouched frequency range in combination with an ambient but clean guitar reverb, where just the low frequencies of the separately processed guitar reverb have been heavily low-cut-filtered. And as a result, the guitar reverb shines much brighter and won't mess up the mix anymore. ... I hope that I was able to make it a little clearer what I was referring to in my previous comments.
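The two routing schemes described above can be sketched in a few lines of Python. The "reverb" (a few decaying delay taps) and the one-pole high-pass are crude stand-ins I made up for illustration - the point is only the routing, i.e. whether the dry signal passes through the EQ or not:

```python
import numpy as np

def toy_reverb(x, delay=200, feedback=0.5, taps=4):
    """Crude stand-in for a reverb: a few decaying delayed copies (wet only)."""
    wet = np.zeros_like(x)
    for k in range(1, taps + 1):
        d = delay * k
        wet[d:] += (feedback ** k) * x[:-d]
    return wet

def toy_highpass(x, alpha=0.95):
    """One-pole high-pass: removes low-end 'mud' from whatever it is fed."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

rng = np.random.default_rng(0)
dry = rng.standard_normal(4000)          # stand-in for the guitar track

# Insert-style: high-pass the whole sum (dry + wet) -> the dry guitar is EQed too
insert_mix = toy_highpass(dry + toy_reverb(dry))

# Aux-send-style: high-pass only the wet reverb, then add the untouched dry track
aux_mix = dry + toy_highpass(toy_reverb(dry))

# Only the aux version keeps the dry component untouched, sample for sample
assert np.allclose(aux_mix - toy_highpass(toy_reverb(dry)), dry)
```

The two mixes differ exactly by how the dry track is treated: in the insert version the guitar itself loses low-end, in the aux version only the reverb does.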
  3. Here you have at least some visible note material (lead/melodies, chords, bass, drums) you could start with in order to create your remix: https://onlinesequencer.net/1639136# I still have some trouble figuring out the key of this composition (because of the many possible semitone steps in the huge amount of note material). Could it be something like A major - or D major - or maybe E minor - or is it a special mode? How would you proceed to determine the key in this case?
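One common way to attack the key question is the Krumhansl-Schmuckler approach: count how often each of the 12 pitch classes occurs in the note material, then correlate that histogram against all 24 rotated major/minor key profiles. A rough sketch - the actual note counts from the linked sequence would have to be filled in by hand; the histogram below is just a self-check shaped like the E-minor profile:

```python
import numpy as np

# Krumhansl-Kessler key profiles (perceived stability of each scale degree)
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def guess_key(histogram):
    """Correlate a 12-bin pitch-class histogram with all 24 rotated profiles."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            r = np.corrcoef(np.roll(profile, tonic), histogram)[0, 1]
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# Self-check: a histogram shaped exactly like the E-minor profile
# (E = pitch class 4) should come back as E minor.
hist = np.roll(MINOR, 4)
print(guess_key(hist))  # E minor
```

With the real note counts from the sequencer filled in, the top two or three correlations usually narrow the candidates down quickly (and a flat tie between, say, G major and E minor points at the relative-key ambiguity).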
  4. Nah, I'll definitely use separately EQed aux reverb sends - but not for every instrument... I think I will handle it just as I wrote before: The main reason for EQing the entire sum signal for the higher-frequency instruments is firstly the fact that the instruments and sounds in the higher frequency range are often much more competitive (... so you may need to cut more frequencies in general to clean up the track - so why not cut the entire sum signal directly?), and secondly the fact that reverb in the higher frequency ranges doesn't cause much of a problem for human ears (higher-frequency reverb doesn't blur the soundtrack like low-frequency reverb does). ... Besides... After the last week of work with the crappiest weather conditions (lots of rain, mud and almost a storm) on the building site, many hours of Christmas preparations, a merciless workout, the final cleaning of my cozy palace, a somehow relaxed and interesting Christmas Eve (as a big surprise, my uncle visited my mother and talked about his trip to Japan and his experiences with Japanese culture and the people there) and also a lot of boring small talk (I even took a big break from further family events and could finally enjoy working on my music projects), I managed to finish the 4 audio samples. Maybe I'll already upload them in the next few hours or tomorrow morning. ))
  5. If I understand it correctly, it's more or less a change in the order of the plugin insert signal chain. Usually you use the reverb first on the signal and then the EQ on the signal + reverb - but in your case it's first the EQ that hits the signal, and after this, the EQed signal gets the reverb. Depending on the order of the plugin inserts, the sound result will be a different one. ... But it could be really useful if the DAW developers created a system with primary plugin slots (maybe 7 per track) and secondary support plugin inserts (maybe 3 per primary plugin slot). So, you could put a reverb in the primary plugin slot and an EQ in the connected secondary plugin slot that will only affect the plugin effect in the primary slot (and not the source signal itself). Since I usually treat each instrument/track individually, I'd prefer a system like this over creating several aux send tracks for each instrument track. But I guess I will use EQed reverb aux sends only for critical instrument tracks like drums, bass and instruments with lots of low-end and low mids. For the instruments in the frequency ranges above, I will certainly continue to EQ the complete sum signal (source signal + reverb). ... Got the 4 audio samples almost done in between work and weekends full of sprawling Christmas preparations. Just gimme a few more days (I'm already working on the 4th one, where I'm still trying to find out which further instruments besides drums, viola, acoustic guitar, and maybe also the rather dry bass in the mix would benefit from EQed aux reverb sends - and especially how much EQ/low-cut filter on the reverb effect is optimal to clean up the mix without destroying its ambience).
  6. Might be some really good news for Secret of Mana fans. There will be a completely new game with a new story for the Mana series - have a good look at it: The real title should actually be "Visions of Mana" instead of "Vision of Mana", as many other trailer uploads show. And the story so far seems to revolve around a boy who accompanies a girl on her pilgrimage to the myth-enshrouded Mana Tree. Although I think that hardly any Mana game or remake can surpass the charm of the original Secret of Mana for the Super NES with its unique graphic style and truly beautiful, mesmerizing soundtracks (perhaps "Sword of Mana", which is a rather advanced and story-wise much more elaborate reinterpretation of the chronologically first Mana game called "Seiken Densetsu - Final Fantasy Gaiden", or "Mystic Quest" for the Gameboy), this new Mana game may well have potential to win me over. Although I'd really like to see a new story approach that starts in a pretty dark, kinda dystopian modern day city with only a few traces of life and mana left, like a few wild herbs trying to break through a pavement (and where people are much more concerned about money, status, influence and more inanimate things instead of life, spirit and nature), the new game could also have an interesting storyline. Despite the fact I'm still a big fan of the old Secret of Mana SNES graphic style, the graphics of the upcoming Mana game seem to be really beautiful and very detailed - the graphics might even surpass the really well done remake "Trials of Mana". I'm not worried about the soundtracks because the music in the trailer, which is easily recognizable to fans of the Mana series, already sounds really great. Let's be surprised by the further development of this title. ... Besides... 
I'm also working on a Secret of Mana remix, which will start with a real whale song as mana's response to a little flute melody for the remix intro - in combination with a self-written introductory poem about mana (life force). The musical genres revolve around ambient, orchestra (especially for the calm and atmospheric introductory section) as well as electronics with a small touch of rock ballad (especially for the much more upbeat main section). Although I'm pretty far along with this remix, it will still take some time (especially for the video clips from the original SNES game - so I'll have to play the game again), because I want this music project to be really good - precious childhood memories 'n' stuff like that, you know. )) But before I dive back into this remix project more, I have at least two other priorities in music projects that come first (a drum composition and a new mixing for a Crisis Core: Final Fantasy 7 remix, which should also help me move forward with my new mixing concept). Let's see what the future brings - especially in the last cozy days of this year and the coming year. ))
  7. A new, very interesting trailer for the second Final Fantasy 7 remake project - Final Fantasy 7: Rebirth - has surfaced from the depths of the internet. Among other things, it contains:
- a new theme song for the second part of the remake
- the introduction of further characters like Dio (the owner of the Gold Saucer amusement park), the good ol' Bugenhagen, Dyne (Barret's best friend, worker buddy and the biological father of Marlene), Vincent Valentine and Cid Highwind
- the reintroduction of a few famous Avalanche members who were thought to be dead - maybe they still are and are just wandering through the lifestream, as hinted at in the novel "The Maiden Who Travels the Planet" by Benny Matsuyama, published in the ultimate Final Fantasy 7 game guide "Final Fantasy 7: Ultimania Omega" - whoever wants to listen to an audiobook version of this novel can check out this link for a complete, good-quality version originally provided by thelifestream.net: https://www.youtube.com/watch?v=fm3pymQPaU8
- some further summons like Titan, Phoenix and maybe some kind of smaller Bahamut
- the introduction of the sometimes really weird and humorous theater performance in the Gold Saucer, which will probably be staged a little more seriously, bigger and more professionally in the remake, plus a few more romantic moments between Cloud and Aerith
So, have fun and enjoy the newest trailer material: ... Beyond that, there was also an official TGS presentation of Final Fantasy 7: Rebirth, where some familiar and new mini-games for the upcoming remake were presented, e.g.:
- an improved version of the fluffy Mog House game (around 1:01:20)
- a new card game, which is apparently called "Queen's Blood" (around 1:02:06)
- and - for all the ambitious composers here - an enhanced version of the piano mini-game from the original Final Fantasy 7 (at 1:03:32 and even more around 1:06:34)
Here's the big TGS presentation video:
  8. I do it in a similar way. I keep the bass in mono or very close to mono, opening it up by around 1 to 5% towards full stereo - that way the bass sounds less stiff, static and monotone, and it has at least some room to move, especially if I'm using a really dry bass. And for acoustic drums, I often use a greater stereo width of between around 20 and 50%. That's the big question. Guess I'm still afraid of losing some low-end audio information instead of embracing high-end sound quality in my mixes. But listening to some soundtracks from the 50s calms me down a little bit, because the soundtracks back then didn't have too much low-end content at all: It might have much more to do with the recording technology and the consumer audio playback devices of those days. But even if you listen to the soundtracks of the 50s with today's audio equipment, they still sound really fresh, clean, well-mixed, highly dynamic, very controlled in the low and mid ranges and still pretty complete in the frequency spectrum. ... In my remix of the Baywatch opening, for example, I really wanted some mighty low-end rumble in the industrial percussion section at the beginning and towards the end of the soundtrack. Unfortunately, the latest remix version of this track, which I uploaded many years ago, was still mixed on my old studio monitors (where I couldn't really hear or evaluate what was going on in the low-end and low-mid sections) without the application of the new mixing concept I've developed over the last two years: I've already started working on an improved mix that meets my current standards and mixing skills, where I even use a low-cut filter on the mighty industrial percussion and other instruments in order to clean up the low-end and mid sections in the track. But it might take a while, because I also want to enhance the piano composition in this track and I'm mainly working on 2 or 3 other music projects at the moment.
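As a side note, stereo-width percentages like the ones above map naturally onto simple mid/side scaling. A minimal sketch, assuming width 0.0 means mono and 1.0 means the unchanged stereo image (how a given DAW's width knob is calibrated may differ):

```python
import numpy as np

def set_stereo_width(left, right, width):
    """Scale stereo width via mid/side: width 0.0 = mono, 1.0 = original.
    (Values above 1.0 would widen beyond the original image.)"""
    mid = 0.5 * (left + right)    # what both channels share
    side = 0.5 * (left - right)   # what differs between the channels
    side = side * width
    return mid + side, mid - side

rng = np.random.default_rng(1)
L = rng.standard_normal(1000)
R = rng.standard_normal(1000)

# Bass: almost mono, e.g. 5 % of the original width
bass_L, bass_R = set_stereo_width(L, R, 0.05)

# Sanity checks: width 0 collapses to mono, width 1 leaves the signal untouched
mono_L, mono_R = set_stereo_width(L, R, 0.0)
same_L, same_R = set_stereo_width(L, R, 1.0)
assert np.allclose(mono_L, mono_R)
assert np.allclose(same_L, L) and np.allclose(same_R, R)
```

The nice property of this formulation is that the mid (mono-compatible) content is never touched - only the channel difference shrinks, which is exactly what you want for a bass that should stay solid in the center.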
At the moment, I might be lucky and be able to continue working on my mixing and composing projects during the week between winter service assignments and some work on the construction sites. Yeah, lower frequencies like those from bass and kick drum often sound most effective and powerful in the center of the stereo panorama. But there are also interesting exceptions to this rule. Just have a look at the famous soundtrack "Stand by Me" by Ben E. King: In this track, you have the bass far on the left side, together with a shaker and triangle as a nice contrast in the higher frequency section. On the right side of the stereo panorama in this track, you have a lower strings section, a humming choir and a higher strings section with a violin (if I hear it correctly) as a contrast in the higher frequency section again. But the center of the stereo panorama seems to be mainly reserved for the singer's voice in this case. Kinda unusual mixing concept with the bass panned to the side - but pretty effective in this track. I guess you're talking about working with aux sends for VST plugin effects such as reverb, which allow you to process the plugin effect separately from the main signal (e.g. EQing only the reverb without affecting the main signal of the instrument, synthesizer, voice, etc.) - in contrast to working with direct VST-based plugin inserts, where the entire signal chain including the source signal gets processed. The thing is, I'm really used to working with direct plugin inserts or the integrated effects of the VST instrument itself (where I also like the much better, faster and more accurate regulation of several settings based on much more conceivable parameters and values) 'cause I never really got aux sends to work properly in my DAW.
Whenever I tried to create an aux send and turned it on in a track where I wanted to use that effect, the DSP (the internal digital signal processor of my DAW) would suddenly go over 100% and cause huge instabilities, nasty sound artefacts or even crash-like dropouts. This was very unusual, because with direct plugin inserts the DSP barely reaches the 50% performance mark even in my biggest, most complex music projects - and I usually work with raw, unbounced/unfrozen MIDI tracks (which need much more DSP/CPU performance, but make it totally uncomplicated to change something in the composition while listening, mixing and editing the track). Until a few weeks ago, I had never really found out what was causing this issue - and I own a really good computer with an Intel i7 6700 processor, 32 GB of DDR4 RAM, a decent UR44 audio interface, a top-notch DAW named Samplitude Pro X4 Suite, and more than enough free disk space. But then I found out that I had messed up one single setting in my DAW - the number of processor cores that should be used to process my DAW tasks. I had set 8 cores in my DAW because I thought that my i7 6700 really had 8 cores - but it only has 4 cores (which I may have confused with its 8 threads). After changing the setting to 4 cores, I was finally able to use my first aux sends for separately processed effects plugins with smooth DSP performance and no further issues in my DAW. I could have also used my Origami convolution reverb from my Independence Pro FX plugin library in Samplitude. This Origami reverb plugin also includes a 4-band parametric EQ that affects just the reverb - unfortunately, it only comes with a low shelf filter, two band-pass filters and a high shelf filter without a clear graphic display, instead of providing a nice low-cut filter, several peak filters and a high-cut filter with a clear graphical interface.
It looks like this: But with the finally functioning possibility of working with plugin-based aux effect sends, I may be able to enhance the sound quality of my mixing concept even further. I probably won't use EQed aux sends on the main instruments in the upper frequency range (if the frequency of an instrument including reverb clashes there with the frequency of another instrument including reverb, it could make sense to EQ the whole signal chain directly in order to get a cleaner mix, or - if just the reverb is the problem - to drastically reduce the reverb or replace it with some nice ping-pong delay effects). But for the instruments with the lowest frequencies in the track - like bass and bass-heavy drum elements with stronger reverb - that don't have to compete with other instruments from even lower frequency ranges, it could really be useful to filter out just the long-reverberating low-end reverb clouds (which often sound like dull, undefined mud on ordinary consumer speaker systems) from the mix, while maintaining the power and assertiveness of the main signals of the bass and lower drum elements. … Since I'm currently working on a new mix (based on my new mixing concept) for my Crisis Core: Final Fantasy 7 remix called "Wings Of Freedom", I could try out a few things and provide you with sound clips from different mixing approaches - especially the old version, the new version (based on my new mixing concept), the new version with an additional master low-cut filter, and the new version with an additional master low-cut filter plus some aux reverb sends with low-cut filters for crucial instruments. As long as winter doesn't give me its legendary white-out ultra finisher with unexpected masses of snow these days (as I have already mentioned, I also work in winter maintenance during the cold season), I'll upload a few audio samples for you soon. ))
  9. @Nase Thanks for sharing your information and experiences. )) I guess I've already made a decision. If I take all the essentials into consideration - especially the assumed quality, the sound and sound stability across different playing styles and articulations, the pickups and possibilities of internal sound control, and then the really stylish design - my choice will most likely be the Ibanez GRG140. I really fell in love with the sound signature of this Japanese masterpiece: But I will still wait a few months to save some money without scratching my minimum reserve. I just had to spend another 300 bucks (pretty annoying, I know) on a huge set of robust, durable, water-repellent workwear for winter service 'cause I don't want to freeze my ass off, and I'd like to have at least some fun in the snowy nights. But with the hefty winter service surcharges of just under 20% to around 50% (depending on whether night work or work on Sundays or public holidays is involved in addition to the simple winter service surcharge) on top of the already decent hourly wage, the money for the electric guitar should be recouped quite quickly. ... Besides, no superiors were harmed or used as punching bags in this company. Maybe I just pissed them off so much, or those nutty gossips were just scared that they'd catch a good knockout punch if they didn't behave, that they just sent me back to the building sites. Luckily, it was actually totally in my interest, because the workers and supervisors there aren't quite so bitchy 'n' scratchy - and I can snore the night away for a good hour longer. Guess the magical trio of nasty supervisors consisting of the cold witch, the prickly besom and the way too warm gossip gay lord won't be bothering me for a while. ))
  10. At the moment I'm listening to some soundtracks from the 50s, 80s and - of course - lots of cool video game remixes. I just found a pretty rad remix of Gambit's theme from the X-Men series:
  11. This post is about my year-long journey around mixing, following my ambition to bring the really good dynamic mixes of the 80s back into this world, or even surpass them. My motivation to revisit this journey in depth actually began with a rather trivial issue around the question of whether master track EQs are useful for mixing (something I would tend to deny nowadays - I now really rely on individual EQs for each track in the mix, without exception) - and it started as follows: ... "Today, I had a listen to some older and newer mixes of mine on the rather ordinary consumer hi-fi system at my mom's home while I was setting up a shelf at her place. Of course the newest mixes I'm currently working on sounded way better than the older mixes - but even there I could hear a very small amount of unwanted low-end clutter which I didn't notice when mixing all the stuff with my Yamaha MSP 3 studio monitors (frequency range from 65 Hz to 22000 Hz), my Fostex subwoofer (which extends the low-frequency representation down to around 40 Hz) and my Beyerdynamic DT 880 Pro headphones (with their comprehensive frequency range from around 5 to 35000 Hz). I could also hear a bit of low-end mud in soundtracks from professional bands known all over the world. So I thought it might rather have to do with the hi-fi system itself and its own (raised but less defined) bass representation. But I'm still not sure, since the soundtracks from famous bands - especially those from the 80s - mostly sounded like they were still on a higher level of perfection in music production. And I think it's mostly because of the cleaner low-end. Up until this point, I've always separated the other instruments from the bass and drums in the mix with various individual low-cut filter settings, while often allowing bass and/or drums to pass relatively freely down the frequency range.
But over time I've been thinking more and more about cutting even the deeper low-end frequencies of instruments like bass and drums (which usually cover the lowest frequency ranges in the whole mix unchallenged) with a very steep low-cut filter below around 20 Hz, in order to remove this almost inaudible low-end jumble (which you can usually still perceive on rather bass-emphasized hi-fi systems or even some kitchen radios) as far as possible, especially for playback on less capable devices. I'm already thinking about doing this directly with a master EQ plugin, simply because it might save a lot of time and you don't have to care about phase issues (in marked contrast to using steep-edged low-cut filters on standard tracks which share a similar frequency range - if my information about phase problems at this point is correct so far). ... In this case, I could leave out the steep-edged EQ low-cut filter for the low-end frequencies of the bass and the drum elements completely (the softer, less aggressive low-cut filters of the other instruments, used for a better separation of frequencies between different instruments, should still do their usual job, of course) - and the steep-edged EQ low-cut filter in the master EQ plugin would do the whole job instead, without causing any phase issues. So, in the EQ plugin interface it might look like this: With the option Modus (mode), I could also set the phase line (green) from Normal (normal) to Linearphasig (linear-phase) - this would shift the phase of the whole frequency range and not just the phase of the cut lower frequency range. But it shouldn't make a really big difference on the master track at all, except for a few (not even perceptible) milliseconds of latency in (parts of) the playback, maybe.
I rather ask myself what the value of 18 Hz in the frequency field is supposed to mean in this case (the drop in the frequency range caused by the steep-edged low-cut filter with 36 dB per octave already seems to begin slightly at around 70 Hz - at 18 Hz the drop is around 10 dB, and at 10 Hz the drop is already around 27 dB) - but maybe it's more useful for EQ peak adjustments than for low-cut filter settings." ... And this relatively minor issue began to evolve over time into far more interesting topics with the aim of profoundly improving the mix, which raised questions such as these:
1) Is it useful to filter out the bottom of the low-end frequency spectrum completely?
2) If yes, at which frequency would you start to radically filter out the bottom end (with a drop of at least 10 dB) - at 40 Hz, 30 Hz or 20 Hz?
3) Would you do it directly with a master EQ plugin as in my example - or would you be afraid of losing some crucial audio information (maybe some playing noises or reverb/delay effects of a contrabass, an electric bass, a kick drum or a lower key of a concert grand)?
4) What are your favourite methods of cleaning up the bottom end, the (lower) mids and the other frequencies in the mix (or the mix as a whole)?
5) What can you achieve with filtered aux effect sends in the mix?
6) How can you use the panning of your instruments and sound signals in order to clean up your mix?
7) How can you use different stereo widths for instruments and other sound sources to sustainably improve the mix?
If you are interested in these topics, then join me on my journey to get a much deeper understanding of mixing in order to create some of the best mixes ever. I will present my mixing journey and sound experiments with plenty of text about my experiences as well as with meaningful images and audio material from the current remix projects I'm working on. I would be delighted if you would take an active part in the discussion. )) ...
PS: Since the topic became more extensive and more useful than expected, I changed the title from "Steep-edged low-cut filter on the master track as a general solution for solving low-end clutter issues in the mix?" to "Cleaning up the mix - with single & master track EQs, EQed aux effect sends, smart panning decisions, specific stereo widths for different instruments and other methods".
  12. A little question for the guitarists and electric guitar experts here... I would like to buy my first electric guitar in the foreseeable future (around spring 2024) and had already found my real favorite in the Japanese Yamaha Pacifica 212...

...

...until I stumbled upon the Ibanez electric guitars (also a well-known Japanese electric guitar series with a really cool, stylish design), whose design really appeals to me and which have a really heavy, rather Fender-typical sound profile for racy leads and loose, funky chords - while the Yamaha Pacifica models have a warmer, fuller sound suitable for strong, massive leads without large accompanying instrumentation, or for powerful, soulful chords.

With the guitar's internal controls as well as the downstream amp and EQ settings, you can of course shape pretty much anything your heart desires in terms of sound - so for me it's more important that the basic sound, which is generated primarily by the pickups and guitar materials, doesn't sound somehow fuzzy, washed out, dissonant or off, as is often the case with some entry-level electric guitars in the far lower price ranges. That is also one of the many reasons why I absolutely favor a Japanese electric guitar model. There you usually get solid top technology, durability and a decent amount of kaizen-like perfection. And in contrast to many Western models, you get the most for your hard-earned money.

...
Here are the two models with the associated data and listening samples at various settings that made my shortlist:

1) Ibanez Gio GRG140 with the ultra-stylish design in white
--------------------------------------------------------------------------
https://www.thomann.de/gb/ibanez_grg140_wh.htm

or

2) Yamaha Pacifica 212 in translucent black (a bit more expensive, but with a very pleasing sound)
--------------------------------------------------------------------------------------------------------------------
https://www.thomann.de/gb/yamaha_pacifica_212v_fm_tbl.htm

...

I don't want to rule out the possibility of buying the other model at some point in the future in order to use both sound characteristics for specific music projects or combine them with each other. But basically I want to learn to play on a rock-solid, flawless and yet affordable electric guitar first.

...

My general idea was to feed the electric guitar signal directly into my DAW Samplitude Pro X4 Suite via the audio interface (in my case the Steinberg UR44 from Yamaha) for later recordings, and then shape the exact guitar sound individually with the guitar and bass amp plugin Vandal as I need it for the particular recording - just like the person did in this case here:

...

But since this will be my first guitar, I will have to learn how to play it properly. And that's guaranteed to take a few years, even though I'm already somewhat familiar with how guitars are played through basic theoretical music and instrument knowledge and through using guitar VSTis for composing my soundtracks and remixes. I've also played a bit on an electric guitar and bass in the music store - and that was really fun. I think that the fluent coordination of the fingering techniques will be the biggest chunk of work.

...
It's possible that the whole thing could drag on for a while, since I might end up crushing a couple of sneaky superiors of my company in some real fights - and that could be accompanied by my own dismissal and maybe a smaller financial dry spell. But somehow I don't really feel like tolerating that kind of superior any longer - the kind who obviously had a big share in the fact that a few of the more sympathetic colleagues were spontaneously dismissed or already left of their own accord, and who, apart from racist statements, Nazi-speak, gossip, insults, humiliation, gloating over the dismissal of several colleagues, bullying and psychological mobbing of especially friendly, sympathetic employees (they even picked on a really nice apprentice who broke down in tears after she had been informed of her sudden dismissal - until I stopped their nasty bitch 'n' snitch talk and turned it into some very long hours of deadly silence after incidentally bringing up a story from my past where I threatened a superior with force), shone above all through standing around, gossiping and having their little parties during working time - stuff like that. It would kinda break my heart and offend my sense of justice to see these little suckers getting away with all this, or even grinning at me with their smug but still overrated and not-always-as-safe-as-they-think "as-superiors-we-can-do-as-we-please" attitude. According to my very own full-contact fighting experiences on the street, smug people like these don't even smile anymore after a hard knee strike hits their face and they literally kiss 'n' bleed the streets. But that's a completely different story, and with a little luck it might even be a juicy inspiration for another soundtrack.

...

Nevertheless, I still wanted to get some feedback from guitarists and connoisseurs about the electric guitars in advance, so I can make a solid decision about buying my first electric guitar in due time. ))
  13. There are also some really interesting gameplay demos of the upcoming Final Fantasy 7: Rebirth, where you can see how some of the new gameplay elements work. And you get a small glimpse of the excellent graphics and visual effects, the convincing sound effects and the phenomenal soundtracks. The video footage contains only a few story spoilers, but they should be from somewhere near the beginning of this pretty exciting-looking game:

(I really had to laugh when Aerith and Tifa said they saw similarities between Cloud and a Chocobo.)

(Controlling Sephiroth on the battlefield or fighting alongside him during Cloud's Nibelheim flashback in Kalm is quite a feature, I'd say.)

...

In the meantime, I'm going to play Crisis Core: Final Fantasy 7 Reunion (which is just as well done and, in contrast to the original for the PSP, has a few more small features to offer besides even groovier gameplay, many revised, well-done soundtracks and graphical excesses) together with a good ol' friend. I'm also working on a major update of my Crisis Core remix "Wings Of Freedom", mainly to show what the dynamic mixing concept I've developed over the last few years and my Yamaha MSP studio monitors are capable of in terms of sound quality and the spatiality of the mix. So, I'm really looking forward to pulling out all the stops and showing off radically as soon as the mix is ready. ))
  14. And secondly, the bigger news with highly anticipated FF7 remake content. Of course, I'm talking about the latest trailer for Final Fantasy 7: Rebirth, the second part of Square Enix's huge FF7 remake project. Since this trailer shows quite a lot of familiar stuff from the original game in a pretty impressive next-gen look, I'll just summarize the trailer's content with the following bullet points:

- military parade in Junon
- the Cosmo Canyon area
- a region with shipwrecks and a lighthouse (could be a location from Before Crisis and Crisis Core, where the Turk Cissnei tries to stop Zack at a beach area with a lighthouse)
- a more tropical region (could be the area around Mideel where Cloud and Tifa fell into the Lifestream)
- riding the famous buggy (after the incident in the Corel prison)
- riding special Chocobos (climbing mountains with a brown/black one and flying with a blue one)
- new means of transportation like a Segway personal transporter
- Sephiroth clones
- introduction of more characters (besides the already known Yuffie from the FF7 Remake Intergrade version, there are also some first impressions of Cait Sith and Vincent Valentine)
- the huge (and quite deadly looking) snake Midgar Zolom
- Cloud and Sephiroth fight together and start some kind of team special move against a boss-like creature that is probably the Materia Keeper (this must be during the Nibelheim scene from Cloud's Soldier memories, when he was sent to his hometown with Zack, Sephiroth and another Soldier infantryman to inspect the Mako reactor in the Mt. Nibel region due to an outbreak of violent creatures)
- summons like Alexander and Odin (the bull-like creature could be the summon Kujata/Kjata)
- Gold Saucer with some familiar games and challenges to complete there (including the Chocobo race, the virtual fighting game with the rock-paper-scissors system, and the motorcycle mini-game)
- first look at the "Weapons", the giant guardian creatures that protect the planet from great dangers and harmful creatures like Jenova, some of which were among the most powerful (optional) bosses in the original game

It is said that this second remake installment alone will have an estimated gameplay time of a hundred hours (the first part of the remake was around 40 hours) and that it will go up to a point in the story with a very tragic incident in the Forgotten City. From the very beginning, I had a bit of a feeling that with the FF7 remake project the game developers intend to set similarly massive new standards and superlatives in video game history as they did back in the day with the original game for the PlayStation. It's really exciting to see what kind of milestone-like video game grenade will be dropped here. ))
  15. Two more interesting pieces of news about the Compilation of Final Fantasy 7 - one smaller and one bigger. First, the smaller one. It's a short in-game scene with an FMV sequence from Final Fantasy 7: Ever Crisis, in which the younger Sephiroth shows up in a fight to help 3 Shinra soldiers (maybe those are the guys he was just worried about in the FF7: Ever Crisis trailer above) and demonstrates his incredible destructive power even at that young age. This could be one of many reasons why he is referred to as the "golden child" in this game.
  16. @crustcate Because the first part of the FF7 remake is already a really deep and interesting gaming experience, I'm sure that the FF7 staff at Square Enix won't screw up the following parts of the remake either. Since this is a really huge and passionate project, I'm sure they'll even try to step things up in a big way.

...

Besides, there's a new trailer for Final Fantasy 7: Ever Crisis that shows some impressions of the younger Sephiroth and his early ambitions. If you have played "Crisis Core: Final Fantasy 7 (Reunion)", you might know that he wasn't always the cruel villain and maniacal madman. He rather had a caring attitude towards people he liked or even respected for various reasons. I'm not sure who the kind people are that he talks about and seems to worry about - maybe just an inexperienced Shinra unit, maybe some good-natured villagers. But the samurai-like or mentor-like woman he's talking to seems to be another important figure in the Final Fantasy 7 universe. It might even be someone from whom he received his legendary Masamune sword.
  17. I don't have any listening or mixing experience with Samson studio monitors - but the other studio monitors (I guess you mean some >>> KRK <<< studio monitors) are quite decent mid-range studio monitors (which tend to overemphasize the bass a bit too much in my opinion - so, if these models are 5-inch-woofer models or even larger ones, you might have to think about acoustic treatment in your studio room).

For the studio headphones... If you really want to go for some AKG K271 MKII studio headphones, they are reduced by around 40 % at the Thomann store (just choose the right language/currency and save the settings afterwards):

https://www.thomann.de/de/akg_k271_mkii.htm

For AKG models they are not bad - they seem to have some very pronounced mids (which is quite rare in the frequency responses of studio equipment - still much better than pronounced bass), but there seem to be some bigger jumps in the frequency response (around 20 dB within the audible frequency range):

https://www.headphonecheck.com/test/akg-k271-mkii/

(Check out the frequency response on the right side under "Measurement Results", click on the picture with the graph and choose "Frequency response: Detail" afterwards.)

If you want to get some AKG (the company was re-established as Austrian Audio in 2017) studio headphones with a more linear frequency response for more accurate and precise mixing, I would rather go for the AKG K701 or the AKG K702:

https://www.headphonecheck.com/test/akg-k701/
https://www.headphonecheck.com/test/akg-k702/

...

I still have to check out some AKG models in the future. But of all the studio headphones I've tested until now, my favourites have always been the Beyerdynamic DT 880 Pro (if you get the Black Edition, buy some silver ear pads and attach them instead of the black ear pads for an even flatter frequency response).
According to my personal listening experience, no other studio headphone models could beat their flat frequency response, their relaxed and natural sound, their amazing audio definition or the high precision of their staging/panorama/depth. I also use them for watching movies because it always feels like you are sitting in a big iSense cinema or like you are right in the middle of the action.
  18. And here, in addition to the FF7: Ever Crisis trailer, finally comes the big one - a new trailer for the highly anticipated second part of the big remake of Final Fantasy 7 (Final Fantasy 7: Rebirth):

The open world looks really amazing and makes a more realistic impression with the huge Mako lines around the Midgar region, which gradually drain the life energy for the Shinra cities. The combat system seems to be even more elaborate this time. There are also team special moves like in the enhanced Intergrade version of the first remake part, a new battle display element below the ATB bars, and you can fight in really uneven terrain with hills etc. for the first time - it seems to be a really complex and way more interactive thing.

I almost had to laugh heartily at the last statement: "On 2 discs." So, the 2nd part alone seems to be shaping up to be massive. I'm not quite sure where the storyline of the second part of the remake might end, but I already have one or two quiet ideas about it.

...

I guess I'm really going to have to work some overtime this year, away from my auspicious and really compelling 4-day work week, to acquire a soon-to-be-released PS5 Pro next year alongside my long-awaited Yamaha Pacifica 212 electric guitar for this year. ))
  19. I just saw a new interesting trailer for the upcoming Final Fantasy 7: Ever Crisis (EC being probably the last and final part of the FF7 spin-offs after AC, BC, CC, DC - you probably know which titles are meant if you are familiar with the FF7 universe) and want to share it with you:

It really seems that Final Fantasy 7: Ever Crisis will be a game title that retells the entire Final Fantasy 7 universe through all the currently known main parts and spin-off titles of the series, but in a graphic style similar to (just improved over) the original game, with the big-head mode in the overworld or dungeon view and more realistic graphics in the battle mode and in the FMV sequences. There also seems to be a huge (side) mission-based mode for the players, like in Crisis Core. And as it seems, you might also get some more information about the Final Fantasy 7 universe and characters like Sephiroth.

Does anyone have any idea who the guy with the materia-enhanced axe might be, forcing Sephiroth into battle and claiming to be a hero (maybe Angeal or a genetic copy of him, or an early high-level Avalanche member with a leadership position similar to Elfe's, who could hold his own in a fight against the legendary Shinra Soldier)?
  20. In my DAW, Samplitude Pro X4 Suite, it's possible to show your MIDI stuff as regular notation (editing notes is not possible in this mode) in the MIDI editor. It looks like this:

But it's also possible to show your MIDI stuff as piano-roll-synchronized notation (editing notes right in the notation view is possible there) in addition to the standard piano roll in the MIDI editor. And it looks like this:

But yeah, I'm so used to the piano roll (for me, it has a better and clearer logical structure for an intuitive understanding of what's going on in the track - and it's much easier to work with) that I wouldn't even dare to use the note editing features in the notation view.
  21. My edited text from one of my former postings might be interesting for Woody mC, who checked my remix on his surround speaker setup, in order to find out more about the compatibility of my mix created with the 2-channel surround feature in connection with stereo and surround speaker setups:

...

"(Edit: Unfortunately, this pretty amazing 2-channel surround feature was only in my DAW Samplitude up to version Samplitude Pro X4 (Suite). I, along with another person, asked the developer why they did not keep this feature in later versions of this DAW. They answered that this feature is not supposed to be compatible with newer surround formats. But they have already started working on a new version of this feature, which should also include a modern solution for binaural listening. However, the guy couldn't tell me why the developers removed this feature from the newer DAW versions, even though the new version of this 2-channel surround feature hasn't even been developed and implemented yet. Until this problem is solved, I will probably continue to work with Samplitude Pro X4 Suite for the time being, in order to establish my new mixing concept, which also uses this promising 2-channel surround feature as a precise visual audio tool for even more clarity and accuracy in the mix.)"

...

But maybe you (Woody) can respond to the rest of my post from May 1, especially regarding how my former mixes like the Star Tropics remix sound on your speaker system in comparison to my newer mixing concept with the 2-channel surround feature demonstrated in my newest Goldfinger remix,...

...or whether at least the WAV file of my Goldfinger remix gave any new results for the compatibility with your surround speaker system.
  22. @Nase Dude, just by the cultural input I got and admired, I guess I'm rather a Soviet-Japanese sunshine devotee with Muslim drinking habits and the really raw eating habits of an indigenous tribesman. I'm really not the typical German dude. I kinda fought for my 4-day working week (30 hours a week) and for my holy, 3-day-long weekends - and I'm fuckin' proud of it. I simply love it like a mighty pigeon loves cooing, snacking through the city and radically shittin' on your car window + balcony:

...

@Woody mC I'm looking forward to it. )) But no need to rush. Today, I might be totally fit 'n' sane, even after just around 3 hours of sleep during the night and a day of mostly physical work. But I can't promise that my body won't retaliate and radically freeze my brain power over the next few days, forcing me to sleep instead of allowing me to follow promising content on OCR.
  23. First of all,...

...a HUGE thanks for the amazing, extremely detailed response and the load of commitment you've put into it - this was much more than I expected and contains quite some in-depth knowledge. ))

In addition to my request, you also checked the dearVR stuff in my posting before for surround sound compatibility. With your response, I have some sort of certainty, or at least a good hint, that the obviously lacking surround sound compatibility of the tested sources has to do either with the plugin tools themselves or with the audio/video format and/or the streaming platform. Or could it be that it only works with a 5-channel surround speaker setup at maximum (instead of a 7-channel surround speaker setup)?

...

Sorry for the late reaction to your extremely fast response. But during the week, I'm one of those screwed "wake-up-at-4.15-am-for-his-job" dudes, so there's not too much going on in the evening hours after work anymore. I guess the video games of my early childhood days are radically trolling me these days:

...

Nope, it also happens with a completely dry VSTi signal without any other effects in the 2-channel surround mode.

...

Maybe I'll give you some further screenshots from the English version of the digital manual within my DAW, concerning the 2-channel surround mode:

...

I also tried to load my latest Goldfinger remix DAW project with a true surround setup instead of a stereo setup in the project. But this messes up everything - true surround doesn't even seem to be compatible with the 2-channel surround panning, because the 2-channel surround mode turns into the normal surround mode, and this sounds completely off (as if the dry signal suddenly contained lots of reverb and stuff like that - especially towards the center channel - so you wouldn't even be able to mix a dry bass in the center of the mixing panorama). The whole mix sounds as if it was totally washed out (also when listening to the mix with studio headphones).
...

Haha, thanks. Even though the 2-channel surround mode doesn't support a true surround speaker setup, I'm kind of glad it doesn't support one at all rather than doing a totally crappy translation from a stereo mix setup to a true surround speaker setup. But to get some more certainty about this one, I'll also send you a directly exported audio file of my Goldfinger remix within the next few days. I might have some MP3 versions (with audio bitrates of 192 and 320 kbit/s) of this track on my PC, but since I always keep my DAW project files, I could easily create a WAV file of this remix.

...

Sounds good. )) The trumpet (panorama at around 9:30) and the sax (panorama at around 2:30) are mixed with a normal stereo panorama without using the 2-channel surround mode. I mostly keep it this way with lead instruments and lead signals which are supposed to play directly in the front. So, the better clarity of the mix after using a visual and precise audio tool like the 2-channel surround mode seems to pay off at least.

Maybe you can give some small feedback on one of my former remixes which I did not mix with the 2-channel surround mode (just with a normal stereo panorama mixing setup) as a comparison (especially in things like clarity and the spatial impression of depth)? Since you seem to be an organ fan, let's take this one as an older reference track from my remix list:

...

Dang, you must have some kind of attentive rabbit ears or something like that. I guess you're speaking of the Vita Power Guitar kicking in at around 3:05 in my Goldfinger remix, which you can hear even more clearly at around 3:25. It's not a whole chord (just a simple sequence of single notes with some variations) and it's not exactly playing D - A - B - F# - G - D - G - A (it's rather D - A - B - F# - G - D - A - D), but you were really close with your super rabbit organist ears:

...

Damn, gotta go to sleep or the working day tomorrow will finish me off. C ya! ))
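By the way, clock-face pan positions like 9:30 and 2:30 ultimately come down to a pan law. Here's a minimal sketch of the common -3 dB constant-power pan law in plain Python - just an illustration of the general principle (the function name is my own, and Samplitude's actual pan-law setting may well differ):

```python
import math

def constant_power_pan(pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain). Because left^2 + right^2 is always 1,
    the perceived loudness stays constant while a source moves across
    the panorama (the classic -3 dB-at-center pan law)."""
    angle = (pan + 1.0) * math.pi / 4.0  # maps pan to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# e.g. a trumpet panned somewhat left, a sax somewhat right:
trumpet_l, trumpet_r = constant_power_pan(-0.5)
sax_l, sax_r = constant_power_pan(0.5)
```

At center (pan = 0.0) both gains come out as about 0.707, i.e. -3 dB per channel, which is exactly why this law is named after that figure.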
  24. If I need hall-like but also highly assertive sound sources - like lead guitars - in a track, I tend to use echoing delay effects rather than reverb. This makes the sound source stronger in the mix - but it still conveys some kind of room impression, especially if there's already enough reverb from other sources in your mix.

I wouldn't go so far as to eliminate reverb effects completely from your tracks, like some composers and audio engineers obviously do. But I would be really careful with reverb, because even a little too much of it can ruin the clarity and listening experience of the entire track. I try to use only one or two sound sources in the mix with a bigger reverb like a concert hall convolution reverb (if it fits the track and the genre), while the other tracks in the mix get a more subliminal recording studio reverb (like drums or rhythm guitars) or even no reverb (often the bass in my case, or maybe even lead guitars with just some echoing delay effects).

...

For separating the tracks and placing them more accurately in the room, I often use a 2-channel surround feature of my DAW Samplitude. It's a pretty cool visual tool that lets you place mono and stereo sound sources in a surround environment, and it encodes the surround information into a standard stereo signal. According to the manual, it also retains surround information for real surround speaker setups (which just reminded me to ask someone on OCR who uses a surround speaker setup and who might be able to verify whether that's true, or how the track translates between stereo and surround setups).

Just to give you an impression of the interface of the 2-channel surround feature and how I worked with this tool in my Goldfinger remix...

1) Here you can see how I used it for the kinda jazzy clean electric guitars with a great stereo width, which play just a little bit behind the front, only left and right, leaving out the center and the other channels:

...
2) Here you can see how I used it in connection with the completely dry bass, kinda centered between the front and rear surround channels, almost fully mono, but I opened the signal just a little bit towards stereo width to give the already dry bass at least a little space to move, roam and articulate more freely:

...

3) For separating the bass from the drums in the mix, I placed the drums (all 3 drum tracks) a little bit behind the bass, added a little recording studio reverb and opened the stereo width up to around 50 % (if mono were 0 % and true stereo were 100 %). So, the drum kit with all frequencies from low to high - together with a well-separated bass - dominates the inner circle around the center, while all the other instruments with their frequencies rather dominate the outer circle of the 2-channel surround field:

...

4) And one last example... Some electric guitar power chords, only enhanced with some stereo delay effects, are placed much further back in the rear field on the right side (on the mirrored left side is another powerful guitar playing some more bass-like rhythms) with a very small stereo width, only affecting the right front channel a little bit and mostly the right surround channel in the back (also to avoid frequency clashes with other instruments that play more in the front and center field).

...

I also uploaded a short video on OCR in which you can hear how the sound changes when I move a stereo sound source in the 2-channel surround field (my post from April 17, 2023):

https://ocremix.org/community/topic/49135-creating-a-realistic-impression-of-depth-in-stereo-mixes/

Maybe you can give me a little feedback with your impression of this visual 2-channel surround tool. I also used an additional positioner tool, a nice feature included in a convolution reverb plugin, in the uploaded demonstration video.

...

I also got inspired by your idea to use plugins with different microphone settings to create a greater feeling of distance.
But probably the most similar thing I could find in the huge internal plugin collection of my DAW was a plugin called Mic Modeler in my Independence Pro Premium Suite. There you can use different kinds of microphones in connection with settings like omnidirectional/directional, close, far, farther etc. But I can't really hear a real difference between the different distance settings of a specific microphone - it might be a plugin for something else.

...

On the other hand, I have a really nice microphone control feature within my bass and guitar amp plugin Vandal with which I can really feel an impact on the impression of distance.

...

But I'm not so sure whether a bass and guitar amp plugin (even with undistorted, clean settings) is the optimal VST plugin for using microphone settings on usually non-amped instruments like trumpets, pianos, flutes and drums. ;D I'm not even sure whether the original signal of these instruments remains as clean and untouched in a bass and guitar amp as I would hope. Even if I turn off all the other parts of the amp (except the one with the microphone feature), it still sounds a little bit different. This mainly might have to do with the fact that the microphone feature (right side) is directly connected to the cabinet simulation (left side). If you want to switch the cabinet off, the microphone feature is turned off as well. But even if you turn off both, the audio signal with the amp plugin is still not identical to the original audio signal without the amp plugin.

...

The mystical late-night hours of the dead dreamer ears. Yeah... after waking up, it sometimes really feels like you were mixing like this the night before:
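The "echoing delay instead of reverb" approach from earlier in this post boils down to a feedback delay line. Here's a minimal sketch of one in plain Python - a generic illustration of the technique, not how Samplitude's or Vandal's delay effects are actually implemented:

```python
def echo_delay(samples, sample_rate, delay_ms=350.0, feedback=0.4, mix=0.5):
    """Feedback echo: each repeat comes back 'feedback' times quieter
    than the previous one. Keep feedback below 1.0, otherwise the
    echoes build up instead of dying away."""
    d = max(1, int(sample_rate * delay_ms / 1000.0))
    line = [0.0] * len(samples)  # the input plus its own echoes
    out = []
    for n, x in enumerate(samples):
        echoed = feedback * line[n - d] if n >= d else 0.0
        line[n] = x + echoed
        out.append(x + mix * echoed)  # dry signal plus the echo tail
    return out
```

Because every repeat is a clean, discrete copy of the dry signal instead of a dense diffuse tail, the source keeps its attack and assertiveness in the mix while still suggesting a room - which is exactly the trade-off described above.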
  25. Visual tools for creating a realistic impression of depth in stereo mixes
----------------------------------------------------------------------------------------

Some time ago, I recorded a little video with some drum VSTi stuff in connection with some visual tools of my DAW Samplitude Pro X4 Suite with which I can create a feeling of depth in stereo mixes - I'm not sure which one is the more suitable tool, but maybe you have some ideas about this topic. Make sure to watch the video below in full-screen mode:

Visual Tools For Creating A Realistic Impression Of Depth In Stereo Mixes.mp4

...

1) The first one (on the left side) is a 2-channel surround mode which uses the individually placed sound sources (mono or stereo) in connection with 5 simulated surround channels - but it writes all the surround information into a standard 2-channel signal. So, you can set up a virtual surround stage with some impression of depth on a normal 2-channel studio setup. And - according to the manual - the surround information encoded in the stereo signal of the exported track should always be fully compatible with corresponding surround speaker setups.

I'm not fully sure what exactly it does to the parameters that are important for the impression of depth. But I think it does at least solid gain staging and a pretty nice, visually comprehensible 5-channel separation (left, right, center, left surround, right surround) to create a cleaner and more tiered mix (with a really precise visual presentation of the stereo width in the context of the channels) within a more three-dimensional audio environment. Maybe it is based on some quite realistic microphone algorithms or on EQ adaptations which change with the movement of the sound sources in the simulated surround field. But I'm not sure if this tool makes any changes to reverb and delay - at least I can't hear many changes in these two parameters there.
And what I also don't really understand is the phenomenon of the small left and right volume changes when you move a sound source parallel to or vertically along the center channel up (to the front) and down (to the back) - so, even if you move your sound sources perfectly vertically down from the center into the lower surround channel field, the volume seems to shift more to the left side. I don't know if there are some particularities with the pan law in a 2-channel surround configuration - but maybe somebody of you has an idea about this phenomenon. At the moment, I just use the separate channel settings feature in the top right corner to balance the volume of the left/right channels by a few dB if the sound sources I placed more in the surround background tend to get louder or quieter on one side.

I have used this 2-channel surround feature since my last Goldfinger remix, and I also want to work with it in the future as a major part of my new mixing concept. Although it's a rather dry mix with rather small recording studio reverb settings instead of huge, cathedral-like reverbs in direct comparison to my previous remixes, I think that the 2-channel surround mode produced some really decent results for my ambition to create an impression of depth in there - almost as if you were standing right in front of a street band.

(Edit: Unfortunately, this pretty amazing 2-channel surround feature was only in my DAW Samplitude up to version Samplitude Pro X4 (Suite). I, along with another person, asked the developer why they did not keep this feature in later versions of this DAW. They answered that this feature is not supposed to be compatible with newer surround formats. But they have already started working on a new version of this feature, which should also include a modern solution for binaural listening.
However, they couldn't tell me why the developers removed the feature from the newer DAW versions even though its successor hasn't been developed and implemented yet. Until this is resolved, I will probably keep working with Samplitude Pro X4 Suite for the time being, in order to establish my new mixing concept, which relies on this promising 2-channel surround feature as a precise visual audio tool for more clarity and accuracy in the mix.)

...

2) The second tool, which I only found some time ago in the Origami convolution reverb section of my Independence Pro Premium Suite (part of my DAW Samplitude Pro X4 Suite), and which is easily overlooked on the complex interface, is a positioner feature on the reverb plugin's interface (shown on the right side around the second half of the drum recording video). With this tool you can't really create a nice channel separation for the individual tracks (sound sources) of your project. But you can get a pretty realistic impression of depth for the reverb by placing the sound source (or maybe the listener) at different positions in a sort of virtual room with individual characteristics like room size.

I actually think the impression of depth with this second tool is even bigger, or more realistic, than with the first one. But I'm not sure how it would work out across a whole mix - I'm a bit afraid that it creates too much subtle sound information, which might clutter the mix and kill its definition (as reverb effects in general tend to do pretty quickly in a complex mix with a large number of tracks). And I'm also not really sure whether it makes sense to use both tools simultaneously on the same VSTi, synth, instrument etc.
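On why a positioner like this can sound convincingly deep: one classic distance cue is the direct-to-reverberant ratio. In an idealized room, the direct sound falls off roughly 6 dB per doubling of distance (the 1/r law), while the diffuse reverb level stays roughly constant - so the wet signal dominates more and more as a source moves away. A small numeric sketch (idealized free-field assumption on my part, not the Origami algorithm):

```python
import math

def direct_level_db(distance_m: float, ref_db: float = 0.0) -> float:
    """Direct-sound level relative to ref_db at 1 m, following the 1/r law
    (about -6 dB per doubling of distance)."""
    return ref_db - 20 * math.log10(distance_m)

# Assume a roughly constant diffuse reverb level in the room:
reverb_db = -12.0
for d in (1, 2, 4, 8):
    drr = direct_level_db(d) - reverb_db  # direct-to-reverberant ratio in dB
    print(f"{d} m: direct {direct_level_db(d):6.1f} dB, D/R ratio {drr:6.1f} dB")
```

Past the point where the ratio crosses 0 dB (here around 4 m), the reverb is louder than the direct sound, which is exactly the "far away" impression the positioner seems to exploit.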
- because in that case you would have to work with two different visual tools at the same time, and mentally combine two visual interfaces into one you can't actually see before making the next steps for the next tracks: an annoying, time-consuming and less accurate procedure I want to avoid if I already have a decent visual tool for creating an impression of depth in a stereo mix.

...

But what's your opinion on these visual tools in the context of creating a realistic feeling of depth in a soundtrack or another kind of audio program?