Posts posted by Moseph

  1. I'm not a big RPG player, but I feel as though many of the games labeled as "RPGs," including the majority of the JRPG sub-genre, don't really fit. For me, the key lies in those first two letters: role-playing. When I think of the original D&D archetype all of these games are based on, it's not just about rolling a D20 in an encounter or leveling skills on your character sheet; it's about actually fleshing out a character and putting something of yourself into them. Look at the great BioWare titles like Star Wars: KOTOR and Mass Effect, where you decide most of how your character interacts with the world around them, even down to their gender and relationship preferences; that's pretty much the definition of taking a role and making it your own. Then compare it to something like Chrono Trigger: even though it's a beloved classic that I enjoy very much myself, there's little to nothing I can do to affect how Crono behaves as a character, and the overall story is extremely linear. Those are two extremely different philosophies of game design, and I think that labeling them both as "role-playing" is something of a disservice.

    So then what should we call games like the mainline Final Fantasies as an alternative? Hmm..."stat-based battling adventure games"? Okay, no, that's pretty terrible, but you get the idea.

    The thing about video game RPGs, though, is that compared to pen-and-paper RPGs, flexibility and interactivity are practically nonexistent in their storytelling -- even in sophisticated games such as BioWare's. The pen-and-paper model both allows you to do literally anything (within the scope of what the GM will allow) and is inherently social, with the decisions of other real people affecting the way your character develops. Video games can't be anywhere near this flexible; as such, I would classify Chrono Trigger and BioWare games as being much more similar to each other than you would, with BioWare's games simply being a more sophisticated attempt to capture what pen-and-paper RPGs do naturally.

    I read an interesting description of the origins of computer RPGs (and I wish I could remember where) that talked about the early split between stat-/battle-oriented Final Fantasy/dungeon-crawler games and description-/puzzle-focused games such as Zork and King's Quest as being a result of attempts to capture different aspects of the role-playing experience found in traditional pen-and-paper RPGs, since computers couldn't adequately recreate the entire experience.

  2. Yeah, I can't think of any other way to do it with a single audio-out from the rompler. To get a single-pass multitrack recording, I think you'd pretty much need either 16 separate outputs or a USB connection with support for multitrack audio busing.

    If the samples are mono and the output is stereo, though, you could hard pan the tracks and do two at a time.
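
    If it helps, here's a minimal sketch of the second half of that trick -- splitting the single hard-panned stereo recording back into two mono track files. This assumes Python with the soundfile library, and all the file names are made up:

```python
# Split one hard-panned stereo capture into two mono track files.
# Assumes two mono rompler parts were panned hard left/right before recording.
import soundfile as sf

stereo, sr = sf.read("rompler_pass_01.wav")      # shape: (frames, 2)
sf.write("track_left.wav", stereo[:, 0], sr)     # left channel  = first part
sf.write("track_right.wav", stereo[:, 1], sr)    # right channel = second part
```

    (Most DAWs can also split a stereo clip to dual mono on import, so a script like this is only worthwhile for batch work outside the DAW.)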

  3. Sounds like expression is recorded as MIDI CC data (probably on CC 11, because CC 11 is traditionally referred to as "expression"). Most DAWs treat CC data differently from track automation even though, broadly speaking, they have similar purposes. The overlap can be confusing and exists because the MIDI standard requires DAWs to be able to deal with CC data, but CC data isn't necessarily ideal for doing all the automation tasks that a DAW needs to be able to do. Think of CC data as automation for MIDI devices/VSTs rather than for Cubase itself and you won't be too far off.

    Most of the time, CC data can be shown in the piano roll editor or, with some setup, in the track view where normal automation typically appears. I can't give specific directions because I'm not familiar with Cubase.

    To fix the problem you're having, you need to put a CC data point of an appropriate value at the beginning of the track to overwrite whatever old CC data is stored from the previous playback (unlike normal automation, CC states change only when a data point is encountered by the playhead, so playing back at arbitrary points can retain incorrect values until a data point is encountered). Some DAWs have an option you can set that will look backwards for the most recent CC value behind the playhead and apply that when playback starts, so if that's the case with Cubase, this data point at the start should allow playback to begin properly at any point before the weirdness happens and not just from the start.
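
    If you'd ever rather patch exported MIDI files outside the DAW, here's a minimal sketch of that same "data point at the start" fix using Python and the mido library. The CC number, value, channel, and file names are assumptions -- match them to your actual setup:

```python
# Prepend a CC 11 (expression) event to each track so playback always starts
# from a known expression value instead of whatever CC state was left over.
import mido

mid = mido.MidiFile("mockup.mid")  # hypothetical input file
for track in mid.tracks:
    reset = mido.Message("control_change", channel=0, control=11, value=100, time=0)
    track.insert(0, reset)  # delta time of 0 leaves the timing of later events untouched
mid.save("mockup_with_cc_reset.mid")
```

    Inside Cubase itself, the equivalent is simply drawing a CC 11 point at the very start of the part in a controller lane.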

    Regarding file structure, I can give an example from my own workflow of why you might not want to package your audio files as part of the project file. When doing heavy-duty orchestral mockups, I split everything into multiple project files on an instrument or sectional basis, bounce WAV stems from these projects to a folder, and assemble the stems in a final project file where I actually mix things. I sometimes pull stems from this folder into the isolated instrument project files just to have reference material from other instruments, and I sometimes find problems in my stems while mixing that require me to rebounce the stems in question from their respective projects. Because the stems exist outside any of the project files, overwriting one of them will change the audio for all of the project files that use that stem -- I don't need to re-import the stem or anything; I just need to change the WAV file. This makes my convoluted multi-project-file workflow much easier and also saves disk space by not having a bunch of copies of the same WAV files in multiple projects.

  4. I've not used either, but from looking over the product pages, the big differences between the two seem to be that the Roland is synth-action and the Akai is semi-weighted, the Roland has a single stick-style mod controller while the Akai has the traditional mod wheel plus pitch bend wheel, the Akai has pressure-sensitive drum pads, and the Akai retails for $100 more.

    It looks to me like the Akai would be a better deal, although you'd definitely want to test the keys' action at the shop to see which keyboard feels better to you.

    The Yamaha is probably only a contender if you want a hardware synth instead of just a controller.

  5. You should consider going through the entire vocal track and automating its gain by hand to smooth out any obvious level differences. This should allow you to glue everything together with the compression a little more easily. The compressor will then mostly be tightening up transients and such rather than fixing major level changes, so you can be more deliberate about how you set it.

  6. Got a marketing email from Aria, mostly about how the 40% off intro price is valid only for another 24 hours, but at the end it says, "Updates for the first violins will also be sent out soon, with release samples, and an even more even and refined legato interval system."

    I'm very pleased that they're doing maintenance updates, especially since the addition of release samples wasn't something I ever expected to see in the library. We'll see if the update resolves any of the issues I've encountered with the library.

  7. If you pay closer attention to 0:03 on "All mics", there's an odd note on the right speaker. Is that a library issue?

    The note in question is spiccato layered with legato. The thing in the right speaker is noise in one of the round robin samples in the lower dynamic layer for that spiccato note. One of the other RR samples for that note also has the noise, but it's quieter and tuned to the note (as heard in the reverb version). So for that spiccato note at the lower dynamic, you get your pick of loud noise, quiet noise, or glitched room mic. (I say "your pick," but not really, since there's no RR reset.)

    EDIT: The noise seems to be audible mostly in the room mic, and it can be substantially reduced with little change to the overall sound by high-passing. High-passing specifically the room mic, that is, which requires the rather complicated routing mentioned in my previous post, since that's the only way to isolate the mic outputs.
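
    For anyone who wants to apply the same fix destructively to a bounced room-mic stem instead of with an EQ insert, a quick sketch -- Python with soundfile and SciPy; the 120 Hz cutoff and the file names are my own guesses, not values taken from the library:

```python
# High-pass a bounced room-mic stem to tame the low-frequency noise
# in the flawed spiccato round-robin samples.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("spiccato_room_mic.wav")  # float samples, shape (frames, channels)
sos = butter(4, 120, btype="highpass", fs=sr, output="sos")  # 4th-order, 120 Hz cutoff
filtered = sosfilt(sos, audio, axis=0)  # filter along the time axis
sf.write("spiccato_room_mic_hp.wav", filtered, sr)
```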

  8. So here are my thoughts/observations about the library in no particular order:

    Not a good download experience. Aria uses a third-party ecommerce provider, Digital Goods Store, to handle downloads, and Digital Goods Store allows three download attempts that must be made within 24 hours from when the link is first clicked -- the most ludicrously restrictive download terms I've ever seen for a sample library -- and advises you not to use a download manager. I have trouble getting stable downloads without a manager, so my first attempt spontaneously failed, my second attempt succeeded but had errors that made one of the files not unzip, and my third attempt (for which I did use a download manager set to make only one connection to the server) succeeded without errors but had to be preceded by a support request because the download link had expired. On the bright side, customer support (which was provided by Aria, not Digital Goods Store) was prompt and helpful -- and on a Sunday, no less. On the subject of download managers: the reason you're not supposed to use them on limited-attempt downloads like this is that they sometimes make multiple connections to the download server which the server interprets as multiple download attempts. Any decent download manager will allow you to restrict the number of connections to one, though, and if you do this, using a manager should be fine. (But does Digital Goods Store care enough to actually explain this? Haha, of course not!)

    I can't tell a big difference between the sound of the fingered and bowed legato. The transitions sound generally pretty good, although in some instances there is a slight change of timbre between the transition and the sustain. Very high pitches at high dynamic levels, for example, have more noise from the bow hair in the sustains than in the transitions. This sort of thing is a fairly common issue in legato transitions, though -- for example, I've run into timbre problems in Hollywood Strings with legato transitions between notes with no vibrato -- and it comes from the fact that the transitions themselves are never sampled as deeply as the sustains. The sound breaks up in really fast passages, which is also typical of legato articulations, so for fast passages you're better off using shorts.

    Aside from the sustain legato articulation, which has four dynamic layers, and the pizzicato, which has three (the loudest layer is Bartok pizz.), the patches have only one or two dynamic layers. Notes are sampled in whole steps (that is, each sample is used for two notes) rather than chromatically. It seems like the library focuses on breadth of articulations more than depth of sampling.

    The instrument interface is limited, and there is no user's manual or readme. You get mic level/solo controls, buttons to load/unload individual mics, and pan/width controls for the close mic. None of these controls have numerical indications of any kind. The vertical mic level sliders are controlled by dragging horizontally, which is peculiar. There is no way to reset round robins, and there is no control over the level of the legato transitions.

    Mic routing is inconvenient. I mix my mic positions in the DAW rather than in Kontakt, and LSS Violins isn't able to route mic positions to separate Kontakt outputs from a single Kontakt instrument. This means that I have to create four instances of each articulation -- one for each mic -- so I can route each instance to an individual Kontakt output. Additionally, the legato transitions (which exist independently of the mic positions) are triggered in a given instance unless one of its mics is solo'd, so I have to make sure that all but one of these instances solo their respective mic positions -- otherwise, the legato transitions will stack from all the instances and be too loud. It's all a little convoluted, but it at least works, and I won't have to set it up again now that it's a template. (And if anyone wants my Kontakt multis for this, I'd be happy to share them.)

    The library construction strikes me as sloppy. A substantial number of the legato transitions in the molto sul ponticello legato articulation have clicks where the transition changes to a sustain. Some of the room mic samples for the spiccato articulation have audio glitches in them -- you can hear one of these in the room mic version of Example 1. The spiccato articulation also does 3x round robins using only two samples in a way that makes half of the pitches repeat the same sample twice in a row. There are two instances in the sordino spiccato articulation, which is supposed to be 4x RR, where a pitch repeats the exact same sample four times. I found three instances in the martele articulation (supposed to be 2x RR) where pitches that should have round robin variants don't. I also found an audio glitch in one of the SFX notes, and the start time on one of the samples for the violin concerto articulation is far too long.

    There are no trill or tremolo articulations, but I like that there are three short articulation lengths (martele, spiccato, and colle). I find that having access to a variety of shorts is extremely important for realistic mockups, because complicated fast lines often sound dull and static if you use only one type of articulation.

    I like the glissando articulation, which gives you a recorded glissando of variable speed up or down between any two notes. I expect it will be difficult to get exactly the correct speed for any given use of it since it's controlled by velocity and always pretty slow, but I haven't seen this in any other library, and I've deliberately avoided writing string glissandos because I've had no way to make them sound good. So this particular articulation could prove extremely useful.

    Overall? The library lacks polish, as can be seen in the very basic UI and in the numerous problems with the sample editing. The advantage here over existing low-tier string libraries is that this is being sold a la carte, so you don't have to drop several hundred dollars on a full string ensemble if all you need is violins. There's also the true glissando, which AFAIK no other library has right now. I'll likely use this library mostly for layering, and I think it will work well for that, problems notwithstanding. Fortunately, most of the problems could be fixed by updates from the devs, so I'll probably open a support ticket on some of the particularly egregious stuff like the bugged room mic samples.

    (All examples below are normalized. Separate mic examples do not represent their respective levels within the mic mix examples. Levels within mic mixes are not the library defaults.)

    Example 1

    MIDI file

    Separate mics:

    Close

    Main

    Rigs

    Room (you can hear one of the spiccato room mic glitches mentioned above at 0:03)

    Mic mix:

    All mics (no room mic glitch at 0:03, because it's in a RR sample that didn't get triggered on this export)

    All mics with reverb/slight compression/analog saturation (Also no glitch)

    Example 2 (Legato only. Each segment is played twice -- first is fingered legato, second is bowed legato)

    A word on mic positions and legato: The transition samples exist independently of the mic positions. They are automatically mixed in when nothing is solo'd within the instrument and left out when something is solo'd. They sound a little odd when used with any single mic besides the main mic, and are obviously designed to be used in the context of a full mic mix. So if you want, for example, close mics only with legato transitions, the result really doesn't sound good.

    There's a click in the fifth note of this example. This happens consistently with this particular fingered transition leading from that particular note into this particular note at this particular dynamic layer -- it's a glitch that I found by freak chance, in other words.

    MIDI file

    Separate mics:

    Close

    Main

    Rigs

    Room

    Mic mix:

    All mics

    All mics with reverb/slight compression/analog saturation

  9. Well, my interest is sufficiently piqued, and $45 is just cheap enough to still be in impulse-buy range for me, so ...

    GO GO GADGET WALLET

    I'll play with the library some over the weekend and post whatever examples I can whip up -- providing mic separation and MIDI source files, of course.

    Judging from the full $75 price of this library, my estimate is that if/when they complete the full strings collection (I'm assuming five sections) the whole thing will be priced at around $300-$350 and be around 20 GB. This would put it in the same market space as a bunch of other low/mid-tier string libraries (Adagietto, LASS Lite, Hollywood Strings Gold, etc.), so that's more or less the context I'll be evaluating it in.

  10. I think the longs sound good enough, but the transitions between notes seem too long and a bit "sucky," like it sucks in instead of a stroke. https://soundcloud.com/aria-sounds/adagio-sordino-legato-naked at the high parts (0:41), or maybe I just haven't heard isolated high strings enough to know what it sounds like.

    I kinda like the harshness, actually. Smart of you to look at the YouTube video to get a better feel for the timbre, and comparing the demo to the original was also something I didn't think of.

    The point about programming in the demos is tricky; you'd think they'd put out very polished demos made by someone who knows how to get the best sound, but like you said, you'd need to use the library to tell.

    Yeah, I'm not sure how a sample company generally goes about planning its music demos. I know for some, the library creators make them, and some have go-to composers they work with all the time, and I've seen some companies recruiting demo composers on places like VI-Control. But the demos they end up with don't always put the product in the best light. For example, last time I checked, the demos for EWQL Symphonic Choirs were not very good at all. The library is capable of much better than what EastWest has used to advertise it. I expect one of the big problems is that the people who program libraries aren't always the best at using them, and the people who would be the best at using them generally have to produce something so quickly to hit demo deadlines that they don't necessarily have time to learn all the ins and outs of the libraries they've just been handed. Which would especially be a problem on heinously complicated libraries such as Symphonic Choirs.

    I think I hear the sucking sound you're talking about, and I think it's caused by the repeated crescendos -- the sucking sound happens when it goes from a loud dynamic level immediately to a soft one as the note changes. I don't think it's a property of the note transitions themselves, but rather a result of the way the dynamic levels are automated, and I don't think it's necessarily an unrealistic effect from a mockup standpoint.

  11. I kinda wish the naked demos actually didn't have the reverb, just so the essential character of the violins is more evident, but oh well. Based on what I CAN hear from the naked demos, this library is... about what I would expect for $45 USD. It's not bad but it's not great either. The Symphony No. 25 demo doesn't sound convincing to me in the first 7 seconds. The notes don't really connect too well. If you want a violin sample library (solo and ensemble), I would suggest "Friedlander Violin" for $125. It has controllable portamento speed, vibrato speed, vibrato mix, vibrato type (the latter three NOT available in most libraries), slur, bow switch legato, and tremolo, among other features.

    Info page: http://www.embertone.com/instruments/friedlanderviolinupgrade15.php

    Demos: https://soundcloud.com/embertone/sets/friedlander-violin

    EDIT: There's a YouTube video demonstrating some exaggerated glissandos that gives a better feel for the library's general timbre. The sustains do indeed sound a lot like the VSL strings to me -- very dry and a little harsh, although there are apparently four mic positions and I would guess that it's the close mics used in this video.

  12. Many of the samples were edited from different free sources and hence it is a free soundfont, thus not meant to redistribute as a commercial soundfont or use it for any commercial purpose. Other than that, you can sell any music and so on you have made with it, it's up to you.

    This is the issue that both MockingQuantum and I are talking about. The only thing said about the status of the rights to the samples in the Soundfont is that the samples used in it were from "free sources." It's not clear what "free" means in this context. Are the samples public domain? Are they copyrighted but under licenses that permit general use by anyone at all? It's worth noting that in the rest of the Soundfont documentation, "free" is contrasted with "commercial" to indicate things that do not need to be paid for vs. things that do need to be paid for. If we take that to be the sense in which the terms are used here in the license, then "free source" would mean simply that the samples didn't need to be paid for. Is the Soundfont author assuming that they're okay to use because they didn't have to be purchased?

    From a legal perspective, these distinctions matter -- that something is from a "free source" doesn't tell us anything about who has a right to do what with it. The fact that none of this is clear in the license itself as quoted above makes me suspect that the person who made the Soundfont doesn't necessarily understand the relevant legal aspects, which does not inspire confidence that the license is valid.

  13. It's your call, really. You're basically taking their word that the collection respects the unspecified license agreements for unspecified materials belonging to unspecified people. If it were the word of an established sample company targeting a commercial market segment, I'd assume it was true. When, as in this case, it's the word of a random internet person whose identity is unknown and whose level of understanding of the related legal issues is also unknown, I have no idea whether it's true or not.

    Personally, I would not use the Soundfont to do commercial work because of the lack of clarity in the license details for the samples. That said, I also think it's unlikely that anyone would sue you over it. (Though obviously I am not a lawyer and this should not be considered legal advice.)

    For what it's worth, if you have Kontakt, someone has converted the Soundfont to Kontakt format.

    EDIT: To clarify, if someone represents themselves as outright owning the samples in a free Soundfont and extends a license to you, I'm inclined to take it at face value. The issue here is that the Soundfont maker acknowledges that the Soundfont includes samples that they don't own and claims that the samples' licenses allow their use in the Soundfont but doesn't give any info (which samples, who they belong to, etc.) that would enable you to verify that claim. For anyone interested, the Soundfont writeup I'm getting this from is here: http://www.synthfont.com/punbb/viewtopic.php?id=167.

  14. i guess my original question is can i hook the thing that makes it glow and work audio and mic from my laptop?

    As far as I can tell, yes, you should be able to do this.

    The headphones come with a standard 3.5mm headphone jack, so assuming your DDJ-ERGO has a 3.5mm port (or a 1/4 inch port if you have the converter from 3.5mm to 1/4in), then yes, you can use these headphones with your DDJ-ERGO, but it will be a wired connection, not wireless.

    You don't need the USB connection except for charging (like charging a phone); the headphones run on a rechargeable battery that charges over USB.

    In other words, buy the headphones, then buy this, and you're done.

    My understanding from the user reviews is that the lights only turn on if you're using the headphones wirelessly, and if the lights are the whole point of getting these headphones as opposed to something else, then the unlighted wired connection isn't much use except as a backup.

    Contradiction, a couple of other considerations:

    1) Are you okay with these being wireless? I'm not a DJ so I can't speak in real-world terms, but it seems like there's more that could potentially go wrong on a gig with wireless headphones (wireless interference, dead/malfunctioning battery, problems with the transmitter, etc.). Since there's the backup wired connection, it might not be a showstopper, but it's something to consider.

    2) Some of the user reviews complained about build-quality issues.

    3) It's not clear how well these headphones block outside noise. (I'm assuming that for DJs, the more isolation the better.)

  15. Purchased; thanks for posting this.

    This is the second indie sample dev I've seen have to curtail its operations as a direct result of the new VAT legislation. I hope things get straightened out in a way that eases the burdens of compliance for small businesses.

    (The other is Nucleus Soundlab, which as of Jan 1st is suspending sales to EU customers until further notice.)

  16. It's not entirely clear from the Amazon description how these work, but from what I can piece together from the reviews and answered questions, it looks like it will operate wirelessly as long as the dongle is plugged into both a powered USB port and an audio source. It apparently also comes with a standard 3.5 mm headphone cable so it can be used non-wirelessly, but doesn't light up unless you use it wirelessly.

  17. Maybe I'm alone here, but I think being able to write proper four-part chorales with "correct" voice-leading is one of the more important skills a composer can develop. Not because you need to write chorales all the time or because you need to voice-lead all the time, but because the principles involved in keeping the voices independent, in balancing doubled tones in chords, in thinking about the resolution of dissonance, etc. can be applied piecemeal to great effect even in other contexts. But this, of course, requires abstracting the underlying principles from the textbook rules and knowing what goals to value in a more abstract and modern context, which is beyond the scope of most introductory theory courses.

    Speaking from my experience grading student exercises, the "sounds good" stipulation exists to encourage students to pay attention to the big picture aspects of what they're writing. I was always surprised by how many students never thought even to sit down at a piano and play the exercise they'd just completed to see what it actually sounded like. This lack of concern for overall cohesion tends to produce things such as disjointed soprano lines, harmonies whose contexts aren't quite right, and voices that slowly wander into the wrong range. And students often notice these things about their work, too, if you play it for them, because it doesn't sound idiomatic.

    But I agree that music theory curricula tend not to make much of a distinction between writing exercises and doing "actual" composition. This is probably because music theory is generally taught by theorists and not by composers.

  18. I think we're mostly talking about crossfading across velocity layers via CC, yes. Or I am, at least.
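
    For anyone unfamiliar with the technique, here's a tiny sketch of the idea -- an equal-power crossfade between two dynamic layers driven by a CC value. The two-layer setup and the cosine/sine curve are illustrative assumptions; actual libraries differ in how many layers they blend and what curve they use:

```python
import math

def layer_gains(cc_value: int) -> tuple:
    """Return (soft_layer_gain, loud_layer_gain) for a CC value in 0-127."""
    x = max(0, min(127, cc_value)) / 127.0
    # Equal-power curve keeps perceived loudness roughly constant across the fade.
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

print(layer_gains(0))    # (1.0, 0.0)  -> soft layer only
print(layer_gains(64))   # roughly equal mix of both layers
print(layer_gains(127))  # (~0.0, 1.0) -> loud layer only
```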

    IMO the problem with physically-modeled instruments is that, like I mentioned above, most MIDI controllers are not good enough at CC recording to get results that are accurate enough to drive physical models that require complicated inputs (such as stringed instruments). Traditional MIDI controllers are a big obstacle to effective physical modeling, and I think the market for complicated physically-modeled instruments will remain pretty small until MIDI controllers catch up with the software.
