Lunahorum Posted July 6, 2008

I was sort of curious about this stuff today, so I tried to find a site that would play both versions of the same file: http://mp3ornot.com/ I would have liked to hear the 128 vs the original WAV, as I could not tell the difference between the 128 and 320. I don't know what to listen for. When it gets down to around 32kbps (I did a little testing of my own), I can hear a noticeable difference for sure: the lower bitrate sounds "crumbly" or "telephone-like", I guess. I also read that MP3s of any bitrate cut off sound above 16khz. So does anyone know of an MP3 vs FLAC or WAV listening test? It's not really a big deal at all, I was just curious...

Edit: If it makes any difference, I got the answer right on the website, but even after reviewing the correct one against the incorrect one, I couldn't hear the difference. Knowing which one is higher quality leads me to believe I am hearing things, but I can't pinpoint what the difference is, if there is any.
Moseph Posted July 6, 2008

Foobar2000 has a double-blind testing utility for audio comparisons if you're interested in running a test with your own audio files. The speakers/headphones/soundcard you use can all impact how easily you hear a difference.
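For the statistically minded: a quick way to judge a double-blind run like foobar2000's ABX test is the binomial tail probability, i.e. the chance of scoring at least that many correct by pure guessing. A minimal sketch in plain Python (this is just the math, not foobar2000's actual code):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    ABX attempts by pure guessing (each guess is 50/50)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 9 out of 10 correct is very unlikely to be luck:
print(abx_p_value(9, 10))   # ~0.0107 -- strong evidence you hear a difference
# 6 out of 10 is entirely consistent with guessing:
print(abx_p_value(6, 10))   # ~0.377
```

The usual rule of thumb is to decide the number of trials before you start and look for a probability under 0.05 before claiming you can hear the difference.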
zircon Posted July 6, 2008

I got it right. I can hear the difference between 128 and 320, but it definitely depends on the tonal material. Some stuff makes 128kbps sound awful.
OverCoat Posted July 6, 2008

I think 20kbps is as high as you need to go
Rozovian Posted July 6, 2008

I got the answer right too. The lower-quality version sounded a little rough around the edges; I hope the JPEG-compression analogy makes sense to you.
analoq Posted July 6, 2008

"I would have liked to hear the 128 vs the original wave as I could not tell the difference between the 128 and 320."

It wouldn't have made a difference with WAV or FLAC. At 256kbps or more the compression is effectively imperceptible.

"even after reviewing the correct one with the incorrect one, I couldn't hear the difference."

I don't know if all the examples are the same, but for the one I listened to, the percussion (not the drums) in the background was the give-away. It lacked clarity in the high frequencies and the transients sounded dulled. cheers.
Sole Signal Posted July 6, 2008

Yeah, there was a very slight "hissing" noticeable in the 128kbps one (especially when Pavarotti goes way up). I imagine it would be harder to detect through speakers than through headphones.
Fishy Posted July 6, 2008

I got it wrong after one listen of each, but I think it's a terrible example. Bitrate matters most in heavier/fuller soundscapes, not in something very quiet. While mastering one of my rock remixes, I realised I could tell the difference between the 160kbps and 192kbps mixdowns I had made. For a quietly mastered opera, it won't make nearly as much difference as it would for some loud techno or rock.
analoq Posted July 6, 2008

It's not a terrible example. Understand that the opera piece is not "quieter" than techno/rock; it's more dynamic. Lightly-mastered music can be more affected by perceptual encoding, because more frequencies can be discarded. High-frequency content is the benchmark, not "loudness", and their example provides enough of that to discriminate by. The difference was very clear to me.
Fishy Posted July 6, 2008

I can hear minor changes in bitrate in "brickwall" dynamics, but apparently I can't hear a huge one in this sort of thing. There's got to be a reason for that, even if I can't explain it. Maybe my hearing is damaged and I just can't hear high frequencies. (Kinda related: I know a girl who can't hear a bass guitar when you play it.)
Skrypnyk Posted July 6, 2008

I got it wrong as well. I too blame the source, though. If the source was, say, jazzy drums (with rides and hi-hats), I could probably tell the difference.
The Vagrance Posted July 6, 2008

I got it right, but really the sample is a bad choice anyway. It's not that orchestral/opera pieces are bad for this sort of thing, but rather that this particular sample was too short and wasn't full enough. Like Skry said, something like jazz would've been a bigger contrast, along with a full chamber orchestra. Hell, most anything with a great dynamic range will do. That said, this was fairly easy to tell just by listening to the release of different instruments: you can hear more of the release in the 320 one.
dannthr Posted July 7, 2008

I listened on Sony MDR7506s and could easily tell the difference between the two samples. The second one obviously had bitrate compression artifacts.
SnappleMan Posted July 7, 2008

You're all so full of shit. The source is chosen specifically because it suffers the least from compression at 128kb. It's a trick experiment designed to get you to second-guess yourself. More than half of you who claimed you got it right most likely got it wrong and are just lying to make yourselves sound smarter and more adept at noticing "artifacts" than you actually are.
analoq Posted July 8, 2008

I didn't even need to listen to the second sample; I knew the first one was correct because it had no audible artifacts. I listened on Grado RS2s thru a Yamaha GO46 connected to a very quiet MacBook Pro. I've had my ears professionally tested: my left rolls off after 18.5khz, my right after 19.2khz. Not perfect, but I take care of what I've got. I own custom-molded musician earplugs that I take to concerts, clubs and even the cinema. I own an SPL meter to make sure I don't monitor at harmful levels. I can metaphorically break boards with my ears, kung-fu style. If you think you can match me you got two choices: Either step up or walk off, bitches. cheers.
SnappleMan Posted July 8, 2008

I use an SPL (semen pressure level) meter to make sure I don't accidentally drown your mom when nutting on her face.
analoq Posted July 8, 2008

Touché, touché.
dannthr Posted July 8, 2008

Here is a spectral analysis of the two versions. This sampling was taken by recording the source across an S/PDIF (Sony/Philips Digital Interface) recording interface set at 16-bit, 44.1khz. There was no D/A/D conversion whatsoever. The light blue is the 320kbps version, the dark blue is the 128kbps version.

As you can see, this graph represents the ideal audible range for human beings, though of course the actual audible range varies from person to person. If you are unfamiliar with Hz and pitch, you should know that pitch is logarithmic: each octave in the musical scale doubles the frequency of the one below it. So an octave above concert A (which is 440hz) is 880hz, and so on.

Anyway, back to the graph. What you'll notice is that, aside from the very ends of the spectrum, the 128kbps version maintains a reasonable amount of spectral detail. Apart from having fewer peaks and troughs, the two resemble each other well enough to be very convincing. This is actually really important, because I generally assume 320kbps to be overkill for most people's listening purposes.

However, if you look at the very top of the graph, you'll see the last little section ranging from 10khz to 22.05khz. This section, while numerically representing an entire HALF of the human audible range, is only the very last octave. The instruments you'll most likely find making significant appearances in that part of the hearing range are cymbals, various drums, and things like whistles or high resonant pitches--maybe some synths, etc. But despite being just the last octave, as you'll see on the graph, there is a NOT INSIGNIFICANT loss of detail between the 320kbps version and the 128kbps version.

But if it's such a big deal, why can't Snappleman hear the difference? Because he uses crappy monitors.
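The octave-doubling relationship dannthr describes is easy to verify numerically. A quick sketch (plain Python, frequencies in Hz):

```python
def octaves_above(base_hz: int, count: int) -> list:
    """Each octave doubles the frequency of the one below it."""
    return [base_hz * 2 ** i for i in range(count + 1)]

# Concert A and the next three octaves up:
print(octaves_above(440, 3))      # [440, 880, 1760, 3520]

# The "last octave" of hearing: 10khz doubled is already 20khz.
print(octaves_above(10000, 1))    # [10000, 20000]
```

This is also why 10khz-22khz can be half the spectrum numerically while being only one octave musically.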
The following is a frequency response measurement taken by HeadRoom, a consumer-reports agency for headphones. It is the frequency response of a conventional, piece-of-crap iPod ear-bud. What you'll note is that the iPod ear-bud loses detail at the high frequencies in much the same way the 128kbps version does. This is an important relationship, because 128kbps mp3 files are perfect for people who want to store a large volume of music in a small space--it's why we call it compression, and it's why Apple doesn't bother making better headphones. If your headphones lose the detail up there, it doesn't matter whether you had it in the first place.

However, for those of you who enjoy the listening experience, or are curious what those of us who claim to hear the artifacts are actually hearing, I have isolated the 10khz to 22.05khz frequencies in the recordings and put them in a wav file for your listening pleasure. To make sure you can hear the difference, I have dropped the pitch of the isolated parts by one and a half octaves. This brings the isolated content down into a more audible range, even for iPod budders, and because I used a non-destructive pitch shift with absolutely no time-shifting, it strips away, essentially, the similarities between the two versions, so you will more easily recognize the compression artifacting.

I suppose to be fair, I won't say which version is which, but I want you to listen for the artifacting that gives it away. Typically you'll hear a very high-pitched tinkling noise or metallic pulsing coinciding with higher-pitched, louder, or even just staccato sections.

Link: 10-20khz isolated, with 1.5 octave drop

You may also hear that the 320kbps version has a much more defined sound because it retains these high-pitched details. With the right equipment and a pair of analytical ears, you'll be able to spot these compression artifacts at regular speed and regular pitch.
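The band-isolation step dannthr describes can be approximated in a few lines with a brute-force FFT mask. A minimal sketch (NumPy assumed; this is not dannthr's actual tool, and his time-preserving pitch shift is omitted since pitch-shifting without time-stretching is considerably more involved):

```python
import numpy as np

def isolate_band(signal: np.ndarray, sample_rate: int, low_hz: float) -> np.ndarray:
    """Zero out everything below `low_hz`, keeping only the top of the spectrum."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    spectrum[freqs < low_hz] = 0          # discard content below the cutoff
    return np.fft.irfft(spectrum, n=len(signal))

# Demo: a 1khz tone plus a quieter 12khz tone; isolating above 10khz
# should leave only the 12khz component.
sr, n = 48000, 4800                        # 0.1 s; both tones land on exact FFT bins
t = np.arange(n) / sr
low_tone = np.sin(2 * np.pi * 1000 * t)
high_tone = 0.3 * np.sin(2 * np.pi * 12000 * t)
isolated = isolate_band(low_tone + high_tone, sr, 10000)
```

For reference, the 1.5-octave drop dannthr applied divides every frequency by 2**1.5 ≈ 2.83, so content near 20khz lands around 7khz, well inside comfortable hearing.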
This is important training for anyone who actually wants to go into mixing or mastering as well, as these kinds of ear skills are a necessary part of the discerning process. I hope that was enlightening. Cheers,
SnappleMan Posted July 8, 2008

</bluefox> I never said I couldn't hear the difference, dumbass.
Vivi22 Posted July 8, 2008

"I never said I couldn't hear the difference, dumbass."

Personally, I think you're just lying to sound smarter and more adept at noticing artifacts than you actually are.
Audity Posted July 9, 2008

You should provide some more analysis with that blue shit for other types of compression. VBR V0 versus CBR 320 particularly interests me. The easiest way I can tell whether a particular song needs a higher bitrate (without going all crazy with that graphing stuff) is by converting the WAV to FLAC and looking at the average kbps rate across the song. (Though I may be missing something here.) By that method, one could presumably see whether that opera example truly is a "terrible" one, by seeing how far the FLAC dips down. Hm, kinda makes me wish there was another FLAC-like format that wasn't as restrictive as MP3 VBR, yet was engineered with human-hearing limits in mind. ...Maybe that's called OGG! (Made by the same people, too.)

(As a random aside: recording a human voice to WAV, then converting to FLAC, leads to a pretty small size. But as soon as I add some effects to it in Reason [default reverb patch on full power], the FLAC size doubles.)

I got it right a while ago. I couldn't really tell the difference until the end. Though now that I think about it, I remember noticing a certain open-ness throughout the entire 320kbps opera clip. I wasn't sure whether that was a good or bad thing at the time, but it ended up being my backup reason for choosing the correct one. And with dann's WAV clip, it just makes more sense (though slowed down it sounds really awful). If anything here is true, it's that backing shit up as technically as possible sure beats just talking out of an ass. (Not saying I've done any good tech-talk here.) Unless the person desires to continue living a life of uncertainty.
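Audity's FLAC trick boils down to simple arithmetic: a lossless file's effective bitrate is its size in bits divided by the audio's duration, so a hard-to-compress (busy, noisy) song shows a higher average kbps. A minimal sketch in plain Python (the file names are made up for illustration):

```python
import os
import wave

def effective_bitrate_kbps(file_bytes: int, duration_s: float) -> float:
    """Average bitrate implied by a file's size and its play time."""
    return file_bytes * 8 / duration_s / 1000

def wav_duration_s(path: str) -> float:
    """Duration of a WAV file, read from its header."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

# Hypothetical usage: size of the FLAC over the duration of the source WAV.
# duration = wav_duration_s("opera.wav")
# print(effective_bitrate_kbps(os.path.getsize("opera.flac"), duration))
```

A sparse, dynamic piece like the opera clip would dip much lower than a dense rock mix, which is exactly the signal Audity is reading off the FLAC size.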
analoq Posted July 9, 2008

Unfortunately, the best way to tell whether you need to up the bitrate on a song is by listening to it. They call it 'perceptual encoding' for a reason: it works by perception. MP3s work by fooling your ears, and you can't fool the computer, so most kinds of audio analysis (e.g. dannthr's graphs) aren't very helpful. cheers.
dannthr Posted July 9, 2008

I actually did begin an analysis project. I started creating comparison graphs showing the detail loss from WAV to 320kbps to 192kbps to 128kbps. The experiment was a test to see whether there would be less loss with songs that were subjected to more processing during their production. My goal is to test the following song types:

Heavily processed orchestral film score
Lightly processed orchestral film score
Hardly processed classical string quartet
Modern jazz trio
Heavily distorted modern rock
Acoustic or mostly acoustic rock
Techno-rock fusion
Trance/dance (little to no distortion)
etc...

So far, I've only analyzed the first four, and it's only slightly confirmed my hypothesis. I agree that there are artifacts you will hear and dislike that won't show up in a graph. I've also found that mp3 compression tends to average and add sub-bass tones under 20hz, for some reason, which is not good for people who have subwoofers that can reproduce 5hz sub-bass tones. Cheers,
analoq Posted July 10, 2008

"I agree that there are artifacts that won't show up in a graph that you will hear and dislike."

You've got it backwards. Just because a spectral analysis shows loss does not mean there's an audible difference. Do an analysis of a 256kbit MP3: there will be loss compared to the original, but you will not hear the difference. MPEG Layer III represents audio perfectly (to the human ear, of course) at that bitrate.

"So far, I've only analyzed the first four, and it's only slightly confirmed my hypothesis."

Stop while you're ahead. If you're only using software analysis tools, then you're wasting time. Whether your hypothesis is right or wrong, it's useless information. More or less frequency loss does not equate to less or more accurate perception. Perception is what matters.

"I've also found that mp3 compression tends to average and add sub-bass tones under 20hz..."

Subwoofers are intended to reproduce the lowest octave of human hearing, i.e. 20-40hz. Most subwoofers will roll off anything below 20hz because our ears don't interpret it as a tone. So what's the problem with sub-20hz inaccuracy in MP3s? Answer: there is no problem. It didn't matter in the first place. I hope that was enlightening.
Lunahorum Posted July 10, 2008 (Author)

Highly agreed. Except my next-door neighbor's system has a sub-harmonic generator for sounds from 10hz to 20hz. It's this huge square box that rumbles, haha. It's awesome. I don't think anything around 20hz comes across as a tone either, though I could be wrong on that.