Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Dec 30, 2013 9:12:28 GMT
About a week or so back, I saw a solitary report in Computer Audiophile about the modem affecting the SQ of a rip. For several days now, I have been comparing new rips made without the modem cable plugged into the PC with other fairly recent rips. Although the differences weren't huge, there was a definite improvement with rips made without the modem cable plugged in. Earlier today Chris did a couple of rips, with and without the modem plugged in, using Windows XP in Safe Mode, and sent them to me as uncompressed Zips via Filemail. The names used gave no clue as to which version was which. Chris may wish to follow up with the rest of the story? Alex
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Dec 30, 2013 13:22:08 GMT
Chris shall indeed follow up. To me this was quite a logical thing to try: if isolating power supplies helps the quality of rips, along with many other enhancements that have been well documented here and on other forums, then why not at least test whether a connected and running modem could affect the quality of a rip? These other enhancements have confirmations from a good many members, some of whom are in positions of really knowing their onions: EEs, sound engineers and even the odd computer expert! (The main dissenters seem to be self-proclaimed "experts" who are unwilling even to experiment with these findings.)

To my chagrin I am still battling with a very old XP machine, so any benefits of using faster, more efficient modern chips and mobo, along with an improved operating system (e.g. Win8), are beyond me for now. Add to that, my best playback system is back in a box due to my "den" becoming a store room for displaced house contents during the oncoming renovations.

Back to the plot. I ripped several tracks from one of my current favourite albums (Vast "Nude"); the recording quality is pretty much average, but it was the best I had to hand. Using EAC in Safe Mode, I dumped to .wav, directly to a Corsair USB drive powered by a JLH isolated PSU. I ripped one version of each track with the modem connected as normal and another with the modem disconnected. These were played back using Foobar, in random order and with the screen switched off, left to run for a while without listening so I wouldn't have a clue which version was running. The playback system is capable but far from top notch (SC Headphone Amplifier / Amanero USB-I2S converter / PK DAC), yet through it I was able to discern subtle improvements in weight, accuracy and scale. After too much listening I started to become lost in a state of overload, so I sent what I thought was the best resultant track to Alex.
These were converted to uncompressed zip files, still on the Corsair, and uploaded using Filemail. This has proven to be the best method thus far for preserving quality when sending over the ether. The tracks were simply labelled "track 2" and "track two"; Alex had no clue as to which was which. However, Alex categorically stated a clear preference for "two". This was indeed the rip made without the modem connected! (And yes, the checksums were identical.) So this is yet another worthwhile way of improving the quality of rips, as long as the full file size is used (.wav) and decent equipment is used for ripping and playback. Having said that, mine does not qualify but still yielded positive results. TRY BEFORE YOU BUY! (i.e. unqualified criticism from the usual mob is a given and does not require reiterating here)
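For anyone who wants to repeat the blind test, the checksum step is easy to do yourself. Below is a minimal Python sketch (the throwaway temp files merely stand in for real "track 2"/"track two" rips, and the function name is my own):

```python
import hashlib
import os
import tempfile

def file_md5(path, chunk=1 << 16):
    """MD5 of a file, read in chunks so big .wav rips don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Two stand-in files with identical bytes, like the two rips in the test.
d = tempfile.mkdtemp()
p1 = os.path.join(d, "track 2.wav")
p2 = os.path.join(d, "track two.wav")
payload = os.urandom(4096)
for p in (p1, p2):
    with open(p, "wb") as f:
        f.write(payload)

print(file_md5(p1) == file_md5(p2))  # identical bytes give identical MD5s
```

If the two rips really do contain the same bytes, this comparison will always report True; that is precisely the puzzle the rest of this thread argues about.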
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Dec 31, 2013 19:16:12 GMT
This thing I found in the lower end of the news may not seem related, but I think you'll at least find it amusing if not: Adi Shamir, I think it was, one of the developers of RSA (the foundation of public-key crypto), was working with a laptop that was performing a secure online transaction using a 4096-bit RSA key, and using a mic about 10-12 feet away he recorded the sound the laptop made as it executed the transaction. Then, playing back the recording and observing bursts of high-frequency energy at certain points, he processed that data with his own code-breaking routines, and not only decrypted the transaction but exposed the actual crypto keys being used by the host, which would allow him to decrypt all transactions using those keys.
Shamir was also one of the people behind Differential Fault Analysis back around 1996-1997, which broke codes using various methods of attacking the cryptographic hardware (radiation bombardment, short-term interruptions to the power supply, and others).
So, it isn't just files. How and where those files reside, what's physically close to them, how they interact with the different codecs - in all likelihood these are all factors that need to be explored.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2014 10:41:59 GMT
I am thankful that there are people in this, and the wider, hi-fi community who have the motivation, time and patience to experiment in this area, i.e. rips with identical checksums sounding different because of the conditions within which a particular copy is made, stored and played back. I am satisfied that Alex (Sandyk), Colloms et al are hearing effects which are beginning to be explained by power supply quality and so on.

But I still have a problem. And note... not with the believability of it all; that has been, and will continue to be, debated FOR EVER, and I'm happy that those with better ears than mine do hear changes. No, my problem is: can I AFFORD to let myself be bothered? Am I interested? Certainly. And I'll follow the plot as closely as my knowledge (and motivation, time and patience) permit. But can I afford to allow myself to be CONCERNED?

Consider how I may handle a music file. Here's an example:

1. I'm at work with my laptop, which may or may not be running on batteries. I go to my favourite hi-res music download site and download a huge 24/192 album to the C: drive... and I know nothing of the modem in use at work, or its power supply. I then copy the downloaded (FLAC) files from the Download directory to my Music directory. Sometimes, depending on the exact source, the FLAC files may even be 'zipped' and therefore need 'unzipping' too. I pay no attention to how fragmented my C: drive may be... the FLAC files are going to be all over the place, and maybe in little pieces, now they've been downloaded and moved about.

2. I go home and copy the files from laptop to desktop; the laptop runs Win 7, the desktop runs XP, and the desktop PC has a budget PSU. I use a 3TB USB 3 external drive for music storage. Some time later I'll do two backups, each to a USB 2.0 drive, again paying little regard to fragmentation and using the stock (poor!) power supplies.

So... are the music files now unplayably distorted? Nope.
I really cannot (and will not) get into the routine of donning a white labcoat, galvanically isolating the house and all its contents, freezing my CDs and studying chicken entrails (whoops, sorry) every time I want to play my favourite Gershwin concerto. It was bad enough in the days of vinyl, for goodness sake.

OK, let's take my tongue out from my cheek and get back to serious comments... I think there's one area that needs to be addressed. Paying attention to creating a good ripping environment gives better sound, as plenty will attest. But can't we get back to a fundamental rough-and-ready experiment? Take a file of known good quality. Then copy it back and forth, many times, between a 'good' ripping environment and a 'poor' one. A simple batch file would do this, and isn't there a DOS command (FC?) which can keep tabs on whether bits have been dropped en route? Then compare the resulting many-times-copied file with the original. Surely any subtle changes will have been magnified sufficiently to be blatantly obvious, as we'd see if we did layer upon layer of MP3 encoding (although the reason for reduced quality with MP3s is well understood).

I suspect there's a point beyond which subsequent copies would not change. So the question is: why? How come the quality of early-stage ripping environments is crucial, but later cloning of files does nothing? If, indeed, that is the case.

Derek (who was stuck for something to do right now)
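For what it's worth, Derek's copy-it-back-and-forth experiment can be sketched in Python rather than a batch file. This is only an illustrative sketch: the directory names are hypothetical, the 1 MiB random file stands in for a .wav rip, and the MD5 comparison plays the role that FC /B (byte-by-byte compare) would on DOS/Windows.

```python
import hashlib
import os
import shutil
import tempfile

def digest(path):
    """MD5 of the file's bytes: the 'have any bits changed?' check."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

work = tempfile.mkdtemp()
a = os.path.join(work, "good_env", "track.wav")
b = os.path.join(work, "poor_env", "track.wav")
os.makedirs(os.path.dirname(a))
os.makedirs(os.path.dirname(b))
with open(a, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB stand-in for a ripped track

original = digest(a)
# Bounce the file between the two directories for 25 generations.
for gen in range(25):
    src, dst = (a, b) if gen % 2 == 0 else (b, a)
    shutil.copyfile(src, dst)
    assert digest(dst) == original, f"bits changed at generation {gen}"
print("25 generations, not a bit changed")
```

On any working system the assertion never fires: generational copying is bit-perfect, which is exactly why Derek's question about why only the early ripping environment would matter is the interesting one.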
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2014 11:33:09 GMT
Hi Derek. That sure was a mouthful! Let's just tackle your work scenario first.

As long as your downloaded files stay in their .flac container they do not appear to degrade, even when moved between PCs. If you were using Win8 you wouldn't even need to worry about fragmentation, as it rectifies that auto-magically as a background task!

As for the various copies, despite what has been reported in TAS 220 and 221, I have found that a rip to .wav files made using a higher quality PSU degrades only to a level determined by the quality of the PSU of the PC. As an example, now that I am using W8/64, a far less thirsty processor than my previous PC's, 8GB of fast memory and a new SMPS, it is much harder for me to generate worse sounding copies. That means that to get the very best copies I need to rip them using a JLH in line with my LG BR writer, to give it a squeaky clean power supply that also greatly reduces interaction with other parts of the PC. As a bonus, it is also likely to result in higher quality burned CDs and DVDs/BluRays, as the power supplied to the writer is very stable.

My highest quality CDs only are then ripped to Corsair Voyager USB memory sticks using a dedicated external +5V JLH PSU and a modified USB cable with the red wire disconnected at the USB-A plug at the PC, and played directly from there using cPlay, which plays music files from system memory. There are other software players that play from system memory too.

If you want to play music from your laptop, the highest quality will be achieved running on battery. An external HDD will give better results if powered via a good quality linear PSU. Find a way to dampen its vibration too, using suitable feet etc. The highest quality playback of .flac files directly will be when the medium they are stored on is powered by a very low noise linear PSU.
Convert the .flac files to .wav using the same storage medium and the linear PSU, and the .wav files will normally sound a little better than the .flac files, due to the latter being converted "on the fly" for playing.

In fact, this afternoon I loaned my E.E. friend my DIY linear +12V 2A and +5V 2A supply to try with his external HDD. As a result of a recent demo on his own system with B&W 802s, using a Corsair Voyager USB memory stick plugged into a JLH +5V PSU and then into the USB port of my Oppo 103, he has since bought an Oppo 103 for the family. It sounds streets ahead of his previous Pioneer player, even when both use coax SPDIF out into a good DAC. I also handed him back the CDs that he had me rip to a Corsair Voyager for him, so he will be able to play the CDs directly from his own Oppo 103 as well as play the ripped files on the Corsair from one of the Oppo 103's USB ports for comparison purposes. Regards Alex
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2014 16:36:19 GMT
I am thankful that there are people in this, and the wider, hi-fi community who have the motivation, time and patience to experiment in this area, i.e. rips with identical checksums sounding different because of the conditions within which a particular copy is made, stored and played back. I am satisfied that Alex (Sandyk), Colloms et al are hearing effects which are beginning to be explained by power supply quality and so on.

But I still have a problem. And note... not with the believability of it all; that has been, and will continue to be, debated FOR EVER, and I'm happy that those with better ears than mine do hear changes. No, my problem is: can I AFFORD to let myself be bothered? Am I interested? Certainly. And I'll follow the plot as closely as my knowledge (and motivation, time and patience) permit. But can I afford to allow myself to be CONCERNED?

Consider how I may handle a music file. Here's an example:

1. I'm at work with my laptop, which may or may not be running on batteries. I go to my favourite hi-res music download site and download a huge 24/192 album to the C: drive... and I know nothing of the modem in use at work, or its power supply. I then copy the downloaded (FLAC) files from the Download directory to my Music directory. Sometimes, depending on the exact source, the FLAC files may even be 'zipped' and therefore need 'unzipping' too. I pay no attention to how fragmented my C: drive may be... the FLAC files are going to be all over the place, and maybe in little pieces, now they've been downloaded and moved about.

2. I go home and copy the files from laptop to desktop; the laptop runs Win 7, the desktop runs XP, and the desktop PC has a budget PSU. I use a 3TB USB 3 external drive for music storage. Some time later I'll do two backups, each to a USB 2.0 drive, again paying little regard to fragmentation and using the stock (poor!) power supplies.

So... are the music files now unplayably distorted? Nope.
I really cannot (and will not) get into the routine of donning a white labcoat, galvanically isolating the house and all its contents, freezing my CDs and studying chicken entrails (whoops, sorry) every time I want to play my favourite Gershwin concerto. It was bad enough in the days of vinyl, for goodness sake.

OK, let's take my tongue out from my cheek and get back to serious comments... I think there's one area that needs to be addressed. Paying attention to creating a good ripping environment gives better sound, as plenty will attest. But can't we get back to a fundamental rough-and-ready experiment? Take a file of known good quality. Then copy it back and forth, many times, between a 'good' ripping environment and a 'poor' one. A simple batch file would do this, and isn't there a DOS command (FC?) which can keep tabs on whether bits have been dropped en route? Then compare the resulting many-times-copied file with the original. Surely any subtle changes will have been magnified sufficiently to be blatantly obvious, as we'd see if we did layer upon layer of MP3 encoding (although the reason for reduced quality with MP3s is well understood).

I suspect there's a point beyond which subsequent copies would not change. So the question is: why? How come the quality of early-stage ripping environments is crucial, but later cloning of files does nothing? If, indeed, that is the case.

Derek (who was stuck for something to do right now)
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2014 16:43:47 GMT
Well, my comments in the above were lost in the post. How'd that happen?
Anyway, try copying your digital file from a to b, then b to c, then c to d, for maybe 7 or 8 generations, and the losses will be much smaller than with recording tape. That's the good news. The bad news is that the losses will be unpredictable. I'm just speculating here, but I'd say that if you can identify every piece of computer software active in the chain from the recording microphone through to your speakers, including all of the rips, zips, Internet/LAN transfers, and music players, a big part of the answer is there.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2014 21:03:24 GMT
Well, my comments in the above were lost in the post. How'd that happen?

Hi Dale. It happens to me on occasion too. I have learned the hard way to save my most important posts to Notepad first. Sometimes I still get caught, though. It happens to me occasionally in other forums too. I suspect it may be short-term slooooow-downs in the connection, as the other end can often be very slow to respond to "Post". Kind Regards Alex. P.S. I live in hope that one day Barry D. will try a linear PSU with his work Mac.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 30, 2014 6:48:00 GMT
Today we compared a couple of new CD rips made with the broadband modem disconnected against the previous most recent rips. All 3 present unanimously agreed, within about 30 seconds, that the new rips made without the broadband modem connected sounded cleaner and even easier to listen to. We could hear clear differences between the worst sounding example, the 2nd most recent, and the brand new rip. Yes, the checksums of all 3 versions were still identical! Regards Alex
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 10, 2014 0:07:08 GMT
Jeff C. from Brisbane sent me the attached a couple of days ago. I am posting it here with Jeff's kind permission. Regards Alex
|
|
|
Post by imstimpy on Feb 11, 2014 15:13:57 GMT
I believe I've read everything in here so hopefully this isn't too far off base. I also don't know each individual's background so I apologize if what I'm about to post seems trivial.
My background is computer science so I have to focus on one small piece of this puzzle that doesn't make sense: identical checksums. How is it that two dissimilar files produce the same checksum?
A checksum is a simple way of saying "the result of a mathematical calculation on a set of data". A BAD checksum algorithm might only show a minor deviation in the checksum with a minor change in the data set, or worse, generate the same checksum for entirely different data. A GOOD checksumming routine will provide a wildly different result with even the slightest data difference.
Let's take a really simple example data set and pass it through both bad and good checksum algorithms:

BAD algorithm: rolling sum (add each byte together)
"data" = 0x64 0x61 0x74 0x61, checksum = 0x9A
"datb" = 0x64 0x61 0x74 0x62, checksum = 0x9B
"catb" = 0x63 0x61 0x74 0x62, checksum = 0x9A

GOOD algorithm: md5 (http://tools.ietf.org/html/rfc1321)
"data" checksum = 6137cde4893c59f76f005a8123d8e8e6
"datb" checksum = e47357289829b2ca25f1ea34a9d445fc
"catb" checksum = a2854730723f18da51426f5ac5a8c924
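That contrast is easy to reproduce in a few lines of Python. This is only an illustrative sketch (the function name is mine, and exact md5 digests depend on the exact bytes hashed), but it shows the weak rolling sum colliding on "data"/"catb" while md5 distinguishes them:

```python
import hashlib

def rolling_sum(data: bytes) -> int:
    """The BAD checksum above: add the bytes, keep only the low 8 bits."""
    return sum(data) & 0xFF

for text in (b"data", b"datb", b"catb"):
    # Weak sum next to a strong cryptographic hash of the same bytes.
    print(text.decode(), hex(rolling_sum(text)), hashlib.md5(text).hexdigest())
```

Running it shows rolling_sum giving 0x9a for both "data" and "catb" (a collision), while the three md5 digests are all different.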
If two digital files produce two different sounds, however slight those differences may be, there has to be a binary difference that is not exposed by the checksum. Obviously we have to assume all conditions are exactly the same and if the source system is inconsistent, the same file would also sound different through repetitive listens. Frans wrote a wonderful piece a few years back comparing FLAC to WAV (I apologize, I can't find the article at the moment). In it he used SoundForge to create a visual differential of two files by subtracting one from the other. Doing so shifts the comparison away from subjective listening, away from computer science, and into audio engineering.
Have you guys compared the two files in SoundForge, or an equivalent application, to track signal differences?
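A SoundForge-style subtraction can also be approximated with nothing more than the Python standard library. The sketch below is a toy (synthetic test tone, made-up file names): it writes two 16-bit mono WAV files that differ in a single sample, then counts the sample positions where they disagree, which is the digital equivalent of subtracting one waveform from the other and looking for non-zero residue.

```python
import math
import os
import struct
import tempfile
import wave

def write_tone(path, frames=1000, rate=44100, corrupt=None):
    """Write a mono 16-bit 440 Hz tone; optionally flip one sample's sign."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        for n in range(frames):
            s = int(20000 * math.sin(2 * math.pi * 440 * n / rate))
            if n == corrupt:
                s = -s  # deliberate single-sample difference
            w.writeframes(struct.pack("<h", s))

def diff_samples(path1, path2):
    """Return how many sample positions differ between two WAV files."""
    with wave.open(path1, "rb") as w1, wave.open(path2, "rb") as w2:
        raw1 = w1.readframes(w1.getnframes())
        raw2 = w2.readframes(w2.getnframes())
    a = struct.unpack("<%dh" % (len(raw1) // 2), raw1)
    b = struct.unpack("<%dh" % (len(raw2) // 2), raw2)
    return sum(x != y for x, y in zip(a, b))

d = tempfile.mkdtemp()
one = os.path.join(d, "one.wav")
two = os.path.join(d, "two.wav")
write_tone(one)
write_tone(two, corrupt=500)
print(diff_samples(one, two))
```

If the two rips under discussion really are bitwise identical, a comparison like this reports zero differing samples, which is what makes the reported audible differences so puzzling.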
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 11, 2014 21:04:24 GMT
I believe I've read everything in here so hopefully this isn't too far off base. I also don't know each individual's background so I apologize if what I'm about to post seems trivial. My background is computer science so I have to focus on one small piece of this puzzle that doesn't make sense: identical checksums. How is it that two dissimilar files produce the same checksum? A checksum is a simple way of saying "the result of a mathematical calculation on a set of data". A BAD checksum algorithm might only show a minor deviation in the checksum with a minor change in the data set, or worse, generate the same checksum for entirely different data. A GOOD checksumming routine will provide a wildly different result with even the slightest data difference.

Let's take a really simple example data set and pass it through both bad and good checksum algorithms:

BAD algorithm: rolling sum (add each byte together)
"data" = 0x64 0x61 0x74 0x61, checksum = 0x9A
"datb" = 0x64 0x61 0x74 0x62, checksum = 0x9B
"catb" = 0x63 0x61 0x74 0x62, checksum = 0x9A

GOOD algorithm: md5 (http://tools.ietf.org/html/rfc1321)
"data" checksum = 6137cde4893c59f76f005a8123d8e8e6
"datb" checksum = e47357289829b2ca25f1ea34a9d445fc
"catb" checksum = a2854730723f18da51426f5ac5a8c924

If two digital files produce two different sounds, however slight those differences may be, there has to be a binary difference that is not exposed by the checksum. Obviously we have to assume all conditions are exactly the same and if the source system is inconsistent, the same file would also sound different through repetitive listens. Frans wrote a wonderful piece a few years back comparing FLAC to WAV (I apologize, I can't find the article at the moment). In it he used SoundForge to create a visual differential of two files by subtracting one from the other. Doing so shifts the comparison away from subjective listening, away from computer science, and into audio engineering.
Have you guys compared the two files in SoundForge, or an equivalent application, to track signal differences?

Hi. Sorry, but I don't share your confidence in the ability of checksums to show minute differences in waveform shape and voltage levels, or to find low-level noise that is there along with the binary information. They were never designed to check everything from just above DC to daylight!

Well respected E.E. John Swenson, who has worked at a HDD fabrication plant, recently reported that he found system noise is stored along with the binary information; he has also posted elsewhere that RF/EMI hitches a ride along with the binary information via USB. As unlikely as all this may seem, I have heaps of confirmation from veteran technical writer and Chartered E.E. Martin Colloms, of HiFi Critic magazine and forum, that my reports are correct. You most likely have several CDs in your collection that were mastered by world famous recording and mastering engineer Barry Diament (now Soundkeeper Records); Barry has also confirmed that my reports are correct. Early last year I sent Barry 2 compilation MAM Gold CD-Rs, with pairs of tracks on each having identical checksums. The intention was for him to play them directly, but he chose to rip them to a HDD on his studio gear, where he heard the differences that remained, albeit not as obvious as they would have been without being copied to HDD. If I had known Barry wanted to do this, I would have sent them on a Corsair Voyager instead. If you are interested I can provide links to the HFC threads, and even FYI-only copies of some personal communications.

More recently, a well qualified Sydney E.E. friend had been telling me for about 12 months to STFU, as I was making myself look stupid. I bet him a bottle of 12yo Scotch that I could demo this to him, and won my bet through his own B&W 801 speakers.

He now supports me 100%, and in fact recently used a Corsair Voyager USB memory stick of mine to demo the same to a friend, also a qualified E.E., without me being present. They both presently believe that the checksums are unable to find these differences. Incidentally, I have also had confirmation from 2 Professors of Music, one in the U.K. and one in the U.S.A., as well as numerous RG members and close to 20 Computer Audiophile members. Dale, who reviews our headphones here among other well appreciated contributions, is well qualified in the software writing area, and originally gave me a very hard time, but now supports me on this issue too.

I have spent many hours using Sony SoundForge 9 trying to find why there are clear audible differences. However, using digital means to try to find minute differences in digital audio files seems to be a circular endeavour. Ideally the differences should be measured at the analogue output of the DAC using ANALOGUE means, but that creates many other problems. Reconverting them to digital again to try to see the differences is pure folly. Regards Alex

P.S. While I have a great deal of respect for Frans, on this issue he is plain wrong, as he also is in quite a few other subjective areas such as differences between passive components and cables. Frans tried very hard to get in first here before Martin Colloms published his report in HiFi Critic Vol.6 No.1. On the .flac vs. .wav issue check out TAS 220 and 221. There is also a new thread in C.A. about 24/96 where JonP, who is a recording engineer, provided some sample tracks at 16/44.1, 24/44.1 and 24/48. Both Barry Diament and Cookie Marenco refuse to provide their high resolution recordings in .flac too.
|
|
|
Post by jeffc on Feb 11, 2014 21:50:14 GMT
My 2c worth on this. The data stored in the .wav files is the same if the checksums say so. Without very clever systems for maintaining data accuracy through all kinds of operations and transfers, computers would be worthless.

However, my simple take on it, with my very limited computer knowledge, is that it only needs to be "identical" within whatever tolerances the error checking and fixing systems the OS applies to read it as identical. It cares little about the file's origin or maker, or the nature of anything other than the pieces it needs to identify to say "yep, gotcha".

As a simple analogy for what I'm trying to allude to:

It's READING the words and not how they ARE or the spaces BETWEEN.
It's reading the words and not how they are or the spaces between.

As much as the computer OS cares for reading, writing and accessing data, the 2 sentences above contain the same words in the same order to make the same 'semi-intelligible' sentence, as long as whatever error correction is applied is up to the task (i.e. it reads them the same, although I had to work a little harder on one than the other; same words in the same order, so all good, same checksums). I think we need to be thinking beyond the data itself (which is the same) to its more holistic nature. How to do this, though, I have no idea. Cheers.. jeffc
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 11, 2014 23:10:51 GMT
Nicely put Jeff.
The main point being that many people can hear these differences; even rips made on fairly standard equipment are subject to the same improvements. I have for a while proffered the theory that the checksum algorithm may not have fine enough tolerances for such a complex, originally analogue, signal. Add to that the possibilities of machine noise, RFI and EMI hitching a ride, and you have some problems achieving truly identical files. There are clearly reasons for the differences that have yet to be absolutely identified, and a system of measuring them put in place. Until then we simply try things that seem to work.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 11, 2014 23:35:00 GMT
My 2c worth on this. The data stored in the .wav files is the same if the checksums say so. Without very clever systems for maintaining data accuracy through all kinds of operations and transfers, computers would be worthless. However, my simple take on it, with my very limited computer knowledge, is that it only needs to be "identical" within whatever tolerances the error checking and fixing systems the OS applies to read it as identical. It cares little about the file's origin or maker, or the nature of anything other than the pieces it needs to identify to say "yep, gotcha". As a simple analogy for what I'm trying to allude to: It's READING the words and not how they ARE or the spaces BETWEEN. It's reading the words and not how they are or the spaces between. As much as the computer OS cares for reading, writing and accessing data, the 2 sentences above contain the same words in the same order to make the same 'semi-intelligible' sentence, as long as whatever error correction is applied is up to the task (i.e. it reads them the same, although I had to work a little harder on one than the other; same words in the same order, so all good, same checksums). I think we need to be thinking beyond the data itself (which is the same) to its more holistic nature. How to do this, though, I have no idea. Cheers.. jeffc

This is a perfect example of how they can be different: in the above, the ASCII character values are the same but the text is not the same - there are hidden formatting characters. On a computer, if the checksum is the same, the file content as read from the disk sectors is the same, but that's where the checksum begins and ends. There's a lot more to a computer than that. BTW, I think if you did 15 generations of copying the file, always starting from the latest copy (not the original), you should expect little degradation compared to doing the same thing with magnetic tape. But a perfect copy of digital music isn't like a perfect copy of text.

Text is read, interpreted, and displayed in a process that's orders of magnitude less demanding, and absolute real-time display isn't a requirement in most applications (or I'd be out of a job tomorrow). But with digital music there is the need to read at a rate vastly greater than text and feed it to a DAC that may have to deal with anomalies that aren't part of the checksummed file contents - it's not A-B-C. We can talk about a few things we know, such as file headers or instructions within the file that may trigger events differently than in other copies of the file, even though the content is the same. If I listed just a few software bugs I've encountered that couldn't possibly happen because the code trace is exact (but the runtime differs), it would take a while. If I really depended on the file contents playing at maximum quality in every case (I do, actually), then I need to get the best masters I can and archive those, since I presume that lower quality (lower bitrate?) files won't have as much "depth" and will suffer more impact on their sound from every anomaly in the system.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 11, 2014 23:54:18 GMT
|
|
|
Post by imstimpy on Feb 12, 2014 4:18:54 GMT
It is fascinating to me that what is being described is a difference in the inferred data beyond the actual bits of the digital data. While I cannot describe what that might be, it makes sense to me how a file could be bitwise identical and not actually be the same. Time, as a hypothetical example, could be inferred by the data without actually needing to store @1 second..., @2 seconds... The real problem seems to be the digital format relying upon translation, which is subject to noise and all sorts of external influences.
While it is remotely possible the checksum algorithm is at fault, FLAC appears to use md5. Perhaps the FLAC format is relying on sequencing or rendering based upon factors during encoding and decoding. I'll have to go read the whitepapers on FLAC.
I must be missing something, but how does the 16 bit vs 24 bit discussion translate to two "identical" files sounding different? A 16 bit file, by its very definition, would be different from a 24 bit file, and so not subject to the original topic of conversation. It was an interesting read nonetheless.
|
|
|
Post by imstimpy on Feb 12, 2014 4:22:37 GMT
P.S. While I have a great deal of respect for Frans, on this issue he is plain wrong, as he also is in quite a few other subjective areas such as differences between passive components and cables. Frans tried very hard to get in first here before Martin Colloms published his report in HiFi Critic Vol.6 No.1. On the .flac vs. .wav issue check out TAS 220 and 221. There is also a new thread in C.A. about 24/96 where JonP, who is a recording engineer, provided some sample tracks at 16/44.1, 24/44.1 and 24/48. Both Barry Diament and Cookie Marenco refuse to provide their high resolution recordings in .flac too.

Is there somewhere I can access HiFi Critic Vol.6 No.1? What are TAS 220 and 221? C.A. is computeraudiophile.com, I take it? Sorry, this particular field is completely new to me.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 12, 2014 5:29:27 GMT
P.S. While I have a great deal of respect for Frans, on this issue he is plain wrong, as he also is in quite a few other subjective areas such as differences between passive components and cables. Frans tried very hard to get in first here before Martin Colloms published his report in HiFi Critic Vol.6 No.1. On the .flac vs. .wav issue check out TAS 220 and 221. There is also a new thread in C.A. about 24/96 where JonP, who is a recording engineer, provided some sample tracks at 16/44.1, 24/44.1 and 24/48. Both Barry Diament and Cookie Marenco refuse to provide their high resolution recordings in .flac too.

Is there somewhere I can access HiFi Critic Vol.6 No.1? What are TAS 220 and 221? C.A. is computeraudiophile.com, I take it? Sorry, this particular field is completely new to me.

Hi. I can supply the links to the original HFC thread discussion, and may be able to locate a copy of the eventual article, which appeared to be rushed and not as conclusive as the forum reports. The Absolute Sound 220 and 221 I no longer have copies of, but they were basically a comparison of .wav vs. .flac written by a Dr. Zellig, IIRC. C.A. is Computer Audiophile, where I still cop a lot of flak. I will post the links to the HFC threads a little later on. Regards Alex
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 12, 2014 5:37:17 GMT
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 12, 2014 6:44:43 GMT
As this thread is for those interested in reading about Computer Audio in general, and things like .flac vs. .wav storage are of interest to many, I threw it in as a bonus, as JonP also agreed with me that the .flac files should be converted to .wav files before playing them. I don't think the files he provided are restricted to actual C.A. members only, but he would of course be interested in feedback about them from others. If anybody here wishes to comment on the sound of those files and isn't a C.A. member, I would be happy to post their results for them in C.A. Regards Alex
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Feb 14, 2014 4:00:05 GMT
Recently I uploaded a couple of comparison .wav files for a C.A. member. The tracks were "Sumiyaki Coffee", the bonus instrumental track on a Jheena Lodwick CD. Both versions had identical checksums. The attached copy of his post was posted today in the Computer Audiophile forum. The other track he refers to was just a .flac track that I found on the Internet, for him to compare with his own rip. Alex
|
|