Alex (& Franz).
I like both points of view. A good debate is a real learning experience.
I also like that there is conviction on both sides.
My job is argument. I am always fired up by those who make a good point that comes close to what I can prove. Those people usually have my utmost respect, and in some cases they are so damn good I quietly admire them.
I can understand both points of view and I admire the conviction. I want neither of you to take your arguments away, provided there is some justification for passing them on to everybody else and to each other; I can say what I want in an empty room, but it's only selfish to do so if no one can hear.
Ultimately it is for each of us to rate the points raised based upon our own understanding and reasons. That's the test.
It's the arguments themselves that create this impasse.
This will be my last 'explain myself and test' post, and from then on I will NOT raise the issue again, UNLESS someone from the subjectivist 'camp' thinks it might be fun to take the proposed challenge below, just for the fun of it and to gain some insight into their own perception (notice the individual aspect of it).
Also, Alex's reasoning for ending his membership (just because M.C. might not find the same thing, which, as the threads we can all read make plain, is MOST LIKELY NOT going to be the case) seems completely unwarranted to me.
From my (admittedly one-sided, as hovering above it all is difficult except for those with a heli) point of view, the impasse is as explained in my post.
In one corner, the 'arguably most respected technical reviewer, who is also a Chartered Engineer' is the only person qualified to test, BECAUSE he can hear those differences.
M.C. has already reported those differences to be there, so the confirmation is already in.
Only the statistics have not been published yet.
In the other corner (just read the threads Alex linked to), the expertise of M.C. is openly challenged.
That goes for this subject (WAV files appearing bit-wise identical yet differing audibly) as well as for other articles/statements.
In short: the camp that MUST work with the findings of the most respected technical reviewer around is the very camp that does not agree with him in the first place.
His findings may no longer be questioned.
This is because the claiming camp NOW has ULTIMATE proof: he is a Chartered Engineer.
THIS is one impasse.
The second impasse is the J-Play test (and Alex's test) and where the viewpoints differ on its flaws/validity.
My (rather one-sided and thus limited) view is this:
From the J-Play thread it isn't clear which people are taking it (it's all off-line and not transparent for now), so waiting is arguably needed to assess the validity of the test.
What MIGHT happen, though (again no proof, just suspicions that need validation), is that those who take the test ARE people like me and Owen who cannot hear the difference and so 'pollute' the test.
In that case the average score would be pulled in a negative direction.
It is unclear what the competences of the contestants are.
It is safe to assume they are mostly subjectivists (the science camp wouldn't pay for a player that has no measurably different performance), which works in favor of the test results being more valid.
Also, statistically the test has a certain margin of error that the scientific camp would like to see eliminated.
Agreed, the statistics can be greatly improved by one tester doing the test repeatedly, and (in case J-Play hasn't got an A-B function like Foobar has) the test could be made more rigorous by having someone else start the track, 'blind' to the tester.
We don't know how each tester works.
The number of (blinded) trials given determines the accuracy (to the technical guys).
Two files that differ from each other give a 50% chance of 'guessing' the correct one.
BUT there is also a 50% chance of discerning a difference yet preferring the wrong one (in this case due to personal preference), so in essence a 100% chance of 'validating' that there is a difference.
That is, in case the testers themselves can see whether they start A or B from a list (the filenames are different).
When tested blind (Alex's test) there is a 50% chance, as the individual doesn't know which file it is and hears (or guesses, in the view of the skeptics) correctly. And still a 100% chance that differences are 'discerned' even when the other file is preferred.
Once testers know which file they prefer and click on the files themselves (sighted), the scientific camp holds that from then on the test is questionable.
On the other hand, if the Alex test is taken again and again, started blind each time (someone else starts a 'randomized' trial), and the score is kept, the Alex test is very valid indeed.
That is, if a statistically significant number of trials is done... say 10 or more (the Foobar test repeated several times).
The more it is repeated the more valid it becomes.
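To illustrate why repetition matters so much, here is a minimal sketch (my own numbers, not anyone's actual test results): the odds of a perfect score by pure guessing shrink with every added blind trial, following the binomial distribution.

```python
# Minimal sketch (my own illustration, not anyone's actual test data):
# the chance of scoring k or more correct out of n blind trials by pure
# guessing (p = 0.5 per trial) follows the binomial distribution.
from math import comb

def chance_of_guessing(n, k, p=0.5):
    """Probability of k or more correct answers out of n trials by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for n in (1, 5, 10, 16):
    print(f"{n:2d}/{n} correct by pure guessing: {chance_of_guessing(n, n):.5f}")
# A perfect 10/10 by guessing alone is ~0.001 (1 in 1024), which is why
# repeating the blind test makes a positive result far more convincing.
```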
I understand from members that they tested this blind several times and could discern the difference.
Do I trust the ears of those people, given the subjectivist character of those I know took the test?
This is where things get 'ugly': the offensive and dismissive tone STARTS to emerge and defensive walls go up.
For the scientific camp it is unclear HOW often the test was taken, what the score list looks like, and how (blind or sighted) the testing was done.
This is also THE only point where this 'fight' is decided.
The scientific camp would like to see a second test taken (convincing ONLY to them) with a larger number of unknowns and perhaps 2 or even 3 CONFIRMED different files.
For the science camp the statistical strength of the test is paramount.
Because, no matter how one takes it, such a test can only be done truly blind.
Files have to be identified, and identified correctly, in the sense that a difference is heard between 2 or even 3 different-sounding files.
The statistical odds have improved (TO the scientific crowd).
The chance of 'guessing' and still appearing to hear differences (same files as the Alex test, same circumstances, same gear, same ears) when taking the test merely ONCE has decreased (statistically) from 100% to 1 in 4802 (0.02%).
Quite a difference in statistical value.
Of course, the Alex test repeated blind 10 times (someone else starts the track randomly, out of sight of the tester) with scores kept is EVERY bit as valid.
The J-Play test is only marginally different from Alex's, with different statistical odds in case the test is done truly blind ONE time.
The chance of getting it right purely by chance is smaller.
The possible combinations are:
AAB, ABA, ABB, BAA, BAB, BBA (that is all of them: 2^3 = 8 sequences minus the two excluded ones).
AAA and BBB cannot occur.
So there is a 1 in 6 chance (16.7%) of getting a perfect result by pure chance, which is already better odds.
The chance of guessing the exact opposite (due to preference) is also 1 in 6, because ABA/BAB, AAB/BBA and ABB/BAA are opposite pairs that SHOW differences yet, due to taste/preference, point the wrong way; together with the correct sequence, that makes a 2 in 6 (33%) chance of a 'difference-showing' result by pure chance.
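These counts are easy to verify (purely my own illustration, no test data involved):

```python
# Minimal sketch: enumerate all length-3 A/B sequences and count the ones
# the test allows (not all-same), confirming the 1-in-6 / 2-in-6 odds above.
from itertools import product

seqs = ["".join(s) for s in product("AB", repeat=3)]   # 8 sequences in total
valid = [s for s in seqs if len(set(s)) > 1]           # drop AAA and BBB
print(valid)           # ['AAB', 'ABA', 'ABB', 'BAA', 'BAB', 'BBA'] -> 6
print(1 / len(valid))  # ~0.167: chance of the exact correct order by guessing
# The exact inverse (A and B swapped) is one more sequence, so a result that
# looks 'difference-consistent' by pure chance has odds 2/6, about 0.33.
```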
In the same way as in the 'Alex test', the accuracy will improve IF it is executed multiple times, truly blind. This WE (the doubtful camp) will have to TRUST.
THIS appears to be one of Alex's and John's MAIN points... the TRUST.
People messing up the final results is a possible reason for this test to 'fail': one in three MIGHT appear to hear it correctly by chance IF the test is done only ONCE, or multiple times but sighted (knowing which file name is playing), OR people might enter the test and send in 'results' just to test its validity (the opposing camp trying to make a point) by submitting a 'randomly selected' score.
Hence my expressed interest in ONLY those who pass the test AND see a valid reason to do it all OVER again.
This last sentence is the 'offending' one, and therefore those who have already proven it feel no NEED nor desire to repeat it for those who question their very being.
IMO the most valid reason NOT to be tested again is the remote possibility that they could fail a statistically relevant DIFFERENT test, OR the other reasons explained.
I agree completely, and my thesis is that IF the 'Alex test' or the J-Play test can be passed easily (the reported differences are said to be quite big for those who know what to listen for), the Frans test should be equally passable.
And even though that test is neither needed nor warranted, since the existence of the differences has already been proven beyond a shadow of a doubt to those who passed the test, and to M.C. as well (he already openly posted it was REAL and that multiple persons heard it), those who would like to investigate the matter and put the 'non-believing audio atheists' in their place should be able to get at least a 70% score and have the last laugh. They would show it is real to those who DOUBT what they know they can do, force them into revising their opinions, and, with their technical background, those doubters could then aid in the search, making changes and co-operating with those who can discern, in order to finally ADVANCE in technical improvements.
What's wrong with this viewpoint/opinion?
Now, where does the shoe not fit (again, in MY rather one-sided opinion)?
It doesn't fit because the 'EEs that are ONLY after destroying reputations' may have ulterior motives to destroy the validity of the test in order to PROVE their point, by:
a: lying about the reported test results (which can be overcome, IMO, by having the results mailed to Alex and me, for instance);
b: the computer of whoever renames the files messing up the files and destroying the differences.
The latter is quite possible but can easily be tested; if this check fails, further testing is pointless.
The check would be two-fold and can be done by several computer-owning 'tech guys' willing to 'see' the evidence.
It would consist of downloading both of Alex's files, storing them locally, and uploading them UNCHANGED (all ZIP'd?). Then, if the SQ is preserved, a second step: the already-uploaded files are renamed and uploaded again. In this case the mapping WILL still be known.
Then the guys with the ears check by listening whether the files still POSSESS the qualities.
Next step:
file A is renamed to 1, file B is renamed to 2, and both are uploaded.
If the differences are still preserved (despite the renaming), the actual test may begin.
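As a side note, the 'bits unchanged' part of this check could be automated. A minimal sketch (the file names are my own placeholders): comparing checksums before and after the rename/upload round-trip proves the data itself was not altered, leaving only the listening part to the ears.

```python
# Minimal sketch (hypothetical file names): verify that renaming/uploading
# did not change a single bit of the audio data by comparing SHA-256 hashes.
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# 'fileA.wav' is the local original, 'file1.wav' the renamed/re-downloaded copy.
if sha256_of("fileA.wav") == sha256_of("file1.wav"):
    print("Bit-identical: any remaining difference would have to be audible-only.")
else:
    print("Files differ at the bit level: the rename/upload step altered the data.")
```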
WHEN a few computers are used (a big enough group needs to be WILLING and above all INTERESTED to participate), the capable computers/techs can be selected by the hearing camp.
Then the files are 'randomly' renamed (meaning copies of file A and file B are renamed in random order into 'file1' through 'file10') and uploaded.
They could very well be all file A, all file B, or any mix of the two, 50/50 or otherwise.
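Again purely as a sketch of the procedure (the names and the count of ten are my assumptions): the person doing the renaming could let the computer pick the sequence and write the answer key to a private file, so even the uploader doesn't have to rely on their own 'randomness'.

```python
# Minimal sketch (hypothetical names): randomly assign copies of file A and
# file B to file1..file10 and keep the answer key private until scoring.
import random
import shutil

SOURCES = {"A": "fileA.wav", "B": "fileB.wav"}

key = [random.choice("AB") for _ in range(10)]   # e.g. A, B, B, A, ...
for i, label in enumerate(key, start=1):
    shutil.copyfile(SOURCES[label], f"file{i}.wav")

# The answer key stays with a trusted third party until the results are in.
with open("answer_key_private.txt", "w") as f:
    f.write("".join(key))
```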
THIS is where the trust issue starts acting up: in a desperate attempt to destroy reputations, or merely to be proven 'right', those uploading the files COULD have malicious intent and be dishonest (about the sequence sent out, so there can never be a good score), or could have 'destroyed' the uploaded files, making it impossible to discern anything.
A TRUST issue...
One could send out different 'selected' files to different testers, OR use one test AND allow the testers to discuss (off-forum) amongst themselves what they think is the proper order.
This way, when consensus is reached, the results can be close to or exactly 100%.
Everyone involved would need to be 'trusted' (those who upload the files and know what the result should be).
This trust is questioned beforehand for the reason mentioned, and the trust issue will thus prevent the proposed procedure from taking place.
End of discussion and co-operation in that case.
Now that I have put all my cards on the table, the question is:
What would be WRONG with this test, OUTSIDE the reasoning about intentionally 'destroying reputations' to prove a point (the mutual TRUST issue)?
A short answer will do, from all of those who ARE really interested in this subject AND would like to know for themselves (results NOT posted publicly). This can now be put to rest, I hope.
By the way: since we are ALL into audio and ALL appear to be very serious about it, it stands to reason that NO-ONE here has any interest in NOT letting this test be valid.
That's MY opinion of course.
Damn, that was a long essay, but I hope it clears up the viewpoints and motivations of the 'technical' camp.