Thank you for your hard work and preliminary summary on this.
I look at this from various perspectives. One could see three research questions here:

Question 1: Finding a single consumer test drive to be used in reviews that gives roughly similar results to a calibrated Pulsetec drive, using the scanning options your scanner operator has chosen.

Question 2: Finding a set of tools that gives a good indication of burned disc quality (but without being able to tell whether the burner, the disc, or both were good or bad), namely readability/compatibility in a wide range of devices.

Question 3: Finding a set of tools that can be used to measure the burner quality AND the quality of the media (as separate issues).
These are three different research questions, which must be tackled with different tools in order to be answered in a useful manner.
Sure, there is overlap between the tools and the test methods, but because what is being asked in each case is different, so are the tools needed to find the answers.
Let me propose my current high-level answers to the above three questions. High level, because I may not be able to name names (i.e. tell exactly which drives/software/settings/etc. to use in each case).

Answer 1: Question number one is what you have set out to answer, and it looks to me like you've found a good candidate for a set of tools:
A LiteON LDW-811S drive with kProbe at 8xCAV, filling in with a PX-712/Plextools Pro when there is reason to believe scanning reliability is compromised (from your CATS reference perspective) with LiteON/kProbe scans.
I find several arguments in your favour:
- it seems, based on your testing, that the LDW-811S at 8xCAV does give reasonably similar error rates to CATS
- Mediatek measures PI/PIF and not PO, so it's actually possible to measure against the DVD-ROM and DVD+R limits of PISum8(max) = 280 and PIF_ECC1(max) = 4 (a quick sanity check against these limits is sketched after this list)
- scanning at 8x also stresses reading to such a degree that it simulates "difficult reading conditions" in real-life DVD-ROM/DVD±RW use
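To make those limits concrete, here is a minimal sketch of how one scan sample could be checked against them. The function and field names are my own invention, not anything kProbe or Plextools exports directly:

```python
# A minimal sketch of checking one scan sample against the DVD spec ceilings
# mentioned above (PISum8 max 280, PIF per ECC block max 4). Input format is
# assumed; real kProbe exports would need parsing first.

PI_SUM8_MAX = 280   # PI errors summed over 8 ECC blocks (DVD-ROM/DVD+R limit)
PIF_ECC1_MAX = 4    # PI failures within a single ECC block

def sample_within_spec(pi_sum8: int, pif_ecc1: int) -> bool:
    """True if one scan sample stays within both spec ceilings."""
    return pi_sum8 <= PI_SUM8_MAX and pif_ecc1 <= PIF_ECC1_MAX

# Example: PISum8 of 150 with a worst PIF of 2 passes,
# while 300/2 or 150/6 would break one of the limits.
assert sample_within_spec(150, 2)
assert not sample_within_spec(300, 2)
assert not sample_within_spec(150, 6)
```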
Question 2 is more complicated. For this, I don't believe any single drive is good enough.
Let me be frank and say I don't believe even CATS with the current software version, using one set of parameters and one scan run, is enough.
A proper set of tools, without going overboard, would consist of perhaps something like the below:
- your kProbe test setup from Answer 1 to show how non-optimal LiteONs see errors on burns
- Plextools / PX712 (or similar in the future) to give an indication of how Plextor drives (and perhaps Sanyo chipsets in general?) see the errors. Preferably with your hacked Plextools at a higher than 2xCLV scan speed, because discs do get read faster than 2xCLV in real use
- a CD Speed PIE/POE scan with a Nexperia-based drive (BenQ) to show how another family of chipsets/transports sees the number of errors. Perhaps this could even be used for jitter measurements (this remains to be seen)
- a CD Speed transfer rate graph with a "marginal quality" DVD-ROM drive (I don't know which one this could be) to simulate a sub-optimal reading situation
- a rack hi-fi player from one of the big names (Sony, Matsushita, Pioneer) for checking DVD player compatibility if desired (of course one could have several players, but even one would be great)
Question three is the most complicated to answer imho. Staying out of the realm of professional tools, I think something like the following could be devised in the future:
- PIE/PIF/POE/POF scan results for discs for which the maker/model of the disc and the maker/model/firmware/speed of both the burner and the reader are known
- transfer rate graphs (including elapsed time) to show transfer rates for various different readers
- Hopefully some lower-level indirect measures of tolerance limits for some major drives (e.g. the Plextor 107 has a maximum jitter tolerance of 1X%, the PX712 has 1Y%, the LiteOn 851 does not like too much focus error, etc.)
- All of the above exported, uploaded and stored in a database
- A statistical tool that looks (in the database) at multiple burns made on media A with various burners at various speeds using various firmware versions and measured with various readers (a rough sketch follows this list):
- When the results from various sources start to correlate, it can reasonably safely be assumed (after statistical analysis) that the media is good and well supported.
- When there is a great discrepancy, with some drives being able to read copies but other drives failing, it can be analyzed whether the media is badly supported by those drives (or whether it is bad in itself).
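To illustrate what such a statistical tool might look like, here is a rough sketch. It assumes a hypothetical SQLite table "scans" with one row per scan; the table, column names (media_id, reader, pif_max) and the metric are all invented for the example, since no such database exists yet:

```python
import sqlite3
from statistics import mean, stdev

def media_consistency(db_path: str, media_id: str):
    """Summarise how consistently different readers rate burns on one media type.

    A small spread across readers suggests the media is good and well
    supported; a large spread points at reader-specific tolerance problems.
    """
    con = sqlite3.connect(db_path)
    # Average worst-case PIF per reader, across all stored burns of this media.
    rows = con.execute(
        "SELECT reader, AVG(pif_max) FROM scans"
        " WHERE media_id = ? GROUP BY reader",
        (media_id,),
    ).fetchall()
    con.close()
    per_reader = [avg for _, avg in rows]
    if len(per_reader) < 2:
        return None  # not enough independent readers to conclude anything
    return {
        "readers": len(per_reader),
        "mean_pif": mean(per_reader),
        "spread": stdev(per_reader),  # large spread = the drives disagree
    }
```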
Of course setup no. 2 is already a LOT of work for any single source / tester group. Too much, I think.
Number three is way too much work, but it could be distributed, if somebody set up the database and software makers enabled exporting of raw data in a known standard format (one possible record layout is sketched below).
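As a starting point for that "known standard format", one flat record per scan might be enough. Everything below is only a proposal; the field names and values are illustrative, not an existing standard:

```python
import json

# One proposed record layout for raw scan exports: disc identity, burn
# conditions, scan conditions, and summary results in a single flat object.
record = {
    "disc":    {"media_code": "RICOHJPNR01", "batch": None},  # batch often unknown
    "burn":    {"drive": "PX-712A", "firmware": "1.05", "speed_x": 8},
    "scan":    {"drive": "LDW-811S", "software": "kProbe", "speed": "8xCAV"},
    "results": {"pi_sum8_max": 132, "pif_ecc1_max": 3, "pof_total": 0},
}

print(json.dumps(record, indent=2))
```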
The trouble is that the burning tools are a moving target: new firmware versions, new models, new versions of the same discs (perhaps even with the same media codes), different batches, etc.
It may not be possible to statistically analyse these results, because there may be too much noise in the data and/or there just isn't a big enough statistical sample available from each combination of burner/firmware/media, as they change too fast.
These are my initial rough thoughts at this time, and by no means do I consider them the optimal solution, as they are admittedly way too cumbersome. Also, I can't prove that what I've written above is all correct, as I've already learned quite a lot along the way and find my own initial thoughts from even a year ago to be overly simplistic or just plain wrong.
However, I do believe we should try to aim for something that is between 1) and 2) in terms of reliability and amount of work, otherwise we will just end up measuring the number of errors that some unknown/relatively small number of drives sees on some particular burns.
Unfortunately I don't yet know what this "in-between" test setup could be.
Perhaps with further testing we could find, say, three different drives that are popular and that sometimes see totally different results from the same burned disc (because the drives have differing tolerance values).
Perhaps something like the above could be a decent compromise between the number of tests and the reliability of the test results, considering a larger population of drives and burns.
That's my two cents.