QA IC Policy Decisions
Major QA Concerns
Calibration
In comparing the QA issues encountered at MBLWHOI and SIL, concerns were raised about variations in image quality between the Boston and Fedscan/SIL scanning centers. Two examples illustrate what is thought to be a calibration issue with the cameras on the Scribe: example 1 ("all pgs image too light, especially left side. Unable to OCR") and example 2 ("light text - p.172, 298, 299 very poor quality text - should be reviewed"). Patches of light text have been shown to disrupt OCR, rendering the scans unusable for the data-mining goals of the BHL. The "dodging" and "burning" effects in the images are believed to result from mis-calibrated equipment. The QA group would like IA to adopt regular calibration procedures to ensure consistent image quality at each of the scanning centers.
Fold-outs
Though we understand that fold-out stations can never be calibrated so that scan color/quality perfectly matches the scans coming off the SCRIBE machines, we have noticed consistent color-calibration differences among the various fold-out machines: Fedscan has seen a yellow tinge in its fold-out images, while the Boston scanning center has seen a pink tinge. We would like IA to calibrate the fold-out cameras regularly as well, and to consider revisiting the fold-out station setup to control ambient lighting and the other variables that contribute to calibration problems. The concern here is primarily with color plates, and with ensuring that color reproduction is reasonably close to the original where possible. While documentation is available for scanning as conducted on the SCRIBE, the QA participants are concerned that no documentation is currently available for the fold-out station. Given the significant difference in cost per image between the SCRIBE and the fold-out station, this documentation is highly desired.
QA as part of routine workflow
The QA summit participants firmly believe that QA on returned scans should be part of each library's workflow. Library QA was not thought necessary when scanning started, because it was assumed that IA's QA procedures would detect enough major errors to ensure the overall quality of scanning. However, given the results of SIL's in-house QA process, it seems that IA's current level/quality of QA may not be sufficient to catch the percentage of errors necessary to correct [endemic?] procedural, calibration, or operator errors. The purpose of requiring the libraries to do QA is to test whether IA's implementation of the QA process is in fact sufficient.
The problem with mandating that participating libraries do their own QA is that most of the BHL libraries lack the staff or time to perform QA, even on only a statistical sample of each shipment. This places a greater burden on staff already stretched thin by other BHL-related activities. We are not sure how to resolve this problem.
Consequences for failed QA
When SIL (the only library currently doing regular QA) has a cart that fails QA (according to the IA QA procedures and sampling charts), it sends the entire cart back to the scanning center, and the scanning center then decides how to handle it: rescan the entire cart, perform further statistical QA and correct the errors found, or perform 100% QA and correct errors.
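The pass/fail decision above follows standard lot-acceptance sampling: sample a fixed number of books from the cart, count defects, and fail the cart if the count exceeds the plan's acceptance number. A minimal sketch of that decision logic follows; the sample sizes and acceptance numbers are illustrative placeholders, not the actual values from IA's sampling charts.

```python
# Hypothetical sketch of the lot-acceptance check implied by the
# "IA QA procedures and sampling charts". Plan values are invented
# for illustration only; IA's real charts are not reproduced here.

def cart_passes_qa(cart_size, defects_found, plan):
    """Return True if a sampled cart passes QA.

    plan: rows of (max_cart_size, sample_size, acceptance_number),
    ordered by max_cart_size ascending.
    """
    for max_size, sample_size, accept_num in plan:
        if cart_size <= max_size:
            # The cart fails when defects found in the sample exceed
            # the acceptance number for this lot-size band.
            return defects_found <= accept_num
    raise ValueError("cart_size exceeds largest plan row")

# Illustrative plan: (lot size up to, books sampled, defects allowed)
EXAMPLE_PLAN = [(50, 8, 0), (150, 20, 1), (500, 32, 2)]

print(cart_passes_qa(120, 1, EXAMPLE_PLAN))  # True: 1 defect <= 1 allowed
print(cart_passes_qa(120, 3, EXAMPLE_PLAN))  # False: cart is rejected
```

Under such a plan, a single rejection triggers the whole-cart return described above, which is exactly why the per-cart consequences discussed below matter.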
Unfortunately, given that most BHL libraries pay for shipping to and from their respective scanning centers, and do not have a regular or frequent shipping schedule, the decision to send back *entire carts* that fail QA will place an additional financial burden on those libraries, as well as significantly increase the amount of time materials are outside of the library, and unavailable for other use.
The other major concern with failing entire carts is billing. If the scanning center chooses to rescan an entire cart, we are unsure how the billing will be recorded. Currently, each library checks against invoices as they are received; but if whole shipments are being rescanned, the libraries must keep careful track of how many times a given item has been sent for rescanning, and double-check that they are not accidentally billed twice.
[note: the point i'm trying to make is this process + billing is just confusing - also, we just dont know how they are billing us for this sort of thing right now. we haven't been checking closely -kt] .
We would like alternatives to rejecting and re-sending entire carts that fail QA. Those alternatives, however, should carry consequences serious enough to give the scanning centers an incentive to improve their scanning and/or QA processes. We would also like reassurance that re-scans of materials sent back on failed carts are not double-billed.