As discussed in the disk error recovery procedure page, when a disk performs marginal error recovery work it may not result in a check condition with sense data; it may only be noticeable as a long-latency read. The proposed handling is either to ignore it or to rewrite the data at the same location. This follows the same thinking as the established practice of writing over a medium error: if there is a problem at that spot, a rewrite lets the disk fix it, either by writing over the sector in place or by reallocating it to another location that will hold the data more reliably.
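The rewrite-on-slow-read idea can be sketched in a few lines. This is a minimal illustration, not the article's actual implementation; the threshold value and function names are assumptions, and a real version would have to account for the queuing effects discussed below.

```python
import os
import time

# Illustrative threshold: a read taking longer than this is treated as a
# possible sign that the drive spent time in error recovery.
LATENCY_THRESHOLD_S = 1.0

def read_with_rewrite(fd, offset, length):
    """Read a region; if the read was suspiciously slow, write the same
    data back over itself so the drive gets a chance to repair the spot
    or reallocate the sector."""
    start = time.monotonic()
    data = os.pread(fd, length, offset)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_THRESHOLD_S:
        # Rewriting in place lets the drive refresh or remap the sector.
        os.pwrite(fd, data, offset)
    return data
```

Note that the rewrite is only safe if no other writer can have changed the region between the read and the write-back; a real system would need to hold the data under a lock or perform the rewrite in its IO path.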
Judging what counts as a long latency must also account for queuing delays, since the disk may be free to reorder outstanding requests. Dealing with this reordering is non-trivial; in several cases I have witnessed queuing delays of two to three seconds caused by reordering alone.
There are, however, a few caveats to this practice:
The IO that returned with a relatively long latency may not have been delayed by an error recovery procedure at all; it could have been mere queuing. This can be handled by measuring the IO's duration not from the time of submission but from the time the previous IO completed, which gives a number much closer to the actual service time.
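The measurement refinement above can be sketched as follows. This is an assumed, simplified model of a single disk completing one IO at a time; the class and method names are mine, not from the original.

```python
import time

class ServiceTimeEstimator:
    """Estimate per-IO service time on a queued device by measuring from
    the previous completion rather than from submission, so queuing delay
    is excluded from the estimate."""

    def __init__(self):
        self.last_completion = None

    def on_completion(self, submitted_at):
        """Call when an IO completes; submitted_at is its submission time
        (from time.monotonic()). Returns the estimated service time."""
        now = time.monotonic()
        if self.last_completion is None or self.last_completion < submitted_at:
            # Disk was idle when this IO was submitted: the full
            # submit-to-complete interval is the service time.
            service = now - submitted_at
        else:
            # Disk was busy: the time since the previous completion is a
            # much closer approximation of this IO's service time.
            service = now - self.last_completion
        self.last_completion = now
        return service
```

An IO whose estimated service time, rather than raw latency, crosses the threshold is a better candidate for the rewrite treatment.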
Another case the heuristic cannot separate out is a background process that took over the disk, needed a lengthy error recovery, and only then allowed this IO to be serviced. Unfortunately there is no reliable way to discern this. On a SAS disk it is possible to query the Background Media Scan log page to see whether something happened recently, but that is not conclusive and the LOG SENSE request itself takes time.
Despite these drawbacks, it should be a very useful method for reducing the number of unrecoverable read errors by fixing problem spots before they become too much of a problem.
As with any possible error, it is also important to log these events and keep statistics about such rewrites, so that higher-order analysis can consider whether a disk is going bad.
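Such bookkeeping could look like the sketch below: a per-disk sliding window of slow-read events with a threshold for flagging a suspect drive. The window size and event threshold are illustrative values I chose for the example, not recommendations from the original.

```python
import time
from collections import defaultdict, deque

class SlowReadTracker:
    """Track long-latency read events per disk so a higher-level monitor
    can flag drives that may be failing."""

    def __init__(self, window_s=24 * 3600, flag_after=10):
        self.window_s = window_s        # how far back events count
        self.flag_after = flag_after    # events in window before flagging
        self.events = defaultdict(deque)  # disk id -> event timestamps

    def record(self, disk, now=None):
        """Record one slow-read (or rewrite) event for a disk."""
        now = time.monotonic() if now is None else now
        q = self.events[disk]
        q.append(now)
        # Drop events that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()

    def suspect(self, disk):
        """True if this disk crossed the event threshold in the window."""
        return len(self.events[disk]) >= self.flag_after
```

Feeding these counters into whatever health monitoring already consumes SMART or log-page data gives the higher-order analysis the article calls for.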