### Abstract

In a faulty environment, comparisons between two elements with respect to an underlying linear order can come out right or go wrong. A wrong comparison is a recurring comparison fault if comparing the same two elements yields the very same result each time we compare them. We examine the impact of such faults on the elementary problems of sorting a set of distinct elements and finding a minimum element in such a set. The more faults occur, the worse the approaches to solve these problems can become, and we parametrize our analysis by an upper bound k on the number of faults.

We first explain that reconstructing the sorted order of the elements is impossible in the presence of even one fault. Then we focus on the maximum information content we can get by performing all possible comparisons. We consider two natural approaches for sorting the elements that use the outcomes of all comparisons: the first approach finds a permutation (compatible solution) that contradicts the outcomes of at most k comparisons, and the second approach sorts the elements by the number of times an element is reported as larger in its comparisons with all other elements (score solution). In such permutations the elements can be dislocated from their positions in the linear order. We measure the quality of such permutations by three measures: the maximum dislocation of an element, the sum of dislocations of all elements, and the Kemeny distance to the linear order. We show that for compatible solutions the Kemeny distance is at most 2k, the sum of dislocations at most 4k, and the maximum dislocation at most 2k. For score solutions the Kemeny distance is smaller than 4k, the sum of dislocations smaller than 8k, and the maximum dislocation at most k+1. Our upper bounds are tight for compatible solutions, but possibly not tight for score solutions. It turns out that neither of the two approaches is better than the other in all measures.

For the problem of finding a minimum element, we first observe that no deterministic algorithm can guarantee to return one of the smallest k+1 elements. This implies that computing the first element of a score solution is optimum, and we derive an algorithm that guarantees to find one of the k+2 smallest elements in O(√k·n) time making O(√k·n) comparisons, where n is the number of elements. We generalize this algorithm to find all elements of score at most a given target t.
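The score solution described above can be sketched in a few lines of Python. This is only a naive illustration that performs all n(n-1)/2 comparisons (not the paper's O(√k·n) algorithm): each element's score is the number of comparisons in which it is reported as larger, and sorting by score gives a score solution whose first element is an approximate minimum. The `compare` oracle and the faulty pair in the example are hypothetical stand-ins for a faulty environment with recurring faults.

```python
def score_sort(elements, compare):
    """Score solution: sort elements by the number of times each is
    reported as larger across all pairwise comparisons.

    compare(a, b) returns True if a is reported larger than b; with
    recurring faults it gives the same (possibly wrong) answer every
    time the same pair is compared.
    """
    score = {x: 0 for x in elements}
    for i, a in enumerate(elements):
        for b in elements[i + 1:]:
            if compare(a, b):
                score[a] += 1
            else:
                score[b] += 1
    # Ties are broken by input order (sorted is stable). With at most k
    # faults, the first element is among the k+2 smallest elements.
    return sorted(elements, key=lambda x: score[x])

# Toy example with one recurring fault: the pair (1, 4) always compares
# the wrong way around, no matter how often it is queried.
FAULTY = {(1, 4), (4, 1)}

def compare(a, b):
    wrong = (a, b) in FAULTY
    return (a > b) != wrong  # flip the outcome on the faulty pair

order = score_sort([3, 1, 4, 2, 5], compare)
```

Here k = 1, so the first element of `order` is guaranteed to be among the three smallest elements; in this particular run the single fault does not perturb the ranking at all.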

| Original language | English |
|---|---|
| Title of host publication | Proc. of the 20th International Symposium on Fundamentals of Computation Theory (FCT) |
| Publisher | Springer |
| Pages | 227-239 |
| Number of pages | 13 |
| DOIs | https://doi.org/10.1007/978-3-319-22177-9_18 |
| Publication status | Published - 2015 |

### Publication series

| Series | Lecture Notes in Computer Science |
|---|---|
| Volume | 9210 |

## Cite this

Geissmann, B., Mihalák, M., & Widmayer, P. (2015). Recurring Comparison Faults: Sorting and Finding the Minimum. In *Proc. of the 20th International Symposium on Fundamentals of Computation Theory (FCT)* (pp. 227-239). Springer. Lecture Notes in Computer Science, Vol. 9210. https://doi.org/10.1007/978-3-319-22177-9_18