Constant time approximation scheme for largest well predicted subset
Bin Fu and Lusheng Wang
Additional contact information
Bin Fu: University of Texas–Pan American
Lusheng Wang: City University of Hong Kong
Journal of Combinatorial Optimization, 2013, vol. 25, issue 3, No 2, 352-367
Abstract:
The largest well predicted subset problem is formulated for the comparison of two predicted 3D protein structures from the same sequence. A 3D protein structure is represented by an ordered point set $A=\{a_1,\dots,a_n\}$, where each $a_i$ is a point in 3D space. Given two ordered point sets $A=\{a_1,\dots,a_n\}$ and $B=\{b_1,b_2,\dots,b_n\}$ containing $n$ points each, and a threshold $d$, the largest well predicted subset problem is to find a rigid body transformation $T$ and a largest subset $B_{\mathrm{opt}}$ of $B$ such that the distance between $a_i$ and $T(b_i)$ is at most $d$ for every $b_i\in B_{\mathrm{opt}}$. A meaningful prediction requires that the size of $B_{\mathrm{opt}}$ is at least $\alpha n$ for some constant $\alpha$ (Li et al. in CPM 2008, 2008). We use $\mathrm{LWPS}(A,B,d,\alpha)$ to denote the largest well predicted subset problem with meaningful prediction. A $(1+\delta_1,1-\delta_2)$-approximation for $\mathrm{LWPS}(A,B,d,\alpha)$ is a transformation $T$ together with a subset $B'\subseteq B$ of size at least $(1-\delta_2)|B_{\mathrm{opt}}|$ such that $\mathrm{distance}(a_i,T(b_i))\le(1+\delta_1)d$ for each $b_i\in B'$. We develop a constant time $(1+\delta_1,1-\delta_2)$-approximation algorithm for $\mathrm{LWPS}(A,B,d,\alpha)$ for arbitrary positive constants $\delta_1$ and $\delta_2$. To our knowledge, this is the first constant time algorithm in this area. Li et al. (CPM 2008, 2008) gave an $O(n(\log n)^{2}/\delta_{1}^{5})$ time randomized $(1+\delta_1)$-distance approximation algorithm for the largest well predicted subset problem under meaningful prediction. We also study a closely related problem, the bottleneck distance problem: given two ordered point sets $A=\{a_1,\dots,a_n\}$ and $B=\{b_1,b_2,\dots,b_n\}$ containing $n$ points each, find the smallest $d_{\mathrm{opt}}$ such that there exists a rigid transformation $T$ with $\mathrm{distance}(a_i,T(b_i))\le d_{\mathrm{opt}}$ for every point $b_i\in B$. A $(1+\delta)$-approximation for the bottleneck distance problem is a transformation $T$ such that $\mathrm{distance}(a_i,T(b_i))\le(1+\delta)d_{\mathrm{opt}}$ for each $b_i\in B$, where $\delta$ is a constant. For an arbitrary constant $\delta$, we obtain a linear $O(n/\delta^{6})$ time $(1+\delta)$-approximation algorithm for the bottleneck distance problem. The best previously known algorithms for both problems require super-linear time (Li et al. in CPM 2008, 2008).
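To make the two definitions above concrete, the following minimal Python sketch (an illustration only, not the authors' algorithm) evaluates a single fixed candidate rigid transformation, given as a rotation matrix R and translation vector t: it counts how many points of B are mapped within distance d of their partners in A, and it reports the bottleneck distance induced by that transformation. The NumPy representation and the function names are assumptions made for the example.

```python
import numpy as np

def well_predicted_subset_size(A, B, R, t, d):
    """Count the points b_i whose image R @ b_i + t lies within distance d of a_i.

    A, B: (n, 3) arrays of corresponding 3D points (ordered point sets).
    R: (3, 3) rotation matrix; t: length-3 translation vector.
    This only evaluates one candidate transformation; LWPS(A, B, d, alpha)
    asks for the transformation maximizing this count.
    """
    TB = B @ R.T + t                       # apply the rigid transformation to every b_i
    dist = np.linalg.norm(A - TB, axis=1)  # Euclidean distance between a_i and T(b_i)
    return int(np.sum(dist <= d))

def bottleneck_distance_under(A, B, R, t):
    """Largest distance between a_i and T(b_i) for the fixed transformation (R, t).

    The bottleneck distance problem asks for the transformation minimizing
    this value; here we only measure it for the given (R, t).
    """
    TB = B @ R.T + t
    return float(np.linalg.norm(A - TB, axis=1).max())

# Example: identity transformation on two small point sets.
if __name__ == "__main__":
    A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
    B = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 2.5, 0.0]])
    R, t = np.eye(3), np.zeros(3)
    print(well_predicted_subset_size(A, B, R, t, d=0.2))  # -> 2
    print(bottleneck_distance_under(A, B, R, t))          # -> 0.5
```

Searching over all rigid transformations to maximize the first quantity, or to minimize the second, is exactly what $\mathrm{LWPS}(A,B,d,\alpha)$ and the bottleneck distance problem ask for; the approximation algorithms of the paper address that search.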
Keywords: Randomization; Approximation scheme; Constant time
Date: 2013
Full text (HTML): http://link.springer.com/10.1007/s10878-010-9371-1 (access restricted)
Persistent link: https://EconPapers.repec.org/RePEc:spr:jcomop:v:25:y:2013:i:3:d:10.1007_s10878-010-9371-1
DOI: 10.1007/s10878-010-9371-1