High-dimensional Feature Indexing

The main operations in content-based retrieval, especially for QBE, are similarity evaluation between the features of two objects and feature indexing in the database. The former is computationally expensive when the features are high-dimensional, and the latter becomes difficult as the database grows large. To make content-based retrieval scalable, especially for QBE, two techniques, dimension reduction and multidimensional indexing, are explored for efficient retrieval.

Dimension reduction maps high-dimensional features into a lower-dimensional space to save the computation cost of feature comparison (similarity calculation). Multidimensional indexing is used to find an efficient index structure for a multimedia database to speed up the comparison between the query and the entire database.

Dimension Reduction. Dimension reduction aims to map a higher-dimensional space to a lower-dimensional space. Dimension reduction methods fall into two categories: linear and nonlinear.

Linear Dimension Reduction. Linear dimension reduction consists of two basic steps: linear transformation and dimension discarding. The linear transformation can be optimized with various criteria. Principal component analysis (equivalent to the Karhunen-Loève transformation) finds a linear transformation such that the transformed features are uncorrelated, or equivalently, the covariance matrix of the transformed data is diagonal.

The linear transformation is obtained by performing singular value decomposition on the covariance matrix of the original data. The next step discards the dimensions that have smaller variances. By incorporating supervised information, the resulting linear transformation can be made discriminative. Linear discriminant analysis is exactly such a transformation: it maximizes the between-class variance and minimizes the within-class variance in the lower-dimensional space.
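As a concrete illustration, the following Python sketch (assuming numpy; the data are synthetic) performs both steps: it computes the transformation by singular value decomposition of the covariance matrix and keeps only the k highest-variance dimensions.

    import numpy as np

    def pca_reduce(X, k):
        """Project X (one feature vector per row) onto the k highest-variance directions."""
        X_centered = X - X.mean(axis=0)          # remove the mean
        cov = np.cov(X_centered, rowvar=False)   # covariance matrix of the original data
        # SVD of the symmetric covariance matrix: columns of U are the
        # principal directions, s holds the corresponding variances.
        U, s, _ = np.linalg.svd(cov)
        return X_centered @ U[:, :k]             # discard the remaining dimensions

    X = np.random.rand(100, 64)   # 100 synthetic 64-dimensional features
    X_low = pca_reduce(X, 8)      # reduced to 8 dimensions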

Nonlinear Dimension Reduction. Many researchers advocate the use of multidimensional scaling (MDS) for content-based retrieval applications. MDS comes in different flavors and lacks a general, precise definition. A typical approach is to map the space R^n into R^m using m transformations, each of which is a linear combination of appropriate radial basis functions. Isometric feature mapping (Isomap) is a special form of MDS that finds a lower-dimensional space in which the Euclidean distance is "close enough" to the "geodesic distance" on the original space.

Another popular method, called locally linear embedding (LLE), aims to find a lower-dimensional space in which the locally linear reconstructions of the original space are preserved. An augmented relation embedding has been proposed to map the image space into a semantic manifold that faithfully captures the user's preferences, by combining the similarity relations entailed by the content-based features with the relevance relations specified in the feedback.
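For illustration, both Isomap and LLE are available in scikit-learn; the usage sketch below (synthetic data, illustrative parameters) shows the two methods side by side.

    import numpy as np
    from sklearn.manifold import Isomap, LocallyLinearEmbedding

    X = np.random.rand(200, 64)   # synthetic high-dimensional features

    # Isomap: preserve geodesic distances estimated over a k-NN graph.
    X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

    # LLE: preserve each point's locally linear reconstruction weights.
    X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)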

Geometric hashing consists of hashing from a high-dimensional space to a very low-dimensional space. In general, hashing functions are not data dependent, and the metric properties of the hashed space can be significantly different from those of the original space.

Ideally, the hash function should spread the database uniformly across the range of the low-dimensional space, but the design of the hashing function becomes increasingly complex with the dimensionality of the original space. Hence it can be applied to image databases only when the dimensionality of the original space is not too high.
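As a toy illustration of such a data-independent hash, the sketch below (numpy assumed; the chosen coordinates and cell size are arbitrary) keeps a few coordinates of each vector and quantizes them into fixed-size cells.

    import numpy as np

    def grid_hash(v, dims=(0, 1, 2), cell=0.25):
        """Keep a few coordinates and quantize them into fixed-size cells."""
        return tuple(int(v[d] // cell) for d in dims)

    # Bucket a synthetic database by cell; lookups only scan one bucket.
    buckets = {}
    for i, v in enumerate(np.random.rand(1000, 64)):
        buckets.setdefault(grid_hash(v), []).append(i)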

Local linear dimension reduction methods, which include clustering combined with singular value decomposition and vector-quantization principal component analysis, form another category of nonlinear dimension-reduction methods. These methods consist of two steps: first, the data points are clustered; second, a linear dimension reduction method is applied to each local cluster.
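A minimal sketch of this two-step scheme, assuming scikit-learn's KMeans and PCA (the cluster count and target dimension are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    X = np.random.rand(500, 64)
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)  # step 1: cluster

    # Step 2: fit a separate linear reduction (PCA) on each local cluster.
    local_models = {c: PCA(n_components=8).fit(X[labels == c]) for c in range(4)}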

Multidimensional Indexing. Given an object database with multidimensional features, an appropriate indexing structure should be constructed to make the retrieval task efficient and effective. Many methods can be used for indexing, which can be roughly divided into two classes: vector-space and metric-space methods.

Vector-Space Methods. Vector-space methods represent objects (or feature vectors) as points in a d-dimensional vector space. For example, gray histograms of images can be viewed as points in a high-dimensional (typically 256-dimensional) space, in which each coordinate corresponds to a different bin of the histogram. Algorithmically, vector-space methods can be divided into nonhierarchical methods, hierarchical decomposition methods, and projection-based methods.
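As a small illustration, the Python sketch below (numpy assumed; the images and bin count are synthetic choices) builds such histogram points and compares them with the Euclidean distance.

    import numpy as np

    img_a = np.random.randint(0, 256, size=(64, 64))  # synthetic gray images
    img_b = np.random.randint(0, 256, size=(64, 64))

    # Each histogram is a point in a 256-dimensional vector space.
    h_a, _ = np.histogram(img_a, bins=256, range=(0, 256))
    h_b, _ = np.histogram(img_b, bins=256, range=(0, 256))

    dist = np.linalg.norm(h_a - h_b)  # Euclidean distance between the points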

The basic nonhierarchical method is the brute-force approach: it scans the whole database table sequentially and is clearly time consuming and inefficient. Besides this naive method, two main classes have been suggested: mapping the d-dimensional space onto the real line through a space-filling curve, and partitioning the space into nonoverlapping cells of known size. Most methods of the former class order the database using the positions of the individual items on a space-filling curve, such as the Hilbert (Peano-Hilbert) curve or the z-ordering, and thereby obtain a one-dimensional representation.

Then a one-dimensional indexing structure is used to index the mapped records. Space-filling curves tend to map nearby points in the original space to nearby points on the real line, so range queries, nearest-neighbor queries, and α-cut queries may be reasonably approximated by executing them in the projected one-dimensional space.
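For instance, the z-ordering can be computed by interleaving the bits of the quantized coordinates. The plain-Python sketch below (toy 2-D data) derives such Morton keys and sorts the database by them.

    def morton_key(coords, bits=8):
        """Interleave the bits of the (already quantized) coordinates."""
        key = 0
        for b in range(bits):
            for i, c in enumerate(coords):
                key |= ((c >> b) & 1) << (b * len(coords) + i)
        return key

    points = [(3, 5), (200, 17), (64, 64)]    # quantized 2-D points
    ordered = sorted(points, key=morton_key)  # one-dimensional z-order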

The latter methods partition the search space into a predefined number of nonoverlapping fixed-size regions, which do not depend on the actual data contained in the database. Both classes of methods are well suited to indexing low-dimensional spaces (when d < 10), but their efficiency decays exponentially with the dimension, and they become ineffective when d > 20.

Locality-sensitive hashing (LSH) is a more recent technique that uses locality-sensitive hash functions to perform approximate nearest-neighbor searching in high dimensions. The basic idea is to hash the input items so that similar items are mapped to the same buckets with high probability.
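One classical LSH family uses random hyperplanes: vectors separated by a small angle tend to receive the same pattern of sign bits. The numpy sketch below (the number of hyperplanes and dimensions are illustrative) is a minimal instance of that idea.

    import numpy as np

    rng = np.random.default_rng(0)
    planes = rng.normal(size=(16, 64))  # 16 random hyperplanes in 64-D

    def lsh_bucket(v):
        """16-bit bucket id: one sign bit per random hyperplane."""
        bits = (planes @ v) > 0
        return sum(1 << i for i, b in enumerate(bits) if b)

    # Similar vectors collide in the same bucket with high probability.
    buckets = {}
    for i, v in enumerate(rng.normal(size=(1000, 64))):
        buckets.setdefault(lsh_bucket(v), []).append(i)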

Hierarchical decomposition methods recursively partition the search space into progressively smaller regions that depend on the dataset. The hierarchical decomposition can be represented by a tree, and the techniques differ in the partitioning step and hence in the tree representation. Typical methods include quadtrees, k-d trees, and R-trees.

These methods were originally developed for low-dimensional search spaces, and unsurprisingly they suffer from the curse of dimensionality and become ineffective in high-dimensional cases (when d > 20). Recent research has adapted k-d trees to perform approximate nearest-neighbor searching in arbitrarily high dimensions. Hierarchical clustering methods, such as hierarchical k-means or hierarchical Gaussian mixture models, can also be used for indexing.
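As a usage sketch, SciPy's k-d tree exposes such an approximate mode through its eps parameter: the reported k-th neighbor is guaranteed to be within a factor (1 + eps) of the true k-th distance (the dataset below is synthetic).

    import numpy as np
    from scipy.spatial import cKDTree

    X = np.random.rand(10000, 16)  # database of 16-dimensional features
    tree = cKDTree(X)
    query = np.random.rand(16)

    # Approximate 5-nearest-neighbor query; eps trades accuracy for speed.
    dists, idx = tree.query(query, k=5, eps=0.5)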

Projection-based methods are indexing structures that support approximate nearest-neighbor queries. The techniques vary with the type of approximation performed; basically two categories exist: fixed-radius queries and (1 + ε)-nearest-neighbor queries. Some methods of the former category project the database onto the coordinate axes, maintain a list for each collection of projections, and use the lists to quickly identify a region of the search space that contains a hypersphere of the given radius centered on the query point.

Other methods project the database onto appropriate (d + 1)-dimensional hyperplanes and find nearest neighbors by tracing a line through the query point and finding its intersections with the hyperplanes. Methods of the latter category project the high-dimensional database onto selected or randomly generated lines and index the projections. Both kinds of methods are well suited to high-dimensional queries.
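The sketch below (numpy assumed; a real index would combine several projection lists) illustrates the fixed-radius projection idea in its simplest form, filtering candidates along a single coordinate axis and then verifying the full distance.

    import numpy as np

    X = np.random.rand(10000, 16)
    order = np.argsort(X[:, 0])  # database sorted along coordinate axis 0
    proj = X[order, 0]

    def radius_query(q, r):
        """Return indices of all items within distance r of query q."""
        lo = np.searchsorted(proj, q[0] - r)     # only items whose axis-0
        hi = np.searchsorted(proj, q[0] + r)     # projection can be within r
        cand = order[lo:hi]
        d = np.linalg.norm(X[cand] - q, axis=1)  # verify in full dimension
        return cand[d <= r]

    hits = radius_query(np.random.rand(16), 0.5)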

Metric-Space Methods. Metric-space methods index the distances between database items rather than the individual database items. They are useful when the distances are provided with the dataset, or when the selected metric is too computationally complex for interactive retrieval (and therefore it is more convenient to compute the pairwise distances while adding items to the database). Most metric-space methods are designed for nearest-neighbor queries; few support α-cut queries, and almost none support range queries.

The Voronoi diagram approach associates a Voronoi region with each database item, given a distance function. Different distance functions produce different sets of Voronoi regions. Examples of Voronoi-diagram-based methods include cell-based and X-tree-based methods.

The ordered-list method is appropriate when all pairwise distances between database items are given. Each database item is associated with an ordered list of all the other items, sorted in ascending order of distance. Nearest-neighbor queries then reduce to a point query followed by a scan of the list.
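A minimal sketch of the ordered-list method, assuming Euclidean distances on a synthetic dataset:

    import numpy as np

    X = np.random.rand(200, 16)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances

    # For each item, all other items sorted by ascending distance
    # (column 0 of the argsort is the item itself, so it is dropped).
    lists = np.argsort(D, axis=1)[:, 1:]

    nn_of_7 = lists[7, 0]  # nearest neighbor of item 7: head of its list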

Vantage-point methods build a tree structure in which each internal node indexes a disjoint subset of the database, has two children, and is associated with a database item called the vantage point. The items indexed by an internal node are organized according to their distance from the vantage point: for example, the median distance is computed, the items closer to the vantage point than the median distance are associated with the left subtree, and the remaining ones with the right subtree.
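A minimal build sketch of such a vantage-point tree (plain Python with numpy distances; vantage-point selection is simplified to taking the first item):

    import numpy as np

    def build_vp_tree(items, dist):
        if not items:
            return None
        vp, rest = items[0], items[1:]  # simplified vantage-point choice
        if not rest:
            return {"vp": vp, "mu": 0.0, "left": None, "right": None}
        d = [dist(vp, x) for x in rest]
        mu = float(np.median(d))        # median distance to the vantage point
        left = [x for x, dx in zip(rest, d) if dx < mu]    # closer than median
        right = [x for x, dx in zip(rest, d) if dx >= mu]  # the remaining items
        return {"vp": vp, "mu": mu,
                "left": build_vp_tree(left, dist),
                "right": build_vp_tree(right, dist)}

    pts = [np.random.rand(16) for _ in range(100)]
    tree = build_vp_tree(pts, lambda a, b: float(np.linalg.norm(a - b)))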
