WO2013053320A1 - Image retrieval method and device - Google Patents

Image retrieval method and device

Info

Publication number
WO2013053320A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
feature
images
similarity
Prior art date
Application number
PCT/CN2012/082746
Other languages
French (fr)
Chinese (zh)
Inventor
田卉 (Tian Hui)
Original Assignee
中国移动通信集团公司 (China Mobile Communications Corporation)
Priority date
Filing date
Publication date
Application filed by 中国移动通信集团公司 (China Mobile Communications Corporation)
Publication of WO2013053320A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Definitions

  • The present invention relates to the field of image retrieval technologies within the field of image processing technologies, and in particular to an image retrieval method and apparatus.
  • In the background art, Content-Based Image Retrieval (CBIR) uses the visual features of an image for retrieval: the image content is analyzed directly, image features and semantics are extracted, and an index is built from them, with feature extraction and matching completed automatically by machine.
  • Content-based image retrieval combines image processing, pattern recognition, computer vision, image understanding, database management, human-computer interaction, and other technologies; as a fusion of multiple technologies with a wide range of applications, it has developed rapidly.
  • However, existing content-based image retrieval techniques focus on global image features and describe the image as an indivisible whole: color, texture, shape, and other features are used to describe the entire image without distinguishing foreground from background, and retrieval is often based on a single feature.
  • Embodiments of the invention provide an image retrieval method and apparatus to solve the prior-art problems of poor retrieval-result validity and low retrieval efficiency when searching based on image content.
  • An embodiment of the invention provides an image retrieval method, comprising: dividing an image to be retrieved into a plurality of sub-images; performing image feature extraction on each designated sub-image among the plurality of sub-images to obtain a feature vector of each designated sub-image; for each image in an image library, determining the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, where the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images; and
  • determining an image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.
  • An embodiment of the present invention further provides an image retrieval apparatus, including:
  • a dividing unit configured to divide the image to be retrieved into sub-pictures to obtain a plurality of sub-images
  • an extracting unit, configured to perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain the feature vector of each designated sub-image;
  • a similarity determining unit, configured to determine, for each image in the image library, the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, where the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images;
  • a retrieval result determining unit, configured to determine the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.
  • In the method provided by the embodiments of the invention, both the image to be retrieved and the images in the image library are divided into sub-images, and image feature extraction is performed on each designated sub-image among the plurality of sub-images of the image to be retrieved to obtain the feature vector of each designated sub-image.
  • When performing image retrieval, for each image in the image library, sub-image groups to be matched are determined in that image, where the relative positions between the sub-images in each sub-image group to be matched are the same as the relative positions between the designated sub-images; the image is then compared with the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched and the feature vectors of the designated sub-images.
  • FIG. 1 is a flowchart of the image retrieval method provided by an embodiment of the present invention;
  • FIG. 2 is a flowchart of the image retrieval method provided in Embodiment 1 of the present invention;
  • FIG. 3 is a schematic diagram of sub-image division and the index window in Embodiment 1 of the present invention;
  • FIG. 4 is a schematic diagram of determining mapping windows based on the index window in Embodiment 1 of the present invention;
  • FIG. 5 is a flowchart of the image retrieval method provided in Embodiment 2 of the present invention;
  • FIG. 6 is a schematic structural diagram of the image retrieval apparatus provided in Embodiment 3 of the present invention.
  • To provide an implementation that improves the validity of retrieval results and retrieval efficiency when searching based on image content, the embodiments of the present invention provide an image retrieval method and apparatus. Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are merely intended to illustrate and explain the invention and are not intended to limit it. Provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.
  • An embodiment of the present invention provides an image retrieval method, as shown in FIG. 1, including:
  • Step S101: Divide the image to be retrieved into a plurality of sub-images.
  • Step S102: Perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain the feature vector of each designated sub-image.
  • Step S103: For each image in the image library, determine the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, wherein the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images.
  • Step S104: Determine the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.
  • Preferably, when performing image feature extraction in the above method, a feature vector may be obtained for each designated sub-image; the feature vector may include multiple kinds of feature vectors, and each kind of feature vector may include multiple feature components.
  • Correspondingly, when determining the similarity between an image in the image library and the image to be retrieved, the determination may be based on the sub-image weight corresponding to each designated sub-image, on the inter-feature weight corresponding to each kind of feature vector, and on the intra-feature weight corresponding to each feature component.
  • Preferably, after the image retrieval result is displayed to the user, the user may evaluate the relevance of each retrieval-result image. Based on the user's relevance evaluations, the weights used when determining the similarity between the images in the image library and the image to be retrieved are adjusted to obtain adjusted weights; the latest similarity between each image in the image library and the image to be retrieved is then determined based on the adjusted weights, and the latest image retrieval result corresponding to the image to be retrieved is determined based on those latest similarities.
  • FIG. 2 is a flowchart of the image retrieval method provided in Embodiment 1 of the present invention, which specifically includes the following steps:
  • Step S201 Acquire an image to be retrieved submitted by the user, and perform subgraph division on the image to be retrieved to obtain a plurality of sub-images.
  • Specifically, sub-image division may use various prior-art methods to divide the image to be retrieved into multiple rectangular sub-images of the same size, for example into the 4 × 4 = 16 sub-images shown in FIG. 3.
  • Making the sub-images rectangular and of equal size merely simplifies the subsequent similarity calculation; it is not a strict division condition.
  • The embodiment of the present invention proposes a pseudo-quadtree division method, as follows: in the first-level division, the image is divided into m × n rectangular sub-images of the same size, where the greatest common divisor of the image width and height is determined first, m is the image width divided by that greatest common divisor, and n is the image height divided by it. In the second-level division, each of the m × n sub-images is further divided into 2 × 2 sub-images, so that the image is divided into (2·m) × (2·n) rectangular sub-images of the same size; and so on, so that the image can be divided into (2^k·m) × (2^k·n) rectangular sub-images of the same size, where k is the number of division levels minus 1. A sketch of the first-level division follows.
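  • The following is a minimal sketch of the first-level division described above, assuming the image is held as a NumPy array; the function and variable names are illustrative and not taken from the patent.

```python
import math
import numpy as np

def first_level_division(image: np.ndarray) -> list:
    """Split an image into m x n equal rectangular sub-images, where
    m = width / gcd(width, height) and n = height / gcd(width, height)."""
    height, width = image.shape[:2]
    g = math.gcd(width, height)
    m, n = width // g, height // g            # columns, rows of the grid
    cell_w, cell_h = width // m, height // n  # both equal the gcd
    return [image[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            for r in range(n) for c in range(m)]

# Each further division level splits every sub-image 2 x 2, giving a
# (2^k * m) x (2^k * n) grid after k additional levels.
```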
  • Step S202 Select each designated sub-image from the plurality of sub-images of the image to be retrieved, and the selected designated sub-images constitute a designated sub-image group.
  • the specified sub-image can be automatically selected by the device according to a certain selection strategy.
  • the plurality of sub-images can be displayed to the user, and the user selects the specified sub-image according to the retrieval requirement of the user.
  • The designated sub-images selected by the user correspond to the user's region of interest: the purpose of the current retrieval is to find images whose expressed meaning matches that region of interest, so having the user select the designated sub-images improves the validity of the retrieval results.
  • the user selects the designated sub-images 1-5 among the 16 sub-images.
  • To simplify the subsequent determination of the sub-images in the sub-image groups to be matched of the images in the image library, in this step, after the designated sub-images are selected, an index window containing all designated sub-images may also be determined: the smallest rectangular window that is composed of sub-images of the image and contains every designated sub-image.
  • For example, as shown in FIG. 3, the index window contains the 3 × 3 = 9 sub-images at the upper left of the image to be retrieved.
  • Step S203: Perform image feature extraction on each designated sub-image among the plurality of sub-images of the image to be retrieved to obtain the feature vector of each designated sub-image.
  • Specifically, a feature vector may be extracted for each designated sub-image.
  • the feature vector of the specified sub-image may include a plurality of feature vectors.
  • each feature vector may include a plurality of feature components. The following is a detailed description of the extraction of three feature vectors of color, texture and shape of an image.
  • 1. Extraction of the color feature vector: the color distribution is represented by the central moments of the image pixels. Because color information is concentrated mainly in the lower-order moments, the first-order moment (mean), the second-order moment (variance), and the third-order moment (skewness) are used to represent the overall color distribution of the image.
  • The color feature vector proposed in the embodiment of the present invention includes nine color feature components, corresponding respectively to the first-order moment (mean), second-order moment (variance), and third-order moment (skewness) of each of the red, green, and blue channels.
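  • A sketch of the nine-component color feature vector described above is given below, assuming an RGB sub-image stored as a NumPy array; taking the cube root of the third central moment is one common convention and is an assumption here, since the patent's exact formulas are not legible.

```python
import numpy as np

def color_moments(sub_image: np.ndarray) -> np.ndarray:
    """Return [mean, standard deviation, skewness] for each of the R, G, B
    channels of an RGB sub-image (9 color feature components in total)."""
    features = []
    for ch in range(3):                                # red, green, blue
        pixels = sub_image[:, :, ch].astype(np.float64).ravel()
        mean = pixels.mean()                           # first-order moment
        std = np.sqrt(((pixels - mean) ** 2).mean())   # second-order moment
        skew = np.cbrt(((pixels - mean) ** 3).mean())  # third-order moment
        features.extend([mean, std, skew])
    return np.asarray(features)
```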
  • 2. Extraction of the texture feature vector: the gray-level co-occurrence matrix of the image is computed and its statistics are used as the texture features.
  • Gray-level co-occurrence matrices are built in four directions (0°, 45°, 90°, and 135°); each element of the matrix for a given direction records how many times the corresponding pair of gray levels occurs for pixel pairs along that direction, where the gray levels range over 0-255.
  • From the co-occurrence matrix of each direction, four statistics corresponding to that direction are determined: energy, entropy, contrast, and correlation.
  • In the correlation statistic, μ_h is the mean of the elements of the h-th row of the co-occurrence matrix, μ_k is the mean of the elements of the k-th column, σ_h is the standard deviation of the elements of the h-th row, and σ_k is the standard deviation of the elements of the k-th column.
  • The mean and the standard deviation over the four directions of each of the four statistics are taken as the texture feature components of the extracted texture feature vector.
  • A total of 8 texture feature components are thus included: the energy mean and energy standard deviation, the entropy mean and entropy standard deviation, the contrast mean and contrast standard deviation, and the correlation mean and correlation standard deviation.
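  • A sketch of the eight-component texture feature vector follows, using scikit-image's gray-level co-occurrence matrix (function names as in recent scikit-image releases). The pixel-pair distance, the 256-level quantization, and the exact definitions of the four statistics are assumptions, since the corresponding passages above are garbled.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray: np.ndarray, distance: int = 2) -> np.ndarray:
    """8 texture components: mean and standard deviation, over the four
    directions 0/45/90/135 degrees, of energy, entropy, contrast and
    correlation computed from the GLCM of an 8-bit grayscale image."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray, distances=[distance], angles=angles,
                        levels=256, symmetric=True, normed=True)
    energy = graycoprops(glcm, 'energy')[0]           # one value per direction
    contrast = graycoprops(glcm, 'contrast')[0]
    correlation = graycoprops(glcm, 'correlation')[0]
    p = glcm[:, :, 0, :]                              # normalized matrices, shape (256, 256, 4)
    entropy = np.array([-np.sum(p[:, :, a] * np.log2(p[:, :, a] + 1e-12))
                        for a in range(4)])
    stats = [energy, entropy, contrast, correlation]
    return np.concatenate([[s.mean(), s.std()] for s in stats])
```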
  • 3. Extraction of the shape feature vector: the shape-invariant-moment method is used to extract the shape feature vector of the image. The Canny operator is applied to the image f(x, y) to extract the edge information of each pixel, and the two-dimensional (p+q)-order moments of the edge image and the translation-invariant normalized central moments are then determined, where:
  • m_pq = Σ_x Σ_y x^p · y^q · f(x, y) is the two-dimensional (p+q)-order moment, and the central moment is μ_pq = Σ_x Σ_y (x - x0)^p · (y - y0)^q · f(x, y), where (x0, y0) are the coordinates of the center of gravity of the image.
  • The second-order and third-order central moments are combined to obtain seven invariant moments that are independent of translation, rotation, and scale; these seven invariant moments are taken as the shape feature components of the extracted shape feature vector. The seven invariant moments can be calculated by prior-art methods and are not described further here.
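  • A sketch of the seven-component shape feature vector follows, using OpenCV's Canny operator and Hu moments as a stand-in for the invariant moments described above; OpenCV's exact moment definitions and the Canny thresholds are assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def shape_features(gray: np.ndarray) -> np.ndarray:
    """7 shape components: Hu invariant moments of the Canny edge map of an
    8-bit grayscale sub-image."""
    edges = cv2.Canny(gray, 100, 200)   # edge information f(x, y)
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()
    # Log-scale the invariants so their magnitudes are comparable (a common convention).
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```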
  • Step S204: For each image in the image library, determine each sub-image group to be matched in that image, where the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images of the image to be retrieved; that is, groups of sub-images are determined in the library image such that the relative positions of the sub-images within each group match those of the designated sub-images.
  • Sub-image division is performed in advance on each image in the image library, using the same division method as for the image to be retrieved; the number of resulting sub-images may be equal to, or larger than, the number of sub-images obtained by dividing the image to be retrieved.
  • Specifically, each mapping window is determined in the library image, where a mapping window satisfies the following condition: the number and arrangement of the sub-images it contains are the same as the number and arrangement of the sub-images in the index window.
  • For example, as shown in FIG. 4, the index window contains the 3 × 3 = 9 sub-images at the upper left of the image to be retrieved, i.e. it contains 9 sub-images arranged 3 × 3; for a library image divided into 4 × 4 sub-images, four mapping windows satisfying this condition can be determined, namely the upper-left, lower-left, upper-right, and lower-right windows of 3 × 3 = 9 sub-images each. Because the index window and each mapping window contain the same number and arrangement of sub-images, there is a one-to-one correspondence between the sub-images in the index window and the sub-images in each mapping window.
  • Among the sub-images contained in each mapping window, the sub-images whose positions correspond to the designated sub-images in the index window are determined; they constitute the sub-image group to be matched corresponding to that mapping window. A sketch of this enumeration follows.
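  • The following sketches how the mapping windows and the corresponding sub-image groups to be matched can be enumerated; the grid sizes and the offsets of the designated sub-images inside the index window are illustrative values, not taken from FIG. 3.

```python
def sub_image_groups_to_match(grid_rows: int, grid_cols: int,
                              win_rows: int, win_cols: int,
                              offsets: list) -> list:
    """For every mapping-window position in a grid_rows x grid_cols sub-image
    grid, return the grid coordinates of the sub-images whose relative
    positions match the designated sub-images (given as (row, col) offsets
    inside the index window)."""
    groups = []
    for top in range(grid_rows - win_rows + 1):
        for left in range(grid_cols - win_cols + 1):
            groups.append([(top + dr, left + dc) for dr, dc in offsets])
    return groups

# A 3 x 3 index window slid over a 4 x 4 grid yields 2 * 2 = 4 mapping windows
# (upper-left, upper-right, lower-left, lower-right), as in FIG. 4.
groups = sub_image_groups_to_match(4, 4, 3, 3,
                                   offsets=[(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)])
```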
  • Step S205: For each image in the image library, determine the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images of the image to be retrieved.
  • the feature vector of each sub-image of each image in the image library may be pre-extracted and stored, and may be directly acquired during use.
  • the similarity between a sub-image group to be matched and a specified sub-image group is determined by the following formula:
  • S(q, P_l) = Σ_{i=1..N} w_1i · S(q_i, p_i), where S(q, P_l) is the similarity between the l-th sub-image group to be matched of image P in the image library and the designated sub-image group of the image q to be retrieved, S(q_i, p_i) is the similarity between the i-th designated sub-image and the sub-image at the corresponding position in the sub-image group to be matched, w_1i is the sub-image weight corresponding to the i-th designated sub-image, and N is the number of sub-images in the designated sub-image group.
  • S(q_i, p_i) = Σ_{j=1..M} w_2j · S_j(q_i, p_i), where S_j(q_i, p_i) is the similarity, for the j-th feature vector, between the i-th sub-image in the sub-image group to be matched and the i-th designated sub-image at the corresponding position, w_2j is the inter-feature weight corresponding to the j-th feature vector, and M is the number of kinds of feature vectors extracted for the i-th sub-image; for example, when color, texture, and shape feature vectors are extracted, M takes the value 3.
  • The similarity S_j(q_i, p_i) in the above formula can be determined as a sum of weighted Euclidean distances: S_j(q_i, p_i) = Σ_{k=1..H} w_3k · S_jk(q_i, p_i), where S_jk(q_i, p_i) is the similarity of the k-th feature component of the j-th feature vector between the i-th sub-image in the sub-image group to be matched and the i-th designated sub-image at the corresponding position, w_3k is the intra-feature weight corresponding to the k-th component of the j-th feature vector, and H is the number of feature components included in the extracted j-th feature vector; for example, the color feature vector includes nine feature components, the texture feature vector includes eight, and the shape feature vector includes seven.
  • S_jk(q_i, p_i) can be determined from the component difference r_ijk,q - r_ijk,p, where r_ijk,q is the k-th feature component of the j-th feature vector of the i-th designated sub-image of the image q to be retrieved, and r_ijk,p is the k-th feature component of the j-th feature vector of the i-th sub-image in the sub-image group to be matched of image P in the image library.
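  • A sketch of the three-level weighted combination described above follows: per-component similarities are combined with intra-feature weights, feature vectors with inter-feature weights, and sub-images with sub-image weights. Turning the raw component difference into a bounded per-component similarity is an assumption, since the exact formula above is not legible.

```python
import numpy as np

def group_similarity(query_feats, cand_feats, w_sub, w_feat, w_comp) -> float:
    """Similarity between the designated sub-image group (query) and one
    sub-image group to be matched (candidate).

    query_feats / cand_feats: list over the N sub-images, each a list of M
        feature vectors (e.g. color, texture, shape) as 1-D arrays.
    w_sub:  N sub-image weights.   w_feat: M inter-feature weights.
    w_comp: list of M arrays of intra-feature (per-component) weights.
    """
    total = 0.0
    for i, w1 in enumerate(w_sub):                   # designated sub-images
        s_i = 0.0
        for j, w2 in enumerate(w_feat):              # feature-vector types
            diff = np.abs(query_feats[i][j] - cand_feats[i][j])
            s_ij = np.sum(w_comp[j] / (1.0 + diff))  # per-component similarity (assumed form)
            s_i += w2 * s_ij
        total += w1 * s_i
    return total
```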
  • Step S206: For each image in the image library, after determining the similarity between each of its sub-image groups to be matched and the designated sub-image group of the image to be retrieved, take the average of these similarities as the similarity between that image and the image to be retrieved, or, preferably, take the maximum of these similarities as the similarity between that image and the image to be retrieved.
  • Step S207 Determine an image retrieval result corresponding to the image to be retrieved based on a magnitude of similarity between each image in the image library and the image to be retrieved.
  • Specifically, the images in the image library whose similarity with the image to be retrieved is greater than a set similarity threshold may be determined as the image retrieval result corresponding to the image to be retrieved.
  • Alternatively, a set number of images, taken in descending order of similarity with the image to be retrieved, may be determined as the image retrieval result corresponding to the image to be retrieved. A sketch of both strategies follows.
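  • The following brief sketch illustrates the two result-selection strategies (similarity threshold or a set number of top-ranked images); `similarities` is assumed to map library image identifiers to their similarity with the image to be retrieved.

```python
def select_results(similarities: dict, threshold: float = None, top_n: int = None) -> list:
    """Return library image ids above a similarity threshold, or the top-N by similarity."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:
        return [img_id for img_id, s in ranked if s > threshold]
    return [img_id for img_id, _ in ranked[:top_n]]
```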
  • In the image retrieval method provided by Embodiment 1 of the present invention, retrieval is performed with respect to each designated sub-image of the image to be retrieved and each sub-image in each sub-image group to be matched of every image in the image library. Compared with prior-art retrieval based on global image features, the retrieval is therefore more targeted and can more effectively find images whose expressed meaning matches the meaning expressed by the designated sub-images of the image to be retrieved, improving both the validity of the retrieval results and the retrieval efficiency.
  • Embodiment 2: In order to return the images the user needs more accurately, Embodiment 2 of the present invention proposes adjusting, based on the user's relevance evaluation of each retrieval-result image, the weights used when determining the similarities, and then determining the latest similarities and the latest image retrieval result based on the adjusted weights. As shown in FIG. 5, the method specifically includes the following steps:
  • Step S501: Display to the user each retrieval-result image in the image retrieval result determined in Embodiment 1 above, and provide, for each retrieval-result image, a relevance evaluation option so that the user can evaluate the relevance of each retrieval-result image.
  • Step S502: Acquire the user's relevance evaluation result for each retrieval-result image. The relevance evaluation result may include three types: a relevant result indicating relevance, a non-evaluated result indicating that no judgment was made, and an irrelevant result indicating irrelevance.
  • Step S503 Adjust the weights used when determining the similarity between each image in the image library and the image to be retrieved based on the obtained correlation evaluation result of each search result image, and obtain the adjusted weight.
  • Specifically, when the weight used is the sub-image weight, the corresponding adjusted weight is determined in the following manner:
  • Step A: Determine the positive example images whose relevance evaluation result in the image retrieval result is a relevant result; for example, the number of images included in the image retrieval result is T, and the number of positive example images is T1.
  • Step B: For the sub-image group to be matched with the highest similarity in each positive example image, determine the similarity of each pair of sub-images at corresponding positions between that sub-image group and the designated sub-image group.
  • Specifically, the similarities determined for each positive example image and each designated sub-image may be arranged into a matrix (z_ti) of order T1 × N, where N is the number of sub-images in the designated sub-image group, and element z_ti is the similarity between the i-th designated sub-image and the sub-image at the corresponding position in the highest-similarity sub-image group to be matched of the t-th positive example image.
  • Step C: For each designated sub-image in the designated sub-image group, determine the reciprocal of the standard deviation of the similarities of the sub-image pairs corresponding to that designated sub-image's position among the sub-image groups to be matched with the highest similarity.
  • Specifically, the standard deviation σ_i of each column of the above matrix (z_ti) is computed; σ_i is the standard deviation corresponding to the i-th designated sub-image, and its reciprocal 1/σ_i is calculated.
  • A smaller value of σ_i means that the designated sub-image is valued more consistently by the user, so the reciprocal of the standard deviation reflects the magnitude of the sub-image weight of that designated sub-image.
  • Step D: Normalize the reciprocals corresponding to the designated sub-images, and take the normalized result corresponding to each designated sub-image as the adjusted sub-image weight corresponding to that designated sub-image. A sketch of steps A-D follows.
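  • A sketch of steps A-D, assuming `z` is the T1 × N matrix of per-sub-image similarities taken from the highest-similarity matched group of each positive example image:

```python
import numpy as np

def adjust_sub_image_weights(z: np.ndarray) -> np.ndarray:
    """z[t, i]: similarity of the i-th designated sub-image for the t-th
    positive example image.  Returns N adjusted sub-image weights,
    proportional to the reciprocal of each column's standard deviation."""
    inv = 1.0 / (z.std(axis=0) + 1e-12)  # smaller spread -> larger weight
    return inv / inv.sum()               # normalize so the weights sum to 1
```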
  • When the weight used is the inter-feature weight, the adjusted weight is determined as follows. Step a: For each of the plurality of feature vectors, determine a single-feature retrieval result corresponding to the image to be retrieved for that feature vector, based on that feature vector of each sub-image in each sub-image group to be matched of every image in the image library and that feature vector of each designated sub-image.
  • In Embodiment 1, the plurality of feature vectors are used together to determine the image retrieval result corresponding to the image to be retrieved, whereas in step a the retrieval result corresponding to the image to be retrieved is determined based on a single feature vector.
  • An image retrieval result determined based on a single feature vector is referred to as a single-feature retrieval result; for example, the result determined based on the color feature vector is referred to as the color retrieval result, the result determined based on the texture feature vector as the texture retrieval result, and the result determined based on the shape feature vector as the shape retrieval result.
  • The specific determination method can be the same as in Embodiment 1 above; the only difference is that it is based on a single feature vector.
  • Step b: Determine the retrieval-result images that exist in both the single-feature retrieval result and the image retrieval result determined in Embodiment 1.
  • Step c: Determine a first sum value, namely the sum of the scores corresponding to the relevance evaluation results of the retrieval-result images found in step b, and determine a second sum value of the first sum value and the initial value of the inter-feature weight of the feature vector.
  • Specifically, the second sum value may be determined by the following formula: w'_2j = w_2j + A_j, where w_2j is the initial value of the inter-feature weight of the j-th feature vector (for example, the initial value of the inter-feature weight of each feature vector may be set to 0), A_j is the first sum value corresponding to the j-th feature vector, and w'_2j is the second sum value corresponding to the j-th feature vector.
  • The score corresponding to each relevance evaluation result may be set according to actual needs; for example, in this embodiment, the score corresponding to a relevant result is set to 2, the score corresponding to a non-evaluated result to 1, and the score corresponding to an irrelevant result to 0.
  • Step d: Normalize the second sum values corresponding to the plurality of feature vectors, and take the normalized result corresponding to each feature vector as the adjusted inter-feature weight corresponding to that feature vector.
  • The adjusted inter-feature weight can be determined by the following formula: W_2j = w'_2j / Σ_{j=1..M} w'_2j, where W_2j is the adjusted inter-feature weight corresponding to the j-th feature vector, w'_2j is the second sum value corresponding to the j-th feature vector, and M is the number of kinds of feature vectors extracted for the sub-images. A sketch of steps a-d follows.
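  • A sketch of steps a-d for the inter-feature weights, assuming the per-feature relevance scores have already been summed over the overlapping retrieval-result images (relevant = 2, not evaluated = 1, irrelevant = 0, as stated above):

```python
import numpy as np

def adjust_inter_feature_weights(first_sums: np.ndarray,
                                 initial_weights: np.ndarray) -> np.ndarray:
    """first_sums[j]: summed relevance scores of the images shared by the j-th
    single-feature retrieval result and the overall retrieval result.
    Returns the normalized adjusted inter-feature weights W_2j."""
    second_sums = initial_weights + first_sums  # step c
    return second_sums / second_sums.sum()      # step d

# Example: color/texture/shape score sums of 6, 2 and 4 with initial weights 0.
weights = adjust_inter_feature_weights(np.array([6.0, 2.0, 4.0]), np.zeros(3))
```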
  • When the weight used is the intra-feature weight, the adjusted weight is determined as follows. Step 1: Determine the positive example images whose relevance evaluation result in the image retrieval result is a relevant result; for example, the number of images included in the image retrieval result is T, and the number of positive example images is T1.
  • Step 2: For the sub-image group to be matched with the highest similarity in each positive example image, determine, for each feature vector, each feature component included in that feature vector of each sub-image in the sub-image group to be matched.
  • Specifically, the determined feature components may be arranged into a matrix (r_tk) of order T1 × H, where H is the number of feature components included in the extracted j-th feature vector, and element r_tk is the k-th feature component of the j-th feature vector of the i-th sub-image in the sub-image group to be matched of the t-th positive example image.
  • Step 3: For each sub-image in the sub-image group to be matched and each feature component, determine the standard deviation, over the positive example images, of that feature component of the feature vector of that sub-image, as the standard deviation corresponding to that feature component of that sub-image.
  • Specifically, the standard deviation σ_ijk of each column of the above matrix (r_tk) is computed; σ_ijk represents the standard deviation corresponding to the i-th sub-image and the k-th feature component of the j-th feature vector.
  • Step 4: For each sub-image and each feature vector, normalize the reciprocals of the standard deviations corresponding to the feature components included in that feature vector, obtaining a processing result for each sub-image and each feature component. A smaller standard deviation means that the corresponding feature component is valued more by the user, so the reciprocal of the standard deviation reflects the magnitude of the intra-feature weight of that feature component. For each feature component, the average of the processing results of that component over the sub-images is then taken as the adjusted intra-feature weight corresponding to that component. A sketch of these steps follows.
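  • A sketch of steps 1-4, including the final averaging over sub-images; `r` is assumed to hold, for one kind of feature vector, the feature components of the best-matching sub-image groups of the positive example images.

```python
import numpy as np

def adjust_intra_feature_weights(r: np.ndarray) -> np.ndarray:
    """r: array of shape (T1 positive images, N sub-images, H components) for
    one feature vector.  Returns the H adjusted intra-feature weights."""
    inv = 1.0 / (r.std(axis=0) + 1e-12)             # 1 / sigma_ijk, shape (N, H)
    per_sub = inv / inv.sum(axis=1, keepdims=True)  # step 4: normalize per sub-image
    return per_sub.mean(axis=0)                     # average over sub-images
```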
  • Step S504 Determine, according to the determined adjusted weight, the latest similarity between the image in the image library and the image to be retrieved.
  • Step S505: Determine the latest image retrieval result corresponding to the image to be retrieved based on the latest similarity between each image in the image library and the image to be retrieved.
  • Specifically, the positive example images may be used as part of the images included in the latest image retrieval result; images in the image library whose latest similarity with the image to be retrieved is greater than the set similarity threshold may be determined as the other images included in the latest image retrieval result.
  • The above method of FIG. 5 in Embodiment 2 of the present invention can be executed multiple times, until a set number of iterations is reached or until the user accepts the currently determined latest image retrieval result as the final image retrieval result.
  • By adjusting the weights based on the user's relevance evaluations and determining the latest image retrieval result from the adjusted weights, the determined retrieval result is brought closer to the user's needs, which further improves the effectiveness and efficiency of image retrieval.
  • Embodiment 3: Corresponding to the image retrieval methods provided in the above embodiments of the present invention, Embodiment 3 of the present invention further provides an image retrieval apparatus, the structure of which is shown in FIG. 6, including:
  • a dividing unit 601, configured to divide the image to be retrieved into a plurality of sub-images;
  • an extracting unit 602, configured to perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain the feature vector of each designated sub-image;
  • a similarity determining unit 603, configured to determine, for each image in the image library, the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, where the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images;
  • a retrieval result determining unit 604, configured to determine the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.
  • Preferably, the similarity determining unit 603 is specifically configured to determine, based on the feature vectors of the sub-images in each sub-image group to be matched of the image and the feature vectors of the designated sub-images, the similarity between each sub-image group to be matched of the image and the designated sub-image group composed of the designated sub-images, and to take the maximum or the average of the similarities between the sub-image groups to be matched and the designated sub-image group as the similarity between the image and the image to be retrieved.
  • the extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images to obtain a feature vector of each of the specified sub-images; the similarity determining unit 603 Specifically, the feature vector of each sub-image in the sub-image group to be matched, and the feature vector of each specified sub-image in the specified sub-image group, respectively determine the sub-image group to be matched and the designated sub- a similarity of each pair of sub-images corresponding to the positions in the image group; weighting and summing the similarities of each pair of sub-images based on the sub-image weights corresponding to each of the specified sub-images in the specified sub-image group, to obtain the Matching the similarity of the sub-image group with the specified sub-image group.
  • the extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images to obtain a plurality of feature vectors of each of the specified sub-images;
  • the unit 603 is specifically configured to use, according to a plurality of feature vectors of one sub-image in the sub-image group to be matched, and a specified sub-picture corresponding to the sub-image position in the specified sub-image group. a plurality of feature vectors of the image, determining a similarity between the sub-image and each feature vector of the specified sub-image; and weighting and summing the similarities of each feature vector based on the inter-feature weights corresponding to each feature vector, The similarity of the sub-image to the specified sub-image.
  • Preferably, the extracting unit 602 is configured to perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain a plurality of feature vectors of each designated sub-image, each of the plurality of feature vectors including a plurality of feature components;
  • the similarity determining unit 603 is specifically configured to determine the sub-image and the designated sub-image based on the plurality of feature components included in one feature vector of the sub-image and the plurality of feature components included in the feature vector of the specified sub-image The similarity of each feature component included in the feature vector of the image;
  • the sum of the weighted Euclidean distances of the similarities of the various feature components is determined based on the intra-feature weights corresponding to each feature component, and the similarity between the sub-image and a feature vector of the specified sub-image is obtained.
  • Preferably, the apparatus further includes:
  • the obtaining unit 605 is configured to obtain a correlation evaluation result of the user for each search result image in the image retrieval result
  • the weight adjustment unit 606 is configured to adjust, according to the obtained correlation evaluation result of each search result image, the weight used in determining the similarity between each image in the image library and the image to be retrieved, and obtain an adjustment Post weight
  • the similarity determining unit 603 is further configured to determine, based on the adjusted weight, the latest similarity between each image in the image library and the image to be retrieved;
  • the search result determining unit 604 is further configured to determine, according to the size of the latest similarity between the image in the image library and the image to be retrieved, the latest image retrieval result corresponding to the image to be retrieved.
  • Preferably, the relevance evaluation results obtained by the obtaining unit 605 include relevant results indicating relevance; the weight adjustment unit 606 is specifically configured to: when the weight used is the sub-image weight, determine the positive example images whose relevance evaluation result in the image retrieval result is a relevant result; for the sub-image group to be matched with the highest similarity in each positive example image, determine the similarity of each pair of sub-images at corresponding positions between that sub-image group and the designated sub-image group; for each designated sub-image in the designated sub-image group, determine the reciprocal of the standard deviation of the similarities of the sub-image pairs corresponding to that designated sub-image's position among the sub-image groups to be matched with the highest similarity; normalize the reciprocals corresponding to the designated sub-images; and take the normalized result corresponding to each designated sub-image as the adjusted sub-image weight corresponding to that designated sub-image;
  • when the weight used is the inter-feature weight: for each of the plurality of feature vectors, determine the single-feature retrieval result corresponding to the image to be retrieved for that feature vector, based on that feature vector of each sub-image in each sub-image group to be matched of every image in the image library and of the designated sub-images; determine the retrieval-result images that exist in both the single-feature retrieval result and the image retrieval result; determine the first sum value of the scores corresponding to the relevance evaluation results of those retrieval-result images, and determine the second sum value of the first sum value and the initial value of the inter-feature weight of the feature vector; normalize the second sum values corresponding to the plurality of feature vectors; and take the normalized result corresponding to each feature vector as the adjusted inter-feature weight corresponding to that feature vector;
  • when the weight used is the intra-feature weight: determine the positive example images whose relevance evaluation result in the image retrieval result is a relevant result; for the sub-image group to be matched with the highest similarity in each positive example image, determine, for each feature vector, each feature component included in that feature vector of each sub-image in the sub-image group to be matched; for each sub-image in the sub-image group to be matched and each feature component, determine the standard deviation, over the positive example images, of that feature component of the feature vector of that sub-image as the standard deviation corresponding to that feature component of that sub-image; for each sub-image and each feature vector, normalize the reciprocals of the standard deviations corresponding to the feature components included in that feature vector to obtain a processing result for each sub-image and each feature component; and, for each feature component, determine the average of the processing results corresponding to that feature component over the sub-images, and use that average as the adjusted intra-feature weight corresponding to that feature component.
  • In summary, the solution provided by the embodiments of the present invention includes: dividing the image to be retrieved into a plurality of sub-images; performing image feature extraction on each designated sub-image among the plurality of sub-images to obtain the feature vector of each designated sub-image; for each image in the image library, determining the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images; and determining the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.
  • With the solution provided by the embodiments of the present invention, the validity of the retrieval results and the retrieval efficiency are improved when searching based on image content.

Abstract

An image retrieval method and device. The method includes: performing sub-image division on an image to be retrieved to obtain a plurality of sub-images; performing image feature extraction on each designated sub-image among the plurality of sub-images to obtain a feature vector of each designated sub-image; for each image in an image library, determining the similarity between the image and the image to be retrieved based on the feature vector of each sub-image in each sub-image group to be matched of the image and the feature vector of each designated sub-image; and determining an image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved. The solution provided by the embodiments of the present invention improves the validity of the retrieval results and the retrieval efficiency when performing retrieval based on image content.

Description

Image Retrieval Method and Device

This application claims priority to Chinese Patent Application No. 201110309996.0, entitled "Image retrieval method and device", filed with the Chinese Patent Office on October 13, 2011, the entire contents of which are incorporated herein by reference.

Technical Field
The present invention relates to the field of image retrieval technologies within the field of image processing technologies, and in particular to an image retrieval method and apparatus.

Background Art

With the rapid development of multimedia technology, images, as content-rich and intuitively expressive visual information, are used ever more widely. Because image data is voluminous and has a low level of abstraction, quickly and efficiently retrieving the image resources a user needs from massive image data has become a new challenge in the field of image processing technology.

Most existing image retrieval techniques use text-based image retrieval: images are first annotated manually with text such as keywords, titles, and additional descriptive information, and text retrieval is then performed using the methods of conventional database management systems.

With the emergence of large-scale digital image libraries, the problems of text-based image retrieval have become increasingly serious. For a large-scale image library, manual text annotation is tedious and time-consuming. Manual annotation is also subjective: different people may understand the same image content differently, and even the same person may understand an image differently in different contexts, so the text annotation of an image is influenced by personal interest and knowledge background. Moreover, images carry a large amount of content, and many images are difficult to describe accurately in words; and because of large cultural differences around the world, images annotated in different languages are limited in their general applicability. Text-based retrieval therefore yields retrieval results of poor validity, and it is difficult to accurately return the images the user needs.

At present, to overcome inefficient manual annotation and ambiguity, Content-Based Image Retrieval (CBIR) has emerged. Its technical idea is to use the visual features of an image for retrieval: the image content is analyzed directly, image features and semantics are extracted, and an index is built from them for retrieval, with feature extraction and matching completed automatically by machine.

Content-based image retrieval combines image processing, pattern recognition, computer vision, image understanding, database management, human-computer interaction, and other technologies; as a fusion of multiple technologies with a wide range of applications, it has developed rapidly.

However, existing content-based image retrieval techniques focus on global image features and describe the image as an indivisible whole, using color, texture, shape, and other features to describe the entire image without distinguishing foreground from background, and in many cases retrieval is based on a single feature.

Therefore, when the content of an image expresses multiple meanings, for example scenery, buildings, and people, the above existing retrieval techniques may extract features that cannot accurately express the meaning of the image content, so that the validity of the retrieval results is poor and the retrieval efficiency is low.
Summary of the Invention

Embodiments of the present invention provide an image retrieval method and apparatus to solve the prior-art problems of poor retrieval-result validity and low retrieval efficiency when searching based on image content.

An embodiment of the present invention provides an image retrieval method, comprising:

dividing an image to be retrieved into a plurality of sub-images;

performing image feature extraction on each designated sub-image among the plurality of sub-images to obtain a feature vector of each designated sub-image;

for each image in an image library, determining the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, wherein the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images; and

determining an image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.

An embodiment of the present invention further provides an image retrieval apparatus, comprising:

a dividing unit, configured to divide the image to be retrieved into a plurality of sub-images;

an extracting unit, configured to perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain a feature vector of each designated sub-image;

a similarity determining unit, configured to determine, for each image in the image library, the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, wherein the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images; and

a retrieval result determining unit, configured to determine the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.

Beneficial effects of the present invention include the following. In the method provided by the embodiments of the present invention, both the image to be retrieved and the images in the image library are divided into sub-images, and image feature extraction is performed on each designated sub-image of the image to be retrieved to obtain the feature vector of each designated sub-image. When retrieval is performed, for each image in the image library, sub-image groups to be matched are determined in that image such that the relative positions between the sub-images in each group are the same as the relative positions between the designated sub-images; the similarity between that image and the image to be retrieved is determined based on the feature vectors of the sub-images in each sub-image group to be matched and the feature vectors of the designated sub-images; and the image retrieval result corresponding to the image to be retrieved is determined based on the similarity between each image in the image library and the image to be retrieved. Because retrieval is performed with respect to the designated sub-images of the image to be retrieved and the sub-images of each sub-image group to be matched of every image in the image library, it is more targeted than prior-art retrieval based on global image features and can more effectively retrieve images whose expressed meaning matches the meaning expressed by the designated sub-images of the image to be retrieved, thereby improving the validity of the retrieval results and the retrieval efficiency.
Brief Description of the Drawings

FIG. 1 is a flowchart of the image retrieval method provided by an embodiment of the present invention;

FIG. 2 is a flowchart of the image retrieval method provided in Embodiment 1 of the present invention;

FIG. 3 is a schematic diagram of sub-image division and the index window in Embodiment 1 of the present invention;

FIG. 4 is a schematic diagram of determining mapping windows based on the index window in Embodiment 1 of the present invention;

FIG. 5 is a flowchart of the image retrieval method provided in Embodiment 2 of the present invention;

FIG. 6 is a schematic structural diagram of the image retrieval apparatus provided in Embodiment 3 of the present invention.
Detailed Description

To provide an implementation that improves the validity of retrieval results and retrieval efficiency when searching based on image content, embodiments of the present invention provide an image retrieval method and apparatus. Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are only intended to illustrate and explain the present invention and are not intended to limit it. Provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.

An embodiment of the present invention provides an image retrieval method, as shown in FIG. 1, including:

Step S101: Divide the image to be retrieved into a plurality of sub-images.

Step S102: Perform image feature extraction on each designated sub-image among the plurality of sub-images to obtain the feature vector of each designated sub-image.

Step S103: For each image in the image library, determine the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the designated sub-images, wherein the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the designated sub-images.

Step S104: Determine the image retrieval result corresponding to the image to be retrieved based on the similarity between each image in the image library and the image to be retrieved.

Preferably, when performing image feature extraction in the above method, a feature vector may be obtained for each designated sub-image; the feature vector may include multiple kinds of feature vectors, and each kind of feature vector may include multiple feature components.

Correspondingly, when determining the similarity between an image in the image library and the image to be retrieved in the above method, the determination may be based on the sub-image weight corresponding to each designated sub-image, on the inter-feature weight corresponding to each kind of feature vector, and on the intra-feature weight corresponding to each feature component.

Preferably, on the basis of the above method, after the image retrieval result is displayed to the user, the user may evaluate the relevance of each retrieval-result image. Based on the user's relevance evaluations, the weights used when determining the similarity between the images in the image library and the image to be retrieved are adjusted to obtain adjusted weights; the latest similarity between each image in the image library and the image to be retrieved is determined based on the adjusted weights, and the latest image retrieval result corresponding to the image to be retrieved is determined based on those latest similarities.

The method and apparatus provided by the present invention are described in detail below with specific embodiments and with reference to the accompanying drawings.

Embodiment 1:
图 2所示为本发明实施例 1中提供的图 H 索方法的流程图,具体包括如 下步骤:  FIG. 2 is a flowchart of a method for providing a cable according to Embodiment 1 of the present invention, which specifically includes the following steps:
步骤 S201、 获取用户提交的待检索图像, 对待检索图像进行子图划分, 得到多个子图像。  Step S201: Acquire an image to be retrieved submitted by the user, and perform subgraph division on the image to be retrieved to obtain a plurality of sub-images.
具体的, 子图划分可釆用现有技术中的各种方法,将待检索图像划分为大 小相同的矩形的多个子图, 例如, 如图 3所示, 划分为 4 X 4的 16个子图。 其 中, 多个子图的大小相同且为矩形, 是便于后续进行相似度计算, 并非是严格 的划分条件。  Specifically, the sub-picture partitioning may use various methods in the prior art to divide the image to be retrieved into multiple sub-graphs of a rectangle of the same size, for example, as shown in FIG. 3, 16 sub-pictures divided into 4×4. . Among them, multiple subgraphs are the same size and rectangular, which is convenient for subsequent similarity calculation, and is not a strict division condition.
In the embodiment of the present invention, a pseudo-quadtree division method is proposed for sub-image division, as follows: in the first-level division, the image is divided into m × n equal rectangular sub-images, where the greatest common divisor of the image width and height is first determined, m is the image width divided by that greatest common divisor, and n is the image height divided by that greatest common divisor.

In the second-level division, each of the m × n sub-images is further divided into 2 × 2, i.e. 4, sub-images, so that the image is divided into (2·m) × (2·n) equal rectangular sub-images. By analogy, the image can be divided into (2^k·m) × (2^k·n) equal rectangular sub-images, where k is the number of division levels minus 1.
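As an illustration only, the following Python sketch shows one way the pseudo-quadtree division described above could be implemented; the function name, the NumPy array layout and the assumption that the image dimensions divide evenly into the grid are not part of the original disclosure.

```python
import math
import numpy as np

def pseudo_quadtree_divide(image: np.ndarray, levels: int):
    """Divide an image into (2^k * m) x (2^k * n) equal rectangular sub-images,
    where m = width / gcd(width, height), n = height / gcd(width, height)
    and k = levels - 1 (assumes the dimensions divide evenly)."""
    height, width = image.shape[:2]
    g = math.gcd(width, height)
    m, n = width // g, height // g            # level-1 grid: m columns, n rows
    k = levels - 1
    cols, rows = (2 ** k) * m, (2 ** k) * n   # grid after repeated 2 x 2 splits
    sub_w, sub_h = width // cols, height // rows
    sub_images = [
        image[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
        for r in range(rows) for c in range(cols)
    ]
    return sub_images, rows, cols

# Example: a 400 x 300 image has gcd 100, so level 1 yields a 4 x 3 grid
# and level 2 yields an 8 x 6 grid of equally sized rectangles.
```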
步骤 S202、 从待检索图像的多个子图像中选择各指定子图像, 所选择的 各指定子图像组成指定子图像组。  Step S202: Select each designated sub-image from the plurality of sub-images of the image to be retrieved, and the selected designated sub-images constitute a designated sub-image group.
指定子图像可以按照一定的选择策略由设备进行自动选择。  The specified sub-image can be automatically selected by the device according to a certain selection strategy.
较佳的, 可以将这多个子图像显示给用户, 由用户根据自身的检索需要进 行指定子图像的选择, 用户所选择的各指定子图像即相当于用户感兴趣区域, 即用户进行本次图像检索的目的,是检索出与其感兴趣区域表达含义相匹配的 图像, 所以, 由用户进行各指定子图像的选择可以提高检索结果的有效性。  Preferably, the plurality of sub-images can be displayed to the user, and the user selects the specified sub-image according to the retrieval requirement of the user. The designated sub-image selected by the user is equivalent to the user's region of interest, that is, the user performs the current image. The purpose of the search is to retrieve an image that matches the meaning of the region of interest. Therefore, the selection of each specified sub-image by the user can improve the validity of the search result.
例如, 如图 3所示, 用户选择了 16个子图像中的指定子图像 1-5。  For example, as shown in Fig. 3, the user selects the designated sub-images 1-5 among the 16 sub-images.
为了便于后续确定图像库中图像的待匹配子图像组中的各子图像,本步骤 中, 在选择各指定子图像后, 还可以确定包括各指定子图像的索引窗口, 该索 引窗口为由这多个子图像中的子图像组成,且包括各指定子图像的, 最小的矩 形窗口。 例如, 如图 3所示, 索引窗口中包括了待检索图像的左上方的 3 x 3 的 9个子图像。  In order to facilitate subsequent determination of each sub-image in the image group to be matched in the image library, in this step, after selecting each of the designated sub-images, an index window including each specified sub-image may also be determined, and the index window is The sub-images of the plurality of sub-images are composed, and include the smallest rectangular window of each of the designated sub-images. For example, as shown in FIG. 3, the index window includes 9 sub-images of 3 x 3 at the upper left of the image to be retrieved.
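A minimal sketch of determining the index window, assuming the sub-images are addressed by (row, column) grid coordinates; the function name and the example cell coordinates are hypothetical.

```python
def index_window(selected_cells):
    """Return the smallest rectangle of grid cells (row_min..row_max,
    col_min..col_max) that contains every user-selected sub-image.
    `selected_cells` is an iterable of (row, col) tuples."""
    rows = [r for r, _ in selected_cells]
    cols = [c for _, c in selected_cells]
    return min(rows), max(rows), min(cols), max(cols)

# Hypothetical FIG. 3 style example: selected cells in the top-left of a
# 4 x 4 grid, e.g. (0,0), (0,1), (1,0), (1,2), (2,1), give the window
# rows 0..2 and columns 0..2, i.e. a 3 x 3 block of 9 sub-images.
```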
步骤 S203、 对待检索图像的多个子图像中的各指定子图像进行图像特征 提取, 得到各指定子图像的特征向量。  Step S203: Perform image feature extraction on each of the plurality of sub-images of the image to be retrieved to obtain feature vectors of the designated sub-images.
具体的, 可以提取出各指定子图像中的每个指定子图像的特征向量。  Specifically, feature vectors of each of the designated sub-images in each of the designated sub-images may be extracted.
进一步的, 指定子图像的特征向量可以包括多种特征向量。  Further, the feature vector of the specified sub-image may include a plurality of feature vectors.
进一步的, 每种特征向量可以包括多种特征分量。 下面以对一个图像的颜色、纹理和形状三种特征向量的提取为例进行具体 描述。 Further, each feature vector may include a plurality of feature components. The following is a detailed description of the extraction of three feature vectors of color, texture and shape of an image.
1、 颜色特征向量的提取:  1. Extraction of color feature vectors:
The color moment method is used: the color distribution is represented by the central moments of the image pixels. Most of the color information is concentrated in the low-order moments, including the first moment (mean), the second moment (variance) and the third moment (skewness), which together describe the overall color distribution of the image, where:

The first-moment mean is determined by the following formula:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} A_i$$

where $\mu$ is the first-moment mean, $A_i$ is the color component value of the i-th pixel in the image, and N is the number of pixels included in the image.

The second moment is determined by the following formula:

$$\sigma = \left(\frac{1}{N}\sum_{i=1}^{N} (A_i - \mu)^2\right)^{1/2}$$

where $\sigma$ is the second-moment value.

The third moment is determined by the following formula:

$$s = \left(\frac{1}{N}\sum_{i=1}^{N} (A_i - \mu)^3\right)^{1/3}$$

where $s$ is the third-moment value.

Since each pixel has 3 color components (a red component, a green component and a blue component), the corresponding first, second and third moments can be determined for each color component. Therefore, the color feature vector of an image proposed in the embodiment of the present invention includes 9 color feature components, respectively: the red first-moment mean $\mu_r$, red second moment $\sigma_r$ and red third moment $s_r$; the green first-moment mean $\mu_g$, green second moment $\sigma_g$ and green third moment $s_g$; and the blue first-moment mean $\mu_b$, blue second moment $\sigma_b$ and blue third moment $s_b$.
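A minimal sketch, assuming an RGB image stored as a channel-last NumPy array, of computing the 9 color-moment components described above; where the original formulas are illegible, the common square-root and cube-root forms of the second and third moments are assumed.

```python
import numpy as np

def color_moment_features(rgb_image: np.ndarray) -> np.ndarray:
    """Return the 9-dimensional color feature vector
    [mu_r, sigma_r, s_r, mu_g, sigma_g, s_g, mu_b, sigma_b, s_b]."""
    feats = []
    for channel in range(3):                           # R, G, B components
        values = rgb_image[..., channel].astype(np.float64).ravel()
        mu = values.mean()                             # first moment (mean)
        sigma = np.sqrt(((values - mu) ** 2).mean())   # second moment
        s = np.cbrt(((values - mu) ** 3).mean())       # third moment
        feats.extend([mu, sigma, s])
    return np.array(feats)
```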
2、 纹理特征向量的提取:  2. Extraction of texture feature vectors:
釆用灰度共生矩阵法, 利用其统计量提取出相应的纹理特征。  Using the gray level co-occurrence matrix method, the corresponding texture features are extracted by using its statistics.
The color image is first converted into a grayscale image, and gray-level co-occurrence matrices are then extracted for the four directions of 0°, 45°, 90° and 135°. For the gray-level co-occurrence matrix of each direction, the element $m_{hk}$ represents the number of times a pixel pair with gray levels h and k occurs in that direction, where the gray levels take values in the range 0-256.

For each direction, the following 4 texture feature statistics corresponding to that direction, namely energy, entropy, contrast and correlation, are determined based on the gray-level co-occurrence matrix of that direction:
The energy texture feature statistic is determined by the following formula:

$$ASM = \sum_{h}\sum_{k} (m_{hk})^2$$

where $ASM$ is the energy texture feature statistic.

The entropy texture feature statistic is determined by the following formula:

$$ENT = -\sum_{h}\sum_{k} m_{hk}\,\log m_{hk}$$

where $ENT$ is the entropy texture feature statistic.

The contrast texture feature statistic is determined by the following formula:

$$CON = \sum_{h}\sum_{k} (h-k)^2\, m_{hk}$$

where $CON$ is the contrast texture feature statistic.

The correlation texture feature statistic is determined by the following formula:

$$COR = \frac{\sum_{h}\sum_{k} h\,k\,m_{hk} - \mu_h\,\mu_k}{\sigma_h\,\sigma_k}$$

where $COR$ is the correlation texture feature statistic, $\mu_h$ is the mean of the elements of row h of the gray-level co-occurrence matrix, $\mu_k$ is the mean of the elements of column k of the gray-level co-occurrence matrix, $\sigma_h$ is the standard deviation of the elements of row h of the gray-level co-occurrence matrix, and $\sigma_k$ is the standard deviation of the elements of column k of the gray-level co-occurrence matrix.
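For illustration, the sketch below builds the co-occurrence matrix for one displacement and computes the four statistics; the correlation term uses the common marginal-distribution form, which may differ in detail from the row/column statistics defined above, and all names are assumptions rather than part of the original disclosure.

```python
import numpy as np

def glcm_statistics(gray: np.ndarray, dr: int, dc: int, levels: int = 256):
    """Build the gray-level co-occurrence matrix of an integer grayscale image
    for one displacement (dr, dc) and return (energy, entropy, contrast,
    correlation)."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = gray.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[gray[r, c], gray[r2, c2]] += 1
    m = glcm / max(glcm.sum(), 1)                         # normalised m_hk
    h = np.arange(levels, dtype=np.float64)
    energy = (m ** 2).sum()                               # ASM
    entropy = -(m[m > 0] * np.log(m[m > 0])).sum()        # ENT
    contrast = (((h[:, None] - h[None, :]) ** 2) * m).sum()   # CON
    p_h, p_k = m.sum(axis=1), m.sum(axis=0)               # marginal distributions
    mu_h, mu_k = (p_h * h).sum(), (p_k * h).sum()
    sigma_h = np.sqrt((p_h * (h - mu_h) ** 2).sum())
    sigma_k = np.sqrt((p_k * (h - mu_k) ** 2).sum())
    corr = ((h[:, None] * h[None, :] * m).sum() - mu_h * mu_k) \
           / (sigma_h * sigma_k + 1e-12)
    return energy, entropy, contrast, corr

# The 8-dimensional texture vector described next is the mean and standard
# deviation of each statistic over the four displacements
# [(0, 1), (-1, 1), (-1, 0), (-1, -1)], i.e. 0, 45, 90 and 135 degrees.
```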
For each of the above 4 texture feature statistics, the mean and the standard deviation of the 4 values of that statistic corresponding to the 4 directions are determined and used as the texture feature components of the extracted texture feature vector of the image. In total 8 texture feature components are included, respectively: the energy mean component $\mu_{ASM}$, the energy standard-deviation component $\sigma_{ASM}$, the entropy mean component $\mu_{ENT}$, the entropy standard-deviation component $\sigma_{ENT}$, the contrast mean component $\mu_{CON}$, the contrast standard-deviation component $\sigma_{CON}$, the correlation mean component $\mu_{COR}$ and the correlation standard-deviation component $\sigma_{COR}$.

3. Extraction of the shape feature vector: the shape invariant moment method is used to extract the shape feature vector of the image.
Specifically, the Canny operator is used to extract the edge information f(x, y) of each pixel in the image, where f(x, y) denotes the edge information extracted by the Canny operator for the pixel with coordinates (x, y). The two-dimensional (p+q)-order moments of the image and the normalized central moments with translation invariance are then determined, where:

The two-dimensional (p+q)-order moment is determined by the following formula:

$$m_{pq} = \sum_{x}\sum_{y} x^p\, y^q\, f(x, y)$$

where $m_{pq}$ is the two-dimensional (p+q)-order moment.

The normalized central moment with translation invariance is determined by the following formula:

$$\mu_{pq} = \sum_{x}\sum_{y} (x - x_0)^p\, (y - y_0)^q\, f(x, y)$$

where $\mu_{pq}$ is the normalized central moment with translation invariance and $(x_0, y_0)$ are the coordinates of the centroid of the image.

Based on $m_{pq}$ and $\mu_{pq}$, the various second-order and third-order central moments are combined to obtain 7 invariant moments that are invariant to translation, rotation and scale, and these 7 invariant moments are taken as the shape feature components of the extracted shape feature vector of the image. The 7 invariant moments can be calculated using methods in the prior art and are not described in detail here.
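A minimal sketch of the shape feature extraction, assuming OpenCV's Canny, moments and HuMoments functions as stand-ins for the operations named above; the edge thresholds and the final log scaling are illustrative choices, not part of the original description.

```python
import cv2
import numpy as np

def shape_features(gray: np.ndarray) -> np.ndarray:
    """Extract the 7 Hu invariant moments from the Canny edge map f(x, y)
    of an 8-bit grayscale image."""
    edges = cv2.Canny(gray, 100, 200)      # edge information f(x, y); thresholds assumed
    m = cv2.moments(edges)                 # raw and central moments of the edge map
    hu = cv2.HuMoments(m).flatten()        # 7 translation/rotation/scale invariants
    # A log transform is commonly applied so the 7 values share a similar scale.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```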
步骤 S204、 针对图像库中每个图像, 确定该图像的各待匹配子图像组, 其中,待匹配子图像组中的各子图像之间的相对位置, 与待检索图像的各指定 子图像之间的相对位置相同, 即在该图像中确定出各组子图像,且每组子图像 均满足, 其包括的各子图像之间的相对位置, 与待检索图像的各指定子图像之 间的相对位置相同。  Step S204: Determine, for each image in the image library, each to-be-matched sub-image group of the image, where a relative position between each sub-image in the sub-image group to be matched, and each specified sub-image of the image to be retrieved The relative positions are the same, that is, each group of sub-images is determined in the image, and each group of sub-images is satisfied, the relative position between the sub-images included, and the specified sub-images of the image to be retrieved The relative position is the same.
本发明实施例中, 针对图像库中的每个图像, 预先进行了子图划分, 具体 可以釆用对待检索图像进行子图划分的相同方法,对图像库中的每个图像进行 子图划分, 所得到的子图像的数量, 可以等于待检索图像划分后得到的多个子 图像的数量, 也可以大于待检索图像划分后得到的多个子图像的数量。  In the embodiment of the present invention, sub-picture division is performed in advance for each image in the image library, and specifically, the same method of sub-picture division of the image to be retrieved may be performed, and sub-picture division is performed on each image in the image library. The number of the obtained sub-images may be equal to the number of the plurality of sub-images obtained after the image to be retrieved is divided, or may be larger than the number of the plurality of sub-images obtained by dividing the image to be retrieved.
基于上述步骤 S202中确定的包括各指定子图像的索引窗口, 具体可釆用 如下方式确定图像库中一个图像的各待匹配子图像组: 首先在该图像中确定各映射窗口, 映射窗口满足的条件为: 其所包括的子 图像的数量和排列方式, 与索引窗口中各指定子图像的数量和排列方式相同, 例如, 如图 4所示, 索引窗口中包括了待检索图像的左上方的 3 x 3的 9个子 图像, 即索引窗口包括的各指定子图像的数量为 9, 排列方式为 3 x 3 , 则在一 个划分为 4 X 4的 16个子图像的图像中,可以确定出满足条件的 4个映射窗口, 分别为左上、 左下、 右上和右下的包括 3 x 3的 9个子图像的映射窗口。 由于 索引窗口与映射窗口中包括的子图像的数量和排列方式相同,所以索引窗口包 括的各子图像与每个映射窗口中包括的各子图像之间,存在位置一一对应的关 系。 Based on the index window including each specified sub-image determined in the above step S202, specifically, each sub-image group to be matched of one image in the image library may be determined as follows: First, each mapping window is determined in the image. The mapping window satisfies the following conditions: The number and arrangement of the sub-images included in the mapping window are the same as the number and arrangement of the specified sub-images in the index window, for example, as shown in FIG. The index window includes 9 sub-images of 3 x 3 at the upper left of the image to be retrieved, that is, the number of designated sub-images included in the index window is 9, and the arrangement is 3 x 3, and the division is 4 X In the image of the 16 sub-images of 4, four mapping windows satisfying the condition can be determined, which are the upper left, lower left, upper right, and lower right mapping windows including 9 sub-images of 3 x 3. Since the number and arrangement of the sub-images included in the index window and the mapping window are the same, there is a one-to-one correspondence between the sub-images included in the index window and each sub-image included in each mapping window.
然后确定由每个映射窗口包括的各子图像中,与索引窗口包括的各指定子 图像位置对应的子图像, 组成该映射窗口对应的待匹配子图像组。  Then, a sub-image corresponding to each specified sub-image position included in the index window among the sub-images included in each mapping window is determined to constitute a sub-image group to be matched corresponding to the mapping window.
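The following sketch, with assumed grid coordinates and function names, enumerates the mapping windows of a database image and derives the corresponding sub-image groups to be matched.

```python
def candidate_groups(db_rows, db_cols, win_rows, win_cols, selected_offsets):
    """Enumerate every mapping window of size win_rows x win_cols inside a
    db_rows x db_cols grid and return, for each window, the grid cells that sit
    at the same offsets as the specified sub-images inside the index window.
    `selected_offsets` are (dr, dc) offsets of the selected cells relative to
    the top-left corner of the index window."""
    groups = []
    for top in range(db_rows - win_rows + 1):
        for left in range(db_cols - win_cols + 1):
            groups.append([(top + dr, left + dc) for dr, dc in selected_offsets])
    return groups

# FIG. 4 style example: a 4 x 4 database grid with a 3 x 3 index window gives
# (4-3+1) * (4-3+1) = 4 mapping windows (upper-left, upper-right, lower-left,
# lower-right), each contributing one sub-image group to be matched.
```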
步骤 S205、 针对图像库中每个图像, 基于该图像的每个待匹配子图像组 中的各子图像的特征向量, 以及待检索图像的各指定子图像的特征向量,确定 该图像的每个待匹配子图像组与指定子图像组的相似度。其中, 图像库中每个 图像的各子图像的特征向量,可以为预先提取并存储的,在使用时可直接获取。  Step S205: determining, for each image in the image library, each of the image vectors based on the feature vectors of each of the sub-images in the to-be-matched sub-image group of the image, and the feature vectors of the specified sub-images of the image to be retrieved. The similarity between the sub-image group to be matched and the specified sub-image group. The feature vector of each sub-image of each image in the image library may be pre-extracted and stored, and may be directly acquired during use.
具体可釆用如下公式确定一个待匹配子图像组与指定子图像组的相似度:  Specifically, the similarity between a sub-image group to be matched and a specified sub-image group is determined by the following formula:
$$S(q, p_l) = \sum_{i=1}^{N} w_{1i}\, S(z_i)$$

where $S(q, p_l)$ is the similarity between the l-th sub-image group to be matched of an image P in the image library and the specified sub-image group of the image q to be retrieved, $S(z_i)$ is the similarity between the i-th sub-image of the l-th sub-image group to be matched and the position-corresponding i-th specified sub-image, $w_{1i}$ is the sub-image weight corresponding to the i-th specified sub-image, and N is the number of sub-images included in the specified sub-image group.

Specifically, $S(z_i)$ above can be determined by the following formula:

$$S(z_i) = \sum_{j=1}^{M} w_{2j}\, S_j(z_i)$$

where $S_j(z_i)$ is the similarity of the j-th feature vector between the i-th sub-image of the sub-image group to be matched and the position-corresponding i-th specified sub-image, $w_{2j}$ is the inter-feature weight corresponding to the j-th feature vector, and M is the number of types of feature vectors extracted for the i-th sub-image; for example, when the extracted feature vectors include the three feature vectors of color, texture and shape, M takes the value 3.

Specifically, $S_j(z_i)$ above can be determined by the following formula for the sum of weighted Euclidean distances:

$$S_j(z_i) = \sum_{k=1}^{H} w_{3jk}\, S_{jk}(z_i)$$

where $S_{jk}(z_i)$ is the similarity of the k-th feature component of the j-th feature vector between the i-th sub-image of the sub-image group to be matched and the position-corresponding i-th specified sub-image, $w_{3jk}$ is the intra-feature weight corresponding to the k-th feature component of the j-th feature vector, and H is the number of feature components included in the extracted j-th feature vector; for example, the color feature vector includes 9 feature components, the texture feature vector includes 8 feature components, and the shape feature vector includes 7 feature components.

Specifically, $S_{jk}(z_i)$ above is determined from the difference $f^{Q}_{ijk} - f^{P}_{ijk}$, where $f^{Q}_{ijk}$ is the k-th feature component of the j-th feature vector of the i-th specified sub-image of the image q to be retrieved, and $f^{P}_{ijk}$ is the k-th feature component of the j-th feature vector of the i-th sub-image in the sub-image group to be matched of the image P in the image library.
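A minimal sketch of the three-level weighted combination defined by the formulas above; because the exact component-level formula is not legible in the source, a simple normalized absolute-difference similarity is assumed for that level.

```python
def component_similarity(fq: float, fp: float) -> float:
    """Illustrative (assumed) component-level similarity: 1 when the two
    component values are equal, approaching 0 as they diverge."""
    denom = abs(fq) + abs(fp)
    return 1.0 if denom == 0 else 1.0 - abs(fq - fp) / denom

def group_similarity(query_feats, cand_feats, w1, w2, w3):
    """S(q, p_l): weighted similarity between the specified sub-image group
    and one sub-image group to be matched.
    query_feats[i][j][k], cand_feats[i][j][k]: k-th component of the j-th
    feature vector of the i-th specified / matched sub-image.
    w1[i]: sub-image weights; w2[j]: inter-feature weights;
    w3[j][k]: intra-feature weights."""
    total = 0.0
    for i, (q_sub, p_sub) in enumerate(zip(query_feats, cand_feats)):
        s_zi = 0.0
        for j, (q_vec, p_vec) in enumerate(zip(q_sub, p_sub)):
            s_j = sum(w3[j][k] * component_similarity(qk, pk)
                      for k, (qk, pk) in enumerate(zip(q_vec, p_vec)))
            s_zi += w2[j] * s_j                    # inter-feature weighting
        total += w1[i] * s_zi                      # sub-image weighting
    return total
```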
步骤 S206、 针对图像库中每个图像, 在确定出该图像的每个待匹配子图 像组与待检索图像的指定子图像组的相似度后,将各相似度的平均值作为该图 像与待检索图像的相似度, 或者, 较佳的, 还可以将各相似度中的最大值作为 该图像与待检索图像的相似度。  Step S206: For each image in the image library, after determining the similarity between each sub-image group to be matched and the specified sub-image group of the image to be retrieved, the average value of each similarity is taken as the image and the The similarity of the images is retrieved, or, preferably, the maximum of the similarities may be used as the similarity between the image and the image to be retrieved.
步骤 S207、 基于图像库中的每个图像与待检索图像的相似度的大小, 确 定待检索图像对应的图像检索结果。  Step S207: Determine an image retrieval result corresponding to the image to be retrieved based on a magnitude of similarity between each image in the image library and the image to be retrieved.
具体可以为,将图像库中与待检索图像的相似度大于设定相似度阈值的图 像, 确定为待检索图像对应的图 佥索结果。  Specifically, the image in the image library whose degree of similarity with the image to be retrieved is greater than the set similarity threshold may be determined as the result of the image corresponding to the image to be retrieved.
具体还可以为,将图像库中与待检索图像的相似度从高到低的顺序的前设 定数量的图像, 确定为待检索图像对应的图 佥索结果。  Specifically, it may be determined that the image of the image set in the order of the similarity of the image to be retrieved from the highest to the lowest in the image library is determined as the result of the image corresponding to the image to be retrieved.
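For illustration, a sketch of steps S206 and S207 under an assumed data layout: aggregate the per-group similarities of each library image (maximum, or optionally the mean) and then select results either by a similarity threshold or by a preset top-N count.

```python
def retrieve(group_sims_per_image, threshold=None, top_n=None):
    """group_sims_per_image: {image_id: [similarity of each sub-image group
    to be matched]}. The image-level similarity takes the maximum
    (the embodiment also allows the average)."""
    image_sims = {img: max(sims) for img, sims in group_sims_per_image.items()}
    ranked = sorted(image_sims.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:                       # step S207, option 1
        return [(img, s) for img, s in ranked if s > threshold]
    return ranked[:top_n]                           # step S207, option 2
```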
釆用本发明实施例 1提供的上述图 佥索方法,是针对待检索图像的各指 定子图像,以及针对图像库中每个图像的各待匹配子图像组中的各子图像进行 的, 所以相比技术中基于图像的全局特征进行检索, 针对性更强, 更能够有效 的检索出表达的含义与待检索图像的各指定子图像所表达含义相匹配的图像, 从而提高了检索结果的有效性和检索效率。 The above method of searching for the image provided by the first embodiment of the present invention is for each finger of the image to be retrieved. The stator image is performed for each sub-image in each sub-image group to be matched for each image in the image library, so the retrieval is more targeted and more efficient than the global image-based feature in the technology. The meaning of the expression matches the meaning expressed by each specified sub-image of the image to be retrieved, thereby improving the validity of the retrieval result and the retrieval efficiency.
实施例 2:  Example 2:
为了能够更准确的向用户返回用户所需要的图 H 索结果,本发明实施例 2中, 在上述实施例 1中方法的基础上, 提出基于用户对每个检索结果图像的 相关性评价结果, 调整确定相似度时所使用的权重, 并基于调整后权重, 确定 图像之间的最新相似度, 以确定最新图像检索结果的方案。 如图 5所示, 具体 包括如下处理步骤:  In the second embodiment of the present invention, based on the method in the first embodiment, the correlation evaluation result based on the user's image for each search result is proposed, in order to more accurately return the result of the image required by the user to the user. The weights used in determining the similarity are adjusted, and based on the adjusted weights, the latest similarity between the images is determined to determine the latest image retrieval results. As shown in FIG. 5, the following steps are specifically included:
Step S501: Display to the user each retrieval result image in the image retrieval result determined in Embodiment 1 above, and for each retrieval result image provide the user with a relevance evaluation option for that retrieval result image, so that the user can perform a relevance evaluation on that retrieval result image.
Step S502: Obtain the user's relevance evaluation result for each retrieval result image in the image retrieval result. The relevance evaluation results may specifically include three types: a relevant result indicating that the image is relevant, an irrelevant result indicating that it is not relevant, and a not-judged result indicating that the user made no judgment.
步骤 S503、 基于获取的每个检索结果图像的相关性评价结果, 调整在确 定图像库中的每个图像与待检索图像的相似度时所使用的权重,得到调整后权 重。  Step S503: Adjust the weights used when determining the similarity between each image in the image library and the image to be retrieved based on the obtained correlation evaluation result of each search result image, and obtain the adjusted weight.
针对上述实施例 1中所使用的三个权重,分别釆用如下方式确定对应的调 整后权重:  For the three weights used in the above embodiment 1, the corresponding adjusted weights are determined in the following manner:
1、 指定子图像对应的子图像权重:  1. Specify the sub-image weight corresponding to the sub-image:
步骤 A: 确定图 佥索结果中相关性评价结果为相关结果的各正例图像; 例如图 佥索结果中包括的图像数量为 T, 其中正例图像的数量为 Tl。  Step A: Determine the positive example image in which the correlation evaluation result in the figure is the correlation result; for example, the number of images included in the figure search result is T, and the number of positive example images is Tl.
步骤 Β: 针对每个正例图像中相似度最高的待匹配子图像组, 确定该待匹 配子图像组中与指定子图像组中位置相对应的每对子图像的相似度,由于上述 实施例 1中已经确定出该相似度, 所以此时可以直接获取保存的该相似度。 为 了便于后续调整后子图像权重的确定,可以将针对每个正例图像和每个指定子 图像确定的相似度组成矩阵 (z);jnxjV , 其中, 矩阵
Figure imgf000014_0001
rixN阶矩阵, N 为指定子图像组包括的子图像的数量, 矩阵中的元素 (4为第 i个指定子图像 与第 t个正例图像中相似度最高的待匹配子图像组中位置相对应的子图像的相 似度。
Step Β: Determine the candidate to be matched for the sub-image group to be matched with the highest similarity in each positive image. The similarity of each pair of sub-images corresponding to the positions in the specified sub-image group in the group of game images, since the similarity has been determined in the above embodiment 1, the saved similarity can be directly obtained at this time. In order to facilitate the determination of the subsequent adjusted sub-image weights, the similarity determined for each positive example image and each specified sub-image may be composed into a matrix (z); j nxjV , where, the matrix
Figure imgf000014_0001
RixN-order matrix, N is the number of sub-images included in the specified sub-image group, and the elements in the matrix (4 is the position of the i-th specified sub-image and the t-th positive example image with the highest similarity among the sub-image groups to be matched The similarity of the corresponding sub-image.
步骤 C: 针对指定子图像组中每个指定子图像, 确定各相似度最高的待匹 配子图像组中,与该指定子图像位置相对应的该对子图像的各相似度的标准差 的倒数。 其中, 该标准差即为上述矩阵 (z);jnxjV†每一列元素的标准差 σ,., σ,. 表示与第 i个指定子图像对应的标准差, 并计算标准差 的倒数。 Step C: determining, for each designated sub-image in the specified sub-image group, a reciprocal of the standard deviation of each similarity of the pair of sub-images corresponding to the specified sub-image position among the sub-image groups to be matched with the highest similarity . Wherein, the standard deviation is the above matrix (z); j nxjV 标准 the standard deviation of each column element σ, ., σ,. represents the standard deviation corresponding to the i-th specified sub-image, and calculates the reciprocal of the standard deviation.
由于标准差 σ,.的值越小,表示该指定子图像更受到用户的重视, 因此该标 准差 σ,.的倒数反映了该指定子图像的子图像权重的大小。  Since the value of the standard deviation σ,. is smaller, it means that the specified sub-image is more valued by the user, so the reciprocal of the standard deviation σ,. reflects the size of the sub-image weight of the specified sub-image.
步骤 D: 对各指定子图像分别对应的各倒数, 进行归一化处理, 将归一化 处理出后得到的与每个指定子图像对应的结果,作为该指定子图像对应的调整 后子图像权重 ^,. , 具体可以釆用如下公式确定: w = \la \ ^, = ^, /∑^,; 为第 i个指定子图像对应的调整后子图像 权重。  Step D: performing normalization processing on each reciprocal corresponding to each of the designated sub-images, and performing a normalized processing result corresponding to each of the designated sub-images as the adjusted sub-image corresponding to the designated sub-image The weight ^,. , can be determined by the following formula: w = \la \ ^, = ^, /∑^,; The adjusted sub-image weight corresponding to the i-th specified sub-image.
2、 特征向量对应的特征间权重:  2. The weight between features corresponding to the feature vector:
步骤 a: 针对多种特征向量中的每种特征向量, 基于图像库中每个图像的 每个待匹配子图像组中的各子图像的该种特征向量,以及各指定子图像的该种 特征向量,确定与该种特征向量对应的,待检索图像对应的特征图 佥索结果。  Step a: for each of the plurality of feature vectors, based on the feature vector of each sub-image in each of the sub-image groups to be matched for each image in the image library, and the feature of each specified sub-image The vector determines a feature map search result corresponding to the image to be retrieved corresponding to the feature vector.
即上述实施例 1中 于多种特征向量,确定检索图像对应的图 佥索结 果, 本步骤 a中是分别基于单一的一种特征向量, 确定检索图像对应的图 佥 索结果, 为便于区分,将基于单一的一种特征向量确定的图像检索结果称为特 征图 佥索结果, 例如, 基于颜色特征向量确定的称作颜色图 佥索结果, 将 基于纹理特征向量确定的称作纹理图像检索结果,将形状特征向量确定的称作 形状图 佥索结果。具体的确定方式可釆用上述实施例 1中相同的方法, 区别 仅是基于单一的一种特征向量。 That is, in the foregoing embodiment 1, the plurality of feature vectors are used to determine the map search result corresponding to the search image, and in the step a, the map corresponding to the search image is determined based on a single feature vector. As a result, for the sake of distinguishing, the image retrieval result determined based on a single feature vector is referred to as a feature map search result, for example, a color map search result determined based on the color feature vector is determined based on the texture feature vector. The texture image retrieval result is referred to, and the shape feature vector is determined as a shape map search result. The specific determination method can adopt the same method as in the above Embodiment 1, and the difference is only based on a single one feature vector.
步骤 b: 确定该特征图像检索结果与实施例 1中确定的该图像检索结果中 均存在的各检索结果图像。  Step b: determining each of the search result images existing in the feature image search result and the image search result determined in the embodiment 1.
步骤 c: 确定均存在的各检索结果图像的相关性评价结果分别对应的各分 值的第一和值,以及确定第一和值与该种特征向量的特征间权重的初始值的第 二和值, 具体釆用如下公式确定:  Step c: determining a first sum value of each score corresponding to the correlation evaluation result of each search result image that exists, and determining a second sum of the initial value of the weight between the first sum value and the feature of the feature vector The value is determined by the following formula:
$$w_{2j} = w_{2j0} + MARK_j$$

where $w_{2j0}$ is the initial value of the inter-feature weight of the j-th feature vector (for example, the initial value of the inter-feature weight of every feature vector may be set to 0), $MARK_j$ is the above first sum value corresponding to the j-th feature vector, and $w_{2j}$ is the above second sum value corresponding to the j-th feature vector. The score corresponding to each relevance evaluation result can be set according to actual needs; for example, in this embodiment the score corresponding to a relevant result is set to 2, the score corresponding to a not-judged result is set to 1, and the score corresponding to an irrelevant result is set to 0.
步骤 d: 对多种特征向量分别对应的各第二和值, 进行归一化处理, 将归 一化处理出后得到的与每种特征向量对应的结果,作为该种特征向量对应的调 整后特征间权重^ , 具体可以釆用如下公式确定:  Step d: performing normalization processing on each second sum value corresponding to each of the plurality of feature vectors, and performing normalized processing on the result corresponding to each feature vector as the adjusted corresponding to the feature vector The weight between features ^ can be determined by the following formula:
$$W_{2j} = w_{2j} \Big/ \sum_{j=1}^{M} w_{2j}$$

where $W_{2j}$ is the adjusted inter-feature weight corresponding to the j-th feature vector and M is the number of types of feature vectors extracted for the sub-images.
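A sketch of the inter-feature weight adjustment described in steps a-d, under assumed data structures: each single-feature retrieval result is compared with the combined retrieval result, the feedback scores (relevant = 2, not judged = 1, irrelevant = 0) of the shared images are summed onto the initial weight, and the weights are then normalized.

```python
SCORES = {"relevant": 2, "not_judged": 1, "irrelevant": 0}

def adjust_inter_feature_weights(feature_results, combined_result, feedback,
                                 initial_weights=None):
    """feature_results[j]: set of image ids returned using feature j alone.
    combined_result: set of image ids of the full retrieval result.
    feedback: {image_id: 'relevant' | 'not_judged' | 'irrelevant'}."""
    m = len(feature_results)
    w = list(initial_weights) if initial_weights else [0.0] * m
    for j, result_j in enumerate(feature_results):
        overlap = result_j & combined_result            # images present in both
        w[j] += sum(SCORES[feedback.get(img, "not_judged")] for img in overlap)
    total = sum(w) or 1.0
    return [wj / total for wj in w]                     # normalised W_2j
```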
3、 特征矢量对应的特征内权重:  3. Intra-feature weights corresponding to feature vectors:
步骤 1 : 确定图像检索结果中相关性评价结果为相关结果的各正例图像; 例如图 佥索结果中包括的图像数量为 T, 其中正例图像的数量为 Tl。 步骤 2: 对于每个正例图像中相似度最高的待匹配子图像组, 针对每种特 征向量,确定该待匹配子图像组中每个子图像的该种特征向量包括的每种特征 分量。 为了便于后续调整后子图像权重的确定, 可以将确定的该每种特征分量 组成矩阵 [r^nxff , 其中, 矩阵 [」n ^为 7 XH阶矩阵, H为提取的第 j种特征向 量包括的特征分量的数量,矩阵中的元素 为第 t个正例图像中相似度最高的 待匹配子图像组中第 i个子图像的第 j种特征向量的第 k种特征分量。 Step 1: Determine the positive example image in which the correlation evaluation result in the image retrieval result is the correlation result; for example, the number of images included in the image retrieval result is T, and the number of the positive example images is T1. Step 2: For each feature vector to be matched in each positive example image, for each feature vector, each feature component included in the feature vector of each sub-image in the to-be-matched sub-image group is determined. In order to facilitate the determination of the sub-image weights after the subsequent adjustment, each of the determined feature components may be composed into a matrix [r^ nxff , where the matrix [" n ^ is a 7 XH-order matrix, and H is the extracted j-th feature vector including The number of feature components, the elements in the matrix are the k-th feature component of the j-th feature vector of the i-th sub-image in the to-be-matched sub-image group of the t-th positive example image.
步骤 3: 针对该待匹配子图像组中的每个子图像和每种特征分量, 确定各 正例图像的该待匹配子图像组中的该子图像的该种特征向量包括的各该种特 征分量的标准差, 作为与每个子图像的该种特征分量对应的标准差。 其中, 该 标准差即为上述矩阵 [」n ^中每一列元素的标准差^¾ , ^¾表示与第 i个子图 像对应的, 且与第 j种特征向量的第 k种特征分量对应的标准差。 Step 3: determining, for each sub-image and each feature component in the to-be-matched sub-image group, each of the feature components included in the feature vector of the sub-image in the to-be-matched sub-image group of each positive example image The standard deviation is the standard deviation corresponding to the feature component of each sub-image. Wherein, the standard deviation is the standard deviation of each of the elements in the matrix [" n ^ ^3⁄4 , ^3⁄4 represents a standard corresponding to the i-th sub-image and corresponding to the k-th feature component of the j-th feature vector difference.
Step 4: For each sub-image and each feature vector, normalize the reciprocals of the standard deviations corresponding to the feature components included in that feature vector, obtaining a processing result corresponding to each sub-image and each feature component, which can specifically be determined by the following formulas:

$$w_{ijk} = 1/\sigma_{ijk}, \qquad W_{ijk} = w_{ijk} \Big/ \sum_{k=1}^{H} w_{ijk}$$

where $W_{ijk}$ is the processing result corresponding to the k-th feature component of the j-th feature vector of the i-th sub-image.

Step 5: For each feature component, determine the average of the processing results corresponding to that feature component over the sub-images, and take this average as the adjusted intra-feature weight corresponding to that feature component, which can specifically be determined by the following formula:

$$W_{3jk} = \sum_{i=1}^{N} W_{ijk} \Big/ N$$

where $W_{3jk}$ is the adjusted intra-feature weight corresponding to the k-th feature component of the j-th feature vector.
由于标准差^¾的值越小, 表示该种特征向量的该种特征分量更受到用户 的重视, 因此该标准差 的倒数反映了该种特征向量的该种特征分量的特征 内权重的大小。 Since the value of the standard deviation ^3⁄4 is smaller, the feature component representing the feature vector is more valued by the user, so the reciprocal of the standard deviation reflects the characteristics of the feature component of the feature vector. The size of the weight inside.
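Both the sub-image weights (steps A-D) and the intra-feature weights (steps 1-5) follow the same reciprocal-of-standard-deviation rule; the sketch below captures that shared rule under assumed array shapes.

```python
import numpy as np

def weights_from_std(columns: np.ndarray) -> np.ndarray:
    """columns: T1 x K matrix with one row per positive example image.
    The weight of column i is 1 / std(column i), normalised to sum to 1;
    a small spread over the positive examples yields a large weight."""
    inv = 1.0 / (columns.std(axis=0) + 1e-12)
    return inv / inv.sum()

# Sub-image weights (steps A-D): columns holds, per positive example image,
# the similarity of each specified sub-image to its position-matched sub-image.
# Intra-feature weights (steps 1-5): for feature vector j, apply the same rule
# per sub-image to the T1 x H matrix of component values, then average the
# normalised weights over the N sub-images to obtain W_3jk.
```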
步骤 S504、 基于确定的调整后权重, 确定图像库中的图像与待检索图像 的最新相似度。  Step S504: Determine, according to the determined adjusted weight, the latest similarity between the image in the image library and the image to be retrieved.
本步骤中,较佳的, 可以仅确定图像库中除上述正例图像外的每个图像与 待检索图像的最新相似度。  In this step, preferably, only the latest similarity between each image in the image library except the above-mentioned positive example image and the image to be retrieved may be determined.
步骤 S505、 基于图像库中的图像与待检索图像的最新相似度的大小, 确 定待检索图像对应的最新图 H 索结果。  Step S505: Determine, according to the size of the latest similarity between the image in the image library and the image to be retrieved, the latest image corresponding to the image to be retrieved.
较佳的, 可以将上述正例图像作为该最新图像检索结果包括的一部分图 像, 对于最新图 佥索结果包括的其它图像, 具体可以为, 将图像库中与待检 索图像的最新相似度大于设定相似度阈值的图像,确定为最新图 佥索结果包 括的其它图像;  Preferably, the positive example image may be used as a part of the image included in the latest image search result. For the other images included in the latest image search result, the latest similarity between the image library and the image to be retrieved may be greater than An image of the similarity threshold is determined to be other images included in the latest map search result;
具体还可以为,将图像库中与待检索图像的最新相似度从高到低的顺序的 前设定数量的图像, 确定为最新图 佥索结果包括的其它图像。  Specifically, it is also possible to determine, in the image library, a previously set number of images in the order of the latest similarity of the image to be retrieved from high to low, as other images included in the latest map search result.
本发明实施例 2中上述图 5所示方法, 可以多次循环执行, 直到满足设定 循环次数为止,或者直到用户指示将当前确定的最新图像检索结果作为最终图 像检索结果为止。  The above-described method of Fig. 5 in Embodiment 2 of the present invention can be executed a plurality of times until the set number of loops is satisfied, or until the user instructs the currently determined latest image search result as the final image search result.
釆用本发明实施例 2提供的图像检索方法,由于基于用户对检索结果图像 的相关性评价结果,调整权重,并基于调整后权重,确定出最新图 佥索结果, 从而使得确定的最新图像检索结果更接近用户的需要,进而进一步的提高了图 像检索的有效性和效率。  According to the image retrieval method provided by Embodiment 2 of the present invention, since the user evaluates the correlation result of the search result image, the weight is adjusted, and based on the adjusted weight, the latest image search result is determined, so that the determined latest image retrieval is performed. The result is closer to the user's needs, which further improves the effectiveness and efficiency of image retrieval.
实施例 3:  Example 3:
基于同一发明构思,根据本发明上述实施例提供的图 佥索方法,相应地, 本发明实施例 3还提供了一种图 佥索装置, 其结构示意图如图 6所示, 具体 包括:  Based on the same inventive concept, the method of the present invention is provided in accordance with the above-mentioned embodiments of the present invention. Accordingly, the third embodiment of the present invention further provides a drawing device. The structure of the present invention is as shown in FIG.
划分单元 601 , 用于将待检索图像进行子图划分, 得到多个子图像; 提取单元 602, 用于对所述多个子图像中的各指定子图像进行图像特征提 取, 得到所述各指定子图像的特征向量; a dividing unit 601, configured to divide the image to be retrieved into sub-pictures to obtain a plurality of sub-images; The extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images to obtain feature vectors of the specified sub-images;
相似度确定单元 603 , 用于针对图像库中每个图像, 基于该图像的每个待 匹配子图像组中的各子图像的特征向量, 以及所述各指定子图像的特征向量, 确定该图像与所述待检索图像的相似度; 其中,待匹配子图像组中的各子图像 之间的相对位置, 与所述各指定子图像之间的相对位置相同;  The similarity determining unit 603 is configured to determine, according to each feature in the image library, a feature vector of each sub image in each to-be-matched sub-image group of the image, and a feature vector of each of the specified sub-images. a similarity with the image to be retrieved; wherein a relative position between each sub-image in the sub-image group to be matched is the same as a relative position between the specified sub-images;
检索结果确定单元 604, 用于基于所述图像库中的每个图像与所述待检索 图像的相似度的大小, 确定所述待检索图像对应的图 佥索结果。  The search result determining unit 604 is configured to determine a search result corresponding to the image to be retrieved based on a size of similarity between each image in the image library and the image to be retrieved.
较佳的, 相似度确定单元 603 , 具体用于基于该图像的每个待匹配子图像 组中的各子图像的特征向量, 以及所述各指定子图像的特征向量,确定该图像 的每个待匹配子图像组, 与所述各指定子图像组成的指定子图像组的相似度; 将每个待匹配子图像组与所述指定子图像组的相似度中的最大值或平均 值, 作为该图像与所述待检索图像的相似度。  Preferably, the similarity determining unit 603 is specifically configured to determine each of the image based on a feature vector of each sub image in each of the to-be-matched sub-image groups of the image, and a feature vector of each of the specified sub-images. a similarity of a sub-image group to be matched, a specified sub-image group composed of the specified sub-images; a maximum value or an average value of similarities between each sub-image group to be matched and the specified sub-image group The similarity of the image to the image to be retrieved.
较佳的, 提取单元 602, 具体用于对所述多个子图像中的各指定子图像进 行图像特征提取, 得到所述各指定子图像中每个指定子图像的特征向量; 相似度确定单元 603 , 具体用于基于待匹配子图像组中的每个子图像的特 征向量, 以及所述指定子图像组中每个指定子图像的特征向量, 分别确定该待 匹配子图像组中与所述指定子图像组中位置相对应的每对子图像的相似度; 基于所述指定子图像组中每个指定子图像对应的子图像权重,将每对子图 像的相似度进行加权求和,得到该待匹配子图像组与所述指定子图像组的相似 度。  Preferably, the extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images to obtain a feature vector of each of the specified sub-images; the similarity determining unit 603 Specifically, the feature vector of each sub-image in the sub-image group to be matched, and the feature vector of each specified sub-image in the specified sub-image group, respectively determine the sub-image group to be matched and the designated sub- a similarity of each pair of sub-images corresponding to the positions in the image group; weighting and summing the similarities of each pair of sub-images based on the sub-image weights corresponding to each of the specified sub-images in the specified sub-image group, to obtain the Matching the similarity of the sub-image group with the specified sub-image group.
较佳的, 提取单元 602, 具体用于对所述多个子图像中的各指定子图像进 行图像特征提取, 得到所述各指定子图像中每个指定子图像的多种特征向量; 相似度确定单元 603 , 具体用于基于待匹配子图像组中的一个子图像的多 种特征向量,以及所述指定子图像组中与该子图像位置相对应的一个指定子图 像的多种特征向量, 确定该子图像与该指定子图像的每种特征向量的相似度; 基于每种特征向量对应的特征间权重,将每种特征向量的相似度进行加权 求和, 得到该子图像与该指定子图像的相似度。 Preferably, the extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images to obtain a plurality of feature vectors of each of the specified sub-images; The unit 603 is specifically configured to use, according to a plurality of feature vectors of one sub-image in the sub-image group to be matched, and a specified sub-picture corresponding to the sub-image position in the specified sub-image group. a plurality of feature vectors of the image, determining a similarity between the sub-image and each feature vector of the specified sub-image; and weighting and summing the similarities of each feature vector based on the inter-feature weights corresponding to each feature vector, The similarity of the sub-image to the specified sub-image.
较佳的, 提取单元 602, 具体用于对所述多个子图像中的各指定子图像进 行图像特征提取, 得到所述各指定子图像中每个指定子图像的多种特征向量 中, 每种特征向量包括的多个特征分量;  Preferably, the extracting unit 602 is configured to perform image feature extraction on each of the plurality of sub-images, and obtain a plurality of feature vectors of each of the specified sub-images, each of the plurality of feature vectors. a plurality of feature components included in the feature vector;
相似度确定单元 603 , 具体用于基于该子图像的一种特征向量包括的多个 特征分量, 以及该指定子图像的该种特征向量包括的多个特征分量,确定该子 图像与该指定子图像的该种特征向量包括的每种特征分量的相似度;  The similarity determining unit 603 is specifically configured to determine the sub-image and the designated sub-image based on the plurality of feature components included in one feature vector of the sub-image and the plurality of feature components included in the feature vector of the specified sub-image The similarity of each feature component included in the feature vector of the image;
基于每种特征分量对应的特征内权重,确定各种特征分量的相似度的加权 欧式距离之和, 得到该子图像与该指定子图像的一种特征向量的相似度。  The sum of the weighted Euclidean distances of the similarities of the various feature components is determined based on the intra-feature weights corresponding to each feature component, and the similarity between the sub-image and a feature vector of the specified sub-image is obtained.
较佳的, 还包括:  Preferably, the method further includes:
获取单元 605 , 用于获取用户对所述图像检索结果中的每个检索结果图像 的相关性评价结果;  The obtaining unit 605 is configured to obtain a correlation evaluation result of the user for each search result image in the image retrieval result;
权重调整单元 606, 用于基于获取的每个检索结果图像的相关性评价结 果,调整在确定所述图像库中的每个图像与所述待检索图像的相似度时所使用 的权重, 得到调整后权重;  The weight adjustment unit 606 is configured to adjust, according to the obtained correlation evaluation result of each search result image, the weight used in determining the similarity between each image in the image library and the image to be retrieved, and obtain an adjustment Post weight
相似度确定单元 603 , 还用于基于所述调整后权重, 确定所述图像库中的 图像与所述待检索图像的最新相似度;  The similarity determining unit 603 is further configured to determine, according to the adjusted weight, an latest similarity between the image in the image library and the image to be retrieved;
检索结果确定单元 604, 还用于基于所述图像库中的图像与所述待检索图 像的最新相似度的大小, 确定所述待检索图像对应的最新图像检索结果。  The search result determining unit 604 is further configured to determine, according to the size of the latest similarity between the image in the image library and the image to be retrieved, the latest image retrieval result corresponding to the image to be retrieved.
较佳的,获取单元 605获取的相关性评价结果包括:表征相关的相关结果、 权重调整单元 606, 具体用于当所述所使用的权重为子图像权重时, 确定 所述图 索结果中相关性评价结果为相关结果的各正例图像;针对每个正例 图像中相似度最高的待匹配子图像组,确定该待匹配子图像组中与所述指定子 图像组中位置相对应的每对子图像的相似度;针对所述指定子图像组中每个指 定子图像,确定各相似度最高的待匹配子图像组中, 与该指定子图像位置相对 应的该对子图像的各相似度的标准差的倒数;对所述各指定子图像分别对应的 各倒数, 进行归一化处理; 将归一化处理出后得到的与每个指定子图像对应的 结果, 作为该指定子图像对应的调整后子图像权重; Preferably, the correlation evaluation result obtained by the obtaining unit 605 includes: a correlation correlation result, and a weight adjustment unit 606, configured to determine, when the used weight is a sub-image weight, determine the correlation in the graph result. The results of the sexual evaluation are the positive examples of the relevant results; for each positive example a similarly-matched sub-image group in the image, determining a similarity of each pair of sub-images corresponding to positions in the specified sub-image group in the to-be-matched sub-image group; for each of the specified sub-image groups Specifying a sub-image, determining a reciprocal of a standard deviation of each similarity of the pair of sub-images corresponding to the specified sub-image position among the sub-image groups to be matched with the highest similarity; respectively corresponding to the designated sub-images Performing normalization processing for each reciprocal; and performing a normalized processing result corresponding to each specified sub-image as the adjusted sub-image weight corresponding to the designated sub-image;
当所述所使用的权重为特征间权重时,针对多种特征向量中的每种特征向 量,基于图像库中每个图像的每个待匹配子图像组中的各子图像的该种特征向 量, 以及所述各指定子图像的该种特征向量, 确定与该种特征向量对应的, 所 述待检索图像对应的特征图 佥索结果,确定该特征图 佥索结果与所述图像 检索结果中均存在的各检索结果图像,确定所述均存在的各检索结果图像的相 关性评价结果分别对应的各分值的第一和值,以及确定所述第一和值与该种特 征向量的特征间权重的初始值的第二和值;对多种特征向量分别对应的各第二 和值,进行归一化处理;将归一化处理出后得到的与每种特征向量对应的结果, 作为该种特征向量对应的调整后特征间权重;  When the used weight is an inter-feature weight, for each of the plurality of feature vectors, the feature vector of each sub-image in each sub-image group to be matched for each image in the image library is used. And the feature vector of the specified sub-images, determining a feature map search result corresponding to the image to be retrieved corresponding to the feature vector, determining the feature map search result and the image search result And determining, by each of the search result images, the first sum value of each score corresponding to the correlation evaluation result of each of the search result images, and determining the first sum value and the feature of the feature vector The second sum value of the initial value of the weight; the second sum value corresponding to each of the plurality of feature vectors is normalized; and the result corresponding to each feature vector obtained by normalizing is obtained as The weight of the adjusted features corresponding to the feature vector;
当所述所使用的权重为特征内权重时,确定所述图像检索结果中相关性评 价结果为相关结果的各正例图像;对于每个正例图像中相似度最高的待匹配子 图像组,针对每种特征向量,确定该待匹配子图像组中每个子图像的该种特征 向量包括的每种特征分量;针对该待匹配子图像组中的每个子图像和每种特征 分量,确定各正例图像的该待匹配子图像组中的该子图像的该种特征向量包括 的各该种特征分量的标准差, 作为与每个子图像的该种特征分量对应的标准 差;针对每个子图像和每种特征向量,对该种特征向量所包括的多种特征分量 分别对应的各标准差的倒数, 进行归一化处理,得到与每个子图像和每种特征 分量对应的处理结果; 针对每种特征分量,确定各子图像的该种特征分量对应 的处理结果的平均值, 将该平均值作为该种特征分量对应的调整后特征间权 重。 When the used weight is the intra-feature weight, determining the correlation evaluation result in the image retrieval result as each positive example image of the correlation result; for each of the positive example images having the highest similarity to be matched sub-image group, Determining, for each feature vector, each feature component included in the feature vector of each sub-image in the to-be-matched sub-image group; determining, for each sub-image and each feature component in the to-be-matched sub-image group, each positive component a standard deviation of each of the feature components included in the feature vector of the sub-image in the to-be-matched sub-image group of the example image as a standard deviation corresponding to the feature component of each sub-image; for each sub-image and Each feature vector, normalizing the reciprocal of each standard deviation corresponding to the plurality of feature components included in the feature vector, and obtaining a processing result corresponding to each sub-image and each feature component; The feature component determines an average value of the processing result corresponding to the feature component of each sub-image, and the average value is used as the adjusted corresponding to the feature component Inter-feature Heavy.
综上所述,本发明实施例提供的方案, 包括:将待检索图像进行子图划分, 得到多个子图像; 并对该多个子图像中的各指定子图像进行图像特征提取,得 到各指定子图像的特征向量; 并针对图像库中每个图像,基于该图像的每个待 匹配子图像组中的各子图像的特征向量, 以及各指定子图像的特征向量,确定 该图像与待检索图像的相似度;以及基于图像库中的每个图像与待检索图像的 相似度的大小,确定待检索图像对应的图 佥索结果。釆用本发明实施例提供 的方案, 在基于图像内容进行检索时, 提高了检索结果的有效性和检索效率。  In summary, the solution provided by the embodiment of the present invention includes: dividing a to-be-searched image into sub-pictures to obtain a plurality of sub-images; and performing image feature extraction on each of the plurality of sub-images to obtain each of the designated sub-images. a feature vector of the image; and for each image in the image library, determining the image and the image to be retrieved based on the feature vector of each sub-image in each sub-image group to be matched of the image, and the feature vector of each specified sub-image The degree of similarity; and determining the result of the map corresponding to the image to be retrieved based on the magnitude of the similarity between each image in the image library and the image to be retrieved. With the solution provided by the embodiment of the present invention, when searching based on image content, the validity of the retrieval result and the retrieval efficiency are improved.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these modifications and variations.

Claims

1. An image retrieval method, characterized in that it comprises:
将待检索图像进行子图划分, 得到多个子图像;  Subdividing the image to be retrieved into subgraphs to obtain a plurality of sub-images;
对所述多个子图像中的各指定子图像进行图像特征提取,得到所述各指定 子图像的特征向量;  Performing image feature extraction on each of the plurality of sub-images to obtain feature vectors of the specified sub-images;
针对图像库中每个图像,基于该图像的每个待匹配子图像组中的各子图像 的特征向量, 以及所述各指定子图像的特征向量,确定该图像与所述待检索图 像的相似度; 其中, 待匹配子图像组中的各子图像之间的相对位置, 与所述各 指定子图像之间的相对位置相同;  Determining, for each image in the image library, a feature vector of each sub-image in each sub-image group to be matched of the image, and a feature vector of each of the specified sub-images, determining that the image is similar to the image to be retrieved a relative position between each sub-image in the sub-image group to be matched, and a relative position between the sub-images;
基于所述图像库中的每个图像与所述待检索图像的相似度的大小,确定所 述待检索图像对应的图像检索结果。  An image retrieval result corresponding to the image to be retrieved is determined based on a magnitude of similarity of each image in the image library and the image to be retrieved.
2、 如权利要求 1所述的方法, 其特征在于, 基于该图像的每个待匹配子 图像组中的各子图像的特征向量, 以及所述各指定子图像的特征向量,确定该 图像与所述待检索图像的相似度, 具体包括:  2. The method according to claim 1, wherein the image is determined based on a feature vector of each sub-image in each of the sub-image groups to be matched of the image, and a feature vector of each of the designated sub-images The similarity of the image to be retrieved specifically includes:
基于该图像的每个待匹配子图像组中的各子图像的特征向量,以及所述各 指定子图像的特征向量,确定该图像的每个待匹配子图像组, 与所述各指定子 图像组成的指定子图像组的相似度;  Determining, according to the feature vector of each sub-image in each sub-image group to be matched of the image, and the feature vector of each of the designated sub-images, determining each sub-image group to be matched of the image, and the designated sub-images The similarity of the specified sub-image groups composed;
将每个待匹配子图像组与所述指定子图像组的相似度中的最大值或平均 值, 作为该图像与所述待检索图像的相似度。  A maximum value or an average value of similarities between each of the sub-image groups to be matched and the specified sub-image group is taken as the similarity between the image and the image to be retrieved.
3、 如权利要求 2所述的方法, 其特征在于, 对所述多个子图像中的各指 定子图像进行图像特征提取, 得到所述各指定子图像的特征向量, 具体为: 对所述多个子图像中的各指定子图像进行图像特征提取,得到所述各指定 子图像中每个指定子图像的特征向量;  The method of claim 2, wherein image feature extraction is performed on each of the plurality of sub-images to obtain a feature vector of each of the specified sub-images, specifically: Image feature extraction is performed on each designated sub-image in the sub-images, and feature vectors of each of the designated sub-images in the specified sub-images are obtained;
相应的,基于待匹配子图像组中的各子图像的特征向量, 以及所述各指定 子图像的特征向量,确定该待匹配子图像组与所述指定子图像组的相似度, 具 体包括: Correspondingly, determining, according to the feature vector of each sub-image in the sub-image group to be matched, and the feature vector of each specified sub-image, the similarity between the sub-image group to be matched and the specified sub-image group, Body includes:
基于待匹配子图像组中的每个子图像的特征向量,以及所述指定子图像组 中每个指定子图像的特征向量,分别确定该待匹配子图像组中与所述指定子图 像组中位置相对应的每对子图像的相似度;  Determining a position in the to-be-matched sub-image group and the specified sub-image group respectively based on a feature vector of each sub-image in the sub-image group to be matched, and a feature vector of each of the specified sub-images in the specified sub-image group The similarity of each pair of sub-images corresponding to each other;
基于所述指定子图像组中每个指定子图像对应的子图像权重,将每对子图 像的相似度进行加权求和,得到该待匹配子图像组与所述指定子图像组的相似 度。  The similarity of each pair of sub-images is weighted and summed based on the sub-image weights corresponding to each of the specified sub-images in the specified sub-image group, to obtain the similarity between the sub-image group to be matched and the specified sub-image group.
4、 如权利要求 3所述的方法, 其特征在于, 对所述多个子图像中的各指 定子图像进行图像特征提取,得到所述各指定子图像中每个指定子图像的特征 向量, 具体为:  The method according to claim 3, wherein image feature extraction is performed on each of the plurality of sub-images to obtain a feature vector of each of the specified sub-images, For:
对所述多个子图像中的各指定子图像进行图像特征提取,得到所述各指定 子图像中每个指定子图像的多种特征向量;  Performing image feature extraction on each of the plurality of sub-images to obtain a plurality of feature vectors of each of the designated sub-images;
相应的 ,确定待匹配子图像组中与所述指定子图像组中位置相对应的一对 子图像的相似度, 具体包括:  Correspondingly, determining the similarity of the pair of sub-images corresponding to the positions in the specified sub-image group in the sub-image group to be matched, specifically includes:
基于待匹配子图像组中的一个子图像的多种特征向量,以及所述指定子图 像组中与该子图像位置相对应的一个指定子图像的多种特征向量,确定该子图 像与该指定子图像的每种特征向量的相似度;  Determining the sub-image and the designation based on a plurality of feature vectors of one sub-image in the sub-image group to be matched, and a plurality of feature vectors of the specified sub-image corresponding to the sub-image position in the specified sub-image group The similarity of each feature vector of the sub-image;
基于每种特征向量对应的特征间权重,将每种特征向量的相似度进行加权 求和, 得到该子图像与该指定子图像的相似度。  Based on the inter-feature weights corresponding to each feature vector, the similarity of each feature vector is weighted and summed to obtain the similarity between the sub-image and the specified sub-image.
5、 如权利要求 4所述的方法, 其特征在于, 对所述多个子图像中的各指 定子图像进行图像特征提取,得到所述各指定子图像中每个指定子图像的多种 特征向量, 具体为:  The method according to claim 4, wherein image feature extraction is performed on each of the plurality of sub-images, and a plurality of feature vectors of each of the specified sub-images are obtained. , Specifically:
对所述多个子图像中的各指定子图像进行图像特征提取,得到所述各指定 子图像中每个指定子图像的多种特征向量中,每种特征向量包括的多个特征分 量; 相应的,确定该子图像与该指定子图像的一种特征向量的相似度, 具体包 括: Performing image feature extraction on each of the plurality of sub-images to obtain a plurality of feature components included in each of the plurality of feature vectors of each of the designated sub-images; Correspondingly, determining a similarity between the sub-image and a feature vector of the specified sub-image includes:
基于该子图像的一种特征向量包括的多个特征分量,以及该指定子图像的 该种特征向量包括的多个特征分量,确定该子图像与该指定子图像的该种特征 向量包括的每种特征分量的相似度;  Determining, according to the plurality of feature components included in a feature vector of the sub-image, and the plurality of feature components included in the feature vector of the specified sub-image, determining each of the feature vectors included in the sub-image and the specified sub-image Similarity of feature components;
基于每种特征分量对应的特征内权重,确定各种特征分量的相似度的加权 欧式距离之和, 得到该子图像与该指定子图像的一种特征向量的相似度。  The sum of the weighted Euclidean distances of the similarities of the various feature components is determined based on the intra-feature weights corresponding to each feature component, and the similarity between the sub-image and a feature vector of the specified sub-image is obtained.
6、 如权利要求 3-5任一项所述的方法, 其特征在于, 还包括:  6. The method of any of claims 3-5, further comprising:
获取用户对所述图 佥索结果中的每个检索结果图像的相关性评价结果; 基于获取的每个检索结果图像的相关性评价结果,调整在确定所述图像库 中的每个图像与所述待检索图像的相似度时所使用的权重, 得到调整后权重; 基于所述调整后权重,确定所述图像库中的图像与所述待检索图像的最新 相似度;  Obtaining a correlation evaluation result of the user for each of the search result images in the map search result; adjusting each image and the image in the image library based on the obtained correlation evaluation result of each search result image Determining a weight used when retrieving the similarity of the image, obtaining an adjusted weight; determining an latest similarity between the image in the image library and the image to be retrieved based on the adjusted weight;
基于所述图像库中的图像与所述待检索图像的最新相似度的大小,确定所 述待检索图像对应的最新图 H 索结果。  And determining, based on the magnitude of the latest similarity between the image in the image library and the image to be retrieved, the latest image of the image to be retrieved.
7、 如权利要求 6所述的方法, 其特征在于, 相关性评价结果包括: 表征 则基于获取的每个检索结果图像的相关性评价结果,调整在确定所述图像 库中的每个图像与所述待检索图像的相似度时所使用的权重, 具体包括: 当所述所使用的权重为子图像权重时,确定所述图像检索结果中相关性评 价结果为相关结果的各正例图像;针对每个正例图像中相似度最高的待匹配子 图像组,确定该待匹配子图像组中与所述指定子图像组中位置相对应的每对子 图像的相似度; 针对所述指定子图像组中每个指定子图像,确定各相似度最高 的待匹配子图像组中,与该指定子图像位置相对应的该对子图像的各相似度的 标准差的倒数; 对所述各指定子图像分别对应的各倒数, 进行归一化处理; 将 归一化处理出后得到的与每个指定子图像对应的结果,作为该指定子图像对应 的调整后子图像权重; 7. The method according to claim 6, wherein the correlation evaluation result comprises: the characterization is based on the obtained correlation evaluation result of each retrieval result image, and adjusting each image in the image library is determined The weight used in the similarity of the image to be retrieved specifically includes: determining, when the used weight is a sub-image weight, each positive example image in which the correlation evaluation result in the image retrieval result is a correlation result; Determining a similarity of each pair of sub-images corresponding to positions in the specified sub-image group in the sub-image group to be matched for each sub-image group to be matched with the highest degree of similarity in each positive example image; Determining, in each of the specified sub-images in the image group, a reciprocal of the standard deviation of each similarity of the pair of sub-images corresponding to the specified sub-image position in each sub-image group to be matched with the highest degree of similarity; Sub-images corresponding to each reciprocal, normalized; Normally processing the result corresponding to each specified sub-image obtained as the adjusted sub-image weight corresponding to the designated sub-image;
when the weights used are inter-feature weights: for each of the plurality of feature vectors, determining, based on that feature vector of each sub-image in each sub-image group to be matched of each image in the image library and that feature vector of each specified sub-image, a feature-specific image retrieval result corresponding to the image to be retrieved for that feature vector; determining the retrieval result images that are present in both that feature-specific image retrieval result and the image retrieval result; determining a first sum of the scores respectively corresponding to the relevance evaluation results of the retrieval result images present in both; and determining a second sum of the first sum and the initial value of the inter-feature weight of that feature vector; normalizing the second sums respectively corresponding to the plurality of feature vectors; and taking the result obtained after normalization for each feature vector as the adjusted inter-feature weight corresponding to that feature vector;
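A sketch of this inter-feature weight adjustment under stated assumptions: the numeric score assigned to each relevance label is not specified by the claim, so the SCORES mapping below is a placeholder, and the per-feature retrieval results are assumed to be given as sets of image identifiers:

SCORES = {"relevant": 1.0, "irrelevant": 0.0, "no_judgement": 0.0}  # assumed score mapping

def adjusted_interfeature_weights(initial_weights, overall_results, feature_results, feedback):
    # initial_weights: feature name -> initial inter-feature weight.
    # overall_results: set of image ids returned by the combined retrieval.
    # feature_results: feature name -> set of image ids returned using that feature alone.
    # feedback: image id -> relevance label for images in the overall result.
    second_sums = {}
    for name, init in initial_weights.items():
        common = feature_results[name] & overall_results
        first_sum = sum(SCORES[feedback[img_id]] for img_id in common)
        second_sums[name] = first_sum + init
    total = sum(second_sums.values()) or 1.0   # guard against division by zero
    return {name: value / total for name, value in second_sums.items()}

# Illustrative usage with three feature vector types.
print(adjusted_interfeature_weights(
    {"color": 0.4, "texture": 0.3, "shape": 0.3},
    {"img1", "img2", "img3"},
    {"color": {"img1", "img2"}, "texture": {"img2"}, "shape": {"img3", "img4"}},
    {"img1": "relevant", "img2": "relevant", "img3": "irrelevant"}))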
when the weights used are intra-feature weights: determining the positive example images in the image retrieval result whose relevance evaluation results are relevant results; for the sub-image group to be matched with the highest similarity in each positive example image, determining, for each feature vector, each feature component included in that feature vector of each sub-image in the sub-image group to be matched; for each sub-image in the sub-image group to be matched and each feature component, determining the standard deviation, over the positive example images, of that feature component included in that feature vector of that sub-image, as the standard deviation corresponding to that feature component of each sub-image; for each sub-image and each feature vector, normalizing the reciprocals of the standard deviations respectively corresponding to the plurality of feature components included in that feature vector, to obtain a processing result corresponding to each sub-image and each feature component; and for each feature component, determining the average, over the sub-images, of the processing results corresponding to that feature component, and taking the average as the adjusted intra-feature weight corresponding to that feature component.
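A compact sketch of this intra-feature adjustment, treating each feature component as a single scalar summary purely for illustration (the array shape and the zero-guard are assumptions):

import numpy as np

def adjusted_intrafeature_weights(component_values):
    # component_values[i, s, c]: value of component c of sub-image s, taken from the
    # best-matching sub-image group of the i-th positive example image,
    # for one feature vector type.
    vals = np.asarray(component_values, dtype=float)
    std = vals.std(axis=0)                                # per sub-image, per component
    inv = 1.0 / np.maximum(std, 1e-12)                    # reciprocal of the standard deviation
    per_subimage = inv / inv.sum(axis=1, keepdims=True)   # normalize across components
    return per_subimage.mean(axis=0)                      # average over the sub-images

# Illustrative data: 5 positive images, 4 sub-image positions, 3 components.
print(adjusted_intrafeature_weights(np.random.rand(5, 4, 3)))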
8. An image retrieval apparatus, comprising:

a dividing unit, configured to divide an image to be retrieved into sub-images to obtain a plurality of sub-images;

an extracting unit, configured to perform image feature extraction on each specified sub-image of the plurality of sub-images to obtain feature vectors of the specified sub-images;

a similarity determining unit, configured to determine, for each image in an image library, the similarity between that image and the image to be retrieved based on the feature vectors of the sub-images in each sub-image group to be matched of that image and the feature vectors of the specified sub-images, wherein the relative positions between the sub-images in a sub-image group to be matched are the same as the relative positions between the specified sub-images; and

a retrieval result determining unit, configured to determine, based on the magnitude of the similarity between each image in the image library and the image to be retrieved, an image retrieval result corresponding to the image to be retrieved.
9. The apparatus according to claim 8, wherein the similarity determining unit is specifically configured to: determine, based on the feature vectors of the sub-images in each sub-image group to be matched of the image and the feature vectors of the specified sub-images, the similarity between each sub-image group to be matched of the image and a specified sub-image group composed of the specified sub-images; and

take the maximum or the average of the similarities between the sub-image groups to be matched and the specified sub-image group as the similarity between the image and the image to be retrieved.
10. The apparatus according to claim 9, wherein the extracting unit is specifically configured to perform image feature extraction on each specified sub-image of the plurality of sub-images to obtain a feature vector of each specified sub-image;

and the similarity determining unit is specifically configured to: determine, based on the feature vector of each sub-image in a sub-image group to be matched and the feature vector of each specified sub-image in the specified sub-image group, the similarity of each pair of sub-images whose positions correspond between the sub-image group to be matched and the specified sub-image group; and perform a weighted summation of the similarities of the pairs of sub-images based on the sub-image weight corresponding to each specified sub-image in the specified sub-image group, to obtain the similarity between the sub-image group to be matched and the specified sub-image group.
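The two device claims above mirror the method side; a small sketch of how their weighted pair summation and the maximum/average aggregation of claim 9 could fit together (function names and the example numbers are illustrative):

import numpy as np

def group_similarity(pair_similarities, subimage_weights):
    # Weighted sum of the similarities of position-corresponding sub-image pairs.
    return float(np.dot(pair_similarities, subimage_weights))

def image_similarity(candidate_groups, subimage_weights, use_max=True):
    # Similarity between a library image and the image to be retrieved: the maximum
    # (or the average) of the similarities of its candidate sub-image groups.
    scores = [group_similarity(p, subimage_weights) for p in candidate_groups]
    return max(scores) if use_max else float(np.mean(scores))

# Illustrative data: 2 candidate groups, 3 specified sub-image positions.
groups = [np.array([0.9, 0.7, 0.8]), np.array([0.5, 0.6, 0.4])]
weights = np.array([0.5, 0.3, 0.2])
print(image_similarity(groups, weights))          # maximum over the groups
print(image_similarity(groups, weights, False))   # average over the groups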
11. The apparatus according to claim 10, wherein the extracting unit is specifically configured to perform image feature extraction on each specified sub-image of the plurality of sub-images to obtain a plurality of feature vectors of each specified sub-image;

and the similarity determining unit is specifically configured to: determine, based on the plurality of feature vectors of a sub-image in a sub-image group to be matched and the plurality of feature vectors of the specified sub-image whose position corresponds to that sub-image in the specified sub-image group, the similarity between the sub-image and the specified sub-image for each feature vector; and perform a weighted summation of the similarities of the feature vectors based on the inter-feature weight corresponding to each feature vector, to obtain the similarity between the sub-image and the specified sub-image.
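One further level of the same hierarchy, the inter-feature weighting of this claim, sketched with placeholder feature names and weights:

def pair_similarity(per_feature_similarities, inter_feature_weights):
    # Similarity of one (sub-image, specified sub-image) pair as the weighted sum of
    # its per-feature-vector similarities (e.g. colour, texture, shape features).
    return sum(inter_feature_weights[name] * sim
               for name, sim in per_feature_similarities.items())

# Illustrative usage with three feature vector types.
print(pair_similarity({"color": 0.82, "texture": 0.64, "shape": 0.71},
                      {"color": 0.5, "texture": 0.3, "shape": 0.2}))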
12. The apparatus according to claim 11, wherein the extracting unit is specifically configured to perform image feature extraction on each specified sub-image of the plurality of sub-images to obtain, for each of the plurality of feature vectors of each specified sub-image, a plurality of feature components included in that feature vector;

and the similarity determining unit is specifically configured to: determine, based on the plurality of feature components included in one feature vector of the sub-image and the plurality of feature components included in the same feature vector of the specified sub-image, the similarity between the sub-image and the specified sub-image for each feature component included in that feature vector; and

determine, based on the intra-feature weight corresponding to each feature component, the sum of the weighted Euclidean distances of the similarities of the feature components, to obtain the similarity between the sub-image and the specified sub-image with respect to that feature vector.
13. The apparatus according to any one of claims 10 to 12, further comprising:

an obtaining unit, configured to obtain a relevance evaluation result given by a user for each retrieval result image in the image retrieval result; and

a weight adjusting unit, configured to adjust, based on the obtained relevance evaluation result of each retrieval result image, the weights used in determining the similarity between each image in the image library and the image to be retrieved, to obtain adjusted weights;

wherein the similarity determining unit is further configured to determine, based on the adjusted weights, updated similarities between the images in the image library and the image to be retrieved; and

the retrieval result determining unit is further configured to determine, based on the magnitude of the updated similarity between each image in the image library and the image to be retrieved, an updated image retrieval result corresponding to the image to be retrieved.
14. The apparatus according to claim 13, wherein the relevance evaluation results obtained by the obtaining unit comprise a relevant result indicating relevance, an irrelevant result indicating irrelevance, and a no-judgment result indicating that no judgment is made;

wherein the weight adjusting unit is specifically configured to: when the weights used are sub-image weights, determine the positive example images in the image retrieval result whose relevance evaluation results are relevant results; for the sub-image group to be matched with the highest similarity in each positive example image, determine the similarity of each pair of sub-images whose positions correspond between that sub-image group to be matched and the specified sub-image group; for each specified sub-image in the specified sub-image group, determine the reciprocal of the standard deviation, over the sub-image groups to be matched with the highest similarity, of the similarities of the pairs of sub-images corresponding to the position of that specified sub-image; normalize the reciprocals respectively corresponding to the specified sub-images; and take the result obtained after normalization for each specified sub-image as the adjusted sub-image weight corresponding to that specified sub-image;

when the weights used are inter-feature weights: for each of the plurality of feature vectors, determine, based on that feature vector of each sub-image in each sub-image group to be matched of each image in the image library and that feature vector of each specified sub-image, a feature-specific image retrieval result corresponding to the image to be retrieved for that feature vector; determine the retrieval result images that are present in both that feature-specific image retrieval result and the image retrieval result; determine a first sum of the scores respectively corresponding to the relevance evaluation results of the retrieval result images present in both; and determine a second sum of the first sum and the initial value of the inter-feature weight of that feature vector; normalize the second sums respectively corresponding to the plurality of feature vectors; and take the result obtained after normalization for each feature vector as the adjusted inter-feature weight corresponding to that feature vector; and

when the weights used are intra-feature weights: determine the positive example images in the image retrieval result whose relevance evaluation results are relevant results; for the sub-image group to be matched with the highest similarity in each positive example image, determine, for each feature vector, each feature component included in that feature vector of each sub-image in the sub-image group to be matched; for each sub-image in the sub-image group to be matched and each feature component, determine the standard deviation, over the positive example images, of that feature component included in that feature vector of that sub-image, as the standard deviation corresponding to that feature component of each sub-image; for each sub-image and each feature vector, normalize the reciprocals of the standard deviations respectively corresponding to the plurality of feature components included in that feature vector, to obtain a processing result corresponding to each sub-image and each feature component; and for each feature component, determine the average, over the sub-images, of the processing results corresponding to that feature component, and take the average as the adjusted intra-feature weight corresponding to that feature component.
PCT/CN2012/082746 2011-10-13 2012-10-11 Image retrieval method and device WO2013053320A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110309996.0 2011-10-13
CN201110309996.0A CN103049446B (en) 2011-10-13 2011-10-13 A kind of image search method and device

Publications (1)

Publication Number Publication Date
WO2013053320A1 true WO2013053320A1 (en) 2013-04-18

Family

ID=48062089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/082746 WO2013053320A1 (en) 2011-10-13 2012-10-11 Image retrieval method and device

Country Status (2)

Country Link
CN (1) CN103049446B (en)
WO (1) WO2013053320A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838821A (en) * 2013-12-31 2014-06-04 中国传媒大学 Characteristic vector optimization method for interactive image retrieval

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571468B (en) * 2013-10-11 2017-11-03 中国移动通信集团广东有限公司 A kind of method and apparatus for handling digital picture feature
CN105989029A (en) * 2015-02-02 2016-10-05 阿里巴巴集团控股有限公司 Image searching method and image searching system
CN104881798A (en) * 2015-06-05 2015-09-02 北京京东尚科信息技术有限公司 Device and method for personalized search based on commodity image features
CN105184236A (en) * 2015-08-26 2015-12-23 江苏久祥汽车电器集团有限公司 Robot-based face identification system
CN105138672B (en) * 2015-09-07 2018-08-21 北京工业大学 A kind of image search method of multiple features fusion
CN106897328A (en) * 2015-12-21 2017-06-27 苏宁云商集团股份有限公司 A kind of image search method and device
CN105630906A (en) * 2015-12-21 2016-06-01 苏州科达科技股份有限公司 Person searching method, apparatus and system
CN108268258B (en) * 2016-12-29 2021-06-18 阿里巴巴集团控股有限公司 Method and device for acquiring webpage code and electronic equipment
CN108255858A (en) * 2016-12-29 2018-07-06 北京优朋普乐科技有限公司 A kind of image search method and system
CN108733679B (en) * 2017-04-14 2021-10-26 华为技术有限公司 Pedestrian retrieval method, device and system
CN110019898A (en) * 2017-08-08 2019-07-16 航天信息股份有限公司 A kind of animation image processing system
CN108845999B (en) * 2018-04-03 2021-08-06 南昌奇眸科技有限公司 Trademark image retrieval method based on multi-scale regional feature comparison
CN108764297B (en) * 2018-04-28 2020-10-30 北京猎户星空科技有限公司 Method and device for determining position of movable equipment and electronic equipment
CN108734175A (en) * 2018-04-28 2018-11-02 北京猎户星空科技有限公司 A kind of extracting method of characteristics of image, device and electronic equipment
CN109213886B (en) * 2018-08-09 2021-01-08 山东师范大学 Image retrieval method and system based on image segmentation and fuzzy pattern recognition
CN109624844A (en) * 2018-12-05 2019-04-16 电子科技大学成都学院 A kind of bus driving protection system based on image recognition and voice transmission control
CN109739233B (en) * 2018-12-29 2022-06-14 歌尔光学科技有限公司 AGV trolley positioning method, device and system
CN109978078B (en) * 2019-04-10 2022-03-18 厦门元印信息科技有限公司 Font copyright detection method, medium, computer equipment and device
CN110135483A (en) * 2019-04-30 2019-08-16 北京百度网讯科技有限公司 The method, apparatus and relevant device of training image identification model
CN110263196B (en) * 2019-05-10 2022-05-06 南京旷云科技有限公司 Image retrieval method, image retrieval device, electronic equipment and storage medium
CN111612800B (en) * 2020-05-18 2022-08-16 智慧航海(青岛)科技有限公司 Ship image retrieval method, computer-readable storage medium and equipment
CN116150417B (en) * 2023-04-19 2023-08-04 上海维智卓新信息科技有限公司 Multi-scale multi-fusion image retrieval method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101276421A (en) * 2008-04-18 2008-10-01 清华大学 Method and apparatus for recognizing human face combining human face part characteristic and Gabor human face characteristic
CN101615195A (en) * 2009-07-24 2009-12-30 中国传媒大学 A kind of Chinese character image texture characteristic extracting method based on the Fu Shi frequency spectrum
CN101901350A (en) * 2010-07-23 2010-12-01 北京航空航天大学 Characteristic vector-based static gesture recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005208740A (en) * 2004-01-20 2005-08-04 Ricoh Co Ltd Sub-image search apparatus and sub-image search program

Also Published As

Publication number Publication date
CN103049446B (en) 2016-01-27
CN103049446A (en) 2013-04-17

Similar Documents

Publication Publication Date Title
WO2013053320A1 (en) Image retrieval method and device
US20240070214A1 (en) Image searching method and apparatus
JP5463415B2 (en) Method and system for quasi-duplicate image retrieval
WO2021012484A1 (en) Deep learning-based target tracking method and apparatus, and computer readable storage medium
JP5121972B2 (en) Method for expressing color image, device for expressing color image, system for expressing color image, program comprising computer executable instructions, and computer readable medium
US8232996B2 (en) Image learning, automatic annotation, retrieval method, and device
EP2615572A1 (en) Image segmentation based on approximation of segmentation similarity
CN105701514B (en) A method of the multi-modal canonical correlation analysis for zero sample classification
Ahmad et al. Saliency-weighted graphs for efficient visual content description and their applications in real-time image retrieval systems
CN105718940B (en) The zero sample image classification method based on factorial analysis between multiple groups
US10353950B2 (en) Visual recognition using user tap locations
WO2010006367A1 (en) Facial image recognition and retrieval
US10866984B2 (en) Sketch-based image searching system using cell-orientation histograms and outline extraction based on medium-level features
CN107977948B (en) Salient map fusion method facing community image
US8429163B1 (en) Content similarity pyramid
Walia et al. An effective and fast hybrid framework for color image retrieval
CN109165698A (en) A kind of image classification recognition methods and its storage medium towards wisdom traffic
CN112561976A (en) Image dominant color feature extraction method, image retrieval method, storage medium and device
CN103761503A (en) Self-adaptive training sample selection method for relevance feedback image retrieval
Shi et al. Image retrieval using both color and texture features
Yin et al. Assessing photo quality with geo-context and crowdsourced photos
US8942515B1 (en) Method and apparatus for image retrieval
CN106469437B (en) Image processing method and image processing apparatus
WO2017143979A1 (en) Image search method and device
Tang et al. Person re-identification based on multi-scale global feature and weight-driven part feature

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12839538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12839538

Country of ref document: EP

Kind code of ref document: A1