WO2020238515A1 - Image matching method, apparatus, device, medium and program product - Google Patents
Image matching method, apparatus, device, medium and program product
- Publication number
- WO2020238515A1 (PCT/CN2020/086666)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- matched
- feature
- key point
- similarity
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/51—Indexing; Data structures therefor; Storage structures
- G06F16/532—Query formulation, e.g. graphical querying
- G06F16/55—Clustering; Classification
- G06F16/583—Retrieval characterised by using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata automatically derived from the content, using colour
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/23—Clustering techniques
- G06F18/2431—Classification techniques relating to the number of classes; Multiple classes
- G06F18/253—Fusion techniques of extracted features
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/56—Extraction of image or video features relating to colour
- G06V10/757—Matching configurations of points or features
- G06V10/758—Involving statistics of pixels or of feature values, e.g. histogram matching
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/762—Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/764—Pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/806—Fusion of extracted features
- G06V20/30—Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
Definitions
- the present disclosure relates to the field of computer vision technology. Specifically, the present disclosure relates to an image matching method, apparatus, device, computer-readable storage medium, and program product.
- images from various sources can be stored in the image library.
- redundant images may arise in two ways: 1. the maintainer of the image library adds the same or similar images to the image library multiple times; 2. an image uploaded by a user is the same as, or similar to, an existing image in the image library. Multiple repeated or similar images constitute redundant data in the image library and waste a large amount of storage space.
- This application proposes an image matching method, device, equipment, computer readable storage medium and program product.
- an image matching method including: acquiring an image to be matched; for any image in the image library, determining the key point feature similarity between that image and the image to be matched, and determining the color feature similarity between that image and the image to be matched; determining the fusion similarity between that image and the image to be matched according to the determined key point feature similarity and color feature similarity; and then determining, according to the fusion similarity between each of at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
- the above method further includes: before determining the key point feature similarity between any image in the image library and the image to be matched, extracting key point features of multiple sample images in a preset sample library, where the preset sample library includes the image library; clustering the key point features of the multiple sample images in the preset sample library to obtain multiple feature categories; extracting at least one key point feature of any image in the image library; for each key point feature of that image, classifying the key point feature to determine, from the above multiple feature categories, the feature category to which it belongs; and determining the key point feature distribution information of that image according to the feature categories to which its key point features belong.
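The pipeline above (cluster sample-library key point features into categories, then classify each library image's key points and record their distribution) can be sketched as follows. This is an illustrative sketch only: it uses plain k-means with random initialization rather than the hierarchy-based initialization of the claim, and the helper names and array shapes are assumptions, not part of the patent.

```python
import numpy as np

def kmeans(features, k, iters=10, seed=0):
    """Plain k-means over keypoint descriptors (a stand-in for the
    hierarchy-based cluster center initialization described later)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return centers

def assign_categories(descriptors, centers):
    """Classify each key point feature into its nearest feature category."""
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def distribution(descriptors, centers):
    """Key point feature distribution information: a histogram of which
    feature categories an image's descriptors fall into."""
    labels = assign_categories(descriptors, centers)
    return np.bincount(labels, minlength=len(centers))

# toy sample library: 200 random 128-d "SIFT" descriptors
sample = np.random.default_rng(1).random((200, 128)).astype(np.float32)
centers = kmeans(sample, k=8)
hist = distribution(sample[:25], centers)  # one image's 25 descriptors
```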
- the foregoing clustering of the key point features of the multiple sample images in the preset sample library includes: clustering the key point features of the multiple sample images by using a cluster center initialization method based on a hierarchical method.
- determining the key point feature similarity between any image in the image library and the image to be matched includes: extracting at least one key point feature of the image to be matched. For each key point feature of the image to be matched, the key point feature is classified to determine the feature category to which the key point feature belongs from the multiple feature categories. Determine the key point feature distribution information of the image to be matched according to the feature category to which at least one key point feature of the image to be matched belongs. Then, according to the key point feature distribution information of any image in the image library and the key point feature distribution information of the image to be matched, the key point feature similarity between the any image and the image to be matched is determined.
- determining the key point feature similarity between any image and the image to be matched based on the key point feature distribution information of that image and of the image to be matched includes: determining, from the two sets of distribution information, the first number of key point features that are contained in both images and belong to the same feature category, and determining the key point feature similarity between the two images according to the first number.
- the first number is N_same, any image includes a second number N_n of key point features, and the image to be matched includes a third number N_check of key point features, where N_same, N_n and N_check are all integers greater than or equal to 0.
- determining the key point feature similarity between any image and the image to be matched includes: determining the key point feature similarity P_sift between that image and the image to be matched according to the following formula:
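The formula itself does not survive in this extraction. As a hedged illustration only, the sketch below normalizes the shared-category count N_same against the two images' key point counts with a Dice-style ratio; the patent's actual P_sift formula may differ.

```python
def keypoint_similarity(n_same, n_n, n_check):
    """Illustrative normalization of the shared-category count N_same.
    The patent's exact formula is not reproduced in this text, so a
    Dice-style ratio is used here purely as an assumption."""
    if n_n + n_check == 0:
        return 0.0
    return 2.0 * n_same / (n_n + n_check)
```

With this assumed form, two images sharing all their key point categories score 1.0 and disjoint images score 0.0.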
- the above method further includes: before determining the key point feature similarity between any image and the image to be matched, determining the reverse index table according to respective key point feature distribution information of at least one image in the image library.
- the reverse index table is used to characterize the mapping relationship from each of the multiple feature categories to at least one image in the image library.
- the foregoing determination of the first number of key point features that are contained in both any image and the image to be matched and belong to the same feature category includes: setting the initial value of the first number to 0; for each key point feature contained in the image to be matched, searching the reverse index table for images that have a mapping relationship with the feature category to which that key point feature belongs, and, when the images having a mapping relationship with that feature category include the any image, increasing the value of the first number by 1; and taking the value of the first number after all key point features contained in the image to be matched have been traversed.
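The counting procedure above can be sketched with an ordinary dictionary-based inverted index (the `build_inverted_index` and `count_shared` names are illustrative assumptions):

```python
from collections import defaultdict

def build_inverted_index(library_distributions):
    """Reverse index table: maps each feature category to the set of
    library images containing at least one key point of that category."""
    index = defaultdict(set)
    for image_id, categories in library_distributions.items():
        for c in categories:
            index[c].add(image_id)
    return index

def count_shared(query_categories, index, image_id):
    """First number N_same: walk the query image's key point categories
    and count those whose posting list contains the given library image."""
    n_same = 0
    for c in query_categories:
        if image_id in index.get(c, ()):
            n_same += 1
    return n_same

index = build_inverted_index({"img_a": [0, 1, 3], "img_b": [1, 2]})
# query image with key points in categories 1, 3 and 4
print(count_shared([1, 3, 4], index, "img_a"))  # prints 2
```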
- the aforementioned key point features include: scale-invariant feature transform (SIFT) key point features.
- the above method further includes: before determining the color feature similarity between any image in the image library and the image to be matched, acquiring multiple color categories; calculating the color feature value of each pixel in any image, and determining, from the above-mentioned multiple color categories, the color category to which each pixel in that image belongs according to its color feature value; and then determining the color feature of that image according to the color categories to which the multiple pixels in that image belong.
- the above determining the color feature similarity between any image and the image to be matched includes: calculating the color feature value of each pixel in the image to be matched; determining, from the foregoing multiple color categories, the color category to which each pixel in the image to be matched belongs according to its color feature value; and then determining the color features of the image to be matched according to the color categories to which the multiple pixels in the image to be matched belong.
- determining the color feature similarity between any image in the image library and the image to be matched also includes: determining the Euclidean distance between the color feature of any image and the color feature of the image to be matched, and according to The Euclidean distance determines the color feature similarity between any image and the image to be matched.
- the above calculation of the color feature value of each pixel in any image includes: for each pixel in that image, converting the pixel's RGB color-mode value to an HSV value, integerizing the HSV value, and calculating the color feature value of the pixel according to the integerized HSV value.
- the foregoing calculation of the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the pixel's RGB color-mode value to an HSV value, integerizing the HSV value, and determining the color feature value of the pixel according to the integerized HSV value.
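A minimal sketch of the color pathway, assuming particular quantization bin widths that the text does not specify: the 12/4/4 hue/saturation/value bins, the packing scheme, and the distance-to-similarity mapping below are all illustrative assumptions.

```python
import colorsys
import math

def color_feature_value(r, g, b):
    """RGB -> HSV -> integerized HSV -> single packed color value.
    Bin widths are assumptions, not the patent's quantization."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hi = min(int(h * 360) // 30, 11)   # 12 hue bins
    si = min(int(s * 100) // 25, 3)    # 4 saturation bins
    vi = min(int(v * 100) // 25, 3)    # 4 value bins
    return hi * 16 + si * 4 + vi       # one integer color category

def color_histogram(pixels, n_bins=192):
    """Color feature of an image: normalized histogram over the
    color categories of its pixels."""
    hist = [0] * n_bins
    for r, g, b in pixels:
        hist[color_feature_value(r, g, b)] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def color_similarity(h1, h2):
    """Map the Euclidean distance between two color features to a
    similarity in (0, 1]; identical features score 1.0."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
    return 1.0 / (1.0 + dist)
```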
- determining the fusion similarity between any image in the image library and the image to be matched based on the key point feature similarity and color feature similarity between that image and the image to be matched includes: setting a first weight and a second weight, where the sum of the first weight and the second weight is 1; and performing, according to the first weight and the second weight, a weighted summation of the key point feature similarity and the color feature similarity between that image and the image to be matched, to obtain the fusion similarity between that image and the image to be matched.
- the above setting of the first weight includes: determining the first weight α through the following formula, where N_check represents the number of key point features of the image to be matched, L_1 represents a first threshold number of key point features, and L_2 represents a second threshold number of key point features.
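The weight formula also does not survive in this extraction. Assuming, purely for illustration, that the first weight grows linearly with N_check between the thresholds L_1 and L_2 (so that color features dominate for images with few key points), the fusion step might look like:

```python
def first_weight(n_check, l1, l2):
    """Illustrative piecewise weight: an assumption, since the patent's
    exact formula for the first weight is not reproduced here."""
    if n_check <= l1:
        return 0.0
    if n_check >= l2:
        return 1.0
    return (n_check - l1) / (l2 - l1)

def fusion_similarity(p_sift, p_color, n_check, l1=20, l2=200):
    """Weighted sum of key point and color similarities; the two
    weights sum to 1 as required by the claim."""
    alpha = first_weight(n_check, l1, l2)        # first weight
    return alpha * p_sift + (1 - alpha) * p_color  # second weight = 1 - alpha
```

The threshold values 20 and 200 are placeholders, not the patent's L_1 and L_2.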
- determining whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched includes: sorting the fusion similarities between the images and the image to be matched in descending order to obtain a fusion similarity sequence, and then taking the images whose fusion similarity in the sequence is greater than a fusion similarity threshold as images that match the image to be matched.
- the above method further includes: when there is no image matching the image to be matched in the image library, adding the image to be matched to the image library.
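The sort-threshold-admit logic of the two preceding paragraphs can be sketched as follows (the function names and the 0.8 threshold are illustrative assumptions):

```python
def find_matches(fusion_scores, threshold):
    """Sort fusion similarities in descending order and keep the
    images whose score exceeds the fusion similarity threshold."""
    ranked = sorted(fusion_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [img for img, score in ranked if score > threshold]

def add_if_new(image, fusion_scores, library, threshold=0.8):
    """If no library image matches, admit the new image to the library."""
    matches = find_matches(fusion_scores, threshold)
    if not matches:
        library.append(image)
    return matches

library = ["a", "b"]
matches = add_if_new("c", {"a": 0.95, "b": 0.3}, library)
print(matches)  # prints ['a']: image "a" matches, so "c" is not added
```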
- an image matching device including: a first processing module, a second processing module, a third processing module, and a fourth processing module.
- the first processing module is used to obtain the image to be matched.
- the second processing module is used to determine the similarity of key features between any image in the image library and the image to be matched, and determine the similarity of color features between any image and the image to be matched .
- the third processing module is configured to determine the fusion similarity between the any image and the image to be matched according to the determined key point feature similarity and color feature similarity.
- the fourth processing module is used for determining whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
- an electronic device including: at least one processor, a memory, and a bus.
- the bus is used to connect the processor and the memory.
- the memory is used to store operation instructions.
- the processor is configured to execute the image matching method of any one of the foregoing embodiments by calling an operation instruction.
- a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the image matching method of any one of the foregoing embodiments is implemented.
- a computer program product including executable instructions, which, when executed by a processor, implement the image matching method of any of the foregoing embodiments.
- when matching the image to be matched, the image matching method first determines the key point feature similarity and color feature similarity between any image in the image library and the image to be matched, and then determines the fusion similarity between that image and the image to be matched based on the determined key point feature similarity and color feature similarity. Whether an image matching the image to be matched exists in the image library is then judged according to the fusion similarity between each image in the image library and the image to be matched, achieving the technical effect of searching for images (images in the image library) by image (the image to be matched).
- since the fusion similarity, composed of the key point feature similarity and the color feature similarity, is used for the matching judgment, features of multiple dimensions of the image are comprehensively considered, which can effectively improve the matching accuracy. Existing images in the image library that are the same as or similar to the image to be matched can thus be found quickly and effectively, avoiding repeated addition of the same or similar images and thereby avoiding waste of storage space.
- FIGS. 1A and 1B are schematic diagrams of the system architecture to which the image matching method and device provided by the embodiments of the disclosure are applied.
- FIG. 2 is a schematic flowchart of an image matching method provided by an embodiment of the disclosure.
- FIG. 3 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of key point feature extraction and key point feature similarity calculation provided by embodiments of the disclosure.
- FIG. 5 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of color feature extraction and color feature similarity calculation provided by embodiments of the disclosure.
- FIG. 7 is a schematic structural diagram of an image matching device provided by an embodiment of the disclosure.
- FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
- images from various sources can be stored in the image library.
- redundant images may arise in two ways: 1. the maintainer of the image library adds the same or similar images to the image library multiple times; 2. an image uploaded by a user is the same as, or similar to, an existing image in the image library. Multiple identical or similar images constitute redundant data in the image library and waste a large amount of storage space.
- digital paintings, calligraphy, photos and other images are stored in the image library, and electronic devices are used to display the images in the image library for users to view various paintings, calligraphy, photos, etc.
- the image library can be set on the electronic device side or on the server side. In some cases, the same or similar images are repeatedly added to the image library. Because digitized painting works, calligraphy works, photos, etc. occupy a large amount of storage space, a large amount of storage space is wasted.
- the system architecture 100 may include an electronic device 101, a network 102, and a server 103.
- the network 102 is a medium used to provide a communication link between the electronic device 101 and the server 103.
- the electronic device 101 may be, for example, a smart phone, a tablet computer, a smart TV, a portable computer, a desktop computer, etc., which are not limited here.
- the server 103 can provide various background services for the electronic device 101, for example, receiving a request message from the electronic device 101, analyzing and processing the received request message, and feeding back a corresponding response result to the electronic device 101.
- the image matching method according to the embodiment of the present disclosure may be implemented in the electronic device 101, and accordingly, the image matching apparatus according to the embodiment of the present disclosure may be provided in the electronic device 101.
- the image matching method according to the embodiment of the present disclosure may also be implemented in the server 103, and accordingly, the image matching apparatus according to the embodiment of the present disclosure may also be set in the server 103.
- Fig. 1B exemplarily shows the implementation principle of the image matching method and device according to the embodiments of the present disclosure.
- an image library 104 and a feature library 105 are deployed in the above system architecture.
- features may be extracted from all images in the image library 104 (for example, digitized painting works, calligraphy works, photos, etc.) and saved in the feature library 105 first.
- feature extraction is performed on the image 106 to be matched to obtain the feature 107 of the image to be matched.
- the feature 107 of the image to be matched is matched with each feature in the feature library 105 to calculate the feature similarity between the feature 107 of the image to be matched and each feature in the feature library 105.
- the feature similarity is sorted, and the image corresponding to the feature whose feature similarity is higher than a predetermined value is taken as the image 108 that matches the image 106 to be matched, indicating that the image is the same or similar to the image 106 to be matched.
- the maintenance personnel of the image library can manually review the above-mentioned filtered images to delete the repeated or substantially repeated images in the image library 104.
- the embodiment of the present disclosure provides an image matching method. It should be noted that the sequence number of each step in the following method is only used to indicate the step for description, and should not be regarded as indicating the execution order of the various steps. Unless explicitly indicated, the method does not need to be executed exactly in the order shown.
- the schematic flow chart of the method is shown in FIG. 2, for example, the method may include: step S210 to step S240.
- Step S210 Obtain the image to be matched.
- Step S220 For any image in the image library, determine the key point feature similarity between that image and the image to be matched, and determine the color feature similarity between that image and the image to be matched.
- the key points may include various extreme points in the image, such as corner points, edge points, bright points in dark areas, dark points in bright areas, and so on.
- the key point feature similarity between any image and the image to be matched may include the similarity between the key point feature of the any image and the key point feature of the image to be matched.
- the color feature similarity between any image and the image to be matched may include the similarity between the color feature of any image and the color feature of the image to be matched.
- the color characteristics of the image can be determined based on various color models of the image.
- Step S230 Determine the fusion similarity between any image and the image to be matched based on the determined similarity of the key point feature and the similarity of the color feature.
- Step S240 Determine whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
- when matching the image to be matched, the image matching method first determines the key point feature similarity and color feature similarity between any image in the image library and the image to be matched, and then determines the fusion similarity between that image and the image to be matched based on the determined key point feature similarity and color feature similarity. Whether an image matching the image to be matched exists in the image library is then judged according to the fusion similarity between each image in the image library and the image to be matched, achieving the technical effect of searching for images (images in the image library) by image (the image to be matched).
- since the fusion similarity, composed of the key point feature similarity and the color feature similarity, is used for the matching judgment, features of multiple dimensions of the image are comprehensively considered, which can effectively improve the matching accuracy. Existing images in the image library that are the same as or similar to the image to be matched can thus be found quickly and effectively, avoiding repeated addition of the same or similar images and thereby avoiding waste of storage space.
- Fig. 3 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure.
- the image matching method shown in FIG. 3 may further include steps S310 to S350 for determining, in the above step S220, the key point feature similarity between any image in the image library and the image to be matched.
- Step S310 Extract key point features of multiple sample images in the preset sample library.
- the preset sample library is composed of a large number of sample images, and may include the image library.
- the aforementioned key point feature may be a SIFT (scale-invariant feature transform) key point feature.
- Step S320 clustering the key point features of the multiple sample images in the preset sample library to obtain multiple feature categories.
- For example, hierarchical clustering may be performed on the SIFT features of each sample image in the preset sample library using HIKM (Hierarchical K-Means, a cluster-center initialization method based on a hierarchical approach); each resulting feature category can be called an HIKM category.
- Step S330 Extract at least one key point feature of any image in the image library.
- Step S340 For each key point feature of any image, classify the key point feature to determine the feature category to which the key point feature belongs from the multiple feature categories.
- HIKM classification is performed on the SIFT key point features of each image in the image library to obtain the HIKM classification to which each SIFT key point feature in each image belongs.
- Step S350 Determine the key point feature distribution information of any image in the image library according to the feature category to which at least one key point feature of any image in the image library belongs.
- SIFT key point feature extraction is performed on 70,000 images in the preset sample library 401 (step S310) to obtain 60 million SIFT key points, that is, 60 million SIFT key point features.
- HIKM clustering is performed on the 60,000,000 SIFT key point features (step S320), and a classification model 402 containing 125,000 feature categories (HIKM categories) is established.
- In step S330, SIFT key point feature extraction is performed on each image in the image library 403 to obtain M SIFT key points of each image, that is, M SIFT key point features, where M is a positive integer.
- The number M of key point features contained in different images in the image library can be the same or different, and there is no restriction here.
- All the images in the image library may be a part of the images in the preset sample library, so that the clustering results for the key point features in the preset sample library can cover the classification results of the key point features in the image library.
- HIKM classification is performed on the M SIFT key point features of each image respectively to determine the feature category to which the M SIFT key point features belong (step S340).
- For example, the feature category of the first key point feature is C_1, the feature category of the second key point feature is C_2, ..., and the feature category of the M-th key point feature is C_M.
- The feature categories C_1 to C_M all belong to the above 125,000 feature categories, and the number of unique categories among C_1 to C_M is Q, where Q is an integer greater than or equal to 1 and less than or equal to M.
- The key point feature distribution information of each image can be obtained according to the feature categories C_1 to C_M (step S350).
- The key point feature distribution information can be represented by an M-dimensional vector. If different feature categories are represented by different integers, the key point feature distribution information can be represented as an M-dimensional integer array.
- the reverse index table 404 can be established according to the key point feature distribution information of each image in the image library for subsequent matching, which will be described in detail below.
- SIFT key point feature extraction is performed on the 0th image (0.jpeg) to obtain 4 SIFT key points, that is, 4 SIFT key point features.
- The feature category to which the first key point feature belongs is the first category (indicated by "0"), the feature category to which the second key point feature belongs is the second category (indicated by "1"), the feature category to which the third key point feature belongs is the third category (indicated by "2"), and the feature category to which the fourth key point feature belongs is the fourth category (indicated by "3"); the M-dimensional integer array of the 0th image is therefore {0, 1, 2, 3}.
- The M-dimensional integer array of the first image (1.jpeg) is {0, 2, 3, 124999}, where "124999" represents the 125,000th feature category.
- Determining the key point feature similarity between any image in the image library and the image to be matched in the above step S220 may include the following steps.
- Step S221 Extract at least one key point feature of the image to be matched.
- the extracted key point feature may be a SIFT key point feature, for example.
- Step S222 For each key point feature of the image to be matched, the key point feature is classified to determine the feature category to which the key point feature belongs from the multiple feature categories.
- Step S223 Determine the key point feature distribution information of the image to be matched according to the feature category to which at least one key point feature of the image to be matched belongs.
- The SIFT features of the image to be matched 405 are extracted (step S221) to obtain N SIFT key points, that is, N SIFT key point features, where N is a positive integer.
- HIKM classification is performed on the N SIFT key point features respectively to determine the feature category to which the N SIFT key point features belong (step S222).
- the key point feature distribution information of the image to be matched can be obtained according to the feature category to which the N SIFT key point features belong (step S223).
- the key point feature distribution information can be represented by an N-dimensional vector.
- Each SIFT key point feature in this example is classified as an integer between 0 and 124999 after HIKM classification, and the key point feature distribution information of the image to be matched can therefore be represented as an N-dimensional integer array.
- Step S224 Determine the key point feature similarity between any image and the image to be matched according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched.
- The above step S224 may include: determining, according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched, the first number of key point features that are contained in both the any image and the image to be matched and belong to the same feature category; and determining, according to the first number, the key point feature similarity between the any image and the image to be matched. In the same way, the key point feature similarity between each image in the image library and the image to be matched can be determined.
- the SIFT key point feature similarity between the image to be matched and any image in the image library can be determined according to the following formula (1):
- P_sift represents the SIFT key point feature similarity between the image to be matched and any image in the image library;
- N_same represents the first number of SIFT key point features that the image to be matched and the any image both contain and that belong to the same feature category (hereinafter referred to as the same key point features contained in the image to be matched and the any image);
- N_check represents the total number of SIFT key point features contained in the image to be matched; and
- N_n represents the total number of SIFT key point features contained in the any image in the image library.
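Formula (1) itself does not survive in this text (it appeared as an image in the original publication). A plausible reconstruction consistent with the variable definitions above, stated purely as an assumption rather than the patent's exact expression, is a normalized count of shared key point features:

```latex
P_{\mathrm{sift}} = \frac{2\,N_{\mathrm{same}}}{N_{\mathrm{check}} + N_{n}}
```

Under this form, P_sift is 0 when the two images share no feature categories and approaches 1 as most key points of both images fall into common categories.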
- The process of determining the number of the same key point features contained in the image to be matched and each image in the image library may include: comparing the key point feature distribution information of the image to be matched with the key point feature distribution information of each image in the image library, so as to determine the key point features that are contained in both and belong to the same feature category.
- That is, the N-dimensional integer array of the image to be matched is compared with the M-dimensional integer array of each image in the image library to determine the feature categories they share; the number of occurrences of those shared feature categories is then counted to determine the number of the same key point features between the image to be matched and each image in the image library.
- An inverted index table can be established based on the key point feature distribution information of each image for subsequent matching.
- The inverted index table is used to characterize the mapping relationship between each of the multiple feature categories obtained from the clustering and at least one image in the image library. According to the key point feature distribution information of the image to be matched and the inverted index table, the key point feature similarity between the image to be matched and each image in the image library can be determined relatively quickly.
- the inverted index table includes 125,000 feature categories.
- The numbers of the 125,000 feature categories in the inverted index table are integers ranging from 0 to 124999.
- Each integer in the M-dimensional integer array of an image corresponds to one of the 125,000 feature category numbers in the inverted index table, and the image is added under the corresponding feature categories according to its M-dimensional integer array; for example, the image 1.jpeg, whose M-dimensional integer array is {0, 2, 3, 124999}, is added under the category numbered "0" (as well as the categories numbered 2, 3, and 124999) of the inverted index table.
- The category numbered 0 corresponds to the images 0.jpeg, 1.jpeg, 2.jpeg, and 5.jpeg, which means that these images all have SIFT key point features belonging to the feature category numbered 0.
- The establishment of the inverted index table improves the speed of image matching and retrieval and shortens the matching and retrieval time.
- In step S2241, according to the key point feature distribution information (such as the N-dimensional integer array) of the image to be matched and the inverted index table, the number of the same SIFT key point features contained in the image to be matched and each image in the image library is determined. Then, in step S2242, the SIFT key point feature similarity between the image to be matched and each image in the image library is determined according to that number.
- Determining the first number of key point features that are contained in both any image in the image library and the image to be matched and belong to the same feature category may include: for any image in the image library, setting the initial value of the first number to 0; for each key point feature contained in the image to be matched, searching the inverted index table for images that have a mapping relationship with the feature category to which the key point feature belongs, and, when those images include the any image, adding 1 to the value of the first number; and so on, until all the key point features contained in the image to be matched have been traversed and the final value of the first number is determined. The same applies to the other images in the image library, which will not be repeated here.
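The counting procedure above can be sketched in Python with a plain dictionary as the inverted index; the image names and category numbers are illustrative only, not taken from the patent:

```python
from collections import defaultdict

# Hypothetical per-image key point category lists (the "M-dimensional integer
# array" distribution information); names and values are illustrative.
library = {
    "0.jpeg": [0, 1, 2, 3],
    "1.jpeg": [0, 2, 3, 124999],
}

# Build the inverted index: feature category -> images that contain a key
# point classified into that category.
inverted_index = defaultdict(list)
for image, categories in library.items():
    for c in categories:
        inverted_index[c].append(image)

def count_same_keypoints(query_categories, inverted_index):
    """For each library image, count the query's key point features whose
    feature category is also present in that image (the "first number")."""
    counts = defaultdict(int)
    for c in query_categories:
        for image in inverted_index.get(c, []):
            counts[image] += 1
    return dict(counts)

# Query image with key point categories {0, 2, 3, 7}; category 7 matches
# nothing, the other three match both library images.
counts = count_same_keypoints([0, 2, 3, 7], inverted_index)
```

Each query key point touches only the images listed under its category, which is why a single pass over the index is faster than comparing the query against every image in turn.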
- For example, a 15,000-dimensional array (one entry per image in a library of 15,000 images) saves the number N_same of the same key point features contained in each image in the image library and the image to be matched.
- the above formula (1) can be used to calculate the SIFT key point feature similarity between each image in the image library and the image to be matched.
- The inverted index table is used to shorten the matching and retrieval time; for example, experiments show that the retrieval time over 15,000 images in the image library is only 0.6 ms (milliseconds).
- Fig. 5 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure.
- The image matching method shown in FIG. 5 may further include steps S510 to S540.
- Step S510 Obtain multiple color categories.
- Step S520 Calculate the color feature value of each pixel in any image.
- The foregoing process of calculating the color feature value of each pixel in the any image may include: for each pixel in the any image, converting the color-mode RGB (Red, Green, Blue) value of the pixel to an HSV (Hue, Saturation, Value) value, performing integerization processing on the HSV value, and calculating the color feature value of the pixel according to the integerized HSV value.
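The conversion-and-integerization step can be sketched with Python's standard `colorsys` module. The exact bin boundaries below are assumptions for illustration; the text only states that H is split into 8 parts and S and V into 3 parts each:

```python
import colorsys

def pixel_color_value(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to HSV and quantize it
    into one of 72 color categories (8 H bins x 3 S bins x 3 V bins).
    The uniform bin boundaries are an illustrative assumption."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_bin = min(int(h * 8), 7)   # 8 hue bins, indices 0..7
    s_bin = min(int(s * 3), 2)   # 3 saturation bins, indices 0..2
    v_bin = min(int(v * 3), 2)   # 3 value bins, indices 0..2
    return h_bin * 9 + s_bin * 3 + v_bin  # integer in [0, 71]

z = pixel_color_value(255, 0, 0)  # pure red
```

The `min(..., 7)` and `min(..., 2)` clamps keep fully saturated or fully bright pixels (where the scaled value lands exactly on the upper boundary) inside the last bin.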
- Step S530 According to the color characteristic value of each pixel in the any image, determine the color category to which each pixel in the any image belongs from the above-mentioned multiple color categories.
- Step S540 Determine the color feature of any image according to the color category to which the multiple pixels in the any image belong.
- step S220 determining the color feature similarity between any image and the image to be matched may include the following steps.
- Step S225 Calculate the color feature value of each pixel in the image to be matched.
- The above calculation of the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the color-mode RGB value of the pixel to an HSV value, integerizing the HSV value, and determining the color feature value of the pixel according to the integerized HSV value.
- Step S226 According to the color characteristic value of each pixel in the image to be matched, the color category to which each pixel in the image to be matched belongs is determined from the foregoing multiple color categories.
- Step S227 Determine the color feature of the image to be matched according to the color category to which the multiple pixels in the image to be matched belong.
- Step S228 Determine the color feature similarity between any image in the image library and the image to be matched according to the color feature of the any image and the color feature of the image to be matched.
- The above step S228 may include: determining the Euclidean distance between the color feature of any image in the image library and the color feature of the image to be matched, and then determining the color feature similarity between the image to be matched and the any image according to the Euclidean distance.
- In step S610, the RGB value of each pixel in each image in the image library 601 is converted into an HSV value; then, in step S620, the HSV color feature of each image is determined according to the HSV values of its pixels.
- The HSV color features of at least one image in the image library can be used to establish the HSV color feature library 602.
- In step S630, the RGB values of the image to be matched 603 are converted into HSV values; then, in step S640, the HSV color feature of the image to be matched is determined according to the HSV value of each pixel in the image to be matched.
- In step S650, a similarity calculation is performed based on the HSV color feature of the image to be matched and the HSV color feature of each image in the HSV feature library, to obtain the HSV color feature similarity 604 between each image in the image library and the image to be matched.
- Color is one of the most direct and important features for distinguishing images, and the HSV color feature is close to human visual perception; calculating the color feature similarity based on the HSV color feature can therefore more accurately reflect the similarity of color distribution between images.
- the calculation formula for converting RGB value to HSV value is as follows:
- R, G, and B are the red, green, and blue coordinate values of a color, and the coordinate values are real numbers between 0 and 1.
- The resulting HSV values are then integerized.
- For example, the H space can be divided into 8 parts, the S space into 3 parts, and the V space into 3 parts, so that the combined color feature value Z has 8 × 3 × 3 = 72 possible values in total; these 72 value spaces correspond to the multiple color categories mentioned above, that is, 72 color categories are obtained.
- The color category to which each pixel belongs is then determined among the 72 color categories; for example, when the color feature value of a pixel is 5, the pixel is determined to belong to the fifth color category. It can thus be determined that, in an image, the number of pixels belonging to the first color category is W_1, the number belonging to the second color category is W_2, ..., and the number belonging to the 72nd color category is W_72.
- The HSV color feature of the image can then be: {W_1/W, W_2/W, ..., W_72/W}, where W is the total number of pixels in the image.
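The color feature above is a normalized 72-bin histogram, which can be sketched as follows (the helper name is illustrative):

```python
from collections import Counter

def hsv_color_feature(pixel_categories):
    """Given the color category (an integer 0-71) of every pixel in an
    image, return the normalized 72-dimensional HSV color feature
    {W_1/W, W_2/W, ..., W_72/W}."""
    total = len(pixel_categories)
    counts = Counter(pixel_categories)
    return [counts.get(k, 0) / total for k in range(72)]

# A toy 4-pixel image: two pixels in category 0, one each in 5 and 71.
feature = hsv_color_feature([0, 0, 5, 71])
```

Because the bin counts are divided by the total pixel count W, the feature is independent of image resolution, which is what makes Euclidean distances between features of differently sized images meaningful.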
- The HSV color feature similarity P_color can be calculated using the normalized Euclidean distance as follows:
- Z_i represents the HSV color feature of the i-th image in the image library, and Z_check represents the HSV color feature of the image to be matched.
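A minimal sketch of the color similarity computation. The exact formula is not reproduced in this text, so mapping the Euclidean distance d to a similarity as 1/(1+d) is an assumption for illustration, not the patent's expression:

```python
import math

def color_similarity(z_i, z_check):
    """Color feature similarity between two normalized 72-dimensional HSV
    color features, derived from their Euclidean distance. The mapping
    d -> 1/(1+d) is an assumed monotone-decreasing choice."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(z_i, z_check)))
    return 1.0 / (1.0 + d)

# Identical features give distance 0 and therefore similarity 1.
p_color = color_similarity([1.0] + [0.0] * 71, [1.0] + [0.0] * 71)
```

Any strictly decreasing function of the distance would preserve the ranking of candidate images; only the absolute threshold values would change.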
- The foregoing embodiments determine, based on the features of the image to be matched and the features of each image in the image library, the scale-invariant feature transform (SIFT) key point feature similarity and the HSV color feature similarity between each image in the image library and the image to be matched.
- The SIFT key point feature similarity and the HSV color feature similarity are then fused for the image to be matched and each image in the image library, to obtain the fusion similarity between the image to be matched and each image in the image library.
- the first weight can be determined by the following formula (6):
- ⁇ represents the first weight
- N check represents the number of key point features of the image to be matched
- L 1 represents the first threshold number of key point features
- L 2 represents the second threshold number of key point features.
- the key point feature similarity and the color feature similarity are fused to obtain the fusion similarity.
- The fusion similarity can be calculated as a weighted sum of the two similarities: P_final = α · P_sift + β · P_color, where α + β = 1, and α and β are respectively the first weight and the second weight of the two similarities in the fusion similarity.
- The color feature can solve the problem of low retrieval accuracy for images with simple textures, and the complexity of the texture is characterized by the number of key point features.
- The number of key points extracted from an image with complex texture will generally be more than 600, while the number from an image with simple texture will be less than 400.
- When 400 < N_check < 600, the larger N_check is, the larger α is; when L_1 is 400 and L_2 is 600, the calculation formula (6) of α takes the form shown in the following formula (8):
- ⁇ makes P final both efficient and high accuracy for the retrieval of images with simple textures, and can be adaptively adjusted between images with complex textures and images with simple textures, making P final with adaptive performance, extending the application Scope and application scenarios.
- According to the respective fusion similarities between the image to be matched and each image in the image library, it can be determined whether there is an image in the image library that matches the image to be matched.
- For example, the fusion similarities between each of the at least one image in the image library and the image to be matched may be sorted in descending order to obtain a fusion similarity sequence, which is then compared with a fusion similarity threshold; if a fusion similarity greater than the threshold exists in the sequence, it is determined that an image matching the image to be matched exists in the image library. Specifically, the image whose fusion similarity is greater than the fusion similarity threshold is taken as the image that matches the image to be matched. Further, when no image in the image library matches the image to be matched, the image to be matched can be stored in the image library; when multiple images in the image library match the image to be matched, the duplicate images in the image library can be deleted.
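The matching decision described above can be sketched as follows; the image names and the threshold value are illustrative:

```python
def find_matches(fusion_scores, threshold):
    """Sort fusion similarities in descending order and return the
    images whose fusion similarity exceeds the threshold."""
    ranked = sorted(fusion_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [image for image, score in ranked if score > threshold]

# An empty result would mean the query image is new and may be stored;
# multiple results would mean duplicates exist in the library.
matches = find_matches({"0.jpeg": 0.92, "1.jpeg": 0.35, "2.jpeg": 0.71}, 0.7)
```
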
- the scale-invariant feature transform SIFT key point feature similarity and the color model HSV color feature similarity between each image in the image library and the image to be matched are determined.
- the SIFT key point feature similarity and HSV color feature similarity fusion are performed on the image to be matched with each image in the library to obtain the fusion similarity between the image to be matched and each image in the image library.
- the embodiments of the present disclosure also provide an image matching device.
- the structure diagram of the image matching device is shown in FIG. 7.
- The image matching device 700 includes a first processing module 701, a second processing module 702, a third processing module 703, and a fourth processing module 704.
- the first processing module 701 is used to obtain an image to be matched.
- The second processing module 702 is used for determining, for any image in the image library, the key point feature similarity between the any image and the image to be matched, and determining the color feature similarity between the any image and the image to be matched.
- The third processing module 703 is configured to determine the fusion similarity between the any image and the image to be matched according to the determined key point feature similarity and color feature similarity.
- the fourth processing module 704 is configured to determine whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
- For details of the image matching device provided by the embodiment of the present disclosure, please refer to the foregoing image matching method.
- the image matching device provided in the embodiment of the present disclosure can achieve the same beneficial effects as the foregoing image matching method, and will not be repeated here.
- the embodiments of the present disclosure also provide an electronic device.
- the structural diagram of the electronic device is shown in FIG. 8.
- The electronic device 800 includes at least one processor 801, a memory 802, and a bus 803; the processor 801 is electrically connected to the memory 802. The memory 802 is configured to store at least one computer-executable instruction, and the processor 801 is configured to execute the at least one computer-executable instruction, so as to execute the steps of any image matching method provided in any one of the embodiments or optional implementations of the present disclosure.
- The processor 801 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capabilities, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
- an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored.
- When the computer program is executed by a processor, the image matching method provided in the embodiments of the present disclosure is implemented.
- Computer-readable media include but are not limited to any type of disk (including floppy disks, hard disks, CD-ROMs, and magneto-optical disks), ROM, RAM, EPROM (Erasable Programmable Read-Only Memory), EEPROM, flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).
- the computer-readable storage medium provided in this embodiment has the same inventive concept and the same beneficial effects as the foregoing embodiments, and will not be repeated here.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Library & Information Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Claims (18)
- An image matching method, comprising: acquiring an image to be matched; for any image in an image library, determining a key point feature similarity between the any image and the image to be matched, and determining a color feature similarity between the any image and the image to be matched; determining a fusion similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity; and determining, according to the fusion similarity between each of at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
- The method according to claim 1, further comprising: before the determining of the key point feature similarity between the any image and the image to be matched, extracting key point features of multiple sample images in a preset sample library, the preset sample library comprising the image library; clustering the key point features of the multiple sample images to obtain multiple feature categories; extracting at least one key point feature of the any image; for each key point feature of the any image, classifying the key point feature to determine, from the multiple feature categories, the feature category to which the key point feature belongs; and determining key point feature distribution information of the any image according to the feature categories to which the at least one key point feature of the any image respectively belongs.
- The method according to claim 2, wherein the determining of the key point feature similarity between the any image and the image to be matched comprises: extracting at least one key point feature of the image to be matched; for each key point feature of the image to be matched, classifying the key point feature to determine, from the multiple feature categories, the feature category to which the key point feature belongs; determining key point feature distribution information of the image to be matched according to the feature categories to which the at least one key point feature of the image to be matched respectively belongs; and determining the key point feature similarity between the any image and the image to be matched according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched.
- The method according to claim 3, wherein the determining of the key point feature similarity between the any image and the image to be matched according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched comprises: determining, according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched, a first number of key point features that are contained in both the any image and the image to be matched and belong to the same feature category; and determining the key point feature similarity between the any image and the image to be matched according to the first number.
- The method according to claim 4 or 5, further comprising: before the determining of the key point feature similarity between the any image and the image to be matched, determining an inverted index table according to the key point feature distribution information of each of the at least one image in the image library, the inverted index table being used to characterize a mapping relationship from each of the multiple feature categories to the at least one image; wherein the determining of the first number of key point features that are contained in both the any image and the image to be matched and belong to the same feature category comprises: setting an initial value of the first number to 0; for each key point feature contained in the image to be matched, searching the inverted index table for images that have a mapping relationship with the feature category to which the key point feature belongs, and, when the images having the mapping relationship include the any image, adding 1 to the value of the first number; and determining the value of the first number after traversing the key point features contained in the image to be matched.
- The method according to any one of claims 1 to 6, wherein the key point features comprise scale-invariant feature transform (SIFT) key point features.
- The method according to any one of claims 1 to 7, further comprising: before the determining of the color feature similarity between the any image and the image to be matched, acquiring multiple color categories; calculating a color feature value of each pixel in the any image; determining, from the multiple color categories, the color category to which each pixel in the any image belongs according to the color feature value of each pixel in the any image; and determining a color feature of the any image according to the color categories to which multiple pixels in the any image respectively belong.
- The method according to claim 8, wherein the determining of the color feature similarity between the any image and the image to be matched comprises: calculating a color feature value of each pixel in the image to be matched; determining, from the multiple color categories, the color category to which each pixel in the image to be matched belongs according to the color feature value of each pixel in the image to be matched; and determining a color feature of the image to be matched according to the color categories to which multiple pixels in the image to be matched respectively belong.
- The method according to claim 9, wherein the determining of the color feature similarity between the any image and the image to be matched further comprises: determining a Euclidean distance between the color feature of the any image and the color feature of the image to be matched; and determining the color feature similarity between the any image and the image to be matched according to the Euclidean distance.
- The method according to any one of claims 9 to 10, wherein the calculating of the color feature value of each pixel in the any image comprises: for each pixel in the any image, converting the color-mode RGB value of the pixel to an HSV value, performing integerization processing on the HSV value, and calculating the color feature value of the pixel according to the integerized HSV value; and the calculating of the color feature value of each pixel in the image to be matched comprises: for each pixel in the image to be matched, converting the color-mode RGB value of the pixel to an HSV value, performing integerization processing on the HSV value, and determining the color feature value of the pixel according to the integerized HSV value.
- The method according to any one of claims 1 to 11, wherein the determining of the fusion similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity comprises: setting a first weight and a second weight, wherein a sum of the first weight and the second weight is 1; and performing, according to the first weight and the second weight, a weighted summation of the key point feature similarity between the any image and the image to be matched and the color feature similarity between the any image and the image to be matched, to obtain the fusion similarity between the any image and the image to be matched.
- The method according to any one of claims 1 to 13, wherein the determining, according to the fusion similarity between each of the at least one image in the image library and the image to be matched, of whether an image matching the image to be matched exists in the image library comprises: sorting the fusion similarities between each of the at least one image in the image library and the image to be matched in descending order to obtain a fusion similarity sequence; and taking the image corresponding to a fusion similarity in the fusion similarity sequence that is greater than a fusion similarity threshold as the image matching the image to be matched.
- An image matching device, comprising: a first processing module for acquiring an image to be matched; a second processing module for determining, for any image in an image library, a key point feature similarity between the any image and the image to be matched, and determining a color feature similarity between the any image and the image to be matched; a third processing module for determining a fusion similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity; and a fourth processing module for determining, according to the fusion similarity between each of at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
- An electronic device, comprising: at least one processor and a memory; the memory being configured to store a computer program; and the processor being configured to execute the method according to any one of claims 1 to 14 by invoking the computer program.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 14.
- A computer program product comprising executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/056,900 US11886492B2 (en) | 2019-05-29 | 2020-04-24 | Method of matching image and apparatus thereof, device, medium and program product |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910457984.9 | 2019-05-29 | ||
CN201910457984.9A CN110188217A (zh) | 2019-05-29 | 2019-05-29 | 图像查重方法、装置、设备和计算机可读储存介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020238515A1 true WO2020238515A1 (zh) | 2020-12-03 |
Family
ID=67718628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/086666 WO2020238515A1 (zh) | 2019-05-29 | 2020-04-24 | 图像匹配方法、装置、设备、介质和程序产品 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11886492B2 (zh) |
CN (1) | CN110188217A (zh) |
WO (1) | WO2020238515A1 (zh) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902198A (zh) * | 2019-03-11 | 2019-06-18 | 京东方科技集团股份有限公司 | 一种以图搜图的方法、装置及应用系统 |
CN110188217A (zh) * | 2019-05-29 | 2019-08-30 | 京东方科技集团股份有限公司 | 图像查重方法、装置、设备和计算机可读储存介质 |
CN112749131A (zh) * | 2020-06-11 | 2021-05-04 | 腾讯科技(上海)有限公司 | 信息消重处理方法、装置及计算机可读存储介质 |
CN111930985A (zh) * | 2020-07-08 | 2020-11-13 | 泰康保险集团股份有限公司 | 图像检索方法、装置、电子设备及可读存储介质 |
CN113220925B (zh) * | 2021-04-26 | 2022-04-12 | 华南师范大学 | 一种细胞图像查重方法和系统 |
CN113486942A (zh) * | 2021-06-30 | 2021-10-08 | 武汉理工光科股份有限公司 | 一种重复火警判定方法、装置、电子设备及存储介质 |
CN114373089A (zh) * | 2022-01-13 | 2022-04-19 | 上海汉时信息科技有限公司 | 货架商品自主聚类识别方法及装置 |
US11869260B1 (en) * | 2022-10-06 | 2024-01-09 | Kargo Technologies Corporation | Extracting structured data from an image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140099033A1 (en) * | 2012-10-05 | 2014-04-10 | Raytheon Bbn Technologies Corp. | Fast computation of kernel descriptors |
CN104008174A (zh) * | 2014-06-04 | 2014-08-27 | 北京工业大学 | 一种海量图像检索的隐私保护索引生成方法 |
CN106649487A (zh) * | 2016-10-09 | 2017-05-10 | 苏州大学 | 基于兴趣目标的图像检索方法 |
CN108460114A (zh) * | 2018-02-09 | 2018-08-28 | 福州大学 | 一种基于层次注意力模型的图像检索方法 |
CN110188217A (zh) * | 2019-05-29 | 2019-08-30 | 京东方科技集团股份有限公司 | 图像查重方法、装置、设备和计算机可读储存介质 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060136402A1 (en) * | 2004-12-22 | 2006-06-22 | Tsu-Chang Lee | Object-based information storage, search and mining system method |
US7904455B2 (en) * | 2005-11-03 | 2011-03-08 | Fuji Xerox Co., Ltd. | Cascading cluster collages: visualization of image search results on small displays |
US8004576B2 (en) * | 2008-10-31 | 2011-08-23 | Digimarc Corporation | Histogram methods and systems for object recognition |
CN103150330B (zh) * | 2013-01-16 | 2015-12-23 | Central South University | Scanned certificate image retrieval method based on local objects and block matching |
CN108291876B (zh) * | 2014-11-21 | 2022-03-15 | Guy Le Henaff | System and method for detecting the authenticity of products |
CN104504007B (zh) * | 2014-12-10 | 2018-01-30 | Chengdu Pinguo Technology Co., Ltd. | Method and system for obtaining image similarity |
CN106933861A (zh) * | 2015-12-30 | 2017-07-07 | Beijing Datang Gaohong Data Network Technology Co., Ltd. | Cross-camera target retrieval method supporting feature customization |
CN107392215A (zh) * | 2017-08-02 | 2017-11-24 | Focus Technology Co., Ltd. | Duplicate image detection method based on the SIFT algorithm |
-
2019
- 2019-05-29 CN CN201910457984.9A patent/CN110188217A/zh active Pending
-
2020
- 2020-04-24 WO PCT/CN2020/086666 patent/WO2020238515A1/zh active Application Filing
- 2020-04-24 US US17/056,900 patent/US11886492B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110188217A (zh) | 2019-08-30 |
US11886492B2 (en) | 2024-01-30 |
US20210174135A1 (en) | 2021-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020238515A1 (zh) | Image matching method, apparatus, device, medium and program product | |
CN109146892B (zh) | Aesthetics-based image cropping method and apparatus | |
WO2021012526A1 (zh) | Face recognition model training method, face recognition method, apparatus, device and storage medium | |
WO2021164550A1 (zh) | Image classification method and apparatus | |
WO2020253127A1 (zh) | Facial feature extraction model training method, facial feature extraction method, apparatus, device and storage medium | |
AU2014218444B2 (en) | Dynamic feature selection for joint probabilistic recognition | |
Goh et al. | Food-image Classification Using Neural Network Model | |
WO2021012494A1 (zh) | Deep learning-based face recognition method and apparatus, and computer-readable storage medium | |
US9600738B2 (en) | Discriminative embedding of local color names for object retrieval and classification | |
WO2020192112A1 (zh) | Face recognition method and apparatus | |
WO2013053320A1 (zh) | Image retrieval method and apparatus | |
CN105069042A (zh) | Content-based retrieval method for UAV reconnaissance image data | |
WO2021175040A1 (zh) | Video processing method and related apparatus | |
CN110381392B (zh) | Video summary extraction method, system, apparatus and storage medium | |
CN103366178A (zh) | Method and device for color classification of a target image | |
WO2023024413A1 (zh) | Information matching method and apparatus, computer device and readable storage medium | |
WO2020097834A1 (zh) | Feature processing method and apparatus, storage medium and program product | |
CN111553838A (zh) | Model parameter updating method, apparatus, device and storage medium | |
CN112561976A (zh) | Image dominant color feature extraction method, image retrieval method, storage medium and device | |
CN112614110B (zh) | Method, apparatus and terminal device for evaluating image quality | |
CN113762303B (zh) | Image classification method and apparatus, electronic device and storage medium | |
CN116630630B (zh) | Semantic segmentation method, apparatus, computer device and computer-readable storage medium | |
US11869127B2 (en) | Image manipulation method and apparatus | |
JP2018180879A (ja) | Object recognition device, object recognition system, and object recognition method | |
CN112287945A (zh) | Screen crack determination method and apparatus, computer device and computer-readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20813528 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20813528 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 220722) |
|