WO2020238515A1 - Image matching method, apparatus, device, medium and program product - Google Patents

Image matching method, apparatus, device, medium and program product

Info

Publication number
WO2020238515A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
matched
feature
key point
similarity
Prior art date
Application number
PCT/CN2020/086666
Other languages
English (en)
French (fr)
Inventor
Liu Xiangyang (刘向阳)
Liu Tong (刘童)
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd.
Priority to US 17/056,900 (US11886492B2)
Publication of WO2020238515A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/51 — Information retrieval of still image data: indexing; data structures therefor; storage structures
    • G06F 16/532 — Query formulation, e.g. graphical querying
    • G06F 16/55 — Clustering; classification
    • G06F 16/583 — Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/5838 — Retrieval using metadata automatically derived from the content, using colour
    • G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 18/23 — Clustering techniques
    • G06F 18/2431 — Classification techniques: multiple classes
    • G06F 18/253 — Fusion techniques of extracted features
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 10/757 — Matching configurations of points or features
    • G06V 10/758 — Matching involving statistics of pixels or of feature values, e.g. histogram matching
    • G06V 10/761 — Proximity, similarity or dissimilarity measures
    • G06V 10/762 — Pattern recognition or machine learning using clustering
    • G06V 10/764 — Pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/806 — Fusion of extracted features
    • G06V 20/30 — Scenes; scene-specific elements in albums, collections or shared content, e.g. social network photos or video

Definitions

  • the present disclosure relates to the field of computer vision technology, and specifically to an image matching method, apparatus, electronic device, computer-readable storage medium, and program product.
  • images from various sources can be stored in the image library.
  • This may happen, for example, when: 1. the maintainer of the image library adds the same or similar images to the image library multiple times; or 2. an image uploaded by a user is the same as or similar to an existing image in the image library. Repeated or similar images constitute redundant data in the image library and waste a great deal of storage space.
  • This application proposes an image matching method, device, equipment, computer readable storage medium and program product.
  • an image matching method, including: acquiring an image to be matched; for any image in the image library, determining the key point feature similarity between the any image and the image to be matched, and determining the color feature similarity between the any image and the image to be matched; determining the fusion similarity between the any image and the image to be matched according to the determined key point feature similarity and color feature similarity; and then determining, according to the fusion similarity between each of at least one image in the image library and the image to be matched, whether there is an image in the image library that matches the image to be matched.
  • the above method further includes: before determining the key point feature similarity between any image in the image library and the image to be matched, extracting key point features of multiple sample images in a preset sample library, where the preset sample library includes the image library; clustering the key point features of the multiple sample images in the preset sample library to obtain multiple feature categories; extracting at least one key point feature of the any image in the image library; for each key point feature of the any image, classifying the key point feature to determine, from the above multiple feature categories, the feature category to which it belongs; and determining the key point feature distribution information of the any image according to the feature categories to which its at least one key point feature belongs.
  • the foregoing clustering of the key point features of the multiple sample images in the preset sample library includes: clustering the key point features of the multiple sample images by using a cluster center initialization method based on a hierarchical method.
  • determining the key point feature similarity between any image in the image library and the image to be matched includes: extracting at least one key point feature of the image to be matched. For each key point feature of the image to be matched, the key point feature is classified to determine the feature category to which the key point feature belongs from the multiple feature categories. Determine the key point feature distribution information of the image to be matched according to the feature category to which at least one key point feature of the image to be matched belongs. Then, according to the key point feature distribution information of any image in the image library and the key point feature distribution information of the image to be matched, the key point feature similarity between the any image and the image to be matched is determined.
  • determining the key point feature similarity between the any image and the image to be matched based on the key point feature distribution information of the any image and that of the image to be matched includes: determining, according to the two pieces of key point feature distribution information, a first number of key point features that are included in both the any image and the image to be matched and belong to the same feature category, and determining the key point feature similarity between the any image and the image to be matched according to the first number.
  • the first number is N_same, the any image includes a second number N_n of key point features, and the image to be matched includes a third number N_check of key point features, where N_same, N_n and N_check are all integers greater than or equal to 0.
  • determining the key point feature similarity between the any image and the image to be matched includes: determining the key point feature similarity P_sift between the any image and the image to be matched according to the following formula:
  • the above method further includes: before determining the key point feature similarity between any image and the image to be matched, determining the reverse index table according to respective key point feature distribution information of at least one image in the image library.
  • the reverse index table is used to characterize the mapping relationship from each of the multiple feature categories to at least one image in the image library.
  • the foregoing determination of the first number of key point features that are included in both the any image and the image to be matched and belong to the same feature category includes: setting the initial value of the first number to 0; for each key point feature included in the image to be matched, searching the reverse index table for images that have a mapping relationship with the feature category to which the key point feature belongs, and when those images include the any image, increasing the value of the first number by 1; and taking the value of the first number after all key point features contained in the image to be matched have been traversed.
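The reverse index traversal described above can be sketched as follows. The dictionary shapes and function names are illustrative assumptions; only the counting procedure follows the text.

```python
from collections import defaultdict

def build_reverse_index(library):
    # library: {image_id: [feature-category id of each keypoint]}
    # Maps each feature category to the set of images that contain
    # at least one keypoint of that category.
    index = defaultdict(set)
    for image_id, categories in library.items():
        for cat in categories:
            index[cat].add(image_id)
    return index

def first_number(index, query_categories, image_id):
    # Start at 0; for each keypoint feature of the image to be matched,
    # add 1 whenever the target image has a mapping from that category.
    count = 0
    for cat in query_categories:
        if image_id in index.get(cat, ()):
            count += 1
    return count
```

Because the index is keyed by category, each query keypoint costs one hash lookup instead of a scan over the whole image library.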
  • the aforementioned key point features include: scale-invariant feature transform (SIFT) key point features.
  • the above method further includes: before determining the color feature similarity between any image in the image library and the image to be matched, acquiring multiple color categories; calculating the color feature value of each pixel in the any image; determining, from the above-mentioned multiple color categories and according to the color feature value of each pixel, the color category to which each pixel in the any image belongs; and then determining the color feature of the any image according to the color categories to which the multiple pixels in the any image belong.
  • the above determining of the color feature similarity between the any image and the image to be matched includes: calculating the color feature value of each pixel in the image to be matched; determining, from the foregoing multiple color categories and according to the color feature value of each pixel, the color category to which each pixel in the image to be matched belongs; and then determining the color feature of the image to be matched according to the color categories to which the multiple pixels in the image to be matched belong.
  • determining the color feature similarity between any image in the image library and the image to be matched also includes: determining the Euclidean distance between the color feature of the any image and the color feature of the image to be matched, and determining the color feature similarity between the any image and the image to be matched according to the Euclidean distance.
  • the above calculation of the color feature value of each pixel in the any image includes: for each pixel in the any image, converting the RGB color-mode value of the pixel to an HSV value, integerizing the HSV value, and calculating the color feature value of the pixel according to the integerized HSV value.
  • the foregoing calculation of the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the RGB color-mode value of the pixel to an HSV value, integerizing the HSV value, and determining the color feature value of the pixel according to the integerized HSV value.
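The RGB-to-HSV conversion and integerization above can be sketched with the standard-library `colorsys` module. The quantization granularity (18 hue bins, 3 saturation bins, 3 value bins, giving 162 color categories) and the distance-to-similarity mapping are assumptions for illustration; the publication does not specify them here.

```python
import colorsys
import math

def pixel_color_value(r, g, b):
    # Convert an RGB pixel (0-255 per channel) to HSV, then integerize:
    # hue into 18 bins, saturation and value into 3 bins each (assumed).
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hq = min(int(h * 18), 17)
    sq = min(int(s * 3), 2)
    vq = min(int(v * 3), 2)
    # Pack the three integerized components into one color feature value
    return hq * 9 + sq * 3 + vq   # 18 * 3 * 3 = 162 possible color categories

def color_similarity(hist_a, hist_b):
    # Euclidean distance between two images' color-category histograms,
    # mapped into a similarity in (0, 1] (mapping assumed).
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(hist_a, hist_b)))
    return 1.0 / (1.0 + d)
```

Identical histograms yield similarity 1.0, and similarity decreases monotonically as the Euclidean distance grows.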
  • determining the fusion similarity between any image in the image library and the image to be matched based on the key point feature similarity and color feature similarity between the any image and the image to be matched includes: setting a first weight and a second weight whose sum is 1; and, according to the first weight and the second weight, computing a weighted sum of the key point feature similarity and the color feature similarity between the any image and the image to be matched to obtain the fusion similarity between the any image and the image to be matched.
  • the above setting of the first weight includes: determining the first weight α through a formula of N_check, L_1 and L_2, where N_check represents the number of key point features of the image to be matched, L_1 represents a first key point feature number threshold, and L_2 represents a second key point feature number threshold.
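The α formula itself is not reproduced in the text above, so the sketch below uses an assumed piecewise ramp between the two thresholds: with few keypoints the color feature dominates, with many keypoints the keypoint feature dominates. The threshold values and the ramp shape are illustrative assumptions, not the patent's formula.

```python
def first_weight(n_check, l1=10, l2=100):
    # Assumed piecewise ramp: alpha = 0 below L_1, 1 above L_2,
    # linear in between. l1/l2 defaults are arbitrary placeholders.
    if n_check <= l1:
        return 0.0
    if n_check >= l2:
        return 1.0
    return (n_check - l1) / float(l2 - l1)

def fusion_similarity(p_sift, p_color, alpha):
    # Weighted sum with first weight alpha and second weight (1 - alpha),
    # so the two weights sum to 1 as the text requires.
    return alpha * p_sift + (1.0 - alpha) * p_color
```

For example, an image to be matched with 55 keypoints under these placeholder thresholds would weight the two similarities equally.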
  • determining whether there is an image in the image library that matches the image to be matched, according to the fusion similarity between each of at least one image in the image library and the image to be matched, includes: sorting the fusion similarities between the images and the image to be matched in descending order to obtain a fusion similarity sequence, and then taking the images whose fusion similarity in the sequence is greater than a fusion similarity threshold as images that match the image to be matched.
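The sort-and-threshold step above can be sketched directly; the container shape is an assumption.

```python
def find_matches(fusion_scores, threshold):
    # fusion_scores: {image_id: fusion similarity with the image to be matched}
    # Sort in descending order of fusion similarity, then keep the images
    # whose fusion similarity exceeds the threshold.
    ranked = sorted(fusion_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [image_id for image_id, score in ranked if score > threshold]
```

An empty result signals that no matching image exists, which is the case where the method adds the image to be matched to the image library.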
  • the above method further includes: when there is no image matching the image to be matched in the image library, adding the image to be matched to the image library.
  • an image matching device including: a first processing module, a second processing module, a third processing module, and a fourth processing module.
  • the first processing module is used to obtain the image to be matched.
  • the second processing module is used to determine the key point feature similarity between any image in the image library and the image to be matched, and to determine the color feature similarity between the any image and the image to be matched.
  • the third processing module is configured to determine the fusion similarity between the any image and the image to be matched according to the determined key point feature similarity and color feature similarity.
  • the fourth processing module is used for determining whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
  • an electronic device including: at least one processor, a memory, and a bus.
  • the bus is used to connect the processor and the memory.
  • the memory is used to store operation instructions.
  • the processor is configured to execute the image matching method of any one of the foregoing embodiments by calling an operation instruction.
  • a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the image matching method of any one of the foregoing embodiments is implemented.
  • a computer program product including executable instructions, which, when executed by a processor, implement the image matching method of any of the foregoing embodiments.
  • with the image matching method, when matching the image to be matched, the key point feature similarity and color feature similarity between any image in the image library and the image to be matched are first determined, and then the fusion similarity between the any image and the image to be matched is determined based on the determined key point feature similarity and color feature similarity. Thus, according to the fusion similarity between each image in the image library and the image to be matched, it is judged whether there is an image in the image library that matches the image to be matched, realizing the technical effect of searching for images (images in the image library) by image (the image to be matched).
  • since the fusion similarity, composed of the key point feature similarity and the color feature similarity, is used for the matching judgment in the image matching process, features of multiple dimensions of the image are comprehensively considered, which can effectively improve matching accuracy. Existing images in the image library that are the same as or similar to the image to be matched can be found quickly and effectively, avoiding repeated addition of the same or similar images and thus avoiding waste of storage space.
  • FIGS. 1A and 1B are schematic diagrams of a system architecture to which the image matching method and device provided by embodiments of the disclosure may be applied.
  • FIG. 2 is a schematic flowchart of an image matching method provided by an embodiment of the disclosure
  • FIG. 3 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of key point feature extraction and key point feature similarity calculation provided by embodiments of the disclosure.
  • FIG. 5 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of color feature extraction and color feature similarity calculation provided by embodiments of the disclosure.
  • FIG. 7 is a schematic structural diagram of an image matching device provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • images from various sources can be stored in the image library.
  • This may happen, for example, when: 1. the maintainer of the image library adds the same or similar images to the image library multiple times; or 2. an image uploaded by a user is the same as or similar to an existing image in the image library. Multiple identical or similar images constitute redundant data in the image library and waste a great deal of storage space.
  • digital paintings, calligraphy, photos and other images are stored in the image library, and electronic devices are used to display the images in the image library for users to view various paintings, calligraphy, photos, etc.
  • the image library can be set on the electronic device side or on the server side. In some cases, the same or similar images are repeatedly added to the image library. Because digitized painting works, calligraphy works, photos, etc. occupy large storage space, a large amount of storage space is wasted.
  • the system architecture 100 may include an electronic device 101, a network 102, and a server 103.
  • the network 102 is a medium used to provide a communication link between the electronic device 101 and the server 103.
  • the electronic device 101 may be, for example, a smart phone, a tablet computer, a smart TV, a portable computer, a desktop computer, etc., which are not limited here.
  • the server 103 can provide various background services for the electronic device 101. For example, receiving a request message from the electronic device 101, performing analysis and processing on the received request message, etc., and feeding back the corresponding response result to the electronic device 101.
  • the image matching method according to the embodiment of the present disclosure may be implemented in the electronic device 101, and accordingly, the image matching apparatus according to the embodiment of the present disclosure may be provided in the electronic device 101.
  • the image matching method according to the embodiment of the present disclosure may also be implemented in the server 103, and accordingly, the image matching apparatus according to the embodiment of the present disclosure may also be set in the server 103.
  • Fig. 1B exemplarily shows the implementation principle of the image matching method and device according to the embodiments of the present disclosure.
  • an image library 104 and a feature library 105 are deployed in the above system architecture.
  • features may be extracted from all images in the image library 104 (for example, digitized painting works, calligraphy works, photos, etc.) and saved in the feature library 105 first.
  • feature extraction is performed on the image 106 to be matched to obtain the feature 107 of the image to be matched.
  • the feature 107 of the image to be matched is matched with each feature in the feature library 105 to calculate the feature similarity between the feature 107 of the image to be matched and each feature in the feature library 105.
  • the feature similarities are sorted, and the images corresponding to features whose feature similarity is higher than a predetermined value are taken as images 108 that match the image 106 to be matched, indicating that they are the same as or similar to the image 106 to be matched.
  • the maintenance personnel of the image library can manually review the above-mentioned filtered images to delete the repeated or substantially repeated images in the image library 104.
  • the embodiment of the present disclosure provides an image matching method. It should be noted that the sequence number of each step in the following method is only used to indicate the step for description, and should not be regarded as indicating the execution order of the various steps. Unless explicitly indicated, the method does not need to be executed exactly in the order shown.
  • the schematic flow chart of the method is shown in FIG. 2, for example, the method may include: step S210 to step S240.
  • Step S210 Obtain the image to be matched.
  • Step S220: For any image in the image library, determine the key point feature similarity between the any image and the image to be matched, and determine the color feature similarity between the any image and the image to be matched.
  • the key points may include various extreme points in the image, such as corner points, edge points, bright points in dark areas, dark points in bright areas, and so on.
  • the key point feature similarity between any image and the image to be matched may include the similarity between the key point feature of the any image and the key point feature of the image to be matched.
  • the color feature similarity between any image and the image to be matched may include the similarity between the color feature of any image and the color feature of the image to be matched.
  • the color characteristics of the image can be determined based on various color models of the image.
  • Step S230 Determine the fusion similarity between any image and the image to be matched based on the determined similarity of the key point feature and the similarity of the color feature.
  • Step S240 Determine whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
  • with the image matching method, when matching the image to be matched, the key point feature similarity and color feature similarity between any image in the image library and the image to be matched are first determined, and then the fusion similarity between the any image and the image to be matched is determined based on the determined key point feature similarity and color feature similarity. Thus, according to the fusion similarity between each image in the image library and the image to be matched, it is judged whether there is an image in the image library that matches the image to be matched, realizing the technical effect of searching for images (images in the image library) by image (the image to be matched).
  • since the fusion similarity, composed of the key point feature similarity and the color feature similarity, is used for the matching judgment in the image matching process, features of multiple dimensions of the image are comprehensively considered, which can effectively improve matching accuracy. Existing images in the image library that are the same as or similar to the image to be matched can be found quickly and effectively, avoiding repeated addition of the same or similar images and thus avoiding waste of storage space.
  • Fig. 3 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure.
  • the image matching method shown in FIG. 3 may further include steps S310 to S350, which determine the key point feature similarity between any image in the image library and the image to be matched in the above step S220.
  • Step S310 Extract key point features of multiple sample images in the preset sample library.
  • the preset sample library is composed of a large number of sample images, and may include the image library.
  • the aforementioned key point feature may be a SIFT (scale-invariant feature transform) key point feature.
  • Step S320 clustering the key point features of the multiple sample images in the preset sample library to obtain multiple feature categories.
  • hierarchical clustering may be performed according to the SIFT features of the sample images in the preset sample library, using HIKM (hierarchical K-means, a cluster center initialization method based on a hierarchical approach).
  • each resulting feature category can be called a HIKM category.
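A minimal hierarchical K-means can be sketched as below: plain Lloyd's K-means applied recursively, with the leaves of the resulting tree serving as the feature (HIKM) categories. This is an illustrative toy on low-dimensional tuples, not the patent's implementation; real SIFT descriptors are 128-dimensional and the branch factor and depth here are arbitrary.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    # Plain Lloyd's k-means over equal-length tuples of floats.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Recompute each center as its cluster mean; keep old center if empty.
        centers = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def hikm(points, branch, depth):
    # Recursively split each cluster into `branch` sub-clusters; the
    # leaves (up to branch**depth of them) act as feature categories.
    if depth == 0 or len(points) <= branch:
        return [points]
    centers = kmeans(points, branch)
    buckets = [[] for _ in range(branch)]
    for p in points:
        j = min(range(branch),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        buckets[j].append(p)
    leaves = []
    for b in buckets:
        if b:
            leaves.extend(hikm(b, branch, depth - 1))
    return leaves
```

The hierarchical split keeps each K-means call small, which is why the scheme scales to the tens of millions of keypoint features mentioned in the embodiment.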
  • Step S330 Extract at least one key point feature of any image in the image library.
  • Step S340 For each key point feature of any image, classify the key point feature to determine the feature category to which the key point feature belongs from the multiple feature categories.
  • HIKM classification is performed on the SIFT key point features of each image in the image library to obtain the HIKM category to which each SIFT key point feature in each image belongs.
  • Step S350 Determine the key point feature distribution information of any image in the image library according to the feature category to which at least one key point feature of any image in the image library belongs.
  • For example, SIFT key point feature extraction is performed on the 70,000 images in the preset sample library 401 (step S310) to obtain 60,000,000 SIFT key points, that is, 60,000,000 SIFT key point features. HIKM clustering is then performed on these 60,000,000 SIFT key point features (step S320) to establish a classification model 402 containing 125,000 feature categories (HIKM categories).
  • Further, SIFT key point feature extraction is performed on each image in the image library 403 (step S330) to obtain M SIFT key points of each image, that is, M SIFT key point features, where M is a positive integer.
  • Depending on the actual situation, the number M of key point features contained in different images in the image library may be the same or different; no restriction is imposed here.
  • All the images in the image library may be a part of the images in the preset sample library, so that the clustering results for the key point features in the preset sample library can cover the classification results for the key point features in the image library.
  • HIKM classification is performed on the M SIFT key point features of each image to determine the feature category to which each of the M SIFT key point features belongs (step S340). For example, among the M SIFT key point features, the feature category of the first key point feature is C_1, the feature category of the second key point feature is C_2, ..., and the feature category of the M-th key point feature is C_M. The feature categories C_1 to C_M all belong to the above 125,000 feature categories, and the number of unique categories among C_1 to C_M is Q, where Q is an integer greater than or equal to 1 and less than or equal to M.
  • The key point feature distribution information of each image can then be obtained from the feature categories C_1 to C_M (step S350). This distribution information can be represented by an M-dimensional vector; if different feature categories are represented by different integers, it can be represented as an M-dimensional integer vector.
  • An inverted index table 404 can also be established from the key point feature distribution information of each image in the image library for subsequent matching, as described in detail below.
  • For example, SIFT key point feature extraction is performed on the 0th image (0.jpeg) to obtain 4 SIFT key points, that is, 4 SIFT key point features. The first key point feature belongs to the first category (denoted "0"), the second key point feature belongs to the second category (denoted "1"), the third key point feature belongs to the third category (denoted "2"), and the fourth key point feature belongs to the fourth category (denoted "3"), so the M-dimensional integer vector of the 0th image is {0, 1, 2, 3}.
  • Similarly, the M-dimensional integer vector of the first image (1.jpeg) is {0, 2, 3, 124999}, where "124999" denotes the 125,000th feature category.
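The M-dimensional integer representation above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes the HIKM classification step has already assigned a category id to each key point, and `feature_distribution` is a hypothetical helper name, not part of the patent.

```python
def feature_distribution(category_ids):
    """Return an image's key point feature distribution (the list of HIKM
    category ids, one per key point) and the number Q of unique categories."""
    return list(category_ids), len(set(category_ids))

# 0.jpeg from the example: 4 key points falling in categories 0..3
dist0, q0 = feature_distribution([0, 1, 2, 3])
# 1.jpeg: its 4th key point falls in the 125,000th category (id 124999)
dist1, q1 = feature_distribution([0, 2, 3, 124999])
```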
  • For example, the above step S220 of determining the key point feature similarity between any image in the image library and the image to be matched may include the following steps.
  • Step S221 Extract at least one key point feature of the image to be matched.
  • the extracted key point feature may be a SIFT key point feature, for example.
  • Step S222 For each key point feature of the image to be matched, the key point feature is classified to determine the feature category to which the key point feature belongs from the multiple feature categories.
  • Step S223 Determine the key point feature distribution information of the image to be matched according to the feature category to which at least one key point feature of the image to be matched belongs.
  • The SIFT features of the image to be matched 405 are extracted (step S221) to obtain N SIFT key points, that is, N SIFT key point features, where N is a positive integer.
  • HIKM classification is performed on the N SIFT key point features respectively to determine the feature category to which the N SIFT key point features belong (step S222).
  • the key point feature distribution information of the image to be matched can be obtained according to the feature category to which the N SIFT key point features belong (step S223).
  • the key point feature distribution information can be represented by an N-dimensional vector.
  • After HIKM classification, each SIFT key point feature in this example is assigned an integer between 0 and 124999, so the key point feature distribution information of the image to be matched can be represented as an N-dimensional integer vector.
  • Step S224 Determine the key point feature similarity between any image and the image to be matched according to the key point feature distribution information of that image in the image library and the key point feature distribution information of the image to be matched.
  • For example, the above step S224 of determining the key point feature similarity according to the key point feature distribution information of any image and that of the image to be matched may include: determining, from the two pieces of distribution information, a first number of key point features that are contained in both the image and the image to be matched and belong to the same feature category; and determining, according to the first number, the key point feature similarity between the image and the image to be matched. In the same way, the key point feature similarity between each image in the image library and the image to be matched can be determined.
  • For example, the SIFT key point feature similarity between the image to be matched and any image in the image library can be determined according to formula (1), where P_sift denotes the SIFT key point feature similarity between the image to be matched and the image; N_same denotes the first number of SIFT key point features that the two images contain in common and that belong to the same feature category (hereinafter referred to as the same key point features contained in the image to be matched and the image); N_check denotes the total number of SIFT key point features contained in the image to be matched; and N_n denotes the total number of SIFT key point features contained in the image in the image library.
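The body of formula (1) is not reproduced in this text, so the exact expression is unknown here. A plausible stand-in that is consistent with the stated quantities, rising with N_same and normalized by the two images' totals, is the Dice-style ratio below; treat the formula itself as an assumption, not the patent's actual formula (1).

```python
def sift_similarity(n_same, n_check, n_n):
    """Assumed stand-in for formula (1): normalize the count of shared
    key point features (n_same) by the totals of the two images."""
    if n_check + n_n == 0:
        return 0.0
    return 2.0 * n_same / (n_check + n_n)

# 30 shared features, 40 in the query image, 60 in the library image
p_sift = sift_similarity(n_same=30, n_check=40, n_n=60)
```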
  • For example, the process of determining the number of the same key point features contained in the image to be matched and each image in the image library may include: comparing the key point feature distribution information of the image to be matched with that of each image in the image library, so as to determine the key point features contained in both images that belong to the same feature category. That is, the N-dimensional integer vector of the image to be matched is compared with the M-dimensional integer vector of each image in the image library to determine the feature categories they share, and the number of occurrences of those shared feature categories is counted to determine the number of the same key point features between the image to be matched and each image in the image library.
  • For example, an inverted index table can be established based on the key point feature distribution information of each image. The index table characterizes the mapping relationship from each of the multiple feature categories obtained by clustering to at least one image in the image library. Using the key point feature distribution information of the image to be matched together with the inverted index table, the key point feature similarity between the image to be matched and each image in the image library can be determined relatively quickly.
  • For example, the inverted index table includes the 125,000 feature categories, numbered with integers from 0 to 124999. Since each integer in an image's M-dimensional integer vector corresponds to one of the 125,000 category numbers in the inverted index table, each image is added to the corresponding feature categories in the table according to its M-dimensional integer vector; for example, the image 1.jpeg, whose M-dimensional integer vector is {0, 2, 3, 124999}, is added to the category numbered "0" of the inverted index table, among others.
  • If the category numbered 0 corresponds to the images 0.jpeg, 1.jpeg, 2.jpeg, and 5.jpeg, this means that these four images all have SIFT key point features belonging to the feature category numbered 0.
  • Establishing the inverted index table improves the speed of image matching and retrieval and shortens the matching and retrieval time.
  • In step S2241, according to the key point feature distribution information of the image to be matched (such as its N-dimensional integer vector) and the inverted index table, the number of the same SIFT key point features contained in the image to be matched and each image in the image library is determined. Then, in step S2242, the SIFT key point feature similarity between the image to be matched and each image in the image library is determined according to that number.
  • For example, determining the first number of key point features that are contained in both any image in the image library and the image to be matched and belong to the same feature category may include: for any image in the image library, setting the initial value of the first number to 0; for each key point feature contained in the image to be matched, looking up in the inverted index table the images that have a mapping relationship with the feature category to which that key point feature belongs, and, when those images include the image in question, adding 1 to the value of the first number; and so on, until all the key point features contained in the image to be matched have been traversed and the final value of the first number is determined. The same applies to the other images in the image library, which will not be repeated here.
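The traversal just described can be sketched as follows. The dictionary-based inverted index and all names here are illustrative; the sketch assumes each image is already represented by its list of category ids.

```python
from collections import defaultdict

def build_inverted_index(library):
    """Map each feature category id to the set of images whose key point
    features fall in that category (library: image name -> category ids)."""
    index = defaultdict(set)
    for name, categories in library.items():
        for c in categories:
            index[c].add(name)
    return index

def count_same_features(query_categories, index, library):
    """For each library image, count the query's key point features whose
    category is mapped to that image in the inverted index (N_same)."""
    n_same = {name: 0 for name in library}      # initial value 0 per image
    for c in query_categories:                  # traverse the query's features
        for name in index.get(c, ()):           # images mapped to category c
            n_same[name] += 1
    return n_same

library = {"0.jpeg": [0, 1, 2, 3], "1.jpeg": [0, 2, 3, 124999]}
index = build_inverted_index(library)
counts = count_same_features([0, 2, 7], index, library)
```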
  • For example, a 15,000-dimensional array stores, for each of the 15,000 images in the image library, the number N_same of the same key point features contained in that image and the image to be matched.
  • the above formula (1) can be used to calculate the SIFT key point feature similarity between each image in the image library and the image to be matched.
  • Using the inverted index table shortens the matching retrieval time; for example, experiments show that the retrieval time over 15,000 images in the image library is only 0.6 ms (milliseconds).
  • Fig. 5 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure.
  • The image matching method shown in FIG. 5 may further include steps S510 to S540.
  • Step S510 Obtain multiple color categories.
  • Step S520 Calculate the color feature value of each pixel in any image.
  • For example, the foregoing process of calculating the color feature value of each pixel in any image may include: for each pixel in the image, converting the pixel's RGB (Red, Green, Blue) color-mode value into an HSV (Hue, Saturation, Value) value; integerizing the HSV value; and calculating the color feature value of the pixel in the image from the integerized HSV value.
  • Step S530 According to the color characteristic value of each pixel in the any image, determine the color category to which each pixel in the any image belongs from the above-mentioned multiple color categories.
  • Step S540 Determine the color feature of any image according to the color category to which the multiple pixels in the any image belong.
  • step S220 determining the color feature similarity between any image and the image to be matched may include the following steps.
  • Step S225 Calculate the color feature value of each pixel in the image to be matched.
  • For example, calculating the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the pixel's RGB color-mode value into an HSV value, integerizing the HSV value, and determining the color feature value of the pixel from the integerized HSV value.
  • Step S226 According to the color characteristic value of each pixel in the image to be matched, the color category to which each pixel in the image to be matched belongs is determined from the foregoing multiple color categories.
  • Step S227 Determine the color feature of the image to be matched according to the color category to which the multiple pixels in the image to be matched belong.
  • Step S228 Determine the color feature similarity between any image in the image library and the image to be matched according to the color feature of that image and the color feature of the image to be matched.
  • For example, the above step S228 of determining the color feature similarity between each image in the image library and the image to be matched according to their color features may include: determining the Euclidean distance between the color feature of any image in the image library and the color feature of the image to be matched, and then determining the color feature similarity between the two according to that Euclidean distance.
  • step S610 the RGB value of each pixel in each image in the image library 601 is converted into an HSV value, and then in step S620, according to the HSV of each pixel in each image The value determines the HSV color characteristics of the image.
  • An HSV color feature library 602 can be established from the HSV color feature of at least one image in the image library.
  • step S630 the RGB value of the image to be matched 603 is converted into an HSV value, and then in step S640, the HSV color feature of the image to be matched is determined according to the HSV value of each pixel in the image to be matched.
  • step S650 a similarity calculation is performed based on the HSV color feature of the image to be matched and the HSV color feature of each image in the HSV feature library to obtain the HSV color feature similarity 604 between each image in the image library and the image to be matched.
  • Color is one of the most direct and important features for distinguishing images, and among color features the HSV color feature is close to human visual perception. Calculating color feature similarity based on the HSV color feature can therefore more accurately reflect the similarity of the color distributions of two images.
  • The RGB value is converted to an HSV value using the standard RGB-to-HSV conversion formulas, where R, G, and B are the red, green, and blue coordinate values of a color, each a real number between 0 and 1.
  • The HSV value is then integerized. For example, the H space can be divided into 8 parts, the S space into 3 parts, and the V space into 3 parts. The resulting color feature value Z has 72 value spaces in total, and these 72 value spaces can correspond to the multiple color categories mentioned above, that is, 72 color categories can be obtained.
  • The color category to which each pixel belongs among the 72 color categories is then determined; for example, when the color feature value of a pixel is 5, the pixel is determined to belong to the fifth color category. For an image with W pixels in total, it can be determined that the number of pixels belonging to the first color category is W_1, the number belonging to the second color category is W_2, ..., and the number belonging to the 72nd color category is W_72.
  • The HSV color feature of the image can then be: {W_1/W, W_2/W, ..., W_72/W}.
  • The HSV color feature similarity P_color can be calculated using the normalized Euclidean distance between Z_i, the HSV color feature of the i-th image in the image library, and Z_check, the HSV color feature of the image to be matched.
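The 72-bin HSV color feature and a distance-based similarity can be sketched as below. Two points are assumptions, since the patent's conversion and distance formulas are not reproduced in this text: the bin index Z = 9*H + 3*S + V (H split into 8 parts, S and V into 3 each), and the mapping from distance to similarity as P_color = 1 - d.

```python
import colorsys
import math

def color_feature(pixels):
    """72-bin HSV histogram normalized by pixel count, i.e.
    {W_1/W, ..., W_72/W}. Bin index 9*hq + 3*sq + vq is an assumption."""
    hist = [0] * 72
    for r, g, b in pixels:                 # r, g, b are floats in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hq = min(int(h * 8), 7)            # H space divided into 8 parts
        sq = min(int(s * 3), 2)            # S space divided into 3 parts
        vq = min(int(v * 3), 2)            # V space divided into 3 parts
        hist[9 * hq + 3 * sq + vq] += 1
    n = len(pixels)
    return [w / n for w in hist]

def color_similarity(z_i, z_check):
    """Assumed form: 1 minus the Euclidean distance between the two
    normalized histograms (identical color features give similarity 1)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(z_i, z_check)))
    return 1.0 - d

feature = color_feature([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)])  # two red pixels
```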
  • To sum up, the foregoing embodiments determine the scale-invariant feature transform (SIFT) key point feature similarity and the HSV color feature similarity between each image in the image library and the image to be matched based on the features of the image to be matched and the features of each image in the image library. Then, for each image in the image library, the SIFT key point feature similarity and the HSV color feature similarity are fused to obtain the fusion similarity between the image to be matched and that image.
  • For example, the first weight α can be determined by formula (6), where N_check denotes the number of key point features of the image to be matched, L_1 denotes the first threshold number of key point features, and L_2 denotes the second threshold number of key point features.
  • the key point feature similarity and the color feature similarity are fused to obtain the fusion similarity.
  • The fusion similarity is calculated as in formula (7): P_final = α·P_sift + β·P_color, where α + β = 1, and α and β are respectively the first weight and the second weight of the two similarities in the fusion similarity.
  • The color feature can solve the problem of low retrieval accuracy for images with simple textures, and texture complexity is characterized by the number of key point features.
  • For example, the number of key points extracted from an image with complex texture will be more than 600, and the number extracted from an image with simple texture will be less than 400. For 400 ≤ N_check ≤ 600, the larger N_check is, the larger α is. When L_1 is 400 and L_2 is 600, the calculation of α in formula (6) takes the form shown in formula (8).
  • This makes P_final both efficient and highly accurate for retrieving images with simple textures, and allows it to adjust adaptively between images with complex textures and images with simple textures, giving P_final adaptive performance and extending its scope of application and application scenarios.
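Formula (8) itself is not reproduced in this text. One piecewise-linear form consistent with the description, where α grows with N_check between L_1 = 400 and L_2 = 600 so that color dominates for simple textures and SIFT for complex ones, might look like the sketch below; both the piecewise shape and the endpoint values are assumptions.

```python
def first_weight(n_check, l1=400, l2=600):
    """Assumed form of formula (8): weight of the SIFT similarity grows
    linearly with the number of key points between the two thresholds."""
    if n_check <= l1:
        return 0.0
    if n_check >= l2:
        return 1.0
    return (n_check - l1) / (l2 - l1)

def fusion_similarity(p_sift, p_color, n_check):
    """Formula (7): weighted sum of the two similarities, alpha + beta = 1."""
    alpha = first_weight(n_check)
    beta = 1.0 - alpha
    return alpha * p_sift + beta * p_color
```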
  • According to the respective fusion similarities between the image to be matched and each image in the image library, it can be determined whether the image library contains an image that matches the image to be matched.
  • For example, the fusion similarities between the images in the image library and the image to be matched may be sorted in descending order to obtain a fusion similarity sequence, which is then compared with a fusion similarity threshold. If the sequence contains a fusion similarity greater than the threshold, it is determined that the image library contains an image matching the image to be matched; specifically, the image whose fusion similarity exceeds the threshold is taken as the matching image. Further, when the image library contains no image matching the image to be matched, the image to be matched can be stored in the image library; when the image library contains multiple images matching the image to be matched, the duplicate images in the image library can be deleted.
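The sort-and-threshold decision can be sketched as follows (all names are illustrative):

```python
def find_matches(fusion_scores, threshold):
    """Sort images by fusion similarity in descending order and keep those
    above the threshold (fusion_scores: image name -> similarity)."""
    ranked = sorted(fusion_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, score in ranked if score > threshold]

scores = {"0.jpeg": 0.91, "1.jpeg": 0.42, "2.jpeg": 0.87}
matches = find_matches(scores, threshold=0.85)
# an empty result means the image to be matched can be added to the library
```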
  • In this way, the scale-invariant feature transform (SIFT) key point feature similarity and the HSV color feature similarity between each image in the image library and the image to be matched are determined, and the two similarities are fused to obtain the fusion similarity between the image to be matched and each image in the image library.
  • the embodiments of the present disclosure also provide an image matching device.
  • The structure of the image matching device is shown in FIG. 7. The image matching device 700 includes a first processing module 701, a second processing module 702, a third processing module 703, and a fourth processing module 704.
  • the first processing module 701 is used to obtain an image to be matched.
  • The second processing module 702 is configured to determine, for any image in the image library, the key point feature similarity between that image and the image to be matched, and to determine the color feature similarity between that image and the image to be matched.
  • The third processing module 703 is configured to determine the fusion similarity between that image and the image to be matched according to the determined key point feature similarity and color feature similarity.
  • the fourth processing module 704 is configured to determine whether there is an image matching the image to be matched in the image library according to the fusion similarity between each of at least one image in the image library and the image to be matched.
  • For details of the image matching device provided by the embodiments of the present disclosure, please refer to the foregoing image matching method.
  • the image matching device provided in the embodiment of the present disclosure can achieve the same beneficial effects as the foregoing image matching method, and will not be repeated here.
  • the embodiments of the present disclosure also provide an electronic device.
  • the structural diagram of the electronic device is shown in FIG. 8.
  • The electronic device 800 includes at least one processor 801, a memory 802, and a bus 803. The at least one processor 801 is electrically connected to the memory 802 via the bus 803. The memory 802 is configured to store at least one computer-executable instruction, and the processor 801 is configured to execute the at least one computer-executable instruction so as to perform the steps of any image matching method provided in any embodiment or any optional implementation of the present disclosure.
  • For example, the processor 801 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capability, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
  • an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the image matching method provided in the first embodiment of the present disclosure is implemented.
  • Computer-readable media include, but are not limited to, any type of disk (including floppy disks, hard disks, CD-ROMs, and magneto-optical disks), ROM, RAM, EPROM (Erasable Programmable Read-Only Memory), EEPROM, flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).
  • the computer-readable storage medium provided in this embodiment has the same inventive concept and the same beneficial effects as the foregoing embodiments, and will not be repeated here.


Abstract

An image matching method, apparatus, device, and computer-readable storage medium. The method includes: acquiring an image to be matched (S210); for any image in an image library, determining a key point feature similarity between the image and the image to be matched, and determining a color feature similarity between the image and the image to be matched (S220), so as to determine, according to the determined key point feature similarity and color feature similarity, a fusion similarity between the image and the image to be matched (S230); and then determining, according to the respective fusion similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library (S240).

Description

Image matching method, apparatus, device, medium and program product
The present disclosure claims priority to the Chinese patent application No. 201910457984.9 filed on May 29, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of computer vision, and in particular to an image matching method, apparatus, device, computer-readable storage medium and program product.
Background Art
In scenarios involving image storage, images from various sources may be stored in an image library. For example, the following two situations waste the storage space of the image library: 1. maintainers of the image library add identical or similar images to it multiple times; 2. an image uploaded by a user is identical or similar to an image already in the library. Multiple duplicate or similar images act as redundant data in the image library and waste a large amount of storage space.
Summary
The present application proposes an image matching method, apparatus, device, computer-readable storage medium and program product.
According to one aspect of the present disclosure, an image matching method is provided, including: acquiring an image to be matched; for any image in an image library, determining a key point feature similarity between the image and the image to be matched, and determining a color feature similarity between the image and the image to be matched, so as to determine, according to the determined key point feature similarity and color feature similarity, a fusion similarity between the image and the image to be matched; and then determining, according to the respective fusion similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
For example, the above method further includes: before the key point feature similarity between any image in the image library and the image to be matched is determined, extracting key point features of multiple sample images in a preset sample library, the preset sample library including the image library; then clustering the key point features of the multiple sample images in the preset sample library to obtain multiple feature categories; next, extracting at least one key point feature of any image in the image library; for each key point feature of the image, classifying the key point feature to determine, from the multiple feature categories, the feature category to which it belongs; and determining the key point feature distribution information of the image according to the feature categories to which its at least one key point feature respectively belongs.
For example, clustering the key point features of the multiple sample images in the preset sample library includes: clustering the key point features of the multiple sample images using a cluster-center initialization method based on a hierarchical method.
For example, determining the key point feature similarity between any image in the image library and the image to be matched includes: extracting at least one key point feature of the image to be matched; for each key point feature of the image to be matched, classifying the key point feature to determine, from the multiple feature categories, the feature category to which it belongs; determining the key point feature distribution information of the image to be matched according to the feature categories to which its at least one key point feature respectively belongs; and then determining the key point feature similarity between the image and the image to be matched according to the key point feature distribution information of the image and that of the image to be matched.
For example, determining the key point feature similarity between any image and the image to be matched according to the key point feature distribution information of the image and that of the image to be matched includes: determining, from the two pieces of distribution information, a first number of key point features that are contained in both the image and the image to be matched and belong to the same feature category, and determining, according to the first number, the key point feature similarity between the image and the image to be matched.
For example, the first number is N_same, the image includes a second number N_n of key point features, and the image to be matched includes a third number N_check of key point features, where N_same, N_n, and N_check are all integers greater than or equal to 0. Determining the key point feature similarity between the image and the image to be matched according to the first number includes: determining the key point feature similarity P_sift between the image and the image to be matched according to formula (1).
For example, the above method further includes: before the key point feature similarity between any image and the image to be matched is determined, establishing an inverted index table according to the key point feature distribution information of at least one image in the image library, the inverted index table characterizing a mapping relationship from each of the multiple feature categories to at least one image in the image library. Determining the first number of key point features contained in both the image and the image to be matched and belonging to the same feature category includes: setting the initial value of the first number to 0; for each key point feature contained in the image to be matched, looking up in the inverted index table the images that have a mapping relationship with the feature category to which that key point feature belongs, and adding 1 to the value of the first number when those images include the image in question; and determining the value of the first number after traversing the key point features contained in the image to be matched.
For example, the above key point features include scale-invariant feature transform (SIFT) key point features.
For example, the above method further includes: before the color feature similarity between any image in the image library and the image to be matched is determined, acquiring multiple color categories; calculating the color feature value of each pixel in the image, and determining, from the multiple color categories, the color category to which each pixel in the image belongs according to its color feature value; and then determining the color feature of the image according to the color categories to which its pixels respectively belong.
For example, determining the color feature similarity between any image and the image to be matched includes: calculating the color feature value of each pixel in the image to be matched; determining, from the multiple color categories, the color category to which each pixel in the image to be matched belongs according to its color feature value; and then determining the color feature of the image to be matched according to the color categories to which its pixels respectively belong.
For example, determining the color feature similarity between any image in the image library and the image to be matched further includes: determining the Euclidean distance between the color feature of the image and the color feature of the image to be matched, and determining the color feature similarity between the two according to that Euclidean distance.
For example, calculating the color feature value of each pixel in any image includes: for each pixel in the image, converting the pixel's RGB color-mode value into an HSV value, integerizing the HSV value, and calculating the color feature value of the pixel from the integerized HSV value. Likewise, calculating the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the pixel's RGB color-mode value into an HSV value, integerizing the HSV value, and determining the color feature value of the pixel from the integerized HSV value.
For example, determining the fusion similarity between any image in the image library and the image to be matched according to the key point feature similarity and the color feature similarity between them includes: setting a first weight and a second weight whose sum is 1, and performing, according to the first weight and the second weight, a weighted summation of the key point feature similarity between the image and the image to be matched and the color feature similarity between the image and the image to be matched, to obtain the fusion similarity between the two.
For example, setting the first weight includes determining the first weight α by formula (6), where N_check denotes the number of key point features of the image to be matched, L_1 denotes the first threshold number of key point features, and L_2 denotes the second threshold number of key point features.
For example, determining, according to the respective fusion similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library includes: sorting the fusion similarities between the at least one image in the image library and the image to be matched in descending order to obtain a fusion similarity sequence, and then taking the images in the sequence whose fusion similarity is greater than a fusion similarity threshold as the images matching the image to be matched.
For example, the above method further includes: when no image matching the image to be matched exists in the image library, adding the image to be matched to the image library.
According to another aspect of the present disclosure, an image matching apparatus is provided, including a first processing module, a second processing module, a third processing module, and a fourth processing module. The first processing module is configured to acquire an image to be matched. The second processing module is configured to determine, for any image in the image library, the key point feature similarity between the image and the image to be matched, and to determine the color feature similarity between the image and the image to be matched. The third processing module is configured to determine the fusion similarity between the image and the image to be matched according to the determined key point feature similarity and color feature similarity. The fourth processing module is configured to determine, according to the respective fusion similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
According to another aspect of the present disclosure, an electronic device is provided, including at least one processor, a memory, and a bus. The bus connects the processor and the memory, the memory is configured to store operation instructions, and the processor is configured to execute the image matching method of any of the above embodiments by invoking the operation instructions.
According to another aspect of the present disclosure, a computer-readable storage medium storing a computer program is provided; when the computer program is executed by a processor, the image matching method of any of the above embodiments is implemented.
According to another aspect of the present disclosure, a computer program product is provided, including executable instructions that, when executed by a processor, implement the image matching method of any of the above embodiments.
The technical solutions provided by the embodiments of the present disclosure have at least the following beneficial effects:
When matching an image to be matched, the image matching method according to the embodiments of the present disclosure first determines the key point feature similarity and the color feature similarity between any image in the image library and the image to be matched, and then determines, based on these similarities, the fusion similarity between the image and the image to be matched. Whether the image library contains an image matching the image to be matched is then judged from the fusion similarities between the images in the library and the image to be matched, achieving the technical effect of searching for images (in the image library) by image (the image to be matched). Because a fusion similarity combining key point feature similarity and color feature similarity is used for the matching decision, features of multiple dimensions of the image are considered comprehensively, which can effectively improve matching accuracy. Images already in the library that are identical or similar to the image to be matched can be found quickly and effectively, avoiding repeated addition of identical or similar images and thus avoiding wasted storage space.
Additional aspects and advantages of the present disclosure will be given in part in the following description, will become apparent from it, or will be learned through practice of the present disclosure.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
FIGS. 1A-1B are schematic diagrams of a system architecture to which the image matching method and apparatus of the embodiments of the present disclosure can be applied;
FIG. 2 is a schematic flowchart of an image matching method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of key point feature extraction and key point feature similarity calculation provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of an image matching method provided by another embodiment of the present disclosure;
FIG. 6 is a schematic diagram of color feature extraction and color feature similarity calculation provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image matching apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in detail below; examples of embodiments of the present disclosure are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar components, or components with the same or similar functions, throughout. In addition, detailed descriptions of known technologies are omitted if they are unnecessary for the features of the present disclosure being shown. The embodiments described below with reference to the drawings are exemplary, are only used to explain the present disclosure, and should not be construed as limiting it.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meanings as commonly understood by those of ordinary skill in the art to which the present disclosure belongs. It should also be understood that terms such as those defined in general dictionaries should be understood as having meanings consistent with their meanings in the context of the prior art and, unless specifically defined as herein, should not be interpreted with idealized or overly formal meanings.
Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "said" and "the" used herein may also include plural forms. It should be further understood that the word "comprising" used in the specification of the present disclosure refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The term "and/or" as used herein includes all or any units and all combinations of one or more of the associated listed items.
In scenarios involving image storage, images from various sources may be stored in an image library. For example, the following two situations waste the storage space of the image library: 1. maintainers of the image library add identical or similar images to it multiple times; 2. an image uploaded by a user is identical or similar to an image already in the library. Multiple identical or similar images act as redundant data in the image library and waste a large amount of storage space.
For example, digitized paintings, calligraphy works, photos and other images are stored in the image library, and an electronic device displays the images in the library for users to appreciate. The image library may reside on the electronic device or on a server. In some cases identical or similar images are repeatedly added to the library; since digitized paintings, calligraphy works, photos and the like occupy considerable storage space, this wastes a large amount of storage space.
First, the system architecture to which the image matching method and apparatus of the embodiments of the present disclosure can be applied is introduced by way of example.
As shown in FIG. 1A, a system architecture 100 according to an embodiment of the present disclosure may include an electronic device 101, a network 102, and a server 103. The network 102 is the medium providing a communication link between the electronic device 101 and the server 103. The electronic device 101 may be, for example, a smartphone, a tablet computer, a smart TV, a portable computer, or a desktop computer, which is not limited here. The server 103 may provide various background services for the electronic device 101, for example receiving a request message from the electronic device 101, analyzing and otherwise responding to the received request message, and feeding the corresponding response result back to the electronic device 101.
The image matching method according to the embodiments of the present disclosure may be implemented in the electronic device 101, and accordingly the image matching apparatus according to the embodiments of the present disclosure may be arranged in the electronic device 101. Alternatively, the image matching method may be implemented in the server 103, and accordingly the image matching apparatus may be arranged in the server 103.
FIG. 1B illustrates by way of example the implementation principle of the image matching method and apparatus according to the embodiments of the present disclosure. As shown in FIG. 1B, an image library 104 and a feature library 105 are deployed in the above system architecture. Illustratively, features may first be extracted from all images in the image library 104 (for example digitized paintings, calligraphy works, photos, etc.) and saved to the feature library 105. After an image to be matched 106 is acquired, feature extraction is performed on it to obtain the features 107 of the image to be matched. The features 107 of the image to be matched are matched against the features in the feature library 105 to compute the feature similarity between the features 107 and each feature in the feature library 105. The feature similarities are sorted, and the images corresponding to features whose similarity is above a predetermined value are taken as images 108 matching the image to be matched 106, indicating that those images are identical or similar to the image to be matched 106. Maintainers of the image library can then manually review the images thus screened out to delete duplicate or substantially duplicate images from the image library 104.
The technical solution of the present disclosure and how it solves the above technical problem are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the accompanying drawings.
An embodiment of the present disclosure provides an image matching method. It should be noted that the sequence numbers of the steps in the following method serve only as labels of the steps for ease of description and should not be regarded as indicating their execution order; unless explicitly stated, the method need not be executed exactly in the order shown. A schematic flowchart of the method is shown in FIG. 2; illustratively, the method may include steps S210 to S240.
步骤S210,获取待匹配图像。
步骤S220,针对图像库中的任一图像,确定该任一图像与待匹配图像之间的关键点(key point)特征相似度,并确定该任一图像与待匹配图像之间的颜色特征相似度。
示例性地,关键点可以包括图像中的各种极值点,例如角点(corner point)、边缘点、暗区的亮点、亮区的暗点等等。任一图像与待匹配图像之间的关键点特征相似度可以包括该任一图像的关键点特征与待匹配图像的关键点特征之间的相似度。任一图像与待匹配图像之间的颜色特征相似度可以包括该任一图像的颜色特征与待匹配图像的颜色特征之间的相似度。图像的颜色特征可以基于图像的各种颜色模型来确定。
步骤S230,根据上述确定的关键点特征相似度和颜色特征相似度,确定该任一图像与待匹配图像之间的融合相似度。
步骤S240,根据图像库中的至少一个图像各自与待匹配图像之间的融合相似度,确定图像库中是否存在与待匹配图像相匹配的图像。
本领域技术人员可以理解,根据本公开实施例的图像匹配方法在针对待匹配图像进行匹配时,先确定图像库中任一图像与待匹配图像之间的关键点特征相似度和颜色特征相似度,再基于上述确定的关键点特征相似度和颜色特征相似度,确定该任一图像与待匹配图像之间的融合相似度。从而根据图像库中各图像与待匹配图像之间融合相似度来判断图像库中是否存在与待匹配图像相匹配的图像,实现了以图(待匹配图像)搜图(图像库中的图像)的技术效果。由于在图像匹配过程中选用由关键点特征相似度和颜色特 征相似度综合构成的融合相似度进行匹配判定,综合考虑了图像中多种维度的特征,能够有效提高匹配准确性。可以迅速有效地查找出图像库中已有的与待匹配图像相同或近似的图像,避免相同或近似图像的重复添加,进而避免存储空间的浪费。
FIG. 3 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure. Compared with the image matching method shown in FIG. 2, before determining the key point feature similarity between any image in the image library and the image to be matched in step S220, the image matching method shown in FIG. 3 may further include steps S310 to S350.
Step S310: extracting key point features of a plurality of sample images in a preset sample library.
The preset sample library consists of a large number of sample images, and the preset sample library may include the image library.
For example, the above key point features may be SIFT (Scale-Invariant Feature Transform) key point features.
Step S320: clustering the key point features of the plurality of sample images in the preset sample library to obtain a plurality of feature categories.
For example, hierarchical clustering may be performed according to the SIFT features of the sample images in the preset sample library. For instance, HIKM (hierarchical clustering K-means) may be used for clustering to obtain a plurality of feature categories; in this example, each feature category may be called an HIKM class.
Step S330: extracting at least one key point feature of any image in the image library.
Step S340: for each key point feature of this image, classifying the key point feature so as to determine, from the above plurality of feature categories, the feature category to which the key point feature belongs.
For example, according to the obtained HIKM classes, HIKM classification is performed on the SIFT key point features of each image in the image library to obtain the HIKM class to which each SIFT key point feature of each image belongs.
Step S350: determining key point feature distribution information of this image according to the feature categories to which the at least one key point feature of this image respectively belongs.
The above process of determining the key point feature distribution information of each image in the image library is exemplified below with reference to FIG. 4. As shown in FIG. 4, for example, SIFT key point feature extraction is performed on 70,000 images in a preset sample library 401 (step S310), obtaining 60,000,000 SIFT key points, i.e., 60,000,000 SIFT key point features. HIKM clustering is performed on these 60,000,000 SIFT key point features (step S320) to build a classification model 402 containing 125,000 feature categories (HIKM classes).
Further, SIFT key point feature extraction is performed on each image in an image library 403 (step S330), obtaining M SIFT key points of each image, i.e., M SIFT key point features, where M is a positive integer. Depending on the actual situation, different images in the image library may contain the same or different numbers of key point features, which is not limited here. All images in the image library may be a subset of the images in the preset sample library, so that the clustering result of the key point features of the preset sample library can cover the classification result of the key point features of the image library. According to the classification model containing 125,000 feature categories built above, HIKM classification is performed on the M SIFT key point features of each image to determine the feature category to which each of the M SIFT key point features belongs (step S340). For example, among the M SIFT key point features, the 1st key point feature belongs to feature category C_1, the 2nd to C_2, ..., and the M-th to C_M. The feature categories C_1 to C_M all belong to the above 125,000 feature categories, and the number of distinct categories among C_1 to C_M is Q, where Q is an integer greater than or equal to 1 and less than or equal to M. The key point feature distribution information of each image can be obtained from the feature categories C_1 to C_M (step S350); for example, this key point feature distribution information may be represented by an M-dimensional vector. If different feature categories are represented by different integers, the key point feature distribution information can be represented as an M-dimensional integer vector. According to an embodiment of the present disclosure, an inverted index table 404 may also be built according to the key point feature distribution information of the images in the image library for use in subsequent matching, which will be described in detail below.
For example, the image library contains 15,000 images. SIFT key point feature extraction is performed on the 0th image (0.jpeg), obtaining 4 SIFT key points, i.e., 4 SIFT key point features. HIKM classification is performed on the 4 SIFT key point features of the 0th image to determine the feature category to which each of them belongs. If the 1st key point feature belongs to the 1st category (denoted "0"), the 2nd to the 2nd category (denoted "1"), the 3rd to the 3rd category (denoted "2") and the 4th to the 4th category (denoted "3"), then the M-dimensional integer vector of the 0th image is {0, 1, 2, 3}. Similarly, the M-dimensional integer vector of the 1st image (1.jpeg) is {0, 2, 3, 124999}, where "124999" denotes the 125,000th feature category. By analogy, the key point feature distribution information of each image in the image library can be determined.
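The per-image distribution step described above can be sketched as follows. This is a minimal illustration only: a plain nearest-centroid lookup stands in for the trained HIKM classification model, and the 2-D "descriptors", centroid values and image contents are invented toy values (real SIFT descriptors are 128-dimensional).

```python
# Sketch: map each image's key point descriptors to feature-category IDs,
# producing the M-dimensional integer vector described in the text.
# The HIKM model is approximated by a nearest-centroid lookup.

def classify(descriptor, centroids):
    """Return the index of the nearest centroid (the feature category)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: sqdist(descriptor, centroids[i]))

def distribution_info(descriptors, centroids):
    """M-dimensional integer vector: one category ID per key point feature."""
    return [classify(d, centroids) for d in descriptors]

# Toy 2-D "descriptors" and 4 assumed cluster centers.
centroids = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
image_0 = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.8, 0.9)]
print(distribution_info(image_0, centroids))  # [0, 1, 2, 3], like {0, 1, 2, 3} above
```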
According to an embodiment of the present disclosure, as shown in FIG. 3, determining the key point feature similarity between any image in the image library and the image to be matched in step S220 may include the following steps.
Step S221: extracting at least one key point feature of the image to be matched.
For example, the extracted key point features may be SIFT key point features.
Step S222: for each key point feature of the image to be matched, classifying the key point feature so as to determine, from the above plurality of feature categories, the feature category to which the key point feature belongs.
Step S223: determining key point feature distribution information of the image to be matched according to the feature categories to which the at least one key point feature of the image to be matched respectively belongs.
Steps S222 to S223 may be carried out in the same manner as steps S340 to S350 above, and the repeated parts are not described again.
Continuing to refer to FIG. 4, for example, SIFT features are extracted from an image to be matched 405 (step S221), obtaining N SIFT key points, i.e., N SIFT key point features, where N is a positive integer. Using the classification model containing 125,000 feature categories built above, HIKM classification is performed on the N SIFT key point features to determine the feature category to which each of them belongs (step S222). The key point feature distribution information of the image to be matched can be obtained from the feature categories to which the N SIFT key point features respectively belong (step S223); for example, this distribution information may be represented by an N-dimensional vector. If different feature categories are represented by different integers, each SIFT key point feature in this example yields an integer between 0 and 124999 after HIKM classification, so the key point feature distribution information of the image to be matched can be represented as an N-dimensional integer vector.
Step S224: determining the key point feature similarity between any image in the image library and the image to be matched according to the key point feature distribution information of this image and the key point feature distribution information of the image to be matched.
For example, step S224 may include: determining, according to the key point feature distribution information of this image and that of the image to be matched, a first number of key point features that are contained in both this image and the image to be matched and belong to the same feature categories, and determining the key point feature similarity between this image and the image to be matched according to the first number.
For example, for any image in the image library, the first number of SIFT key point features contained in both this image and the image to be matched and belonging to the same feature categories is determined according to the two pieces of key point feature distribution information; the key point feature similarity between this image and the image to be matched is then determined according to the first number. In the same way, the key point feature similarity between each image in the image library and the image to be matched can be determined.
In one example, the SIFT key point feature similarity between the image to be matched and any image in the image library may be determined according to the following formula (1):
P_sift = 2 × N_same / (N_check + N_n)        Formula (1)
In formula (1), P_sift denotes the SIFT key point feature similarity between the image to be matched and any image in the image library; N_same denotes the first number of SIFT key point features, belonging to the same feature categories, contained in both the image to be matched and this image (hereinafter referred to as the identical key point features contained in the image to be matched and this image); N_check denotes the total number of SIFT key point features contained in the image to be matched; and N_n denotes the total number of SIFT key point features contained in this image in the image library.
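A small sketch of formula (1). Note that the original formula appears only as an image placeholder in this text, so the Dice-style ratio used here is an assumption inferred from the definitions of N_same, N_check and N_n:

```python
def p_sift(n_same, n_check, n_n):
    """Key point feature similarity: shared same-category features over the
    two feature totals (assumed Dice-style form of formula (1))."""
    if n_check + n_n == 0:
        return 0.0  # neither image has key point features
    return 2.0 * n_same / (n_check + n_n)

# An image pair sharing 300 same-category key points, with 400 and 500 in total.
print(p_sift(300, 400, 500))  # 2 * 300 / 900
```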
According to an embodiment of the present disclosure, the process of determining the number of identical key point features contained in the image to be matched and each image in the image library may include: comparing the key point feature distribution information of the image to be matched with the key point feature distribution information of each image in the image library, thereby determining the key point features that are contained in both and belong to the same feature categories. For example, the N-dimensional integer vector of the image to be matched is compared with the M-dimensional integer vector of each image in the image library to determine the feature categories shared by the image to be matched and each image in the library; the number of identical key point features between the image to be matched and each image in the library is determined according to the number of occurrences of the feature categories they share.
To further improve image matching efficiency, according to an embodiment of the present disclosure, after the key point feature distribution information of each image in the image library is determined, an inverted index table may be built based on the key point feature distribution information of the images. The inverted index table characterizes the mapping relationship from each of the plurality of feature categories obtained by clustering to at least one image in the image library. According to the key point feature distribution information of the image to be matched and the inverted index table, the key point feature similarity between the image to be matched and each image in the image library can be determined relatively quickly.
In the example shown in FIG. 4, clustering yields 125,000 feature categories, and an inverted index table (feature library) 404 is built. The inverted index table includes the 125,000 feature categories; for example, the 125,000 feature categories in the table are numbered with integers from 0 to 124999. After the M-dimensional integer vector of each image in the library is determined, each integer in the vector corresponds to one of the 125,000 category numbers in the inverted index table, and each image is added under the corresponding feature categories in the table according to its M-dimensional integer vector. For example, the image 1.jpeg, whose M-dimensional integer vector is {0, 2, 3, 124999}, is added under the numbers "0", "2", "3" and "124999" of the inverted index table, so that each category number corresponds to one or more images in the image library. For example, the category numbered 0 corresponds to images 0.jpeg, 1.jpeg, 2.jpeg and 5.jpeg, meaning that each of 0.jpeg, 1.jpeg, 2.jpeg and 5.jpeg has a SIFT key point feature belonging to the feature category numbered 0. Building the inverted index table increases the matching retrieval speed of images and shortens the matching retrieval time.
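The inverted index construction described above can be sketched as a mapping from category ID to image list. The image names and category IDs below are the toy values from the example:

```python
from collections import defaultdict

def build_inverted_index(library):
    """library: {image_name: list of feature-category IDs (the M-dim vector)}.
    Returns {category_id: sorted list of image names containing that category}."""
    index = defaultdict(set)
    for name, categories in library.items():
        for cat in categories:
            index[cat].add(name)
    return {cat: sorted(names) for cat, names in index.items()}

library = {
    "0.jpeg": [0, 1, 2, 3],
    "1.jpeg": [0, 2, 3, 124999],
}
index = build_inverted_index(library)
print(index[0])  # ['0.jpeg', '1.jpeg'] -- both images have a key point in category 0
```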
For example, as shown in FIG. 4, in step S2241, the number of identical SIFT key point features contained in the image to be matched and each image in the image library is determined according to the key point feature distribution information of the image to be matched (e.g., its N-dimensional integer vector) and the inverted index table. Then, in step S2242, the SIFT key point feature similarity between the image to be matched and each image in the image library is determined according to the number of identical SIFT key point features they contain.
According to an embodiment of the present disclosure, on the basis of the inverted index table, determining the first number of key point features contained in both any image in the image library and the image to be matched and belonging to the same feature categories may include: for any image in the library, setting the initial value of the first number of identical key point features contained in this image and the image to be matched to 0; for each key point feature contained in the image to be matched, looking up in the inverted index table the images having a mapping relationship with the feature category to which this key point feature belongs, and when the images having such a mapping relationship include this library image, adding 1 to the value of the first number; and so on, until all key point features contained in the image to be matched have been traversed and the final value of the first number is determined. The same applies to the other images in the image library, which is not repeated here.
Continuing to refer to FIG. 4, for example, during image matching, suppose there are 15,000 images in the image library. First, a 15,000-dimensional array is created to store the first number of identical SIFT key point features contained in each image and the image to be matched, initialized to 0. If 900 key point features are extracted from the image to be matched, then the feature categories to which these 900 key point features belong are looked up in the inverted index table one by one; each time an image appears in a looked-up feature category, 1 is added at the position of the 15,000-dimensional array corresponding to that image. After the feature categories of the 900 key point features have all been queried, the array holds the number N_same of identical key point features contained in each image in the library and the image to be matched. The SIFT key point feature similarity between each image in the library and the image to be matched can then be calculated using formula (1) above.
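The counting loop described in the two paragraphs above (an accumulator per library image, incremented for every inverted-index hit) can be sketched as:

```python
from collections import defaultdict

def count_same(query_categories, inverted_index):
    """For each library image, count how many of the query's key point
    features fall into a category that the image also contains (N_same).
    One hit is counted per query feature per image, as in the
    array-accumulation example above."""
    n_same = defaultdict(int)
    for cat in query_categories:
        for image in inverted_index.get(cat, ()):
            n_same[image] += 1
    return dict(n_same)

inverted_index = {0: ["0.jpeg", "1.jpeg"], 1: ["0.jpeg"], 2: ["0.jpeg", "1.jpeg"]}
print(count_same([0, 2, 7], inverted_index))  # {'0.jpeg': 2, '1.jpeg': 2}
```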
According to the image matching method of the embodiment of the present disclosure, the inverted index table shortens the matching retrieval time; for example, experiments show that the retrieval time over 15,000 images in the image library is only 0.6 ms (milliseconds).
FIG. 5 schematically shows an example flowchart of an image matching method provided by another embodiment of the present disclosure. Compared with the image matching method shown in FIG. 2, before determining the color feature similarity between any image in the image library and the image to be matched in step S220, the image matching method shown in FIG. 5 may further include steps S510 to S540.
Step S510: acquiring a plurality of color categories.
Step S520: calculating the color feature value of each pixel in this image.
For example, the above process of calculating the color feature value of each pixel in any image may include: for each pixel in this image, converting the RGB (Red, Green, Blue) color-mode value of the pixel into an HSV (Hue, Saturation, Value) value, integerizing the HSV value, and calculating the color feature value of this pixel in this image according to the integerized HSV value.
Step S530: determining, from the above plurality of color categories and according to the color feature value of each pixel in this image, the color category to which each pixel in this image belongs.
Step S540: determining the color feature of this image according to the color categories to which the plurality of pixels in this image respectively belong.
According to an embodiment of the present disclosure, after the color features of the images in the image library are determined, as shown in FIG. 5, determining the color feature similarity between any image and the image to be matched in step S220 may include the following steps.
Step S225: calculating the color feature value of each pixel in the image to be matched.
For example, calculating the color feature value of each pixel in the image to be matched includes: for each pixel in the image to be matched, converting the RGB color-mode value of the pixel into an HSV value, integerizing the HSV value, and determining the color feature value of this pixel in the image to be matched according to the integerized HSV value.
Step S226: determining, from the above plurality of color categories and according to the color feature value of each pixel in the image to be matched, the color category to which each pixel in the image to be matched belongs.
Step S227: determining the color feature of the image to be matched according to the color categories to which the plurality of pixels in the image to be matched respectively belong.
Step S228: determining the color feature similarity between any image in the image library and the image to be matched according to the color feature of this image and the color feature of the image to be matched.
According to an embodiment of the present disclosure, step S228 of determining the color feature similarity between each image in the image library and the image to be matched according to their color features may include: determining the Euclidean distance between the color feature of any image in the image library and the color feature of the image to be matched, and then determining the color feature similarity between the image to be matched and this image according to the Euclidean distance.
The above process of determining the color feature similarity between each image in the image library and the image to be matched is exemplified below with reference to FIG. 6. Optionally, as shown in FIG. 6, in step S610, the RGB values of the pixels in each image in an image library 601 are converted into HSV values; then, in step S620, the HSV color feature of each image is determined according to the HSV values of the pixels of that image. The HSV color features of at least one image in the image library may form an HSV color feature library 602. In step S630, the RGB values of an image to be matched 603 are converted into HSV values; then, in step S640, the HSV color feature of the image to be matched is determined according to the HSV values of its pixels. In step S650, similarity calculation is performed according to the HSV color feature of the image to be matched and the HSV color features of the images in the HSV feature library, obtaining the HSV color feature similarity 604 between each image in the library and the image to be matched.
According to an embodiment of the present disclosure, for images with simple textures (e.g., paintings), color is the most direct and important feature for distinguishing images, and among color features, the HSV color feature is close to human visual perception. Calculating the color feature similarity based on HSV color features in the embodiment of the present disclosure can accurately reflect the degree of similarity between the color distributions of images. The formulas for converting RGB values into HSV values are as follows:
H = 60° × (G − B)/(max − min), if max = R and G ≥ B; H = 60° × (G − B)/(max − min) + 360°, if max = R and G < B; H = 60° × (B − R)/(max − min) + 120°, if max = G; H = 60° × (R − G)/(max − min) + 240°, if max = B; H = 0, if max = min        Formula (2)
S = (max − min)/max, if max ≠ 0; S = 0, if max = 0        Formula (3)
V = max        Formula (4)
where,
max = max(R, G, B), min = min(R, G, B)
and R, G and B are respectively the red, green and blue coordinate values of a color, each coordinate value being a real number between 0 and 1.
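The conversion above matches Python's standard-library `colorsys` module, up to scaling: `colorsys` returns the hue as a fraction of a full turn rather than in degrees. A minimal check:

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """RGB in [0, 1] -> (H in degrees, S in [0, 1], V in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0, s, v

print(rgb_to_hsv_degrees(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0)
print(rgb_to_hsv_degrees(0.0, 1.0, 0.0))  # pure green -> H = 120 degrees
```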
After the HSV values are obtained from the RGB values, the HSV values are integerized. For example, the H space may be divided into 8 parts, the S space into 3 parts and the V space into 3 parts, where:
H = 0 if h ∈ (315°, 360°] ∪ [0°, 20°]; H = 1 if h ∈ (20°, 40°]; H = 2 if h ∈ (40°, 75°]; H = 3 if h ∈ (75°, 155°]; H = 4 if h ∈ (155°, 190°]; H = 5 if h ∈ (190°, 270°]; H = 6 if h ∈ (270°, 295°]; H = 7 if h ∈ (295°, 315°]
S = 0 if s ∈ [0, 0.2); S = 1 if s ∈ [0.2, 0.7); S = 2 if s ∈ [0.7, 1]
V = 0 if v ∈ [0, 0.2); V = 1 if v ∈ [0.2, 0.7); V = 2 if v ∈ [0.7, 1]
The HSV color feature value of each pixel is Z = 9H + 3S + V, with Z ∈ {0, 1, ..., 71}. In this example, Z has 72 possible values in total, and these 72 values can be mapped to the above plurality of color categories, i.e., 72 color categories are obtained. After the HSV color feature value of each pixel in an image is determined, the color category, among the 72 color categories, to which each pixel belongs is determined. For example, when the color feature value of a pixel is 5, the pixel is determined to belong to the 5th color category. It can thus be determined that, in an image, the number of pixels belonging to the 1st color category is W_1, the number of pixels belonging to the 2nd color category is W_2, ..., and the number of pixels belonging to the 72nd color category is W_72. Assuming that the total number of pixels in the image is W, the HSV color feature of the image may be: {W_1/W, W_2/W, ..., W_72/W}. The HSV color feature similarity P_color may be calculated using a normalized Euclidean distance as follows:
P_color = 1 − sqrt( Σ_{j=1}^{72} (Z_i,j − Z_check,j)² / 72 )        Formula (5)
where Z_i denotes the HSV color feature of the i-th image in the image library, Z_check denotes the HSV color feature of the image to be matched, and Z_i,j and Z_check,j denote their j-th components.
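A sketch of the 72-bin color feature and the similarity of formula (5). The bin boundaries and the normalized-distance form follow the reconstruction above; since the original formulas are image placeholders in this text, both are assumptions:

```python
import math

def quantize(h, s, v):
    """Map one HSV pixel (h in degrees, s and v in [0, 1]) to Z = 9H + 3S + V."""
    if h > 315 or h <= 20: H = 0
    elif h <= 40: H = 1
    elif h <= 75: H = 2
    elif h <= 155: H = 3
    elif h <= 190: H = 4
    elif h <= 270: H = 5
    elif h <= 295: H = 6
    else: H = 7
    S = 0 if s < 0.2 else (1 if s < 0.7 else 2)
    V = 0 if v < 0.2 else (1 if v < 0.7 else 2)
    return 9 * H + 3 * S + V

def color_feature(pixels):
    """72-bin normalized histogram {W_1/W, ..., W_72/W} over HSV pixels."""
    hist = [0] * 72
    for h, s, v in pixels:
        hist[quantize(h, s, v)] += 1
    total = len(pixels)
    return [w / total for w in hist]

def p_color(z_i, z_check):
    """Assumed formula (5): 1 minus the normalized Euclidean distance."""
    return 1.0 - math.sqrt(sum((a - b) ** 2 for a, b in zip(z_i, z_check)) / 72)

red_image = [(0.0, 1.0, 1.0)] * 4  # four pure-red pixels (h=0, s=1, v=1)
print(p_color(color_feature(red_image), color_feature(red_image)))  # identical -> 1.0
```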
It can be understood that the above embodiments determine, according to the features of the image to be matched and the features of the images in the image library, the scale-invariant feature transform (SIFT) key point feature similarity and the color-model HSV color feature similarity between each image in the library and the image to be matched.
Next, the SIFT key point feature similarity and the HSV color feature similarity between the image to be matched and each image in the image library are fused, obtaining the fused similarity between the image to be matched and each image in the library.
For example, the fused similarity between the image to be matched and each image in the library is determined according to the product of the key point feature similarity between the image to be matched and any image and a first weight, and the product of the color feature similarity between the image to be matched and this image and a second weight. The sum of the first weight and the second weight is 1.
For example, the first weight may be determined by the following formula (6):
α = 0, if N_check < L_1; α = (N_check − L_1)/(L_2 − L_1), if L_1 ≤ N_check ≤ L_2; α = 1, if N_check > L_2        Formula (6)
In formula (6), α denotes the first weight, N_check denotes the number of key point features of the image to be matched, L_1 denotes a first threshold number of key point features, and L_2 denotes a second threshold number of key point features.
According to an embodiment of the present disclosure, after the key point feature similarity and the color feature similarity between each image in the image library and the image to be matched are determined, the key point feature similarity and the color feature similarity are fused to obtain the fused similarity, calculated as follows:
P_final = α P_sift + β P_color        Formula (7)
In formula (7), α + β = 1, and α and β are respectively the first weight and the second weight of the two similarities in the fused similarity.
The color feature can solve the problem of low retrieval accuracy for images with simple textures during search, and the complexity of texture is characterized by the number of key point features. Statistics over the image library show that, for example, for an image (painting) of size 640*360, more than 600 key points are extracted from an image with complex texture, while fewer than 400 key points are extracted from an image with simple texture. For an image with simple texture, i.e., when N_check < 400, the key point features contribute little to image matching, and α = 0 may be set. For an image with complex texture, i.e., when N_check > 600, the key point features play a larger role in image matching while the color features play a smaller role, and α = 1 may be set. When 400 < N_check < 600, the larger N_check is, the larger α is. When L_1 is 400 and L_2 is 600, formula (6) for α becomes the following formula (8):
α = 0, if N_check < 400; α = (N_check − 400)/200, if 400 ≤ N_check ≤ 600; α = 1, if N_check > 600        Formula (8)
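The adaptive weighting of formulas (7) and (8) can be sketched as follows, with L_1 = 400 and L_2 = 600 as in the text:

```python
def alpha(n_check, l1=400, l2=600):
    """Formula (8): weight of the key point similarity, adaptive in N_check."""
    if n_check < l1:
        return 0.0  # simple texture: rely on color only
    if n_check > l2:
        return 1.0  # complex texture: rely on key points only
    return (n_check - l1) / (l2 - l1)

def p_final(p_sift, p_color, n_check):
    """Formula (7): fused similarity with beta = 1 - alpha."""
    a = alpha(n_check)
    return a * p_sift + (1.0 - a) * p_color

print(alpha(300))              # 0.0
print(alpha(500))              # 0.5
print(p_final(0.8, 0.6, 700))  # 0.8 (alpha = 1: key point similarity only)
```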
According to an embodiment of the present disclosure, whether an image matching the image to be matched exists in the image library can be determined according to the fused similarities between the image to be matched and each image in the image library.
For example, the fused similarities between at least one image in the image library and the image to be matched may be sorted in descending order to obtain a fused similarity sequence, which is compared with a fused similarity threshold. If a fused similarity greater than the fused similarity threshold exists in the fused similarity sequence, it is determined that an image matching the image to be matched exists in the image library. Specifically, the image whose fused similarity is greater than the fused similarity threshold is taken as the image matching the image to be matched. Further, when no image matching the image to be matched exists in the image library, the image to be matched may be stored into the image library; when multiple images matching the image to be matched exist in the image library, the duplicate images in the image library may be deleted.
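The threshold-and-sort decision above, as a small sketch (the threshold value here is an arbitrary illustration, not a value given in the text):

```python
def find_matches(fused, threshold=0.9):
    """fused: {image_name: fused similarity}. Returns matching images sorted
    by descending fused similarity; an empty list means no match exists
    (the image to be matched can then be added to the library)."""
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, sim in ranked if sim > threshold]

fused = {"0.jpeg": 0.97, "1.jpeg": 0.42, "2.jpeg": 0.93}
print(find_matches(fused))  # ['0.jpeg', '2.jpeg']
```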
Applying the embodiments of the present disclosure can achieve at least the following beneficial effects:
1) Features of the image to be matched are acquired. According to the features of the image to be matched and the features of the images in the image library, the scale-invariant feature transform (SIFT) key point feature similarity and the color-model HSV color feature similarity between each image in the library and the image to be matched are determined. The SIFT key point feature similarity and the HSV color feature similarity between the image to be matched and each image in the library are fused to obtain the respective fused similarity between the image to be matched and each image in the library, according to which it is determined whether an image duplicating the image to be matched exists in the library. In this way, by fusing the SIFT key point feature similarity and the HSV color feature similarity between the images in the library and the image to be matched, the accuracy and efficiency of image matching are improved, which helps find and delete duplicate or substantially duplicate images more accurately and quickly, saving image storage space and reducing the waste of image storage resources.
2) The matching accuracy for images with simple textures is improved, thereby reducing the waste of image storage resources.
3) Building the inverted index table increases the matching retrieval speed of images and shortens the matching retrieval time.
Based on the same concept, an embodiment of the present disclosure further provides an image matching apparatus. A schematic structural diagram of the apparatus is shown in FIG. 7. The image matching apparatus 700 includes a first processing module 701, a second processing module 702, a third processing module 703 and a fourth processing module 704.
The first processing module 701 is configured to acquire an image to be matched. The second processing module 702 is configured to, for any image in an image library, determine the key point feature similarity between this image and the image to be matched, and determine the color feature similarity between this image and the image to be matched. The third processing module 703 is configured to determine the fused similarity between this image and the image to be matched according to the key point feature similarity and the color feature similarity determined above. The fourth processing module 704 is configured to determine, according to the fused similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library. For content not detailed in the image matching apparatus provided by the embodiment of the present disclosure, reference may be made to the image matching method above; the beneficial effects that the image matching apparatus provided by the embodiment of the present disclosure can achieve are the same as those of the image matching method above, and are not repeated here.
Based on the same concept, an embodiment of the present disclosure further provides an electronic device. A schematic structural diagram of the electronic device is shown in FIG. 8. The electronic device 800 includes at least one processor 801, a memory 802 and a bus 803, and the at least one processor 801 is electrically connected to the memory 802. The memory 802 is configured to store at least one computer-executable instruction, and the processor 801 is configured to execute the at least one computer-executable instruction, thereby performing the steps of any image matching method provided by any embodiment or any optional implementation of the embodiments of the present disclosure.
Further, the processor 801 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capability, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
Based on the same concept, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the image matching method provided by the embodiments of the present disclosure.
Computer-readable media include, but are not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM, RAM, EPROM (Erasable Programmable Read-Only Memory), EEPROM, flash memory, magnetic cards or optical cards. That is, readable media include any medium that stores or transmits information in a form readable by a device (e.g., a computer).
The computer-readable storage medium provided by this embodiment has the same inventive concept and the same beneficial effects as the foregoing embodiments, which are not repeated here.
Those skilled in the art will understand that the steps, measures and schemes in the various operations, methods and flows that have been discussed in the present disclosure may be alternated, changed, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows that have been discussed in the present disclosure may also be alternated, changed, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art corresponding to the various operations, methods and flows disclosed in the present disclosure may also be alternated, changed, rearranged, decomposed, combined or deleted.
It should be understood that, although the steps in the flowcharts of the drawings are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the drawings may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be performed in turns or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The above are only some embodiments of the present disclosure. It should be pointed out that those of ordinary skill in the art may also make several improvements and refinements without departing from the principles of the present disclosure, and these improvements and refinements should also be regarded as falling within the protection scope of the present disclosure.

Claims (18)

  1. An image matching method, comprising:
    acquiring an image to be matched;
    for any image in an image library, determining a key point feature similarity between the any image and the image to be matched, and determining a color feature similarity between the any image and the image to be matched;
    determining a fused similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity; and
    determining, according to fused similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
  2. The method according to claim 1, further comprising: before the determining the key point feature similarity between the any image and the image to be matched,
    extracting key point features of a plurality of sample images in a preset sample library, the preset sample library comprising the image library;
    clustering the key point features of the plurality of sample images to obtain a plurality of feature categories;
    extracting at least one key point feature of the any image;
    for each key point feature of the any image, classifying the key point feature to determine, from the plurality of feature categories, a feature category to which the key point feature belongs; and
    determining key point feature distribution information of the any image according to feature categories to which the at least one key point feature of the any image respectively belongs.
  3. The method according to claim 2, wherein the determining the key point feature similarity between the any image and the image to be matched comprises:
    extracting at least one key point feature of the image to be matched;
    for each key point feature of the image to be matched, classifying the key point feature to determine, from the plurality of feature categories, a feature category to which the key point feature belongs;
    determining key point feature distribution information of the image to be matched according to feature categories to which the at least one key point feature of the image to be matched respectively belongs; and
    determining the key point feature similarity between the any image and the image to be matched according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched.
  4. The method according to claim 3, wherein the determining the key point feature similarity between the any image and the image to be matched according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched comprises:
    determining, according to the key point feature distribution information of the any image and the key point feature distribution information of the image to be matched, a first number of key point features contained in both the any image and the image to be matched and belonging to same feature categories; and
    determining the key point feature similarity between the any image and the image to be matched according to the first number.
  5. The method according to claim 4, wherein the first number is N_same, the any image comprises a second number N_n of key point features, and the image to be matched comprises a third number N_check of key point features, wherein N_same, N_n and N_check are all integers greater than or equal to 0; and
    the determining the key point feature similarity between the any image and the image to be matched according to the first number comprises: determining the key point feature similarity P_sift between the any image and the image to be matched according to the following formula:
    P_sift = 2 × N_same / (N_check + N_n)
  6. The method according to claim 4 or 5, further comprising:
    before the determining the key point feature similarity between the any image and the image to be matched, determining an inverted index table according to key point feature distribution information of each of at least one image in the image library, the inverted index table being used for characterizing a mapping relationship from each of the plurality of feature categories to the at least one image;
    wherein the determining the first number of key point features contained in both the any image and the image to be matched and belonging to same feature categories comprises:
    setting an initial value of the first number to 0;
    for each key point feature contained in the image to be matched, looking up, in the inverted index table, images having a mapping relationship with the feature category to which the key point feature belongs, and when the images having the mapping relationship comprise the any image, adding 1 to the value of the first number; and
    determining the value of the first number after traversing the key point features contained in the image to be matched.
  7. The method according to any one of claims 1 to 6, wherein the key point features comprise: scale-invariant feature transform (SIFT) key point features.
  8. The method according to any one of claims 1 to 7, further comprising: before the determining the color feature similarity between the any image and the image to be matched,
    acquiring a plurality of color categories;
    calculating a color feature value of each pixel in the any image;
    determining, from the plurality of color categories and according to the color feature value of each pixel in the any image, a color category to which each pixel in the any image belongs; and
    determining a color feature of the any image according to color categories to which a plurality of pixels in the any image respectively belong.
  9. The method according to claim 8, wherein the determining the color feature similarity between the any image and the image to be matched comprises:
    calculating a color feature value of each pixel in the image to be matched;
    determining, from the plurality of color categories and according to the color feature value of each pixel in the image to be matched, a color category to which each pixel in the image to be matched belongs; and
    determining a color feature of the image to be matched according to color categories to which a plurality of pixels in the image to be matched respectively belong.
  10. The method according to claim 9, wherein the determining the color feature similarity between the any image and the image to be matched further comprises:
    determining a Euclidean distance between the color feature of the any image and the color feature of the image to be matched; and
    determining the color feature similarity between the any image and the image to be matched according to the Euclidean distance.
  11. The method according to any one of claims 9 to 10, wherein
    the calculating the color feature value of each pixel in the any image comprises: for each pixel in the any image, converting an RGB color-mode value of the pixel into an HSV value, integerizing the HSV value, and calculating the color feature value of each pixel in the any image according to the integerized HSV value; and
    the calculating the color feature value of each pixel in the image to be matched comprises: for each pixel in the image to be matched, converting an RGB color-mode value of the pixel into an HSV value, integerizing the HSV value, and determining the color feature value of each pixel in the image to be matched according to the integerized HSV value.
  12. The method according to any one of claims 1 to 11, wherein the determining the fused similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity comprises:
    setting a first weight and a second weight, wherein a sum of the first weight and the second weight is 1; and
    performing, according to the first weight and the second weight, a weighted summation of the key point feature similarity between the any image and the image to be matched and the color feature similarity between the any image and the image to be matched, to obtain the fused similarity between the any image and the image to be matched.
  13. The method according to claim 12, wherein the setting the first weight comprises: determining the first weight α by the following formula:
    α = 0, if N_check < M; α = (N_check − M)/(L − M), if M ≤ N_check ≤ L; α = 1, if N_check > L
    wherein N_check denotes a number of key point features of the image to be matched, M denotes a first threshold number of key point features, and L denotes a second threshold number of key point features.
  14. The method according to any one of claims 1 to 13, wherein the determining, according to the fused similarities between the at least one image in the image library and the image to be matched, whether the image matching the image to be matched exists in the image library comprises:
    sorting the fused similarities between the at least one image in the image library and the image to be matched in descending order to obtain a fused similarity sequence; and
    taking an image whose fused similarity in the fused similarity sequence is greater than a fused similarity threshold as the image matching the image to be matched.
  15. An image matching apparatus, comprising:
    a first processing module, configured to acquire an image to be matched;
    a second processing module, configured to, for any image in an image library, determine a key point feature similarity between the any image and the image to be matched, and determine a color feature similarity between the any image and the image to be matched;
    a third processing module, configured to determine a fused similarity between the any image and the image to be matched according to the key point feature similarity and the color feature similarity; and
    a fourth processing module, configured to determine, according to fused similarities between at least one image in the image library and the image to be matched, whether an image matching the image to be matched exists in the image library.
  16. An electronic device, comprising: at least one processor and a memory;
    the memory being configured to store a computer program; and
    the processor being configured to perform the method according to any one of claims 1 to 14 by invoking the computer program.
  17. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 14.
  18. A computer program product, comprising executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 14.
PCT/CN2020/086666 2019-05-29 2020-04-24 Image matching method, apparatus, device, medium and program product WO2020238515A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/056,900 US11886492B2 (en) 2019-05-29 2020-04-24 Method of matching image and apparatus thereof, device, medium and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910457984.9 2019-05-29
CN201910457984.9A CN110188217A (zh) 2019-05-29 2019-05-29 Image duplicate checking method, apparatus, device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020238515A1 true WO2020238515A1 (zh) 2020-12-03

Family

ID=67718628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/086666 WO2020238515A1 (zh) 2019-05-29 2020-04-24 Image matching method, apparatus, device, medium and program product

Country Status (3)

Country Link
US (1) US11886492B2 (zh)
CN (1) CN110188217A (zh)
WO (1) WO2020238515A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902198A * 2019-03-11 2019-06-18 BOE Technology Group Co., Ltd. Method, apparatus and application system for searching images by image
CN110188217A * 2019-05-29 2019-08-30 BOE Technology Group Co., Ltd. Image duplicate checking method, apparatus, device and computer-readable storage medium
CN112749131A * 2020-06-11 2021-05-04 Tencent Technology (Shanghai) Co., Ltd. Information deduplication processing method and apparatus, and computer-readable storage medium
CN111930985A * 2020-07-08 2020-11-13 Taikang Insurance Group Co., Ltd. Image retrieval method and apparatus, electronic device and readable storage medium
CN113220925B * 2021-04-26 2022-04-12 South China Normal University Cell image duplicate checking method and system
CN113486942A * 2021-06-30 2021-10-08 Wuhan Ligong Guangke Co., Ltd. Method and apparatus for determining repeated fire alarms, electronic device and storage medium
CN114373089A * 2022-01-13 2022-04-19 Shanghai Hanshi Information Technology Co., Ltd. Method and apparatus for autonomous clustering and recognition of shelf goods
US11869260B1 (en) * 2022-10-06 2024-01-09 Kargo Technologies Corporation Extracting structured data from an image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140099033A1 (en) * 2012-10-05 2014-04-10 Raytheon Bbn Technologies Corp. Fast computation of kernel descriptors
CN104008174A * 2014-06-04 2014-08-27 Beijing University of Technology Privacy-preserving index generation method for massive image retrieval
CN106649487A * 2016-10-09 2017-05-10 Soochow University Image retrieval method based on objects of interest
CN108460114A * 2018-02-09 2018-08-28 Fuzhou University Image retrieval method based on a hierarchical attention model
CN110188217A * 2019-05-29 2019-08-30 BOE Technology Group Co., Ltd. Image duplicate checking method, apparatus, device and computer-readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136402A1 (en) * 2004-12-22 2006-06-22 Tsu-Chang Lee Object-based information storage, search and mining system method
US7904455B2 (en) * 2005-11-03 2011-03-08 Fuji Xerox Co., Ltd. Cascading cluster collages: visualization of image search results on small displays
US8004576B2 (en) * 2008-10-31 2011-08-23 Digimarc Corporation Histogram methods and systems for object recognition
CN103150330B * 2013-01-16 2015-12-23 Central South University Scanned certificate image retrieval method based on local objects and block matching
CN108291876B * 2014-11-21 2022-03-15 Guy Le Henaff System and method for detecting the authenticity of products
CN104504007B * 2014-12-10 2018-01-30 Chengdu Pinguo Technology Co., Ltd. Image similarity acquisition method and system
CN106933861A * 2015-12-30 2017-07-07 Beijing Datang Gaohong Data Network Technology Co., Ltd. Cross-camera object retrieval method supporting customizable features
CN107392215A * 2017-08-02 2017-11-24 Focus Technology Co., Ltd. Duplicate image detection method based on the SIFT algorithm


Also Published As

Publication number Publication date
CN110188217A (zh) 2019-08-30
US11886492B2 (en) 2024-01-30
US20210174135A1 (en) 2021-06-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20813528

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20813528

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 220722)
