WO2023207028A1 - Image retrieval method, apparatus and computer program product - Google Patents
- Publication number: WO2023207028A1 (application PCT/CN2022/130517)
- Authority: WO (WIPO, PCT)
Classifications
- G06F16/532—Query formulation, e.g. graphical querying
- G06F16/538—Presentation of query results
- G06F16/55—Clustering; Classification
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F40/30—Semantic analysis
- G06F7/08—Sorting, i.e. grouping record carriers in numerical or other ordered sequence according to the classification of at least some of the information they carry
- G06N20/00—Machine learning
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Definitions
- the present disclosure relates to the field of artificial intelligence, specifically to deep learning technology, and in particular to an image retrieval method and apparatus, training methods and apparatuses for a global recall model and a local verification model, electronic devices, storage media, and computer program products, which can be used in image retrieval scenarios.
- the present disclosure provides an image retrieval method and apparatus, a training method and apparatus for a global recall model, a training method and apparatus for a local verification model, electronic devices, storage media, and computer program products.
- an image retrieval method, including: obtaining, through a pre-trained global recall model, global recall features that take into account the semantic information and visual information of the image to be retrieved; obtaining, through a pre-trained local verification model, local verification features of the image to be retrieved used for local feature point matching; and determining, based on the global recall features and the local verification features, similar and/or identical images of the image to be retrieved from a general image library.
- a training method for a global recall model, including: obtaining a first training sample set, where the training samples in the first training sample set include image pairs and classification data of the image pairs; using a machine learning method, obtaining the global recall features of the images in each image pair through the global recall model, determining the metric loss between the global recall features corresponding to an input image pair and the classification loss between the classification results obtained from those global recall features and the classification data corresponding to the image pair, and updating the global recall model through the metric loss and the classification loss to obtain the trained global recall model.
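As an illustration only, the joint objective described above can be sketched as follows. The disclosure does not fix the exact loss forms, so this sketch assumes a contrastive metric loss over the image pair and a softmax cross-entropy classification loss; all function names and weights are hypothetical:

```python
import numpy as np

def metric_loss(feat_a, feat_b, same_pair, margin=0.5):
    """Assumed contrastive metric loss on an image pair: pull same-category
    pairs together, push different-category pairs beyond the margin."""
    d = np.linalg.norm(feat_a - feat_b)
    if same_pair:
        return d ** 2
    return max(0.0, margin - d) ** 2

def classification_loss(logits, label):
    """Softmax cross-entropy on the classification result of one image."""
    logits = logits - logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[label]))

def joint_loss(feat_a, feat_b, logits_a, logits_b, label_a, label_b,
               same_pair, w_metric=1.0, w_cls=1.0):
    """Weighted sum of metric and classification losses used to update
    the global recall model (weights are illustrative)."""
    l_m = metric_loss(feat_a, feat_b, same_pair)
    l_c = classification_loss(logits_a, label_a) + classification_loss(logits_b, label_b)
    return w_metric * l_m + w_cls * l_c
```

In an actual training loop these scalars would be computed on batches by a deep learning framework and backpropagated through the global recall model.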
- a training method for a local verification model, where the local verification model includes a global branch, a feature reconstruction branch, and an attention branch, and the method includes: obtaining a second training sample set, where the training samples in the second training sample set include sample images and classification data of the sample images; obtaining the global features of a sample image through the global branch, and determining a first loss based on the global features and the classification data corresponding to the input sample image; obtaining reconstructed features of target features through the feature reconstruction branch, and determining a second loss based on the reconstructed features and the target features, where the target features are obtained by the global branch in the process of extracting the global features; determining, through the attention branch, the attention weights of the target features, obtaining local point features based on the attention weights and the reconstructed features, and determining a third loss based on the local point features and the classification data corresponding to the input sample image; and updating the local verification model based on the first loss, the second loss, and the third loss to obtain the trained local verification model.
- an image retrieval device, including: a recall unit configured to obtain, through a pre-trained global recall model, global recall features that take into account the semantic information and visual information of the image to be retrieved; a verification unit configured to obtain, through a pre-trained local verification model, local verification features of the image to be retrieved used for local feature point matching; and a determination unit configured to determine, based on the global recall features and the local verification features, similar and/or identical images of the image to be retrieved from a general image library.
- a training device for a global recall model, including: a first acquisition unit configured to obtain a first training sample set, where the training samples in the first training sample set include image pairs and classification data of the image pairs; and a first training unit configured to: using a machine learning method, obtain the global recall features of the images in each image pair through the global recall model, determine the metric loss between the global recall features corresponding to an input image pair and the classification loss between the classification results obtained from those global recall features and the classification data corresponding to the image pair, and update the global recall model through the metric loss and the classification loss to obtain the trained global recall model.
- a training device for a local verification model, where the local verification model includes a global branch, a feature reconstruction branch, and an attention branch, and the device includes: a second acquisition unit configured to obtain a second training sample set, where the training samples in the second training sample set include sample images and classification data of the sample images; a first loss unit configured to obtain the global features of a sample image through the global branch and determine a first loss based on the global features and the classification data corresponding to the input sample image; a second loss unit configured to obtain reconstructed features of target features through the feature reconstruction branch and determine a second loss based on the reconstructed features and the target features, where the target features are obtained by the global branch in the process of extracting the global features; a third loss unit configured to determine, through the attention branch, the attention weights of the target features, obtain local point features based on the attention weights and the reconstructed features, and determine a third loss based on the local point features and the classification data corresponding to the input sample image; and a second training unit configured to update the local verification model based on the first loss, the second loss, and the third loss to obtain the trained local verification model.
- an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method described in any implementation of the first, second, or third aspect.
- a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to execute the method described in any implementation of the first, second, or third aspect.
- a computer program product including a computer program that, when executed by a processor, implements the method described in any implementation of the first, second, or third aspect.
- Figure 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
- Figure 2 is a flow chart of one embodiment of an image retrieval method according to the present disclosure;
- Figure 3 is a schematic diagram of an application scenario of the image retrieval method according to this embodiment;
- Figure 4 is a flow chart of yet another embodiment of an image retrieval method according to the present disclosure;
- Figure 5 is a flow chart of an embodiment of a training method for a global recall model according to the present disclosure;
- Figure 6 is a schematic structural diagram of a global recall model according to the present disclosure;
- Figure 7 is a flow chart of an embodiment of a training method for a local verification model according to the present disclosure;
- Figure 8 is a schematic structural diagram of a local verification model according to the present disclosure;
- Figure 9 is a structural diagram of an embodiment of an image retrieval device according to the present disclosure;
- Figure 10 is a structural diagram of an embodiment of a training device for a global recall model according to the present disclosure;
- Figure 11 is a structural diagram of an embodiment of a training device for a local verification model according to the present disclosure;
- Figure 12 is a schematic structural diagram of a computer system suitable for implementing embodiments of the present disclosure.
- the collection, storage, use, processing, transmission, provision and disclosure of user personal information are in compliance with relevant laws and regulations and do not violate public order and good customs.
- FIG. 1 shows an exemplary architecture 100 in which the image retrieval method and device and the global recall model training method and device of the present disclosure can be applied.
- the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105.
- the communication connections between terminal devices 101, 102, and 103 constitute a topological network, and the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, and 103 and the server 105.
- Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
- the terminal devices 101, 102, and 103 may be hardware devices or software that support network connection for data interaction and data processing.
- if the terminal devices 101, 102, and 103 are hardware, they can be various electronic devices that support network connection, information acquisition, interaction, display, and processing, including but not limited to image acquisition devices, smartphones, tablets, e-book readers, laptops, desktop computers, and the like.
- if the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
- the server 105 may be a server providing various services, for example, a background processing server that, for the images to be retrieved provided by the terminal devices 101, 102, and 103, determines similar and/or identical images of the image to be retrieved in a general image library based on the global recall features obtained through the global recall model and the local verification features obtained through the local verification model. Optionally, the server can also train the global recall model and the local verification model that implement the above image retrieval task. As an example, the server 105 may be a cloud server.
- the server can be hardware or software.
- the server can be implemented as a distributed server cluster composed of multiple servers or as a single server.
- if the server is software, it can be implemented as multiple pieces of software or software modules (for example, software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
- the image retrieval method, the training method of the global recall model, and the training method of the local verification model provided by the embodiments of the present disclosure can be executed by the server, by the terminal device, or by the server and the terminal device in cooperation with each other. Correspondingly, the parts (such as the units) included in the image retrieval device, the training device of the global recall model, and the training device of the local verification model can all be set in the server, all in the terminal device, or distributed between the server and the terminal device.
- the number of terminal devices, networks, and servers in Figure 1 is only illustrative. Depending on implementation needs, there can be any number of terminal devices, networks, and servers.
- the system architecture may include only the electronic device (such as a server or terminal device) on which the image retrieval method, the training method of the global recall model, and the training method of the local verification model run.
- an image retrieval method is provided. Based on the global recall features of the image to be retrieved obtained by the global recall model and the local verification features of the image to be retrieved obtained by the local verification model, it provides common retrieval logic for determining similar images and identical images of the image to be retrieved from a general image library, which improves the convenience and efficiency of image retrieval.
- FIG. 2 is a flow chart of an image retrieval method provided by an embodiment of the present disclosure.
- the process 200 includes the following steps:
- Step 201 Obtain global recall features that take into account the semantic information and visual information of the image to be retrieved through the pre-trained global recall model.
- the execution subject of the image retrieval method can obtain the image to be retrieved remotely or locally through a wired or wireless network connection, and obtain, through the pre-trained global recall model, global recall features that take into account the semantic information and visual information of the image to be retrieved. The global recall model is used to characterize the correspondence between the image to be retrieved and the global recall features, and the global recall features characterize the overall information of the image to be retrieved.
- the above execution subject can determine the image information carried in the image retrieval request based on the image retrieval request issued by the user, and determine the image to be retrieved based on the image information.
- the image to be retrieved can be an image containing any content.
- the global recall model can be any neural network model with global feature extraction function.
- the global recall model can adopt network models such as convolutional neural networks and recurrent neural networks.
- in order to make the global recall features obtained by the global recall model take into account both the visual information and the semantic information of the image to be retrieved, lessons are drawn from metric learning and classification algorithms: image pairs of the same category are used to train a metric loss function so that the global recall model has stronger visual discrimination, while large-scale classification data is used to train a classification loss function so that the global recall model has stronger semantic discrimination.
- Step 202 Obtain local verification features of the image to be retrieved for local feature point matching through the pre-trained local verification model.
- the above-mentioned execution subject can obtain the local verification features of the image to be retrieved for local feature point matching through the pre-trained local verification model.
- the local verification model is used to characterize the correspondence between the image to be retrieved and the local verification features.
- the local verification features mainly include local feature points of the image to be retrieved.
- the local verification model can be any neural network model with local feature extraction function.
- the local verification model can adopt network models such as convolutional neural network and recurrent neural network.
- the local verification features obtained through the local verification model may be key point features of each subject object included in the image to be retrieved, such as contour features and internal key part features of the subject object.
- Step 203 Determine similar and/or identical images of the image to be retrieved from the general image database based on the global recall features and local verification features.
- the above-mentioned execution subject can determine similar and/or identical images of the image to be retrieved from the general image database based on the global recall feature and the local verification feature.
- the general image library represents an image library that can be used universally to retrieve similar images and identical images.
- a similar image represents an image that has a certain degree of similarity with the image to be retrieved.
- the similarity can be, for example, the similarity between the images with respect to the background information and the similarity between the images with respect to the main object.
- an identical image represents an image whose background information and main objects are consistent with those of the image to be retrieved.
- as an example, the above execution subject can determine a preset number of images from the general image library by combining the global recall features and the local verification features, sort the preset number of images in descending order of similarity and consistency based on the global recall features and the local verification features, determine the image ranked first as the identical image of the image to be retrieved, and determine the remaining, lower-ranked images among the preset number of images as similar images of the image to be retrieved.
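The two-stage logic described above can be sketched as follows. This is only an illustrative pipeline under assumed similarity measures (cosine similarity on global features, a simple threshold-based match count on local features); it is not the disclosed implementation:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def local_match_count(loc_q, loc_g, thr=0.9):
    """Number of query feature points whose best match in the other image
    exceeds a similarity threshold (stand-in for real key-point matching
    on local verification features)."""
    return int(((loc_q @ loc_g.T).max(axis=1) >= thr).sum())

def retrieve(query_g, gallery_g, query_l, gallery_l, k=3):
    """Recall the top-k gallery images by global-feature similarity, then
    re-rank the recalled images by (local match count, global similarity).
    The first-ranked image is taken as the identical-image candidate and
    the rest as similar images."""
    sims = [cosine(query_g, g) for g in gallery_g]
    recalled = np.argsort(sims)[::-1][:k]
    ranked = sorted(recalled,
                    key=lambda i: (local_match_count(query_l, gallery_l[i]), sims[i]),
                    reverse=True)
    return ranked[0], ranked[1:]
```

Here `gallery_g` holds the precomputed global recall features of the general image library and `gallery_l` the corresponding local verification descriptors (assumed unit-normalized).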
- the above-mentioned execution subject may determine only identical images or only similar images with reference to this example, or may determine identical images and similar images at the same time.
- as yet another example, the above execution subject can determine a preset number of images from the general image library based on the global recall features, with their ordering determined by the similarity and consistency between the global recall features of the images, or determine a preset number of images from the general image library based on the local verification features, with their ordering determined by the similarity and consistency between the local verification features of the images. The identical images among them are determined as identical images of the image to be retrieved, and the similar images among them are determined as similar images of the image to be retrieved. The above execution subject may refer to this example to determine only identical images or only similar images, or determine both at the same time.
- FIG. 3 is a schematic diagram 300 of an application scenario of the image retrieval method according to this embodiment.
- the terminal device 301 sends an image retrieval request to the server 302 , where the image retrieval request carries relevant information of the image 303 to be retrieved.
- the server 302 determines the image to be retrieved 303 according to the image retrieval request. First, through the pre-trained global recall model 304, the global recall features 305 that take into account the semantic information and visual information of the image to be retrieved 303 are obtained, and through the pre-trained local verification model 306, the local verification features 307 of the image to be retrieved 303 used for local feature point matching are obtained.
- the server 302 determines similar and/or identical images of the image 303 to be retrieved from the general image library 308 based on the global recall feature 305 and the local verification feature 307 .
- in this embodiment, an image retrieval method is provided. Based on the global recall features of the image to be retrieved obtained by the global recall model and the local verification features of the image to be retrieved obtained by the local verification model, it provides common retrieval logic for determining similar images and identical images of the image to be retrieved from a general image library, which improves the convenience and efficiency of image retrieval.
- the above execution subject may perform the above step 203 in the following manner:
- multiple recall images are determined from the general image library, and first matching information between the image to be retrieved and each recall image in the multiple recall images is determined.
- the global recall features represent the global feature information of the image to be retrieved. Through the global recall features, multiple images that are similar to the image to be retrieved as a whole are determined as recall images. In this embodiment, the number of returned recall images can be flexibly set according to the actual situation; as an example, the number of recall images is 400.
- specifically, the above execution subject determines the first matching information between each image in the general image library and the image to be retrieved based on the global recall features, and takes the preset number of top-ranked images, sorted by the matching degree represented by the first matching information, as the multiple recall images.
- second matching information about local feature points can be determined between each recall image and the image to be retrieved by using the local verification features that characterize key local feature points.
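One common way to compute such local-point matching information (assumed here for illustration; the disclosure does not prescribe a specific matcher) is to count mutual nearest neighbours between the two images' local descriptors:

```python
import numpy as np

def mutual_nn_matches(desc_q, desc_g):
    """Count local feature points that are mutual nearest neighbours
    between the image to be retrieved (desc_q) and a recall image
    (desc_g); the count can serve as the second matching information.
    Descriptors are assumed unit-normalized, so dot products act as
    cosine similarities."""
    sims = desc_q @ desc_g.T      # (query points, gallery points)
    nn_qg = sims.argmax(axis=1)   # best gallery point for each query point
    nn_gq = sims.argmax(axis=0)   # best query point for each gallery point
    return int(sum(nn_gq[j] == i for i, j in enumerate(nn_qg)))
```

A mutual check filters out one-sided matches, which makes the count a more reliable verification signal than a simple nearest-neighbour count.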
- as an example, the above execution subject can sort in descending order based on the first matching information and the second matching information, determine the recall image ranked first on both kinds of matching information as the identical image of the image to be retrieved, and determine the images other than the identical image among the multiple recall images as similar images of the image to be retrieved.
- in this embodiment, a specific implementation of determining images based on the global recall features and the local verification features is provided: the first matching information between the images in the general image library and the image to be retrieved is determined based on the global recall features to obtain the recall images; then the second matching information between the recall images and the image to be retrieved is determined based on the local verification features, so that identical and/or similar images of the image to be retrieved are determined based on the first matching information and the second matching information, which improves the efficiency and accuracy of image retrieval.
- the above execution subject may also perform the above third step in the following manner:
- the first matching information and the second matching information corresponding to each recall image in the multiple recall images are fused to obtain the ranking score corresponding to each recall image.
- the multiple recall images are sorted according to the ranking scores, and the sorted multiple recall images are determined as similar images of the image to be retrieved.
- as an example, fusion weights for the first matching information and the second matching information may be preset, and the first matching information and the second matching information are fused according to the fusion weights to obtain the ranking score. The multiple recall images are sorted in descending order according to the ranking scores, so that when similar images are displayed to the user, images with high similarity are displayed first.
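The weighted fusion and descending sort can be sketched as follows; the weight values are illustrative only, not values given in the disclosure:

```python
def ranking_score(first_match, second_match, w_global=0.6, w_local=0.4):
    """Fuse the two kinds of matching information with preset fusion
    weights (values here are hypothetical) to get a ranking score."""
    return w_global * first_match + w_local * second_match

def rank_recalled(candidates):
    """Sort recall images in descending order of ranking score; each
    candidate is a tuple (image_id, first_match, second_match)."""
    return sorted(candidates,
                  key=lambda c: ranking_score(c[1], c[2]),
                  reverse=True)
```

With this ordering, the highest-scoring images are displayed to the user first.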
- in this embodiment, a method for determining the similar images of the image to be retrieved is provided, which improves the accuracy of the determined similar images.
- the above-mentioned execution subject can perform the above-mentioned third step in the following manner:
- recall images in different matching threshold spaces among multiple recall images are determined according to the first matching information.
- the matching threshold space represents different preset matching threshold ranges.
- as an example, the matching threshold spaces of the multiple recall images can be set to 0.85-0.90, 0.90-0.95, and 0.95-1.0.
- then, for each matching threshold space, identical images of the image to be retrieved are determined based on the second matching information of the recall images in that interval.
- the local verification feature represents a key local feature point in the image to be retrieved
- the second matching information can represent the number of matching feature points between the image to be retrieved and the recalled image.
- specifically, a corresponding matching point threshold can be set for each matching threshold space. For each recall image in a matching threshold space, when the number of matching feature points between the recall image and the image to be retrieved is not less than the matching point threshold corresponding to that matching threshold space, the recall image is considered to be an identical image of the image to be retrieved.
- the matching degree of the global recall feature represented by the matching threshold space is negatively correlated with the matching degree of the local verification feature represented by the matching point threshold.
- for example, a smaller matching point threshold can be set for the matching threshold space 0.95-1.0. Since the matching degree of the global recall features between the recall images in this space and the image to be retrieved is very high, a recall image in this interval can be considered an identical image of the image to be retrieved even when there are fewer matching feature points between it and the image to be retrieved. Correspondingly, for the matching threshold space 0.90-0.95, the matching point threshold is set greater than that of the matching threshold space 0.95-1.0, so that when the matching degree of the global recall features between a recall image in this interval and the image to be retrieved is not very high, more matching points are required to ensure the accuracy of determining identical images.
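The tiered decision rule above can be sketched as a small function. The band boundaries follow the example intervals in the text; the per-band matching point thresholds are hypothetical values chosen only to show the negative correlation:

```python
def is_identical(global_sim, n_matched_points):
    """Decide whether a recall image is an identical image of the image
    to be retrieved: the higher the global-similarity band, the fewer
    matched local feature points are required (point thresholds are
    illustrative, not from the disclosure)."""
    if global_sim >= 0.95:
        return n_matched_points >= 5   # very high global match: few points suffice
    if global_sim >= 0.90:
        return n_matched_points >= 15  # moderate global match: more points required
    if global_sim >= 0.85:
        return n_matched_points >= 30  # weaker global match: strictest local check
    return False                       # below all matching threshold spaces
```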
- a method for determining the same image as the image to be retrieved is provided, which improves the accuracy of the determined identical images.
- the above-mentioned execution subject may perform the above-mentioned first step in the following manner:
- based on the global recall features of the image to be retrieved and the global recall features of the images in the general image library, multiple recall images are determined from the general image library, and the first matching information between the image to be retrieved and each recall image in the multiple recall images is determined.
- the global recall features of images in the general image library are determined by the global recall model.
- the global recall features of the image can be obtained through the pre-trained global recall model to build a global recall feature library corresponding to the general image library.
- the above execution subject can also perform the following post-processing operation: use the LW (Learned Whiten) algorithm to post-process the global recall features of each preset dimension (for example, 128 dimensions) extracted by the global recall model.
- the specific process of the LW algorithm is as follows: First, randomly select a certain number of image pairs (for example, 30000-40000) from the preset database and extract the 128-dimensional features of the images in the image pairs.
- the image pairs can be similar images.
- the LW algorithm is trained using the obtained feature information of the image pair to obtain the mapping matrix.
- after the feature information of one image in the image pair is mapped, there is a certain difference between the obtained mapped feature and the feature information of the other image.
- the trained mapping matrix is designed to minimize the total difference over all image pairs.
- all global recall features in the global recall feature library are post-processed using the mapping matrix.
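- the learned-whitening idea above can be sketched in numpy as a least-squares fit: a linear mapping is trained so that the mapped feature of one image in each pair is close to the feature of the other, minimizing the total difference over all pairs. This is an illustrative formulation under that assumption; the disclosure does not specify the exact optimization used by the LW algorithm, and the synthetic features below are toy data.

```python
import numpy as np

def fit_lw_matrix(feats_a, feats_b):
    """Least-squares mapping W minimizing ||feats_a @ W - feats_b||^2."""
    W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)
    return W

def apply_lw(features, W):
    """Post-process a bank of global recall features with the mapping."""
    return features @ W

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(1000, 128))            # one image of each pair
feats_b = feats_a @ rng.normal(size=(128, 128)) * 0.1 + feats_a  # its partner

W = fit_lw_matrix(feats_a, feats_b)
mapped = apply_lw(feats_a, W)
print(np.allclose(mapped, feats_b, atol=1e-6))    # the mapping is recovered
```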
- the global recall features of the images in the general image library are determined in advance by the global recall model, so that they can be matched against the global recall features of the image to be retrieved, which improves the efficiency of determining recall images.
- the above execution subject may perform the above second step in the following manner:
- Second matching information of feature points between the image to be retrieved and each of the recalled images is determined based on the local verification features of the image to be retrieved and the local verification features of the multiple recall images.
- the local verification features of the images in the general image library are determined through the local verification model.
- the local verification features of the image can be obtained through the pre-trained local verification model to build a local verification feature library corresponding to the general image library.
- the above execution body can also perform the following post-processing operation: use the LW (Learned Whiten) algorithm to post-process the local verification features of each preset dimension (for example, 128 dimensions) extracted by the local verification model.
- the specific process of the LW algorithm is as follows: first, randomly select a certain number of image pairs (for example, 30000-40000) from the preset database, extract the local feature points of the images in the image pairs, and use a matching algorithm to determine the matching feature point pairs; then, train the LW algorithm using the feature information of the obtained feature point pairs to obtain the mapping matrix, where, after the feature information of one feature point in a feature point pair is mapped, there is a certain difference between the obtained mapped feature point and the other feature point, and the trained mapping matrix is designed to minimize the total difference over all feature point pairs; finally, all local verification features in the local verification feature library are post-processed using the mapping matrix.
- the number of feature points in the extracted local verification features is large.
- the local verification features in the local verification feature library can be quantized for storage, converting features of the float data type into features of the int data type.
- the PQ (Product Quantization) process is as follows: first, randomly select multiple images from the preset image library and extract the feature points of the images; then, use the symmetric distance algorithm to calculate the PQ quantization codebook; finally, use the quantization codebook to convert all features of the original float data type into features of the int data type.
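- the float-to-int conversion above can be illustrated with a minimal numpy sketch of product quantization: each feature vector is split into sub-vectors, a small codebook is trained per sub-space, and each sub-vector is stored as a centroid index. The plain k-means loop here is an illustrative stand-in for the codebook training of the actual pipeline, and the symmetric distance computation is omitted; sub-space counts and centroid counts are assumptions.

```python
import numpy as np

def train_pq_codebook(feats, n_sub=4, n_centroids=16, iters=10, seed=0):
    """Train one small k-means codebook per sub-space."""
    rng = np.random.default_rng(seed)
    sub_dim = feats.shape[1] // n_sub
    codebook = []
    for s in range(n_sub):
        sub = feats[:, s * sub_dim:(s + 1) * sub_dim]
        cent = sub[rng.choice(len(sub), n_centroids, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(sub[:, None] - cent[None], axis=2)
            assign = d.argmin(axis=1)
            for k in range(n_centroids):
                if np.any(assign == k):
                    cent[k] = sub[assign == k].mean(axis=0)
        codebook.append(cent)
    return codebook

def pq_encode(feats, codebook):
    """float features -> int centroid indices (one uint8 per sub-space)."""
    n_sub = len(codebook)
    sub_dim = feats.shape[1] // n_sub
    codes = np.empty((len(feats), n_sub), dtype=np.uint8)
    for s, cent in enumerate(codebook):
        sub = feats[:, s * sub_dim:(s + 1) * sub_dim]
        codes[:, s] = np.linalg.norm(sub[:, None] - cent[None], axis=2).argmin(axis=1)
    return codes

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 32)).astype(np.float32)
codebook = train_pq_codebook(feats)
codes = pq_encode(feats, codebook)
print(codes.dtype, codes.shape)  # int codes replace the float features
```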
- the local verification features of the images in the general image library are determined in advance by the local verification model, so that they can be matched against the local verification features of the image to be retrieved.
- the efficiency of determining the second matching information based on local verification features is improved.
- the above execution subject may also perform the following operations:
- the image library used for similar image retrieval and the image library used for same image retrieval are merged, and the images in the merged image library are deduplicated to obtain the general image library.
- referring to FIG. 4, a schematic process 400 of yet another embodiment of the image retrieval method according to the present disclosure is shown, including the following steps:
- Step 401 Obtain global recall features that take into account the semantic information and visual information of the image to be retrieved through the pre-trained global recall model.
- Step 402 Obtain local verification features of the image to be retrieved for local feature point matching through the pre-trained local verification model.
- Step 403 Determine multiple recall images from the general image library according to the global recall features, and determine the first matching information between the image to be retrieved and each recall image in the multiple recall images.
- Step 404 Determine second matching information of feature points between the image to be retrieved and each recalled image in the plurality of recalled images based on the local verification features.
- Step 405 Combine the first matching information and the second matching information corresponding to each recalled image in the plurality of recalled images to obtain the ranking score corresponding to each recalled image.
- Step 406 Sort multiple recalled images according to the sorting scores, and determine the sorted multiple recalled images as similarity images of the image to be retrieved.
- Step 407 Determine recall images in different matching threshold spaces among multiple recall images based on the first matching information.
- Step 408 For different matching threshold spaces, determine the same image of the image to be retrieved based on the second matching information of the recalled image in the interval.
- the process 400 of the image retrieval method in this embodiment specifically illustrates the determination process of similar images and the determination process of the same image, further improving the accuracy of image retrieval.
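- steps 405-406 above can be sketched as fusing the global-similarity score (first matching information) with the matched-point count (second matching information) into one ranking score per recalled image. The normalization, the 0.5/0.5 combination weights, and the point cap below are illustrative assumptions; the disclosure only states that the two kinds of matching information are combined.

```python
def rank_recalled_images(recalls, w_global=0.5, w_local=0.5, max_points=100):
    """recalls: list of (image_id, global_similarity, matched_points)."""
    scored = [
        (img_id, w_global * sim + w_local * min(points, max_points) / max_points)
        for img_id, sim, points in recalls
    ]
    # higher ranking score first: these become the similar-image results
    return sorted(scored, key=lambda item: item[1], reverse=True)

recalls = [("a", 0.96, 12), ("b", 0.88, 80), ("c", 0.91, 40)]
print(rank_recalled_images(recalls))
```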
- referring to FIG. 5, a schematic process 500 of one embodiment of a training method for a global recall model according to the present disclosure is shown, including the following steps:
- Step 501 Obtain the first training sample set.
- the execution subject of the global recall model training method can obtain the first training sample set remotely or locally based on a wired network connection or a wireless network connection.
- the training samples in the first training sample set include image pairs and classification data of the image pairs.
- the two images in an image pair have the same category represented by the categorical data.
- the image can contain any content.
- the above execution subject can obtain the first training sample set in the following manner: cluster the images in the preset image library based on a semi-supervised clustering algorithm, and obtain the first training sample set based on the clustering results.
- the images in the preset image library are clustered based on the semi-supervised clustering algorithm to obtain the clustering results. Furthermore, two different images in the same cluster are used as the image pair in a training sample, the classification information represented by the clustering result is used as the classification data in the training sample, and training samples are thus determined to obtain the first training sample set.
- This implementation provides a way to automatically obtain the first training sample set for training the global recall model. Based on the preset image library, a semi-supervised clustering algorithm can be used to quickly obtain the first training sample set, which improves the convenience of obtaining training samples.
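- the pair-construction step above can be sketched as follows: two different images from the same cluster form an image pair, and the cluster id serves as the classification data. The cluster layout and file names below are toy assumptions for illustration.

```python
from itertools import combinations

def build_pair_samples(clusters):
    """clusters: dict mapping cluster_id -> list of image ids."""
    samples = []
    for cluster_id, images in clusters.items():
        for img_a, img_b in combinations(images, 2):  # two different images
            samples.append(((img_a, img_b), cluster_id))
    return samples

clusters = {0: ["cat1.jpg", "cat2.jpg", "cat3.jpg"], 1: ["dog1.jpg", "dog2.jpg"]}
samples = build_pair_samples(clusters)
print(len(samples))  # 3 pairs from cluster 0 + 1 pair from cluster 1 = 4
```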
- Step 502 Use the machine learning method to obtain the global recall features of the images in the image pair through the global recall model, determine the metric loss between the global recall features corresponding to the input image pair and the classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to the image pair, and update the global recall model through the metric loss and classification loss to obtain the trained global recall model.
- the above execution subject can use the machine learning method to obtain the global recall features of the images in the image pair through the global recall model, determine the metric loss between the global recall features corresponding to the input image pair and the classification loss between the classification result obtained based on the global recall features and the classification data corresponding to the image pair, and update the global recall model through the metric loss and classification loss to obtain the trained global recall model.
- the global recall model 600 includes a backbone network 601, a fully connected layer 602, and a BNNeck (Batch Normalization Neck, batch normalization neck) module 603.
- the metric loss can be, for example, Lifted Struct loss (lifted structure loss), and the classification loss can be, for example, cross-entropy loss.
- the above execution subject inputs an image pair that has not yet been used for training into the global recall model, obtains the global recall features of the images in the image pair through the global recall model, determines the metric loss between the global recall features corresponding to the input image pair and the classification loss between the classification result obtained based on the global recall features and the classification data corresponding to the image pair, and then updates the global recall model based on the metric loss and classification loss.
- by looping through the above training operations, the trained global recall model is obtained in response to reaching a preset end condition.
- the preset end condition may be, for example, that the training time exceeds a preset time threshold, that the number of training iterations exceeds a preset count threshold, or that the training loss converges.
- the trained global recall model can be applied to the above embodiments 200 and 400.
- image pairs of the same category are used to train the metric loss function so that the global recall model can have stronger visual discrimination.
- large-scale classification data is used to train the classification loss function so that the global recall model can have stronger semantic discrimination, which improves the visual and semantic discrimination of the trained global recall model and improves the accuracy of the global recall features obtained based on the global recall model.
- the above execution subject can update the global recall model through the metric loss and classification loss in the following manner to obtain the trained global recall model:
- the above execution entity can preset the combination weights of the metric loss and the classification loss, so as to combine the metric loss and the classification loss according to the combination weights to obtain the total loss.
- the global recall model is updated based on the fusion of the metric loss and the classification loss, which improves the accuracy of the trained global recall model.
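- the weighted loss fusion described above can be sketched in numpy. The simple pairwise squared-distance metric loss below is an illustrative stand-in for the lifted structure loss actually named by the disclosure, and the combination weights and toy features/logits are assumptions; only the pattern of fusing a metric loss with a classification (cross-entropy) loss follows the text.

```python
import numpy as np

def cross_entropy(logits, label):
    """Classification loss for one sample."""
    logits = logits - logits.max()                    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

def pairwise_metric_loss(feat_a, feat_b):
    """Pull the two features of a same-category image pair together."""
    return np.sum((feat_a - feat_b) ** 2)

def total_loss(feat_a, feat_b, logits, label, w_metric=1.0, w_cls=1.0):
    """Fuse metric and classification losses with preset weights."""
    return (w_metric * pairwise_metric_loss(feat_a, feat_b)
            + w_cls * cross_entropy(logits, label))

feat_a = np.array([0.1, 0.9])
feat_b = np.array([0.2, 0.8])
logits = np.array([2.0, 0.5, 0.1])
loss = total_loss(feat_a, feat_b, logits, label=0)
print(float(loss))
```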
- the above-mentioned execution subject may perform the above-mentioned first step in the following manner:
- first, the metric loss and classification loss are made to be in the same distribution space; then, the total loss is obtained by combining the metric loss and classification loss in the same distribution space.
- the above execution subject can use the BNNeck module to batch normalize the classification loss so that it is in the same distribution space as the metric loss. Based on the fusion of metric loss and classification loss in the same distribution space, the accuracy of the obtained total loss is improved.
- the above execution subject may also perform the following operation: during the update process of the global recall model, keep the weight of the classification loss unchanged and use a warm-up strategy to adjust the weight of the metric loss.
- This implementation further makes the global recall model more visually distinguishable while ensuring semantic distinction.
- the above-mentioned execution subject can also use the warm-up strategy to adjust the learning rate of the global recall model in the early stage of training, and then reduce the learning rate in a stepwise manner after training for a period of time, so as to allow the global recall model to better find the global optimum.
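- the warm-up-then-stepwise-decay schedule above can be sketched as follows: the rate (the same shape applies to the metric loss weight) ramps up linearly for the first few epochs, then decays by a fixed factor every fixed number of epochs. All schedule constants are illustrative assumptions.

```python
def warmup_step_lr(epoch, base_lr=0.1, warmup_epochs=5, step=10, gamma=0.1):
    """Linear warm-up followed by stepwise decay of the learning rate."""
    if epoch < warmup_epochs:                    # linear warm-up phase
        return base_lr * (epoch + 1) / warmup_epochs
    decayed = (epoch - warmup_epochs) // step    # stepwise decay afterwards
    return base_lr * (gamma ** decayed)

schedule = [round(warmup_step_lr(e), 4) for e in (0, 4, 5, 14, 15, 25)]
print(schedule)  # ramps up to base_lr, then decays by 10x every 10 epochs
```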
- referring to FIG. 7, a schematic process 700 of one embodiment of a training method for a local verification model according to the present disclosure is shown, including the following steps:
- Step 701 Obtain the second training sample set.
- the execution subject of the training method of the local verification model can obtain the second training sample set remotely or locally based on a wired network connection or a wireless network connection.
- the training samples in the second training sample set include sample images and classification data of the sample images.
- the sample image can be an image including arbitrary content.
- the above execution subject can obtain the second training sample set in the following manner: cluster the images in the preset image library based on a semi-supervised clustering algorithm, and obtain the second training sample set based on the clustering results.
- the images in the preset image library are clustered based on the semi-supervised clustering algorithm to obtain the clustering results. Furthermore, the images in the clustering results are used as sample images in the training samples, the classification information represented by the clustering results is used as the classification data in the training samples, and a training sample is determined to obtain the second training sample set.
- This implementation provides a way to automatically obtain the second training sample set for training the local verification model. Based on the preset image library, a semi-supervised clustering algorithm can be used to quickly obtain the second training sample set, which improves the convenience of obtaining training samples.
- Step 702 Obtain the global features of the sample image through the global branch, and determine the first loss based on the global features and the classification data corresponding to the input sample image.
- the above-mentioned execution subject obtains the global features of the sample image through the global branch, and determines the first loss based on the global features and the classification data corresponding to the input sample image.
- the local verification model 800 includes a global branch 801, a feature reconstruction branch 802 and an attention branch 803.
- the global branch may be, for example, a network model such as a recurrent convolutional network or a residual network.
- the feature reconstruction branch can be a network module implemented based on a fully convolutional network
- the attention branch can be a network module implemented based on an attention network.
- the global branch is the same as a normal classification network.
- the global branch uses the ResNet50 network, its last pooling layer uses GeM pooling (generalized-mean pooling), and the loss uses ArcFace loss (additive angular margin loss).
- GeM-Pooling can be seen as an extension of Average Pooling and Max Pooling.
- This algorithm can enhance the robustness of images of different resolutions and improve the representation ability of features.
- ArcFace loss improves the inter-class separability of the local verification model while strengthening intra-class compactness and inter-class differences, which helps to improve the model's visual discrimination of features.
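- GeM pooling as described above can be sketched in numpy: with exponent p = 1 it reduces to average pooling, and as p grows it approaches max pooling, which is why it generalizes both. The feature map shape and p value below are illustrative.

```python
import numpy as np

def gem_pool(feature_map, p=3.0, eps=1e-6):
    """GeM pooling: (channels, H, W) feature map -> (channels,) descriptor."""
    clamped = np.clip(feature_map, eps, None)      # keep the p-th root real
    return (clamped ** p).mean(axis=(1, 2)) ** (1.0 / p)

fmap = np.arange(8.0).reshape(2, 2, 2)
avg = gem_pool(fmap, p=1.0)      # p = 1 equals plain average pooling
gem = gem_pool(fmap, p=3.0)      # larger p moves toward max pooling
print(avg, gem)
```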
- Step 703 Obtain the reconstructed features of the target feature through the feature reconstruction branch, and determine the second loss based on the reconstructed features and the target feature.
- the above execution subject can obtain the reconstructed features of the target feature through the feature reconstruction branch, and determine the second loss based on the reconstructed features and the target feature.
- the target features are obtained by the global branch in the process of extracting global features.
- the target feature can be the feature corresponding to the penultimate layer in the process of obtaining the global feature by the ResNet50 network.
- the loss between the reconstructed features and the original features (target features) is determined.
- the second loss may be a mean squared error loss.
- the above execution subject may perform the above step 703 in the following manner:
- the target features are down-sampled to obtain the down-sampled features; then, the down-sampled features are up-sampled to obtain the reconstructed features.
- feature reconstruction is based on first downsampling and then upsampling, and based on the guidance of the second loss, the local point features in the reconstructed features can accurately express the key information of the original features.
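- the downsample-then-upsample reconstruction with a mean-squared-error second loss can be sketched in numpy. Here 2x average-pool downsampling and nearest-neighbour upsampling stand in for the fully convolutional layers of the actual branch; the feature map shape is a toy assumption.

```python
import numpy as np

def downsample2x(fmap):
    """2x2 average pooling over a (channels, H, W) feature map."""
    c, h, w = fmap.shape
    return fmap.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2x(fmap):
    """Nearest-neighbour upsampling back to the original resolution."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def reconstruction_loss(target):
    """Second loss: mean squared error between reconstruction and target."""
    recon = upsample2x(downsample2x(target))
    return np.mean((recon - target) ** 2), recon

target = np.arange(16.0).reshape(1, 4, 4)
loss, recon = reconstruction_loss(target)
print(loss, recon.shape)
```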
- Step 704 Determine the attention weight of the target feature through the attention branch, obtain the local point feature based on the attention weight and the reconstruction feature, and determine the third loss based on the local point feature and the classification data corresponding to the input sample image.
- the above-mentioned execution subject can determine the attention weight of the target feature through the attention branch, obtain the local point feature based on the attention weight and the reconstructed feature, and determine the third loss based on the local point feature and the classification data corresponding to the input sample image.
- the attention mechanism is used to determine the important positions in the target feature to obtain the weight corresponding to each feature point in the target feature. Finally, the weights and the reconstructed features from the feature reconstruction branch are combined to obtain the final local feature points.
- the training process is guided based on the third loss.
- the third loss may be a cross-entropy loss.
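- the attention weighting described above can be sketched in numpy: a softmax over the spatial positions of the target feature selects the important points, and the weights are applied to the reconstructed feature map. The shapes and the softmax scoring are illustrative assumptions about how the weights and reconstructed features are combined.

```python
import numpy as np

def spatial_softmax(scores):
    """Softmax over all spatial positions of an (H, W) score map."""
    flat = scores.reshape(-1)
    flat = np.exp(flat - flat.max())
    return (flat / flat.sum()).reshape(scores.shape)

def attend_local_points(attention_scores, reconstructed):
    """Weight each spatial position of the reconstructed (C, H, W) features."""
    weights = spatial_softmax(attention_scores)   # (H, W), sums to 1
    return reconstructed * weights[None, :, :]

scores = np.array([[0.1, 2.0], [0.3, 0.5]])
reconstructed = np.ones((3, 2, 2))
local = attend_local_points(scores, reconstructed)
print(float(local.sum()))  # 3 channels, each weighted map sums to 1
```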
- Step 705 Update the local verification model based on the first loss, the second loss and the third loss to obtain the trained local verification model.
- the execution subject may update the local verification model based on the first loss, the second loss and the third loss to obtain the trained local verification model.
- the corresponding first loss, second loss and third loss are obtained to update the local verification model.
- the preset end condition may be, for example, that the training time exceeds a preset time threshold, that the number of training iterations exceeds a preset count threshold, or that the training loss converges.
- the trained local verification model can be applied to the above-mentioned embodiments 200 and 400.
- a method for training a local verification model is provided, so that the local verification features obtained by the local verification model can better represent the local key information of the image, thereby improving the accuracy of the obtained local verification model.
- step 705 is performed as follows: updating the global branch according to the first loss, updating the feature reconstruction branch according to the second loss, and updating the attention branch according to the third loss, to obtain the trained local verification model.
- each branch is updated using its corresponding loss, so that the parameters of the model can be updated in a targeted manner, which improves the training efficiency of the model and the accuracy of the final local verification model.
- the above-mentioned execution subject can also use a warm-up strategy to adjust the learning rate of the local verification model, and then reduce the learning rate in a stepwise manner after training for a period of time.
- gradient clipping can also be performed on the reconstruction branch.
- the present disclosure provides an embodiment of an image retrieval device.
- the device embodiment corresponds to the method embodiment shown in Figure 2.
- the device can be specifically applied in various electronic devices.
- the image retrieval device includes: a recall unit 901, configured to obtain, through a pre-trained global recall model, global recall features that take into account the semantic information and visual information of the image to be retrieved; a verification unit 902, configured to obtain, through a pre-trained local verification model, local verification features of the image to be retrieved for local feature point matching; and a determination unit 903, configured to determine, from the general image library based on the global recall features and local verification features, similar images and/or identical images of the image to be retrieved.
- the determining unit 903 is further configured to: determine multiple recall images from the general image library according to the global recall features, and determine first matching information between the image to be retrieved and each recall image in the multiple recall images; determine second matching information of feature points between the image to be retrieved and each recall image in the multiple recall images according to the local verification features; and determine similar images and/or identical images of the image to be retrieved from the multiple recall images according to the first matching information and the second matching information.
- the determination unit 903 is further configured to: combine the first matching information and the second matching information corresponding to each recall image in the multiple recall images to obtain the ranking score corresponding to each recall image; sort the multiple recall images according to the ranking scores; and determine the sorted multiple recall images as similar images of the image to be retrieved.
- the determination unit 903 is further configured to: determine, according to the first matching information, the recall images in different matching threshold spaces among the multiple recall images; and for each matching threshold space, determine the same image as the image to be retrieved based on the second matching information of the recall images in the interval.
- the determining unit 903 is further configured to: determine multiple recall images from the general image library based on the global recall features of the image to be retrieved and the global recall features of the images in the general image library, and determine first matching information between the image to be retrieved and each recall image in the multiple recall images, where the global recall features of the images in the general image library are determined through the global recall model.
- the determining unit 903 is further configured to: determine second matching information of feature points between the image to be retrieved and each recall image in the multiple recall images based on the local verification features of the image to be retrieved and the local verification features of the multiple recall images, where the local verification features of the images in the general image library are determined by the local verification model.
- the above device further includes: an image library unit (not shown in the figure) configured to merge the image library for similar image retrieval and the image library for same image retrieval, and deduplicate the images in the merged image library to obtain the general image library.
- this embodiment provides an image retrieval device. Based on the global recall features of the image to be retrieved obtained by the global recall model and the local verification features of the image to be retrieved obtained by the local verification model, it provides common retrieval logic for determining similar images and identical images of the image to be retrieved from a general image library, which improves the convenience and efficiency of image retrieval.
- the present disclosure provides an embodiment of a training device for a global recall model.
- the device embodiment corresponds to the method embodiment shown in Figure 5.
- the device can be applied in various electronic devices.
- the training device of the global recall model includes: a first acquisition unit 1001 configured to acquire a first training sample set, where the training samples in the first training sample set include image pairs and classification data of image pairs;
- the first training unit 1002 is configured to: use the machine learning method to obtain the global recall features of the images in the image pair through the global recall model, determine the metric loss between the global recall features corresponding to the input image pair and the classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to the image pair, and update the global recall model through the metric loss and classification loss to obtain the trained global recall model.
- the first training unit 1002 is further configured to: determine the total loss based on the metric loss and the classification loss; and update the global recall model based on the total loss to obtain the trained global recall model.
- the first training unit 1002 is further configured to: based on batch normalization processing, make the metric loss and the classification loss be in the same distribution space; and combine the metric loss and the classification loss in the same distribution space to obtain the total loss.
- this embodiment also includes: a weight update unit (not shown in the figure), configured to keep the weight of the classification loss unchanged and use a warm-up strategy to adjust the weight of the metric loss during the update process of the global recall model.
- the above device further includes: a first sample unit (not shown in the figure) configured to cluster images in the preset image library through a semi-supervised clustering algorithm, and obtain the first training sample set based on the clustering results.
- a training device for the global recall model is provided.
- Image pairs of the same category are used to train the metric loss function so that the global recall model can have stronger visual discrimination, and large-scale classification data is used to train the classification loss function so that the global recall model can have stronger semantic discrimination, which improves the visual and semantic discrimination of the trained global recall model and improves the accuracy of global recall features obtained based on the global recall model.
- the present disclosure provides an embodiment of a training device for a local verification model.
- the device embodiment corresponds to the method embodiment shown in Figure 7,
- the device can be applied in various electronic devices.
- a training device for a local verification model is provided, where the local verification model includes a global branch, a feature reconstruction branch and an attention branch.
- the device includes: a second acquisition unit 1101 configured to acquire a second training sample set, wherein the training samples in the second training sample set include sample images and classification data of the sample images;
- the first loss unit 1102 is configured to obtain the global features of the sample images through the global branch, and determine the first loss based on the global features and the classification data corresponding to the input sample images;
- the second loss unit 1103 is configured to obtain the reconstructed features of the target feature through the feature reconstruction branch, and determine the second loss based on the reconstructed features and the target feature, where the target feature is obtained by the global branch in the process of extracting the global features;
- the third loss unit 1104 is configured to determine the attention weight of the target feature through the attention branch, obtain the local point feature based on the attention weight and the reconstructed feature, and determine the third loss based on the local point feature and the classification data corresponding to the input sample image;
- the second training unit 1105 is further configured to: update the global branch according to the first loss, update the feature reconstruction branch according to the second loss, and update the attention branch according to the third loss, to obtain the trained local verification model.
- the second loss unit 1103 is further configured to: based on the fully convolutional network adopted by the feature reconstruction branch, downsample the target features to obtain downsampled features; and upsample the downsampled features to obtain the reconstructed features.
- the above device further includes: a second sample unit (not shown in the figure) configured to cluster images in the preset image library based on a semi-supervised clustering algorithm, and obtain the second training sample set based on the clustering results.
- a second sample unit (not shown in the figure) configured to cluster images in the preset image library based on a semi-supervised clustering algorithm. , and based on the clustering results of the image, the second training sample set is obtained.
- a training device for a local verification model is provided, so that the local verification features obtained by the local verification model can better represent the local key information of the image, thereby improving the accuracy of the obtained local verification model.
- the present disclosure also provides an electronic device, which includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor,
- and the instructions are executed by the at least one processor, so that the at least one processor can implement the image retrieval method, the training method of the global recall model, or the training method of the local verification model described in any of the above embodiments.
- the present disclosure also provides a readable storage medium that stores computer instructions.
- the computer instructions are used to enable a computer, when executing them, to implement the image retrieval method, the training method of the global recall model, or the training method of the local verification model described in any of the above embodiments.
- Embodiments of the present disclosure provide a computer program product that, when executed by a processor, can implement the image retrieval method, the training method of the global recall model, and the training method of the local verification model described in any of the above embodiments.
- FIG. 11 illustrates a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present disclosure.
- Electronic devices are intended to refer to various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit implementations of the disclosure described and/or claimed herein.
- the device 1100 includes a computing unit 1101 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1102 or loaded from a storage unit 1108 into a random access memory (RAM) 1103.
- in the RAM 1103, various programs and data required for the operation of the device 1100 can also be stored.
- Computing unit 1101, ROM 1102 and RAM 1103 are connected to each other via bus 1104.
- An input/output (I/O) interface 1105 is also connected to bus 1104.
- Multiple components in the device 1100 are connected to the I/O interface 1105, including: an input unit 1106, such as a keyboard, mouse, etc.; an output unit 1107, such as various types of displays, speakers, etc.; a storage unit 1108, such as a magnetic disk, optical disk, etc.; and a communication unit 1109, such as a network card, modem, wireless communication transceiver, etc.
- the communication unit 1109 allows the device 1100 to exchange information/data with other devices through computer networks such as the Internet and/or various telecommunications networks.
- Computing unit 1101 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1101 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any appropriate processor, controller, microcontroller, etc.
- the computing unit 1101 performs various methods and processes described above, such as image retrieval methods.
- the image retrieval method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 1108.
- part or all of the computer program may be loaded and/or installed onto device 1100 via ROM 1102 and/or communication unit 1109.
- When the computer program is loaded into RAM 1103 and executed by computing unit 1101, one or more steps of the image retrieval method described above may be performed.
- the computing unit 1101 may be configured to perform the image retrieval method in any other suitable manner (eg, by means of firmware).
- Various implementations of the systems and techniques described above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
- These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor,
- which may be a special purpose or general purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing device, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
- the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- more specific examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
- the systems and techniques described herein may be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
- Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, voice input, or tactile input.
- the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user's computer having a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components.
- the components of the system may be interconnected by any form or medium of digital data communication (eg, a communications network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
- Computer systems may include clients and servers.
- Clients and servers are generally remote from each other and typically interact over a communications network.
- the relationship of client and server is created by computer programs running on corresponding computers and having a client-server relationship with each other.
- the server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that addresses the defects of difficult management and weak business scalability in traditional physical host and virtual private server (VPS) services; it can also be a server of a distributed system, or a server combined with a blockchain.
- an image retrieval method is provided that, based on the global recall features of the image to be retrieved obtained by the global recall model and the local verification features of the image to be retrieved obtained by the local verification model, provides universal retrieval logic for determining similar images and identical images of the image to be retrieved in a universal image library, improving the convenience and efficiency of image retrieval.
Abstract
The present disclosure provides an image retrieval method, apparatus, electronic device, storage medium, and computer program product, relating to the field of artificial intelligence technology, specifically to deep learning technology, and applicable to image retrieval scenarios. The specific implementation is: obtaining, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of the image to be retrieved; obtaining, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching; and determining similar images and/or identical images of the image to be retrieved from a universal image library according to the global recall feature and the local verification feature. The present disclosure provides universal retrieval logic for determining similar images and identical images of an image to be retrieved from a universal image library, improving the convenience and efficiency of image retrieval.
Description
This patent application claims priority to Chinese patent application No. 202210493497.X, filed on April 27, 2022, entitled "Image Retrieval Method, Apparatus, and Computer Program Product", the entire content of which is incorporated into this application by reference.
The present disclosure relates to the field of artificial intelligence, specifically to deep learning technology, and in particular to an image retrieval method and apparatus, a training method and apparatus for a global recall model, a training method and apparatus for a local verification model, an electronic device, a storage medium, and a computer program product, applicable to image retrieval scenarios.
With the popularization of the mobile Internet, photo-based image recognition is widely used in people's daily lives. There are already many photo-recognition products, but these products are all photo-recognition applications targeting a fixed broad category (for example, commodities, plants, animals, etc.). In these applications, whether for identical-image retrieval or similar-image retrieval, the retrieval logic and related databases are set up independently and cannot be shared.
Summary
The present disclosure provides an image retrieval method and apparatus, a training method and apparatus for a global recall model, a training method and apparatus for a local verification model, an electronic device, a storage medium, and a computer program product.
According to a first aspect, an image retrieval method is provided, including: obtaining, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of an image to be retrieved; obtaining, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching; and determining similar images and/or identical images of the image to be retrieved from a universal image library according to the global recall feature and the local verification feature.
According to a second aspect, a training method for a global recall model is provided, including: acquiring a first training sample set, where the training samples in the first training sample set include image pairs and classification data of the image pairs; using a machine learning method, obtaining global recall features of the images in an image pair through the global recall model, and determining a metric loss between the global recall features corresponding to the input image pair, as well as a classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
According to a third aspect, a training method for a local verification model is provided, where the local verification model includes a global branch, a feature reconstruction branch, and an attention branch, and the method includes: acquiring a second training sample set, where the training samples in the second training sample set include sample images and classification data of the sample images; obtaining global features of a sample image through the global branch, and determining a first loss based on the global features and the classification data corresponding to the input sample image; obtaining reconstructed features of a target feature through the feature reconstruction branch, and determining a second loss based on the reconstructed features and the target feature, where the target feature is obtained by the global branch in the process of extracting the global features; determining, through the attention branch, attention weights of the target feature, obtaining local point features according to the attention weights and the reconstructed features, and determining a third loss based on the local point features and the classification data corresponding to the input sample image; and updating the local verification model based on the first loss, the second loss, and the third loss to obtain a trained local verification model.
According to a fourth aspect, an image retrieval apparatus is provided, including: a recall unit configured to obtain, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of an image to be retrieved; a verification unit configured to obtain, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching; and a determination unit configured to determine similar images and/or identical images of the image to be retrieved from a universal image library according to the global recall feature and the local verification feature.
According to a fifth aspect, a training apparatus for a global recall model is provided, including: a first acquisition unit configured to acquire a first training sample set, where the training samples in the first training sample set include image pairs and classification data of the image pairs; and a first training unit configured to: use a machine learning method to obtain global recall features of the images in an image pair through the global recall model, and determine a metric loss between the global recall features corresponding to the input image pair, as well as a classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
According to a sixth aspect, a training apparatus for a local verification model is provided, where the local verification model includes a global branch, a feature reconstruction branch, and an attention branch, and the apparatus includes: a second acquisition unit configured to acquire a second training sample set, where the training samples in the second training sample set include sample images and classification data of the sample images; a first loss unit configured to obtain global features of a sample image through the global branch and determine a first loss based on the global features and the classification data corresponding to the input sample image; a second loss unit configured to obtain reconstructed features of a target feature through the feature reconstruction branch and determine a second loss based on the reconstructed features and the target feature, where the target feature is obtained by the global branch in the process of extracting the global features; a third loss unit configured to determine, through the attention branch, attention weights of the target feature, obtain local point features according to the attention weights and the reconstructed features, and determine a third loss based on the local point features and the classification data corresponding to the input sample image; and a second training unit configured to update the local verification model based on the first loss, the second loss, and the third loss to obtain a trained local verification model.
According to a seventh aspect, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the method described in any implementation of the first, second, or third aspect.
According to an eighth aspect, a non-transitory computer-readable storage medium storing computer instructions is provided, where the computer instructions are used to cause a computer to perform the method described in any implementation of the first, second, or third aspect.
According to a ninth aspect, a computer program product is provided, including a computer program that, when executed by a processor, implements the method described in any implementation of the first, second, or third aspect.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand through the following description.
The accompanying drawings are used for a better understanding of the present solution and do not constitute a limitation of the present disclosure. In the drawings:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
FIG. 2 is a flowchart of an embodiment of an image retrieval method according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of the image retrieval method according to this embodiment;
FIG. 4 is a flowchart of another embodiment of the image retrieval method according to the present disclosure;
FIG. 5 is a flowchart of an embodiment of a training method for a global recall model according to the present disclosure;
FIG. 6 is a schematic structural diagram of the global recall model according to the present disclosure;
FIG. 7 is a flowchart of an embodiment of a training method for a local verification model according to the present disclosure;
FIG. 8 is a schematic structural diagram of the local verification model according to the present disclosure;
FIG. 9 is a structural diagram of an embodiment of an image retrieval apparatus according to the present disclosure;
FIG. 10 is a structural diagram of an embodiment of a training apparatus for a global recall model according to the present disclosure;
FIG. 11 is a structural diagram of an embodiment of a training apparatus for a local verification model according to the present disclosure;
FIG. 12 is a schematic structural diagram of a computer system suitable for implementing embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding, and they should be considered as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved all comply with relevant laws and regulations and do not violate public order and good morals.
FIG. 1 shows an exemplary architecture 100 to which the image retrieval method and apparatus and the training method and apparatus for the global recall model of the present disclosure may be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The communication connections among the terminal devices 101, 102, 103 constitute a topological network, and the network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
The terminal devices 101, 102, 103 may be hardware devices or software that support network connection for data interaction and data processing. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices that support network connection and functions such as information acquisition, interaction, display, and processing, including but not limited to image capture devices, smartphones, tablet computers, e-book readers, laptop computers, and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example, a back-end processing server that, for an image to be retrieved provided by the terminal devices 101, 102, 103, determines similar images and/or identical images of the image to be retrieved in a universal image library based on the global recall feature of the image to be retrieved obtained by a global recall model and the local verification feature of the image to be retrieved obtained by a local verification model. Optionally, the server may also train the global recall model and the local verification model that implement the above image retrieval task. As an example, the server 105 may be a cloud server.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should also be noted that the image retrieval method, the training method for the global recall model, and the training method for the local verification model provided by the embodiments of the present disclosure may be executed by the server, by the terminal device, or by the server and the terminal device in cooperation with each other. Accordingly, the parts (for example, the units) included in the image retrieval apparatus, the training apparatus for the global recall model, and the training apparatus for the local verification model may all be provided in the server, all be provided in the terminal device, or be provided in the server and the terminal device respectively.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs. When the electronic device on which the image retrieval method, the training method for the global recall model, or the training method for the local verification model runs does not need to perform data transmission with other electronic devices, the system architecture may include only the electronic device (for example, the server or the terminal device) on which the image retrieval method, the training method for the global recall model, or the training method for the local verification model runs.
According to the technology of the present disclosure, an image retrieval method is provided that, based on the global recall feature of the image to be retrieved obtained by the global recall model and the local verification feature of the image to be retrieved obtained by the local verification model, provides universal retrieval logic for determining similar images and identical images of the image to be retrieved from a universal image library, improving the convenience and efficiency of image retrieval.
Referring to FIG. 2, FIG. 2 is a flowchart of an image retrieval method provided by an embodiment of the present disclosure, in which the process 200 includes the following steps:
Step 201: obtain, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of the image to be retrieved.
In this embodiment, the execution subject of the image retrieval method (for example, the terminal device or server in FIG. 1) may acquire the image to be retrieved remotely or locally based on a wired or wireless network connection, and obtain, through the pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of the image to be retrieved. The global recall model is used to characterize the correspondence between the image to be retrieved and the global recall feature. The global recall feature characterizes the overall information of the image to be retrieved.
As an example, the above execution subject may, based on an image retrieval request issued by a user, determine the image information carried in the image retrieval request, and determine the image to be retrieved from the image information. The image to be retrieved may be an image including any content.
The global recall model may be any neural network model with a global feature extraction function. As an example, the global recall model may adopt a network model such as a convolutional neural network or a recurrent neural network.
To unify the retrieval logic for identical images and similar images, the recall capability of the global recall feature obtained after feature extraction of the image to be retrieved needs to be improved. Strong recall capability has two aspects: one is that similar images can be recalled based on the global recall feature, and the other is that images with a consistent subject (when the subjects in two images are consistent, they can be considered identical images) can be recalled based on the global recall feature, which requires the recall feature to take into account both semantic information and visual information.
To make the global recall feature obtained by the global recall model take into account both the visual information and the semantic information of the image to be retrieved, drawing on metric learning algorithms and classification algorithms, image pairs of the same category are used to train a metric loss function so that the global recall model can have stronger visual discrimination, while large-scale classification data is used to train a classification loss function so that the global recall model can have stronger semantic discrimination.
Step 202: obtain, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching.
In this embodiment, the above execution subject may obtain, through the pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching. The local verification model is used to characterize the correspondence between the image to be retrieved and the local verification feature.
The local verification feature mainly includes local feature points of the image to be retrieved. The local verification model may be any neural network model with a local feature extraction function. As an example, the local verification model may adopt a network model such as a convolutional neural network or a recurrent neural network.
As an example, the local verification features obtained through the local verification model may be key point features of the subject objects included in the image to be retrieved, such as contour features of the subject objects and features of key internal parts.
Step 203: determine similar images and/or identical images of the image to be retrieved from a universal image library according to the global recall feature and the local verification feature.
In this embodiment, the above execution subject may determine similar images and/or identical images of the image to be retrieved from the universal image library according to the global recall feature and the local verification feature. The universal image library characterizes an image library that can be used universally for retrieving both similar images and identical images.
A similar image characterizes an image that has a certain similarity with the image to be retrieved; the similarity may be, for example, the similarity between images regarding background information or the similarity between images regarding the subject object. An identical image characterizes an image whose background information and subject object are consistent with those of the image to be retrieved.
As an example, the above execution subject may determine a preset number of images from the universal image library by combining the global recall feature and the local verification feature, sort the preset number of images in descending order of similarity and consistency determined from the global recall feature and the local verification feature, determine the top-ranked images as identical images of the image to be retrieved, and determine the remaining lower-ranked images among the preset number of images as similar images of the image to be retrieved. The above execution subject may determine identical images or similar images separately with reference to this example, or determine identical images and similar images at the same time.
As another example, the above execution subject may determine a preset number of images from the universal image library based on the global recall feature, where the preset number of images are determined based on the similarity and consistency between the global recall features of the images; and determine a preset number of images from the universal image library based on the local verification feature, where the ranking of the preset number of images is determined based on the similarity and consistency between the local verification features of the images.
Further, for the top-ranked images among the preset number of images determined based on the global recall feature and the top-ranked images among the preset number of images determined based on the local verification feature, the images common to both are determined as identical images of the image to be retrieved.
For the lower-ranked images among the preset number of images determined based on the global recall feature and the lower-ranked images among the preset number of images determined based on the local verification feature, the images common to both are determined as similar images of the image to be retrieved. The above execution subject may determine identical images or similar images separately with reference to this example, or determine identical images and similar images at the same time.
Continuing to refer to FIG. 3, FIG. 3 is a schematic diagram 300 of an application scenario of the image retrieval method according to this embodiment. In the application scenario of FIG. 3, a terminal device 301 sends an image retrieval request to a server 302, where the image retrieval request carries information about an image to be retrieved 303. After the server 302 determines the image to be retrieved 303 according to the image retrieval request, it first obtains, through a pre-trained global recall model 304, a global recall feature 305 that takes into account both the semantic information and the visual information of the image to be retrieved 303, and obtains, through a pre-trained local verification model 306, a local verification feature 307 of the image to be retrieved 303 that is used for local feature point matching. Then, the server 302 determines similar images and/or identical images of the image to be retrieved 303 from a universal image library 308 according to the global recall feature 305 and the local verification feature 307.
In this embodiment, an image retrieval method is provided that, based on the global recall feature of the image to be retrieved obtained by the global recall model and the local verification feature of the image to be retrieved obtained by the local verification model, provides universal retrieval logic for determining similar images and identical images of the image to be retrieved from a universal image library, improving the convenience and efficiency of image retrieval.
In some optional implementations of this embodiment, the above execution subject may perform the above step 203 as follows:
First, determine multiple recalled images from the universal image library according to the global recall feature, and determine first matching information between the image to be retrieved and each of the multiple recalled images.
The global recall feature characterizes the global feature information of the image to be retrieved; through the global recall feature, multiple images that are similar to the image to be retrieved on the whole are determined as recalled images. In this embodiment, the number of returned recalled images can be flexibly set according to the actual situation; as an example, the number of recalled images is 400.
As an example, the above execution subject determines first matching information between each image in the universal image library and the image to be retrieved based on the global recall feature, and takes the preset number of images whose matching degree characterized by the matching information ranks highest as the multiple recalled images.
Second, determine second matching information of feature points between the image to be retrieved and each of the multiple recalled images according to the local verification feature.
For the multiple recalled images determined based on the global recall feature, matching information about local feature points, namely the second matching information, can be determined between the recalled images and the image to be retrieved through the local verification features that characterize key local feature points.
Third, determine similar images and/or identical images of the image to be retrieved from the multiple recalled images according to the first matching information and the second matching information.
As an example, the above execution subject may sort in descending order based on the first matching information and the second matching information respectively, then determine the recalled images that rank highest under both kinds of matching information as identical images of the image to be retrieved, and determine the images other than the identical images among the multiple recalled images as similar images of the image to be retrieved.
In this implementation, a specific way of determining results for the image to be retrieved according to the global recall feature and the local verification feature is provided: first matching information between the images in the universal image library and the image to be retrieved is determined based on the global recall feature to obtain recalled images; then second matching information between the recalled images and the image to be retrieved is determined based on the local verification feature, so as to determine identical images and/or similar images of the image to be retrieved according to the first matching information and the second matching information, improving the efficiency and accuracy of image retrieval.
In some optional implementations of this embodiment, for similar images, the above execution subject may also perform the above third step as follows:
First, combine the first matching information and the second matching information corresponding to each of the multiple recalled images to obtain a ranking score corresponding to each recalled image.
Then, sort the multiple recalled images according to the ranking scores, and determine the sorted multiple recalled images as similar images of the image to be retrieved.
As an example, fusion weights of the first matching information and the second matching information may be preset, so that the first matching information and the second matching information are fused according to the fusion weights to obtain the ranking score. The multiple recalled images are sorted in descending order of ranking score, so that when similar images are presented to the user, images with high similarity are presented first.
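The weighted fusion described above can be sketched as follows. The equal 0.5/0.5 fusion weights and the example scores are hypothetical placeholders, not values stated in the disclosure:

```python
def ranking_score(first_match: float, second_match: float,
                  w_global: float = 0.5, w_local: float = 0.5) -> float:
    """Fuse the first (global) and second (local) matching information
    into a single ranking score using preset fusion weights."""
    return w_global * first_match + w_local * second_match

# Hypothetical recalled images with (first_match, second_match) scores.
recalled = {"img_a": (0.92, 0.40), "img_b": (0.80, 0.90)}

# Sort in descending order of ranking score, so the most similar image
# is presented to the user first.
ranked = sorted(recalled, key=lambda k: ranking_score(*recalled[k]), reverse=True)
```

Here `img_b` ranks first because its strong local-feature-point match outweighs its slightly weaker global match once the two scores are fused.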
In this implementation, a way of determining similar images of the image to be retrieved is provided, improving the accuracy of the determined similar images.
In some optional implementations of this embodiment, for identical images, the above execution subject may perform the above third step as follows:
First, determine, according to the first matching information, the recalled images among the multiple recalled images that fall into different matching threshold spaces.
The matching threshold spaces characterize different preset matching threshold ranges.
As an example, when determining the multiple recalled images, if 400 images whose recall scores characterized by the first matching information are greater than or equal to 0.85 are determined as the recalled images of the image to be detected, the matching threshold spaces of the determined multiple recalled images may be set to 0.85-0.90, 0.90-0.95, and 0.95-1.0.
It can be understood that recalled images in a matching threshold space with higher values have a higher degree of matching with the image to be retrieved.
Then, for each different matching threshold space, determine identical images of the image to be retrieved according to the second matching information of the recalled images in that interval.
As an example, the local verification feature characterizes key local feature points in the image to be retrieved, and the second matching information may characterize the number of matched feature points between the image to be retrieved and a recalled image.
In this implementation, a corresponding matched-point-count threshold may be set for each matching threshold space. For the recalled images in each matching threshold space, when the number of matched feature points between a recalled image and the image to be retrieved is not less than the matched-point-count threshold corresponding to that matching threshold space, the recalled image is considered to be an identical image of the image to be retrieved.
The matching degree regarding the global recall feature characterized by the matching threshold space is negatively correlated with the matching degree regarding the local verification feature characterized by the matched-point-count threshold.
For example, for the matching threshold space 0.95-1.0, a smaller matched-point-count threshold may be set. Since the recalled images in the matching threshold space 0.95-1.0 have a very high degree of matching with the image to be retrieved regarding the global recall feature, a recalled image in this interval can be considered an identical image of the image to be retrieved even when there are relatively few matched feature points between it and the image to be retrieved. For the matching threshold space 0.90-0.95, the matched-point-count threshold is set larger than that corresponding to the matching threshold space 0.95-1.0, so that when the matching degree regarding the global recall feature between the recalled images in this interval and the image to be retrieved is not very high, a larger number of matched points ensures the accuracy of the determined identical images.
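The identical-image decision logic can be sketched as below. The three bins come from the 0.85-1.0 example above, but the matched-point-count thresholds (4, 8, 16) are hypothetical values chosen only to illustrate the negative correlation the disclosure describes:

```python
# Illustrative matching threshold spaces and matched-point-count thresholds.
BIN_THRESHOLDS = [
    ((0.95, 1.00), 4),   # very high global match: few local points suffice
    ((0.90, 0.95), 8),
    ((0.85, 0.90), 16),  # weaker global match: demand more local evidence
]

def is_identical(global_score: float, matched_points: int) -> bool:
    """Decide whether a recalled image counts as an identical image."""
    for (lo, hi), min_points in BIN_THRESHOLDS:
        if lo <= global_score <= hi:
            return matched_points >= min_points
    return False  # below the recall cutoff: never an identical image
```

The higher the global recall score bin, the fewer matched local feature points are required, exactly mirroring the negative correlation between the two matching degrees.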
In this implementation, a way of determining identical images of the image to be retrieved is provided, improving the accuracy of the determined identical images.
In some optional implementations of this embodiment, the above execution subject may perform the above first step as follows:
Determine multiple recalled images from the universal image library according to the global recall feature of the image to be retrieved and the global recall features of the images in the universal image library, and determine first matching information between the image to be retrieved and each of the multiple recalled images, where the global recall features of the images in the universal image library are determined through the global recall model.
In this implementation, for each image in the universal image library, the global recall feature of the image can be obtained through the pre-trained global recall model to build a global recall feature library corresponding to the universal image library.
For the features in the global recall feature library, the above execution subject may also perform the following post-processing operation: use the LW (Learned Whiten) algorithm to post-process each global recall feature of a preset dimension (for example, 128 dimensions) extracted by the global recall model.
The specific flow of the LW algorithm is as follows: first, a certain number (for example, 30,000-40,000) of image pairs are randomly sampled from a preset database and the 128-dimensional features of the images in the image pairs are extracted; the image pairs may be similar images. Then, the LW algorithm is trained using the feature information of the obtained image pairs to obtain a mapping matrix. There is a certain difference between the mapped feature obtained after the feature information of one image in an image pair undergoes mapping and the feature information of the other image; the trained mapping matrix aims to minimize the total difference over all image pairs. Finally, the mapping matrix is used to post-process all the global recall features in the global recall feature library.
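The disclosure does not spell out the exact objective used to train the LW mapping matrix. A minimal sketch in the same spirit, under the assumption that the map is a whitening transform built from the covariance of within-pair feature differences, could look like:

```python
import numpy as np

def learn_whitening(pairs, eps=1e-6):
    """Learn a linear map that shrinks the differences within matching pairs:
    whiten with the inverse square root of the pair-difference covariance.
    `pairs` is a list of (feature_a, feature_b) arrays of equal dimension."""
    diffs = np.stack([a - b for a, b in pairs])      # (n_pairs, d)
    cov = diffs.T @ diffs / len(diffs)               # covariance of pair differences
    vals, vecs = np.linalg.eigh(cov)                 # symmetric eigendecomposition
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

def apply_whitening(w, feats):
    """Post-process features with the learned map and L2-normalize."""
    out = feats @ w.T
    return out / (np.linalg.norm(out, axis=-1, keepdims=True) + 1e-12)
```

Directions along which matching pairs tend to differ are downweighted, so whitened features of similar images end up closer together.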
In this implementation, the global recall features of the images in the universal library are determined in advance using the global recall model, so that matching between the global recall features of the images in the universal library and the global recall feature of the image to be retrieved can be performed, improving the efficiency of determining recalled images.
In some optional implementations of this embodiment, the above execution subject may perform the above second step as follows:
Determine second matching information of feature points between the image to be retrieved and each of the multiple recalled images according to the local verification feature of the image to be retrieved and the local verification features of the multiple recalled images, where the local verification features of the images in the universal image library are determined through the local verification model.
In this implementation, for each image in the universal image library, the local verification feature of the image can be obtained through the pre-trained local verification model to build a local verification feature library corresponding to the universal image library.
For the features in the local verification feature library, the above execution subject may also perform the following post-processing operation: use the LW (Learned Whiten) algorithm to post-process each local verification feature of a preset dimension (for example, 128 dimensions) extracted by the local verification model.
The specific flow of the LW algorithm is as follows: first, a certain number (for example, 30,000-40,000) of image pairs are randomly sampled from a preset database, the local feature points of the images in the image pairs are extracted, and a matching algorithm is used to determine the matched feature point pairs among them; then, the LW algorithm is trained using the feature information of the obtained feature point pairs to obtain a mapping matrix. There is a certain difference between the mapped feature point obtained after the feature information of one feature point in a feature point pair undergoes mapping and the other feature point; the trained mapping matrix aims to minimize the total difference over all feature point pairs. Finally, the mapping matrix is used to post-process all the local verification features in the local verification feature library.
Since the scale of the image data in the universal image library is enormous, the scale of the feature points in the extracted local verification features is even larger. To reduce the data volume of the local verification feature library, the local verification features in the local verification feature library can be stored in quantized form, converting features of float data type into features of int data type.
The PQ (Product Quantization) flow is as follows: multiple images are randomly selected from the preset image library and the feature points of the images are extracted; then, a symmetric distance algorithm is used to compute the PQ quantization codebook; finally, the quantization codebook is used to convert all features of the original float data type into features of int data type.
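The float-to-int conversion step of PQ can be sketched as follows. In practice the codebooks would be trained (for example with k-means over sampled feature points); here, as a stand-in, toy codebooks are built directly from a few sample rows, and the sub-vector count and codebook size are hypothetical:

```python
import numpy as np

def pq_encode(feats, codebooks):
    """Encode float features as small int codes: each feature is split into
    sub-vectors, and each sub-vector stores only the index of its nearest
    codebook entry, replacing floats with compact integer codes."""
    n_sub = len(codebooks)
    sub_dim = feats.shape[1] // n_sub
    codes = np.empty((feats.shape[0], n_sub), dtype=np.uint8)
    for m, cb in enumerate(codebooks):
        sub = feats[:, m * sub_dim:(m + 1) * sub_dim]
        # squared Euclidean distance from each sub-vector to each codeword
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        codes[:, m] = dists.argmin(axis=1)
    return codes
```

An 8-dimensional float feature split into two sub-vectors thus shrinks to two single-byte codes, which is what makes storing feature points for a very large image library tractable.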
In this implementation, the local verification features of the images in the universal library are determined in advance using the local verification model, so that matching between the local verification features of the images in the universal library and the local verification feature of the image to be retrieved can be performed, improving the efficiency of determining the second matching information based on local verification features.
In some optional implementations of this embodiment, the above execution subject may also perform the following operation:
Merge the image library used for similar-image retrieval and the image library used for identical-image retrieval, and deduplicate the images in the merged image library to obtain the universal image library.
In this implementation, by merging the image library used for similar-image retrieval and the image library used for identical-image retrieval, a rich data foundation is provided for the above unified similar-image retrieval logic and identical-image retrieval logic.
Continuing to refer to FIG. 4, a schematic flow 400 of another embodiment of the image retrieval method according to the present disclosure is shown, including the following steps:
Step 401: obtain, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of the image to be retrieved.
Step 402: obtain, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching.
Step 403: determine multiple recalled images from the universal image library according to the global recall feature, and determine first matching information between the image to be retrieved and each of the multiple recalled images.
Step 404: determine second matching information of feature points between the image to be retrieved and each of the multiple recalled images according to the local verification feature.
Step 405: combine the first matching information and the second matching information corresponding to each of the multiple recalled images to obtain a ranking score corresponding to each recalled image.
Step 406: sort the multiple recalled images according to the ranking scores, and determine the sorted multiple recalled images as similar images of the image to be retrieved.
Step 407: determine, according to the first matching information, the recalled images among the multiple recalled images that fall into different matching threshold spaces.
Step 408: for each different matching threshold space, determine identical images of the image to be retrieved according to the second matching information of the recalled images in that interval.
As can be seen from this embodiment, compared with the embodiment corresponding to FIG. 2, the flow 400 of the image retrieval method in this embodiment specifically describes the process of determining similar images and the process of determining identical images, further improving the accuracy of image retrieval.
Continuing to refer to FIG. 5, a schematic flow 500 of an embodiment of a training method for a global recall model according to the present disclosure is shown, including the following steps:
Step 501: acquire a first training sample set.
The execution subject of the training method for the global recall model (for example, the terminal device or server in FIG. 1) may acquire the first training sample set remotely or locally based on a wired or wireless network connection. The training samples in the first training sample set include image pairs and classification data of the image pairs.
The two images in an image pair have the same category characterized by the classification data. The images may be images including any content.
In some optional implementations of this embodiment, the above execution subject may obtain the first training sample set as follows: cluster the images in a preset image library based on a semi-supervised clustering algorithm, and obtain the first training sample set based on the clustering results between the images.
As an example, the images in the preset image library are clustered based on the semi-supervised clustering algorithm to obtain clustering results. Then, two different images in the same cluster are taken as the image pair in a training sample, and the classification information characterized by that cluster is taken as the classification data in the training sample, thereby determining one training sample, so as to obtain the first training sample set.
In this implementation, a way of automatically obtaining the first training sample set for training the global recall model is provided; the first training sample set can be quickly obtained based on the preset image library using the semi-supervised clustering algorithm, improving the convenience of information acquisition.
Step 502: using a machine learning method, obtain global recall features of the images in an image pair through the global recall model, determine a metric loss between the global recall features corresponding to the input image pair, as well as a classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
In this embodiment, the above execution subject may, using a machine learning method, obtain global recall features of the images in an image pair through the global recall model, determine a metric loss between the global recall features corresponding to the input image pair, as well as a classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, and update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
As shown in FIG. 6, a schematic structural diagram 600 of the global recall model is shown. The global recall model 600 includes a backbone network 601, a fully connected layer 602, and a BNNeck (Batch Normalization Neck) module 603. To unify the retrieval logic for identical images and similar images, the recall capability of the global recall feature obtained after feature extraction of the image to be retrieved needs to be improved. Strong recall capability has two aspects: one is that similar images can be recalled based on the global recall feature, and the other is that images with a consistent subject (when the subjects in two images are consistent, they can be considered identical images) can be recalled based on the global recall feature, which requires the recall feature to take into account both semantic information and visual information.
To make the global recall feature obtained by the global recall model take into account both the visual information and the semantic information of the image to be retrieved, drawing on metric learning algorithms and classification algorithms, image pairs of the same category are used to train a metric loss function so that the global recall model can have stronger visual discrimination, while large-scale classification data is used to train a classification loss function so that the global recall model can have stronger semantic discrimination. The metric loss may be, for example, Lifted Structured loss, and the classification loss may be, for example, cross-entropy loss.
As an example, the above execution subject inputs an untrained image pair into the global recall model, obtains the global recall features of the images in the image pair through the global recall model, determines the metric loss between the global recall features corresponding to the input image pair, as well as the classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, and then updates the global recall model according to the metric loss and the classification loss.
By cyclically performing the above training operation, the trained global recall model is obtained in response to reaching a preset end condition. The preset end condition may be, for example, that the training time exceeds a preset time threshold, the number of training iterations exceeds a preset count threshold, or the training loss converges. The trained global recall model can be applied to the above embodiments 200 and 400.
In this embodiment, image pairs of the same category are used to train the metric loss function so that the global recall model can have stronger visual discrimination, while large-scale classification data is used to train the classification loss function so that the global recall model can have stronger semantic discrimination, improving the visual discrimination and semantic discrimination of the trained global recall model and improving the accuracy of the global recall features obtained based on the global recall model.
In some optional implementations of this embodiment, the above execution subject may update the global recall model through the metric loss and the classification loss to obtain the trained global recall model as follows:
First, determine a total loss according to the metric loss and the classification loss.
As an example, the above execution subject may preset combination weights for the metric loss and the classification loss, so as to combine the metric loss and the classification loss according to the combination weights to obtain the total loss.
Second, update the global recall model according to the total loss to obtain the trained global recall model.
In this implementation, the global recall model is updated based on the fusion of the metric loss and the classification loss, improving the accuracy of the trained global recall model.
In some optional implementations of this embodiment, the above execution subject may perform the above first step as follows:
First, based on batch normalization processing, make the metric loss and the classification loss lie in the same distribution space; then, combine the metric loss and the classification loss in the same distribution space to obtain the total loss.
As an example, the above execution subject may perform batch normalization processing on the classification loss through the BNNeck module so that it lies in the same distribution space as the metric loss. The fusion of the metric loss and the classification loss in the same distribution space improves the accuracy of the obtained total loss.
In some optional implementations of this embodiment, the above execution subject may also perform the following operation: during the update process of the global recall model, keep the weight of the classification loss unchanged and adjust the weight of the metric loss using a warm-up strategy. This implementation further makes the global recall model more visually discriminative while ensuring semantic discrimination.
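The warm-up weighting of the metric loss can be sketched as follows. The disclosure names a warm-up strategy but not its schedule; the linear ramp, the 1000-step horizon, and the target weight of 1.0 below are assumptions for illustration:

```python
def metric_loss_weight(step: int, warmup_steps: int = 1000,
                       target: float = 1.0) -> float:
    """Linearly warm up the metric-loss weight from 0 to its target value."""
    return target * min(1.0, step / warmup_steps)

def total_loss(cls_loss: float, metric_loss: float, step: int,
               cls_weight: float = 1.0) -> float:
    """Combine the losses: the classification-loss weight stays fixed while
    the metric-loss weight follows the warm-up schedule."""
    return cls_weight * cls_loss + metric_loss_weight(step) * metric_loss
```

Early in training the model is driven mainly by the classification loss (semantic discrimination), and the metric loss (visual discrimination) is phased in gradually.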
In this embodiment, during the update process of the global recall model, the above execution subject may also adjust the learning rate of the global recall model using a warm-up strategy in the early stage of training, and then make the learning rate decrease in steps after a period of training, which allows the global recall model to better find the global optimum.
Continuing to refer to FIG. 7, a schematic flow 700 of an embodiment of a training method for a local verification model according to the present disclosure is shown, including the following steps:
Step 701: acquire a second training sample set.
In this embodiment, the execution subject of the training method for the local verification model (for example, the terminal device or server in FIG. 1) may acquire the second training sample set remotely or locally based on a wired or wireless network connection. The training samples in the second training sample set include sample images and classification data of the sample images.
A sample image may be an image including any content.
In some optional implementations of this embodiment, the above execution subject may obtain the second training sample set as follows: cluster the images in a preset image library based on a semi-supervised clustering algorithm, and obtain the second training sample set based on the clustering results of the images.
As an example, the images in the preset image library are clustered based on the semi-supervised clustering algorithm to obtain clustering results. Then, an image in a cluster is taken as the sample image in a training sample, and the classification information characterized by that cluster is taken as the classification data in the training sample, thereby determining one training sample, so as to obtain the second training sample set.
In this implementation, a way of automatically obtaining the second training sample set for training the local verification model is provided; the second training sample set can be quickly obtained based on the preset image library using the semi-supervised clustering algorithm, improving the convenience of information acquisition.
Step 702: obtain global features of the sample image through the global branch, and determine a first loss based on the global features and the classification data corresponding to the input sample image.
In this embodiment, the above execution subject obtains global features of the sample image through the global branch, and determines the first loss based on the global features and the classification data corresponding to the input sample image.
As shown in FIG. 8, a schematic structural diagram 800 of the local verification model is shown. The local verification model 800 includes a global branch 801, a feature reconstruction branch 802, and an attention branch 803. The global branch may be, for example, a network model such as a recurrent convolutional network or a residual network. The feature reconstruction branch may be, for example, a network module implemented based on a fully convolutional network, and the attention branch may be a network module implemented based on an attention network.
The global branch is the same as an ordinary classification network. As an example, the global branch uses a ResNet50 network, its last pooling layer uses GeM pooling (Generalized-mean pooling), and the loss uses ArcFace loss (additive angular margin loss). GeM pooling can be seen as an extension of average pooling and max pooling: when the exponent parameter p=1, GeM pooling degenerates into average pooling; when p tends to infinity, GeM pooling is equivalent to max pooling. This algorithm can strengthen robustness to images of different resolutions and improve the representational capability of the features. Adopting ArcFace loss improves the inter-class separability of the local verification model while strengthening intra-class compactness and inter-class difference, helping improve the model's visual discrimination of features.
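The GeM pooling behavior described above can be sketched directly from its definition; the default p=3 is a commonly used value and not one stated in the disclosure:

```python
import numpy as np

def gem_pool(feature_map, p=3.0, eps=1e-6):
    """Generalized-mean pooling over spatial positions: (C, H, W) -> (C,).
    p=1 gives average pooling; large p approaches max pooling."""
    x = np.clip(feature_map, eps, None)   # GeM assumes non-negative activations
    return (x ** p).mean(axis=(1, 2)) ** (1.0 / p)
```

With p=1 the result equals the spatial mean, and as p grows the pooled value moves toward the spatial maximum, which is exactly the average-to-max interpolation described above.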
Step 703: obtain reconstructed features of the target feature through the feature reconstruction branch, and determine a second loss based on the reconstructed features and the target feature.
In this embodiment, the above execution subject may obtain reconstructed features of the target feature through the feature reconstruction branch, and determine the second loss based on the reconstructed features and the target feature. The target feature is obtained by the global branch in the process of extracting the global features.
Continuing with the example in which the global branch uses a ResNet50 network, the target feature may be the feature corresponding to the penultimate layer in the process of the ResNet50 network obtaining the global features. To enable the local point features learned by the local verification model to accurately express the key information of the original image, feature reconstruction is performed in a fully convolutional manner. During the feature reconstruction process, the loss between the reconstructed feature and the original feature (the target feature) is determined. For example, the second loss may be a mean squared error loss.
In some optional implementations of this embodiment, the above execution subject may perform the above step 703 as follows:
First, based on the fully convolutional network adopted by the feature reconstruction branch, downsample the target feature to obtain a downsampled feature; then, upsample the downsampled feature to obtain the reconstructed feature.
In this implementation, based on feature reconstruction that first downsamples and then upsamples, guided by the second loss, the local point features in the reconstructed features can accurately express the key information of the original features.
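The downsample-then-upsample reconstruction and its MSE second loss can be sketched as follows. In the disclosure both steps are learned convolutions in a fully convolutional branch; here fixed average pooling and nearest-neighbour upsampling stand in as an assumption, purely to show the shape of the computation:

```python
import numpy as np

def downsample(x, factor=2):
    """Average-pool downsampling: (H, W, C) -> (H//factor, W//factor, C)."""
    h, w, c = x.shape
    return x.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def upsample(x, factor=2):
    """Nearest-neighbour upsampling back to the original resolution."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def reconstruction_loss(target):
    """Second loss: mean squared error between reconstruction and target."""
    recon = upsample(downsample(target))
    return ((recon - target) ** 2).mean()
```

A perfectly smooth target reconstructs exactly (zero loss), while a target with fine spatial detail incurs a positive loss; minimizing this loss is what forces the bottleneck to keep the key information of the original feature.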
Step 704: determine, through the attention branch, attention weights of the target feature, obtain local point features according to the attention weights and the reconstructed features, and determine a third loss based on the local point features and the classification data corresponding to the input sample image.
In this embodiment, the above execution subject may determine, through the attention branch, attention weights of the target feature, obtain local point features according to the attention weights and the reconstructed features, and determine the third loss based on the local point features and the classification data corresponding to the input sample image.
The attention mechanism is used to determine the important positions in the target feature, obtaining a weight corresponding to each feature point in the target feature; finally, the weights are computed with the reconstructed features of the feature reconstruction branch to obtain the final local feature points. During training, to give the local feature points semantic information, the training process is guided based on the third loss. As an example, the third loss may be a cross-entropy loss.
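The combination of per-position attention weights with the reconstructed features can be sketched as below. The softmax normalization of the weights is an assumption; the disclosure only states that a weight is obtained per feature point and multiplied into the reconstructed features:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_point_features(attn_logits, recon_feats):
    """Weight the reconstructed features by per-position attention:
    (H*W,) logits and (H*W, C) features -> (H*W, C) local point features."""
    weights = softmax(attn_logits)
    return weights[:, None] * recon_feats
```

Positions the attention branch deems important dominate the resulting local point features, which are then fed to the classification head that produces the third loss.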
Step 705: update the local verification model based on the first loss, the second loss, and the third loss to obtain the trained local verification model.
In this embodiment, the above execution subject may update the local verification model based on the first loss, the second loss, and the third loss to obtain the trained local verification model.
As an example, for each input sample image, the corresponding first loss, second loss, and third loss are obtained to update the local verification model. By cyclically performing the above training operation, the trained local verification model is obtained in response to reaching a preset end condition. The preset end condition may be, for example, that the training time exceeds a preset time threshold, the number of training iterations exceeds a preset count threshold, or the training loss converges. The trained local verification model can be applied to the above embodiments 200 and 400.
In this embodiment, a training method for a local verification model is provided, so that the local verification features obtained by the local verification model can better characterize the local key information of images, improving the accuracy of the obtained local verification model.
In some optional implementations of this embodiment, the above execution subject performs the above step 705 as follows: update the global branch according to the first loss, update the feature reconstruction branch according to the second loss, and update the attention branch according to the third loss, to obtain the local verification model.
In this implementation, the loss corresponding to each branch is used to update that branch, so that the model's parameters can be updated in a targeted manner, improving the training efficiency of the model and the accuracy of the finally obtained local verification model.
In this embodiment, during the update process of the local verification model, the above execution subject may also adjust the learning rate of the local verification model using a warm-up strategy, and then make the learning rate decrease in steps after a period of training. Gradient clipping may also be applied to the reconstruction branch during training.
Continuing to refer to FIG. 9, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an image retrieval apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 9, the image retrieval apparatus includes: a recall unit 901 configured to obtain, through a pre-trained global recall model, a global recall feature that takes into account both the semantic information and the visual information of the image to be retrieved; a verification unit 902 configured to obtain, through a pre-trained local verification model, a local verification feature of the image to be retrieved that is used for local feature point matching; and a determination unit 903 configured to determine similar images and/or identical images of the image to be retrieved from a universal image library according to the global recall feature and the local verification feature.
In some optional implementations of this embodiment, the determination unit 903 is further configured to: determine multiple recalled images from the universal image library according to the global recall feature, and determine first matching information between the image to be retrieved and each of the multiple recalled images; determine second matching information of feature points between the image to be retrieved and each of the multiple recalled images according to the local verification feature; and determine similar images and/or identical images of the image to be retrieved from the multiple recalled images according to the first matching information and the second matching information.
In some optional implementations of this embodiment, the determination unit 903 is further configured to: combine the first matching information and the second matching information corresponding to each of the multiple recalled images to obtain a ranking score corresponding to each recalled image; and sort the multiple recalled images according to the ranking scores, and determine the sorted multiple recalled images as similar images of the image to be retrieved.
In some optional implementations of this embodiment, the determination unit 903 is further configured to: determine, according to the first matching information, the recalled images among the multiple recalled images that fall into different matching threshold spaces; and, for each different matching threshold space, determine identical images of the image to be retrieved according to the second matching information of the recalled images in that interval.
In some optional implementations of this embodiment, the determination unit 903 is further configured to: determine multiple recalled images from the universal image library according to the global recall feature of the image to be retrieved and the global recall features of the images in the universal image library, and determine first matching information between the image to be retrieved and each of the multiple recalled images, where the global recall features of the images in the universal image library are determined through the global recall model.
In some optional implementations of this embodiment, the determination unit 903 is further configured to: determine second matching information of feature points between the image to be retrieved and each of the multiple recalled images according to the local verification feature of the image to be retrieved and the local verification features of the multiple recalled images, where the local verification features of the images in the universal image library are determined through the local verification model.
In some optional implementations of this embodiment, the above apparatus further includes: an image library unit (not shown in the figure) configured to merge the image library used for similar-image retrieval and the image library used for identical-image retrieval, and deduplicate the images in the merged image library to obtain the universal image library.
In this embodiment, an image retrieval apparatus is provided that, based on the global recall feature of the image to be retrieved obtained by the global recall model and the local verification feature of the image to be retrieved obtained by the local verification model, provides universal retrieval logic for determining similar images and identical images of the image to be retrieved from a universal image library, improving the convenience and efficiency of image retrieval.
Continuing to refer to FIG. 10, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a training apparatus for a global recall model. This apparatus embodiment corresponds to the method embodiment shown in FIG. 5, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 10, the training apparatus for the global recall model includes: a first acquisition unit 1001 configured to acquire a first training sample set, where the training samples in the first training sample set include image pairs and classification data of the image pairs; and a first training unit 1002 configured to: use a machine learning method to obtain global recall features of the images in an image pair through the global recall model, and determine a metric loss between the global recall features corresponding to the input image pair, as well as a classification loss between the classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to that image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
In some optional implementations of this embodiment, the first training unit 1002 is further configured to: determine a total loss according to the metric loss and the classification loss; and update the global recall model according to the total loss to obtain the trained global recall model.
In some optional implementations of this embodiment, the first training unit 1002 is further configured to: based on batch normalization processing, make the metric loss and the classification loss lie in the same distribution space; and combine the metric loss and the classification loss in the same distribution space to obtain the total loss.
In some optional implementations of this embodiment, the apparatus further includes: a weight update unit (not shown in the figure) configured to, during the update process of the global recall model, keep the weight of the classification loss unchanged and adjust the weight of the metric loss using a warm-up strategy.
In some optional implementations of this embodiment, the above apparatus further includes: a first sample unit (not shown in the figure) configured to cluster the images in a preset image library through a semi-supervised clustering algorithm, and obtain the first training sample set based on the clustering results between the images.
In this embodiment, a training apparatus for a global recall model is provided; image pairs of the same category are used to train the metric loss function so that the global recall model can have stronger visual discrimination, while large-scale classification data is used to train the classification loss function so that the global recall model can have stronger semantic discrimination, improving the visual discrimination and semantic discrimination of the trained global recall model and improving the accuracy of the global recall features obtained based on the global recall model.
Continuing to refer to FIG. 11, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a training apparatus for a local verification model. This apparatus embodiment corresponds to the method embodiment shown in FIG. 7, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 11, in the training apparatus for the local verification model, the local verification model includes a global branch, a feature reconstruction branch, and an attention branch, and the apparatus includes: a second acquisition unit 1101 configured to acquire a second training sample set, where the training samples in the second training sample set include sample images and classification data of the sample images; a first loss unit 1102 configured to obtain global features of a sample image through the global branch and determine a first loss based on the global features and the classification data corresponding to the input sample image; a second loss unit 1103 configured to obtain reconstructed features of a target feature through the feature reconstruction branch and determine a second loss based on the reconstructed features and the target feature, where the target feature is obtained by the global branch in the process of extracting the global features; a third loss unit 1104 configured to determine, through the attention branch, attention weights of the target feature, obtain local point features according to the attention weights and the reconstructed features, and determine a third loss based on the local point features and the classification data corresponding to the input sample image; and a second training unit 1105 configured to update the local verification model based on the first loss, the second loss, and the third loss to obtain a trained local verification model.
In some optional implementations of this embodiment, the second training unit 1105 is further configured to: update the global branch according to the first loss, update the feature reconstruction branch according to the second loss, and update the attention branch according to the third loss, to obtain the local verification model.
In some optional implementations of this embodiment, the second loss unit 1103 is further configured to: based on the fully convolutional network adopted by the feature reconstruction branch, downsample the target feature to obtain a downsampled feature; and upsample the downsampled feature to obtain the reconstructed feature.
In some optional implementations of this embodiment, the above apparatus further includes: a second sample unit (not shown in the figure) configured to cluster the images in a preset image library based on a semi-supervised clustering algorithm, and obtain the second training sample set based on the clustering results of the images.
In this embodiment, a training apparatus for a local verification model is provided, so that the local verification features obtained by the local verification model can better characterize the local key information of images, improving the accuracy of the obtained local verification model.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can implement the image retrieval method, the training method for the global recall model, or the training method for the local verification model described in any of the above embodiments.
According to embodiments of the present disclosure, the present disclosure also provides a readable storage medium storing computer instructions, where the computer instructions are used to cause a computer, when executing them, to implement the image retrieval method, the training method for the global recall model, or the training method for the local verification model described in any of the above embodiments.
Embodiments of the present disclosure provide a computer program product; the computer program, when executed by a processor, can implement the image retrieval method, the training method for the global recall model, or the training method for the local verification model described in any of the above embodiments.
图11示出了可以用来实施本公开的实施例的示例电子设备1100的示意性框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本公开的实现。
如图11所示,设备1100包括计算单元1101,其可以根据存储在只读存储器(ROM)1102中的计算机程序或者从存储单元1108加载到随机访问存储器(RAM)1103中的计算机程序,来执行各种适当的动作和处理。在RAM 1103中,还可存储设备1100操作所需的各种程序和数据。计算单元1101、ROM 1102以及RAM 1103通过总线1104彼此相连。输入/输出(I/O)接口1105也连接至总线1104。
设备1100中的多个部件连接至I/O接口1105,包括:输入单元1106,例如键盘、鼠标等;输出单元1107,例如各种类型的显示器、扬声器等;存储单元1108,例如磁盘、光盘等;以及通信单元1109,例如网卡、调制解调器、无线通信收发机等。通信单元1109允许设备1100通过诸如因特网的计算机网络和/或各种电信网络与其他设备交换信息/数据。
计算单元1101可以是各种具有处理和计算能力的通用和/或专用处理组件。计算单元1101的一些示例包括但不限于中央处理单元(CPU)、图形处理单元(GPU)、各种专用的人工智能(AI)计算芯片、各种运行机器 学习模型算法的计算单元、数字信号处理器(DSP)、以及任何适当的处理器、控制器、微控制器等。计算单元1101执行上文所描述的各个方法和处理,例如图像检索方法。例如,在一些实施例中,图像检索方法可被实现为计算机软件程序,其被有形地包含于机器可读介质,例如存储单元1108。在一些实施例中,计算机程序的部分或者全部可以经由ROM 1102和/或通信单元1109而被载入和/或安装到设备1100上。当计算机程序加载到RAM 1103并由计算单元1101执行时,可以执行上文描述的图像检索方法的一个或多个步骤。备选地,在其他实施例中,计算单元1101可以通过其他任何适当的方式(例如,借助于固件)而被配置为执行图像检索方法。
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: being implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmitting data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display apparatus for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatuses can also be used to provide interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer having a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that addresses the drawbacks of difficult management and weak business scalability in traditional physical host and virtual private server (VPS) services; it may also be a server of a distributed system, or a server combined with a blockchain.
According to the technical solutions of the embodiments of the present disclosure, an image retrieval method is provided. Based on a global recall feature of an image to be retrieved obtained by a global recall model and a local verification feature of the image to be retrieved obtained by a local verification model, a general retrieval logic is provided for determining similar images and identical images of the image to be retrieved from a universal image library, improving the convenience and efficiency of image retrieval.
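The recall-then-verify retrieval logic summarized above can be sketched end-to-end in numpy. Cosine similarity stands in for the first matching information, a mutual nearest-neighbour match count for the second matching information, and the combined ranking score and identical-image thresholds (`alpha`, `topk`, `same_thr`) are illustrative parameters, not values from the disclosure:

```python
import numpy as np

def l2n(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def match_points(desc_q, desc_r, thr=0.8):
    # Second matching information: mutual nearest-neighbour local matches.
    sim = l2n(desc_q) @ l2n(desc_r).T
    fwd, bwd = sim.argmax(axis=1), sim.argmax(axis=0)
    return sum(1 for i in range(len(desc_q))
               if bwd[fwd[i]] == i and sim[i, fwd[i]] >= thr)

def retrieve(q_global, q_local, lib_global, lib_local,
             topk=3, alpha=0.5, same_thr=(0.9, 4)):
    """Recall by global similarity, verify by local point matching,
    rank by a combined score."""
    sims = l2n(lib_global) @ l2n(q_global)        # first matching information
    recall = np.argsort(-sims)[:topk]             # recalled images
    scores, same = [], []
    for idx in recall:
        n = match_points(q_local, lib_local[idx])
        scores.append(alpha * sims[idx] + (1 - alpha) * n / max(len(q_local), 1))
        if sims[idx] >= same_thr[0] and n >= same_thr[1]:
            same.append(int(idx))                 # identical-image candidate
    order = [int(recall[i]) for i in np.argsort(-np.asarray(scores))]
    return order, same
```

A library image that is an exact copy of the query scores highest on both stages, so it ranks first and is flagged as an identical-image candidate.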
It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions provided by the present disclosure can be achieved; no limitation is imposed herein.
The specific implementations described above do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.
Claims (22)
- An image retrieval method, comprising: obtaining, through a pre-trained global recall model, a global recall feature that captures both semantic information and visual information of an image to be retrieved; obtaining, through a pre-trained local verification model, a local verification feature of the image to be retrieved for performing local feature point matching; and determining, from a universal image library according to the global recall feature and the local verification feature, a similar image and/or an identical image of the image to be retrieved.
- The method according to claim 1, wherein the determining, from a universal image library according to the global recall feature and the local verification feature, a similar image and/or an identical image of the image to be retrieved comprises: determining a plurality of recalled images from the universal image library according to the global recall feature, and determining first matching information between the image to be retrieved and each of the plurality of recalled images; determining, according to the local verification feature, second matching information of feature points between the image to be retrieved and each of the plurality of recalled images; and determining, from the plurality of recalled images according to the first matching information and the second matching information, the similar image and/or the identical image of the image to be retrieved.
- The method according to claim 2, wherein the determining, from the plurality of recalled images according to the first matching information and the second matching information, the similar image and/or the identical image of the image to be retrieved comprises: combining the first matching information and the second matching information corresponding to each of the plurality of recalled images to obtain a ranking score corresponding to each recalled image; and ranking the plurality of recalled images according to the ranking scores, and determining the ranked plurality of recalled images as similar images of the image to be retrieved.
- The method according to claim 2, wherein the determining, from the plurality of recalled images according to the first matching information and the second matching information, the similar image and/or the identical image of the image to be retrieved comprises: determining, according to the first matching information, recalled images among the plurality of recalled images that fall within different matching threshold spaces; and for each of the different matching threshold spaces, determining the identical image of the image to be retrieved according to the second matching information of the recalled images in that threshold space.
- The method according to claim 2, wherein the determining a plurality of recalled images from the universal image library according to the global recall feature, and determining first matching information between the image to be retrieved and each of the plurality of recalled images comprises: determining the plurality of recalled images from the universal image library according to the global recall feature of the image to be retrieved and global recall features of images in the universal image library, and determining the first matching information between the image to be retrieved and each of the plurality of recalled images, wherein the global recall features of the images in the universal image library are determined through the global recall model.
- The method according to claim 2, wherein the determining, according to the local verification feature, second matching information of feature points between the image to be retrieved and each of the plurality of recalled images comprises: determining, according to the local verification feature of the image to be retrieved and local verification features of the plurality of recalled images, the second matching information of feature points between the image to be retrieved and each of the plurality of recalled images, wherein the local verification features of the images in the universal image library are determined through the local verification model.
- The method according to any one of claims 1 to 6, wherein before the determining, from a universal image library, a similar image and/or an identical image of the image to be retrieved, the method further comprises: merging an image library used for similar-image retrieval and an image library used for identical-image retrieval, and de-duplicating images in the merged image library to obtain the universal image library.
- A training method for a global recall model, comprising: acquiring a first training sample set, wherein a training sample in the first training sample set comprises an image pair and classification data of the image pair; and using a machine learning method, obtaining global recall features of the images in the image pair through the global recall model, determining a metric loss between the global recall features corresponding to the input image pair, and a classification loss between a classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to the image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
- The method according to claim 8, wherein the updating the global recall model through the metric loss and the classification loss to obtain a trained global recall model comprises: determining a total loss according to the metric loss and the classification loss; and updating the global recall model according to the total loss to obtain the trained global recall model.
- The method according to claim 9, wherein the determining a total loss according to the metric loss and the classification loss comprises: making the metric loss and the classification loss lie in the same distribution space based on batch normalization processing; and combining the metric loss and the classification loss lying in the same distribution space to obtain the total loss.
- The method according to claim 9, further comprising: during the updating of the global recall model, keeping the weight of the classification loss unchanged, and adjusting the weight of the metric loss using a warm-up strategy.
- The method according to any one of claims 8 to 11, wherein before the acquiring a first training sample set, the method further comprises: clustering images in a preset image library through a semi-supervised clustering algorithm, and obtaining the first training sample set based on the clustering results among the images.
- A training method for a local verification model, wherein the local verification model comprises a global branch, a feature reconstruction branch, and an attention branch, and the method comprises: acquiring a second training sample set, wherein a training sample in the second training sample set comprises a sample image and classification data of the sample image; obtaining a global feature of the sample image through the global branch, and determining a first loss based on the global feature and the classification data corresponding to the input sample image; obtaining a reconstructed feature of a target feature through the feature reconstruction branch, and determining a second loss based on the reconstructed feature and the target feature, wherein the target feature is obtained by the global branch in the process of extracting the global feature; determining, through the attention branch, attention weights of the target feature, obtaining a local point feature according to the attention weights and the reconstructed feature, and determining a third loss based on the local point feature and the classification data corresponding to the input sample image; and updating the local verification model based on the first loss, the second loss, and the third loss, to obtain a trained local verification model.
- The method according to claim 13, wherein the updating the local verification model based on the first loss, the second loss, and the third loss, to obtain a trained local verification model comprises: updating the global branch according to the first loss, updating the feature reconstruction branch according to the second loss, and updating the attention branch according to the third loss, to obtain the local verification model.
- The method according to claim 13, wherein the obtaining a reconstructed feature of a target feature through the feature reconstruction branch comprises: downsampling the target feature based on a fully convolutional network employed by the feature reconstruction branch, to obtain a downsampled feature; and upsampling the downsampled feature to obtain the reconstructed feature.
- The method according to any one of claims 13 to 15, wherein before the acquiring a second training sample set, the method further comprises: clustering images in a preset image library based on a semi-supervised clustering algorithm, and obtaining the second training sample set based on the clustering results of the images.
- An image retrieval apparatus, comprising: a recall unit, configured to obtain, through a pre-trained global recall model, a global recall feature that captures both semantic information and visual information of an image to be retrieved; a verification unit, configured to obtain, through a pre-trained local verification model, a local verification feature of the image to be retrieved for performing local feature point matching; and a determination unit, configured to determine, from a universal image library according to the global recall feature and the local verification feature, a similar image and/or an identical image of the image to be retrieved.
- A training apparatus for a global recall model, comprising: a first acquisition unit, configured to acquire a first training sample set, wherein a training sample in the first training sample set comprises an image pair and classification data of the image pair; and a first training unit, configured to: using a machine learning method, obtain global recall features of the images in the image pair through the global recall model, determine a metric loss between the global recall features corresponding to the input image pair, and a classification loss between a classification result obtained based on the global recall features corresponding to the input image pair and the classification data corresponding to the image pair, so as to update the global recall model through the metric loss and the classification loss to obtain a trained global recall model.
- A training apparatus for a local verification model, wherein the local verification model comprises a global branch, a feature reconstruction branch, and an attention branch, and the apparatus comprises: a second acquisition unit, configured to acquire a second training sample set, wherein a training sample in the second training sample set comprises a sample image and classification data of the sample image; a first loss unit, configured to obtain a global feature of the sample image through the global branch, and determine a first loss based on the global feature and the classification data corresponding to the input sample image; a second loss unit, configured to obtain a reconstructed feature of a target feature through the feature reconstruction branch, and determine a second loss based on the reconstructed feature and the target feature, wherein the target feature is obtained by the global branch in the process of extracting the global feature; a third loss unit, configured to determine, through the attention branch, attention weights of the target feature, obtain a local point feature according to the attention weights and the reconstructed feature, and determine a third loss based on the local point feature and the classification data corresponding to the input sample image; and a second training unit, configured to update the local verification model based on the first loss, the second loss, and the third loss, to obtain a trained local verification model.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 16.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1 to 16.
- A computer program product, comprising a computer program that, when executed by a processor, implements the method according to any one of claims 1 to 16.
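Claims 9 to 11 describe combining the metric loss and the classification loss into a total loss. A minimal numpy sketch under stated assumptions (scale-only normalization as the "batch normalization processing", a linear warm-up for the metric-loss weight while the classification-loss weight stays fixed; the schedule constants are illustrative):

```python
import numpy as np

def scale_normalize(losses, eps=1e-6):
    # Align the scale of a batch of losses so both terms lie in a comparable
    # distribution space (one simple reading of the normalization step).
    x = np.asarray(losses, dtype=float)
    return x / (x.std() + eps)

def warmup_weight(step, warmup_steps=1000, target=1.0):
    # Warm-up strategy: the metric-loss weight ramps linearly to `target`,
    # while the classification-loss weight is kept fixed at 1.
    return target * min(1.0, step / warmup_steps)

def total_loss(metric_losses, cls_losses, step):
    m = scale_normalize(metric_losses)
    c = scale_normalize(cls_losses)
    return float((1.0 * c + warmup_weight(step) * m).mean())
```

Early in training the total is dominated by the classification term; the metric term's contribution grows until the warm-up completes.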
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210493497.X | 2022-04-27 | ||
CN202210493497.XA CN114880505A (zh) | 2022-04-27 | 2022-04-27 | 图像检索方法、装置及计算机程序产品 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023207028A1 true WO2023207028A1 (zh) | 2023-11-02 |
Family
ID=82674426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/130517 WO2023207028A1 (zh) | 2022-04-27 | 2022-11-08 | 图像检索方法、装置及计算机程序产品 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114880505A (zh) |
WO (1) | WO2023207028A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117274778A (zh) * | 2023-11-21 | 2023-12-22 | 浙江啄云智能科技有限公司 | 基于无监督和半监督的图像搜索模型训练方法和电子设备 |
CN117274778B (zh) * | 2023-11-21 | 2024-03-01 | 浙江啄云智能科技有限公司 | 基于无监督和半监督的图像搜索模型训练方法和电子设备 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114880505A (zh) * | 2022-04-27 | 2022-08-09 | 北京百度网讯科技有限公司 | 图像检索方法、装置及计算机程序产品 |
CN115170893B (zh) * | 2022-08-29 | 2023-01-31 | 荣耀终端有限公司 | 共视档位分类网络的训练方法、图像排序方法及相关设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080050712A1 (en) * | 2006-08-11 | 2008-02-28 | Yahoo! Inc. | Concept learning system and method |
CN111522986A (zh) * | 2020-04-23 | 2020-08-11 | 北京百度网讯科技有限公司 | 图像检索方法、装置、设备和介质 |
CN112307248A (zh) * | 2020-11-26 | 2021-02-02 | 国网电子商务有限公司 | 一种图像检索方法及装置 |
CN113806582A (zh) * | 2021-11-17 | 2021-12-17 | 腾讯科技(深圳)有限公司 | 图像检索方法、装置、电子设备和存储介质 |
CN114880505A (zh) * | 2022-04-27 | 2022-08-09 | 北京百度网讯科技有限公司 | 图像检索方法、装置及计算机程序产品 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112258625B (zh) * | 2020-09-18 | 2023-05-05 | 山东师范大学 | 基于注意力机制的单幅图像到三维点云模型重建方法及系统 |
CN112163498B (zh) * | 2020-09-23 | 2022-05-27 | 华中科技大学 | 前景引导和纹理聚焦的行人重识别模型建立方法及其应用 |
CN112966137B (zh) * | 2021-01-27 | 2022-05-31 | 中国电子进出口有限公司 | 基于全局与局部特征重排的图像检索方法与系统 |
CN114283316A (zh) * | 2021-09-16 | 2022-04-05 | 腾讯科技(深圳)有限公司 | 一种图像识别方法、装置、电子设备和存储介质 |
- 2022-04-27: CN application CN202210493497.XA filed (published as CN114880505A, status pending)
- 2022-11-08: PCT application PCT/CN2022/130517 filed (published as WO2023207028A1)
Also Published As
Publication number | Publication date |
---|---|
CN114880505A (zh) | 2022-08-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22939844; Country of ref document: EP; Kind code of ref document: A1 |