CN107193979B - Method for searching homologous images - Google Patents

Method for searching homologous images

Info

Publication number
CN107193979B
CN107193979B (application CN201710384823.2A)
Authority
CN
China
Prior art keywords
pictures
extraction model
feature extraction
name variable
variable type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710384823.2A
Other languages
Chinese (zh)
Other versions
CN107193979A (en)
Inventor
马良庄
蔡毅
朱奕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Hand Sight Information Technology Co ltd
Original Assignee
Chengdu Hand Sight Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Hand Sight Information Technology Co ltd filed Critical Chengdu Hand Sight Information Technology Co ltd
Priority to CN201710384823.2A
Publication of CN107193979A
Application granted
Publication of CN107193979B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour

Abstract

The invention discloses a method for retrieving homologous pictures, which comprises the following steps: a feature extraction model is established, and a name variable feature extraction model is built from it; two pictures are sequentially input into the name variable feature extraction model and its parameters are adjusted to obtain an optimized name variable feature extraction model; from the name variable feature vectors obtained over many picture pairs, the difference between the name variable feature vectors of two original pictures is obtained; the preset condition that this difference satisfies when the two pictures are homologous is summarized; an index library is established between each original picture and its corresponding name variable feature vector; and when any picture is input into the optimized name variable feature extraction model, the corresponding original picture is searched for in the index library according to the obtained name variable feature vector and the preset condition satisfied by the difference between the name variable feature vectors of two homologous pictures. Homologous pictures can thereby be retrieved quickly, accurately and effectively.

Description

Method for searching homologous images
Technical Field
The invention relates to the technical field of picture retrieval, in particular to a method for retrieving homologous pictures.
Background
Homologous pictures are pictures on the same platform that derive from the same source. For example, pictures obtained by photographing an original picture are homologous pictures, as are pictures produced from the original by blurring, scaling, rotation, shading, perspective change, color shift or occlusion.
In the internet era, users generate a large amount of data on demand, and once the same data source enters the internet, users can process the original data source according to their own needs, producing large amounts of new data. For example, after a user uploads a picture to the internet, the picture may undergo processing, transformation, compression, PS editing and other operations by different users as it spreads, generating a large number of similar pictures. When retrieving homologous pictures in this setting, simple image features such as the existing MD5 hash cannot be used to find homologous pictures; only matching based on image-content similarity applies.
At present, homologous-picture identification mainly relies on image-similarity recognition, text labels, watermarking and the like. Most image-similarity techniques are implemented with SIFT-like methods, which are computationally heavy and have low identification accuracy for homologous pictures; because SIFT-like methods are based on fuzzy matching, they cannot accurately extract feature points from targets with smooth edges, so neither the exact similarity nor the homology of the pictures can be guaranteed. Text-label-based techniques are simpler, but the labelling workload is large, cross-platform implementation is difficult, and labels can be lost through user operations. These traditional techniques therefore struggle to retrieve homologous pictures accurately and quickly in the era of internet big data.
Existing homologous-picture retrieval therefore suffers from the technical problem of poor retrieval accuracy.
Disclosure of Invention
The invention mainly solves the technical problem of poor retrieval accuracy in existing homologous-picture retrieval by providing a homologous-picture retrieval method that can retrieve homologous pictures quickly, accurately and effectively.
In order to solve the technical problems, the invention adopts a technical scheme that: the method for searching the homologous images comprises the following steps:
establishing a feature extraction model;
establishing a name variable type feature extraction model, which specifically comprises the following steps: sequentially inputting two pictures into the feature extraction model, and respectively and correspondingly obtaining name variable feature vectors of the two pictures according to the two feature vectors obtained by the feature extraction model, wherein the two pictures are homologous pictures or non-homologous pictures;
training and adjusting parameters of the name variable type feature extraction model based on a first loss function to obtain an optimized name variable type feature extraction model;
sequentially inputting two pictures to the optimized name variable type feature extraction model to obtain the difference between the name variable type feature vectors of the two pictures;
summarizing preset conditions met by differences when the two pictures belong to the same picture based on the differences of the two pictures obtained for multiple times;
based on the optimized name variable type feature extraction model, obtaining a name variable type feature vector corresponding to each original picture in a database, and establishing an index library formed by the corresponding relation between each original picture and the name variable type feature vector;
and when any one picture is input into the optimized name variable type feature extraction model, searching an original picture corresponding to the obtained name variable type feature vector in the index library according to a preset condition met by the obtained corresponding name variable type feature vector and the difference of the name variable type feature vectors of the two pictures.
The invention has the following beneficial effects. Unlike the prior art, the invention uses pre-modelling: after two rounds of optimization, an optimized name variable feature extraction model is obtained; the difference between the name variable feature vectors of two pictures is computed with this model; from the differences obtained over many picture pairs, the preset condition that the difference satisfies when any two pictures are homologous is summarized; every original picture is passed through the optimized model to build an index library of the feature vector corresponding to each picture; and finally, when any picture is input into the model, the original picture corresponding to it is searched for in the index library according to the obtained feature vector and the preset condition satisfied by homologous pictures. This solves the technical problem of poor accuracy in existing homologous-picture retrieval, so that homologous pictures can be retrieved quickly, accurately and effectively.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for searching a homologous image according to an embodiment of the present invention;
fig. 2 is a specific diagram illustrating step 101 of the method for searching a homologous image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a convolutional layer in an embodiment of the present invention;
fig. 4 is a specific diagram illustrating step 102 in the method for retrieving a homologous image according to an embodiment of the present invention.
Detailed Description
The invention mainly solves the technical problem of poor retrieval accuracy in existing homologous-picture retrieval by providing a homologous-picture retrieval method that can retrieve homologous pictures quickly, accurately and effectively.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
The embodiment of the invention provides a method for retrieving homologous pictures which, as shown in FIG. 1, comprises the following steps. S101, establish a feature extraction model. S102, establish a name variable feature extraction model: sequentially input two pictures into the feature extraction model, and obtain the name variable feature vectors of the two pictures from the two feature vectors produced by the model, where the two pictures are either homologous or non-homologous. S103, train and adjust the parameters of the name variable feature extraction model based on the first loss function to obtain the optimized name variable feature extraction model. S104, sequentially input two pictures into the optimized name variable feature extraction model and obtain the difference between their name variable feature vectors. S105, based on the differences obtained over many picture pairs, summarize the preset condition that the difference satisfies when two pictures are homologous. S106, based on the optimized name variable feature extraction model, obtain the name variable feature vector corresponding to each original picture in the database, and establish an index library of the correspondence between each original picture and its name variable feature vector. S107, when any picture is input into the optimized name variable feature extraction model, search the index library for the corresponding original picture according to the obtained name variable feature vector and the preset condition satisfied by the difference between the name variable feature vectors of two pictures.
In a specific embodiment, S101 proceeds as shown in FIG. 2. CNN1 is the Stem structure of the Google Inception-v4 architecture; a CNN2 structure, shown in FIG. 3, is connected behind the Stem, and the Stem and the Inception-A, Inception-B and Inception-C blocks are all standard Inception-v4 structures. A first fully connected layer FC1 and a second fully connected layer FC2 are connected behind the convolutional part, and together these form the feature extraction model; the specific model structure is shown in FIG. 2.
Specifically, every convolutional layer performs the same operation; only the data each layer receives differs. Assuming the vector produced by the preceding layer is X, the operation performed by the next convolutional layer is

Y = f(W·X_R + b),

where X_R is the region of X on which the convolution kernel acts, W holds the parameters of the convolution kernel, b is the bias of the layer, and f is the activation function. For the first fully connected layer FC1, assuming the data output by the preceding convolutional layer is X1, the operation performed is Y = f(WX1 + b), where W are the layer parameters, b is the layer bias and f is the layer activation function; here Y may be an 800-dimensional vector. For the second fully connected layer FC2, assuming the data output by the preceding convolutional layer is X2, the operation is Y = f(WX2 + b), again with layer parameters W, layer bias b and activation function f; here Y may be a 200-dimensional vector. The outputs of the first and second fully connected layers are then combined into the final output, specifically a 1000-dimensional vector. This completes the construction of the feature extraction model.
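The fully connected operations above can be sketched in NumPy. The 1536-dimensional CNN output size, the ReLU activation and the random weights are illustrative assumptions, not values taken from this description:

```python
import numpy as np

def relu(z):
    # f: the layer activation function (assumed ReLU here)
    return np.maximum(z, 0.0)

def fully_connected(x, W, b):
    # Y = f(WX + b), the operation performed by FC1 and FC2
    return relu(W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(1536)                 # assumed CNN output (hypothetical size)
W1, b1 = 0.01 * rng.standard_normal((800, 1536)), np.zeros(800)
W2, b2 = 0.01 * rng.standard_normal((200, 1536)), np.zeros(200)

y1 = fully_connected(x, W1, b1)               # FC1: 800-dimensional output
y2 = fully_connected(x, W2, b2)               # FC2: 200-dimensional output
feature = np.concatenate([y1, y2])            # combined 1000-dimensional feature
print(feature.shape)                          # (1000,)
```

Concatenating the 800- and 200-dimensional outputs reproduces the combined 1000-dimensional vector described above.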
Then the feature extraction model is optimized. After the feature extraction model is established, the method therefore further includes: sequentially inputting original pictures into the feature extraction model, and training and adjusting the initial parameters of the feature extraction model based on the second loss function to obtain the optimized feature extraction model.
Specifically, the original pictures are sequentially input into the feature extraction model, and the feature vector of each original picture is obtained. The categories of part of the pictures in the database are then defined; for example, only 1000 categories of pictures in the database are selected. Classification uses a Softmax classifier, which generalizes the two-class logistic regression classifier to multiple classes. The second loss function of the feature extraction model is then obtained from the probability corresponding to the true category of each original picture. Specifically, the Softmax classifier gives the probability prob(class = i) that an input picture belongs to each class:

prob(class = i) = exp(x_i) / Σ_{j=1}^{N} exp(x_j),

so the probability prob(class = i_truth) corresponding to the true category of each original picture can be obtained, and the second Loss function is calculated as

Loss = -log(prob(class = i_truth)),

where x_i is an element of the feature vector and N is the number of categories of the selected pictures in the database.
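The Softmax probability and the loss above can be checked with a small sketch; the three-element score vector is a toy stand-in for the model's feature vector:

```python
import numpy as np

def softmax_prob(x):
    # prob(class = i) = exp(x_i) / sum_j exp(x_j), shifted for numerical stability
    z = np.exp(x - np.max(x))
    return z / z.sum()

def second_loss(x, true_class):
    # Loss = -log(prob(class = i_truth))
    return -np.log(softmax_prob(x)[true_class])

scores = np.array([2.0, 1.0, 0.1])            # toy feature-vector elements
probs = softmax_prob(scores)
print(round(probs.sum(), 6))                  # 1.0
print(second_loss(scores, 0) < second_loss(scores, 2))  # True
```

The probabilities sum to one, and the loss is smaller when the true class has the larger score, which is what drives the parameter adjustment.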
After the value of the second loss function is obtained, the initial parameters of the feature extraction model are adjusted; that is, each time an original picture is input, the value of the second loss function is calculated and the initial parameters of the feature extraction model are adjusted, and training stops once the value of the second loss function no longer decreases. The parameters of the feature extraction model at that point are the parameters of the optimized feature extraction model.
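The stopping rule described above (adjust parameters after each loss evaluation, stop once the loss no longer decreases) can be sketched generically. The quadratic loss and the gradient step below are hypothetical stand-ins for the real model and optimizer:

```python
def train_until_plateau(initial_params, step, eval_loss, patience=1, max_steps=1000):
    """Adjust parameters after each loss evaluation and stop once the
    loss no longer decreases (a minimal reading of the stopping rule)."""
    params = initial_params
    best = eval_loss(params)
    stale = 0
    for _ in range(max_steps):
        candidate = step(params)
        current = eval_loss(candidate)
        if current < best:
            params, best, stale = candidate, current, 0
        else:
            stale += 1
            if stale >= patience:
                break          # loss no longer decreases: keep current parameters
    return params, best

# toy quadratic "loss" with a hypothetical gradient-descent step
final, best = train_until_plateau(
    initial_params=5.0,
    step=lambda p: p - 0.5 * (2 * p),    # gradient descent on p**2
    eval_loss=lambda p: p * p,
)
print(best)  # 0.0
```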
The above process is only a pre-training process, and a feature extraction model after first optimization is obtained.
And then, establishing a name variable feature extraction model according to the optimized feature extraction model.
Specifically, in S102 a name variable feature extraction model is established: two pictures are sequentially input into the feature extraction model, and the name variable feature vectors of the two pictures are obtained from the two feature vectors produced by the model; the two pictures are either homologous or non-homologous.
In S102, from the two feature vectors obtained by the feature extraction model, each feature vector is multiplied by 100 and then rounded, giving the name variable feature vectors of the two pictures. In a preferred embodiment, the two pictures are sequentially input into the optimized feature extraction model, as shown in fig. 4.
Specifically, in this step the extracted numerical feature vector is converted into a name variable feature vector. In later retrieval applications, searching over name variable feature vectors is far faster than over traditional numerical feature vectors, and the name variable feature vectors also yield a higher matching precision.
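The conversion from a numerical feature vector to a name variable feature vector (multiply each element by 100, then round) can be sketched as:

```python
import numpy as np

def to_name_variable(feature, scale=100):
    # multiply each element by 100, then round to the nearest integer,
    # turning a continuous numerical vector into a discrete one
    return np.rint(feature * scale).astype(int)

numeric = np.array([0.123, 0.456, -0.789])
print(to_name_variable(numeric))              # [ 12  46 -79]
```

The resulting integer vectors can be compared element-by-element, which is what the later difference measure relies on.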
Next, S103 is executed to optimize the name variable type feature extraction model.
Specifically, parameters of the name variable type feature extraction model are trained and adjusted based on the first loss function, and the optimized name variable type feature extraction model is obtained.
Preferably, the values of the first loss functions corresponding to each two pictures are sequentially obtained according to the name variable feature vectors of each two pictures obtained in S102; adjusting parameters of the name variable type feature extraction model according to the value of the first loss function obtained each time until the value of the obtained first loss function is not reduced any more; and obtaining parameters of the corresponding name variable type extraction model when the value of the first loss function is not reduced any more, and obtaining the optimized name variable type feature extraction model.
Specifically, the value of the first loss function for each pair of pictures is calculated from the name variable feature vectors of the two pictures obtained in S102. The first Loss function Loss_function is calculated by the formula:

[loss-function formula, given in the original as an image]

wherein f_i and f_p are the name variable feature vectors of the two pictures respectively, f_ip = 1 indicates that the two pictures are homologous, f_ip = -1 indicates that the two pictures are not homologous, and x is the number of identical elements in the name variable feature vectors of the two pictures.
The values of the first loss function for pairs of pictures are thereby obtained. The parameters of the name variable feature extraction model are then adjusted after each value of the first loss function is obtained, until the value no longer decreases; the parameters of the model at that point are the parameters of the optimized name variable feature extraction model.
Next, S104 is executed, and the two pictures are sequentially input to the optimized name variable feature extraction model, so as to obtain the difference between the name variable feature vectors of the two pictures.
Specifically, the two pictures are first sequentially input into the optimized name variable feature extraction model, giving the two name variable feature vectors of the two pictures. The number of identical elements in the two vectors is then counted; this count determines the difference between the name variable feature vectors of the two pictures.
Next, in S105, based on the differences of the two pictures obtained multiple times, a preset condition that the differences satisfy when the two pictures belong to the same picture is summarized.
Since a selected pair of pictures may be homologous or non-homologous, the preset condition satisfied by the difference is summarized over the pairs determined to be homologous: each time two pictures are found to be homologous, the number of identical elements in their two name variable feature vectors is examined, and the condition this count satisfies is summarized. In the present invention, the threshold on the count x of identical elements may specifically be 700; that is, when the two name variable feature vectors share more than 700 identical elements, the two pictures are judged to be homologous.
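Counting identical elements and applying the preset condition (more than 700 of 1000 elements identical) can be sketched as follows; the random integer vectors stand in for real name variable feature vectors:

```python
import numpy as np

THRESHOLD = 700  # preset condition from the description: x > 700 -> homologous

def same_element_count(fa, fb):
    # number of positions where the two name variable feature vectors agree
    return int(np.sum(fa == fb))

def is_homologous(fa, fb, threshold=THRESHOLD):
    return same_element_count(fa, fb) > threshold

rng = np.random.default_rng(1)
a = rng.integers(0, 100, size=1000)   # stand-in name variable feature vector
b = a.copy()
b[:250] = -1                          # perturb 250 of the 1000 elements
print(same_element_count(a, b))       # 750
print(is_homologous(a, b))            # True
```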
And then, executing S106, and obtaining a name variable type feature vector corresponding to each original picture in the database based on the optimized name variable type feature extraction model, thereby establishing an index library formed by the corresponding relation between each original picture and the name variable type feature vector.
Thus, when the homologous picture needs to be searched, the homologous picture is searched according to the name variable type characteristic vector so as to be corresponding to the original picture.
Therefore, in this application, when S107 is executed and any picture is input into the optimized name variable feature extraction model, the corresponding original picture is searched for in the index library according to the obtained name variable feature vector and the preset condition satisfied by the difference between the name variable feature vectors of two homologous pictures.
That is, when any picture is input, the above method obtains its name variable feature vector from the optimized name variable feature extraction model and finds the corresponding original picture. To verify homology, the preset condition on the difference between the name variable feature vectors of the two pictures is checked; when the input picture and the found original picture satisfy it, the found original picture is determined to be a homologous picture of the input picture.
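The index library and the final lookup can be sketched end to end. The quantiser below is a hypothetical stand-in for the optimized name variable feature extraction model, and the picture ids are invented for illustration:

```python
import numpy as np

def build_index(originals, extract):
    # index library: original picture id -> its name variable feature vector
    return {pic_id: extract(pic) for pic_id, pic in originals.items()}

def retrieve(query_pic, index, extract, threshold=700):
    # return ids of originals whose vectors agree with the query on more
    # than `threshold` elements, i.e. the presumed homologous pictures
    q = extract(query_pic)
    return [pid for pid, vec in index.items()
            if int(np.sum(vec == q)) > threshold]

# stand-in extractor: quantise raw values (the real model is the
# optimized name variable feature extraction model, not shown here)
extract = lambda pic: np.rint(pic * 100).astype(int)

rng = np.random.default_rng(2)
originals = {"img_a": rng.random(1000), "img_b": rng.random(1000)}
index = build_index(originals, extract)

query = originals["img_a"].copy()
query[:100] += 10.0                   # mild edit: 100 of 1000 elements change
print(retrieve(query, index, extract))  # ['img_a']
```

The edited query still shares 900 quantised elements with its source, so it clears the threshold for img_a and misses the unrelated img_b.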
The technical scheme of the invention has high feasibility for matching the homologous images of a large amount of data, and the matching speed and precision are greatly improved compared with those of the traditional method.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A method for searching homologous images is characterized by comprising the following steps:
establishing a feature extraction model;
establishing a name variable type feature extraction model, which specifically comprises the following steps: sequentially inputting two pictures into the feature extraction model, and respectively and correspondingly obtaining name variable feature vectors of the two pictures according to the two feature vectors obtained by the feature extraction model, wherein the two pictures are homologous pictures or non-homologous pictures;
training and adjusting parameters of the name variable type feature extraction model based on a first loss function to obtain an optimized name variable type feature extraction model;
sequentially inputting two pictures to the optimized name variable type feature extraction model to obtain the difference between the name variable type feature vectors of the two pictures;
summarizing preset conditions met by differences when the two pictures belong to the same picture based on the differences of the two pictures obtained for multiple times;
based on the optimized name variable type feature extraction model, obtaining a name variable type feature vector corresponding to each original picture in a database, and establishing an index library formed by the corresponding relation between each original picture and the name variable type feature vector;
when any one picture is input into the optimized name variable type feature extraction model, searching an original picture corresponding to the obtained name variable type feature vector in the index library according to a preset condition met by the obtained corresponding name variable type feature vector and the difference of the name variable type feature vectors of the two pictures;
sequentially inputting two pictures to the feature extraction model, and respectively and correspondingly acquiring name variable feature vectors of the two pictures according to the two feature vectors acquired by the feature extraction model, wherein the method specifically comprises the following steps:
and sequentially inputting two pictures to the feature extraction model, and respectively multiplying the corresponding feature vectors by 100 according to the two feature vectors obtained by the feature extraction model and then rounding to obtain the name variable feature vectors corresponding to the two pictures.
2. The method for searching homologous images according to claim 1, wherein after the establishing of the feature extraction model, the method further comprises:
and sequentially inputting original pictures to the feature extraction model, training and adjusting initial parameters of the feature extraction model based on a second loss function, and obtaining the optimized feature extraction model.
3. The method for searching homologous images according to claim 2, wherein the original images are sequentially input to the feature extraction model, and an optimized feature extraction model is obtained by training and adjusting initial parameters of the feature extraction model based on a second loss function, specifically comprising:
sequentially inputting original pictures to the feature extraction model to obtain a feature vector of each original picture;
dividing the categories of partial pictures in the database;
according to the feature vector of each original picture, obtaining the probability that the original picture belongs to each category, obtaining the probability corresponding to the category to which each original picture really belongs, and obtaining the value of a second loss function of the feature extraction model;
adjusting initial parameters of the feature extraction model according to the value of the second loss function until the value of the second loss function is obtained and is not reduced any more;
and acquiring parameters of the corresponding feature extraction model when the value of the second loss function is not reduced any more, and acquiring the optimized feature extraction model.
4. The method for searching homologous images according to claim 3, wherein the formula for calculating the second Loss function Loss of the feature extraction model is as follows:
Loss=-log(prob(class=i_truth));
the probability prob(class = i) that the picture belongs to each class is calculated by the formula:
prob(class = i) = exp(x_i) / Σ_{j=1}^{N} exp(x_j);
wherein x_i is an element of the feature vector, prob(class = i_truth) denotes the probability corresponding to the category to which each picture truly belongs, and N denotes the number of categories of the partial pictures in the database.
5. The method for searching for a homologous image according to claim 1, wherein training and adjusting parameters of the name-variant feature extraction model based on a first loss function to obtain an optimized name-variant feature extraction model specifically comprises:
sequentially obtaining the values of the first loss functions corresponding to each two pictures according to the name variable type characteristic vectors of each two pictures;
adjusting parameters of the name variable type feature extraction model according to the value of the first loss function obtained each time until the value of the obtained first loss function is not reduced any more;
and obtaining parameters of the corresponding name variable type extraction model when the value of the first loss function is not reduced any more, and obtaining an optimized name variable type feature extraction model.
6. The method for searching homologous pictures according to claim 5, wherein the formula for calculating the first Loss function Loss _ function is as follows:
[loss-function formula, given in the original as an image]
wherein f_i and f_p are respectively the name variable type feature vectors of the two input pictures, f_ip = 1 indicates that the two pictures are homologous pictures, f_ip = -1 indicates that the two pictures are not homologous pictures, and x is the number of identical elements in the name variable type feature vectors of the two input pictures.
7. The method for searching homologous pictures according to claim 1, wherein the step of sequentially inputting two pictures into the optimized name-variant feature extraction model to obtain the differences between the name-variant feature vectors of the two pictures comprises:
sequentially inputting two pictures to the optimized name variable type feature extraction model to obtain two name variable type feature vectors of the two pictures;
and obtaining the number of the same elements corresponding to the two name variable type feature vectors according to the two name variable type feature vectors of the two pictures.
8. The method for searching homologous pictures according to claim 7, wherein the preset condition that the differences satisfy when two pictures belong to the homologous picture is summarized based on the differences of the two pictures obtained multiple times, specifically:
and summarizing and obtaining a preset condition met by the number of the same elements corresponding to the two name variable type feature vectors when the two pictures belong to the same source picture each time based on the number of the same elements corresponding to the two name variable type feature vectors obtained each time.
CN201710384823.2A 2017-05-26 2017-05-26 Method for searching homologous images Active CN107193979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710384823.2A CN107193979B (en) 2017-05-26 2017-05-26 Method for searching homologous images

Publications (2)

Publication Number Publication Date
CN107193979A CN107193979A (en) 2017-09-22
CN107193979B true CN107193979B (en) 2020-08-11

Family

ID=59876010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710384823.2A Active CN107193979B (en) 2017-05-26 2017-05-26 Method for searching homologous images

Country Status (1)

Country Link
CN (1) CN107193979B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753966A * 2018-12-16 2019-05-14 初速度(苏州)科技有限公司 Text recognition training system and method
CN110175249A * 2019-05-31 2019-08-27 中科软科技股份有限公司 Similar picture search method and system
CN111723868B (en) * 2020-06-22 2023-07-21 海尔优家智能科技(北京)有限公司 Method, device and server for removing homologous pictures

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101373514A (en) * 2007-08-24 2009-02-25 李树德 Method and system for recognizing human face
CN105335956A (en) * 2014-08-06 2016-02-17 腾讯科技(深圳)有限公司 Homologous image verification method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9594981B2 (en) * 2014-04-14 2017-03-14 Canon Kabushiki Kaisha Image search apparatus and control method thereof

Non-Patent Citations (1)

Title
"Image Retrieval Using Homologous Continuity of Multidimensional Space"; 柳培忠 et al.; Journal of Applied Sciences (《应用科学学报》); 31 March 2011; Vol. 29, No. 2; pp. 153-158 *

Also Published As

Publication number Publication date
CN107193979A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
CN107562805B (en) Method and device for searching picture by picture
CN111027575B (en) Semi-supervised semantic segmentation method for self-attention confrontation learning
CN107291945B (en) High-precision clothing image retrieval method and system based on visual attention model
CN107833213B (en) Weak supervision object detection method based on false-true value self-adaptive method
KR101183391B1 (en) Image comparison by metric embeddings
CN111160407B (en) Deep learning target detection method and system
CN110188225B (en) Image retrieval method based on sequencing learning and multivariate loss
WO2020036124A1 (en) Object recognition device, object recognition learning device, method, and program
CN107193979B (en) Method for searching homologous images
CN109993026B (en) Training method and device for relative recognition network model
CN111210402A (en) Face image quality scoring method and device, computer equipment and storage medium
WO2023221790A1 (en) Image encoder training method and apparatus, device, and medium
CN108763295A Video approximate-copy search algorithm based on deep learning
Huang et al. Fine-art painting classification via two-channel deep residual network
CN105354228A (en) Similar image searching method and apparatus
CN108304588B (en) Image retrieval method and system based on k neighbor and fuzzy pattern recognition
CN107578445B (en) Image discriminable region extraction method based on convolution characteristic spectrum
CN111191065B (en) Homologous image determining method and device
CN111931256B (en) Color matching recommendation method, device, equipment and storage medium
CN109493279B (en) Large-scale unmanned aerial vehicle image parallel splicing method
CN113343033B (en) Video searching method and device, computer equipment and storage medium
CN106469437B (en) Image processing method and image processing apparatus
CN110750672B (en) Image retrieval method based on deep measurement learning and structure distribution learning loss
CN112597329B (en) Real-time image retrieval method based on improved semantic segmentation network
CN111881312B (en) Image data set classification and division method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant