CN113111208A - Method, system, equipment and storage medium for searching picture by picture - Google Patents

Method, system, equipment and storage medium for searching picture by picture

Info

Publication number
CN113111208A
Authority
CN
China
Prior art keywords
ship
face features
detected
face
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110510774.9A
Other languages
Chinese (zh)
Inventor
田煜
李凡平
石柱国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Qingdao Yisa Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Yisa Data Technology Co Ltd filed Critical Qingdao Yisa Data Technology Co Ltd
Priority to CN202110510774.9A
Publication of CN113111208A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a system, equipment and a storage medium for searching pictures by pictures, comprising the following steps: S1, collecting a ship image to be detected; S2, extracting the ship face features of the ship image to be detected and setting them as the ship face features to be detected; S3, comparing the similarity of the ship face features to be detected with a ship face feature library according to a preset feature value comparison algorithm model; S4, storing the images whose similarity values are larger than a preset threshold value, together with the corresponding similarity values; and S5, sorting the similarity values in descending order and outputting the first N similarity values and the corresponding ship images. Compared with the prior art, the lightweight model MobileNetV2 is used as the feature extraction algorithm; the model is fast, occupies little video memory and has high precision, which makes it suitable for engineering applications. Triplet loss is used as the loss function on top of MobileNetV2, so that the ship face feature values extracted by the model are more accurate. The method searches images quickly and accurately, works in real time, and is suitable for practical engineering applications.

Description

Method, system, equipment and storage medium for searching picture by picture
Technical Field
The invention relates to the technical field of deep learning and artificial intelligence, in particular to a method, a system, equipment and a storage medium for searching a picture by using a picture.
Background
Searching by image is a new search mode that retrieves relevant pictures and related information on the Internet based on the graphic image data provided by the user; it is a subdivision of the search engine field. Search has remained a mainstream application throughout the development of the Internet. Search engines have gone through three generations since 1994, and search modes and technical means have improved steadily, but the main carrier of search is still text. With the exponential growth of digital image information on the network, user demand for searching graphic images is increasing day by day.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a method, a system, equipment and a storage medium for searching a picture by using a picture, which can retrieve similar pictures from an input picture and meet the user's demand for image search.
First aspect of the invention
The invention provides a method for searching a picture by a picture, which comprises the following steps:
S1, collecting the ship image to be detected;
S2, extracting the ship face features of the ship image to be detected and setting them as the ship face features to be detected;
S3, comparing the similarity of the ship face features to be detected with a ship face feature library according to a preset feature value comparison algorithm model; the ship face feature library stores a large number of ship face features and the corresponding ship images;
S4, storing the ship face features whose similarity values with the ship face features to be detected are larger than a preset threshold value, together with the corresponding similarity values;
and S5, sorting the similarity values in descending order and outputting the first N similarity values and the corresponding ship images, wherein N is a natural number.
Preferably, the method for constructing the ship face feature library comprises the following steps:
acquiring a plurality of ship images and performing classification training;
and extracting the ship face features of the plurality of ship images, and storing the plurality of ship images and the corresponding ship face features to obtain a ship face feature library.
Preferably, the method for extracting the ship face features is as follows: the ship image is input into a MobileNetV2 convolutional neural network model, and the MobileNetV2 convolutional neural network model identifies the ship image to obtain its ship face features.
Preferably, the loss function of the Mobilenetv2 convolutional neural network model is:
L=max(d(a,p)-d(a,n)+margin,0)
wherein a is the anchor; p is the positive, i.e. a sample of the same class as a; n is the negative, i.e. a sample of a different class from a; and margin is a fixed distance margin.
Preferably, the preset feature value comparison algorithm model is as follows:
similarity = cos(θ) = (A·B) / (‖A‖ ‖B‖) = (Σ_{i=1}^{n} A_i·B_i) / (√(Σ_{i=1}^{n} A_i²) × √(Σ_{i=1}^{n} B_i²))
wherein A and B are the feature vectors being compared, and n is the number of the ship face features in the ship face feature library.
Second aspect of the invention
The invention provides a system for searching a picture by a picture, which comprises:
the acquisition module is used for acquiring an image of the ship to be detected;
the ship face feature extraction module is used for extracting the ship face features of the ship image to be detected and setting the ship face features as the ship face features to be detected;
the similarity comparison module is used for comparing the similarity of the to-be-detected ship face features with the ship face feature library according to a preset feature value comparison algorithm model; the ship face feature library stores a large number of ship face features and corresponding ship images;
the storage module is used for storing the ship face features whose similarity values with the ship face features to be detected are larger than a preset threshold value, together with the corresponding similarity values;
and the output module is used for sorting the similarity values in a descending order and outputting the first N similarity values and the corresponding ship images, wherein N is a natural number.
Third aspect of the invention
The invention provides a device for searching a picture by a picture, which comprises a memory and a processor; the memory is used for storing executable program codes;
the processor is configured to read the executable program code stored in the memory to perform the method for searching a picture by a picture according to the first aspect.
Fourth aspect of the invention
The present invention provides a storage medium storing executable program code as claimed in claim 7.
The invention has the following beneficial effects: compared with the prior art, the lightweight model MobileNetV2 is used as the feature extraction algorithm; the model is fast, occupies little video memory and has high precision, which makes it suitable for engineering applications. Triplet loss is used as the loss function on top of MobileNetV2, so that the ship face feature values extracted by the model are more accurate. The method searches images quickly and accurately, works in real time, and is suitable for practical engineering applications.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a schematic flow chart of a first embodiment of the present invention;
FIG. 2 is a schematic structural diagram according to a second embodiment of the present invention;
fig. 3 is a hardware architecture diagram of a third embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Example one
As shown in fig. 1, an embodiment of the present invention provides a method for searching a picture by a picture, including the following steps:
S1, collecting the ship image to be detected;
S2, extracting the ship face features of the ship image to be detected and setting them as the ship face features to be detected;
The method for extracting the ship face features is as follows: the ship image is input into a MobileNetV2 convolutional neural network model, and the MobileNetV2 convolutional neural network model identifies the ship image to obtain its ship face features.
S3, comparing the similarity of the to-be-detected ship face features with a ship face feature library according to a preset feature value comparison algorithm model; a large number of ship face features and corresponding ship images are stored in the ship face feature library;
The embodiment of the invention uses Triplet loss as the loss function of the network, so that the extracted ship face features are more accurate. Compared with softmax, Triplet loss learns a good embedding: similar images lie close together in the embedding space, which makes it possible to judge whether two images show the same ship face. The loss function of the MobileNetV2 convolutional neural network model is shown below:
L=max(d(a,p)-d(a,n)+margin,0)
The final optimization goal is to pull a and p closer together and to push a and n farther apart. The input is a triplet <a, p, n>, where a is the anchor; p is the positive, i.e. a sample of the same class as a; n is the negative, i.e. a sample of a different class; and margin is a fixed distance margin.
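A minimal sketch of this loss, with d taken to be the Euclidean distance and the margin value chosen purely for illustration; PyTorch's built-in torch.nn.TripletMarginLoss computes the same max(d(a,p) - d(a,n) + margin, 0) expression.

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin: float = 0.2):
        """L = max(d(a, p) - d(a, n) + margin, 0), averaged over the batch."""
        d_ap = F.pairwise_distance(anchor, positive)   # d(a, p): to be pulled closer
        d_an = F.pairwise_distance(anchor, negative)   # d(a, n): to be pushed apart
        return torch.clamp(d_ap - d_an + margin, min=0).mean()

    # Equivalent built-in: criterion = torch.nn.TripletMarginLoss(margin=0.2)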
The construction method of the ship face feature library comprises the following steps:
A plurality of ship images are obtained and used for classification training, so that the MobileNetV2 convolutional neural network model is adapted to the specific scene and the feature extraction precision is improved.
The ship face features of the plurality of ship images are then extracted and stored together with the corresponding images to obtain the ship face feature library.
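Building the library then amounts to running the extractor over the classified ship images and storing each feature vector next to its image. The sketch below reuses the hypothetical extract_ship_face_feature helper from the earlier snippet and keeps the library as an in-memory list; a production system might instead use a vector database.

    def build_ship_face_feature_library(image_paths):
        """Return the ship face feature library as (image_path, feature_vector) pairs."""
        library = []
        for path in image_paths:
            library.append((path, extract_ship_face_feature(path)))
        return library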
The Mobilenetv2 convolutional neural network model improves the feature extraction precision mainly through the following measures:
1. A structure named the Inverted Residual Block is proposed. In the earlier ResNet block, a 1x1 convolution first reduces the number of channels (with a ReLU activation), a 3x3 spatial convolution then performs the main filtering (again with ReLU), and a further 1x1 convolution raises the dimension back so that the result can be added to the input. Reducing the channels with the 1x1 convolution cuts the amount of computation, because the middle 3x3 spatial convolution is the expensive part; this residual block is hourglass-shaped, wide at the two ends and narrow in the middle. MobileNetV2 replaces the 3x3 convolution with a depthwise separable convolution, which greatly reduces the computation and allows more channels to be used, giving a better result. Its block first increases the number of channels with a 1x1 convolution, then applies the 3x3 depthwise spatial convolution, and then reduces the dimension with another 1x1 convolution. The number of channels at the two ends is small, so the 1x1 convolutions that raise or lower the channels are cheap, and although the number of middle channels is large, the depthwise convolution also costs little.
2. A Linear Bottleneck (a linear transformation without ReLU activation) is proposed to replace the original nonlinear activation. MobileNetV1 uses ReLU6, an ordinary ReLU whose maximum output value is limited to 6, which keeps the numerical resolution good when the mobile device runs at low float16/int8 precision. An unrestricted ReLU outputs values from 0 to positive infinity, and if the activations are very large and spread over a wide range, low-precision float16/int8 cannot describe such a range accurately, which causes a loss of precision. MobileNetV2 removes the ReLU6 on the final output of the block and outputs it linearly, which is beneficial because the non-zero region left after a ReLU is just a linear transformation, and ReLU can retain the complete information only when the input lies in a low-dimensional space.
In summary, the advantages of MobileNetV2 are the Linear Bottleneck and the Inverted Residual. The Linear Bottleneck removes the ReLU applied to the element-wise addition (Eltwise+) features, reducing the damage ReLU does to the features. The Inverted Residual has two benefits: 1. it reuses features; 2. in the side branch, the input dimension is first raised with a 1x1 convolution before the depthwise convolution and the ReLU are applied, which relieves the degradation of the features.
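The block described above can be summarized in a short sketch; the expansion factor of 6 and the ReLU6 activations follow the publicly described MobileNetV2 design, and the exact parameter choices here are illustrative assumptions.

    import torch.nn as nn

    class InvertedResidual(nn.Module):
        """1x1 expand -> 3x3 depthwise -> 1x1 linear projection (no ReLU at the end)."""

        def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
            super().__init__()
            hidden = in_ch * expand
            self.use_residual = stride == 1 and in_ch == out_ch
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, hidden, 1, bias=False),      # 1x1 raises the channels
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, stride, 1,
                          groups=hidden, bias=False),         # 3x3 depthwise convolution
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, out_ch, 1, bias=False),     # 1x1 lowers the channels
                nn.BatchNorm2d(out_ch),                       # linear bottleneck: no ReLU here
            )

        def forward(self, x):
            out = self.block(x)
            return x + out if self.use_residual else out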
S4, storing the ship face features whose similarity values with the ship face features to be detected are larger than the preset threshold value, together with the corresponding similarity values;
and S5, sorting the similarity values in descending order and outputting the top 10 similarity values and the corresponding ship images.
The invention adopts the cosine similarity algorithm to compare the feature values; because the differences between ships are not very large, ship face images whose similarity falls below the threshold of 0.9 are filtered out. The preset feature value comparison algorithm model is as follows:
similarity = cos(θ) = (A·B) / (‖A‖ ‖B‖) = (Σ_{i=1}^{n} A_i·B_i) / (√(Σ_{i=1}^{n} A_i²) × √(Σ_{i=1}^{n} B_i²))
where A and B are the feature vectors being compared. The resulting similarity lies in the range -1 to 1: -1 means the two vectors point in exactly opposite directions, 1 means their orientations are identical, 0 usually means they are independent, and values in between indicate intermediate degrees of similarity or dissimilarity; n is the number of the ship face features in the ship face feature library.
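A minimal sketch of this comparison, assuming the ship face features are available as plain NumPy vectors:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """cos(A, B) = (A . B) / (|A| * |B|), a value in the range [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))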
Searching the ship image to be detected by image therefore works as follows: from the input ship image X, a corresponding feature value F, a one-dimensional vector, is obtained (every image in the library likewise has one). The feature value F is compared for similarity with the feature values in the feature library, images below the specified threshold are filtered out, and finally the results are sorted to output the 10 images most similar to X.
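Putting the pieces together, the following is a hedged sketch of the whole search flow with the 0.9 threshold and top-10 output described above; the helper names come from the earlier illustrative snippets, not from the patent itself.

    def search_by_image(query_path, feature_library, threshold=0.9, top_n=10):
        """Return the top_n (similarity, image_path) pairs above the threshold."""
        query_feature = extract_ship_face_feature(query_path).numpy()
        scored = []
        for image_path, feature in feature_library:
            sim = cosine_similarity(query_feature, feature.numpy())
            if sim > threshold:                   # filter out dissimilar ship faces
                scored.append((sim, image_path))
        scored.sort(reverse=True)                 # sort similarity values in descending order
        return scored[:top_n]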
Example two
An embodiment of the present invention provides a system for searching a picture by a picture, as shown in fig. 2, including:
the acquisition module is used for acquiring an image of the ship to be detected;
the ship face feature extraction module is used for extracting the ship face features of the ship image to be detected and setting the ship face features as the ship face features to be detected;
the method for extracting the ship face features comprises the following steps: and (3) inputting the ship image into a Mobilenetv2 convolutional neural network model, and identifying the ship image by using the Mobilenetv2 convolutional neural network model to obtain the face feature of the ship image.
The similarity comparison module is used for comparing the similarity of the to-be-detected ship face features with the ship face feature library according to a preset feature value comparison algorithm model; a large number of ship face features and corresponding ship images are stored in the ship face feature library;
The embodiment of the invention uses Triplet loss as the loss function of the network, so that the extracted ship face features are more accurate. Compared with softmax, Triplet loss learns a good embedding: similar images lie close together in the embedding space, which makes it possible to judge whether two images show the same ship face. The loss function of the MobileNetV2 convolutional neural network model is shown below:
L=max(d(a,p)-d(a,n)+margin,0)
The final optimization goal is to pull a and p closer together and to push a and n farther apart. The input is a triplet <a, p, n>, where a is the anchor; p is the positive, i.e. a sample of the same class as a; n is the negative, i.e. a sample of a different class; and margin is a fixed distance margin.
The construction method of the ship face feature library comprises the following steps:
A plurality of ship images are obtained and used for classification training, so that the MobileNetV2 convolutional neural network model is adapted to the specific scene and the feature extraction precision is improved.
The ship face features of the plurality of ship images are then extracted and stored together with the corresponding images to obtain the ship face feature library.
The Mobilenetv2 convolutional neural network model improves the feature extraction precision mainly through the following measures:
1. A structure named the Inverted Residual Block is proposed. In the earlier ResNet block, a 1x1 convolution first reduces the number of channels (with a ReLU activation), a 3x3 spatial convolution then performs the main filtering (again with ReLU), and a further 1x1 convolution raises the dimension back so that the result can be added to the input. Reducing the channels with the 1x1 convolution cuts the amount of computation, because the middle 3x3 spatial convolution is the expensive part; this residual block is hourglass-shaped, wide at the two ends and narrow in the middle. MobileNetV2 replaces the 3x3 convolution with a depthwise separable convolution, which greatly reduces the computation and allows more channels to be used, giving a better result. Its block first increases the number of channels with a 1x1 convolution, then applies the 3x3 depthwise spatial convolution, and then reduces the dimension with another 1x1 convolution. The number of channels at the two ends is small, so the 1x1 convolutions that raise or lower the channels are cheap, and although the number of middle channels is large, the depthwise convolution also costs little.
2. A Linear Bottleneck (a linear transformation without ReLU activation) is proposed to replace the original nonlinear activation. MobileNetV1 uses ReLU6, an ordinary ReLU whose maximum output value is limited to 6, which keeps the numerical resolution good when the mobile device runs at low float16/int8 precision. An unrestricted ReLU outputs values from 0 to positive infinity, and if the activations are very large and spread over a wide range, low-precision float16/int8 cannot describe such a range accurately, which causes a loss of precision. MobileNetV2 removes the ReLU6 on the final output of the block and outputs it linearly, which is beneficial because the non-zero region left after a ReLU is just a linear transformation, and ReLU can retain the complete information only when the input lies in a low-dimensional space.
In summary, the advantages of MobileNetV2 are the Linear Bottleneck and the Inverted Residual. The Linear Bottleneck removes the ReLU applied to the element-wise addition (Eltwise+) features, reducing the damage ReLU does to the features. The Inverted Residual has two benefits: 1. it reuses features; 2. in the side branch, the input dimension is first raised with a 1x1 convolution before the depthwise convolution and the ReLU are applied, which relieves the degradation of the features.
The storage module is used for storing the ship face features whose similarity values with the ship face features to be detected are larger than a preset threshold value, together with the corresponding similarity values;
and the output module is used for sorting the similarity values in a descending order and outputting the first 10 similarity values and the corresponding ship images.
The invention adopts the cosine similarity algorithm to compare the feature values; because the differences between ships are not very large, ship face images whose similarity falls below the threshold of 0.9 are filtered out. The preset feature value comparison algorithm model is as follows:
similarity = cos(θ) = (A·B) / (‖A‖ ‖B‖) = (Σ_{i=1}^{n} A_i·B_i) / (√(Σ_{i=1}^{n} A_i²) × √(Σ_{i=1}^{n} B_i²))
where A and B are the feature vectors being compared. The resulting similarity lies in the range -1 to 1: -1 means the two vectors point in exactly opposite directions, 1 means their orientations are identical, 0 usually means they are independent, and values in between indicate intermediate degrees of similarity or dissimilarity; n is the number of the ship face features in the ship face feature library.
Searching the ship image to be detected by image therefore works as follows: from the input ship image X, a corresponding feature value F, a one-dimensional vector, is obtained (every image in the library likewise has one). The feature value F is compared for similarity with the feature values in the feature library, images below the specified threshold are filtered out, and finally the results are sorted to output the 10 images most similar to X.
EXAMPLE III
An embodiment of the present invention provides an apparatus for searching a picture with a picture, and fig. 3 is a hardware architecture diagram of the apparatus for searching the picture with the picture according to the embodiment of the present invention, including an input apparatus, an input interface, a central processing unit, a memory, an output interface, and an output apparatus. The input interface, the central processing unit, the memory and the output interface are mutually connected through a bus, and the input equipment and the output equipment are respectively connected with the bus through the input interface and the output interface and further connected with other components of the equipment. Specifically, the input device receives input information from the outside and transmits the input information to the central processor through the input interface. The central processor processes the input information based on computer executable program code stored in the memory to generate output information, temporarily or permanently stores the output information in the memory, and then transmits the output information through the output interface to an output device, which outputs the output information outside of the device for use by a user.
The embodiment of the invention also provides a storage medium which stores the executable program code. The executable program code, when executed by a processor, implements the method for searching a picture by a picture as described above. In this embodiment, the storage medium may be any available medium that can be read by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk, SSD). Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used for storing the computer program and other programs and data required by the system, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Compared with the prior art, the method, the system, the equipment and the storage medium for searching images by images provided by the embodiments of the invention adopt the lightweight model MobileNetV2 as the feature extraction algorithm; the model is fast, occupies little video memory and has high precision, which makes it suitable for engineering applications. Triplet loss is used as the loss function on top of MobileNetV2, so that the ship face feature values extracted by the model are more accurate. The method searches images quickly and accurately, works in real time, and is suitable for practical engineering applications.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (8)

1. A method for searching a picture by a picture is characterized in that: the method comprises the following steps:
S1, collecting the ship image to be detected;
S2, extracting the ship face features of the ship image to be detected and setting them as the ship face features to be detected;
S3, comparing the similarity of the ship face features to be detected with a ship face feature library according to a preset feature value comparison algorithm model; the ship face feature library stores a large number of ship face features and the corresponding ship images;
S4, storing the ship face features whose similarity values with the ship face features to be detected are larger than a preset threshold value, together with the corresponding similarity values;
and S5, sorting the similarity values in descending order and outputting the first N similarity values and the corresponding ship images, wherein N is a natural number.
2. The method of claim 1, wherein: the construction method of the ship face feature library comprises the following steps:
acquiring a plurality of ship images and performing classification training;
and extracting the ship face features of the plurality of ship images, and storing the plurality of ship images and the corresponding ship face features to obtain a ship face feature library.
3. The method of claim 1, wherein: the method for extracting the ship face features is as follows: the ship image is input into a MobileNetV2 convolutional neural network model, and the MobileNetV2 convolutional neural network model identifies the ship image to obtain its ship face features.
4. A method as claimed in claim 3, wherein: the loss function of the Mobilenetv2 convolutional neural network model is as follows:
L=max(d(a,p)-d(a,n)+margin,0)
wherein a is the anchor; p is the positive, i.e. a sample of the same class as a; n is the negative, i.e. a sample of a different class from a; and margin is a fixed distance margin.
5. The method of claim 1, wherein: the preset characteristic value comparison algorithm model is as follows:
similarity = cos(θ) = (A·B) / (‖A‖ ‖B‖) = (Σ_{i=1}^{n} A_i·B_i) / (√(Σ_{i=1}^{n} A_i²) × √(Σ_{i=1}^{n} B_i²))
wherein A and B are the feature vectors being compared, and n is the number of the ship face features in the ship face feature library.
6. A system for searching a picture by a picture is characterized in that: the method comprises the following steps:
the acquisition module is used for acquiring an image of the ship to be detected;
the ship face feature extraction module is used for extracting the ship face features of the ship image to be detected and setting the ship face features as the ship face features to be detected;
the similarity comparison module is used for comparing the similarity of the to-be-detected ship face features with the ship face feature library according to a preset feature value comparison algorithm model; the ship face feature library stores a large number of ship face features and corresponding ship images;
the storage module is used for storing the ship face features whose similarity values with the ship face features to be detected are larger than a preset threshold value, together with the corresponding similarity values;
and the output module is used for sorting the similarity values in a descending order and outputting the first N similarity values and the corresponding ship images, wherein N is a natural number.
7. An apparatus for searching a picture with a picture, comprising: comprising a memory and a processor; the memory is used for storing executable program codes;
the processor is configured to read the executable program code stored in the memory to perform the method for searching a picture by a picture according to any one of claims 1 to 5.
8. A storage medium, characterized by: the storage medium stores executable program code as claimed in claim 7.
CN202110510774.9A 2021-05-11 2021-05-11 Method, system, equipment and storage medium for searching picture by picture Pending CN113111208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510774.9A CN113111208A (en) 2021-05-11 2021-05-11 Method, system, equipment and storage medium for searching picture by picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510774.9A CN113111208A (en) 2021-05-11 2021-05-11 Method, system, equipment and storage medium for searching picture by picture

Publications (1)

Publication Number Publication Date
CN113111208A true CN113111208A (en) 2021-07-13

Family

ID=76721606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510774.9A Pending CN113111208A (en) 2021-05-11 2021-05-11 Method, system, equipment and storage medium for searching picture by picture

Country Status (1)

Country Link
CN (1) CN113111208A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918537A (en) * 2019-01-18 2019-06-21 杭州电子科技大学 A kind of method for quickly retrieving of the ship monitor video content based on HBase
CN111178187A (en) * 2019-12-17 2020-05-19 武汉迈集信息科技有限公司 Face recognition method and device based on convolutional neural network
CN111553182A (en) * 2019-12-26 2020-08-18 珠海大横琴科技发展有限公司 Ship retrieval method and device and electronic equipment
CN111695572A (en) * 2019-12-27 2020-09-22 珠海大横琴科技发展有限公司 Ship retrieval method and device based on convolutional layer feature extraction
CN111612800A (en) * 2020-05-18 2020-09-01 智慧航海(青岛)科技有限公司 Ship image retrieval method, computer-readable storage medium and equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229381A (en) * 2023-05-11 2023-06-06 南昌工程学院 River and lake sand production ship face recognition method
CN116229381B (en) * 2023-05-11 2023-07-07 南昌工程学院 River and lake sand production ship face recognition method
CN116405745A (en) * 2023-06-09 2023-07-07 深圳市信润富联数字科技有限公司 Video information extraction method and device, terminal equipment and computer medium

Similar Documents

Publication Publication Date Title
CN110176027B (en) Video target tracking method, device, equipment and storage medium
CN111768432A (en) Moving target segmentation method and system based on twin deep neural network
CN103218427B (en) The extracting method of local description, image search method and image matching method
CN110555399A (en) Finger vein identification method and device, computer equipment and readable storage medium
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN113111208A (en) Method, system, equipment and storage medium for searching picture by picture
CN110765882B (en) Video tag determination method, device, server and storage medium
CN107291825A (en) With the search method and system of money commodity in a kind of video
CN108363962B (en) Face detection method and system based on multi-level feature deep learning
CN114897151A (en) Access optimization method and device, electronic equipment and storage medium
Du et al. Lightweight image super-resolution with mobile share-source network
CN116958809A (en) Remote sensing small sample target detection method for feature library migration
CN117113174A (en) Model training method and device, storage medium and electronic equipment
CN115830633A (en) Pedestrian re-identification method and system based on multitask learning residual error neural network
CN112200275B (en) Artificial neural network quantification method and device
CN115578739A (en) Training method and device for realizing IA classification model by combining RPA and AI
CN111178409B (en) Image matching and recognition system based on big data matrix stability analysis
CN114463764A (en) Table line detection method and device, computer equipment and storage medium
CN114359786A (en) Lip language identification method based on improved space-time convolutional network
CN113496228A (en) Human body semantic segmentation method based on Res2Net, TransUNet and cooperative attention
CN112084874A (en) Object detection method and device and terminal equipment
CN107766863B (en) Image characterization method and server
CN113014831B (en) Method, device and equipment for scene acquisition of sports video
CN112597329B (en) Real-time image retrieval method based on improved semantic segmentation network
Qaderi et al. A Gated Deep Model for Single Image Super-Resolution Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266000 Room 302, building 3, Office No. 77, Lingyan Road, Huangdao District, Qingdao, Shandong Province

Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

Address before: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province

Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210713