CN108108499B - Face retrieval method, device, storage medium and equipment - Google Patents

Face retrieval method, device, storage medium and equipment Download PDF

Info

Publication number
CN108108499B
CN108108499B
Authority
CN
China
Prior art keywords
face
characteristic information
convolution layer
target
retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810121581.2A
Other languages
Chinese (zh)
Other versions
CN108108499A (en)
Inventor
王川南
陈志博
张�杰
岳文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd, Tencent Cloud Computing Beijing Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810121581.2A priority Critical patent/CN108108499B/en
Publication of CN108108499A publication Critical patent/CN108108499A/en
Application granted granted Critical
Publication of CN108108499B publication Critical patent/CN108108499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a face retrieval method, device, storage medium and equipment, belonging to the technical field of deep learning. The method comprises the following steps: acquiring a target face image to be retrieved; performing feature extraction on the target face image based on the sequentially connected residual blocks in a deep residual network to obtain target face feature information, wherein each residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of a residual block points from the input of that residual block to its output; and performing face retrieval on a face database based on the target face feature information to obtain a face retrieval result, wherein the face retrieval result at least comprises an identity mark matched with the target face feature information. The invention realizes face retrieval based on a deep residual network; since the retrieval accuracy of a deep residual network is not easily affected by external factors, the face retrieval method has better stability, and the accuracy of face retrieval is ensured.

Description

Face retrieval method, device, storage medium and equipment
Technical Field
The present invention relates to the field of deep learning technologies, and in particular, to a face retrieval method, device, storage medium, and equipment.
Background
Face retrieval is an emerging biometric recognition technology that integrates computer image processing and biostatistics, and currently has wide application prospects.
Most existing face retrieval systems are implemented based on traditional machine learning, such as eigenface-based face retrieval methods, or iterative algorithms based on combinations of features such as histograms or colors.
The retrieval accuracy of face retrieval methods based on traditional machine learning is easily affected by external factors; for example, retrieval results are seriously affected when a user wears glasses, illumination changes, or occlusions appear. Consequently, existing face retrieval methods have poor stability, their retrieval accuracy is not high enough, and the effect is poor.
Disclosure of Invention
The embodiments of the present invention provide a face retrieval method, device, storage medium, and equipment, which solve the problem of low retrieval accuracy caused by the poor stability of face retrieval methods in the related art. The technical scheme is as follows:
in one aspect, a face retrieval method is provided, the method including:
acquiring a target face image to be retrieved;
performing feature extraction on the target face image based on the sequentially connected residual blocks in a deep residual network to obtain target face feature information, wherein each residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of a residual block points from the input of that residual block to its output; and
performing face retrieval in a face database based on the target face feature information to obtain a face retrieval result, wherein the face database stores the correspondence between face feature information and identity marks, and the face retrieval result at least comprises the identity mark matched with the target face feature information.
In another aspect, a face retrieval device is provided, the device including:
the acquisition module is used for acquiring a target face image to be retrieved;
the feature extraction module is configured to perform feature extraction on the target face image based on the sequentially connected residual blocks in the deep residual network to obtain target face feature information, wherein each residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of a residual block points from the input of that residual block to its output;
the retrieval module is configured to perform face retrieval in a face database based on the target face feature information to obtain a face retrieval result, wherein the face database stores the correspondence between face feature information and identity marks, and the face retrieval result at least comprises the identity mark matched with the target face feature information.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and each item of stored face feature information; sort the stored face feature information according to similarity; determine first candidate face feature information whose similarity ranks in the top N positions, where N is a positive integer; and take the identity marks and similarities corresponding to the first candidate face feature information as the face retrieval result.
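The top-N embodiment above (compare, sort by similarity, keep the first N) can be sketched as follows. This is a minimal illustration assuming cosine similarity over feature vectors; the patent does not fix a particular similarity measure, and the function and variable names are hypothetical.

```python
import numpy as np

def retrieve_top_n(query_feat, db_feats, db_ids, n=5):
    """Rank the stored face feature information by cosine similarity to
    the query and return the top-N (identity mark, similarity) pairs."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                      # similarity to every stored feature
    order = np.argsort(-sims)[:n]      # indices of the N most similar entries
    return [(db_ids[i], float(sims[i])) for i in order]
```

The returned list is already ordered by decreasing similarity, matching the sorting step described above.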
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and each item of stored face feature information; obtain a similarity threshold; determine second candidate face feature information whose similarity is greater than the similarity threshold; and take the identity marks and similarities corresponding to the second candidate face feature information as the face retrieval result.
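The threshold-based embodiment can be sketched in the same spirit; cosine similarity and all names are again illustrative assumptions rather than details fixed by the patent.

```python
import numpy as np

def retrieve_above_threshold(query_feat, db_feats, db_ids, threshold):
    """Return every (identity mark, similarity) pair whose similarity to
    the query exceeds the client-supplied similarity threshold."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q
    keep = np.flatnonzero(sims > threshold)   # second candidate set
    return [(db_ids[i], float(sims[i])) for i in keep]
```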
In another embodiment, the apparatus further comprises:
the system comprises a building module, a searching module and a searching module, wherein the building module is used for searching images under a target path, and the target path is at least one of a local path or a remote path; starting multithreading, and extracting features of the searched image batches by using the started multithreading based on all residual blocks connected in sequence in the depth residual network; acquiring an identity matched with the extracted face characteristic information; and storing the corresponding relation between the extracted face characteristic information and the identity mark in the face database.
In another embodiment, the establishing module is further configured to periodically acquire incrementally updated images under the target path; start multiple threads, and use the started threads to perform batch feature extraction on the updated images based on the sequentially connected residual blocks in the deep residual network; acquire identity marks matched with the newly extracted face feature information; and update the correspondence between the newly extracted face feature information and the identity marks into the face database.
In another embodiment, the apparatus further comprises:
the receiving module is used for receiving a second face retrieval request sent by the terminal, wherein the second face retrieval request comprises a target identity;
the sending module is configured to send the specified face image matched with the target identity mark to the terminal if the target identity mark is included in the face database;
the receiving module is further used for receiving an operation processing request for the specified face image sent by the terminal;
and the processing module is used for carrying out operation processing on the specified face image according to the operation processing request.
In another aspect, a storage medium is provided, where at least one instruction is stored, where the at least one instruction is loaded and executed by a processor to implement the face retrieval method described above.
In another aspect, an apparatus for face retrieval is provided, the apparatus comprising a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the face retrieval method described above.
The technical scheme provided by the embodiment of the invention has the beneficial effects that:
the embodiments of the present invention realize face retrieval based on a deep residual network. Since the retrieval accuracy of a deep residual network is not easily affected by external factors, the face retrieval method has better stability, which in turn ensures the accuracy of face retrieval and yields a better effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is a schematic structural diagram of an implementation environment related to a face retrieval method according to an embodiment of the present invention;
Fig. 1B is a schematic structural diagram of a face retrieval system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a residual block of a deep residual network according to an embodiment of the present invention;
fig. 3 is a flowchart of a first face retrieval method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a retrieval process for performing face retrieval according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a deep residual network according to an embodiment of the present invention;
fig. 6 is a flowchart of a second face retrieval method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a face retrieval device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an apparatus for face retrieval according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Before explaining the embodiments of the present invention in detail, some terms related to the embodiments of the present invention are explained.
Deep learning: this concept stems from the study of artificial neural networks. For example, a multilayer perceptron with multiple hidden layers is a deep learning structure. Deep learning forms more abstract high-level features by combining low-level features, so as to discover a distributed feature representation of the data.
Alternatively, deep learning is a method based on representation learning of data. An observation (e.g., an image) can be represented in many ways, such as by a vector of intensity values for each pixel, or more abstractly as a series of edges, regions of particular shapes, and so on. Learning tasks from examples, such as face recognition or facial expression recognition, is easier with certain specific representations. A benefit of deep learning is that it replaces manually engineered features with efficient algorithms for unsupervised or supervised feature learning and hierarchical feature extraction.
Deep residual network (ResNet): the depth of a neural network is very important to its performance, so ideally, the deeper the network the better, as long as the network can still be fitted. However, an optimization problem encountered in actual training is that as the depth of the neural network increases, gradients tend to vanish as they propagate backward (gradient vanishing), making the model difficult to optimize, which in turn reduces the accuracy of the network. Put another way, when the depth of the neural network keeps increasing, a degradation problem occurs: the accuracy first rises and then saturates, and then decreases as the depth continues to increase.
Based on the above description, when the number of network layers reaches a certain point, the performance of the network saturates and then degrades. This degradation is not caused by overfitting, because both training accuracy and testing accuracy decrease, which means the neural network becomes difficult to train beyond a certain depth. ResNet was proposed to solve this problem of performance degradation after the network is deepened. Specifically, ResNet proposes a deep residual learning framework to address the degradation caused by increased depth.
Suppose a shallower network has achieved saturated accuracy; if several identity mapping layers are appended behind this network, the error should at least not increase, i.e., a deeper network should not bring an increase in error on the training set. This idea of using an identity mapping to pass the output of a previous layer directly to later layers is a source of inspiration for ResNet.
For more explanation of ResNet, see description below.
Identity mapping: for any set A, if the mapping f: A→A is defined as f(a) = a, i.e., each element a in A corresponds to itself, then f is called the identity mapping on A.
RESTful architecture: RESTful refers to a style of software architecture and design, rather than a standard; it provides a set of design principles and constraints. It is mainly used for software in which clients interact with servers. Software designed in this style can be more concise and better layered, and mechanisms such as caching are easier to implement.
The RESTful architecture is an Internet software architecture that adopts a client/server mode and is built on distributed systems, communicating over the Internet with characteristics such as low latency and high concurrency.
It should be noted that face retrieval services in the related art are often applied to dynamic recognition scenarios, such as access control, device unlocking, mobile payment, and attendance management. In practice, beyond dynamic recognition, static retrieval is often required in fields such as surveillance and control, criminal investigation and case handling, and security activities, for example performing face retrieval on an input static image to find missing persons or pursue fugitives. The face retrieval method provided by the embodiments of the present invention can be applied to such scenarios with static retrieval requirements. Of course, with corresponding improvements, the face retrieval method provided by the embodiments of the present invention is also applicable to dynamic recognition scenarios, which the embodiments of the present invention do not specifically limit.
The following describes an implementation environment related to the face retrieval method provided by the embodiments of the present invention.
Referring to fig. 1A, a schematic structural diagram of an implementation environment related to the face retrieval method according to an embodiment of the present invention is shown. As shown in fig. 1A, the implementation environment includes a terminal 101, a face retrieval system 102, and a face database 103. The face retrieval system 102 is configured as a server; the face retrieval system 102 and the face database 103 may be configured on the same server or on different servers, which the embodiment of the present invention does not specifically limit. Types of the terminal 101 include, but are not limited to, smartphones, desktop computers, notebook computers, tablet computers, and the like.
In the embodiment of the present invention, the terminal 101 and the face retrieval system 102 communicate over the Internet based on the RESTful architecture mode, that is, both use a client/server mode. Because the embodiment of the present invention provides a RESTful standard protocol interface based on the RESTful architecture, one server can be configured to be accessed by multiple clients, which is convenient and fast.
In addition, since the data stored in the face database 103 changes in real time, deploying the face retrieval service on a distributed system not only saves considerable resources and workload, but also allows many requests to be processed quickly and concurrently. This avoids the heavy database update burden incurred when face retrieval software is deployed independently on each device. For example, most current software of this kind is an embedded application installed on devices such as smartphones and tablet computers; in that mode, when a data update occurs, it must be configured separately on a large number of devices, making the database update task enormous.
Based on the above description, the face retrieval method provided by the embodiments of the present invention is designed from two aspects: the specific face retrieval mode and the software architecture. On the one hand, a ResNet network structure is used as the specific algorithm for face retrieval, and face features are learned with a deeper number of network layers, so that more accurate face matching and comparison results are obtained. On the other hand, the embodiments of the present invention adopt a software architecture based on the RESTful standard, which can not only meet static retrieval requirements but also conveniently configure a large-scale distributed retrieval system, and has high practical value in fields such as surveillance and control, criminal investigation, and security activities.
In another embodiment, referring to fig. 1B, the face retrieval system provided in the embodiment of the present invention mainly includes a face retrieval service module and a feature extraction service module.
The face retrieval service module is mainly used for warehousing and retrieving face feature information; the feature extraction service module is mainly used for extracting features from a large number of images.
The face retrieval service module can realize the warehousing of face feature information in the form of a feature file. The feature file contains identity marks and face feature information in one-to-one correspondence. In the embodiment of the present invention, face feature information can be put into storage by directly calling the storage interface of the face retrieval service module.
When performing face feature information retrieval, the face retrieval service module may perform operations including, but not limited to: automatically base64-encoding the image after it is input at the client and comparing face similarity against the library; in addition, if the client inputs a similarity threshold along with the image, the face retrieval service module retrieves faces in the library whose similarity is higher than the threshold; furthermore, if the client inputs an identity mark, the face retrieval service module queries whether the identity mark is in the library, and can also perform operation processing, such as deletion or update, on the face image corresponding to the identity mark according to the client's operation request.
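The base64 encoding step applied to the client-input image can be illustrated with Python's standard library; the function names are illustrative, not part of the patent's interface.

```python
import base64

def encode_image_for_request(image_bytes):
    """Base64-encode raw image bytes so they can be carried in the body
    of an HTTP retrieval request, as described for the client input."""
    return base64.b64encode(image_bytes).decode("ascii")

def decode_image_from_request(encoded):
    """Server-side inverse: recover the original image bytes."""
    return base64.b64decode(encoded.encode("ascii"))
```

Encoding and then decoding returns the original bytes unchanged, which is what lets binary image data travel safely inside a text-based request body.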
For the feature extraction service module, the embodiment of the present invention adopts a ResNet network to extract features of face images. Because the ResNet network introduces a residual network structure, the gradient vanishing problem caused by overly deep networks is solved, feature learning of face images can be carried out with a deeper network structure, and the accuracy of face retrieval is ensured. In the embodiment of the present invention, a face image refers to an image that includes a face.
The face feature information stored in the face database is obtained by feature extraction from the face images stored under the target path. The target path may include a local path or a remote path, and the remote path may be an HTTP (HyperText Transfer Protocol) path or an FTP (File Transfer Protocol) path, which the embodiment of the present invention does not specifically limit. It should be noted that the feature extraction service module starts multiple threads to perform batch feature extraction operations.
In summary, the embodiments of the present invention adopt the deep residual network ResNet for face retrieval, which solves the problem that gradient vanishing becomes more and more obvious, and the training effect worse, as the network gets deeper. Compared with other network models, the ResNet network allows the number of network layers to be very deep, even up to 1000 layers, so that a good learning effect on face feature information can be obtained. In addition, the embodiments of the present invention integrate the algorithm and the platform, provide the face retrieval service in the form of an HTTP service, and expose a RESTful standard protocol interface; once the face retrieval service is configured on a server, a client can complete face retrieval simply by accessing that server.
The deep residual network is explained in detail below.
Assume the input to a section of a neural network is x and the desired underlying mapping is H(x), and let the stacked nonlinear layers fit another mapping F(x) = H(x) − x; the original mapping H(x) then becomes F(x) + x. Assuming it is easier to optimize the residual mapping F(x) than the original mapping H(x), we first learn the residual mapping F(x); the original mapping is then F(x) + x, which can be implemented by a shortcut connection.
Fig. 2 shows the schematic structure of a residual block. As shown in fig. 2, any residual block of the deep residual network includes an identity mapping and at least two convolution layers, where the identity mapping of a residual block points from the input of the residual block to its output.
That is, by adding an identity mapping, the originally desired function H(x) is converted into F(x) + x. Although the two expressions have the same effect, the difficulty of optimization differs: through this reformulation, the problem is decomposed into learning a residual, which optimizes and trains much better. As shown in fig. 2, the residual block is realized through the shortcut connection, which superimposes the input and the output of the residual block. This greatly speeds up model training and improves the training effect without adding extra parameters or computation to the network, and this simple structure handles the degradation problem well as the number of model layers deepens.
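A minimal numeric sketch of the residual block of fig. 2, using fully connected weights in place of convolutions purely to keep the example short (an assumption, not the patent's layer type): the stacked layers compute the residual F(x), and the shortcut adds the input back, so the block outputs F(x) + x.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two-layer residual block: the stacked layers learn the residual
    mapping F(x); the shortcut connection adds the input x back."""
    f = relu(x @ w1) @ w2      # residual mapping F(x)
    return relu(f + x)         # shortcut: superimpose input and output
```

Note that when the learned weights are zero, F(x) = 0 and the block reduces to the identity mapping, which is exactly why appending such blocks should not increase the training error.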
Put another way, H(x) is a desired complex underlying mapping that is difficult to learn. If the input x is passed directly to the output as an initial result through the shortcut connection of fig. 2, then the target to be learned becomes F(x) = H(x) − x. The ResNet network thus changes the learning target: instead of learning the complete output, it learns the difference between the optimal solution H(x) and the identity mapping x, i.e., the residual mapping F(x). It should be noted that "shortcut" literally means a short path; in this context it refers to cross-layer skip connections. The shortcut connections in the ResNet network have no weights; after x is passed through, each residual block only learns the residual mapping F(x). Because such a network is stable and easy to train, its performance gradually improves as the depth increases; when the network is deep enough, optimizing the residual mapping F(x) = H(x) − x is easier than directly optimizing the complex nonlinear mapping H(x).
Based on the above description, compared with a conventional directly connected convolutional neural network, the ResNet network has many branches that bypass the input directly to later layers, so the later layers can learn the residual directly; this structure is called a shortcut connection. The ResNet network alleviates, to a certain extent, the information loss that occurs when a traditional convolution layer or fully connected layer transmits information. By directly detouring the input to the output, the integrity of the information is protected, and the whole network only needs to learn the difference between input and output, which simplifies the learning objective and reduces its difficulty.
The present invention introduces a deep residual network; even when the number of network layers is very deep, the degradation problem does not occur, and the error rate of face retrieval is greatly reduced.
Fig. 3 is a flowchart of a face retrieval method according to an embodiment of the present invention. Referring to fig. 3, the method provided by the embodiment of the invention includes:
301. the face retrieval system searches for images under a target path, starts multiple threads, and uses the started threads to perform batch feature extraction on the searched images based on the sequentially connected residual blocks in a deep residual network.
As shown in fig. 4, the target path is at least one of a local path or a remote path. The remote path includes, but is not limited to, an HTTP path and an FTP path.
In the embodiment of the invention, the images stored under the target path are used for constructing a face database, and each image comprises a face. In addition, in order to accelerate the feature extraction speed, the embodiment of the invention starts multiple threads to perform feature extraction on the images searched under the target path in batches.
For any image under the target path, the embodiment of the present invention first performs face localization in the image, then crops out the face region for feature learning based on the sequentially connected residual blocks in the deep residual network; that is, the embodiment of the present invention performs feature extraction only on the face region. The dimension of the extracted face feature information may be 512 or 1024, which the embodiment of the present invention does not limit.
In an embodiment of the present invention, each residual block includes a first convolution layer, a second convolution layer, and a third convolution layer. The three layers are connected in sequence; the sizes of the first and third convolution layers are the same and smaller than that of the second convolution layer, and the identity mapping points from the input of the first convolution layer to the output of the third convolution layer.
That is, in consideration of computational cost, the embodiment of the present invention optimizes the residual block shown in fig. 2, in which each two-layer residual block contains two convolution layers with the same number of output channels.

Referring to fig. 5, taking the case where the optimized residual block contains 3 convolution layers as an example, the sizes of the first and third convolution layers may be 1*1, and the size of the second convolution layer may be 3*3. The first 1*1 convolution layer reduces the dimensionality, so that the intermediate 3*3 convolution layer operates at a lower computational cost, and the second 1*1 convolution layer then restores the original dimensionality. This reduce-then-restore design maintains accuracy while reducing the amount of computation. In fig. 5 the input and output dimensions are the same; if they differ, a linear mapping transformation can be applied to the input x so that it can be connected to the following residual block.
In summary, when features are extracted from a face image using the depth residual network, the face image is input into the first residual block, and each residual block in the network performs the following operations: for any residual block, receive the output of the previous residual block, and perform feature extraction on it based on the first convolution layer, the second convolution layer, and the third convolution layer; then obtain the output of the third convolution layer and pass both it and the output of the previous residual block to the next residual block. When the output of the final residual block is passed through the fully connected layer, the face feature information of the face image is obtained.
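As a minimal NumPy sketch of the three-layer bottleneck structure described above (1*1 reduce, 3*3, 1*1 restore, plus the identity shortcut) — the channel widths and ReLU placement are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in) -> (C_out, H, W).
    # A 1*1 convolution is a per-pixel channel mixing.
    return np.tensordot(w, x, axes=([1], [0]))

def conv3x3(x, w):
    # x: (C_in, H, W); w: (C_out, C_in, 3, 3); zero padding keeps H, W.
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(3):
        for j in range(3):
            patch = xp[:, i:i + h, j:j + wd]  # shifted view of the input
            out += np.tensordot(w[:, :, i, j], patch, axes=([1], [0]))
    return out

def bottleneck_block(x, w1, w2, w3):
    """1*1 reduce -> 3*3 -> 1*1 restore, plus the identity shortcut."""
    y = np.maximum(conv1x1(x, w1), 0)   # 1*1, reduce channels, ReLU
    y = np.maximum(conv3x3(y, w2), 0)   # 3*3 at the reduced width
    y = conv1x1(y, w3)                  # 1*1, restore channels
    return np.maximum(y + x, 0)         # add identity mapping, ReLU
```

When the input and output dimensions differ, `x` would first pass through a linear mapping (another 1*1 convolution) before the addition, matching the note about fig. 5.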
302. The face retrieval system acquires an identity mark matched with the extracted face feature information, and stores the corresponding relation between the extracted face feature information and the identity mark in a face database.
In the embodiment of the invention, after the face feature information of the images stored under the target path is extracted, the identity matched with each piece of face feature information is also obtained, to facilitate subsequent identification. The identity includes, but is not limited to, name, age, gender, education level, marital status, work address, home address, etc., which are not particularly limited in the embodiments of the present invention. The one-to-one correspondence between the face feature information and the identity may be stored in the face database in the form of a feature file.
It should be noted that, the above steps 301 and 302 are a process of constructing a face database. After the face database is built, the face retrieval system may process the first face retrieval request initiated by each terminal based on the face database, and the specific process is shown in step 303 below.
303. The face retrieval system receives a first face retrieval request sent by any terminal, wherein the first face retrieval request comprises a target face image.
In the embodiment of the invention, the terminal can specifically adopt a POST method when sending the first face retrieval request to the face retrieval system, and the embodiment of the invention is not particularly limited to this.
304. The face retrieval system performs feature extraction on the target face image based on each residual block which is sequentially connected in the depth residual network to obtain target face feature information, and performs face retrieval on a face database based on the target face feature information to obtain a face retrieval result.
In the embodiment of the invention, before feature extraction is performed on the target face image, the target face image may first be decoded, and feature extraction is then performed on the decoded image based on the residual blocks. For the specific feature extraction manner, including locating the face position in the image and performing feature learning on the face region, reference may be made to step 301, which will not be repeated here.
The image may be transmitted in Base64 encoding, in which case the decoding is Base64 decoding; this is not specifically limited in the embodiment of the present invention.
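The Base64 round trip between the terminal and the face retrieval system can be sketched with the standard library; the function names are illustrative:

```python
import base64

def encode_image_for_request(image_bytes):
    # Client side: Base64-encode the raw image bytes before sending.
    return base64.b64encode(image_bytes).decode("ascii")

def decode_request_image(b64_string):
    # Server side: recover the raw image bytes before feature extraction.
    return base64.b64decode(b64_string)
```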
In another embodiment, when face retrieval is performed in the face database based on the target face feature information, the following two methods are included, but not limited to:
First: TopN mode
(a) And the face retrieval system compares the target face characteristic information with the face characteristic information stored in the face database to obtain the similarity between the target face characteristic information and the stored face characteristic information.
The similarity reflects how similar the target face image is to an image stored in the face database: the higher the similarity value, the more similar the two images are.
(b) And sorting the stored face characteristic information according to the similarity.
For example, the stored face feature information may be sorted in descending order of similarity, which is not particularly limited in the embodiments of the present invention.
(c) And determining the first candidate face feature information whose similarity ranks in the top N.
The value of N may be preset by the face retrieval system, and the value of N may be a positive integer, for example, may be 5, 10, 15, or the like, which is not particularly limited in the embodiment of the present invention. Taking the value of N as 5 as an example, the first candidate face feature information includes 5 face feature information.
(d) And taking the identity mark and the similarity corresponding to the first candidate face characteristic information as a face retrieval result.
Taking N as 5 as an example, the identities and similarities corresponding to the top 5 entries in the ranking are used as the face retrieval result.
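Steps (a)–(d) of the TopN mode can be sketched as below. The patent does not specify the similarity measure, so cosine similarity is used here as an assumption; the database layout (a list of identity/feature pairs) is likewise illustrative.

```python
import math

def cosine_similarity(a, b):
    # Assumed similarity measure; the patent leaves the metric open.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_n_retrieval(target_feature, database, n):
    """database: list of (identity, feature) pairs. Returns the N most
    similar entries as (identity, similarity), sorted high to low."""
    scored = [(identity, cosine_similarity(target_feature, feat))
              for identity, feat in database]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:n]
```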
Second: similarity threshold mode
(a) And comparing the target face characteristic information with the face characteristic information stored in the face database to obtain the similarity between the target face characteristic information and the stored face characteristic information.
(b) And obtaining a similarity threshold sent by the terminal.
In the embodiment of the invention, the terminal can also carry the similarity threshold when sending the first face retrieval request, so that the face retrieval system can feed back the face retrieval result according to the user-defined threshold. The terminal may provide the user with an interface for inputting or setting the similarity threshold, which is not specifically limited in the embodiments of the present invention.
It should be noted that, besides receiving the similarity threshold from the terminal, the face retrieval system may also define its own similarity threshold, which is not limited in the embodiment of the present invention.
(c) And determining second candidate face characteristic information with similarity greater than a similarity threshold.
In theory, the second candidate face feature information includes all face feature information whose similarity is greater than the similarity threshold. However, if the number of such entries is too large, for example exceeds a certain cap, the face retrieval system may instead select only the face feature information whose similarity is greater than a specified value, or the top M by similarity, as the second candidate face feature information, which is not specifically limited in the embodiment of the present invention.
(d) And taking the identity mark and the similarity corresponding to the second candidate face characteristic information as a face retrieval result.
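The threshold mode, including the optional "top M" cap for oversized result sets, can be sketched under the same assumed cosine metric and database layout as the TopN sketch (repeated here so the fragment is self-contained):

```python
import math

def cosine_similarity(a, b):
    # Assumed metric; repeated so this sketch stands alone.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def threshold_retrieval(target_feature, database, threshold, max_results=None):
    """Return the (identity, similarity) entries above the threshold,
    optionally capped at the top max_results (the "top M" fallback the
    text describes for overly large result sets)."""
    scored = [(identity, cosine_similarity(target_feature, feat))
              for identity, feat in database]
    hits = [item for item in scored if item[1] > threshold]
    hits.sort(key=lambda item: item[1], reverse=True)
    if max_results is not None:
        hits = hits[:max_results]
    return hits
```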
305. And the face retrieval system sends the obtained face retrieval result to the terminal.
In the embodiment of the invention, the face retrieval system may optionally send the face retrieval result to the terminal in JSON (JavaScript Object Notation) form, which is not particularly limited in the embodiment of the invention.
According to the method provided by the embodiment of the invention, face retrieval is realized based on the depth residual network and a distributed software architecture. The retrieval accuracy of the depth residual network is not easily affected by external factors, so the method has better stability, which in turn ensures the accuracy of face retrieval. In addition, the distributed software architecture not only saves a large amount of resources and workload, but also can rapidly process a large number of concurrent face retrieval requests, achieving a good effect.
In another embodiment, the embodiment of the invention also supports the incremental updating of the face database. The specific incremental update process may be as follows:
(1) The face retrieval system periodically acquires images updated incrementally under the target path.
(2) The face retrieval system starts multiple threads, and uses the started threads to perform feature extraction on the updated images in batches, based on the residual blocks connected in sequence in the depth residual network.
It should be noted that if the number of incrementally updated images is small, multithreading need not be started for processing, which is not particularly limited in the embodiment of the present invention.
(3) And the face retrieval system acquires the identity which is matched with the newly extracted face characteristic information.
(4) And the face retrieval system updates the corresponding relation between the newly extracted face characteristic information and the identity mark into a face database.
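The incremental update steps (1)–(4) can be sketched as a periodic scan that processes only the newly appeared images. The database layout and the two callables are assumptions standing in for the network and the identity-matching step:

```python
import os

def list_images(target_path):
    # Assumed extension set, as in the initial construction step.
    exts = {".jpg", ".jpeg", ".png"}
    return {os.path.join(r, f)
            for r, _d, fs in os.walk(target_path)
            for f in fs if os.path.splitext(f)[1].lower() in exts}

def incremental_update(face_db, seen_paths, target_path,
                       extract_feature, lookup_identity):
    """Merge only the images that appeared since the last scan.
    extract_feature / lookup_identity are hypothetical callables for
    the depth residual network and the identity-matching step."""
    new_paths = list_images(target_path) - seen_paths
    for path in new_paths:
        face_db[path] = (extract_feature(path), lookup_identity(path))
    seen_paths |= new_paths          # mark these images as processed
    return len(new_paths)
```

A scheduler (cron job, timer thread, etc.) would call `incremental_update` at the chosen period.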
In another embodiment, the embodiment of the present invention further supports handling other transactions through face retrieval requests; for example, the user can query whether an identity is stored in the database. Referring to fig. 6, the detailed steps are as follows:
601. the face retrieval system receives a second face retrieval request sent by the terminal, wherein the second face retrieval request comprises a target identity.
602. If the face database comprises the target identity, the face retrieval system sends the appointed face image matched with the target identity to the terminal.
603. The face retrieval system receives an operation processing request for the specified face image sent by the terminal, and performs operation processing on the specified face image according to the operation processing request.
The operation processing request may be a request for deleting the specified face image, or may be a request for replacing the specified face image with another face image, and accordingly, the operation processing may be either a deletion processing or an update processing, which is not particularly limited in the embodiment of the present invention.
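The delete/update handling of steps 601–603 can be sketched as below. The request format and the mapping of identities to stored face images are illustrative assumptions, not formats defined by the patent:

```python
def handle_operation(face_db, target_identity, request):
    """Apply a delete or update request to the specified face entry.
    face_db maps identity -> stored face image (e.g. bytes or a path);
    request is an assumed dict such as {"op": "delete"} or
    {"op": "update", "new_image": ...}."""
    if target_identity not in face_db:
        return False                      # step 602: identity not in library
    op = request.get("op")
    if op == "delete":
        del face_db[target_identity]      # deletion processing
    elif op == "update":
        face_db[target_identity] = request["new_image"]  # update processing
    else:
        return False                      # unknown operation
    return True
```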
It should be noted that, after the face image is processed according to the above embodiment, and the face database is updated, the face searching method described in the corresponding embodiment of fig. 3 may be executed by using the updated face database.
In another embodiment, taking the search for a lost child as an example, the face retrieval method provided by the embodiment of the invention can be summarized in the following steps:
1. the client sends images of the missing child to the face retrieval system.
2. The face retrieval system performs feature extraction on images of missing children based on the depth residual error network.
3. The face retrieval system performs face retrieval in a face database based on the extracted face feature information.
4. And the face retrieval system returns the obtained face retrieval result to the client, wherein the face retrieval result at least comprises an identity mark searched for the lost child.
5. The client displays the face retrieval result returned by the server, so that the user can identify, confirm, or continue the search.
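The client side of step 1 can be sketched as assembling the first face retrieval request body; the JSON field names (`image`, `threshold`) are illustrative assumptions, since the patent does not fix a wire format:

```python
import base64
import json

def build_retrieval_request(image_bytes, similarity_threshold=None):
    """Assemble the JSON body of a first face retrieval request.
    The image is Base64-encoded as described in the text; the optional
    threshold field covers the user-defined similarity threshold mode."""
    body = {"image": base64.b64encode(image_bytes).decode("ascii")}
    if similarity_threshold is not None:
        body["threshold"] = similarity_threshold
    return json.dumps(body)
```

The client would then POST this body to the face retrieval system and render the returned JSON result list.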
Fig. 7 is a schematic structural diagram of a face retrieval device according to an embodiment of the present invention. Referring to fig. 7, the apparatus includes: an acquisition module 701, a feature extraction module 702 and a retrieval module 703.
The acquiring module 701 is configured to acquire a target face image to be retrieved; the feature extraction module 702 is configured to perform feature extraction on the target face image based on the residual blocks connected in sequence in the depth residual network to obtain target face feature information, where any residual block includes an identity mapping and at least two convolution layers, the identity mapping of a residual block pointing from the input end of that residual block to its output end; the retrieval module 703 is configured to perform face retrieval in a face database based on the target face feature information to obtain a face retrieval result, where the face database stores the correspondence between face feature information and identities, and the face retrieval result at least includes the identity matched with the target face feature information.
According to the device provided by the embodiment of the invention, face retrieval is realized based on the depth residual network. The retrieval accuracy of the depth residual network is not easily affected by external factors, so the device has better stability, which in turn ensures the accuracy of face retrieval and achieves a good effect.
In another embodiment, the acquiring module is further configured to receive a first face retrieval request sent by the terminal, and acquire the target face image from the first face retrieval request;
the apparatus further comprises: and the sending module is used for sending the face retrieval result to the terminal after the face retrieval result is obtained.
In another embodiment, a first convolution layer, a second convolution layer, and a third convolution layer of the at least two convolution layers are sequentially connected, the first convolution layer being of a size that is consistent with the third convolution layer, the first convolution layer being of a size that is smaller than the second convolution layer, the identity mapping being directed from an input of the first convolution layer to an output of the third convolution layer;
the feature extraction module is further used for inputting the target face image into a first residual block of the depth residual network; for any residual block, receiving the output of the previous residual block, and performing feature extraction on the output of the previous residual block based on the first convolution layer, the second convolution layer and the third convolution layer; acquiring the output of the third convolution layer, and transmitting the output of the third convolution layer and the output of the previous residual block to a next residual block; and obtaining the output of the last residual block of the depth residual network to obtain the target face characteristic information.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database, so as to obtain a similarity between the target face feature information and the stored face feature information; sorting the stored face characteristic information according to the similarity; determining first candidate face characteristic information of which the similarity is ranked in the front N bits, wherein N is a positive integer; and taking the identity mark and the similarity corresponding to the first candidate face characteristic information as the face retrieval result.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database, so as to obtain a similarity between the target face feature information and the stored face feature information; obtaining a similarity threshold; determining second candidate face feature information with similarity greater than the similarity threshold; and taking the identity mark and the similarity corresponding to the second candidate face characteristic information as the face retrieval result.
In another embodiment, the apparatus further comprises:
the system comprises a building module, a searching module and a searching module, wherein the building module is used for searching images under a target path, and the target path is at least one of a local path or a remote path; starting multithreading, and extracting features of the searched image batches by using the started multithreading based on all residual blocks connected in sequence in the depth residual network; acquiring an identity matched with the extracted face characteristic information; and storing the corresponding relation between the extracted face characteristic information and the identity mark in the face database.
In another embodiment, the establishing module is further configured to periodically acquire an image updated incrementally under the target path; starting multithreading, and extracting features of the updated images in batches based on all residual blocks connected in sequence in the depth residual network by using the started multithreading; acquiring an identity matched with the newly extracted face characteristic information; and updating the corresponding relation between the newly extracted face characteristic information and the identity mark into the face database.
In another embodiment, the feature extraction module is further configured to decode the target face image to obtain a decoded image; and extracting the characteristics of the decoded image based on each residual block which is connected in sequence in the depth residual network.
In another embodiment, the apparatus further comprises:
the receiving module is used for receiving a second face retrieval request sent by the terminal, wherein the second face retrieval request comprises a target identity;
the sending module is used for sending the appointed face image matched with the target identity to the terminal if the target identity is included in the face database;
the receiving module is further used for receiving an operation processing request for the specified face image sent by the terminal;
And the processing module is used for carrying out operation processing on the specified face image according to the operation processing request.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
It should be noted that: in the face searching device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the face searching device provided in the above embodiment and the face searching method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not repeated here.
Fig. 8 is a schematic structural diagram of an apparatus for face retrieval according to an embodiment of the present invention. The apparatus 800 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 801 and one or more memories 802, where at least one instruction is stored in the memories 802 and is loaded and executed by the processors 801 to implement the face retrieval method provided in the foregoing method embodiments. Of course, the apparatus may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing its functions, which are not described herein.
In an exemplary embodiment, a computer readable storage medium is also provided, such as a memory comprising instructions executable by a processor in a terminal to perform the face retrieval method of the above embodiments. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (13)

1. A face retrieval method, the method comprising:
acquiring a target face image to be retrieved;
inputting the target face image into a first residual block of a depth residual network, wherein the depth residual network comprises all residual blocks which are sequentially connected, any one residual block comprises an identity mapping and at least two convolution layers, the first convolution layer, the second convolution layer and the third convolution layer in the at least two convolution layers are sequentially connected, the first convolution layer is consistent with the third convolution layer in size, the first convolution layer is smaller than the second convolution layer in size, and the identity mapping points to the output end of the third convolution layer from the input end of the first convolution layer;
For any one residual block, receiving the output of the previous residual block, and performing feature extraction on the output of the previous residual block based on the first convolution layer, the second convolution layer and the third convolution layer;
acquiring the output of the third convolution layer, and transmitting the output of the third convolution layer and the output of the previous residual block to a next residual block;
obtaining the output of the last residual block of the depth residual network to obtain target face characteristic information;
and carrying out face retrieval on the basis of the target face characteristic information in a face database to obtain a face retrieval result, wherein the face database stores the corresponding relation between the face characteristic information and the identity, and the face retrieval result at least comprises the identity matched with the target face characteristic information.
2. The method according to claim 1, wherein the acquiring the target face image to be retrieved comprises:
receiving a first face retrieval request sent by a terminal, and acquiring the target face image from the first face retrieval request;
after obtaining the face retrieval result, the method further comprises the following steps: and sending the face retrieval result to the terminal.
3. The method according to claim 1, wherein said performing face retrieval in a face database based on said target face feature information comprises:
comparing the target face characteristic information with the face characteristic information stored in the face database to obtain the similarity between the target face characteristic information and the stored face characteristic information;
sorting the stored face characteristic information according to the similarity;
determining first candidate face characteristic information of which the similarity is ranked in the front N bits, wherein N is a positive integer;
and taking the identity mark and the similarity corresponding to the first candidate face characteristic information as the face retrieval result.
4. The method according to claim 1, wherein said performing face retrieval in a face database based on said target face feature information comprises:
comparing the target face characteristic information with the face characteristic information stored in the face database to obtain the similarity between the target face characteristic information and the stored face characteristic information;
obtaining a similarity threshold;
determining second candidate face feature information with similarity greater than the similarity threshold;
And taking the identity mark and the similarity corresponding to the second candidate face characteristic information as the face retrieval result.
5. The method according to any one of claims 1 to 4, further comprising:
performing image searching under a target path, wherein the target path is at least one of a local path or a remote path;
starting multithreading, and using the started multithreading to perform feature extraction on the found images in batches based on all residual blocks connected in sequence in the depth residual network;
acquiring an identity matched with the extracted face characteristic information;
and storing the corresponding relation between the extracted face characteristic information and the identity mark in the face database.
6. The method of claim 5, wherein the method further comprises:
periodically acquiring an image updated in an increment under the target path;
starting multithreading, and extracting features of the updated images in batches based on all residual blocks connected in sequence in the depth residual network by using the started multithreading;
acquiring an identity matched with the newly extracted face characteristic information;
and updating the corresponding relation between the newly extracted face characteristic information and the identity mark into the face database.
7. The method according to any one of claims 1 to 4, wherein the feature extraction of the target face image based on each residual block connected in sequence in a depth residual network comprises:
decoding the target face image to obtain a decoded image;
and extracting the characteristics of the decoded image based on each residual block which is connected in sequence in the depth residual network.
8. The method according to claim 2, wherein the method further comprises:
receiving a second face retrieval request sent by the terminal, wherein the second face retrieval request comprises a target identity;
if the face database comprises the target identity, sending a specified face image matched with the target identity to the terminal;
and receiving an operation processing request for the specified face image sent by the terminal, and performing operation processing on the specified face image according to the operation processing request.
9. A face retrieval device, the device comprising:
the acquisition module is used for acquiring a target face image to be retrieved;
the feature extraction module is used for inputting the target face image into a first residual block of a depth residual network, wherein the depth residual network comprises all residual blocks which are sequentially connected, any one residual block comprises an identity mapping and at least two convolution layers, the first convolution layer, the second convolution layer and the third convolution layer in the at least two convolution layers are sequentially connected, the first convolution layer is consistent with the third convolution layer in size, the first convolution layer is smaller than the second convolution layer in size, and the identity mapping is directed to the output end of the third convolution layer from the input end of the first convolution layer;
The feature extraction module is further configured to receive, for any one of the residual blocks, an output of a previous residual block, and perform feature extraction on the output of the previous residual block based on the first convolution layer, the second convolution layer, and the third convolution layer;
the feature extraction module is further configured to obtain an output of the third convolution layer, and transfer the output of the third convolution layer and the output of the previous residual block to a next residual block;
the feature extraction module is further used for obtaining the output of the last residual block of the depth residual network to obtain target face feature information;
the retrieval module is used for carrying out face retrieval on the basis of the target face characteristic information in a face database to obtain a face retrieval result, wherein the face database stores the corresponding relation between the face characteristic information and the identity, and the face retrieval result at least comprises the identity matched with the target face characteristic information.
10. The apparatus of claim 9, wherein the obtaining module is further configured to receive a first face retrieval request sent by a terminal, and obtain the target face image from the first face retrieval request;
The apparatus further comprises:
and the sending module is used for sending the face retrieval result to the terminal after the face retrieval result is obtained.
11. The apparatus according to any one of claims 9 to 10, wherein the feature extraction module is further configured to decode the target face image to obtain a decoded image; and extracting the characteristics of the decoded image based on each residual block which is connected in sequence in the depth residual network.
12. A storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the face retrieval method of any one of claims 1 to 8.
13. An apparatus for face retrieval, the apparatus comprising a processor and a memory having stored therein at least one instruction that is loaded and executed by the processor to implement the face retrieval method of any one of claims 1 to 8.
CN201810121581.2A 2018-02-07 2018-02-07 Face retrieval method, device, storage medium and equipment Active CN108108499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810121581.2A CN108108499B (en) 2018-02-07 2018-02-07 Face retrieval method, device, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN108108499A CN108108499A (en) 2018-06-01
CN108108499B true CN108108499B (en) 2023-05-26


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163215B (en) * 2018-06-08 2022-08-23 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer readable medium and electronic equipment
CN109033971A (en) * 2018-06-27 2018-12-18 中国石油大学(华东) A kind of efficient pedestrian recognition methods again based on residual error Network Theory
CN109002789B (en) * 2018-07-10 2021-06-18 银河水滴科技(北京)有限公司 Face recognition method applied to camera
CN109271869B (en) * 2018-08-21 2023-09-05 平安科技(深圳)有限公司 Face feature value extraction method and device, computer equipment and storage medium
CN110135231B (en) * 2018-12-25 2021-05-28 杭州慧牧科技有限公司 Animal face recognition method and device, computer equipment and storage medium
CN109993102B (en) * 2019-03-28 2021-09-17 北京达佳互联信息技术有限公司 Similar face retrieval method, device and storage medium
CN109978067A (en) * 2019-04-02 2019-07-05 北京市天元网络技术股份有限公司 A trademark retrieval method and device based on convolutional neural networks and scale-invariant feature transform
CN110020093A (en) * 2019-04-08 2019-07-16 深圳市网心科技有限公司 Video retrieval method, edge device, video retrieval device and storage medium
CN109871909B (en) * 2019-04-16 2021-10-01 京东方科技集团股份有限公司 Image recognition method and device
CN110232799A (en) * 2019-06-24 2019-09-13 秒针信息技术有限公司 Method and device for tracking a missing object
CN110942046B (en) * 2019-12-05 2023-04-07 腾讯云计算(北京)有限责任公司 Image retrieval method, device, equipment and storage medium
CN111339345B (en) * 2020-02-26 2023-09-19 北京国网信通埃森哲信息技术有限公司 Multi-platform face recognition service interface differentiated shielding method, system and storage medium
CN111368766B (en) * 2020-03-09 2023-08-18 云南安华防灾减灾科技有限责任公司 Deep learning-based cow face detection and recognition method
CN111723647B (en) * 2020-04-29 2022-04-15 平安国际智慧城市科技股份有限公司 Path-based face recognition method and device, computer equipment and storage medium
CN113191911A (en) * 2021-07-01 2021-07-30 明品云(北京)数据科技有限公司 Insurance recommendation method, system, equipment and medium based on user information

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010006367A1 (en) * 2008-07-16 2010-01-21 Imprezzeo Pty Ltd Facial image recognition and retrieval
CN106815566A (en) * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 A face retrieval method based on multitask convolutional neural networks
CN106874898A (en) * 2017-04-08 2017-06-20 复旦大学 Large-scale face recognition method based on a deep convolutional neural network model
CN106919897A (en) * 2016-12-30 2017-07-04 华北电力大学(保定) A facial image age estimation method based on a three-stage residual network
CN107273864A (en) * 2017-06-22 2017-10-20 星际(重庆)智能装备技术研究院有限公司 A face detection method based on deep learning
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A face recognition method and device

Also Published As

Publication number Publication date
CN108108499A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN108108499B (en) Face retrieval method, device, storage medium and equipment
US8762383B2 (en) Search engine and method for image searching
US11899681B2 (en) Knowledge graph building method, electronic apparatus and non-transitory computer readable storage medium
US11475055B2 (en) Artificial intelligence based method and apparatus for determining regional information
WO2021143267A1 (en) Image detection-based fine-grained classification model processing method, and related devices
CN111400504B (en) Method and device for identifying enterprise key people
CN112214775B (en) Injection attack method, device, medium and electronic equipment for preventing third party from acquiring key diagram data information and diagram data
CN107392238A Outdoor plant knowledge extended learning system based on mobile visual search
CN114205690A (en) Flow prediction method, flow prediction device, model training method, model training device, electronic equipment and storage medium
KR20190083127A (en) System and method for trainning convolution neural network model using image in terminal cluster
CN115510249A (en) Knowledge graph construction method and device, electronic equipment and storage medium
CN115496970A (en) Training method of image task model, image recognition method and related device
US20190050672A1 (en) INCREMENTAL AUTOMATIC UPDATE OF RANKED NEIGHBOR LISTS BASED ON k-th NEAREST NEIGHBORS
CN111191059B (en) Image processing method, device, computer storage medium and electronic equipment
WO2023213157A1 (en) Data processing method and apparatus, program product, computer device and medium
CN116796038A (en) Remote sensing data retrieval method, remote sensing data retrieval device, edge processing equipment and storage medium
US20230099484A1 (en) Application data exchange system
CN111191065A (en) Homologous image determining method and device
CN112307243A (en) Method and apparatus for retrieving image
CN111638926A (en) Method for realizing artificial intelligence in Django framework
CN113297397B (en) Information matching method and system based on hierarchical multi-mode information fusion
CN114821140A (en) Image clustering method based on Manhattan distance, terminal device and storage medium
CN106776816A (en) Locking method and device
CN113269176B (en) Image processing model training method, image processing device and computer equipment
US11863622B2 (en) Cross-device data distribution with modular architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant