CN108108499A - Face retrieval method, apparatus, storage medium and equipment - Google Patents
- Publication number
- CN108108499A CN108108499A CN201810121581.2A CN201810121581A CN108108499A CN 108108499 A CN108108499 A CN 108108499A CN 201810121581 A CN201810121581 A CN 201810121581A CN 108108499 A CN108108499 A CN 108108499A
- Authority
- CN
- China
- Prior art keywords
- face
- characteristic information
- residual block
- target
- convolutional layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face retrieval method, apparatus, storage medium and device, belonging to the field of deep learning. The method includes: obtaining a target face image to be retrieved; performing feature extraction on the target face image based on sequentially connected residual blocks in a deep residual network to obtain target face feature information, where any residual block contains an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output; and performing face retrieval in a face database based on the target face feature information to obtain a face retrieval result, the face retrieval result including at least an identity matching the target face feature information. The present invention performs face retrieval based on a deep residual network; because the retrieval accuracy of a deep residual network is not easily affected by external factors, this face retrieval method is more stable, and the accuracy of face retrieval is thus also guaranteed.
Description
Technical field
The present invention relates to the field of deep learning, and in particular to a face retrieval method, apparatus, storage medium and device.
Background art
Face retrieval is an emerging biometric identification technology that combines computer image processing with biostatistics. It currently has broad application prospects; for example, face retrieval technology is already applied in places such as parks, factories, squares, conference centers, stadiums, schools, hospitals, commercial streets, hotels, catering and entertainment venues, office buildings and elevators.
Most current face retrieval systems are implemented with conventional machine learning, such as eigenface-based face retrieval methods, or iterative algorithms combining features such as histograms or color.
The retrieval accuracy of face retrieval methods based on conventional machine learning is easily affected by external factors. For example, retrieval results are severely affected when the user wears glasses, when illumination changes, or when occlusions appear. Current face retrieval methods are therefore unstable, so retrieval accuracy is not high enough and results are poor.
Summary of the invention
Embodiments of the present invention provide a face retrieval method, apparatus, storage medium and device, solving the problems in the related art that face retrieval methods are unstable and retrieval accuracy is low. The technical solution is as follows:
In one aspect, a face retrieval method is provided, the method including:
obtaining a target face image to be retrieved;
performing feature extraction on the target face image based on sequentially connected residual blocks in a deep residual network to obtain target face feature information, where any residual block contains an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output; and
performing face retrieval in a face database based on the target face feature information to obtain a face retrieval result, where the face database stores correspondences between face feature information and identities, and the face retrieval result includes at least an identity matching the target face feature information.
In another aspect, a face retrieval apparatus is provided, the apparatus including:
an acquisition module, for obtaining a target face image to be retrieved;
a feature extraction module, for performing feature extraction on the target face image based on sequentially connected residual blocks in a deep residual network to obtain target face feature information, where any residual block contains an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output; and
a retrieval module, for performing face retrieval in a face database based on the target face feature information to obtain a face retrieval result, where the face database stores correspondences between face feature information and identities, and the face retrieval result includes at least an identity matching the target face feature information.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain similarities between the target face feature information and the stored face feature information; sort the stored face feature information by similarity; determine first candidate face feature information whose similarity ranks in the top N, N being a positive integer; and use the identities and similarities corresponding to the first candidate face feature information as the face retrieval result.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain similarities between the target face feature information and the stored face feature information; obtain a similarity threshold; determine second candidate face feature information whose similarity exceeds the similarity threshold; and use the identities and similarities corresponding to the second candidate face feature information as the face retrieval result.
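The two retrieval variants above (top-N ranking and threshold filtering) can be sketched over stored feature vectors. The patent does not fix a particular similarity measure, so cosine similarity and the small 4-dimensional features below are illustrative assumptions:

```python
import numpy as np

def retrieve(target_feat, db_feats, db_ids, top_n=5, threshold=None):
    """Compare a target face feature against stored features.

    Returns (identity, similarity) pairs: the top-N matches, or, if a
    threshold is given, only matches whose similarity exceeds it, as
    the two embodiments describe.
    """
    # Cosine similarity between the target and every stored feature.
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    t = target_feat / np.linalg.norm(target_feat)
    sims = db @ t
    order = np.argsort(-sims)  # sort stored features by similarity, descending
    if threshold is not None:
        order = [i for i in order if sims[i] > threshold]
    else:
        order = order[:top_n]
    return [(db_ids[i], float(sims[i])) for i in order]

# Tiny demonstration with 4 stored 4-dimensional features.
db_feats = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])
db_ids = ["alice", "bob", "carol", "dave"]
target = np.array([1.0, 0.0, 0.0, 0.0])
print(retrieve(target, db_feats, db_ids, top_n=2))
print(retrieve(target, db_feats, db_ids, threshold=0.9))
```

In a real deployment the vectors would be the 512- or 1024-dimensional features the deep residual network extracts, but the ranking and thresholding logic is unchanged.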
In another embodiment, the apparatus further includes:
a building module, for performing image lookup under a target path, the target path being at least one of a local path or a remote path; opening multiple threads, and using the opened threads to perform batch feature extraction on the found images based on the sequentially connected residual blocks in the deep residual network; obtaining identities matching the extracted face feature information; and storing the correspondences between the extracted face feature information and the identities in the face database.
In another embodiment, the building module is further configured to periodically obtain incrementally updated images under the target path; open multiple threads, and use the opened threads to perform batch feature extraction on the updated images based on the sequentially connected residual blocks in the deep residual network; obtain identities matching the newly extracted face feature information; and update the correspondences between the newly extracted face feature information and the identities into the face database.
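The multithreaded batch extraction described above can be sketched with the standard library's thread pool. The `extract_feature` function below is a deterministic placeholder for the ResNet forward pass (which would detect, crop and encode the face), and the image paths are hypothetical:

```python
import concurrent.futures
import hashlib

def extract_feature(image_path):
    """Placeholder for the real feature extractor: derives a
    deterministic dummy 8-dimensional vector from the path so the
    sketch runs without a trained network."""
    digest = hashlib.sha256(image_path.encode()).digest()
    return [b / 255.0 for b in digest[:8]]

def batch_extract(image_paths, workers=4):
    """Open multiple threads and extract features from the found
    images in batch, as the building module describes."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        feats = list(pool.map(extract_feature, image_paths))
    return dict(zip(image_paths, feats))

found = [f"/data/faces/img_{i}.jpg" for i in range(8)]
db = batch_extract(found)
print(len(db))  # 8
```

The same `batch_extract` call serves both the initial database build and the periodic incremental update; only the list of paths passed in differs.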
In another embodiment, the apparatus further includes:
a receiving module, for receiving a second face retrieval request sent by the terminal, the second face retrieval request including a target identity;
a sending module, for sending the specified face image matching the target identity to the terminal if the face database includes the target identity;
the receiving module being further configured to receive an operation processing request, sent by the terminal, for the specified face image; and
a processing module, for performing operation processing on the specified face image according to the operation processing request.
In another aspect, a storage medium is provided, the storage medium storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the above face retrieval method.
In another aspect, a device for face retrieval is provided, the device including a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the above face retrieval method.
The beneficial effects brought by the technical solutions provided by the embodiments of the present invention are as follows:
The embodiments of the present invention perform face retrieval based on a deep residual network. Because the retrieval accuracy of a deep residual network is not easily affected by external factors, this face retrieval method is more stable, the accuracy of face retrieval is in turn guaranteed, and the effect is better.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Figure 1A is a structural diagram of an implementation environment involved in a face retrieval method provided by an embodiment of the present invention;
Figure 1B is a structural diagram of a face retrieval system provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of a residual block of a deep residual network provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a first face retrieval method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a retrieval process for face retrieval provided by an embodiment of the present invention;
Fig. 5 is a structural diagram of a residual block of a deep residual network provided by an embodiment of the present invention;
Fig. 6 is a flowchart of a second face retrieval method provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of a face retrieval apparatus provided by an embodiment of the present invention;
Fig. 8 is a structural diagram of a device for face retrieval provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present invention in detail, some terms involved in the embodiments are first explained.
Deep learning: this concept originates from research on artificial neural networks. For example, a multilayer perceptron with multiple hidden layers is a deep learning structure. Deep learning combines low-level features to form more abstract high-level features, in order to discover distributed representations of data.
Put another way, deep learning is a class of methods that perform representation learning on data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a series of edges and regions of particular shapes. Some specific representations make it easier to learn tasks from examples, such as face recognition or facial expression recognition. A benefit of deep learning is that it replaces hand-crafted features with efficient algorithms for unsupervised or supervised feature learning and hierarchical feature extraction.
Deep residual network (ResNet): the depth of a neural network is crucial to its performance, so ideally, as long as the network does not overfit, deeper should be better. But an optimization problem is encountered when actually training a neural network: as the depth keeps increasing, gradients vanish more and more during backpropagation (gradient vanishing), making the model hard to optimize and instead causing the accuracy of the network to decline. Put another way, as the depth of a neural network keeps increasing, a degradation problem appears: the accuracy first rises, then saturates, and further increasing the depth causes the accuracy to decline.
Based on the above, once the number of network layers reaches a certain point, the performance of the network saturates, and further increasing the depth causes performance to degrade. This degradation is not caused by overfitting, because both training accuracy and test accuracy decline; it indicates that once a network reaches a certain depth, the neural network becomes hard to train. ResNet appeared precisely to solve this problem of performance degradation as network depth increases. Specifically, ResNet proposes a deep residual learning framework to address the performance degradation caused by increasing depth.
Suppose there is a shallower network whose accuracy has reached saturation. If several identity mapping layers are appended after this network, the training error should not increase; that is, a deeper network should not bring a rise in error on the training set. The idea mentioned here of directly passing an earlier layer's output to a later layer through an identity mapping is the inspiration for ResNet.
More explanation of ResNet is given below.
Identity mapping: for any set A, if a mapping f: A → A is defined as f(a) = a, i.e., each element a in A corresponds to itself, then f is called the identity mapping on A.
RESTful architecture: RESTful refers to a software architecture style — a design style rather than a standard — that provides a set of design principles and constraints. It is mainly used for software in which clients interact with servers. Software designed in this style can be more concise and more layered, and mechanisms such as caching are easier to implement.
A RESTful architecture is an Internet software architecture: it uses a client/server model and is built in distributed systems, communicating over the Internet with high latency and high concurrency.
It should be noted that face retrieval services in the related art are mostly applied in dynamic recognition scenarios, such as access control, device unlocking, mobile payment and attendance management. In practical applications, besides dynamic recognition scenarios, there are also static retrieval needs in fields such as surveillance, criminal investigation and security activities, for example performing face retrieval on an input still image to find missing persons or pursue fugitives. The face retrieval method provided by the embodiments of the present invention can be applied in scenarios with static retrieval needs. Of course, with corresponding improvements, the face retrieval method provided by the embodiments of the present invention is also applicable in dynamic recognition scenarios; the embodiments of the present invention do not specifically limit this.
The implementation environment involved in the face retrieval method provided by the embodiments of the present invention is introduced below.
Referring to Figure 1A, it shows a structural diagram of an implementation environment involved in a face retrieval method provided by an embodiment of the present invention. As shown in Figure 1A, the implementation environment includes a terminal 101, a face retrieval system 102 and a face database 103. The concrete form of the face retrieval system 102 is a server; the face retrieval system 102 and the face database 103 may be configured on the same server or on different servers, which the embodiments of the present invention do not specifically limit. The types of the terminal 101 include but are not limited to smartphones, desktop computers, laptops and tablets.
In the embodiments of the present invention, the terminal 101 and the face retrieval system 102 are based on the RESTful architecture pattern, i.e., the two communicate over the Internet in a client/server model. Since the embodiments of the present invention are based on a RESTful architecture and provide RESTful standard protocol interfaces, one server can be accessed by multiple configured clients, which is convenient and efficient.
Furthermore, since the data stored in the face database 103 changes in real time, deploying the face retrieval service in one distributed system not only saves considerable resources and workload but also allows rapid concurrent processing of a large number of requests. It avoids the heavy database update burden that arises when each piece of software deploys face retrieval individually. For example, most current software architectures install embedded applications on devices such as smartphones and tablets; in this pattern, whenever a data update occurs, large batches of devices must be configured individually, making the database update task enormous.
Based on the above description, the face retrieval method provided by the embodiments of the present invention makes innovative designs in two respects: the concrete retrieval approach and the software architecture. On the one hand, it proposes to use the ResNet network structure as the concrete algorithm for face retrieval, learning face features with deeper network layers and obtaining more accurate face matching and comparison results. On the other hand, the embodiments of the present invention use a software architecture based on the RESTful standard, which can not only meet static retrieval needs but also easily configure a large-scale distributed retrieval system, offering high practical value in fields such as surveillance, criminal investigation and security activities.
In another embodiment, referring to Figure 1B, the face retrieval system provided by the embodiments of the present invention mainly includes a face retrieval service module and a feature extraction service module.
The face retrieval service module is mainly used for storing and retrieving face feature information; the feature extraction service module is mainly used for feature extraction on large batches of images.
For the face retrieval service module, storing face feature information can be implemented in the form of tag files, where a tag file contains one-to-one identities and face feature information. In the embodiments of the present invention, the storage interface of the face retrieval service module can be called directly to store face feature information.
When retrieving face feature information, the operations the face retrieval service module can perform include but are not limited to the following: after the client inputs an image, the image is base64-encoded and face similarity comparison is automatically performed in the database; if, besides the image, the client additionally inputs a similarity threshold, the face retrieval service module retrieves only faces in the database whose similarity exceeds that threshold; in addition, if the client inputs an identity, the face retrieval service module queries whether that identity is in the database, and can also perform operation processing on the face image corresponding to that identity according to the client's operation request, such as deletion or update.
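On the client side, the request described above amounts to base64-encoding the image and attaching the optional threshold or identity. A minimal sketch of composing such a request body follows; the JSON field names are illustrative assumptions, not the patent's actual wire format:

```python
import base64
import json

def build_retrieval_request(image_bytes, similarity_threshold=None, identity=None):
    """Compose the JSON body a client might POST to the face retrieval
    service over its RESTful interface. Field names ("image",
    "threshold", "identity") are hypothetical."""
    body = {"image": base64.b64encode(image_bytes).decode("ascii")}
    if similarity_threshold is not None:
        body["threshold"] = similarity_threshold
    if identity is not None:
        body["identity"] = identity
    return json.dumps(body)

# An image-plus-threshold request, as in the second operation above.
req = build_retrieval_request(b"\x89PNG...fake...", similarity_threshold=0.8)
payload = json.loads(req)
print(sorted(payload))  # ['image', 'threshold']
```

The server would then decode the base64 field back to image bytes before running feature extraction and comparison.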
For the feature extraction service module, the embodiments of the present invention use a ResNet network to perform feature extraction on face images. Since the ResNet network introduces a residual network structure, it solves the gradient vanishing problem caused by an overly deep network, so it can perform feature learning on face images with a deeper network structure and ensure the accuracy of face retrieval. In the embodiments of the present invention, a face image refers to an image containing a face.
The face feature information stored in the face database is obtained by performing feature extraction on the face images stored under the target path. The target path may include a local path and/or a remote path; a remote path may be an HTTP (HyperText Transfer Protocol) path or an FTP (File Transfer Protocol) path, which the embodiments of the present invention do not specifically limit. It should be noted that the feature extraction service module opens multiple threads to perform batch feature extraction.
In conclusion the embodiment of the present invention, which employs depth residual error network ResNet, carries out face retrieval, solves network
The phenomenon that deeper gradient disperse, is more and more apparent, and then the problem of cause network training effect poor.Compared to other networks
The network number of plies can be made very deep by model, ResNet networks, even up to 1000 multilayers, so as to obtain face characteristic letter
Cease good learning effect.In addition, the embodiment of the present invention by algorithm and Platform integration, provides face inspection in a manner of HTTP service
Rope service can externally provide Restful standard protocol interfaces, it is only necessary to configure face retrieval service on the server, can realize
Client completes face retrieval by accessing server.
Next, the deep residual network is explained in detail.
Suppose the input of a certain section of a neural network is x and the desired underlying mapping of the layers is H(x). Let the stacked nonlinear layers fit another mapping F(x) = H(x) - x; the original mapping H(x) then becomes F(x) + x. Assuming that optimizing the residual mapping F(x) is easier than optimizing the original mapping H(x), we first solve for the residual mapping F(x); the original mapping is then F(x) + x, and F(x) + x can be realized by a shortcut connection.
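The F(x) + x computation above can be sketched numerically. To keep the example small it uses fully connected layers in place of the convolutional layers of an actual residual block (an assumption for illustration only); the shortcut addition is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """A two-layer residual block with dense layers standing in for
    convolutions: the stacked layers fit the residual mapping F(x),
    and the shortcut connection adds the input x back."""
    f = relu(x @ w1) @ w2   # F(x): the residual mapping
    return relu(f + x)      # shortcut: output is F(x) + x

d = 16
x = rng.standard_normal(d)

# With all-zero weights, F(x) = 0, so the block reduces to the
# identity mapping (up to the final ReLU): output == relu(x).
w_zero = np.zeros((d, d))
out = residual_block(x, w_zero, w_zero)
print(np.allclose(out, relu(x)))  # True
```

This zero-weight case illustrates the argument from the description: appending residual blocks whose residual mapping is zero leaves the network's output unchanged, so a deeper network should not raise the training error.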
Fig. 2 shows the structural diagram of a residual block. As shown in Fig. 2, any residual block of the deep residual network contains an identity mapping and at least two convolutional layers, where the identity mapping of a residual block points from the input of that residual block to its output.
That is, by adding an identity mapping, the originally required function H(x) is converted into F(x) + x. Although the two expressions have the same effect, the difficulty of optimization differs: through a reformulation, one problem is decomposed into direct residual problems at multiple scales, which serves well to ease optimization and training. As shown in Fig. 2, this residual block is realized by a shortcut connection, which adds together the input and output of the residual block. Without adding extra parameters or computation to the network, it considerably increases the training speed of the model and improves the training effect; and when the model gets deeper, this simple structure can solve the degradation problem well.
Put another way, H(x) is the complex desired underlying mapping, and its learning difficulty is high. If the input x is passed directly to the output through the shortcut connection of Fig. 2 as an initial result, then the target to be learned becomes F(x) = H(x) - x. The ResNet network is thus equivalent to changing the learning objective: it no longer learns a complete output, but the difference between the optimal solution H(x) and the identity mapping x, i.e., the residual mapping F(x). It should be noted that "shortcut" originally means a shorter path; here it denotes a cross-layer connection. Shortcut connections in a ResNet network have no weights; after passing x, each residual block learns only the residual mapping F(x). Since the network is stable and easy to learn, performance gradually improves as network depth increases; therefore, when the number of network layers is deep enough, optimizing the residual mapping F(x) = H(x) - x is much easier than optimizing the complex nonlinear mapping H(x).
From the above description, compared with an ordinary plainly connected convolutional neural network, a ResNet network has many bypass branches that connect the input directly to later layers, so that later layers can directly learn the residual; this structure is called a shortcut connection. When traditional convolutional layers or fully connected layers transfer information, there is more or less information loss; the ResNet network solves this problem to some extent. By routing the input directly around to the output, the integrity of the information is protected, and the whole network only needs to learn the difference between input and output, simplifying the learning objective and difficulty.
The present invention introduces a deep residual network: when the number of network layers is very deep, not only does the degradation problem not appear, but the error rate of face retrieval is also substantially reduced.
Fig. 3 is a flowchart of a face retrieval method provided by an embodiment of the present invention. Referring to Fig. 3, the method flow provided by this embodiment of the present invention includes:
301. The face retrieval system performs image lookup under the target path, opens multiple threads, and uses the opened threads to perform batch feature extraction on the found images based on the sequentially connected residual blocks in the deep residual network.
As shown in Fig. 4, the target path is at least one of a local path or a remote path, where remote paths include but are not limited to HTTP paths and FTP paths.
In the embodiments of the present invention, the images stored under the target path are used to build the face database, and each image contains a face. In addition, to speed up feature extraction, the embodiments of the present invention start multiple threads to perform batch feature extraction on the images found under the target path.
For any image under the target path, based on the sequentially connected residual blocks in the deep residual network, the embodiments of the present invention first perform face detection in the image, then crop out the face region and perform feature learning; that is, the embodiments of the present invention perform feature extraction only on the face region. The dimension of the extracted face feature information may be 512, 1024, etc., which the embodiments of the present invention do not specifically limit.
In the embodiments of the present invention, each residual block includes a first convolutional layer, a second convolutional layer and a third convolutional layer. The first, second and third convolutional layers are connected in sequence; the first and third convolutional layers have the same size, the sizes of the first and third convolutional layers are both smaller than that of the second convolutional layer, and the identity mapping points from the input of the first convolutional layer to the output of the third convolutional layer.
That is, in view of computational cost, the residual block shown in Fig. 2 is optimized in the embodiment of the present invention. In Fig. 2, a two-layer residual block contains two convolutional layers with the same number of output channels.
Referring to Fig. 5, taking the case where the optimized residual block contains three convolutional layers as an example, the sizes of the first and third convolutional layers may be 1*1, and the size of the second convolutional layer may be 3*3. The middle 3*3 convolutional layer operates under a dimension-reducing 1*1 convolutional layer, which lowers the computational cost, and the dimension is then restored under another 1*1 convolutional layer. This reduce-then-restore operation both maintains precision and reduces the amount of computation. In Fig. 5 the input and output dimensions are identical; if the input and output dimensions differ, a linear mapping can be applied to the input x to convert it before connecting it to the following residual block.
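The 1*1 → 3*3 → 1*1 bottleneck described above can be sketched in NumPy (a shape-level illustration under assumed weight layouts, not the patent's implementation; no batching, stride 1, and the identity shortcut assumes matching input/output channels):

```python
import numpy as np

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out) -> pointwise channel mixing
    return x @ w

def conv3x3(x, w):
    # x: (H, W, C_in), w: (3, 3, C_in, C_out); stride 1, zero padding 1
    h, wd, _ = x.shape
    cout = w.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, cout))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.einsum("abc,abcd->d", xp[i:i+3, j:j+3], w)
    return out

def bottleneck(x, w1, w2, w3):
    """1x1 reduce -> 3x3 at reduced width -> 1x1 restore, plus identity shortcut."""
    y = np.maximum(conv1x1(x, w1), 0)   # reduce dimension cheaply
    y = np.maximum(conv3x3(y, w2), 0)   # 3x3 convolution on the narrow tensor
    y = conv1x1(y, w3)                  # restore the original dimension
    return np.maximum(y + x, 0)         # identity mapping added, then ReLU
```

With all-zero weights the block degenerates to ReLU of the identity shortcut, which illustrates why deepening such a network does not force degradation.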
In summary, when the deep residual network is used to extract features from a face image, the face image is first input into the first residual block of the deep residual network, and each residual block in the network performs the following operations: for any residual block, receive the output of the previous residual block, and perform feature extraction on that output based on the first, second and third convolutional layers; obtain the output of the third convolutional layer, and pass both the output of the third convolutional layer and the output of the previous residual block to the next residual block. After the output of the last residual block passes through a fully connected layer, the face feature information of the face image is obtained.
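The chaining described above (each block consuming the previous block's output, then a fully connected layer producing the face feature) can be sketched as follows; the stand-in identity blocks and the 512-dimensional projection are illustrative assumptions:

```python
import numpy as np

def extract_face_feature(face_img, residual_blocks, fc_weight):
    """Fold the face image through sequentially connected residual blocks,
    then map the last block's output through a fully connected layer."""
    x = face_img
    for block in residual_blocks:   # each block receives the previous output
        x = block(x)
    return x.reshape(-1) @ fc_weight  # flatten -> fully connected -> feature

# Stand-in blocks (identity) and a projection to a 512-d face feature vector:
rng = np.random.default_rng(0)
img = rng.standard_normal((4, 4, 8))
fc = rng.standard_normal((4 * 4 * 8, 512))
feat = extract_face_feature(img, [lambda x: x, lambda x: x], fc)
```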
302. The face retrieval system obtains the identity matched with the extracted face feature information, and stores the correspondence between the extracted face feature information and the identity in the face database.
In the embodiment of the present invention, after the face feature information of the images stored under the destination path is extracted, the identity matched with each piece of face feature information may also be obtained, to facilitate subsequent face resolution. The identity includes but is not limited to name, age, gender, education, marital status, work address, home address, etc., which is not specifically limited in the embodiment of the present invention. The one-to-one correspondence between face feature information and identities may be stored in the face database in the form of a tag file.
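One way to realize such a tag file (the JSON-lines layout and field names here are assumptions for illustration; the patent only requires that the one-to-one correspondence be stored) is:

```python
import json

def store_correspondence(db_path, feature, identity):
    """Append one feature <-> identity record to a JSON-lines tag file."""
    record = {"feature": feature, "identity": identity}
    with open(db_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Usage: one record per face, with identity fields such as name, age, gender.
store_correspondence("face_db.jsonl", [0.1, 0.2],
                     {"name": "Zhang San", "age": 30, "gender": "M"})
```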
It should be noted that steps 301 and 302 above constitute the process of building the face database. After the face database is built, the face retrieval system can handle, based on the face database, the first face retrieval request initiated by each terminal; the detailed process is described in step 303 below.
303. The face retrieval system receives a first face retrieval request sent by any terminal, the first face retrieval request including a target face image.
In the embodiment of the present invention, the terminal may specifically use the POST method when sending the first face retrieval request to the face retrieval system, which is not specifically limited in the embodiment of the present invention.
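A minimal sketch of such a request body, assuming (as the later decoding step suggests) that the image travels Base64-encoded inside a JSON POST payload; the field name `image` and the helper names are hypothetical:

```python
import base64
import json

def build_retrieval_request(image_bytes):
    """Terminal side: Base64-encode the target face image so it can be
    carried in the JSON body of a POST request."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

def decode_retrieval_request(body):
    """Server side: recover the raw image bytes before feature extraction."""
    return base64.b64decode(json.loads(body)["image"])

payload = build_retrieval_request(b"\x89PNG...raw bytes...")
```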
304. The face retrieval system performs feature extraction on the target face image based on the sequentially connected residual blocks in the deep residual network to obtain target face feature information, and performs face retrieval in the face database based on the target face feature information to obtain a face retrieval result.
In the embodiment of the present invention, before feature extraction is performed on the target face image, the target face image may first be decoded, after which feature extraction is performed on the decoded image based on the sequentially connected residual blocks. For the specific feature extraction manner, refer to step 301 above, which likewise includes the steps of locating the face position in the image and performing feature learning on the face region; details are not repeated here.
Wherein, the decoding manner may be Base64, which is not specifically limited in the embodiment of the present invention.
In another embodiment, face retrieval in the face database based on the target face feature information includes but is not limited to the following two manners:
The first manner: top-N
(a) The face retrieval system compares the target face feature information with the face feature information stored in the face database, and obtains the similarity between the target face feature information and each piece of stored face feature information.
Wherein, the similarity reflects the degree of similarity between the target face image and the images stored in the face database: the higher the similarity value, the more similar the target face image and the stored image.
(b) The stored face feature information is sorted by similarity. For example, sorting may be performed in descending order of similarity, which is not specifically limited in the embodiment of the present invention.
(c) The first candidate face feature information whose similarity ranks in the top N is determined. The value of N may be configured in advance by the face retrieval system and is a positive integer, for example 5, 10 or 15, which is not specifically limited in the embodiment of the present invention. Taking N = 5 as an example, the first candidate face feature information then includes 5 pieces of face feature information.
(d) The identities and similarities corresponding to the first candidate face feature information are taken as the face retrieval result. Continuing with N = 5 as an example, the 5 top-ranked identities and their similarities are all required as the face retrieval result.
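The top-N manner above can be sketched as follows; cosine similarity is one common choice of similarity measure and is an assumption here, since the patent does not fix the metric:

```python
import numpy as np

def top_n_retrieval(query, db_features, db_identities, n=5):
    """Compare the target feature with every stored feature (cosine
    similarity), sort descending, and return the top-N identities."""
    db = np.asarray(db_features, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:n]  # indices in descending similarity order
    return [(db_identities[i], float(sims[i])) for i in order]

result = top_n_retrieval([1.0, 0.0],
                         [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]],
                         ["Alice", "Bob", "Carol"], n=2)
```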
The second manner: similarity threshold
(a) The target face feature information is compared with the face feature information stored in the face database to obtain the similarity between the target face feature information and each piece of stored face feature information.
(b) The similarity threshold sent by the terminal is obtained.
In the embodiment of the present invention, the terminal may also carry a similarity threshold when sending the first face retrieval request, so that the face retrieval system can feed back the face retrieval result according to a user-defined threshold. The terminal provides the user with an interface for inputting or setting the similarity threshold, which is not specifically limited in the embodiment of the present invention.
It should be noted that, in addition to receiving the similarity threshold sent by the terminal, the face retrieval system may also define its own similarity threshold, which is not specifically limited in the embodiment of the present invention.
(c) The second candidate face feature information whose similarity exceeds the similarity threshold is determined.
Theoretically, the second candidate face feature information includes all face feature information whose similarity exceeds the similarity threshold. However, if the quantity is excessive, for example exceeding a certain limit, the face retrieval system may instead select as the second candidate face feature information only the face feature information whose similarity exceeds a specified value, or only the top M, which is not specifically limited in the embodiment of the present invention.
(d) The identities and similarities corresponding to the second candidate face feature information are taken as the face retrieval result.
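The threshold manner, including the fallback to a top-M cap when too many matches exceed the threshold, could be sketched as (helper names and the `max_results` cap are assumptions):

```python
def threshold_retrieval(sims, identities, threshold, max_results=100):
    """Keep matches whose similarity exceeds the threshold; if there are
    too many, fall back to the top max_results of them."""
    idx = [i for i, s in enumerate(sims) if s > threshold]
    idx.sort(key=lambda i: sims[i], reverse=True)
    idx = idx[:max_results]
    return [(identities[i], sims[i]) for i in idx]

hits = threshold_retrieval([0.9, 0.4, 0.8], ["A", "B", "C"], threshold=0.5)
```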
305. The face retrieval system sends the obtained face retrieval result to the terminal.
In the embodiment of the present invention, the face retrieval system may choose to send the face retrieval result to the terminal in JSON (JavaScript Object Notation) format, which is not specifically limited in the embodiment of the present invention.
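A JSON response of this kind might be serialized as follows (the field names are illustrative assumptions; the patent only requires the identities and similarities to be returned):

```python
import json

def encode_result(face_retrieval_result):
    """Serialize (identity, similarity) pairs as the JSON body
    returned to the terminal."""
    return json.dumps(
        [{"identity": ident, "similarity": sim}
         for ident, sim in face_retrieval_result],
        ensure_ascii=False)

body = encode_result([("Alice", 0.98), ("Carol", 0.71)])
```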
The method provided by the embodiment of the present invention realizes face retrieval based on a deep residual network and a distributed software architecture. Since the retrieval accuracy of the deep residual network is not easily affected by external factors, this face retrieval method has better stability, and the accuracy of face retrieval is thereby guaranteed. In addition, the distributed software architecture not only saves a large amount of resources and workload, but can also quickly process a large number of concurrent face retrieval requests, with good effect.
In another embodiment, incremental data updates to the face database are also supported. The specific incremental update process may be as shown in the following steps:
(1) The face retrieval system periodically obtains the incrementally updated images under the destination path.
(2) The face retrieval system opens multiple threads, and uses the opened threads to perform feature extraction on the updated images in batches, based on the sequentially connected residual blocks in the deep residual network.
It should be noted that if the number of incrementally updated images is small, they may be processed without opening multiple threads, which is not specifically limited in the embodiment of the present invention.
(3) The face retrieval system obtains the identities matched with the newly extracted face feature information.
(4) The face retrieval system updates the correspondence between the newly extracted face feature information and the identities into the face database.
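Step (1)'s periodic scan for incrementally updated images could be sketched as a modification-time check against the previous run (the timestamp bookkeeping and extension filter are assumptions):

```python
import os
import tempfile

def find_incremental_images(dest_path, last_run_ts, exts=(".jpg", ".png")):
    """Return images under the destination path modified since the last
    periodic scan -- the candidates for an incremental database update."""
    updated = []
    for root, _dirs, files in os.walk(dest_path):
        for name in files:
            p = os.path.join(root, name)
            if name.lower().endswith(exts) and os.path.getmtime(p) > last_run_ts:
                updated.append(p)
    return updated

# Usage against a throwaway directory containing one newly added face image:
d = tempfile.mkdtemp()
open(os.path.join(d, "new_face.jpg"), "w").close()
fresh = find_incremental_images(d, last_run_ts=0)
```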
In another embodiment, the user is also supported in requesting other transaction processing via face retrieval, for example querying whether an identity is stored in the database. Referring to Fig. 6, the detailed steps are as follows:
601. The face retrieval system receives a second face retrieval request sent by the terminal, the second face retrieval request including a target identity.
602. If the face database includes the target identity, the face retrieval system sends the specified face image matched with the target identity to the terminal.
603. The face retrieval system receives an operation processing request for the specified face image sent by the terminal, and performs operation processing on the specified face image according to the operation processing request.
Wherein, the above operation processing request may be a request to delete the specified face image, or a request to replace the specified face image with another face image; correspondingly, the operation processing may be either deletion processing or update processing, which is not specifically limited in the embodiment of the present invention.
It should be noted that after operation processing is performed on the face image based on the above embodiment, thereby completing the update of the face database, the face retrieval method described in the embodiment corresponding to Fig. 3 above can be executed using the updated face database.
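Steps 601 to 603 can be sketched with a dictionary standing in for the face database (the key/value layout and function names are hypothetical):

```python
def handle_identity_request(face_db, target_identity):
    """Steps 601/602: return the stored face image matched with the target
    identity, or None if the database does not contain that identity."""
    return face_db.get(target_identity)

def handle_operation_request(face_db, target_identity, op, new_image=None):
    """Step 603: delete the specified face image, or replace it."""
    if op == "delete":
        face_db.pop(target_identity, None)
    elif op == "replace":
        face_db[target_identity] = new_image
    return face_db

db = {"id_001": "zhang_san.jpg"}
img = handle_identity_request(db, "id_001")
handle_operation_request(db, "id_001", "replace", "zhang_san_v2.jpg")
```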
In another embodiment, taking the search for a lost child as an example, the face retrieval method provided by the embodiment of the present invention can be organized into the following steps:
1. The client sends an image of the lost child to the face retrieval system.
2. The face retrieval system performs feature extraction on the image of the lost child based on the deep residual network.
3. The face retrieval system performs face retrieval in the face database based on the extracted face feature information.
4. The face retrieval system returns the obtained face retrieval result to the client; the face retrieval result includes at least the identity found for the lost child.
5. The client displays the face retrieval result returned by the server, so that the user can distinguish, confirm or search accordingly.
Fig. 7 is a structure diagram of a face retrieval device provided by an embodiment of the present invention. Referring to Fig. 7, the device includes: an acquisition module 701, a feature extraction module 702 and a retrieval module 703.
The acquisition module 701 is configured to obtain a target face image to be retrieved. The feature extraction module 702 is configured to perform feature extraction on the target face image based on the sequentially connected residual blocks in the deep residual network to obtain target face feature information; any residual block includes an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output. The retrieval module 703 is configured to perform face retrieval in the face database based on the target face feature information to obtain a face retrieval result; the face database stores the correspondence between face feature information and identities, and the face retrieval result includes at least the identity matched with the target face feature information.
With the device provided by the embodiment of the present invention, face retrieval is realized based on the deep residual network. Since the retrieval accuracy of the deep residual network is not easily affected by external factors, this face retrieval method has better stability, the accuracy of face retrieval is thereby guaranteed, and the effect is good.
In another embodiment, the acquisition module is further configured to receive the first face retrieval request sent by the terminal and obtain the target face image from the first face retrieval request.
The device further includes a sending module, configured to send the face retrieval result to the terminal after the face retrieval result is obtained.
In another embodiment, the first convolutional layer, the second convolutional layer and the third convolutional layer of the at least two convolutional layers are connected in sequence; the first convolutional layer and the third convolutional layer are the same size, the size of the first convolutional layer is smaller than that of the second convolutional layer, and the identity mapping points from the input of the first convolutional layer to the output of the third convolutional layer.
The feature extraction module is further configured to input the target face image into the first residual block of the deep residual network; for any residual block, receive the output of the previous residual block, and perform feature extraction on that output based on the first, second and third convolutional layers; obtain the output of the third convolutional layer, and pass both the output of the third convolutional layer and the output of the previous residual block to the next residual block; and obtain the output of the last residual block of the deep residual network to obtain the target face feature information.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and the stored face feature information; sort the stored face feature information by similarity; determine the first candidate face feature information whose similarity ranks in the top N, N being a positive integer; and take the identities and similarities corresponding to the first candidate face feature information as the face retrieval result.
In another embodiment, the retrieval module is further configured to compare the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and the stored face feature information; obtain a similarity threshold; determine the second candidate face feature information whose similarity exceeds the similarity threshold; and take the identities and similarities corresponding to the second candidate face feature information as the face retrieval result.
In another embodiment, the device further includes:
an establishment module, configured to search for images under a destination path, the destination path being at least one of a local path or a remote path; open multiple threads, and use the opened threads to perform feature extraction on the found images in batches based on the sequentially connected residual blocks in the deep residual network; obtain the identities matched with the extracted face feature information; and store the correspondence between the extracted face feature information and the identities in the face database.
In another embodiment, the establishment module is further configured to periodically obtain the incrementally updated images under the destination path; open multiple threads, and use the opened threads to perform feature extraction on the updated images in batches based on the sequentially connected residual blocks in the deep residual network; obtain the identities matched with the newly extracted face feature information; and update the correspondence between the newly extracted face feature information and the identities into the face database.
In another embodiment, the feature extraction module is further configured to decode the target face image to obtain a decoded image, and perform feature extraction on the decoded image based on the sequentially connected residual blocks in the deep residual network.
In another embodiment, the device further includes:
a receiving module, configured to receive a second face retrieval request sent by the terminal, the second face retrieval request including a target identity;
a sending module, configured to send the specified face image matched with the target identity to the terminal if the face database includes the target identity;
the receiving module being further configured to receive an operation processing request for the specified face image sent by the terminal; and
a processing module, configured to perform operation processing on the specified face image according to the operation processing request.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not repeated here one by one.
It should be noted that when the face retrieval device provided by the above embodiment performs face retrieval, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the face retrieval device provided by the above embodiment belongs to the same concept as the face retrieval method embodiment; for its specific implementation process, refer to the method embodiment, which is not repeated here.
Fig. 8 is a structure diagram of an apparatus for face retrieval provided by an embodiment of the present invention. The apparatus may vary considerably depending on configuration or performance, and may include one or more processors (central processing units, CPU) 801 and one or more memories 802, wherein at least one instruction is stored in the memory 802, and the at least one instruction is loaded and executed by the processor 801 to realize the face retrieval method provided by each of the above method embodiments. Of course, the apparatus may also have components such as a wired or wireless network interface, a keyboard and an input/output interface for performing input and output, and may also include other components for realizing the functions of the apparatus, which are not detailed here.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions that can be executed by a processor in a terminal to complete the face retrieval method in the above embodiment. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
One of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, etc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (15)
- 1. A face retrieval method, characterized in that the method includes: obtaining a target face image to be retrieved; performing feature extraction on the target face image based on sequentially connected residual blocks in a deep residual network to obtain target face feature information, wherein any residual block includes an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output; and performing face retrieval in a face database based on the target face feature information to obtain a face retrieval result, wherein the face database stores the correspondence between face feature information and identities, and the face retrieval result includes at least the identity matched with the target face feature information.
- 2. The method according to claim 1, characterized in that obtaining the target face image to be retrieved includes: receiving a first face retrieval request sent by a terminal, and obtaining the target face image from the first face retrieval request; and after the face retrieval result is obtained, the method further includes: sending the face retrieval result to the terminal.
- 3. The method according to claim 1, characterized in that a first convolutional layer, a second convolutional layer and a third convolutional layer of the at least two convolutional layers are connected in sequence, the first convolutional layer and the third convolutional layer are the same size, the size of the first convolutional layer is smaller than that of the second convolutional layer, and the identity mapping points from the input of the first convolutional layer to the output of the third convolutional layer; and performing feature extraction on the target face image based on the sequentially connected residual blocks in the deep residual network to obtain target face feature information includes: inputting the target face image into the first residual block of the deep residual network; for any residual block, receiving the output of the previous residual block, and performing feature extraction on that output based on the first, second and third convolutional layers; obtaining the output of the third convolutional layer, and passing the output of the third convolutional layer and the output of the previous residual block to the next residual block; and obtaining the output of the last residual block of the deep residual network to obtain the target face feature information.
- 4. The method according to claim 1, characterized in that performing face retrieval in the face database based on the target face feature information includes: comparing the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and the stored face feature information; sorting the stored face feature information by similarity; determining first candidate face feature information whose similarity ranks in the top N, N being a positive integer; and taking the identities and similarities corresponding to the first candidate face feature information as the face retrieval result.
- 5. The method according to claim 1, characterized in that performing face retrieval in the face database based on the target face feature information includes: comparing the target face feature information with the face feature information stored in the face database to obtain the similarity between the target face feature information and the stored face feature information; obtaining a similarity threshold; determining second candidate face feature information whose similarity exceeds the similarity threshold; and taking the identities and similarities corresponding to the second candidate face feature information as the face retrieval result.
- 6. The method according to any one of claims 1 to 5, characterized in that the method further includes: searching for images under a destination path, the destination path being at least one of a local path or a remote path; opening multiple threads, and using the opened threads to perform feature extraction on the found images in batches based on the sequentially connected residual blocks in the deep residual network; obtaining the identities matched with the extracted face feature information; and storing the correspondence between the extracted face feature information and the identities in the face database.
- 7. The method according to claim 6, characterized in that the method further includes: periodically obtaining the incrementally updated images under the destination path; opening multiple threads, and using the opened threads to perform feature extraction on the updated images in batches based on the sequentially connected residual blocks in the deep residual network; obtaining the identities matched with the newly extracted face feature information; and updating the correspondence between the newly extracted face feature information and the identities into the face database.
- 8. The method according to any one of claims 1 to 5, characterized in that performing feature extraction on the target face image based on the sequentially connected residual blocks in the deep residual network includes: decoding the target face image to obtain a decoded image; and performing feature extraction on the decoded image based on the sequentially connected residual blocks in the deep residual network.
- 9. The method according to claim 2, characterized in that the method further includes: receiving a second face retrieval request sent by the terminal, the second face retrieval request including a target identity; if the face database includes the target identity, sending the specified face image matched with the target identity to the terminal; and receiving an operation processing request for the specified face image sent by the terminal, and performing operation processing on the specified face image according to the operation processing request.
- 10. A face retrieval device, characterized in that the device includes: an acquisition module, configured to obtain a target face image to be retrieved; a feature extraction module, configured to perform feature extraction on the target face image based on sequentially connected residual blocks in a deep residual network to obtain target face feature information, wherein any residual block includes an identity mapping and at least two convolutional layers, and the identity mapping of any residual block points from the input of that residual block to its output; and a retrieval module, configured to perform face retrieval in a face database based on the target face feature information to obtain a face retrieval result, wherein the face database stores the correspondence between face feature information and identities, and the face retrieval result includes at least the identity matched with the target face feature information.
- 11. device according to claim 10, which is characterized in that the acquisition module is additionally operable to receive what terminal was sent First face retrieval request obtains the target facial image from the first face retrieval request;Described device further includes:Sending module, for after the face retrieval result is obtained, the face retrieval result to be sent to the terminal.
- 12. device according to claim 10, which is characterized in that the first convolutional layer at least two convolutional layer, Second convolutional layer and the 3rd convolutional layer are linked in sequence, first convolutional layer in the same size, institute with the 3rd convolutional layer The size for stating the first convolutional layer is less than second convolutional layer, and the identical mapping is directed toward by the input terminal of first convolutional layer The output terminal of 3rd convolutional layer;The characteristic extracting module is additionally operable to input the target facial image first residual error of the depth residual error network Block;For any one residual block, the output of a upper residual block is received, and based on first convolutional layer, described the Two convolutional layers and the 3rd convolutional layer carry out feature extraction to the output of a upper residual block;Obtain the described 3rd The output of 3rd convolutional layer and the output of a upper residual block are transferred to next residual error by the output of convolutional layer Block;The output of the last one residual block of the depth residual error network is obtained, obtains the target face characteristic information.
- 13. The apparatus according to any one of claims 10 to 12, characterized in that the feature extraction module is further configured to decode the target face image to obtain a decoded image, and perform feature extraction on the decoded image based on the sequentially connected residual blocks in the deep residual network.
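The decoding step of claim 13 can be illustrated as converting a received byte buffer into a numeric array the network can consume. The raw uncompressed layout below is purely an assumption for illustration; a real system would decode a compressed format such as JPEG or PNG at this point, which the claim does not specify.

```python
import numpy as np

def decode_image(image_bytes, height, width, channels=3):
    """Decode a raw (uncompressed) byte buffer into an H x W x C uint8
    array, then scale to float32 in [0, 1] as input for the residual
    network. The raw byte layout is assumed for illustration."""
    arr = np.frombuffer(image_bytes, dtype=np.uint8)
    arr = arr.reshape(height, width, channels)
    return arr.astype(np.float32) / 255.0
```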
- 14. A storage medium, characterized in that at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the face retrieval method according to any one of claims 1 to 9.
- 15. A device for face retrieval, characterized in that the device comprises a processor and a memory, at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the face retrieval method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810121581.2A CN108108499B (en) | 2018-02-07 | 2018-02-07 | Face retrieval method, device, storage medium and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108108499A true CN108108499A (en) | 2018-06-01 |
CN108108499B CN108108499B (en) | 2023-05-26 |
Family
ID=62222019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810121581.2A Active CN108108499B (en) | 2018-02-07 | 2018-02-07 | Face retrieval method, device, storage medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108108499B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010006367A1 (en) * | 2008-07-16 | 2010-01-21 | Imprezzeo Pty Ltd | Facial image recognition and retrieval |
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | A kind of face retrieval method based on multitask convolutional neural networks |
CN106874898A (en) * | 2017-04-08 | 2017-06-20 | 复旦大学 | Extensive face identification method based on depth convolutional neural networks model |
CN106919897A (en) * | 2016-12-30 | 2017-07-04 | 华北电力大学(保定) | A kind of facial image age estimation method based on three-level residual error network |
CN107273864A (en) * | 2017-06-22 | 2017-10-20 | 星际(重庆)智能装备技术研究院有限公司 | A kind of method for detecting human face based on deep learning |
CN107423690A (en) * | 2017-06-26 | 2017-12-01 | 广东工业大学 | A kind of face identification method and device |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019233244A1 (en) * | 2018-06-08 | 2019-12-12 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, and computer readable medium, and electronic device |
US11416781B2 (en) | 2018-06-08 | 2022-08-16 | Tencent Technology (Shenzhen) Company Ltd | Image processing method and apparatus, and computer-readable medium, and electronic device |
CN109033971A (en) * | 2018-06-27 | 2018-12-18 | 中国石油大学(华东) | A kind of efficient pedestrian recognition methods again based on residual error Network Theory |
CN109002789A (en) * | 2018-07-10 | 2018-12-14 | 银河水滴科技(北京)有限公司 | A kind of face identification method applied to camera |
CN109002789B (en) * | 2018-07-10 | 2021-06-18 | 银河水滴科技(北京)有限公司 | Face recognition method applied to camera |
WO2020037896A1 (en) * | 2018-08-21 | 2020-02-27 | 平安科技(深圳)有限公司 | Facial feature value extraction method and device, computer apparatus, and storage medium |
CN110135231B (en) * | 2018-12-25 | 2021-05-28 | 杭州慧牧科技有限公司 | Animal face recognition method and device, computer equipment and storage medium |
CN110135231A (en) * | 2018-12-25 | 2019-08-16 | 杭州慧牧科技有限公司 | Animal face recognition methods, device, computer equipment and storage medium |
CN109993102A (en) * | 2019-03-28 | 2019-07-09 | 北京达佳互联信息技术有限公司 | Similar face retrieval method, apparatus and storage medium |
CN109993102B (en) * | 2019-03-28 | 2021-09-17 | 北京达佳互联信息技术有限公司 | Similar face retrieval method, device and storage medium |
CN109978067A (en) * | 2019-04-02 | 2019-07-05 | 北京市天元网络技术股份有限公司 | A kind of trade-mark searching method and device based on convolutional neural networks and Scale invariant features transform |
CN110020093A (en) * | 2019-04-08 | 2019-07-16 | 深圳市网心科技有限公司 | Video retrieval method, edge device, video frequency searching device and storage medium |
CN109871909A (en) * | 2019-04-16 | 2019-06-11 | 京东方科技集团股份有限公司 | Image-recognizing method and device |
CN110232799A (en) * | 2019-06-24 | 2019-09-13 | 秒针信息技术有限公司 | The method and device of pursuing missing object |
CN110942046A (en) * | 2019-12-05 | 2020-03-31 | 腾讯云计算(北京)有限责任公司 | Image retrieval method, device, equipment and storage medium |
CN110942046B (en) * | 2019-12-05 | 2023-04-07 | 腾讯云计算(北京)有限责任公司 | Image retrieval method, device, equipment and storage medium |
CN111339345B (en) * | 2020-02-26 | 2023-09-19 | 北京国网信通埃森哲信息技术有限公司 | Multi-platform face recognition service interface differentiated shielding method, system and storage medium |
CN111339345A (en) * | 2020-02-26 | 2020-06-26 | 北京国网信通埃森哲信息技术有限公司 | Method, system and storage medium for differential shielding of multi-platform face recognition service interface |
CN111368766A (en) * | 2020-03-09 | 2020-07-03 | 云南安华防灾减灾科技有限责任公司 | Cattle face detection and identification method based on deep learning |
CN111368766B (en) * | 2020-03-09 | 2023-08-18 | 云南安华防灾减灾科技有限责任公司 | Deep learning-based cow face detection and recognition method |
CN111723647B (en) * | 2020-04-29 | 2022-04-15 | 平安国际智慧城市科技股份有限公司 | Path-based face recognition method and device, computer equipment and storage medium |
CN111723647A (en) * | 2020-04-29 | 2020-09-29 | 平安国际智慧城市科技股份有限公司 | Path-based face recognition method and device, computer equipment and storage medium |
CN113191911A (en) * | 2021-07-01 | 2021-07-30 | 明品云(北京)数据科技有限公司 | Insurance recommendation method, system, equipment and medium based on user information |
CN114942942A (en) * | 2022-05-18 | 2022-08-26 | 马上消费金融股份有限公司 | Characteristic data query method and device and user registration query method and device |
Also Published As
Publication number | Publication date |
---|---|
CN108108499B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108108499A (en) | Face retrieval method, apparatus, storage medium and equipment | |
US11294953B2 (en) | Similar face retrieval method, device and storage medium | |
CN106383891B (en) | A kind of medical image distributed search method based on depth Hash | |
WO2019099899A1 (en) | Analyzing spatially-sparse data based on submanifold sparse convolutional neural networks | |
US20220222918A1 (en) | Image retrieval method and apparatus, storage medium, and device | |
Din et al. | Service orchestration of optimizing continuous features in industrial surveillance using big data based fog-enabled internet of things | |
CN109034206A (en) | Image classification recognition methods, device, electronic equipment and computer-readable medium | |
US20230080230A1 (en) | Method for generating federated learning model | |
CN112131261B (en) | Community query method and device based on community network and computer equipment | |
WO2021136058A1 (en) | Video processing method and device | |
CN107748779A (en) | information generating method and device | |
WO2022111387A1 (en) | Data processing method and related apparatus | |
CN106407381A (en) | Method and device for pushing information based on artificial intelligence | |
WO2024067884A1 (en) | Data processing method and related apparatus | |
CN111091010A (en) | Similarity determination method, similarity determination device, network training device, network searching device and storage medium | |
Chen et al. | An augmented reality question answering system based on ensemble neural networks | |
CN116821301A (en) | Knowledge graph-based problem response method, device, medium and computer equipment | |
Bai | Construction of a smart library subject precise service platform based on user needs | |
CN112598039A (en) | Method for acquiring positive sample in NLP classification field and related equipment | |
Cao | Design and Implementation of an Intelligent Machine Learning System Based on Artificial Intelligence Computing | |
Ek et al. | Federated Self-Supervised Learning in Heterogeneous Settings: Limits of a Baseline Approach on HAR | |
Kim et al. | Rete-ADH: An improvement to rete for composite context-aware service | |
CN109961319A (en) | A kind of acquisition methods and device of house type transformation information | |
Yin et al. | VAECGAN: a generating framework for long-term prediction in multivariate time series | |
Liang et al. | Human gesture recognition of dynamic skeleton using graph convolutional networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||