CN108197561B - Face recognition model optimization control method, device, equipment and storage medium

Face recognition model optimization control method, device, equipment and storage medium

Info

Publication number
CN108197561B
CN108197561B (application CN201711472112.7A)
Authority
CN
China
Prior art keywords
gradient
model
face recognition
loss function
optimization control
Prior art date
Legal status
Active
Application number
CN201711472112.7A
Other languages
Chinese (zh)
Other versions
CN108197561A (en)
Inventor
杨光磊
杨东
王栋
Current Assignee
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date
Filing date
Publication date
Application filed by Athena Eyes Co Ltd filed Critical Athena Eyes Co Ltd
Priority to CN201711472112.7A
Publication of CN108197561A
Application granted
Publication of CN108197561B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a face recognition model optimization control method, device, equipment and storage medium. The optimization control method comprises the following steps: performing a loss calculation on the normalized features with a triplet loss function to generate a first return gradient; performing a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, to generate a second return gradient; and optimizing the parameters of the deep network model using the first return gradient and the second return gradient. In the invention, the softmax loss function is introduced when the triplet loss function is used to optimize the model. Because the softmax loss function converges faster, its auxiliary effect can effectively improve the training speed and recognition accuracy of the face recognition model.

Description

Face recognition model optimization control method, device, equipment and storage medium
Technical Field
The present invention relates to the field of face recognition, and in particular, to a method, an apparatus, a device, and a storage medium for optimizing and controlling a face recognition model.
Background
With the progress of science and technology, more and more automatic algorithms and devices are applied in our lives. Face recognition algorithms, which can automatically authenticate a user's identity, have developed rapidly in recent years. At present, face recognition products are widely used in finance, the judiciary, the army, public security, border inspection and other enterprises and public institutions, and have been widely accepted by the public.
Face recognition algorithms have been developed for decades, and recognition accuracy has improved dramatically, from the early use of manually designed face features to the current use of deep learning methods to extract features. In 2006, Raia Hadsell et al. proposed optimizing the model with a contrastive loss, which yields better classification by constraining the intra-class and inter-class distances. In 2015, Florian Schroff et al. proposed a metric learning method that optimizes a deep learning model using triplets. It is similar to the method of Raia Hadsell et al. in that it also optimizes by constraining intra-class and inter-class distances, but it does not require all intra-class distances to be smaller than, and all inter-class distances to be larger than, a fixed threshold; it only requires that, for each compared pair of distances, the inter-class distance exceed the intra-class distance by a margin, thereby dispensing with the threshold setting.
Optimization with the contrastive cost function requires setting thresholds that apply to all intra-class and inter-class distances; because application scenarios are complex and variable, it is usually difficult to find a uniform threshold suitable for all intra-class and inter-class distances. Although optimizing the depth model with the triplet loss function overcomes this drawback, it makes the deep learning model difficult to converge, slows the optimization, and makes a satisfactory optimization result hard to obtain.
Disclosure of Invention
The invention provides a face recognition model optimization control method, device, equipment and storage medium, and aims to solve the technical problem that, when a face recognition model is optimized with the conventional contrastive cost function or triplet loss function, the model converges slowly or even fails to converge.
The technical scheme adopted by the invention is as follows:
The invention provides a face recognition model optimization control method for optimizing the parameters of a deep learning model for face recognition, wherein the deep learning model comprises a deep network model for receiving a plurality of training images and outputting a plurality of feature values, and a normalization model for normalizing the plurality of feature values. The optimization control method comprises the following steps:
performing a loss calculation on the normalized features with a triplet loss function to generate a first return gradient;
performing a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, to generate a second return gradient;
and optimizing the parameters of the deep network model using the first return gradient and the second return gradient.
Further, optimizing the parameters of the deep network model using the first return gradient and the second return gradient includes:
setting a first weight value for the first return gradient and a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and updating the parameters of the deep network model using a linear weighting of the first return gradient and the second return gradient.
Further, performing the loss calculation with the softmax loss function on the features output after the plurality of feature values are fully connected, and generating the second return gradient, includes:
performing fully connected mapping on the plurality of feature values with a fully connected layer to obtain a plurality of output features;
and feeding the output features into the softmax loss function to calculate the second return gradient.
Further, the softmax loss function is:

L_{softmax} = -\sum_{k} y_k \log p_k

wherein

p_k = \frac{e^{t_k}}{\sum_{j=1}^{N} e^{t_j}}

t_k denotes the k-th dimension of the feature t produced by the fully connected layer and fed into the softmax loss, N is the total dimension of the feature, and y_k is the classification label over M classes, each entry being 0 or 1: the entry corresponding to the image's class is 1 and the rest are 0.
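For a purely illustrative sense of this loss (the numbers are chosen arbitrarily and are not taken from the patent), suppose the fully connected layer outputs t = (2, 1, 0) for an image of the first class, so y = (1, 0, 0). Then

p_1 = \frac{e^{2}}{e^{2} + e^{1} + e^{0}} \approx \frac{7.389}{11.107} \approx 0.665, \qquad L_{softmax} = -\log 0.665 \approx 0.41,

and the loss decreases as the logit of the correct class grows relative to the other logits, which is what drives the second return gradient.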
Further, the face recognition model optimization control method of the invention further comprises:
updating the parameters of the fully connected layer using the second return gradient.
Further, the triplet loss function is:

L_{triplet} = \left[\, \lVert T(x_a) - T(x_p) \rVert_2^2 - \lVert T(x_a) - T(x_n) \rVert_2^2 + \alpha \,\right]_+

wherein x_a, x_p, x_n respectively denote the three images labeled a, p and n, T(\cdot) is the deep neural network comprising the deep network model and the normalization model, and [x]_+ = \max(0, x);

d_{ap} = \lVert T(x_a) - T(x_p) \rVert_2^2 is the distance between images a and p of the same person, d_{an} = \lVert T(x_a) - T(x_n) \rVert_2^2 is the distance between images a and n of different persons, and \alpha is the required margin between d_{ap} and d_{an}; when d_{an} - d_{ap} is greater than 0 but less than \alpha, a loss is produced and the first return gradient is generated.
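As an equally arbitrary numerical illustration of the triplet case: with margin \alpha = 0.4, an intra-class distance d_{ap} = 0.3 and an inter-class distance d_{an} = 0.5 violate the margin because d_{an} - d_{ap} = 0.2 < \alpha, giving

L_{triplet} = [\,0.3 - 0.5 + 0.4\,]_+ = 0.2,

so a first return gradient is produced; if instead d_{an} = 0.8, the bracket is negative, the loss is 0, and no gradient flows.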
According to another aspect of the present invention, there is also provided a face recognition model optimization control apparatus for optimizing the parameters of a deep learning model for face recognition, the deep learning model comprising a deep network model for receiving a plurality of training images and outputting a plurality of feature values, and a normalization model for normalizing the plurality of feature values, the optimization control apparatus comprising:
a first operation module, configured to perform a loss calculation on the normalized features with a triplet loss function and generate a first return gradient;
a second operation module, configured to perform a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, and generate a second return gradient;
and a parameter optimization module, configured to optimize the parameters of the deep network model using the first return gradient and the second return gradient.
Further, the parameter optimization module comprises:
the weight setting unit is used for setting a first weight value for the first return gradient and setting a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and the joint optimization unit is used for updating the parameters of the deep network model by utilizing the linear weighting of the first return gradient and the second return gradient.
According to another aspect of the present invention, there is also provided a face recognition model optimization control apparatus, including a processor, where the processor is configured to execute a program, and the program executes the face recognition model optimization control method according to the present invention.
According to another aspect of the present invention, a storage medium is further provided, where the storage medium includes a stored program, and the program controls, when running, an apparatus on which the storage medium is located to execute the face recognition model optimization control method of the present invention.
The invention has the following beneficial effects:
according to the face recognition model optimization control method, device, equipment and storage medium, the softmax loss function is introduced when the triplet loss function is utilized to optimize the model, and the softmax loss function has a faster convergence effect, so that the training speed and the recognition accuracy of the face recognition model can be effectively improved through the auxiliary action of the softmax loss function.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram illustrating steps of a face recognition model optimization control method according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of a face recognition model optimization control method according to a preferred embodiment of the present invention;
fig. 3 is a schematic block diagram of a face recognition model optimization control device according to a preferred embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1 and fig. 2, a preferred embodiment of the present invention provides a face recognition model optimization control method, configured to optimize parameters of a deep learning model for face recognition, where in this embodiment, the deep learning model includes a deep network model for receiving a plurality of training images and outputting a plurality of feature values, and a normalization model for performing normalization processing on the plurality of feature values, and the optimization control method in this embodiment includes:
Step S100: a loss calculation is performed on the normalized features with a triplet loss function, and a first return gradient is generated;
Step S200: a loss calculation is performed with a softmax loss function on the features output after the feature values are fully connected, and a second return gradient is generated;
Step S300: the parameters of the deep network model are optimized using the first return gradient and the second return gradient.
When the face recognition model is optimized with the contrastive loss function or the triplet loss function alone, the model converges slowly or even fails to converge. In this embodiment, a softmax loss function, which is easier to optimize, is introduced when the model is optimized; the auxiliary effect of the softmax function effectively accelerates model convergence, and the mutual reinforcement of the two loss functions improves the recognition accuracy of the model.
In this embodiment, the deep network model has an input interface for receiving training images. When the face recognition model is trained, a training image is input into the deep network model; the feature f output by the deep network model is linearly mapped to an N-dimensional feature t through a fully connected layer (N being the number of face image classes used for training, i.e., the number of training identities), and the feature t is fed into the softmax loss function to complete the model loss calculation and gradient return.
In this embodiment, the triplet method requires three images for each training step. After three images are selected (two images of the same person and one image of a different person), they are fed into the deep network model in the same way as above to output the features f1, f2 and f3 of the three images. The three features are normalized and then fed into the triplet loss function to complete the model loss calculation and gradient return.
In the gradient return process, the gradient values returned by the two loss functions are superposed on each other and act jointly on the parameters of the deep network model, so that the network can quickly complete its optimization.
In this embodiment, the softmax loss function is introduced when the triplet loss function is used to optimize the model; because the softmax loss function converges faster, its auxiliary effect can effectively improve the training speed and recognition accuracy of the face recognition model.
Preferably, optimizing the parameters of the deep network model using the first return gradient and the second return gradient includes:
setting a first weight value for the first return gradient and a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and updating the parameters of the deep network model using a linear weighting of the first return gradient and the second return gradient.
In this embodiment, performing the loss calculation with the softmax loss function on the features output after the feature values are fully connected, and generating the second return gradient, includes:
performing fully connected mapping on the plurality of feature values with a fully connected layer to obtain a plurality of output features;
and feeding the output features into the softmax loss function to calculate the second return gradient.
Preferably, the softmax loss function is:

L_{softmax} = -\sum_{k} y_k \log p_k

wherein

p_k = \frac{e^{t_k}}{\sum_{j=1}^{N} e^{t_j}}

t_k denotes the k-th dimension of the feature t produced by the fully connected layer and fed into the softmax loss, N is the total dimension of the feature, and y_k is the classification label over M classes, each entry being 0 or 1: the entry corresponding to the image's class is 1 and the rest are 0.
In this embodiment, the face recognition model optimization control method further includes:
and updating the parameters of the full connection layer by utilizing the second backhaul gradient.
Preferably, the triplet loss function is:

L_{triplet} = \left[\, \lVert T(x_a) - T(x_p) \rVert_2^2 - \lVert T(x_a) - T(x_n) \rVert_2^2 + \alpha \,\right]_+

wherein x_a, x_p, x_n respectively denote the three images labeled a, p and n, T(\cdot) is the deep neural network comprising the deep network model and the normalization model, and [x]_+ = \max(0, x);

d_{ap} = \lVert T(x_a) - T(x_p) \rVert_2^2 is the distance between images a and p of the same person, d_{an} = \lVert T(x_a) - T(x_n) \rVert_2^2 is the distance between images a and n of different persons, and \alpha is the required margin between d_{ap} and d_{an}; when d_{an} - d_{ap} is greater than 0 but less than \alpha, a loss is produced and the first return gradient is generated.
In a preferred embodiment, referring to fig. 2, the face recognition model optimization control method includes the following steps:
1. optimizing a network model using a triplet loss function
When the deep learning network model is optimized with the triplet loss function, three images must be input each time, labeled a, p and n respectively, where a and p are images of the same person and n is an image of a different person. Optimization with the triplet loss function makes the image features of the same person more similar and the image features of different persons more distinct. The triplet loss function is as follows:

L_{triplet} = \left[\, \lVert T(x_a) - T(x_p) \rVert_2^2 - \lVert T(x_a) - T(x_n) \rVert_2^2 + \alpha \,\right]_+

wherein x_a, x_p, x_n respectively denote the three images labeled a, p and n, T(\cdot) is the deep neural network comprising the deep network model and the normalization model in FIG. 2, and [x]_+ = \max(0, x). In the loss function, d_{ap} = \lVert T(x_a) - T(x_p) \rVert_2^2 is the distance between images a and p of the same person, d_{an} = \lVert T(x_a) - T(x_n) \rVert_2^2 is the distance between images a and n of different persons, and \alpha is the set margin between d_{ap} and d_{an}. When d_{an} - d_{ap} is greater than 0 but less than \alpha, a loss is generated and the gradient is returned; through this arrangement the image features are optimized so that the gap between d_{ap} and d_{an} becomes as large as possible.
In implementation, as shown in FIG. 2, images 1, 2 and 3 are respectively fed into the deep network model to obtain the features f_1, f_2 and f_3; the features are each normalized, fed into the triplet loss function, and finally the gradient is returned. A minimal sketch of this branch is given below.
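Purely as an illustration of this triplet branch (a PyTorch-style sketch under assumptions, not the patented implementation), the following code assumes a hypothetical `deep_net` module that maps a batch of images to feature vectors; it normalizes the three features and evaluates the [d_ap - d_an + alpha]_+ loss described above.

```python
import torch
import torch.nn.functional as F

def triplet_loss(deep_net, img_a, img_p, img_n, alpha=0.2):
    """Triplet branch: three images -> features f1, f2, f3 -> L2 normalization -> loss."""
    f1 = F.normalize(deep_net(img_a), p=2, dim=1)  # feature of image a (anchor)
    f2 = F.normalize(deep_net(img_p), p=2, dim=1)  # feature of image p (same person as a)
    f3 = F.normalize(deep_net(img_n), p=2, dim=1)  # feature of image n (different person)
    d_ap = (f1 - f2).pow(2).sum(dim=1)             # squared distance within the class
    d_an = (f1 - f3).pow(2).sum(dim=1)             # squared distance between classes
    return torch.clamp(d_ap - d_an + alpha, min=0).mean()  # [x]+ = max(0, x)
```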
2. Optimizing a network model using a softmax loss function
When the network model is optimized with the softmax loss function, only one image needs to be input at a time; here images 1, 2 and 3 are input simultaneously to match the use of the triplet function. The softmax loss function is as follows:

L_{softmax} = -\sum_{k} y_k \log p_k

wherein

p_k = \frac{e^{t_k}}{\sum_{j=1}^{N} e^{t_j}}

t_k represents the k-th dimension of the feature t fed into the softmax loss from the fully connected layer, N is the total dimension of the feature (i.e., the number of persons in the training samples, one class per person), and y_k is the classification label over M classes, each entry being 0 or 1: the entry corresponding to the image's class is 1 and the rest are 0.
As shown in FIG. 2, after the feature f of each image is obtained from the deep network model, it is passed through the fully connected layer to output the feature t; the feature t is fed into the softmax loss function to calculate the loss and then return the gradient, as sketched below.
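A comparable sketch of the softmax branch, again a minimal assumption-laden illustration rather than the patented implementation: the hypothetical `SoftmaxHead` module maps the backbone feature f to an N-dimensional vector t through a fully connected layer and evaluates the softmax cross-entropy loss (`feat_dim` and `num_identities` are illustrative parameter names).

```python
import torch.nn as nn
import torch.nn.functional as F

class SoftmaxHead(nn.Module):
    """Softmax branch: feature f -> fully connected layer -> N-dimensional t -> softmax loss."""
    def __init__(self, feat_dim, num_identities):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_identities)  # maps f to the N-dimensional feature t

    def forward(self, f, labels):
        t = self.fc(f)                     # fully connected mapping
        return F.cross_entropy(t, labels)  # equivalent to -sum_k y_k * log(softmax(t)_k)
```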
3. Deep neural network joint optimization
As shown in the neural network joint optimization flow of FIG. 2, the parameters updated in the optimization process are concentrated in the deep network model and in the fully connected layer in front of the softmax loss function. The deep network model is the part shared by the two loss functions, while the fully connected layer belongs only to the softmax loss branch. The update of the parameters of the fully connected layer is therefore determined entirely by the second return gradient returned by the softmax loss function. For the shared part, the respective weights of the two loss functions need to be set: since the triplet loss function is the main cost function for network optimization, it is given a higher weight, and since the softmax function is an auxiliary function, it is given a lower weight; for example, the return gradient of the triplet loss function may be given a weight of 0.7 and the return gradient of the softmax loss function a weight of 0.3. The parameter update for the deep network model is then the linear weighting of the return gradients of the two loss functions.
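Combining the two sketches above, one possible joint update step is shown below; this is an illustrative approximation under assumptions, not the patent's exact procedure. The two losses are combined with weights 0.7 and 0.3 before a single backward pass, so the shared backbone receives the linearly weighted sum of the two return gradients; note that scaling the softmax loss by 0.3 also scales the gradient reaching the fully connected layer by the same factor, a simplification of the description above in which that layer's update is determined by the softmax loss alone.

```python
import itertools
import torch

# Assumed to exist from the sketches above: deep_net (shared backbone),
# softmax_head (a SoftmaxHead instance) and triplet_loss(); all names are illustrative.
optimizer = torch.optim.SGD(
    itertools.chain(deep_net.parameters(), softmax_head.parameters()), lr=0.01)

def train_step(img_a, img_p, img_n, labels, alpha=0.2):
    """One joint update: 0.7 * triplet loss + 0.3 * softmax loss.

    labels holds the identity labels of images a, p and n, concatenated in that order.
    """
    optimizer.zero_grad()
    # Main cost function: triplet loss on the three images (weight 0.7).
    l_tri = triplet_loss(deep_net, img_a, img_p, img_n, alpha)
    # Auxiliary cost function: softmax loss on images 1, 2, 3 and their labels (weight 0.3).
    feats = deep_net(torch.cat([img_a, img_p, img_n], dim=0))
    l_soft = softmax_head(feats, labels)
    # A single backward pass superposes the two return gradients on the shared
    # deep network parameters as a linear weighting of the two losses.
    (0.7 * l_tri + 0.3 * l_soft).backward()
    optimizer.step()
    return l_tri.item(), l_soft.item()
```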
The face recognition model optimization control method has the following two advantages:
1) A softmax loss function is added to jointly optimize the parameters of the deep neural network. Because the triplet loss function of the common metric learning method converges slowly or cannot converge, optimizing the model parameters of the deep neural network jointly with the softmax loss function accelerates training of the network model and makes the model easier to converge.
2) A joint loss function is introduced, improving the accuracy of the network model on the face recognition task. In the method, the softmax loss function for face classification and the triplet loss function for face recognition are jointly optimized, and the two loss functions simultaneously constrain the updating of the deep network model parameters, so that the network model achieves higher accuracy on the face recognition task.
According to another aspect of the present invention, there is also provided a face recognition model optimization control apparatus for optimizing the parameters of a deep learning model for face recognition, where the deep learning model comprises a deep network model for receiving a plurality of training images and outputting a plurality of feature values, and a normalization model for normalizing the plurality of feature values. Referring to fig. 3, the optimization control apparatus of this embodiment comprises:
a first operation module 100, configured to perform a loss calculation on the normalized features with a triplet loss function and generate a first return gradient;
a second operation module 200, configured to perform a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, and generate a second return gradient;
and a parameter optimization module 300, configured to optimize the parameters of the deep network model using the first return gradient and the second return gradient.
Preferably, in this embodiment, the parameter optimization module 300 includes:
the weight setting unit is used for setting a first weight value for the first return gradient and setting a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and the joint optimization unit is used for updating the parameters of the deep network model by utilizing the linear weighting of the first return gradient and the second return gradient.
The face recognition model optimization control device of the embodiment corresponds to the method embodiment, and the specific control process can refer to the method embodiment.
According to another aspect of the present invention, there is also provided a face recognition model optimization control apparatus, including a processor, where the processor is configured to execute a program, and the program executes the face recognition model optimization control method according to the foregoing embodiment of the present invention.
According to another aspect of the present invention, a storage medium is further provided, where the storage medium includes a stored program, and the program controls, when running, an apparatus on which the storage medium is located to execute the face recognition model optimization control method according to the foregoing embodiment of the present invention.
According to the face recognition model optimization control method, device, equipment and storage medium of the invention, the softmax loss function is introduced when the triplet loss function is used to optimize the model. Because the softmax loss function converges faster, its auxiliary effect can effectively improve the training speed and recognition accuracy of the face recognition model.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The functions described in the method of the present embodiment, if implemented in the form of software functional units and sold or used as independent products, may be stored in one or more storage media readable by a computing device. Based on such understanding, part of the contribution of the embodiments of the present invention to the prior art or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A face recognition model optimization control method for optimizing parameters of a deep learning model for face recognition, wherein the deep learning model comprises a deep network model for receiving a plurality of training images and outputting a plurality of feature values, a normalization model for normalizing the plurality of feature values, and a fully connected layer for performing fully connected mapping on the plurality of feature values to obtain a plurality of output features, the optimization control method comprising the following steps:
performing a loss calculation on the normalized features with a triplet loss function to generate a first return gradient;
performing a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, to generate a second return gradient;
updating the parameters corresponding to the fully connected layer according to the second return gradient, and optimizing the parameters of the deep network model using the first return gradient and the second return gradient;
wherein optimizing the parameters of the deep network model using the first return gradient and the second return gradient includes:
setting a first weight value for the first return gradient and a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and updating the parameters of the deep network model using a linear weighting of the first return gradient and the second return gradient.
2. The face recognition model optimization control method of claim 1,
the softmax loss function is:

L_{softmax} = -\sum_{k} y_k \log p_k

wherein

p_k = \frac{e^{t_k}}{\sum_{j=1}^{N} e^{t_j}}

t_k represents the k-th dimension of the fully connected layer feature t, N is the total dimension of the feature, and y_k is the classification label, each of the M classes taking the value 0 or 1.
3. The face recognition model optimization control method of claim 1,
the triplet loss function is:

L_{triplet} = \left[\, \lVert T(x_a) - T(x_p) \rVert_2^2 - \lVert T(x_a) - T(x_n) \rVert_2^2 + \alpha \,\right]_+

wherein x_a, x_p, x_n respectively denote three images labeled a, p and n, T(\cdot) is a deep neural network comprising said deep network model and said normalization model, and [x]_+ = \max(0, x);

d_{ap} = \lVert T(x_a) - T(x_p) \rVert_2^2 is the distance between images a and p of the same person, d_{an} = \lVert T(x_a) - T(x_n) \rVert_2^2 is the distance between images a and n of different persons, and \alpha is the margin between d_{ap} and d_{an}; when d_{an} - d_{ap} is greater than 0 and less than \alpha, a loss is produced and the first return gradient is generated.
4. A face recognition model optimization control device for optimizing parameters of a deep learning model for face recognition, wherein the deep learning model comprises a deep network model for receiving a plurality of training images and outputting a plurality of feature values, a normalization model for normalizing the plurality of feature values, and a fully connected layer for performing fully connected mapping on the plurality of feature values to obtain a plurality of output features, the optimization control device comprising:
a first operation module, configured to perform a loss calculation on the normalized features with a triplet loss function and generate a first return gradient;
a second operation module, configured to perform a loss calculation with a softmax loss function on the features output after the plurality of feature values are fully connected, and generate a second return gradient;
a parameter optimization module, configured to update the parameters corresponding to the fully connected layer according to the second return gradient, and to optimize the parameters of the deep network model using the first return gradient and the second return gradient;
the parameter optimization module comprises:
the weight setting unit is used for setting a first weight value for the first return gradient and setting a second weight value for the second return gradient, wherein the first weight value is greater than the second weight value;
and the joint optimization unit is used for updating the parameters of the deep network model by using the linear weighting of the first return gradient and the second return gradient.
5. A face recognition model optimization control apparatus comprising a processor configured to execute a program, wherein the program, when executed, performs the face recognition model optimization control method according to any one of claims 1 to 3.
6. A storage medium comprising a stored program, wherein the program when executed controls a device on which the storage medium is located to perform the face recognition model optimization control method according to any one of claims 1 to 3.
CN201711472112.7A 2017-12-29 2017-12-29 Face recognition model optimization control method, device, equipment and storage medium Active CN108197561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711472112.7A CN108197561B (en) 2017-12-29 2017-12-29 Face recognition model optimization control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711472112.7A CN108197561B (en) 2017-12-29 2017-12-29 Face recognition model optimization control method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108197561A CN108197561A (en) 2018-06-22
CN108197561B true CN108197561B (en) 2020-11-03

Family

ID=62586343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711472112.7A Active CN108197561B (en) 2017-12-29 2017-12-29 Face recognition model optimization control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108197561B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112912887A (en) * 2018-11-08 2021-06-04 北京比特大陆科技有限公司 Processing method, device and equipment based on face recognition and readable storage medium
CN109598385A (en) * 2018-12-07 2019-04-09 深圳前海微众银行股份有限公司 Anti money washing combination learning method, apparatus, equipment, system and storage medium
CN113096023B (en) * 2020-01-08 2023-10-27 字节跳动有限公司 Training method, image processing method and device for neural network and storage medium
CN111723709B (en) * 2020-06-09 2023-07-11 大连海事大学 Fly face recognition method based on deep convolutional neural network
CN114255354B (en) * 2021-12-31 2023-04-07 智慧眼科技股份有限公司 Face recognition model training method, face recognition device and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574215A (en) * 2016-03-04 2016-05-11 哈尔滨工业大学深圳研究生院 Instance-level image search method based on multiple layers of feature representations
CN106845330A (en) * 2016-11-17 2017-06-13 北京品恩科技股份有限公司 A kind of training method of the two-dimension human face identification model based on depth convolutional neural networks
CN106919951A (en) * 2017-01-24 2017-07-04 杭州电子科技大学 A kind of Weakly supervised bilinearity deep learning method merged with vision based on click
CN107247949A (en) * 2017-08-02 2017-10-13 北京智慧眼科技股份有限公司 Face identification method, device and electronic equipment based on deep learning
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574215A (en) * 2016-03-04 2016-05-11 哈尔滨工业大学深圳研究生院 Instance-level image search method based on multiple layers of feature representations
CN106845330A (en) * 2016-11-17 2017-06-13 北京品恩科技股份有限公司 A kind of training method of the two-dimension human face identification model based on depth convolutional neural networks
CN106919951A (en) * 2017-01-24 2017-07-04 杭州电子科技大学 A kind of Weakly supervised bilinearity deep learning method merged with vision based on click
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN107247949A (en) * 2017-08-02 2017-10-13 北京智慧眼科技股份有限公司 Face identification method, device and electronic equipment based on deep learning

Also Published As

Publication number Publication date
CN108197561A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
CN108197561B (en) Face recognition model optimization control method, device, equipment and storage medium
US10983841B2 (en) Systems and methods for removing identifiable information
CN106096535B (en) Face verification method based on bilinear joint CNN
CN111461226A (en) Countermeasure sample generation method, device, terminal and readable storage medium
CN111444826B (en) Video detection method, device, storage medium and computer equipment
CN106778910B (en) Deep learning system and method based on local training
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
CN110929836B (en) Neural network training and image processing method and device, electronic equipment and medium
JP2022141931A (en) Method and device for training living body detection model, method and apparatus for living body detection, electronic apparatus, storage medium, and computer program
CN111597779A (en) Text generation method, device, equipment and storage medium
CN110647916A (en) Pornographic picture identification method and device based on convolutional neural network
CN112600794A (en) Method for detecting GAN attack in combined deep learning
CN112749558A (en) Target content acquisition method and device, computer equipment and storage medium
CN114282059A (en) Video retrieval method, device, equipment and storage medium
CN115984930A (en) Micro expression recognition method and device and micro expression recognition model training method
Zhu et al. A novel simple visual tracking algorithm based on hashing and deep learning
CN111428701A (en) Small-area fingerprint image feature extraction method, system, terminal and storage medium
He et al. Finger vein image deblurring using neighbors-based binary-gan (nb-gan)
CN114155572A (en) Facial expression recognition method and system
Liu Human face expression recognition based on deep learning-deep convolutional neural network
CN106529490A (en) System and method for realizing handwriting identification based on sparse auto-encoding codebook
CN113269719A (en) Model training method, image processing method, device, equipment and storage medium
CN113361346A (en) Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN109447240B (en) Training method of graphic image replication model, storage medium and computing device
KR102393759B1 (en) Method and system for generating an image processing artificial nerual network model operating in a device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100097 Beijing Haidian District Kunming Hunan Road 51 C block two floor 207.

Applicant after: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

Address before: 100193 4, 403, block A, 14 building, 10 East North Road, Haidian District, Beijing.

Applicant before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
CB02 Change of applicant information

Address after: 410205 14 Changsha Zhongdian Software Park Phase I, 39 Jianshan Road, Changsha High-tech Development Zone, Yuelu District, Changsha City, Hunan Province

Applicant after: Wisdom Eye Technology Co.,Ltd.

Address before: 100097 2nd Floor 207, Block C, 51 Hunan Road, Kunming, Haidian District, Beijing

Applicant before: BEIJING ATHENA EYES SCIENCE & TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Optimization control method, device, equipment and storage medium of face recognition model

Effective date of registration: 20221205

Granted publication date: 20201103

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20231220

Granted publication date: 20201103

Pledgee: Agricultural Bank of China Limited Hunan Xiangjiang New Area Branch

Pledgor: Wisdom Eye Technology Co.,Ltd.

Registration number: Y2022430000107

PC01 Cancellation of the registration of the contract for pledge of patent right
CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Address before: 410205 building 14, phase I, Changsha Zhongdian Software Park, No. 39, Jianshan Road, Changsha high tech Development Zone, Yuelu District, Changsha City, Hunan Province

Patentee before: Wisdom Eye Technology Co.,Ltd.

CP03 Change of name, title or address