CN111209839B - Face recognition method - Google Patents


Info

Publication number
CN111209839B
CN111209839B (application CN201911422660.8A)
Authority
CN
China
Prior art keywords
face
face recognition
feature vectors
far
recognition method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911422660.8A
Other languages
Chinese (zh)
Other versions
CN111209839A (en)
Inventor
田晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Taorun Medical Technology Co ltd
Original Assignee
Shanghai Taorun Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Taorun Medical Technology Co ltd filed Critical Shanghai Taorun Medical Technology Co ltd
Priority to CN201911422660.8A priority Critical patent/CN111209839B/en
Publication of CN111209839A publication Critical patent/CN111209839A/en
Application granted granted Critical
Publication of CN111209839B publication Critical patent/CN111209839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face recognition method, which comprises the following steps: s1, acquiring a face picture; s2, extracting face features through a deep neural network; s3, acquiring a first loss function; s4, acquiring a second loss function; s5, acquiring a total loss function; s6, carrying out similarity calculation on the two face feature vectors; and S7, a network training step. The face recognition method provided by the invention can improve the recognition accuracy. The invention can make the distance of the face feature vectors among different people as far as possible and the distance of the face feature vectors among the same person as close as possible.

Description

Face recognition method
Technical Field
The invention belongs to the technical field of face recognition, relates to a face recognition method, and in particular relates to a face recognition method capable of improving recognition accuracy.
Background
In the field of face recognition, the loss function generally used when optimizing classification and recognition models with deep learning is the softmax loss:

L_softmax = -(1/N)·Σ_{i=1}^{N} log( e^{W_{y_i}^T x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T x_i + b_j} )

For face recognition tasks, an excellent model should place the features of photos of the same person as close together as possible, and the features of photos of different people as far apart as possible. However, softmax only performs classification; it does not explicitly pull the photo features of the same person together, nor push the photo features of different people apart. For this reason, the ArcFace loss was proposed:

L_arcface = -(1/N)·Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i}+m)} / ( e^{s·cos(θ_{y_i}+m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) )

where cos θ_j = Ŵ_j^T x̂_i. Here x_i is the neural-network output for sample i; in face recognition it is generally used as the face feature. x̂_i = x_i/‖x_i‖ is the normalized face feature, and Ŵ_j = W_j/‖W_j‖ is the normalized weight vector. An angular margin m is added to the angle θ_{y_i} between x_i and W_{y_i}, penalizing the angle between each face photo's features and the corresponding class weight in an additive way. This makes the included angle between each face's photo features and the corresponding weight vector as small as possible, i.e. each face's photo features stay as close as possible to the corresponding weight vector, thereby strengthening intra-class compactness: the photo features of the same face are as close as possible. s is a hyperparameter set before training.
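The additive angular margin described above can be illustrated in a few lines of NumPy. This is a minimal sketch, not the patent's implementation; the hyperparameter values s = 30 and m = 0.1 follow those given in claim 4, and the toy feature and weight values are invented for the example.

```python
import numpy as np

def arcface_logits(features, weights, labels, s=30.0, m=0.1):
    """ArcFace-style logits: normalize features and class weights,
    add the angular margin m to the target-class angle theta_{y_i},
    then scale by the hyperparameter s."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(x @ W, -1.0, 1.0)              # cos(theta_j) for every class j
    theta = np.arccos(cos)
    theta[np.arange(len(labels)), labels] += m   # additive angular penalty
    return s * np.cos(theta)                     # feed into softmax cross-entropy

# The margin lowers only the target-class logit, so the feature must sit
# closer to its class weight vector before the loss is satisfied.
feats = np.array([[0.9, 0.1]])
W = np.array([[1.0, 0.0], [0.0, 1.0]])           # columns are class weights
print(arcface_logits(feats, W, np.array([0])).round(2))
```

The returned logits would then be passed to an ordinary softmax cross-entropy during training.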
As described above, the ArcFace loss is optimized so that each person's photo features are as close as possible to the corresponding weight vector, which makes the photo features of the same person as close as possible. It does not, however, explicitly require that the photo features of different people be as far apart as possible, as shown in fig. 1.
Therefore, if the weight vectors are kept as far apart as possible, the face-photo feature vectors of different people will also be as far apart as possible; the feature vectors of photos of the same face then remain as close as possible, while the feature vectors of photos of different faces are as far apart as possible.
In view of this, there is an urgent need to design a face recognition method so as to overcome the above-mentioned drawbacks of the existing recognition methods.
Disclosure of Invention
The invention provides a face recognition method which can improve recognition accuracy.
In order to solve the technical problems, according to one aspect of the present invention, the following technical scheme is adopted:
a face recognition method, the face recognition method comprising:
s1, acquiring a face picture;
s2, extracting face features through a deep neural network;
s3, acquiring a first loss function;
L_arcface = -(1/N)·Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i}+m)} / ( e^{s·cos(θ_{y_i}+m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) )
cos θ_j = Ŵ_j^T x̂_i
x̂_i = x_i/‖x_i‖
Ŵ_j = W_j/‖W_j‖
wherein N is the number of photos in each batch; s is a hyperparameter and m is a hyperparameter; W is the weight-vector matrix;
s4, acquiring a second loss function;
the second loss function L_edc is built from the shortest distances between the normalized classification weight vectors: for each Ŵ_j, the quantity min_{k≠j} ‖Ŵ_j − Ŵ_k‖ enters the loss such that the larger this shortest distance, the smaller L_edc; t is a hyperparameter;
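A sketch of this second loss, under an explicit assumption: the exp(−t·d) form below is a stand-in chosen only to match the stated property (the larger each weight vector's shortest distance to the others, the smaller the loss), not the patent's exact formula; t = 2 follows the described embodiment.

```python
import numpy as np

def edc_loss(W, t=2.0):
    """Hedged sketch of L_edc: for each normalized class weight vector
    (a column of W), take its shortest distance to any other class
    weight vector and map it through exp(-t*d), a decreasing function,
    so larger minimum separations yield a smaller loss.  Only this
    monotone behaviour, not the exact form, is reproduced here."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # columns = classes
    diff = Wn[:, :, None] - Wn[:, None, :]
    dists = np.linalg.norm(diff, axis=0)                # pairwise distances
    np.fill_diagonal(dists, np.inf)                     # ignore self-pairs
    return float(np.exp(-t * dists.min(axis=1)).mean())

# Spreading the weight vectors apart lowers the loss:
close = np.array([[1.0, 0.98], [0.0, 0.199]])   # nearly parallel columns
apart = np.array([[1.0, 0.0], [0.0, 1.0]])      # orthogonal columns
assert edc_loss(apart) < edc_loss(close)
```

Minimizing this term pushes every class weight vector away from its nearest neighbour, which is exactly the separation effect the method relies on.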
s5, acquiring a total loss function: L = L_arcface + λ·L_edc, where λ is a hyperparameter;
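The total objective is a simple weighted sum of the two losses; a one-line sketch, with λ = 0.5 taken from the described embodiment:

```python
def total_loss(l_arcface, l_edc, lam=0.5):
    """Total loss L = L_arcface + lambda * L_edc (lambda = 0.5 in the
    described embodiment; adjustable during training)."""
    return l_arcface + lam * l_edc

print(total_loss(1.2, 0.4))  # 1.2 + 0.5 * 0.4 = 1.4
```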
in the training process, the convolutional neural network structure and the total loss function are cascaded, and the network parameters are optimized through continuous iteration, finally obtaining a deep-learning-based face recognition model; the face recognition model obtained in the training stage is used to extract the face features of each face photo;
s6, carrying out similarity calculation on the two face feature vectors, and adopting a cosine included angle mode;
let two face feature vectors be: f (F) 1 =[f 1,1 ,f 1,2 ,…,f 1,n-1 ,f 1,n ],F 2 =[f 2,1 ,f 2,2 ,…,f 2,n-1 ,f 2,n ]N is the dimension of the face feature;
the similarity of the two face feature vectors is:
cos(F_1, F_2) = Σ_{i=1}^{n} f_{1,i}·f_{2,i} / ( √(Σ_{i=1}^{n} f_{1,i}²) · √(Σ_{i=1}^{n} f_{2,i}²) )
the larger the similarity value is, the more similar the two face features are, and the greater the possibility of the same person is;
s7, a network training step; the network parameters are trained using mini-batch stochastic gradient descent (SGD).
As an embodiment of the present invention, the ArcFace weight vectors Ŵ_j are taken as the optimization objects of L_edc.
As one embodiment of the invention, λ is a hyperparameter, taken as 0.5; t is a hyperparameter, taken as 2. The specific values of the parameters may be adjusted during the training process.
As one embodiment of the present invention, L_edc is responsible for optimizing the weight vectors W in ArcFace so that they are as far apart as possible, while L_arcface optimizes the face-photo feature vectors of the same person to be as close as possible to the corresponding weight vectors.
In one embodiment of the present invention, in L_edc, the quantity min_{k≠j} ‖Ŵ_j − Ŵ_k‖ is, for each classification weight vector Ŵ_j in ArcFace, the shortest distance to the other classification weight vectors Ŵ_k. The larger this shortest distance, the smaller the loss function value (deep-learning training is the process of making the loss value smaller and smaller, generally called training convergence; if the value grows larger and larger, the training diverges); minimizing the loss therefore drives the distances between the weight vectors as large as possible. Since each weight vector corresponds to one person, as long as the weight vectors are pushed as far apart as possible, the face-photo feature vectors of different people are likewise pushed as far apart as possible; photo features of the same person can thus be kept as close as possible while photo features of different people are as far apart as possible.
The invention has the beneficial effect that the face recognition method provided by the invention can improve recognition accuracy: the distances between the face feature vectors of different people are made as far as possible, and the distances between the face feature vectors of the same person as close as possible. Training pushes the components of the weight matrix apart, and each weight vector corresponds to one person, so separating the weight vectors separates the face-photo feature vectors of different people. Meanwhile, for different photos of the same person, each photo's feature vector is continually pulled toward that person's weight vector during training, so photo features of the same person end up as close as possible. Finally, photo features of different people are as far apart as possible, and photo features of the same person are as close as possible.
Drawings
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the present invention, preferred embodiments of the invention are described below in conjunction with the examples, but it should be understood that these descriptions are merely intended to illustrate further features and advantages of the invention, and are not limiting of the claims of the invention.
The description of this section is intended to be illustrative of only a few exemplary embodiments and the invention is not to be limited in scope by the description of the embodiments. It is also within the scope of the description and claims of the invention to interchange some of the technical features of the embodiments with other technical features of the same or similar prior art.
The invention discloses a face recognition method, and fig. 1 is a flow chart of the face recognition method in an embodiment of the invention; referring to fig. 1, in an embodiment of the present invention, the face recognition method includes:
step S1, obtaining a face picture I;
s2, extracting face features X through a deep neural network;
s3, acquiring a first loss function;
L_arcface = -(1/N)·Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i}+m)} / ( e^{s·cos(θ_{y_i}+m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) )
cos θ_j = Ŵ_j^T x̂_i
x̂_i = x_i/‖x_i‖
Ŵ_j = W_j/‖W_j‖
wherein N is the number of photos in each batch; s and m are hyperparameters; W is the weight-vector matrix. In one embodiment of the invention, the ArcFace weight vectors Ŵ_j are taken as the optimization objects of L_edc.
S4, acquiring a second loss function;
the second loss function L_edc is built from the shortest distances between the normalized classification weight vectors: for each Ŵ_j, the quantity min_{k≠j} ‖Ŵ_j − Ŵ_k‖ enters the loss such that the larger this shortest distance, the smaller L_edc; t is a hyperparameter. In one embodiment, t is taken as 2.
S5, acquiring a total loss function: L = L_arcface + λ·L_edc, where λ is a hyperparameter. In one embodiment, λ is taken as 0.5.
In the training process, the convolutional neural network (CNN) structure (such as ResNet-34) and the total loss function are cascaded, and the network parameters are iteratively optimized with the stochastic gradient descent algorithm (SGD). A deep-learning-based face recognition model, i.e. the network parameters of ResNet-34, is finally obtained. The face features of each face photo are then extracted using the face recognition model obtained in the training stage.
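The iterate-and-descend training procedure can be illustrated with a toy mini-batch SGD loop. This is only a schematic: the linear model below stands in for the ResNet-34 backbone, and the squared error stands in for the total loss L = L_arcface + λ·L_edc; the data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noiseless linear problem the loop can solve exactly.
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

w = np.zeros(4)                             # "network parameters" to optimize
lr, batch = 0.1, 32
for epoch in range(50):
    order = rng.permutation(len(X))         # reshuffle each epoch
    for start in range(0, len(X), batch):   # one mini-batch per SGD step
        idx = order[start:start + batch]
        grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad                      # stochastic gradient descent update
print(np.round(w, 2))
```

After a few dozen epochs the parameters converge to the generating weights, mirroring how the cascaded network's parameters are driven to a minimum of the total loss by repeated mini-batch updates.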
In one embodiment of the invention, L_edc is responsible for optimizing the weight vectors W in ArcFace so that they are as far apart as possible, while L_arcface optimizes the face-photo feature vectors of the same person to be as close as possible to the corresponding weight vectors.
In one embodiment, in L_edc, the quantity min_{k≠j} ‖Ŵ_j − Ŵ_k‖ is, for each classification weight vector Ŵ_j in ArcFace, the shortest distance to the other classification weight vectors Ŵ_k; the smaller this shortest distance, the larger the loss function value, so minimizing the loss pushes the weight vectors as far apart as possible. Since each weight vector corresponds to one person, as long as the weight vectors are separated as far as possible, the face-photo feature vectors of different people are separated as far as possible; photo features of the same person can thus be kept as close as possible while photo features of different people are as far apart as possible.
S6, carrying out similarity calculation on the two face feature vectors, and adopting a cosine included angle mode;
let two face feature vectors be: f (F) 1 =[f 1,1 ,f 1,2 ,…,f 1,n-1 ,f 1,n ],F 2 =[f 2,1 ,f 2,2 ,…,f 2,n-1 ,f 2,n ]N is the dimension of the face feature;
the similarity of the two face feature vectors is:
cos(F_1, F_2) = Σ_{i=1}^{n} f_{1,i}·f_{2,i} / ( √(Σ_{i=1}^{n} f_{1,i}²) · √(Σ_{i=1}^{n} f_{2,i}²) )
the larger the similarity value is, the more similar the two face features are, and the greater the possibility of the same person is;
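The cosine-angle similarity of step S6 is a standard computation; a minimal sketch:

```python
import numpy as np

def cosine_similarity(f1, f2):
    """Cosine of the angle between two face feature vectors.  Values near
    1 indicate the two photos are likely of the same person."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel: ~1
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # orthogonal: ~0
```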
s7, a network training step; for training network parameters, a random gradient descent method SGD with small batches of Mini-batch is used for training. The method is a common convolutional neural network CNN parameter optimization method; convolutional neural network CNN employs mainstream Resnet50, resnet34, resnet100.
In summary, the face recognition method provided by the invention can improve recognition accuracy: the distances between the face feature vectors of different people are made as far as possible, and the distances between the face feature vectors of the same person as close as possible. Training pushes the components of the weight matrix apart, and each weight vector corresponds to one person, so separating the weight vectors separates the face-photo feature vectors of different people; meanwhile, for different photos of the same person, each photo's feature vector is continually pulled toward that person's weight vector during training, so photo features of the same person end up as close as possible. Finally, photo features of different people are as far apart as possible, and photo features of the same person are as close as possible.
The technical features of the above-described embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this description.
The description and applications of the present invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternatives and equivalents of the various components of the embodiments are known to those of ordinary skill in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other assemblies, materials, and components, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (5)

1. A face recognition method, characterized in that the face recognition method comprises:
s1, acquiring a face picture;
s2, extracting face features through a deep neural network;
s3, acquiring a first loss function;
L_arcface = -(1/N)·Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i}+m)} / ( e^{s·cos(θ_{y_i}+m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) )
cos θ_j = Ŵ_j^T x̂_i
x̂_i = x_i/‖x_i‖
Ŵ_j = W_j/‖W_j‖
wherein N is the number of photos in each batch; s is a hyperparameter; m is an angular margin and is a hyperparameter; W is the weight-vector matrix;
s4, acquiring a second loss function;
a second loss function L_edc built, with hyperparameter t, from each normalized classification weight vector's shortest distance to the other classification weight vectors;
in L_edc, the quantity min_{k≠j} ‖Ŵ_j − Ŵ_k‖ is, for each classification weight vector Ŵ_j in ArcFace, the shortest distance to the other classification weight vectors Ŵ_k; the larger this shortest distance, the smaller the loss function value, so that the distances between the weight vectors are as far as possible; because each weight vector corresponds to one person, as long as the weight vectors are separated as far as possible, the face-photo feature vectors of different people are separated as far as possible, so that photo features of the same person can be kept as close as possible and photo features of different people are as far apart as possible;
s5, acquiring a total loss function: L = L_arcface + λ·L_edc, where λ is a hyperparameter;
in the training process, the convolutional neural network structure and the total loss function are cascaded, and the network parameters are optimized through continuous iteration, finally obtaining a deep-learning-based face recognition model; the face recognition model obtained in the training stage is used to extract the face features of each face photo;
s6, carrying out similarity calculation on the two face feature vectors, and adopting a cosine included angle mode;
let two face feature vectors be: f (F) 1 =[f 1,1 ,f 1,2 ,…,f 1,n-1 ,f 1,n ],F 2 =[f 2,1 ,f 2,2 ,…,f 2,n-1 ,f 2,n ];
The similarity of the two face feature vectors is:
cos(F_1, F_2) = Σ_{i=1}^{n} f_{1,i}·f_{2,i} / ( √(Σ_{i=1}^{n} f_{1,i}²) · √(Σ_{i=1}^{n} f_{2,i}²) )
the larger the similarity value is, the more similar the two face features are, and the greater the possibility of the same person is;
s7, a network training step; the network parameters are trained using mini-batch stochastic gradient descent (SGD).
2. The face recognition method according to claim 1, wherein:
the ArcFace weight vectors Ŵ_j are taken as the optimization objects of L_edc.
3. The face recognition method according to claim 1, wherein:
λ is taken as 0.5; t is a hyperparameter, taken as 2.
4. The face recognition method according to claim 1, wherein:
s is set to 30 and m is set to 0.1.
5. The face recognition method according to claim 1, wherein:
L_edc is responsible for optimizing the weight vectors W in ArcFace so that they are as far apart as possible, while L_arcface optimizes the face-photo feature vectors of the same person to be as close as possible to the corresponding weight vectors.
CN201911422660.8A 2019-12-31 2019-12-31 Face recognition method Active CN111209839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422660.8A CN111209839B (en) 2019-12-31 2019-12-31 Face recognition method

Publications (2)

Publication Number Publication Date
CN111209839A CN111209839A (en) 2020-05-29
CN111209839B true CN111209839B (en) 2023-05-23

Family

ID=70789467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422660.8A Active CN111209839B (en) 2019-12-31 2019-12-31 Face recognition method

Country Status (1)

Country Link
CN (1) CN111209839B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113069080B (en) * 2021-03-22 2021-12-21 上海交通大学医学院附属第九人民医院 Difficult airway assessment method and device based on artificial intelligence
CN112949618A (en) * 2021-05-17 2021-06-11 成都市威虎科技有限公司 Face feature code conversion method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215240A1 (en) * 2016-06-14 2017-12-21 广州视源电子科技股份有限公司 Neural network-based method and device for face feature extraction and modeling, and face recognition
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN108647583A (en) * 2018-04-19 2018-10-12 浙江大承机器人科技有限公司 A kind of face recognition algorithms training method based on multiple target study
CN109033938A (en) * 2018-06-01 2018-12-18 上海阅面网络科技有限公司 A kind of face identification method based on ga s safety degree Fusion Features
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
WO2019128367A1 (en) * 2017-12-26 2019-07-04 广州广电运通金融电子股份有限公司 Face verification method and apparatus based on triplet loss, and computer device and storage medium
WO2019192121A1 (en) * 2018-04-04 2019-10-10 平安科技(深圳)有限公司 Dual-channel neural network model training and human face comparison method, and terminal and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘施乐 (Liu Shile). Research on Face Recognition Technology Based on Deep Learning. 电子制作 (Electronic Production). 2018, (24), full text. *

Also Published As

Publication number Publication date
CN111209839A (en) 2020-05-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant