CN113343898A - Mask shielding face recognition method, device and equipment based on knowledge distillation network - Google Patents

Mask shielding face recognition method, device and equipment based on knowledge distillation network

Info

Publication number
CN113343898A
Authority
CN
China
Prior art keywords
network
mask
face recognition
loss function
student
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110713814.XA
Other languages
Chinese (zh)
Other versions
CN113343898B (en)
Inventor
苟建平
熊祥硕
欧卫华
夏书银
柯佳
陈潇君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202110713814.XA
Publication of CN113343898A
Application granted
Publication of CN113343898B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G06N5/022: Knowledge engineering; Knowledge acquisition
    • G06N5/025: Extracting rules from data

Abstract

The invention relates to the technical field of face recognition, and discloses a mask-occluded face recognition method, device and equipment based on a knowledge distillation network, wherein the method comprises the following steps: constructing a training set and a test set based on mask-occluded face images, wherein the training set also comprises corresponding real label data; constructing a mask-occluded face recognition network based on the training set; and inputting the test set into the mask-occluded face recognition network for recognition. The invention can effectively compress the student network while further improving its performance on mask-occluded face recognition.

Description

Mask shielding face recognition method, device and equipment based on knowledge distillation network
Technical Field
The invention relates to the technical field of face recognition, and in particular to a mask-occluded face recognition method, device and equipment based on a knowledge distillation network.
Background
In recent years, deep learning has developed rapidly, has shown particular strength in the field of computer vision, and has been widely applied. Deep learning is a branch of machine learning that aims to build neural networks simulating the human brain for analysis and learning, continuously imitating how the human brain processes and represents complex data such as sound and images. Although deep learning networks have achieved excellent results in many areas, they still have drawbacks and deficiencies in practical industrial applications. From AlexNet, proposed in 2012, to DenseNet, proposed in 2016, these networks achieve better performance only by becoming wider, deeper and structurally more complex. A more complex neural network structure means longer inference time, which is unsuitable for industrial environments: networks such as AlexNet and DenseNet improve accuracy while ignoring the real-time responsiveness required in industry. To maintain high accuracy while reducing network complexity, the research direction of neural network compression has emerged. Neural network compression is currently a popular direction in deep learning research, with main branches including distillation, network architecture search and quantization. Among these, Knowledge Distillation (KD) is a particularly important technique in the field of model compression.
The main objective of knowledge distillation is to train a small network model (a model with few parameters) to mimic a large or ensemble network (a model with many parameters) that has been trained in advance. This training mode is also known as "Teacher-Student": the large network is the "teacher" and the small network is the "student". In effect, the student model learns the output of a teacher model that has already been trained on the target dataset. The trained student model is compressed while its accuracy improves. Such a small network can easily be deployed on small terminal devices and delivers good performance under limited computing resources.
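As a minimal, illustrative sketch of a single teacher-student distillation step (not the specific multi-source method claimed below; the temperature T, the weight alpha, and the T-squared gradient rescaling are conventional assumptions from the standard KD formulation):

```python
import torch
import torch.nn.functional as F

def distill_step(student: torch.nn.Module, teacher: torch.nn.Module,
                 x: torch.Tensor, y: torch.Tensor,
                 T: float = 4.0, alpha: float = 0.9) -> torch.Tensor:
    """One training step in which the student mimics the teacher's softened outputs."""
    with torch.no_grad():                 # the teacher is frozen
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Hard-label cross-entropy against the true targets.
    ce = F.cross_entropy(student_logits, y)
    # KL divergence between temperature-softened distributions;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    return (1 - alpha) * ce + alpha * kd
```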
Since the outbreak of the COVID-19 pandemic, everyone has had to wear a mask in daily activities, which poses challenges for traditional face recognition. For example, face-based mobile payment, face recognition at train station gates, and face unlocking on mobile phones all fail once a mask is worn, which also hinders epidemic control. It is therefore necessary to develop mask-occluded face recognition technology. This application accordingly proposes mask-occluded face recognition research based on a knowledge distillation network, which meets accuracy requirements while increasing recognition speed; the knowledge distillation technique further improves the recognition performance of the model.
Disclosure of Invention
Based on the above technical problems, the invention provides a mask-occluded face recognition method, device and equipment based on a knowledge distillation network, which enable a lightweight face recognition network to effectively extract the features of a mask-occluded face for recognition and to achieve high performance on low-compute devices, specifically through the following technical schemes:
a mask shielding face recognition method based on a knowledge distillation network comprises the following steps:
constructing a training set and a testing set based on the facial image shielded by the mask, wherein the training set also comprises corresponding real label data;
constructing a mask shielding face recognition network based on the training set;
and inputting the test set into the mask-occluded face recognition network for recognition.
A knowledge distillation apparatus based on multiple knowledge transfers, comprising:
the data acquisition module is used for constructing a training set and a test set based on the mask shielding face image, and the training set also comprises corresponding real label data;
the model construction module is used for constructing a mask shielding face recognition network based on the training set;
and the face recognition module is used for inputting the test set into the mask-occluded face recognition network for recognition.
A computer device comprises a memory and a processor, wherein a computer program is stored in the memory, and the processor executes the computer program to realize the steps of the mask blocking face recognition method based on the knowledge distillation network.
A computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the above-mentioned mask occlusion face recognition method based on the knowledge distillation network.
Compared with the prior art, the invention has the beneficial effects that:
according to the mask-shielding face recognition method based on the knowledge distillation network, the student network can learn not only the soft label output of a large teacher network but also the soft label output of the student network with the same network structure as the student network, and the singleness of teacher student knowledge distillation and self-learning knowledge distillation is overcome, so that the student network model can be effectively compressed, and the performance of the student network can be further improved and even surpassed that of the large teacher network. By the method, the face features shielded by the mask can be effectively extracted and identified by the lightweight network, and high performance can be achieved on low-calculation-force equipment, which is undoubtedly a major breakthrough in the face identification technology.
Drawings
The present application is further explained below through exemplary embodiments, which are described in detail with reference to the accompanying drawings:
fig. 1 is a schematic flow chart of a mask-shielded face recognition method based on a knowledge distillation network.
Fig. 2 is a schematic flow chart of a mask occlusion face recognition network constructed based on a training set.
Fig. 3 is a schematic flow chart of determining a distillation loss function based on the first soft label data, the second soft label data and the real label data.
Fig. 4 is a schematic diagram of a pre-training process.
Fig. 5 is a schematic diagram of the identification accuracy of the teacher network, the first student network and the second student network.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
The application aims to provide a mask-occluded face recognition method, device and equipment based on a knowledge distillation network, wherein the method comprises the following steps: constructing a training set and a test set based on mask-occluded face images, wherein the training set also comprises corresponding real label data; constructing a mask-occluded face recognition network based on the training set; and inputting the test set into the mask-occluded face recognition network for recognition.
The embodiment of the application can be used in the following application scenarios, specifically:
for illustrative purposes only, the embodiment of the application can be used in a mobile phone terminal face photographing application scenario. The technical problems to be solved by the application scenario are as follows: when a user uses a mobile phone to take a picture, the user needs to automatically grab a human face target so as to help the mobile phone to automatically focus, beautify and the like, so that a convolutional neural network model for target detection, which is small in size and fast in operation, is needed, and better user experience is brought to the user and the quality of a mobile phone product is improved.
For illustrative purposes only, the embodiments of the present application may also be used in entrance-gate face verification scenarios. The technical problem to be solved in this scenario is as follows: when passengers perform face authentication at the gates of high-speed rail stations, airports and the like, a camera captures a face image, a convolutional neural network extracts its features, and the similarity between these features and the identity-document image features stored in the system is computed; if the similarity is high, verification succeeds. Extracting features through the convolutional neural network is the most time-consuming step, so an efficient convolutional neural network model capable of fast face verification and feature extraction is required.
For illustrative purposes only, the embodiment of the application can also be used in face verification on attendance devices. The technical problem to be solved in this scenario is as follows: when staff perform face authentication at attendance machines at the entrances of companies, institutions and the like, a camera captures a face image, a convolutional neural network extracts its features, and the similarity between these features and the staff identity features stored in the system is computed; if the similarity is high, verification succeeds. Here too, feature extraction through the convolutional neural network is the most time-consuming step, so an efficient model capable of fast face verification and feature extraction is required; a sketch of the similarity check follows.
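Both gate and attendance verification reduce to comparing a freshly extracted embedding against a stored one. The following is a minimal sketch of such a check in PyTorch; the cosine metric, the 512-dimensional embedding size and the 0.6 threshold are illustrative assumptions, not values given in this application:

```python
import torch
import torch.nn.functional as F

def verify(live_feature: torch.Tensor, stored_feature: torch.Tensor,
           threshold: float = 0.6) -> bool:
    """Return True if two face embeddings are similar enough to pass verification.

    The threshold is a hypothetical value; real systems tune it on a
    validation set to trade off false accepts against false rejects.
    """
    sim = F.cosine_similarity(live_feature, stored_feature, dim=0)
    return sim.item() >= threshold

# Example with random 512-d embeddings standing in for CNN features:
live = F.normalize(torch.randn(512), dim=0)
stored = F.normalize(torch.randn(512), dim=0)
print(verify(live, stored))
```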
Referring to fig. 1, in the present embodiment, a method for recognizing a face covered by a mask based on a knowledge distillation network includes:
S101, constructing a training set and a testing set based on the face image shielded by the mask, wherein the training set also comprises corresponding real label data;
the mask-shielded face images in the training set and the test set can be acquired through image acquisition equipment or can be acquired from the existing data set; specifically, the acquisition of the face image shielded by the mask can select an applicable image acquisition device, such as a camera or a video camera, according to a specific scene;
Let X = {x_1, x_2, x_3, …, x_n} denote the training set;
Let Y = {y_1, y_2, y_3, …, y_m} denote the real label data corresponding to the training set;
Preferably, the acquired mask-occluded face images can be preprocessed when forming the training set and the test set;
Specifically, preprocessing of the mask-occluded face images mainly comprises noise elimination, gray-level normalization, geometric correction, compression, cropping and the like, as sketched below;
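Purely as an illustration, such a pipeline might be assembled with torchvision; the individual steps mirror the list above, but the kernel size, image sizes and normalization statistics are assumptions, since they are not fixed in this application:

```python
from torchvision import transforms

# Hypothetical preprocessing for mask-occluded face images (PIL input).
preprocess = transforms.Compose([
    transforms.GaussianBlur(kernel_size=3),   # simple noise suppression
    transforms.RandomEqualize(p=1.0),         # gray-level (histogram) normalization
    transforms.Resize(128),                   # compress to a manageable size
    transforms.CenterCrop(112),               # crop toward the face region
    transforms.ToTensor(),                    # [0, 255] -> [0.0, 1.0]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```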
S102, constructing a mask shielding face recognition network based on the training set;
s103, the test set is input into the face recognition network covered by the inlet cover for recognition.
Referring to fig. 2, in some embodiments, constructing a mask occlusion face recognition network based on a training set comprises:
s201, a teacher network, a first student network and a second student network are constructed, and the first student network and the second student network have the same structure;
Here, the first student network and the second student network have the same structure, and both are lightweight networks.
S202, pre-training a teacher network and a second student network;
S203, inputting the training set into the teacher network and the second student network respectively to obtain first soft label data output by the teacher network and second soft label data output by the second student network;
the soft label data comprises probability data obtained after the teacher network processes the training set and probability data obtained after the second student network processes the training set, and in essence, the teacher network and the second student network respectively obtain prediction data according to the training set, and the property of the prediction data is close to that of the real label data, but the prediction data is different from that of the real label data, so that the prediction data is called as the soft label data.
S204, determining a distillation loss function based on the real label data, the first soft label data and the second soft label data;
S205, carrying out iterative training on the first student network based on the distillation loss function to obtain the mask-occluded face recognition network.
The distillation loss function is used to update and optimize the parameters of the first student network. During training, the parameters are updated in each iteration by minimizing the distillation loss function (or otherwise adjusting its value), and through repeated iterative training the parameter values of the first student network gradually converge; this training process is a supervised learning process.
In this embodiment, the student network learns not only the soft-label output of the large teacher network but also the soft-label output of a student network with the same structure as itself, overcoming the one-sidedness of both teacher-student knowledge distillation and self-learning knowledge distillation. The student network model can therefore be effectively compressed while its performance is further improved, even beyond that of the large teacher network. In this way, a lightweight network can effectively extract and recognize the features of a mask-occluded face and achieve high performance on low-compute devices.
Referring to fig. 3, in some embodiments, determining the distillation loss function based on the first soft label data, the second soft label data and the real label data comprises:
S301, determining a first loss function based on the real label data;
specifically, the first loss function is:
L_{CE} = -\sum_{i=1}^{m} y_i \log \sigma_i(Z, 1)

where L_{CE} denotes the first loss function, X = {x_1, x_2, x_3, …, x_n} denotes the training set, σ_i(Z, T) denotes the i-th Softmax output of the first student network under temperature parameter T, i.e. σ_i(Z, T) = exp(z_i / T) / Σ_j exp(z_j / T), Z = {z_1, z_2, z_3, …, z_m} denotes the logits output of the first student network, and T denotes the temperature parameter.
In particular, σ_i(Z, 1) is the Softmax output with temperature parameter T = 1.
S302, determining a second loss function based on the first soft label data;
specifically, the second loss function is:
L_{KL_1} = \sum_{i=1}^{m} \sigma_i(Z_t, T_1) \log \frac{\sigma_i(Z_t, T_1)}{\sigma_i(Z, T_1)}

where L_{KL_1} denotes the second loss function (the KL divergence between the teacher's softened outputs and the first student's), σ_i(Z_t, T_1) denotes the Softmax output of the teacher network under temperature parameter T_1, and Z_t denotes the logits output of the teacher network.
S303, determining a third loss function based on the second soft label data;
specifically, the third loss function is:
L_{KL_2} = \sum_{i=1}^{m} \sigma_i(Z_s, T_2) \log \frac{\sigma_i(Z_s, T_2)}{\sigma_i(Z, T_2)}

where L_{KL_2} denotes the third loss function, σ_i(Z_s, T_2) denotes the Softmax output of the second student network under temperature parameter T_2, and Z_s denotes the logits output of the second student network.
And S304, weighting and summing the first loss function, the second loss function and the third loss function to obtain a distillation loss function.
Specifically, the distillation loss function is:
L_{KD} = (1 - a) L_{CE} + (1 - a) L_{KL_1} + a L_{KL_2}

where L_{KD} denotes the distillation loss function and a denotes a weight coefficient.
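Pulling the four formulas together, a minimal PyTorch sketch of the combined loss might look as follows. The application does not specify implementation details, so the function names, the use of F.kl_div, and the hyperparameter values T1, T2 and a are assumptions (many KD implementations additionally scale the KL terms by the squared temperature, which is not shown in the formulas above):

```python
import torch
import torch.nn.functional as F

def softened(logits: torch.Tensor, T: float) -> torch.Tensor:
    """sigma(Z, T): Softmax over the logits at temperature T."""
    return F.softmax(logits / T, dim=1)

def distillation_loss(z: torch.Tensor, z_t: torch.Tensor, z_s: torch.Tensor,
                      y: torch.Tensor, T1: float = 4.0, T2: float = 4.0,
                      a: float = 0.5) -> torch.Tensor:
    """L_KD = (1-a)*L_CE + (1-a)*L_KL1 + a*L_KL2.

    z   : logits of the first student network
    z_t : logits of the teacher network (source of the first soft labels)
    z_s : logits of the second student network (source of the second soft labels)
    y   : real (hard) labels
    """
    # First loss: cross-entropy at temperature T = 1.
    l_ce = F.cross_entropy(z, y)
    # Second loss: KL(teacher soft labels || first student) at temperature T1.
    # F.kl_div(log_q, p) computes KL(p || q), so the student goes in log space.
    l_kl1 = F.kl_div(F.log_softmax(z / T1, dim=1), softened(z_t, T1),
                     reduction="batchmean")
    # Third loss: KL(second-student soft labels || first student) at temperature T2.
    l_kl2 = F.kl_div(F.log_softmax(z / T2, dim=1), softened(z_s, T2),
                     reduction="batchmean")
    return (1 - a) * l_ce + (1 - a) * l_kl1 + a * l_kl2
```

In step S205 this loss would be minimized with respect to the first student network's parameters only, with the pre-trained teacher and second student networks held fixed.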
Referring to fig. 4, in some embodiments, the pre-training comprises:
S401, acquiring a network to be trained, wherein the network to be trained is the teacher network or the second student network;
S402, inputting the training set into the network to be trained to obtain the output result of the network to be trained;
S403, determining a cross entropy loss function based on the output result of the network to be trained and the real label data;
S404, performing iterative training on the network to be trained based on the cross entropy loss function.
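A minimal supervised pre-training loop consistent with S401 to S404 might look as follows; the optimizer choice, learning rate and epoch count are assumptions:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def pretrain(network: torch.nn.Module, loader: DataLoader,
             epochs: int = 10, lr: float = 0.01) -> None:
    """Pre-train the teacher or the second student network with plain
    cross-entropy against the real labels (S401-S404)."""
    optimizer = torch.optim.SGD(network.parameters(), lr=lr, momentum=0.9)
    network.train()
    for _ in range(epochs):
        for x, y in loader:                    # S402: feed the training set
            logits = network(x)                # output of the network to be trained
            loss = F.cross_entropy(logits, y)  # S403: cross-entropy loss
            optimizer.zero_grad()
            loss.backward()                    # S404: iterative training
            optimizer.step()
```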
The knowledge distillation method based on multiple knowledge transfers of the present invention is further described below with reference to experimental data:
the training set adopts a WebFace simulation mask face data set, and the testing set adopts an LFW simulation mask face data set and an LFW face data set. After the teacher network, the first student network and the second student network are trained, the test sets are respectively input into the teacher network, the first student network and the second student network, and the identification accuracy of each network shown in fig. 5 is obtained.
Here, the accuracy of the pre-trained large teacher network ResNet101 on the LFW simulated-mask face dataset and the LFW face dataset is 82.61% and 95.65%, respectively;
the accuracy of the pre-trained second student network ResNet50 on the LFW simulated-mask face dataset and the LFW face dataset is 82.03% and 95.53%, respectively;
and the accuracy of the finally trained lightweight first student network (i.e. the resulting mask-occluded face recognition network), also a ResNet50, on the LFW simulated-mask face dataset and the LFW face dataset is 82.91% and 96.95%, respectively.
Therefore, after training with the knowledge distillation method of the present application, the first student network improves on the pre-trained second student model by 0.88% and 1.42% on the LFW simulated-mask face dataset and the LFW face dataset respectively, and improves on the pre-trained teacher model by 0.30% and 1.30%, which verifies the effectiveness of the knowledge distillation method of the present application.
As can be seen from fig. 5, the mask-occluded face recognition network obtained by training the first student network of the present application even outperforms the pre-trained teacher network.
A knowledge distillation apparatus based on multiple knowledge transfers, comprising:
the data acquisition module is used for constructing a training set and a test set based on the mask shielding face image, and the training set also comprises corresponding real label data;
the model construction module is used for constructing a mask shielding face recognition network based on the training set;
and the face recognition module is used for inputting the test set into the mask-occluded face recognition network for recognition.
In some embodiments, the present application further discloses a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above mask-occluded face recognition method based on the knowledge distillation network when executing the computer program.
The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device. The computer device can interact with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice control device or the like.
The memory includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk and the like. In some embodiments, the memory may be an internal storage unit of the computer device, such as its hard disk or internal memory. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the computer device. Of course, the memory may also include both internal and external storage devices of the computer device. In this embodiment, the memory is commonly used to store the operating system and the various application software installed on the computer device, such as the program code of the mask-occluded face recognition method based on the knowledge distillation network. In addition, the memory may also be used to temporarily store various types of data that have been output or are to be output.
The processor may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor is typically used to control the overall operation of the computer device. In this embodiment, the processor is configured to run a program code stored in the memory or process data, for example, run a program code of the mask occlusion face recognition method based on the knowledge distillation network.
In some embodiments, the present application further discloses a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of the mask-occlusion face recognition method based on the knowledge distillation network.
The computer readable storage medium stores a program executable by at least one processor to cause the at least one processor to perform the steps of the mask-occluded face recognition method based on the knowledge distillation network as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
The above is an embodiment of the present invention. The embodiments and specific parameters in the embodiments are only used for clearly illustrating the verification process of the invention and are not used for limiting the patent protection scope of the invention, which is defined by the claims, and all the equivalent structural changes made by using the contents of the description and the drawings of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The mask shielding face recognition method based on the knowledge distillation network is characterized by comprising the following steps:
constructing a training set and a testing set based on the facial image shielded by the mask, wherein the training set also comprises corresponding real label data;
constructing a mask shielding face recognition network based on the training set;
and inputting the test set into the mask-shielded face recognition network for recognition.
2. The knowledge distillation network-based mask-obscured face recognition method according to claim 1, wherein constructing a mask-obscured face recognition network based on the training set comprises:
constructing a teacher network, a first student network and a second student network, wherein the first student network and the second student network have the same structure;
pre-training the teacher network and the second student network;
inputting the training set into the teacher network and the second student network respectively to obtain first soft label data output by the teacher network and second soft label data output by the second student network;
determining a distillation loss function based on the real label data, the first soft label data, and the second soft label data;
and performing iterative training on the first student network based on the distillation loss function to obtain the mask shielding face recognition network.
3. The knowledge distillation network based mask blocking face recognition method according to claim 2, wherein determining a distillation loss function based on the first soft label data, the second soft label data and the real label data comprises:
determining a first loss function based on the real label data;
determining a second loss function based on the first soft label data;
determining a third loss function based on the second soft label data;
and weighting and summing the first loss function, the second loss function and the third loss function to obtain a distillation loss function.
4. The mask-occluded face recognition method based on the knowledge distillation network of claim 3, wherein the first loss function is:
L_{CE} = -\sum_{i=1}^{m} y_i \log \sigma_i(Z, 1)

wherein L_{CE} denotes the first loss function, X = {x_1, x_2, x_3, …, x_n} denotes the training set, σ_i(Z, T) denotes the Softmax output of the first student network under temperature parameter T, Z = {z_1, z_2, z_3, …, z_m} denotes the logits output of the first student network, and T denotes the temperature parameter.
5. The mask-occluded face recognition method based on the knowledge distillation network of claim 3, wherein the second loss function is:
L_{KL_1} = \sum_{i=1}^{m} \sigma_i(Z_t, T_1) \log \frac{\sigma_i(Z_t, T_1)}{\sigma_i(Z, T_1)}

wherein L_{KL_1} denotes the second loss function, σ_i(Z_t, T_1) denotes the Softmax output of the teacher network under temperature parameter T_1, and Z_t denotes the logits output of the teacher network.
6. The mask-occluded face recognition method based on the knowledge distillation network of claim 3, wherein the third loss function is:
L_{KL_2} = \sum_{i=1}^{m} \sigma_i(Z_s, T_2) \log \frac{\sigma_i(Z_s, T_2)}{\sigma_i(Z, T_2)}

wherein σ_i(Z_s, T_2) denotes the Softmax output of the second student network under temperature parameter T_2, and Z_s denotes the logits output of the second student network.
7. The knowledge distillation network based mask blocking face recognition method according to claim 1, wherein the pre-training comprises:
acquiring a network to be trained, wherein the network to be trained is a teacher network or a second student network;
inputting the training set into the network to be trained to obtain an output result of the network to be trained;
determining a cross entropy loss function based on the output result of the network to be trained and the real label data;
and performing iterative training on the network to be trained based on the cross entropy loss function.
8. A knowledge distillation apparatus based on multiple knowledge transfers, comprising:
the system comprises a data acquisition module, a data analysis module and a data analysis module, wherein the data acquisition module is used for constructing a training set and a test set based on a face image shielded by a mask, and the training set also comprises corresponding real label data;
a model construction module for constructing a mask occlusion face recognition network based on the training set;
and the face recognition module is used for inputting the test set into the mask-shielded face recognition network for recognition.
9. A computer device, characterized by comprising a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the steps of the mask-blocking face recognition method based on the knowledge distillation network according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the knowledge distillation network based mask occlusion face recognition method according to any one of claims 1 to 7.
CN202110713814.XA 2021-06-25 2021-06-25 Mask shielding face recognition method, device and equipment based on knowledge distillation network Active CN113343898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110713814.XA CN113343898B (en) 2021-06-25 2021-06-25 Mask shielding face recognition method, device and equipment based on knowledge distillation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110713814.XA CN113343898B (en) 2021-06-25 2021-06-25 Mask shielding face recognition method, device and equipment based on knowledge distillation network

Publications (2)

Publication Number Publication Date
CN113343898A true CN113343898A (en) 2021-09-03
CN113343898B CN113343898B (en) 2022-02-11

Family

ID=77478879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110713814.XA Active CN113343898B (en) 2021-06-25 2021-06-25 Mask shielding face recognition method, device and equipment based on knowledge distillation network

Country Status (1)

Country Link
CN (1) CN113343898B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674880A (en) * 2019-09-27 2020-01-10 北京迈格威科技有限公司 Network training method, device, medium and electronic equipment for knowledge distillation
CN111027060A (en) * 2019-12-17 2020-04-17 电子科技大学 Knowledge distillation-based neural network black box attack type defense method
CN111460962A (en) * 2020-03-27 2020-07-28 武汉大学 Mask face recognition method and system
CN111680600A (en) * 2020-05-29 2020-09-18 北京百度网讯科技有限公司 Face recognition model processing method, device, equipment and storage medium
US20200302295A1 (en) * 2019-03-22 2020-09-24 Royal Bank Of Canada System and method for knowledge distillation between neural networks
CN111783606A (en) * 2020-06-24 2020-10-16 北京百度网讯科技有限公司 Training method, device, equipment and storage medium of face recognition network
CN112116030A (en) * 2020-10-13 2020-12-22 浙江大学 Image classification method based on vector standardization and knowledge distillation
CN112115783A (en) * 2020-08-12 2020-12-22 中国科学院大学 Human face characteristic point detection method, device and equipment based on deep knowledge migration
KR102232138B1 (en) * 2020-11-17 2021-03-25 (주)에이아이매틱스 Neural architecture search method based on knowledge distillation
CN112712052A (en) * 2021-01-13 2021-04-27 安徽水天信息科技有限公司 Method for detecting and identifying weak target in airport panoramic video
CN112766463A (en) * 2021-01-25 2021-05-07 上海有个机器人有限公司 Method for optimizing neural network model based on knowledge distillation technology

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GEOFFREY HINTON et al.: "Distilling the Knowledge in a Neural Network", arXiv
JIANPING GOU et al.: "Knowledge Distillation: A Survey", arXiv
MARIANA-IULIANA GEORGESCU et al.: "Teacher-Student Training and Triplet Loss for Facial Expression Recognition under Occlusion", arXiv
JIANG Huiming: "Face Restoration and Expression Recognition Based on Generative Adversarial Networks and Knowledge Distillation", China Master's Theses Full-text Database, Information Science and Technology
XU Wenyuan et al.: "An Edge Intelligence Optimization Technique for IPv6 IoT Gateways", Modern Computer
ZHAO Zhenbing et al.: "Bolt Defect Image Classification for Transmission Lines Based on Dynamically Supervised Knowledge Distillation", High Voltage Engineering

Also Published As

Publication number Publication date
CN113343898B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
US20220058426A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
US20160350611A1 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
WO2020238353A1 (en) Data processing method and apparatus, storage medium, and electronic apparatus
CN113344206A (en) Knowledge distillation method, device and equipment integrating channel and relation feature learning
JP2022141931A (en) Method and device for training living body detection model, method and apparatus for living body detection, electronic apparatus, storage medium, and computer program
WO2021223738A1 (en) Method, apparatus and device for updating model parameter, and storage medium
US20230095182A1 (en) Method and apparatus for extracting biological features, device, medium, and program product
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN112699297A (en) Service recommendation method, device and equipment based on user portrait and storage medium
CN114742224A (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN110399712A (en) Validation-cross method, apparatus, medium and calculating equipment based on identifying code
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN111310732A (en) High-precision face authentication method, system, computer equipment and storage medium
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
CN111062019A (en) User attack detection method and device and electronic equipment
CN110909578A (en) Low-resolution image recognition method and device and storage medium
CN113343898B (en) Mask shielding face recognition method, device and equipment based on knowledge distillation network
CN115565186A (en) Method and device for training character recognition model, electronic equipment and storage medium
WO2022142032A1 (en) Handwritten signature verification method and apparatus, computer device, and storage medium
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium
CN113362249A (en) Text image synthesis method and device, computer equipment and storage medium
CN117079336B (en) Training method, device, equipment and storage medium for sample classification model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant