CN111291670B - Small target facial expression recognition method based on attention mechanism and network integration - Google Patents
Small target facial expression recognition method based on attention mechanism and network integration
- Publication number
- CN111291670B CN111291670B CN202010076302.2A CN202010076302A CN111291670B CN 111291670 B CN111291670 B CN 111291670B CN 202010076302 A CN202010076302 A CN 202010076302A CN 111291670 B CN111291670 B CN 111291670B
- Authority
- CN
- China
- Prior art keywords
- layer
- network
- convolution
- attention
- full
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to a small-target facial expression recognition method based on an attention mechanism and network integration, which comprises the following steps: for the facial expression data set, data enhancement means including rotation, flipping and noise addition are adopted to improve the generalization ability and training result of the whole recognition network; a network structure with two branches is constructed, namely a convolution-plus-fully-connected network with attention and a reduced resnet network; the output values of the two networks are multiplied by corresponding weight parameters and summed, a shared bias weight is added, and softmax is applied to the result to obtain the classification probability values; cross entropy is computed on the softmax output to obtain the loss value during training, the top 70% of the loss values are selected and backpropagated to update the weight parameters, and the Adam gradient update method is adopted as the update strategy.
Description
Technical Field
The invention belongs to the field of classification and recognition, and relates to a low-resolution, small-target facial expression recognition method based on an attention mechanism and network integration (ensemble).
Background
The psychologists Ekman and Friesen proposed in their 1971 research that humans have six major emotions, called basic emotions, namely: anger, happiness (happy), sadness (sad), surprise, disgust and fear, each of which expresses a specific psychological activity of a person through a unique expression. On this basis, a neutral emotion (normal) is additionally added as a seventh emotion for the subsequent classification.
In recent years, as deep learning has made great progress in image classification and object detection tasks, deep learning methods have been introduced to the facial expression recognition task; in particular, extracting features with a convolutional neural network can address problems such as multiple poses, occlusion and uneven illumination. H. Ding and S. K. Zhou [1] proposed FaceNet2ExpNet, a facial expression recognition system jointly trained on face recognition and expression recognition: in the pre-training stage, the convolutional expression network is trained and regularized by the face network; in the fine-tuning stage, an additional fully connected network is trained together with the facial expression feature extraction network to achieve a better recognition effect. Christopher Pramerdorfer and Martin Kampel [2] combined six distinct state-of-the-art deep learning methods to recognize facial expressions. In 2018, Sanghyun Woo, Jongchan Park et al. [3] proposed the Convolutional Block Attention Module (CBAM), a simple and efficient attention module for feedforward convolutional neural networks. Given an intermediate feature map, the module infers attention maps in turn along two independent dimensions, channel and spatial, and multiplies them into the input feature map for adaptive feature refinement. Patent CN201910790582.0, filed by Sun Lingyun, Zhou Zihong et al. of Zhejiang University, discloses a self-adaptive emotion expression system and method based on expression recognition. These approaches each have their own characteristics in terms of network structure.
[1] H. Ding, S. K. Zhou, and R. Chellappa, "FaceNet2ExpNet: Regularizing a deep face recognition net for expression recognition," in Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference.
[2] C. Pramerdorfer and M. Kampel, "Facial expression recognition using convolutional neural networks: State of the art," arXiv preprint arXiv:1612.02903, 2016.
[3] S. Woo, J. Park, J. Lee et al., "CBAM: Convolutional Block Attention Module," in European Conference on Computer Vision (ECCV), 2018.
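The CBAM idea described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the patent's implementation: the MLP weights are random placeholders, and the spatial branch averages the two pooled maps instead of applying the learned 7×7 convolution used in [3].

```python
import numpy as np

def channel_attention(x, reduction=4):
    """CBAM-style channel attention for a feature map x of shape (C, H, W).

    Global average- and max-pooled descriptors pass through a shared two-layer
    MLP and are combined with a sigmoid. The MLP weights are random
    placeholders for illustration only.
    """
    c = x.shape[0]
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # shared MLP, layer 1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # shared MLP, layer 2
    avg = x.mean(axis=(1, 2))                            # (C,)
    mx = x.max(axis=(1, 2))                              # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)           # relu in between
    scale = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid, (C,)
    return x * scale[:, None, None]

def spatial_attention(x):
    """CBAM-style spatial attention: channel-wise average and max maps,
    combined here by simple averaging (the paper uses a learned 7x7 conv)."""
    avg = x.mean(axis=0)  # (H, W)
    mx = x.max(axis=0)    # (H, W)
    scale = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    return x * scale[None, :, :]

feat = np.random.default_rng(1).standard_normal((32, 12, 12))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # attention refines the features but preserves the shape
```

Applied in sequence, channel attention then spatial attention, the refined feature map keeps the input shape, which is what lets these layers be inserted after each convolution in the networks below.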
Disclosure of Invention
The invention discloses a small-target facial expression recognition method based on an attention mechanism and an ensemble. The method can be used under different backgrounds, illumination intensities and weather conditions, and ensures high expression recognition accuracy for long-distance, low-resolution, small-target faces. The technical scheme is as follows:
a small target facial expression recognition method based on attention mechanism and network integration comprises the following steps:
firstly, adopting data enhancement means including rotation, turning and noise addition aiming at a facial expression data set to improve the generalization capability and the training result of the whole network identification, and meanwhile, normalizing the picture data and adding expression category labels to obtain 7 expression categories in total.
Secondly, constructing a network structure with two branches, namely a convolution-plus-fully-connected network with an attention mechanism and a reduced resnet network, as follows:
(1) In the convolution-plus-fully-connected branch, 64 1×1 convolution kernels first perform preliminary feature acquisition on the image; feature enhancement is then carried out through a channel attention layer and a spatial attention layer, followed by a relu activation function and a batch normalization layer, which give the network nonlinear features and help avoid gradient vanishing. No pooling layer is used after the first convolution layer, so that more data information is retained. The second and third convolution layers each have 32 3×3 convolution kernels, each followed by a channel and spatial attention layer, a relu activation function, a batch normalization layer and a max-pooling layer. The fourth convolution layer has 64 5×5 convolution kernels and is likewise followed by a channel and spatial attention layer, a relu activation function, a batch normalization layer and a max-pooling layer, yielding the final image feature map. Finally, this feature map passes through three fully connected layers to obtain the output value of the attention-equipped convolution-plus-fully-connected branch;
(2) In the reduced resnet branch, the width and height of the feature map are first reduced through 64 5×5 convolution kernels, followed by a batch normalization layer and a relu activation function; after one max-pooling step, the data is passed to the residual network blocks. Seven residual units are used in total, all with 3×3 convolution kernels; the numbers of kernels are 32, 64, 128, 256 and 256 respectively, and the strides are no longer all 1 but are 1, 2, 1, 2, 1, 2 and 1 respectively. A feature map is obtained through average pooling, and the output value of the reduced resnet branch is finally obtained through a fully connected layer whose output feature vector length equals the number of expression categories;
Thirdly, integrating the two networks (ensemble): the respective output values are multiplied by corresponding weight parameters and summed, a shared bias weight is added, and softmax is applied to obtain the classification probability values. Cross entropy is computed on the softmax output to obtain the loss value during training; a top-k loss training strategy is then adopted, i.e. the top 70% of the loss values are selected and backpropagated to update the weight parameters, with the Adam gradient update method as the update strategy.
The invention ensembles the newly designed convolution-plus-fully-connected network with the reduced resnet network, exploiting the differences between their classifications; channel attention and spatial attention are added to the network simultaneously; and during training, only the top-k loss function values are backpropagated, avoiding overfitting of the network.
Drawings
FIG. 1 is a diagram of the ensemble network structure combining the convolution-plus-fully-connected branch and the reduced resnet branch.
FIG. 2 is a diagram of the confusion matrix results.
Detailed Description
In order to make the technical scheme of the invention clearer, the invention is further explained below with reference to the attached drawings. The invention is realized by the following steps:
(1) Data preprocessing:
the invention uses the disclosed fer2013 small target low pixel data set, which comprises 28708 training samples, 3589 Zhang Ceshi samples. Each picture is composed of gray images with the fixed size of 48 multiplied by 48, 7 expressions are provided, the expressions correspond to the digital labels 0-6 respectively, and the labels corresponding to the specific expressions are as follows in Chinese and English: 0-anger (gas production); 1-disagust (aversion); 2-fear (fear); 3-happy; 4-sad (heart injury); 5-surprism (surprised); 6-normal (neutral). Carrying out data enhancement and normalization processing on the pixel values of the picture, and converting the pixel values into floating point number types in the range of 0-1
(2) Forward propagation:
the method mainly comprises three parts of convolution plus full connection layer, reduced resnet and ensemble:
firstly, a self-designed convolution layer and full-connection layer network structure is provided, and the specific structure is shown in fig. 1. In the convolution layer and the full connection layer, firstly, 64 1*1 convolution cores are used for carrying out preliminary feature acquisition on an image (the step length in the network structure is set to be 1), then feature enhancement is carried out through a channel attention layer and a space attention layer, then a relu activation function and a batch normalization layer are used for enabling the network to have nonlinear features and avoiding gradient disappearance, but a posing layer is not used after the first layer of convolution, so that more data information is reserved; then, the second layer of convolution and the third layer of convolution are both 32 3*3 convolution kernels, and a channel attention and space attention layer, a relu activation function, a batch normalization layer and a maxporoling layer are connected behind the second layer of convolution and the third layer of convolution; the fourth layer of convolution has 64 5*5 convolution kernels, and the channel attention and space attention layer, the relu activation function, the batch normalization layer and the maxporoling layer are connected in the same way to finally obtain the characteristic diagram of the image. And then, obtaining the final output value of the network through three fully-connected layers, wherein the lengths of the output characteristic vectors of the three fully-connected layers are 2048, 1024 and 7 (the number of categories) respectively. The last full connectivity layer is the result of the classification of the network.
Meanwhile, the input data is also passed to the reduced resnet network; the specific structure is shown in fig. 1. In this branch, the width and height of the feature map are first reduced through 64 5×5 convolution kernels, followed by a batch normalization layer and a relu activation function; after one max-pooling step, the data is passed to the residual network blocks. Seven residual units are used in total, all with 3×3 convolution kernels; the numbers of kernels are 32, 64, 128, 256 and 256 respectively, and the strides are no longer all 1 but are 1, 2, 1, 2, 1, 2 and 1 respectively. A feature map is obtained through average pooling, and the classification result of this network is finally obtained through a fully connected layer with an output feature vector length of 7.
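The downsampling effect of the stride sequence can likewise be traced. The text attributes an initial reduction to the 5×5 convolution itself; for simplicity this sketch models the reduction only through the max pooling and the unit strides ('same' padding assumed), under which a 48×48 input reaches a 3×3 map before average pooling.

```python
s = 48                # 48x48 input
s = s                 # 64 5x5 kernels, modeled here with 'same' padding
s //= 2               # one max-pooling step -> 24x24
for stride in (1, 2, 1, 2, 1, 2, 1):  # seven residual units
    s //= stride      # 24 -> 24 -> 12 -> 12 -> 6 -> 6 -> 3 -> 3
print(s)              # spatial size entering the average pooling: 3
```

Average pooling then collapses the 3×3 map to a vector, which the final fully connected layer maps to the 7 expression categories.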
And finally, the two networks are ensembled: their respective outputs are multiplied by the weight parameters and summed, the shared bias weight is added to obtain the classification result, and the softmax function is then used to compute the probability values of all classes, yielding the final classification result.
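A minimal sketch of this ensemble step, assuming scalar branch weights (their learned values are not given in the text) and random placeholder branch outputs:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Outputs of the two branches for one sample (7 expression classes);
# random placeholders for illustration.
rng = np.random.default_rng(0)
out_convfc = rng.standard_normal(7)
out_resnet = rng.standard_normal(7)

# Ensemble: weighted sum of the branch outputs plus a shared bias, then softmax.
w1, w2 = 0.6, 0.4      # scalar branch weights (assumed values)
bias = np.zeros(7)     # shared bias weight
probs = softmax(w1 * out_convfc + w2 * out_resnet + bias)
pred = int(np.argmax(probs))
print(pred)
```

In the invention the branch weights and bias are trainable parameters updated by backpropagation, not fixed constants as in this sketch.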
(3) Training setting and optimization:
updating the gradient: too small a learning rate can lead to long-time non-convergence and waste of resources; too large a learning rate may result in a local minimum being trapped. The experiment therefore selects the Adam method with an initial learning rate of 10-3 for gradient update.
Loss function: the cross entropy function is used to compute the loss value. In particular, a top-k loss training strategy is adopted: of the samples entering the network, only the top 70% of loss values are selected for backpropagation to update the parameters, which helps avoid overfitting.
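The top-k (70%) loss selection can be sketched as follows; the probabilities and labels are random placeholders, and keeping the largest losses (the hardest samples) is the interpretation of "top" assumed here:

```python
import numpy as np

def topk_cross_entropy(probs, labels, keep=0.7):
    """Per-sample cross entropy, keeping only the largest `keep` fraction of
    loss values for backpropagation (the top-k strategy described above)."""
    n = probs.shape[0]
    losses = -np.log(probs[np.arange(n), labels] + 1e-12)
    k = max(1, int(n * keep))
    kept = np.sort(losses)[::-1][:k]   # hardest k samples in the batch
    return kept.mean(), k

rng = np.random.default_rng(0)
logits = rng.standard_normal((10, 7))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 7, size=10)
loss, k = topk_cross_entropy(probs, labels)
print(k)  # 7 of the 10 samples in this batch are kept
```

Only the kept losses would contribute gradients in backpropagation, so the easiest 30% of samples in each batch have no influence on the update.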
(4) Evaluation indexes are as follows:
the accuracy of the test set, the confusion matrix, is shown in FIG. 2.
Claims (1)
1. A small-target facial expression recognition method based on an attention mechanism and network integration, comprising the following steps:
Firstly, applying data enhancement means, including rotation, flipping and noise addition, to the facial expression data set to improve the generalization ability and training result of the whole recognition network, while normalizing the picture data and adding expression category labels, with 7 expression categories in total;
Secondly, constructing a network structure with two branches, namely a convolution-plus-fully-connected network with an attention mechanism and a reduced resnet network, as follows:
(1) Convolution-plus-fully-connected network with attention mechanism: 64 1×1 convolution kernels first perform preliminary feature acquisition on the image; feature enhancement is then carried out through a channel attention layer and a spatial attention layer, followed by a relu activation function and a batch normalization layer, which give the network nonlinear features and help avoid gradient vanishing; no pooling layer is used after the first convolution layer, so that more data information is retained; the second and third convolution layers each have 32 3×3 convolution kernels, each followed by a channel and spatial attention layer, a relu activation function, a batch normalization layer and a max-pooling layer; the fourth convolution layer has 64 5×5 convolution kernels and is likewise followed by a channel and spatial attention layer, a relu activation function, a batch normalization layer and a max-pooling layer, yielding the final image feature map; finally, this feature map passes through three fully connected layers to obtain the output value of the attention-equipped convolution-plus-fully-connected network;
(2) In the reduced resnet network, the width and height of the feature map are first reduced through 64 5×5 convolution kernels, followed by a batch normalization layer and a relu activation function; after one max-pooling step, the data is passed to the residual network blocks; seven residual units are used in total, all with 3×3 convolution kernels, the numbers of kernels being 32, 64, 128, 256 and 256 respectively, and the strides no longer all 1 but 1, 2, 1, 2, 1, 2 and 1 respectively; a feature map is obtained through average pooling, and the output value of the reduced resnet network is finally obtained through a fully connected layer whose output feature vector length equals the number of expression categories;
Thirdly, integrating (ensemble) the attention-equipped convolution-plus-fully-connected network and the reduced resnet network: the respective output values are multiplied by corresponding weight parameters and summed, a shared bias weight is added, and softmax is applied to obtain the classification probability values; cross entropy is computed on the softmax output to obtain the loss value during training; a top-k loss training strategy is then adopted, i.e. the top 70% of the loss values are selected and backpropagated to update the weight parameters, with the Adam gradient update method as the update strategy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010076302.2A CN111291670B (en) | 2020-01-23 | 2020-01-23 | Small target facial expression recognition method based on attention mechanism and network integration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111291670A (en) | 2020-06-16
CN111291670B (en) | 2023-04-07
Family
ID=71024324
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN107292256A (en) * | 2017-06-14 | 2017-10-24 | Xidian University | Deep convolutional wavelet neural network expression recognition method based on an auxiliary task
- CN109508654A (en) * | 2018-10-26 | 2019-03-22 | China University of Geosciences (Wuhan) | Face analysis method and system fusing multi-task and multi-scale convolutional neural networks
- CN110427867A (en) * | 2019-07-30 | 2019-11-08 | Huazhong University of Science and Technology | Facial expression recognition method and system based on a residual attention mechanism
Non-Patent Citations (9)
Title |
---|
"A Compact Deep Learning Model for Robust Facial Expression Recognition"; Chieh-Ming Kuo et al.; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2018; pp. 2202-2210 * |
"BAM: Bottleneck Attention Module"; Jongchan Park et al.; arXiv:1807.06514v2; 2018-07-18; pp. 1-14 * |
"Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition"; Heechul Jung et al.; 2015 IEEE International Conference on Computer Vision; 2015; pp. 2983-2991 * |
"Spatial and Channel Attention Based Convolutional Neural Networks for Modeling Noisy Speech"; Sirui Xu et al.; 2019 IEEE International Conference on Acoustics, Speech and Signal Processing; 2019-04-17; pp. 6625-6629 * |
"A facial expression recognition algorithm based on an attention model"; Chu Jinghui et al.; Laser & Optoelectronics Progress; 2019-11-07; http://kns.cnki.net/kcms/detail/31.1690.TN.20191107.1705.038.html * |
"Facial expression recognition based on CNN local feature fusion"; Yao Lisha et al.; Laser & Optoelectronics Progress; 2019-05-08; pp. 1-14 * |
"Facial expression recognition based on deep attention networks"; Li Zhenghao; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15 (No. 01); I138-1881 * |
"Research on facial expression recognition based on a hybrid-domain attention mechanism"; Li Qiusheng; China Master's Theses Full-text Database, Information Science and Technology; 2019-12-15 (No. 12); I138-334 * |
"A multi-resolution feature fusion convolutional neural network for facial expression recognition"; He Zhichao et al.; Laser & Optoelectronics Progress; 2018; pp. 364-369 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||