CN110263756A - A kind of human face super-resolution reconstructing system based on joint multi-task learning - Google Patents

A kind of human face super-resolution reconstructing system based on joint multi-task learning

Info

Publication number
CN110263756A
Authority
CN
China
Prior art keywords
face
resolution
information
image
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910578695.4A
Other languages
Chinese (zh)
Inventor
吴成东
王欢
迟剑宁
胡倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910578695.4A priority Critical patent/CN110263756A/en
Publication of CN110263756A publication Critical patent/CN110263756A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a face super-resolution reconstruction system based on joint multi-task learning, comprising an acquisition module, a first extraction module, a reconstruction module, a second extraction module and a training module. The invention obtains a shared representation of face features among related tasks through a joint training method for face multi-attribute learning tasks; it then demonstrates the feasibility of perceptual loss for improving the reconstruction of face semantic information; finally, the face attribute data set is enhanced by screening out data with missing attribute labels and re-extracting the feature point attributes with a facial key point detection algorithm, and joint multi-task learning is carried out on this basis to generate super-resolution results that are more realistic in visual perception.

Description

Face super-resolution reconstruction system based on joint multi-task learning
Technical Field
The invention relates to face reconstruction technology suitable for reconstructing face images at low resolution, and in particular to a face super-resolution reconstruction system based on joint multi-task learning.
Background
Images collected in surveillance environments are affected by atmospheric and imaging blur and by the motion of the target, so the captured face images have low resolution and cannot be recognized by people or machines; improving the definition of the acquired images is therefore a problem that urgently needs to be solved. Enhancing the resolution of face images with face super-resolution restoration techniques has become an important means of solving this problem. Face super-resolution reconstruction, the process of predicting a high-resolution face image from one or more observed low-resolution face images, is a typical ill-posed problem.
Super-resolution algorithms for the face domain are mainly divided into reconstruction-based and learning-based methods, and the learning-based methods can be further subdivided into shallow learning and deep learning approaches. Reconstruction-based methods generate new image information from low-resolution images using a particular model. In practical application scenarios, however, the resolution of the acquired face image is generally low, so a large magnification factor is required; as the magnification factor increases, the performance of reconstruction-based super-resolution algorithms drops noticeably and can hardly meet practical requirements. Learning-based methods, by training on large data sets, can reconstruct the high-frequency edge and texture information of the face that is missing from the original low-resolution image.
Early face super-resolution algorithms assumed that the face was captured in a controlled environment with small variations, learned a prior spatial distribution of image gradients, and realized the mapping between low-resolution and high-resolution faces through feature transformation. However, because matching face components depends on facial feature point detection results, accurate detection is difficult to obtain when the resolution of the face image is small.
In recent years, deep convolutional neural networks have been successfully applied to the face super-resolution task. Face super-resolution has been carried out with generative adversarial networks (GANs), using the adversarial loss to judge the authenticity of the generated face, and a spatial transformer network (STN) has been introduced as a compensation step for the deconvolution network. However, because the training process of generative adversarial networks is unstable, artifacts often appear in the output. At the same time, because the quality of face super-resolution data is uneven, the model has difficulty distinguishing genuinely relevant information from noisy data.
Disclosure of Invention
In view of the technical problem that accurate face recognition results are difficult to obtain at low resolution, a face super-resolution reconstruction system based on joint multi-task learning is provided, in which the face super-resolution technique is optimized by combining auxiliary tasks such as facial feature point detection, gender classification and facial expression recognition.
The technical means adopted by the invention are as follows:
a face super-resolution reconstruction system based on joint multi-task learning, characterized by comprising the following modules:
the acquisition module acquires a small-size face image and performs preliminary amplification to obtain a large-size, low-resolution blurred face image;
the first extraction module is used for extracting features from the blurred face image by using a multi-scale feature map fusion model to obtain shared features;
the reconstruction module is used for reconstructing the shared features to obtain a rough high-resolution face image, fusing face gender information, face expression information, face age information, face key point information and the high-resolution face image by using a multi-task learning method, acquiring a shared representation of the face features among related tasks, and finally acquiring face prior knowledge;
the second extraction module is used for simultaneously sending the obtained high-resolution face image and the corresponding high-definition face image into a VGG16 network to obtain a first face perception semantic feature map corresponding to the high-resolution face image and a second face perception semantic feature map corresponding to the high-definition face image, and extracting the difference value between the first face perception semantic feature map and the second face perception semantic feature map;
and the training module is used for reversely training the multi-scale feature map fusion model in the first extraction module by taking the difference value and the face prior knowledge as constraints.
Further, the acquisition module performs preliminary amplification on the input image by adopting a bicubic interpolation algorithm.
Further, the multi-scale feature map fusion model connects the high-resolution face image with the face gender information, the face expression information, the face age information and the face key point information through a residual error structure, and restores the details and texture features of the face by using an encoder-decoder structure to obtain the shared features.
Further, the reconstruction module reconstructs the shared features using a convolution kernel of size 3 × 3, and uses global mean pooling and fully connected layers for the detection tasks of face gender information, face expression information, face age information and face key point information to obtain the final output.
Further, when the training module trains the multi-scale feature map fusion model, a square loss function is used for the face key point information detection auxiliary task, and a cross entropy loss function is used for other information detection auxiliary tasks.
The invention first obtains a shared representation of the face features among related tasks through a joint training method for the face multi-attribute learning tasks, and on this basis improves the reconstruction of face semantic information by incorporating a perceptual loss. The beneficial effects include:
1) the invention designs a cross-layer connected multi-scale feature map fusion network, obtains feature representation of face information in a high-dimensional space, and fuses feature maps of an encoder and a decoder in different visual levels through a symmetrical cross-layer connection structure, thereby effectively improving the face super-resolution reconstruction effect of an algorithm.
2) The invention accurately reconstructs the details of the face by utilizing the attributes of human face characteristic points, human face expression, human face gender and the like, combines the related tasks of human face super-resolution, facial characteristic point detection and the like by utilizing a multi-task learning method, and obtains the shared representation of the human face characteristics among the related tasks, thereby further obtaining rich human face prior knowledge.
3) The invention utilizes the prior knowledge of the human face and the constraint of the perception loss to generate the human face edge and the texture detail which are more real and clear in visual perception.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of the system operation of the present invention.
FIG. 2 is a flow chart of the multi-scale fusion model according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a face super-resolution reconstruction system based on joint multi-task learning, which is characterized by comprising the following modules:
and the acquisition module acquires the small-size face image and performs preliminary amplification to obtain the large-size low-resolution fuzzy face image. Further, the acquisition module performs preliminary amplification on the input image by adopting a bicubic interpolation algorithm.
The first extraction module extracts features from the blurred face image with a multi-scale feature map fusion model to obtain shared features. The super-resolution main task and the related auxiliary tasks learn from these shared features and compute their losses, and the sum of the losses of all tasks is back-propagated to train the network. Specifically, a residual structure connects shallow feature maps with higher resolution to deep features with lower resolution but strong semantic content. The network body uses an encoder-decoder structure: the encoder extracts deeper-level visual features by gradually reducing the dimensionality of the feature space and feeds them to the decoder, which uses these deep visual features to gradually restore the spatial dimensions and repair the detail and texture features of the face.
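The following is a minimal sketch of this kind of encoder-decoder with symmetric cross-layer connections; the depth, channel widths and layer names are illustrative assumptions, not the patent's exact network:

```python
import torch
import torch.nn as nn

class MultiScaleFusionNet(nn.Module):
    """Toy encoder-decoder with symmetric skip connections producing shared features."""
    def __init__(self, in_ch=3, base_ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.enc3 = nn.Sequential(nn.Conv2d(base_ch * 2, base_ch * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        e1 = self.enc1(x)        # shallow, high-resolution features
        e2 = self.enc2(e1)       # mid-level features at 1/2 resolution
        e3 = self.enc3(e2)       # deep, semantic features at 1/4 resolution
        d2 = self.dec2(e3) + e2  # symmetric cross-layer (residual) fusion
        d1 = self.dec1(d2) + e1  # restore spatial detail with shallow features
        return d1                # shared features used by all tasks
```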
The reconstruction module reconstructs the shared features to obtain a rough high-resolution face image, then fuses face gender information, face expression information, face age information, face key point information and the high-resolution face image with a multi-task learning method, acquires a shared representation of the face features among the related tasks, and finally obtains face prior knowledge. The shared representation is the vector representation, in a high-dimensional space, of the face prior information obtained through multi-task learning. Although features of this type cannot be visualized directly, experimental results show that the face prior knowledge enables the face super-resolution task to be realized better. Specifically, the reconstruction module reconstructs the shared features with a convolution kernel of size 3 × 3, and produces the final outputs of the detection tasks for face gender, face expression, face age and face key point information with global mean pooling and a fully connected layer.
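A sketch of how these output branches could be arranged is given below, with a 3 × 3 convolution producing the coarse high-resolution face and global mean pooling plus fully connected layers producing the auxiliary attribute outputs; the class counts and head names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TaskHeads(nn.Module):
    """Reconstruction branch plus auxiliary attribute branches over the shared features."""
    def __init__(self, feat_ch=64, n_landmarks=5, n_expressions=7, n_ages=8):
        super().__init__()
        self.sr_head = nn.Conv2d(feat_ch, 3, kernel_size=3, padding=1)  # coarse HR face
        self.pool = nn.AdaptiveAvgPool2d(1)                             # global mean pooling
        self.landmark_fc = nn.Linear(feat_ch, n_landmarks * 2)          # (x, y) per key point
        self.gender_fc = nn.Linear(feat_ch, 2)
        self.expr_fc = nn.Linear(feat_ch, n_expressions)
        self.age_fc = nn.Linear(feat_ch, n_ages)

    def forward(self, shared):
        sr = self.sr_head(shared)
        v = self.pool(shared).flatten(1)
        return {
            "sr": sr,
            "landmarks": self.landmark_fc(v),
            "gender": self.gender_fc(v),
            "expression": self.expr_fc(v),
            "age": self.age_fc(v),
        }
```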
The second extraction module simultaneously sends the obtained high-resolution face image and the corresponding high-definition face image into a VGG16 network to obtain a first face perception semantic feature map corresponding to the high-resolution face image and a second face perception semantic feature map corresponding to the high-definition face image, and extracts the difference value between the two feature maps. VGG-16 is a model proposed by the University of Oxford in 2014, and the semantic feature map here is the output vector of the fourth convolution layer in the second convolution stage.
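The perceptual comparison could be sketched as follows with torchvision's pretrained VGG-16 (a recent torchvision version is assumed); the slice index used here is my reading of "the fourth convolution layer in the second convolution stage" as conv2_2 and is an assumption, not the patent's exact cut point:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualFeatures(nn.Module):
    """Extracts face perception semantic feature maps with a fixed, pretrained VGG-16."""
    def __init__(self):
        super().__init__()
        # Slice ends after conv2_2; the exact layer choice is an assumption.
        self.slice = vgg16(weights="IMAGENET1K_V1").features[:8].eval()
        for p in self.slice.parameters():
            p.requires_grad = False   # VGG-16 acts only as a fixed feature extractor

    def forward(self, img):
        return self.slice(img)

def perceptual_loss(sr, hr, extractor):
    """Mean squared difference between the two semantic feature maps."""
    return torch.mean((extractor(sr) - extractor(hr)) ** 2)
```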
The training module reversely trains the multi-scale feature map fusion model in the first extraction module, taking the difference value and the face prior knowledge as constraints. When the training module trains the multi-scale feature map fusion model, a square loss function is used for the face key point information detection auxiliary task, and a cross entropy loss function is used for the other information detection auxiliary tasks.
Specifically, the multi-scale feature fusion model uses the joint loss of the pixel-by-pixel difference and the perceptual loss as the loss function of the face super-resolution task, namely:
$L_{SR} = L_{MSE} + \lambda L_{perce}$
where $L_{MSE}$ denotes the loss function of the pixel-by-pixel comparison and $L_{perce}$ denotes the loss function of the semantic feature comparison.
In the invention, a square loss function is used for the feature point detection auxiliary task and the other related auxiliary tasks use a cross entropy loss function, so the auxiliary task loss used during training has the following form:
$L_{aux} = \lambda_1 L_1 + \sum_{k=2}^{3} \lambda_k L_k$
where, for a given face image training set $\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$, $N$ is the number of pictures in the training set, $x^{(i)}$ is a low-resolution image and $y^{(i)}$ is the corresponding high-resolution image. $L_1$ is the square loss between the ground-truth face attributes of the key point detection auxiliary task and the values predicted for them; the ground truth is the image attribute carried by the training sample and can be extracted directly. $L_k$ ($k = 2, 3$) are the cross entropy losses between the ground-truth and predicted face attributes of the remaining auxiliary tasks, where $k = 2$ and $k = 3$ denote classification tasks such as expression classification and gender identification, so each predicted value is a probability, i.e. a number between 0 and 1. $\lambda_1$ is the weight of the feature point detection auxiliary task and $\lambda_k$, $k = 2, 3$, are the weights of the remaining auxiliary tasks.
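By way of illustration, the joint loss described above could be assembled as in the following sketch; the weight values and dictionary layout follow the hypothetical TaskHeads and PerceptualFeatures sketches above and are not taken from the patent:

```python
import torch
import torch.nn.functional as F

def joint_loss(outputs, targets, extractor, lam_perce=1.0, lam_point=1.0, lam_cls=(1.0, 1.0)):
    """L_SR = L_MSE + lambda * L_perce, plus the weighted auxiliary-task losses.

    `outputs` and `targets` are dicts laid out like the hypothetical TaskHeads sketch.
    """
    # Super-resolution main task: pixel-wise MSE plus perceptual loss.
    l_mse = F.mse_loss(outputs["sr"], targets["hr"])
    l_perce = torch.mean((extractor(outputs["sr"]) - extractor(targets["hr"])) ** 2)
    l_sr = l_mse + lam_perce * l_perce

    # Key point detection auxiliary task: square (MSE) loss on landmark coordinates.
    l_point = F.mse_loss(outputs["landmarks"], targets["landmarks"])

    # Remaining auxiliary classification tasks (e.g. expression, gender): cross entropy.
    l_expr = F.cross_entropy(outputs["expression"], targets["expression"])
    l_gender = F.cross_entropy(outputs["gender"], targets["gender"])

    return l_sr + lam_point * l_point + lam_cls[0] * l_expr + lam_cls[1] * l_gender
```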
The model is trained with a gradient descent algorithm. Because the loss functions and learning difficulties of the different face attribute tasks differ, at the start of training the learning of the super-resolution task (the main task) is constrained by auxiliary tasks such as face key point detection, gender identification and expression classification, which keeps the main network from falling into a poor local optimum. As training progresses, once the loss value of an auxiliary task falls below its threshold, that task no longer benefits the main task and its learning process is stopped.
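The auxiliary-task gating described in the preceding paragraph could be sketched as follows; the threshold values, optimizer handling and batch layout are assumptions, and the referenced modules are the hypothetical sketches above:

```python
import torch
import torch.nn.functional as F

def train_step(model, heads, extractor, batch, optimizer, active, thresholds):
    """One gradient-descent step with auxiliary tasks gated by loss thresholds.

    `model`, `heads` and `extractor` refer to the hypothetical sketches above;
    `active` and `thresholds` are dicts keyed by auxiliary task name.
    """
    optimizer.zero_grad()
    shared = model(batch["lr_up"])          # pre-upscaled low-resolution faces
    out = heads(shared)

    # Main task: pixel-wise MSE plus perceptual loss on the coarse HR face.
    loss = F.mse_loss(out["sr"], batch["hr"])
    loss = loss + torch.mean((extractor(out["sr"]) - extractor(batch["hr"])) ** 2)

    aux_criteria = {"landmarks": F.mse_loss,        # square loss for key points
                    "expression": F.cross_entropy,  # cross entropy for classification
                    "gender": F.cross_entropy}
    for task, criterion in aux_criteria.items():
        if active[task]:
            l_aux = criterion(out[task], batch[task])
            if l_aux.item() < thresholds[task]:
                active[task] = False                # task no longer benefits the main task
            else:
                loss = loss + l_aux                 # auxiliary constraint on the main network

    loss.backward()
    optimizer.step()
    return loss.item()
```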
As shown in FIG. 1, the present invention relates to a face super-resolution reconstruction system based on joint multi-task learning, which can execute the following steps:
step 1, preprocessing an input face image, and specifically, primarily amplifying the face image by using a bicubic sampling algorithm.
In this embodiment, 4000 low-resolution face images are used as the training set and 1000 images as the test set, each of size 16 × 16. In step 1, each face image is initially magnified 8 times to obtain a low-resolution face image of size 128 × 128.
Step 2, for each low-resolution image obtained in step 1, extracting a feature representation of the face information in a high-dimensional space with the multi-scale feature map fusion network.
In this embodiment, a multi-scale feature map fusion method is adopted: as shown in FIG. 2, a residual structure connects shallow feature maps with higher resolution to deep features with lower resolution but strong semantic content. At the same time, this maximizes the information flow between layers in the network, so that a connected layer can take the features of the preceding layers as input. The network body uses an encoder-decoder structure: the encoder extracts deeper-level visual features by gradually reducing the dimensionality of the feature space and feeds them to the decoder, which uses these deep visual features to gradually restore the spatial dimensions and repair the detail and texture features of the face. The feature maps of the encoder and the decoder at different visual levels are fused through a symmetric cross-layer connection structure, which effectively improves the face super-resolution reconstruction effect of the algorithm.
Step 3, sending the high-dimensional face features obtained in step 2 into the branches of the different face attribute tasks to obtain an initial face super-resolution result, face key point positions, face age information and face gender information.
In this embodiment, the super-resolution main task performs face reconstruction with a 3 × 3 convolution kernel over the shared features, and the auxiliary tasks such as feature point detection use global mean pooling and a fully connected layer to obtain their final outputs.
Step 4, inputting the initial face super-resolution result obtained in step 3 and the corresponding high-definition face image from the training set of step 1 into a VGG-16 model trained on the ImageNet data set, extracting the output vector of the fourth convolution layer of the second convolution stage as the high-level semantic feature of each image, and calculating their difference value.
Step 5, back-propagating and training the face super-resolution reconstruction network using the face semantic feature perceptual loss calculated in step 4 and the face multi-attribute information obtained in step 3 as constraints.
In this embodiment, the training set consists of the 4000 low-resolution face images introduced in step 1 and the 4000 corresponding high-resolution face images.
Step 6, obtaining the final face super-resolution result for a low-resolution face image to be reconstructed according to steps 1-3.
In this embodiment, the 1000 test set images introduced in step 1 are processed through steps 1-3 to obtain the final face super-resolution results. The reconstructed face images reach 30.65 dB on the peak signal-to-noise ratio (PSNR) evaluation index.
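For reference, the peak signal-to-noise ratio used for the 30.65 dB figure can be computed as in this minimal sketch, assuming images scaled to [0, 1]; the patent does not specify its evaluation code:

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reconstructed and a reference image."""
    mse = torch.mean((sr - hr) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))
```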
In summary, the present invention provides a new face reconstruction algorithm based on multi-task joint learning for the face super-resolution problem. The performance of the face super-resolution algorithm is optimized with auxiliary tasks such as facial feature point detection, gender classification and facial expression recognition. The pixel-by-pixel difference loss function is supplemented with a perceptual loss function, so that the reconstruction of face perception semantic information is improved while the face edge and texture features are recovered, and the visual perception effect is more realistic. Experimental analysis shows that the proposed algorithm makes better use of face prior knowledge and generates face edge and texture details that are more realistic and clearer in visual perception.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A face super-resolution reconstruction system based on joint multi-task learning, characterized by comprising:
the acquisition module acquires a small-size face image and performs preliminary amplification to obtain a large-size, low-resolution blurred face image;
the first extraction module is used for extracting features from the blurred face image by using a multi-scale feature map fusion model to obtain shared features;
the reconstruction module is used for reconstructing the shared features to obtain a rough high-resolution face image, fusing face gender information, face expression information, face age information, face key point information and the high-resolution face image by using a multi-task learning method, acquiring a shared representation of the face features among related tasks, and finally acquiring face prior knowledge;
the second extraction module is used for simultaneously sending the obtained high-resolution face image and the corresponding high-definition face image into a VGG16 network to obtain a first face perception semantic feature map corresponding to the high-resolution face image and a second face perception semantic feature map corresponding to the high-definition face image, and extracting the difference value between the first face perception semantic feature map and the second face perception semantic feature map;
and the training module is used for reversely training the multi-scale feature map fusion model in the first extraction module by taking the difference value and the face prior knowledge as constraints.
2. The system of claim 1, wherein the acquisition module performs a preliminary magnification on the input image by using a bicubic interpolation algorithm.
3. The super-resolution reconstruction system for human faces according to claim 1 or 2, wherein the multi-scale feature map fusion model connects the high-resolution human face image with the human face gender information, the human face expression information, the human face age information and the human face key point information through a residual structure, and uses an encoder-decoder structure to repair the detail and texture features of the human face to obtain the shared features.
4. The face super-resolution reconstruction system according to claim 3, wherein the reconstruction module reconstructs the shared features using a convolution kernel of size 3 × 3, and uses global mean pooling and fully connected layers for the detection of the face gender information, face expression information, face age information and face key point information to obtain the final output.
5. The system of claim 1, wherein when the training module trains the multi-scale feature map fusion model, a square loss function is used for a face key point information detection auxiliary task, and a cross entropy loss function is used for other information detection auxiliary tasks.
CN201910578695.4A 2019-06-28 2019-06-28 A kind of human face super-resolution reconstructing system based on joint multi-task learning Pending CN110263756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578695.4A CN110263756A (en) 2019-06-28 2019-06-28 A kind of human face super-resolution reconstructing system based on joint multi-task learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578695.4A CN110263756A (en) 2019-06-28 2019-06-28 A kind of human face super-resolution reconstructing system based on joint multi-task learning

Publications (1)

Publication Number Publication Date
CN110263756A true CN110263756A (en) 2019-09-20

Family

ID=67923079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578695.4A Pending CN110263756A (en) 2019-06-28 2019-06-28 A kind of human face super-resolution reconstructing system based on joint multi-task learning

Country Status (1)

Country Link
CN (1) CN110263756A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070242883A1 (en) * 2006-04-12 2007-10-18 Hannes Martin Kruppa System And Method For Recovering Image Detail From Multiple Image Frames In Real-Time
US20100124383A1 (en) * 2008-11-19 2010-05-20 Nec Laboratories America, Inc. Systems and methods for resolution-invariant image representation
WO2016050729A1 (en) * 2014-09-30 2016-04-07 Thomson Licensing Face inpainting using piece-wise affine warping and sparse coding
WO2017015390A1 (en) * 2015-07-20 2017-01-26 University Of Maryland, College Park Deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
CN106529402A (en) * 2016-09-27 2017-03-22 中国科学院自动化研究所 Multi-task learning convolutional neural network-based face attribute analysis method
CN106815566A (en) * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 A kind of face retrieval method based on multitask convolutional neural networks
CN107958444A (en) * 2017-12-28 2018-04-24 江西高创保安服务技术有限公司 A kind of face super-resolution reconstruction method based on deep learning
CN107958246A (en) * 2018-01-17 2018-04-24 深圳市唯特视科技有限公司 A kind of image alignment method based on new end-to-end human face super-resolution network
CN109063565A (en) * 2018-06-29 2018-12-21 中国科学院信息工程研究所 A kind of low resolution face identification method and device
CN109101915A (en) * 2018-08-01 2018-12-28 中国计量大学 Face and pedestrian and Attribute Recognition network structure design method based on deep learning
CN109146813A (en) * 2018-08-16 2019-01-04 广州视源电子科技股份有限公司 Multitask image reconstruction method, device, equipment and medium
CN109255831A (en) * 2018-09-21 2019-01-22 南京大学 The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Y. Chen et al.: "FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
Jia Ping: "Research on Image Super-Resolution Reconstruction Methods Based on Multi-Task Learning", China Master's Theses Full-text Database *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689618A (en) * 2019-09-29 2020-01-14 天津大学 Three-dimensional deformable object filling method based on multi-scale variational graph convolution
CN110738160A (en) * 2019-10-12 2020-01-31 成都考拉悠然科技有限公司 human face quality evaluation method combining with human face detection
CN112784660A (en) * 2019-11-01 2021-05-11 财团法人工业技术研究院 Face image reconstruction method and system
CN112784660B (en) * 2019-11-01 2023-10-24 财团法人工业技术研究院 Face image reconstruction method and system
WO2021179822A1 (en) * 2020-03-12 2021-09-16 Oppo广东移动通信有限公司 Human body feature point detection method and apparatus, electronic device, and storage medium
US11900563B2 (en) 2020-04-01 2024-02-13 Boe Technology Group Co., Ltd. Computer-implemented method, apparatus, and computer-program product
CN111507248A (en) * 2020-04-16 2020-08-07 成都东方天呈智能科技有限公司 Face forehead area detection and positioning method and system of low-resolution thermodynamic diagram
CN111612133B (en) * 2020-05-20 2021-10-19 广州华见智能科技有限公司 Internal organ feature coding method based on face image multi-stage relation learning
CN111612133A (en) * 2020-05-20 2020-09-01 广州华见智能科技有限公司 Internal organ feature coding method based on face image multi-stage relation learning
CN111753670A (en) * 2020-05-29 2020-10-09 清华大学 Human face overdividing method based on iterative cooperation of attention restoration and key point detection
US11710215B2 (en) * 2020-06-17 2023-07-25 Beijing Baidu Netcom Science And Technology Co., Ltd. Face super-resolution realization method and apparatus, electronic device and storage medium
US20210209732A1 (en) * 2020-06-17 2021-07-08 Beijing Baidu Netcom Science And Technology Co., Ltd. Face super-resolution realization method and apparatus, electronic device and storage medium
CN112348743A (en) * 2020-11-06 2021-02-09 天津大学 Image super-resolution method fusing discriminant network and generation network
CN112348743B (en) * 2020-11-06 2023-01-31 天津大学 Image super-resolution method fusing discriminant network and generation network
CN112419158A (en) * 2020-12-07 2021-02-26 上海互联网软件集团有限公司 Image video super-resolution and super-definition reconstruction system and method
CN112818833A (en) * 2021-01-29 2021-05-18 中能国际建筑投资集团有限公司 Face multitask detection method, system, device and medium based on deep learning
CN112818833B (en) * 2021-01-29 2024-04-12 中能国际建筑投资集团有限公司 Face multitasking detection method, system, device and medium based on deep learning
CN112507997A (en) * 2021-02-08 2021-03-16 之江实验室 Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN113807265A (en) * 2021-09-18 2021-12-17 山东财经大学 Diversified human face image synthesis method and system
CN114140843A (en) * 2021-11-09 2022-03-04 东南大学 Cross-database expression identification method based on sample self-repairing
CN114140843B (en) * 2021-11-09 2024-04-16 东南大学 Cross-database expression recognition method based on sample self-repairing
CN114170484A (en) * 2022-02-11 2022-03-11 中科视语(北京)科技有限公司 Picture attribute prediction method and device, electronic equipment and storage medium
CN117225921A (en) * 2023-09-26 2023-12-15 山东天衢铝业有限公司 Automatic control system and method for extrusion of aluminum alloy profile
CN117225921B (en) * 2023-09-26 2024-03-12 山东天衢铝业有限公司 Automatic control system and method for extrusion of aluminum alloy profile

Similar Documents

Publication Publication Date Title
CN110263756A (en) A kind of human face super-resolution reconstructing system based on joint multi-task learning
EP3961484B1 (en) Medical image segmentation method and device, electronic device and storage medium
Zhou et al. Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network
CN111047516B (en) Image processing method, image processing device, computer equipment and storage medium
Zhuge et al. Deep embedding features for salient object detection
Wang et al. A survey of deep face restoration: Denoise, super-resolution, deblur, artifact removal
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN113298736B (en) Face image restoration method based on face pattern
CN114170184A (en) Product image anomaly detection method and device based on embedded feature vector
Chen et al. Self-supervised remote sensing images change detection at pixel-level
CN117974693B (en) Image segmentation method, device, computer equipment and storage medium
CN112131969A (en) Remote sensing image change detection method based on full convolution neural network
CN116258632A (en) Text image super-resolution reconstruction method based on text assistance
CN111666813A (en) Subcutaneous sweat gland extraction method based on three-dimensional convolutional neural network of non-local information
Conrad et al. Two-stage seamless text erasing on real-world scene images
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
Susan et al. Deep learning inpainting model on digital and medical images-a review.
Gupta et al. A robust and efficient image de-fencing approach using conditional generative adversarial networks
Ma et al. MHGAN: A multi-headed generative adversarial network for underwater sonar image super-resolution
CN113421212B (en) Medical image enhancement method, device, equipment and medium
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
Ghanem et al. Face completion using generative adversarial network with pretrained face landmark generator
CN115116117A (en) Learning input data acquisition method based on multi-mode fusion network
Samadzadegan Data integration related to sensors, data and models
Wyzykowski et al. A Universal Latent Fingerprint Enhancer Using Transformers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230929