CN107392865B - Restoration method of face image

Restoration method of face image

Info

Publication number
CN107392865B
CN107392865B
Authority
CN
China
Prior art keywords
image
network
face image
input image
restored
Prior art date
Legal status
Active
Application number
CN201710528727.0A
Other languages
Chinese (zh)
Other versions
CN107392865A (en)
Inventor
林倞 (Liang Lin)
曹擎星 (Qingxing Cao)
Current Assignee
Guangzhou wisdom Technology (Guangzhou) Co.,Ltd.
Original Assignee
Guangzhou Shenyu Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangzhou Shenyu Information Technology Co., Ltd.
Priority to CN201710528727.0A
Publication of CN107392865A
Application granted
Publication of CN107392865B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for restoring a face image, which comprises the following steps: S1, acquiring a set of face image pairs; S2, inputting the blurred image into a policy network as the initial input image; S3, selecting a region of the input image with the policy network; S4, restoring the region selected in S3 with an enhancement network; S5, repeating S3 to S4 several times; S6, training the policy network and the enhancement network; S7, initializing the policy network and the enhancement network; and S8, inputting the face image to be restored into the policy network as the initial input image and repeating S3 to S5 to obtain the restored face image. The method autonomously and preferentially selects the less distorted regions of the blurred face image and restores them first, and the extra information provided by the restored regions then assists the restoration of the remaining distorted regions, achieving a better restoration effect than the prior art.

Description

Restoration method of face image
Technical Field
The invention relates to the field of image processing, and in particular to a method for restoring a face image.
Background
Low-resolution face image restoration refers to recovering a clear, high-resolution face image from one or more low-resolution face images. In many images and videos, high-definition faces carry important information and value. In particular, with the large-scale deployment of road monitoring, dashboard cameras, and security surveillance in recent years, clear faces in surveillance video and images have received more and more attention. Face images play an extremely important role in many applications, such as identity authentication, crowd analysis, and human tracking. In practice, however, the demand for high-resolution face images often conflicts with the low resolution of surveillance video, so blurred, unclear faces in surveillance footage create many obstacles and inconveniences for the practical application of video surveillance. Owing to technical constraints, high-resolution optical sensors are not yet ubiquitous. Although upgrading optical sensors and related equipment can mitigate face blurring in newly captured video, it increases purchase and maintenance costs and does nothing for footage that has already been recorded. Moreover, many sources of interference arise in use, such as motion and long camera distance, which degrade the quality of recorded video. There is therefore a strong practical demand for obtaining the desired information from restored high-resolution images by technical means.
At present, when analyzing video, people often review the information in surveillance footage repeatedly and examine the important parts again and again, and face images are often among the most important information in a video. Because faces in surveillance video are usually far from the camera and occupy only a small portion of the frame, the resolution of the face image is often low when the camera is distant. For faces that are not sufficiently clear in video, the usual approach is to enlarge them directly by interpolation and then analyze them. Interpolation is fast and widely applied, but its enlargement quality is poor: it damages the high-frequency information of the image and causes blurring, which greatly complicates the recognition and restoration of faces in video.
With the development of computer vision, many computer vision techniques have been applied to low-resolution face image restoration. The mature techniques currently include interpolation, dictionary learning, and deep convolutional neural networks. Dictionary-learning methods build two dictionaries, one for low-resolution images and one for high-resolution images, and achieve the mapping from low resolution to high resolution by learning the relations between them. Interpolation methods enlarge an image by constructing a better-optimized up-sampling function model while preserving the integrity of the high-frequency information. Deep-learning methods use a neural network to establish a sparse-representation, mapping, and reconstruction pipeline from low resolution to high resolution to obtain a high-resolution image. Although many methods exist for restoring low-resolution face images, most of them target face images captured in a controlled environment, i.e., under strict conditions of pose angle, illumination, and expression.
Disclosure of Invention
The object of the invention is to provide a method for restoring a face image that addresses the above problems of the prior art, so that a blurred face image captured in an uncontrolled environment can be restored to a clear image.
In order to achieve this object, the invention adopts the following technical scheme:
a restoration method of a face image comprises the following steps:
S1, acquiring a set of face image pairs, wherein each face image pair comprises a clear image and a blurred image of the same face;
S2, inputting the blurred image into a policy network as the initial input image;
S3, selecting a region of the input image with the policy network;
S4, restoring the region selected in S3 with an enhancement network;
S5, taking the whole image obtained after the region restoration of S4 as the input image of S3 and iteratively executing S3 to S4 several times, the image obtained from the last repetition of S4 being the restored image;
S6, calculating the similarity between the restored image obtained in S5 and the clear image obtained in S1, training the policy network of S3 with a reinforcement learning algorithm, and training the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initializing the policy network and the enhancement network with the parameters obtained by the training in S6;
and S8, inputting the face image to be restored into the policy network as the initial input image, and repeating S3 to S5 to obtain the restored face image.
Further, the policy network comprises a fully connected layer and a long short-term memory (LSTM) network; the LSTM network records and encodes the region selected in the previous iteration of S3 and passes it to the next iteration in the form of a hidden vector.
Further, the input image of the policy network in step S3 is the blurred image or the image obtained in step S4 of the previous iteration, and the output is a probability map of the same size as the input image; when S3 is executed within S8, the point with the highest probability in the probability map is taken as the center point, and a rectangular region of fixed size is cut out at the corresponding position of the input image as the region selected in step S3.
Further, before S8 (i.e., during training), when S3 is executed, a point is randomly sampled from the probability map as the center point, and a rectangular region of fixed size is cut out at the corresponding position of the input image as the region selected in step S3.
Further, the enhancement network comprises a convolutional neural network and a plurality of fully connected layers, the convolutional neural network consisting of eight convolutional layers.
Further, in S6, the similarity between the restored image obtained in S5 and the clear image obtained in S1 is calculated as the mean square error between the two images, that is, the squared difference between the pixels at corresponding positions of the two images is computed and all the resulting values are summed.
Further, in S6, training the policy network with the reinforcement learning algorithm specifically comprises: negating the image similarity obtained in S6 to obtain the reward signal of the reinforcement learning method; obtaining the gradient of the reward signal with respect to the policy network using the REINFORCE algorithm; and updating the parameters of the policy network with gradient back-propagation and gradient descent.
Further, S7 further comprises acquiring a plurality of sets of face image pairs and iteratively performing S2 to S7 on each set of face image pairs in turn.
Compared with the prior art, the invention has the following beneficial effect: the method autonomously and preferentially selects the less distorted regions of a blurred face image and restores them first, and the extra information provided by the restored regions then assists the restoration of the remaining distorted regions, achieving a better restoration effect than the prior art.
Drawings
Fig. 1 is a schematic flow chart of the method for restoring a face image according to the present invention.
Fig. 2 is a schematic diagram of a face image pair in the present invention.
Fig. 3 is a schematic flow chart of S3 to S4 in the present invention.
Fig. 4 is a schematic flow chart of S5 in the present invention.
Fig. 5 shows an example of face image restoration using the method of the present invention.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The method for restoring a face image provided by the invention can restore a blurred face image into a clear image, and comprises two main parts: neural network training and face image restoration.
Specifically, as shown in fig. 1, the method for restoring a face image provided by the present invention comprises the following steps:
S1, acquiring a set of face image pairs, as shown in fig. 2, wherein each face image pair comprises a clear image and a blurred image of the same face;
S2, inputting the blurred image into a policy network as the initial input image;
S3, selecting a region of the input image with the policy network;
S4, restoring the region selected in S3 with an enhancement network;
S5, taking the whole image obtained after the region restoration of S4 as the input image of S3 and iteratively executing S3 to S4 several times, the image obtained from the last repetition of S4 being the restored image;
S6, calculating the similarity between the restored image obtained in S5 and the clear image obtained in S1, training the policy network of S3 with a reinforcement learning algorithm, and training the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initializing the policy network and the enhancement network with the parameters obtained by the training in S6;
and S8, inputting the face image to be restored into the policy network as the initial input image, and repeating S3 to S5 to obtain the restored face image.
Here, S1 to S7 constitute the neural network training process, and S8 is the face image restoration process.
Before the neural networks are trained, the parameters of the policy network and the enhancement network can be randomly initialized from a normal distribution with mean 0 and variance 0.01. The policy network comprises a fully connected layer and a long short-term memory (LSTM) network; the enhancement network comprises a convolutional neural network consisting of eight convolutional layers together with a plurality of fully connected layers.
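For illustration only, this initialization can be sketched as follows; the use of PyTorch and the function name are assumptions of the sketch, not part of the disclosed method (note that a variance of 0.01 corresponds to a standard deviation of 0.1):

```python
import math
import torch.nn as nn

def init_params(module: nn.Module, variance: float = 0.01) -> None:
    # Draw every parameter from a normal distribution with mean 0 and the
    # given variance (std = sqrt(variance) = 0.1 for variance 0.01).
    std = math.sqrt(variance)
    for p in module.parameters():
        nn.init.normal_(p, mean=0.0, std=std)
```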
Further, the processing of steps S3 to S4 is shown in fig. 3, and the processing of S5 is shown in fig. 4. Specifically, each iteration of S3 to S4 within S5 outputs a new "state" of the image. A "state" comprises two parts. The first part is the region-restored image output by S4; since this image accumulates the region restorations of all previous "states", the policy network can tell which regions of the image are already clear and which remain blurred, and can decide which region should currently be restored based on the regions already restored. The second part is the hidden vector generated by the policy network in S3: an LSTM network has the ability to memorize long-term information, so it is used to record and encode the position of the region selected in the previous iteration and to pass it to the next iteration in the form of a hidden vector.
From the second iteration onward, the input of the policy network in S3 is the "state" generated by the previous iteration (i.e., the whole image obtained after the region restoration of the previous S4). The first layer of the policy network is a fully connected layer whose input is the image. Assuming an image size of 128 × 128, this fully connected layer flattens the input image into a 16384-dimensional vector and outputs a 256-dimensional vector. The 256-dimensional vector is fed into the LSTM network together with the hidden vector from the previous round. The LSTM network outputs a 512-dimensional hidden variable, which then passes through another fully connected layer to produce a probability map of size 128 × 128. Each point of the probability map represents the probability that the policy network selects a fixed-size region of the input image centered on that point. Because this is still the training process, the region of maximum probability need not be selected; instead, a point is randomly sampled from the probability map, and the rectangular region of size 60 × 45 centered on that point is taken as the selected region output by S3.
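A minimal sketch of this policy network, under the same PyTorch assumption, is given below; the class and function names and the use of softmax to normalize the probability map are assumptions of the sketch rather than part of the disclosure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNetwork(nn.Module):
    def __init__(self, img_size: int = 128, hidden_size: int = 512):
        super().__init__()
        self.img_size = img_size
        self.fc_in = nn.Linear(img_size * img_size, 256)            # 16384 -> 256
        self.lstm = nn.LSTMCell(256, hidden_size)                   # 512-dim hidden vector
        self.fc_out = nn.Linear(hidden_size, img_size * img_size)   # 512 -> 16384

    def forward(self, image, state=None):
        # Flatten the whole (partially restored) image into a 16384-dim vector.
        x = self.fc_in(image.view(image.size(0), -1))
        h, c = self.lstm(x, state)   # the hidden vector carries the region history
        # Probability map of the same size as the input image.
        prob_map = F.softmax(self.fc_out(h), dim=1)
        return prob_map.view(-1, self.img_size, self.img_size), (h, c)

def sample_center(prob_map):
    # Training-time selection: draw a random center point according to the
    # probability map (at test time the most probable point is taken instead).
    flat = prob_map.view(prob_map.size(0), -1)
    idx = torch.multinomial(flat, 1).squeeze(1)
    return idx // prob_map.size(2), idx % prob_map.size(2)   # (row, col)
```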
Assume the image to be restored is of size 128 × 128 and the extracted image region is of size 60 × 45. In step S4, the whole image is flattened into a 16384-dimensional vector and passed through a first fully connected layer to obtain a 256-dimensional vector, through a second fully connected layer to obtain another 256-dimensional vector, and finally through a third fully connected layer to obtain a feature map of size 60 × 45. This 60 × 45 feature map is stacked with the extracted image region to form a 2 × 60 × 45 feature map, which is passed through a convolutional neural network to obtain the 60 × 45 restored region image. The convolutional neural network consists of eight convolutional layers: the first and seventh layers use 5 × 5 convolution kernels with output size 60 × 45 × 16, the second and sixth layers use 7 × 7 kernels with output size 60 × 45 × 32, the third, fourth, and fifth layers use 7 × 7 kernels with output size 60 × 45 × 64, and the eighth layer uses a 5 × 5 kernel with output size 60 × 45 × 1, which is the restored region image. The restored region image replaces the corresponding region of the image obtained from the previous iteration, and the whole image formed after the replacement serves as the input of the next iteration.
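A sketch of this enhancement network in the same assumed PyTorch setting is shown below; the channel widths 16-32-64-64-64-32-16-1 follow the layer outputs described above, and the "same" padding that keeps the 60 × 45 spatial size constant, like the names, is an assumption of the sketch:

```python
import torch
import torch.nn as nn

class EnhancementNetwork(nn.Module):
    def __init__(self, img_size: int = 128, region_h: int = 45, region_w: int = 60):
        super().__init__()
        self.region_h, self.region_w = region_h, region_w
        # Three fully connected layers encode the whole image into a 60 x 45 feature map.
        self.fc = nn.Sequential(
            nn.Linear(img_size * img_size, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, region_h * region_w),
        )
        # Eight convolutional layers with the kernel sizes given in the text.
        def conv(ci, co, k):
            return nn.Sequential(nn.Conv2d(ci, co, k, padding=k // 2), nn.ReLU())
        chans = [2, 16, 32, 64, 64, 64, 32, 16]
        kerns = [5, 7, 7, 7, 7, 7, 5]
        layers = [conv(ci, co, k) for ci, co, k in zip(chans, chans[1:], kerns)]
        layers.append(nn.Conv2d(16, 1, 5, padding=2))   # eighth layer: restored region
        self.cnn = nn.Sequential(*layers)

    def forward(self, image, region):
        feat = self.fc(image.view(image.size(0), -1))
        feat = feat.view(-1, 1, self.region_h, self.region_w)
        x = torch.cat([feat, region], dim=1)   # 2 x 60 x 45 input to the CNN
        return self.cnn(x)
```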
By iteratively executing S3 to S4 several times in S5, an image restored from the blurred image, referred to here as the restored image, is finally obtained. By comparing the restored image with the clear image, the policy network and the enhancement network can be trained.
Specifically, in S6, the similarity between the restored image obtained in S5 and the clear image obtained in S1 is calculated as the mean square error between the two images, that is, the squared difference between the pixels at corresponding positions of the two images is computed and all the resulting values are summed.
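Under the same assumptions as the sketches above, this similarity measure is a sum of squared per-pixel differences:

```python
def image_similarity(restored, sharp):
    # Squared difference at every corresponding pixel, summed over the image.
    return ((restored - sharp) ** 2).sum()
```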
Further, the enhancement network is trained with the general method for training neural networks, i.e., the mean square error is used as the loss function and the network parameters are updated with gradient back-propagation and gradient descent. The policy network is trained with a reinforcement learning algorithm: different regions are tried at each selection, and the selections of the whole sequence are encouraged or suppressed according to the quality of the final reward signal.
In S6, training the policy network with the reinforcement learning algorithm specifically comprises: negating the computed mean square error to obtain the reward signal of the reinforcement learning method; obtaining the gradient of the reward signal with respect to the policy network using the REINFORCE algorithm; and updating the parameters of the policy network with gradient back-propagation and gradient descent. In this embodiment, if the reward signal has value R and the probability of the randomly selected point in an iteration is P, the gradient of the policy network at that point has value R / P and is 0 at all unselected points; this gradient is then used to update the parameters of the policy network by gradient back-propagation and gradient descent.
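For illustration, one such update can be sketched as follows (same PyTorch assumption; the function and argument names are ours, and reward is the negated mean square error). Minimizing -R · log P reproduces the R / P gradient at the selected point, with zero gradient at every unselected point:

```python
import torch

def reinforce_step(prob_map, selected_idx, reward, optimizer):
    # P: probability the policy assigned to the point actually selected.
    flat = prob_map.view(prob_map.size(0), -1)
    p = flat.gather(1, selected_idx.unsqueeze(1)).squeeze(1)
    # REINFORCE loss: -R * log P, so d(loss)/dP = -R / P at the selected
    # point and 0 elsewhere; autograd back-propagates this through the
    # policy network, and the optimizer applies gradient descent.
    loss = -(reward * torch.log(p)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```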
As a refinement, S7 further comprises acquiring a plurality of sets of face image pairs and iteratively performing S2 to S7 on each set in turn. Iteratively training the policy network and the enhancement network on multiple sets of face image pairs improves their training effect, and the more sample pairs there are, the better the effect. Within each face image pair, the blurred image can be obtained by shrinking the clear image with bilinear interpolation and then enlarging it back to the original size, which simplifies sample acquisition; a sketch of this pair generation follows.
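A possible pair-generation routine under the same assumptions (the downscaling factor is our own choice; the text does not fix one):

```python
import torch.nn.functional as F

def make_blurred(sharp, scale: int = 4):
    # sharp: (N, C, H, W) tensor. Shrink with bilinear interpolation, then
    # enlarge back to the original size; the result is the blurred half
    # of the training pair.
    h, w = sharp.shape[-2:]
    small = F.interpolate(sharp, size=(h // scale, w // scale),
                          mode='bilinear', align_corners=False)
    return F.interpolate(small, size=(h, w),
                         mode='bilinear', align_corners=False)
```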
After the policy network and the enhancement network have been trained and their parameters initialized, the face image to be restored can be used as the initial input image to carry out the restoration. In S8, after S3 to S4 have been iterated 25 times (or some other fixed number of times), the image finally obtained is the face image whose regions have been restored one by one. As shown in fig. 5, each of the 25 face images is the output image of the corresponding iteration of steps S3 to S4, and above each face image is the face region selected in step S3 of that iteration. The output image of the last iteration is the output of the method.
In S8, when S3 is executed, the image region is selected from the probability map output by the policy network slightly differently than during neural network training: when a single face image is actually being restored, the point with the highest probability in the probability map is taken as the center point, and a rectangular region of fixed size is cut out at the corresponding position of the input image as the region selected in step S3.
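Putting the pieces together, the test-time restoration loop of S8 can be sketched as follows; crop_region and paste_region are hypothetical helpers (not given here) that cut out and write back the fixed-size rectangle at the chosen center, and the argmax selection replaces the random sampling used during training:

```python
import torch

def select_center(prob_map):
    # Test-time selection: the most probable point becomes the center.
    flat = prob_map.view(prob_map.size(0), -1)
    idx = flat.argmax(dim=1)
    return idx // prob_map.size(2), idx % prob_map.size(2)

def restore(blurred, policy, enhancer, n_iters: int = 25):
    image, state = blurred, None
    for _ in range(n_iters):
        prob_map, state = policy(image, state)
        row, col = select_center(prob_map)
        region = crop_region(image, row, col)          # hypothetical helper
        fixed = enhancer(image, region)                # restored 60 x 45 region
        image = paste_region(image, fixed, row, col)   # hypothetical helper
    return image
```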
The above embodiments express only several implementations of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the invention, and all of these fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (7)

1. A method for restoring a face image, characterized by comprising the following steps:
S1, acquiring a set of face image pairs, wherein each face image pair comprises a clear image and a blurred image of the same face;
S2, inputting the blurred image into a policy network as the initial input image;
S3, selecting a region of the input image with the policy network;
S4, restoring the region selected in S3 with an enhancement network;
S5, taking the whole image obtained after the region restoration of S4 as the input image of S3 and iteratively executing S3 to S4 several times, the image obtained from the last repetition of S4 being the restored image;
S6, calculating the similarity between the restored image obtained in S5 and the clear image obtained in S1, training the policy network of S3 with a reinforcement learning algorithm, and training the enhancement network of S4 with gradient back-propagation and gradient descent;
S7, initializing the policy network and the enhancement network with the parameters obtained by the training in S6;
S8, inputting the face image to be restored into the policy network as the initial input image, and repeating S3 to S5 to obtain the restored face image;
wherein the input image of the policy network in step S3 is the blurred image or the image obtained in step S4 of the previous iteration, and the output is a probability map of the same size as the input image; when S3 is executed within S8, the point with the highest probability in the probability map is taken as the center point, and a rectangular region of fixed size is cut out at the corresponding position of the input image as the region selected in step S3.
2. The method of claim 1, wherein the policy network comprises a fully connected layer and a long short-term memory (LSTM) network; the LSTM network records and encodes the region selected in the previous iteration of S3 and passes it to the next iteration in the form of a hidden vector.
3. The method of claim 1, wherein before S8, when S3 is executed, a point is randomly selected from the probability map as the center point, and a rectangular region of fixed size is cut out at the corresponding position of the input image as the region selected in S3.
4. The method of claim 1, wherein the enhancement network comprises a convolutional neural network and a plurality of fully connected layers, the convolutional neural network consisting of eight convolutional layers.
5. The method according to claim 1, wherein in S6 the similarity between the restored image obtained in S5 and the clear image obtained in S1 is calculated as the mean square error between the two images, that is, the squared difference between the pixels at corresponding positions of the two images is computed and all the resulting values are summed.
6. The method according to claim 1, wherein in S6 training the policy network with the reinforcement learning algorithm specifically comprises: negating the image similarity obtained in S6 to obtain the reward signal of the reinforcement learning method; obtaining the gradient of the reward signal with respect to the policy network using the REINFORCE algorithm; and updating the parameters of the policy network with gradient back-propagation and gradient descent.
7. The method of claim 1, wherein S7 further comprises acquiring a plurality of sets of face image pairs and iteratively performing S2 to S7 on each set of face image pairs in turn.
CN201710528727.0A 2017-07-01 2017-07-01 Restoration method of face image Active CN107392865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710528727.0A CN107392865B (en) 2017-07-01 2017-07-01 Restoration method of face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710528727.0A CN107392865B (en) 2017-07-01 2017-07-01 Restoration method of face image

Publications (2)

Publication Number Publication Date
CN107392865A CN107392865A (en) 2017-11-24
CN107392865B (en) 2020-08-07

Family

Family ID: 60335138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710528727.0A Active CN107392865B (en) 2017-07-01 2017-07-01 Restoration method of face image

Country Status (1)

Country Link
CN (1) CN107392865B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305214B (en) * 2017-12-28 2019-09-17 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
CN108280058A (en) * 2018-01-02 2018-07-13 中国科学院自动化研究所 Relation extraction method and apparatus based on intensified learning
CN108364262A (en) * 2018-01-11 2018-08-03 深圳大学 A kind of restored method of blurred picture, device, equipment and storage medium
CN108510451B (en) * 2018-02-09 2021-02-12 杭州雄迈集成电路技术股份有限公司 Method for reconstructing license plate based on double-layer convolutional neural network
CN108830801A (en) * 2018-05-10 2018-11-16 湖南丹尼尔智能科技有限公司 A kind of deep learning image recovery method of automatic identification vague category identifier
CN110858279A (en) * 2018-08-22 2020-03-03 格力电器(武汉)有限公司 Food material identification method and device
CN109886891B (en) * 2019-02-15 2022-01-11 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
CN112200226B (en) * 2020-09-27 2021-11-05 北京达佳互联信息技术有限公司 Image processing method based on reinforcement learning, image processing method and related device
CN112634158A (en) * 2020-12-22 2021-04-09 平安普惠企业管理有限公司 Face image recovery method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI419059B (en) * 2010-06-14 2013-12-11 Ind Tech Res Inst Method and system for example-based face hallucination

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680491A (en) * 2015-02-28 2015-06-03 西安交通大学 Non-uniform image motion blur removing method based on deep neural network
CN106127684A (en) * 2016-06-22 2016-11-16 中国科学院自动化研究所 Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Oncel Tuzel et al., "Global-Local Face Upsampling Network", arXiv preprint arXiv:1603.07235, 27 April 2016, pp. 1-23. *
Volodymyr Mnih et al., "Recurrent Models of Visual Attention", Advances in Neural Information Processing Systems, 24 June 2014, pp. 2204-2212. *

Also Published As

Publication number Publication date
CN107392865A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107392865B (en) Restoration method of face image
CN108520503B (en) Face defect image restoration method based on self-encoder and generation countermeasure network
CN108764292B (en) Deep learning image target mapping and positioning method based on weak supervision information
CN109389552B (en) Image super-resolution algorithm based on context-dependent multitask deep learning
CN109166130B (en) Image processing method and image processing device
Li et al. Deep identity-aware transfer of facial attributes
Jiang et al. Single image super-resolution via locally regularized anchored neighborhood regression and nonlocal means
CN107529650B (en) Closed loop detection method and device and computer equipment
Wang et al. Learning super-resolution jointly from external and internal examples
CN107229914B (en) Handwritten digit recognition method based on deep Q learning strategy
Chen et al. Convolutional neural network based DEM super resolution
CN107506761A (en) Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN107862275A (en) Human bodys' response model and its construction method and Human bodys' response method
CN114782694B (en) Unsupervised anomaly detection method, system, device and storage medium
CN111046734B (en) Multi-modal fusion sight line estimation method based on expansion convolution
CN106408550A (en) Improved self-adaptive multi-dictionary learning image super-resolution reconstruction method
Zhang et al. Single image dehazing via dual-path recurrent network
CN112861718A (en) Lightweight feature fusion crowd counting method and system
CN115410030A (en) Target detection method, target detection device, computer equipment and storage medium
CN113822790A (en) Image processing method, device, equipment and computer readable storage medium
CN112598604A (en) Blind face restoration method and system
CN114581918A (en) Text recognition model training method and device
Uddin et al. A perceptually inspired new blind image denoising method using L1 and perceptual loss
CN116977674A (en) Image matching method, related device, storage medium and program product
CN115358952A (en) Image enhancement method, system, equipment and storage medium based on meta-learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220310

Address after: 511455 No. 106, Fengze East Road, Nansha District, Guangzhou City, Guangdong Province (self compiled Building 1) x1301-b013290

Patentee after: Guangzhou wisdom Technology (Guangzhou) Co.,Ltd.

Address before: 510000 210-5, Chuangqi Building 1, 63 Chuangqi Road, Shilou Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU SHENYU INFORMATION TECHNOLOGY CO.,LTD.
