CN112102167A - Image super-resolution method based on visual perception - Google Patents

Image super-resolution method based on visual perception

Info

Publication number
CN112102167A
CN112102167A
Authority
CN
China
Prior art keywords
layer
convolution
resolution
image
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010895035.1A
Other languages
Chinese (zh)
Other versions
CN112102167B (en)
Inventor
管声启 (Guan Shengqi)
常江 (Chang Jiang)
师红宇 (Shi Hongyu)
倪奕棋 (Ni Yiqi)
胡璐萍 (Hu Luping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hangyu Digital Vision Technology Co., Ltd.
Original Assignee
Shaoxing Keqiao District West Textile Industry Innovation Research Institute
Xi'an Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Keqiao District West Textile Industry Innovation Research Institute and Xi'an Polytechnic University
Priority to CN202010895035.1A
Publication of CN112102167A
Application granted
Publication of CN112102167B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4053 - Super resolution, i.e. output image resolution higher than sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods


Abstract

The invention discloses an image super-resolution method based on visual perception, which comprises the following steps. Step 1: establish an image data set comprising a training set and a test set, the training set containing high-resolution images and low-resolution images in one-to-one correspondence. Step 2: construct a generative network model comprising a generator and a discriminator. Step 3: input the low-resolution images and high-resolution images of the training set into the generator and the discriminator respectively for training, and update the parameters of the generator and the discriminator with a first loss function and a second loss function respectively, obtaining an image super-resolution generation network model. Step 4: input the test set into the image super-resolution generation network model to generate high-resolution images. On the basis of a neural network, a visual attention mechanism and visual saliency are integrated to construct a super-resolution model for instrument panel images; the model has the advantage of visual perception and can increase the resolution of instrument panel images while keeping details clear.

Description

Image super-resolution method based on visual perception
Technical Field
The invention belongs to the technical field of image processing, and relates to an image super-resolution method based on visual perception.
Background
The instrument panel is an important display device for reflecting the working condition of equipment, ranging from ammeters and water meters to automobile and machine-tool instruments. Instrument inspection is vital for maintaining equipment health, but manual inspection is costly and inefficient, whereas machine vision enables automatic instrument panel inspection. With a suitable light source and a high-resolution camera shooting at close range, a high-resolution instrument panel image can be obtained for visual inspection. However, when shooting from a distance, or in other situations where close-range shooting is impossible, the acquired image often contains useless background information, and the instrument panel region cropped from such an image has low resolution, which degrades subsequent detection accuracy. Conventional algorithms such as interpolation inevitably blur details when enlarging an image.
Disclosure of Invention
The invention aims to provide an image super-resolution method based on visual perception, which solves the prior-art problem that instrument panel images cropped from distant shots have low resolution.
The invention adopts the technical scheme that an image super-resolution method based on visual perception comprises the following steps:
step 1, establishing an image data set, wherein the image data set comprises a training set and a testing set, and the training set comprises high-resolution images and low-resolution images which correspond to each other one by one;
step 2, constructing a generated network model, wherein the generated network model comprises a generator and a discriminator;
step 3, inputting the low-resolution images and the high-resolution images in the training set into a generator and a discriminator respectively for training, and updating parameters of the generator and the discriminator respectively by using a first loss function and a second loss function to obtain an image super-resolution generation network model;
and step 4, inputting the test set into the image super-resolution generation network model to generate high-resolution images.
The invention is also characterized in that:
the structure of the generator is, in sequence: two convolutional attention layers, an upsampling layer A, a convolutional attention layer A, an upsampling layer A, a convolutional attention layer A, and a convolutional layer A; each convolutional attention layer comprises a convolutional layer and a visual attention layer, and the stride of these convolutional layers and of convolutional layer A is 1.
The structure of the discriminator comprises: eleven convolutional attention layers, a first convolutional layer B, a pooling layer B, a second convolutional layer B, and a third convolutional layer B, arranged in sequence; each convolutional attention layer comprises a convolutional layer and a visual attention layer, and the stride of the convolutional layer in the first convolutional attention layer is 2; in the second through eleventh convolutional attention layers, the strides of adjacent convolutional layers alternate between 1 and 2; the stride of the first, second, and third convolutional layers B is 1.
The specific operations of the visual attention layer are as follows:
the input feature map is convolved to obtain feature F1, feature G1, and feature H1; feature F1 is reshaped and transposed to obtain feature F2, feature G1 is reshaped to obtain feature G2, and feature H1 is reshaped to obtain feature H2; feature F2 is multiplied by feature G2 and normalized to obtain a visual attention map; feature H2 is multiplied by the visual attention map and reshaped to obtain the output feature map.
The first loss function is:
L_G = k_1·L_saliency + k_2·L_percep + k_3·L_Ra + L_HR (1)
in the above formula, k_1, k_2, k_3 are user-defined coefficients, L_percep is the perceptual loss, L_Ra is the relativistic loss of the generator, and L_HR is the L1 loss between the generated high-resolution image and the real high-resolution image;
L_saliency is the saliency loss:
L_saliency = L1Loss(HC(HR_gen), HC(HR_real)) (2);
in the above formula, HC denotes the saliency image obtained by the HC algorithm.
The invention has the beneficial effects that:
the image super-resolution method based on visual perception provided by the invention integrates, on the basis of a neural network, a visual attention mechanism and visual saliency to construct an instrument panel image super-resolution model. The model has the advantage of visual perception, can increase the resolution of instrument panel images while keeping details clear, and lays a foundation for high-precision remote instrument panel detection.
Drawings
FIG. 1 is a schematic structural diagram of a generator in an image super-resolution method based on visual perception according to the present invention;
FIG. 2 is a flow chart of a visual attention operation in the image super-resolution method based on visual perception of the present invention;
FIG. 3 is a training flow chart of an image super-resolution generation network model in the image super-resolution method based on visual perception.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
An image super-resolution method based on visual perception comprises the following steps:
step 1, establishing an image data set, wherein the image data set comprises a training set and a testing set, and the training set comprises high-resolution images and low-resolution images which correspond to each other one by one;
specifically, the training set includes 1000 high-resolution dashboard images with a resolution of 512 × 512 and 1000 low-resolution dashboard images with a resolution of 128 × 128, where the 128 × 128 images are obtained by downscaling the 512 × 512 images by a factor of 4 with a bicubic interpolation algorithm. The test set contains 100 low-resolution dashboard images with a resolution of 128 × 128, all cropped from large images containing background.
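For illustration, the following Python sketch shows one way such training pairs might be produced with OpenCV's bicubic resize; the directory layout and file handling are assumptions, not part of the patent.

    import os

    import cv2

    def build_training_pairs(hr_dir, lr_dir, scale=4):
        """Create the low-resolution counterpart of each 512x512 high-resolution
        dashboard image by bicubic downsampling (512 -> 128 at scale 4)."""
        os.makedirs(lr_dir, exist_ok=True)
        for name in os.listdir(hr_dir):
            hr = cv2.imread(os.path.join(hr_dir, name))      # BGR array, 512x512x3
            if hr is None:                                   # skip unreadable files
                continue
            h, w = hr.shape[:2]
            lr = cv2.resize(hr, (w // scale, h // scale),
                            interpolation=cv2.INTER_CUBIC)   # bicubic interpolation
            cv2.imwrite(os.path.join(lr_dir, name), lr)

    build_training_pairs("train/hr", "train/lr")             # hypothetical directory names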
Step 2, constructing a generated network model, wherein the generated network model comprises a generator and a discriminator;
as shown in fig. 1, the structure of the generator is, in sequence: two convolutional attention layers, an upsampling layer A, a convolutional attention layer A, an upsampling layer A, a convolutional attention layer A, and a convolutional layer A; each convolutional attention layer comprises a convolutional layer and a visual attention layer, and the stride of these convolutional layers and of convolutional layer A is 1.
Further, the structure of the generator is, in sequence: a first convolutional attention layer A, a second convolutional attention layer A, an upsampling layer A, a third convolutional attention layer A, an upsampling layer A, a fourth convolutional attention layer A, and a convolutional layer A. The first, second, third, and fourth convolutional attention layers A each comprise a convolutional layer and a visual attention layer.
The stride of all these convolutional layers is 1, and the convolution kernel size of the convolutional layers in the first through fourth convolutional attention layers A and of convolutional layer A is 3 × 3. The generator takes a 128 × 128 × channels image as input and outputs a 512 × 512 × channels image, where channels denotes the number of image channels; the dashboard images used in this embodiment are three-channel color images.
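As a rough illustration, the following PyTorch sketch stacks the generator layers in the stated order. The internal channel width, the activation function, and the upsampling mode are not specified in the text and are assumptions here; the visual attention layer is stubbed with nn.Identity, and a full sketch of it follows the attention-operation description below.

    import torch.nn as nn

    VisualAttention = nn.Identity   # stub; a full attention sketch is given further below

    def conv_attention(in_ch, out_ch):
        """Convolutional attention layer A: a 3x3 stride-1 convolution followed
        by a visual attention layer (the PReLU activation is an assumption)."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.PReLU(),
            VisualAttention(),
        )

    class Generator(nn.Module):
        """128x128xchannels in -> 512x512xchannels out via two 2x upsampling stages."""
        def __init__(self, channels=3, width=64):  # width is an assumption
            super().__init__()
            self.body = nn.Sequential(
                conv_attention(channels, width),              # first convolutional attention layer A
                conv_attention(width, width),                 # second convolutional attention layer A
                nn.Upsample(scale_factor=2, mode="nearest"),  # upsampling layer A
                conv_attention(width, width),                 # third convolutional attention layer A
                nn.Upsample(scale_factor=2, mode="nearest"),  # upsampling layer A
                conv_attention(width, width),                 # fourth convolutional attention layer A
                nn.Conv2d(width, channels, 3, stride=1, padding=1),  # convolutional layer A
            )

        def forward(self, x):
            return self.body(x)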
The structure of the discriminator is, in sequence: eleven convolutional attention layers, a first convolutional layer B, a pooling layer B, a second convolutional layer B, and a third convolutional layer B; each convolutional attention layer comprises a convolutional layer and a visual attention layer, where the stride of the convolutional layer in the first convolutional attention layer is 2, and in the second through eleventh convolutional attention layers the strides of adjacent convolutional layers alternate between 1 and 2.
Further, as shown in Table 1, the structure of the discriminator comprises: a first through eleventh convolutional attention layer B, a first convolutional layer B, a pooling layer B, a second convolutional layer B, and a third convolutional layer B. Each of the eleven convolutional attention layers B comprises a convolutional layer and a visual attention layer. The stride of the convolutional layer in the first convolutional attention layer B is 2. For the remaining layers, either the convolutional layers in the second, fourth, sixth, eighth, and tenth convolutional attention layers B have stride 2 and the rest have stride 1, or the convolutional layers in the third, fifth, seventh, ninth, and eleventh convolutional attention layers B have stride 2 and the rest have stride 1; both arrangements are possible. The stride of the first, second, and third convolutional layers B is 1.
The convolution kernel size of the convolutional layers in the first through fifth convolutional attention layers B is 3 × 3, the convolution kernel size of the convolutional layers in the sixth through eleventh convolutional attention layers B is 5 × 5, and the convolution kernel size of the first, second, and third convolutional layers B is 1 × 1.
Table 1. Discriminator network architecture
(Table 1 is reproduced only as an image in the original publication; the tabulated layer parameters are not recoverable from the text.)
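A corresponding PyTorch sketch of this discriminator is given below. Since Table 1 survives only as an image, the channel widths are assumptions (a doubling scheme capped at 512), the stride pattern follows the second of the two arrangements described above, the activation function is assumed, and the visual attention layer is stubbed with nn.Identity (a full sketch of it follows the attention description below).

    import torch.nn as nn

    VisualAttention = nn.Identity   # stub; a full attention sketch is given further below

    def conv_attn_b(in_ch, out_ch, k, s):
        """Convolutional attention layer B: convolution plus visual attention
        (the LeakyReLU activation is an assumption)."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=s, padding=k // 2),
            nn.LeakyReLU(0.2, inplace=True),
            VisualAttention(),
        )

    class Discriminator(nn.Module):
        """Eleven convolutional attention layers B, then the 1x1 convolution
        and pooling head; channel widths are assumptions."""
        def __init__(self, channels=3, width=64):
            super().__init__()
            kernels = [3] * 5 + [5] * 6          # layers 1-5: 3x3, layers 6-11: 5x5
            strides = [2] + [1, 2] * 5           # second of the two stride patterns in the text
            layers, in_ch = [], channels
            for i in range(11):
                out_ch = min(width * (2 ** (i // 2)), 512)
                layers.append(conv_attn_b(in_ch, out_ch, kernels[i], strides[i]))
                in_ch = out_ch
            self.features = nn.Sequential(*layers)
            self.head = nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 1, stride=1),   # first convolutional layer B
                nn.AdaptiveAvgPool2d(1),                # pooling layer B
                nn.Conv2d(in_ch, in_ch, 1, stride=1),   # second convolutional layer B
                nn.Conv2d(in_ch, 1, 1, stride=1),       # third convolutional layer B
            )

        def forward(self, x):
            return self.head(self.features(x)).flatten(1)   # one score per image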
The visual attention layers in the convolutional attention layers A of the generator and in the first through eleventh convolutional attention layers B of the discriminator all perform the following specific operations:
as shown in fig. 2, the input feature map is convolved to obtain feature F1, feature G1, and feature H1; feature F1 is reshaped and transposed to obtain feature F2 of shape [W × H, C/8], feature G1 is reshaped to obtain feature G2 of shape [C/8, W × H], and feature H1 is reshaped to obtain feature H2 of shape [C, W × H]; feature F2 is multiplied by feature G2 and normalized to obtain a visual attention map of shape [W × H, W × H]; feature H2 is multiplied by the visual attention map and reshaped to obtain an output feature map of shape [C, W, H].
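A minimal PyTorch sketch of this attention operation, matching the shapes above, is given below. The text says only "convolve" and "normalize", so the 1x1 convolutions producing F1, G1, H1 and the softmax normalization are assumptions borrowed from common self-attention practice.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisualAttention(nn.Module):
        """Self-attention over spatial positions, following the shapes in the
        text: F2 [WH, C/8], G2 [C/8, WH], H2 [C, WH], attention map [WH, WH]."""

        def __init__(self, channels):
            super().__init__()
            self.f = nn.Conv2d(channels, channels // 8, kernel_size=1)  # -> feature F1
            self.g = nn.Conv2d(channels, channels // 8, kernel_size=1)  # -> feature G1
            self.h = nn.Conv2d(channels, channels, kernel_size=1)       # -> feature H1

        def forward(self, x):
            b, c, hgt, wdt = x.shape
            f2 = self.f(x).view(b, c // 8, hgt * wdt).permute(0, 2, 1)  # reshape + transpose: [B, WH, C/8]
            g2 = self.g(x).view(b, c // 8, hgt * wdt)                   # reshape: [B, C/8, WH]
            h2 = self.h(x).view(b, c, hgt * wdt)                        # reshape: [B, C, WH]
            attn = F.softmax(torch.bmm(f2, g2), dim=-1)                 # multiply + normalize: [B, WH, WH]
            out = torch.bmm(h2, attn.permute(0, 2, 1))                  # H2 x attention map: [B, C, WH]
            return out.view(b, c, hgt, wdt)                             # reshape to the output feature map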
Step 3, inputting the low-resolution images and the high-resolution images in the training set into a generator and a discriminator respectively for training, and updating parameters of the generator and the discriminator respectively by using a first loss function and a second loss function to obtain an image super-resolution generation network model;
specifically, as shown in fig. 3:
Step 3.1, extract a batch of low-resolution images and the corresponding high-resolution images from the training set;
Step 3.2, input the low-resolution images into the generator and the high-resolution images into the discriminator, generating high-resolution images;
Step 3.3, calculate the generator loss according to equation (1) and update the generator parameters;
Step 3.4, calculate the discriminator loss according to equation (6) and update the discriminator parameters;
Step 3.5, return to step 3.1 for the next batch until the training set has been traversed;
and step 3.6, return to step 3.2 for the next round of training until the total number of training rounds is reached; a sketch of this training loop follows.
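One way steps 3.1 to 3.6 might be arranged as a PyTorch training loop is sketched below; the callables g_loss_fn and d_loss_fn stand for the first and second loss functions, equations (1) and (6), which are sketched after their definitions further on, and device handling and optimizer settings are omitted as assumptions left to the reader.

    def train(generator, discriminator, loader, g_opt, d_opt,
              g_loss_fn, d_loss_fn, epochs):
        """Steps 3.1-3.6 as a loop; g_loss_fn and d_loss_fn compute the first
        and second loss functions, equations (1) and (6)."""
        for _ in range(epochs):                       # step 3.6: training rounds
            for lr_img, hr_real in loader:            # steps 3.1 / 3.5: batch iteration
                hr_gen = generator(lr_img)            # step 3.2: generate an HR image

                g_opt.zero_grad()                     # step 3.3: update the generator
                g_loss_fn(hr_gen, hr_real, discriminator).backward()
                g_opt.step()

                d_opt.zero_grad()                     # step 3.4: update the discriminator
                d_loss_fn(hr_gen.detach(), hr_real, discriminator).backward()
                d_opt.step()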
The first loss function is:
L_G = k_1·L_saliency + k_2·L_percep + k_3·L_Ra + L_HR (1)
in the above formula, k_1, k_2, k_3 are user-defined coefficients;
L_saliency is the saliency loss:
L_saliency = L1Loss(HC(HR_gen), HC(HR_real)) (2);
L_percep is the perceptual loss:
L_percep = L1Loss(VGG19_54(HR_gen), VGG19_54(HR_real)) (3);
in the above formula, L1Loss denotes the L1-norm loss function, VGG19_54 denotes the feature map extracted from layer 54 after an image is input into the VGG19 network, HR_gen denotes the high-resolution image generated by the generator, and HR_real denotes the real high-resolution image;
L_Ra is the relativistic loss of the generator:
L_Ra = BCELoss(D(HR_gen) - D(HR_real), ONE) (4);
in the above formula, BCELoss denotes the binary cross-entropy loss function, D(HR_gen) denotes the discrimination result obtained by inputting the generator's high-resolution image into the discriminator, D(HR_real) denotes the discrimination result obtained by inputting the real high-resolution image into the discriminator, and ONE denotes an all-ones matrix of the same size as the discriminator's output;
L_HR is the L1 loss between the generated high-resolution image and the real high-resolution image:
L_HR = L1Loss(HR_gen, HR_real) (5).
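A sketch of equation (1) in PyTorch follows, under several stated assumptions: the discriminator is taken to output raw scores, so BCEWithLogitsLoss stands in for the BCELoss of the text; torchvision's vgg19 feature stack is truncated at index 36 as a stand-in for the patent's "layer 54"; the HC saliency computation is passed in as a callable hc, since the patent does not give its implementation; and the default unit weights are placeholders for the user-defined k_1, k_2, k_3.

    import torch
    import torch.nn as nn
    from torchvision.models import vgg19

    l1 = nn.L1Loss()
    bce = nn.BCEWithLogitsLoss()   # stands in for BCELoss, assuming raw discriminator scores

    # Truncated VGG19 as the perceptual feature extractor. Which slice of
    # torchvision's vgg19().features corresponds to the patent's "layer 54"
    # is unclear, so the cut at index 36 is an assumption.
    vgg_features = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
    for p in vgg_features.parameters():
        p.requires_grad = False

    def generator_loss(hr_gen, hr_real, D, hc, k1=1.0, k2=1.0, k3=1.0):
        """Equation (1) as reconstructed above; hc is a callable returning the
        HC saliency map of an image, k1-k3 are the user-defined weights."""
        l_saliency = l1(hc(hr_gen), hc(hr_real))                    # eq. (2)
        l_percep = l1(vgg_features(hr_gen), vgg_features(hr_real))  # eq. (3)
        d_gen, d_real = D(hr_gen), D(hr_real)
        l_ra = bce(d_gen - d_real, torch.ones_like(d_gen))          # eq. (4), target ONE
        l_hr = l1(hr_gen, hr_real)                                  # eq. (5)
        return k1 * l_saliency + k2 * l_percep + k3 * l_ra + l_hr   # eq. (1)

In the training loop sketched earlier, this can be bound to a concrete saliency function with functools.partial(generator_loss, hc=...).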
the second loss function is:
Figure BDA0002658199210000073
Lrealto discriminate loss of a true high resolution image:
Lreal=BCELoss(D(HRreal)-D(HRgen),ONE) (7);
Lgento discriminate loss of the high resolution image generated by the generator:
Lgen=BCELoss(D(HRgen)-D(HRreal),ZERO) (8);
in the above equation, ZERO represents a matrix of all 0's having the same size as the discrimination result of the discriminator.
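Under the same assumption of raw discriminator scores (hence BCEWithLogitsLoss), equations (6) to (8) might be sketched as follows; the plain sum in equation (6) is the reconstruction used above.

    import torch
    import torch.nn as nn

    bce = nn.BCEWithLogitsLoss()   # assumes the discriminator outputs raw scores

    def discriminator_loss(hr_gen, hr_real, D):
        """Equation (6) as reconstructed above: the sum of the real- and
        generated-image discrimination losses, equations (7) and (8)."""
        d_real, d_gen = D(hr_real), D(hr_gen.detach())   # detach so G is not updated here
        l_real = bce(d_real - d_gen, torch.ones_like(d_real))    # eq. (7), target ONE
        l_gen = bce(d_gen - d_real, torch.zeros_like(d_gen))     # eq. (8), target ZERO
        return l_real + l_gen                                    # eq. (6)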
And step 4, input the test set into the image super-resolution generation network model to generate high-resolution images.
Through the above approach, the image super-resolution method based on visual perception disclosed by the invention integrates, on the basis of a neural network, a visual attention mechanism and visual saliency to construct an instrument panel image super-resolution model. The model has the advantage of visual perception, can increase the resolution of instrument panel images while keeping details clear, and lays a foundation for high-precision remote instrument panel detection.

Claims (5)

1. An image super-resolution method based on visual perception is characterized by comprising the following steps:
step 1, establishing an image data set, wherein the image data set comprises a training set and a testing set, and the training set comprises high-resolution images and low-resolution images which correspond to each other one by one;
step 2, constructing a generating network model, wherein the generating network model comprises a generator and a discriminator;
step 3, inputting the low-resolution images and the high-resolution images in the training set into a generator and a discriminator respectively for training, and updating parameters of the generator and the discriminator respectively by using a first loss function and a second loss function to obtain an image super-resolution generation network model;
and step 4, inputting the test set into the image super-resolution generation network model to generate high-resolution images.
2. The image super-resolution method based on visual perception according to claim 1, wherein the structure of the generator is, in sequence: two convolutional attention layers, an upsampling layer A, a convolutional attention layer A, an upsampling layer A, a convolutional attention layer A, and a convolutional layer A; each convolutional attention layer comprises a convolutional layer and a visual attention layer, and the stride of these convolutional layers and of convolutional layer A is 1.
3. The image super-resolution method based on visual perception according to claim 2, wherein the structure of the discriminator comprises: eleven convolutional attention layers, a first convolutional layer B, a pooling layer B, a second convolutional layer B, and a third convolutional layer B, arranged in sequence; each convolutional attention layer comprises a convolutional layer and a visual attention layer, the stride of the convolutional layer in the first convolutional attention layer is 2, in the second through eleventh convolutional attention layers the strides of adjacent convolutional layers alternate between 1 and 2, and the stride of the first, second, and third convolutional layers B is 1.
4. The image super-resolution method based on visual perception according to claim 2 or 3, wherein the specific operations of the visual attention layer are as follows:
the input feature map is convolved to obtain feature F1, feature G1, and feature H1; feature F1 is reshaped and transposed to obtain feature F2, feature G1 is reshaped to obtain feature G2, and feature H1 is reshaped to obtain feature H2; feature F2 is multiplied by feature G2 and normalized to obtain a visual attention map; feature H2 is multiplied by the visual attention map and reshaped to obtain the output feature map.
5. The method for super-resolution of images based on visual perception according to claim 1, wherein the first loss function is:
L_G = k_1·L_saliency + k_2·L_percep + k_3·L_Ra + L_HR (1)
in the above formula, k_1, k_2, k_3 are user-defined coefficients, L_percep is the perceptual loss, L_Ra is the relativistic loss of the generator, and L_HR is the L1 loss between the generated high-resolution image and the real high-resolution image;
L_saliency is the saliency loss:
L_saliency = L1Loss(HC(HR_gen), HC(HR_real)) (2);
in the above formula, HC denotes the saliency image obtained by the HC algorithm.
CN202010895035.1A 2020-08-31 2020-08-31 Image super-resolution method based on visual perception Active CN112102167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010895035.1A CN112102167B (en) 2020-08-31 2020-08-31 Image super-resolution method based on visual perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010895035.1A CN112102167B (en) 2020-08-31 2020-08-31 Image super-resolution method based on visual perception

Publications (2)

Publication Number Publication Date
CN112102167A true CN112102167A (en) 2020-12-18
CN112102167B CN112102167B (en) 2024-04-26

Family

ID=73758511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010895035.1A Active CN112102167B (en) 2020-08-31 2020-08-31 Image super-resolution method based on visual perception

Country Status (1)

Country Link
CN (1) CN112102167B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4310092A1 (en) * 1992-03-25 1993-09-30 Mitsubishi Electric Corp Optical scanning and processing of image data from photo detectors - has outputs coupled to sensitivity controlling neural network arranged in matrix
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
JP2018160863A (en) * 2017-03-23 2018-10-11 京セラドキュメントソリューションズ株式会社 Dither matrix creation method, dither matrix creation device, image processing apparatus, and dither matrix creation program
CN109816593A (en) * 2019-01-18 2019-05-28 大连海事大学 A kind of super-resolution image reconstruction method of the generation confrontation network based on attention mechanism
CN110084741A (en) * 2019-04-26 2019-08-02 衡阳师范学院 Image wind network moving method based on conspicuousness detection and depth convolutional neural networks
CN110222220A (en) * 2019-05-06 2019-09-10 腾讯科技(深圳)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN110660020A (en) * 2019-08-15 2020-01-07 天津中科智能识别产业技术研究院有限公司 Image super-resolution method of countermeasure generation network based on fusion mutual information
CN110837786A (en) * 2019-10-30 2020-02-25 汇纳科技股份有限公司 Density map generation method and device based on spatial channel, electronic terminal and medium
CN111178499A (en) * 2019-12-10 2020-05-19 西安交通大学 Medical image super-resolution method based on generation countermeasure network improvement
CN111340696A (en) * 2020-02-10 2020-06-26 南京理工大学 Convolutional neural network image super-resolution reconstruction method fused with bionic visual mechanism
CN111583109A (en) * 2020-04-23 2020-08-25 华南理工大学 Image super-resolution method based on generation countermeasure network
CN111583115A (en) * 2020-04-30 2020-08-25 西安交通大学 Single image super-resolution reconstruction method and system based on depth attention network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JUNWEI YU et al., "Texture-suppressed Visual Attention Model for Grain Insects Detection", IEEE *
Wei Xiao, "Video Salient Region Extraction Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
Chang Jiang et al., "Strip Steel Defect Classification Based on an Improved Generative Adversarial Network and MobileNetV3", Laser & Optoelectronics Progress, vol. 58, no. 4 *
Hu Xuemin, Tong Xiuchi, Guo Lin, Zhang Ruohan, Kong Li, "End-to-end autonomous driving model based on deep visual attention neural network", Journal of Computer Applications, no. 07 *
Zhao Bo, "Research on Key Technologies of Fine-grained Image Classification, Segmentation, Generation and Retrieval", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240583A (en) * 2021-04-13 2021-08-10 浙江大学 Image super-resolution method based on convolution kernel prediction
CN113240583B (en) * 2021-04-13 2022-09-16 浙江大学 Image super-resolution method based on convolution kernel prediction

Also Published As

Publication number Publication date
CN112102167B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN111275618B (en) Depth map super-resolution reconstruction network construction method based on double-branch perception
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN109509149A (en) A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features
CN111598778A (en) Insulator image super-resolution reconstruction method
CN112184577A (en) Single image defogging method based on multi-scale self-attention generation countermeasure network
CN110930306B (en) Depth map super-resolution reconstruction network construction method based on non-local perception
CN113538246B (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN112634163A (en) Method for removing image motion blur based on improved cycle generation countermeasure network
CN111414988B (en) Remote sensing image super-resolution method based on multi-scale feature self-adaptive fusion network
CN113139904B (en) Image blind super-resolution method and system
CN111833261A (en) Image super-resolution restoration method for generating countermeasure network based on attention
CN113554032A (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
CN114550305B (en) Human body posture estimation method and system based on Transformer
CN112102167A (en) Image super-resolution method based on visual perception
CN115526777A (en) Blind over-separation network establishing method, blind over-separation method and storage medium
CN113516693B (en) Rapid and universal image registration method
CN114626984A (en) Super-resolution reconstruction method for Chinese text image
CN113870327A (en) Medical image registration method based on multi-level deformation field prediction
CN115511705A (en) Image super-resolution reconstruction method based on deformable residual convolution neural network
CN112102388B (en) Method and device for obtaining depth image based on inspection robot monocular image
CN111680640B (en) Vehicle type identification method and system based on domain migration
CN113240584A (en) Multitask gesture picture super-resolution method based on picture edge information
CN115330935A (en) Three-dimensional reconstruction method and system based on deep learning
CN115660979A (en) Attention mechanism-based double-discriminator image restoration method
CN113516115B (en) Dense scene text detection method, device and medium based on multi-dimensional fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240130
Address after: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province
Applicant after: Shenzhen Wanzhida Technology Co.,Ltd.
Country or region after: China
Address before: 710048 No. 19 Jinhua Road, Beilin District, Xi'an, Shaanxi Province
Applicant before: Xi'an Polytechnic University
Country or region before: China
Applicant before: Shaoxing Keqiao District West Textile Industry Innovation Research Institute

TA01 Transfer of patent application right

Effective date of registration: 20240306
Address after: 518000, Building 2, Block 1016, Yaofengtong Industrial Park, No. 1 East Ring Road, Yousong Community, Longhua Street, Longhua District, Shenzhen, Guangdong Province
Applicant after: Shenzhen Hangyu Digital Vision Technology Co.,Ltd.
Country or region after: China
Address before: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province
Applicant before: Shenzhen Wanzhida Technology Co.,Ltd.
Country or region before: China

GR01 Patent grant