CN112950470A - Video super-resolution reconstruction method and system based on time domain feature fusion - Google Patents

Video super-resolution reconstruction method and system based on time domain feature fusion

Info

Publication number
CN112950470A
Authority
CN
China
Prior art keywords
sequence
feature
features
resolution
super
Prior art date
Legal status
Granted
Application number
CN202110217175.8A
Other languages
Chinese (zh)
Other versions
CN112950470B (en)
Inventor
徐君 (Jun Xu)
许刚 (Gang Xu)
程明明 (Ming-Ming Cheng)
Current Assignee
Nankai University
Original Assignee
Nankai University
Priority date
Filing date
Publication date
Application filed by Nankai University
Priority to CN202110217175.8A
Publication of CN112950470A
Application granted
Publication of CN112950470B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Abstract

The invention provides a video super-resolution reconstruction method and system based on time domain feature fusion. The method acquires the image sequence of a video and extracts image features to obtain an initial feature sequence; performs local time domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence, fusing each non-boundary feature with its two nearest neighboring features and each of the two boundary features with its single nearest neighboring feature; inputs the local feature sequence into a bidirectional deformable convolutional long short-term memory network, supplementing each feature in the local feature sequence with information from the global time domain to obtain a global feature sequence; and extracts super-resolution features from the global feature sequence, adds them correspondingly to the initial feature sequence, extracts high-resolution upsampling features from the summed sequence, and obtains the final high-resolution reconstructed image sequence through a convolutional neural network.

Description

Video super-resolution reconstruction method and system based on time domain feature fusion
Technical Field
The invention belongs to the field of video super-resolution reconstruction, and particularly relates to a video super-resolution reconstruction method and system based on time domain feature fusion.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Due to the rapid development of liquid crystal display (LCD) and light-emitting diode (LED) technology in recent years, displays on the market can now play ultra-high-definition television video at 4K UHD (3840 × 2160) or 8K (7680 × 4320) resolution. However, most currently available video uses the full-high-definition resolution of 2K FHD (1920 × 1080). To play full-high-definition video on an ultra-high-definition television, its spatial resolution must be raised to the broadcasting standard of ultra-high-definition television. Video super-resolution reconstruction techniques have therefore been proposed to process low-resolution video into high-resolution video, alleviating the current shortage of high-resolution content. Video super-resolution reconstruction has been widely applied in multimedia devices such as televisions and mobile phones; mobile-phone manufacturers, for example, use it to improve imaging clarity.
Traditional video super-resolution methods generally rely on interpolation-based motion compensation or frequency-domain discrete Fourier transform techniques to raise video resolution. However, both techniques apply only to image reconstruction under translational motion and cannot handle more complex motion scenes.
With the rapid development of deep learning, super-resolution reconstruction methods based on convolutional neural networks can reconstruct images accurately and robustly by learning pattern information from massive data. Although the industry can already raise the spatial resolution of video frame by frame through single-image super-resolution, this is accompanied by an unstable "flicker" phenomenon caused by fluctuations in reconstruction quality across frames. Video super-resolution methods that exploit an information fusion mechanism in the time domain have therefore been developed; for example, EDVR proposed by Xintao Wang et al. and RBPN (Recurrent Back-Projection Network) proposed by Muhammad Haris et al. improve reconstruction through local feature fusion.
However, the inventors found that these earlier methods fuse features only within a local temporal window and ignore global features; this shortcoming causes loss of image detail and other problems, leading to poor super-resolution reconstruction.
Disclosure of Invention
To solve at least one technical problem in the background art, the invention provides a video super-resolution reconstruction method and system based on time domain feature fusion that effectively screen local features and effectively fuse global features, so that the local and global features complement and integrate with each other and the super-resolution reconstruction effect is improved.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a video super-resolution reconstruction method based on time domain feature fusion.
A video super-resolution reconstruction method based on time domain feature fusion comprises the following steps:
acquiring an image sequence of a video, and extracting image sequence features to obtain an initial feature sequence;
performing local time domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence: each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature;
inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network, and supplementing each feature in the local feature sequence with information from the global time domain to obtain a global feature sequence;
extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
A second aspect of the invention provides a video super-resolution reconstruction system based on time domain feature fusion.
A video super-resolution reconstruction system based on time domain feature fusion comprises:
the initial feature extraction module is used for acquiring an image sequence of a video and extracting image sequence features to obtain an initial feature sequence;
the local feature fusion module is used for performing local time domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence: each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature;
the global feature fusion module is used for inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network and supplementing each feature in the local feature sequence with information from the global time domain to obtain a global feature sequence;
and the super-resolution reconstruction module is used for extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for super-resolution reconstruction of video based on temporal feature fusion as set forth above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the video super-resolution reconstruction method based on temporal feature fusion as described above when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
the invention establishes a competitive feature fusion mechanism for the features of the local time domain, thereby screening out the features which have larger influence on the final super-resolution reconstruction effect, and then, the effective feature information which runs through the whole video sequence is transmitted and supplemented in the global range through the global time domain features, thereby effectively improving the reconstruction effect of each frame.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a flow chart of a video super-resolution reconstruction method based on temporal domain feature fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a local time domain feature fusion process according to an embodiment of the present invention;
fig. 3 is a schematic process diagram of global temporal feature fusion according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
Aiming at the problem that existing convolutional-neural-network-based video super-resolution reconstruction techniques cannot effectively utilize local and global information in a video, resulting in poor reconstruction quality, this embodiment provides a video super-resolution reconstruction method based on time domain feature fusion: with effective features screened from the local time domain, complementary features from the global time domain are used to improve the super-resolution reconstruction of each frame in the video sequence.
Referring to fig. 1, the present embodiment provides a video super-resolution reconstruction method based on temporal feature fusion, which specifically includes the following steps:
s101: and acquiring an image sequence of the video, and extracting the image sequence characteristics to obtain an initial characteristic sequence.
In a specific implementation, for an image sequence {I_1, I_2, ..., I_n} of a video at a set resolution, composed of n frames of images, each frame first undergoes a first feature-extraction step through a 3 × 3 convolution and a Leaky ReLU activation function; the features in the extracted feature sequence are then input one by one into 5 residual blocks for further feature extraction, giving the initial feature sequence {F_1, F_2, ..., F_n}.
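As a concrete illustration, the following PyTorch sketch shows one way this initial feature extractor could be realized. It is a minimal sketch, not the patent's reference implementation: the channel width of 64, the Leaky ReLU negative slope of 0.1, and the internal layout of the residual block are assumptions; the patent fixes only the 3 × 3 kernel, the activation type, and the number of residual blocks.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv -> Leaky ReLU -> conv with an identity skip connection.

    The internal layout is an assumption; the patent says only
    "residual block" without specifying its structure.
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class InitialFeatureExtractor(nn.Module):
    """Step S101: 3x3 conv + Leaky ReLU, followed by 5 residual blocks."""
    def __init__(self, in_channels: int = 3, channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(5)])

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (n, 3, H, W) -- the n video frames stacked along the batch axis
        return self.blocks(self.head(frames))
```

The ResidualBlock defined here is reused in the reconstruction sketch under step S104.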
S102: Performing local time domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence: each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature.
In a specific implementation, the initial feature sequence is first padded at both ends with copies of its two boundary features, giving the padded sequence {F_0, F_1, ..., F_n, F_{n+1}} with F_0 = F_1 and F_{n+1} = F_n. Every three consecutive features of this sequence form a temporal window consisting of a center frame and its two neighboring frames, namely the windows (F_0, F_1, F_2), (F_1, F_2, F_3), ..., (F_{n-1}, F_n, F_{n+1}). Within each window, the center frame is concatenated with each of the two neighboring frames along the channel dimension, and two groups of "3 × 3 convolution + Leaky ReLU activation function" produce two offset features. Using these two offset features, the two neighboring frames are deformably sampled through deformable convolution; the two sampled neighboring frames and the center frame are then concatenated along the channel dimension, and the concatenated features pass through four groups of "1 × 1 convolution + Leaky ReLU activation function", which screens out the effective information in the two neighboring frames and supplements the features of the center frame. Applying this processing frame by frame yields the local feature sequence {L_1, L_2, ..., L_n} of effective features screened in the local time domain; see Fig. 2 for details.
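A sketch of one local fusion window follows, using torchvision's DeformConv2d for the deformable sampling. The depth of the offset branches and the channel widths of the four 1 × 1 fusion groups are assumptions; the patent fixes only the number of convolution groups, the kernel sizes, and the activation type.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class LocalFusionWindow(nn.Module):
    """Step S102: fuse one temporal window (prev, center, next) into L_t."""
    def __init__(self, c: int = 64):
        super().__init__()
        # Two groups of "3x3 conv + Leaky ReLU" per neighbor predict the
        # offset features; 18 = 2 * 3 * 3 offset channels for a 3x3
        # deformable kernel with a single offset group.
        def offset_branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(2 * c, c, kernel_size=3, padding=1),
                nn.LeakyReLU(0.1, inplace=True),
                nn.Conv2d(c, 18, kernel_size=3, padding=1),
                nn.LeakyReLU(0.1, inplace=True),
            )
        self.offset_prev = offset_branch()
        self.offset_next = offset_branch()
        self.deform_prev = DeformConv2d(c, c, kernel_size=3, padding=1)
        self.deform_next = DeformConv2d(c, c, kernel_size=3, padding=1)
        # Four groups of "1x1 conv + Leaky ReLU" screen the effective
        # information; the tapering 3c -> 2c -> c -> c -> c is an assumption.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * c, 2 * c, kernel_size=1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(2 * c, c, kernel_size=1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(c, c, kernel_size=1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(c, c, kernel_size=1), nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, prev, center, nxt):  # each tensor: (B, c, H, W)
        # Concatenate the center frame with each neighbor to predict offsets.
        off_p = self.offset_prev(torch.cat([center, prev], dim=1))
        off_n = self.offset_next(torch.cat([center, nxt], dim=1))
        # Deformably sample each neighboring frame with its offsets.
        samp_p = self.deform_prev(prev, off_p)
        samp_n = self.deform_next(nxt, off_n)
        # Concatenate sampled neighbors with the center frame and fuse.
        return self.fuse(torch.cat([samp_p, center, samp_n], dim=1))
```

Over the whole sequence, the module would be applied window by window, i.e. L_t = fusion(F_{t-1}, F_t, F_{t+1}) for t = 1, ..., n with the boundary copies F_0 = F_1 and F_{n+1} = F_n.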
S103: Inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network, and supplementing each feature in the local feature sequence with information from the global time domain to obtain a global feature sequence.
The local feature sequence {L_1, L_2, ..., L_n} obtained in step S102 is input into a bidirectional deformable convolutional long short-term memory network (BDConvLSTM), which performs global feature supplementation on each feature in the sequence, giving the global feature sequence {G_1, G_2, ..., G_n}; see Fig. 3 for details.
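The sketch below shows a plain bidirectional ConvLSTM over the local feature sequence. It is a simplification: where the patent's BDConvLSTM applies deformable convolution inside the recurrence, this version uses ordinary convolutions, and the 1 × 1 merge of the two directions is an assumption.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (ordinary convolutions; the patent's
    cell additionally aligns states with deformable convolution)."""
    def __init__(self, c: int = 64, k: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(2 * c, 4 * c, kernel_size=k, padding=k // 2)

    def forward(self, x, h, s):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        s = torch.sigmoid(f) * s + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(s)
        return h, s

class BidirectionalConvLSTM(nn.Module):
    """Step S103: propagate information over the whole sequence, both
    past-to-future and future-to-past, then merge the two directions."""
    def __init__(self, c: int = 64):
        super().__init__()
        self.fwd = ConvLSTMCell(c)
        self.bwd = ConvLSTMCell(c)
        self.merge = nn.Conv2d(2 * c, c, kernel_size=1)

    def _run(self, cell, seq):
        h = seq[0].new_zeros(seq[0].shape)
        s = seq[0].new_zeros(seq[0].shape)
        out = []
        for x in seq:
            h, s = cell(x, h, s)
            out.append(h)
        return out

    def forward(self, seq):  # seq: list of n tensors, each (B, c, H, W)
        fwd = self._run(self.fwd, seq)                   # past -> future
        bwd = self._run(self.bwd, list(reversed(seq)))   # future -> past
        bwd.reverse()
        # Each output G_t sees information from the whole sequence.
        return [self.merge(torch.cat([f, b], dim=1)) for f, b in zip(fwd, bwd)]
```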
S104: Extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
The features in the global feature sequence obtained in step S103 are input one by one into 40 residual blocks for super-resolution feature extraction, giving high-resolution features; these are added to the initial features obtained in step S101. The sums then pass through 2 groups of "3 × 3 convolution + 2× Pixel Shuffle upsampling + Leaky ReLU activation function" to obtain the high-resolution upsampling features, and finally one group of "3 × 3 convolution + Leaky ReLU activation function + 3 × 3 convolution" produces the final high-resolution reconstructed image sequence {H_1, H_2, ..., H_n}.
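A sketch of this reconstruction head follows; it reuses the ResidualBlock class from the S101 sketch above. The two 2× Pixel Shuffle stages give an overall 4× upscaling, and the channel width of 64 is again an assumption.

```python
import torch.nn as nn
# Assumes the ResidualBlock class from the S101 sketch above.

class ReconstructionHead(nn.Module):
    """Step S104: 40 residual blocks, skip-add of the initial features,
    two "3x3 conv + 2x Pixel Shuffle + Leaky ReLU" stages, then
    "3x3 conv + Leaky ReLU + 3x3 conv"."""
    def __init__(self, c: int = 64, out_channels: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualBlock(c) for _ in range(40)])
        up = []
        for _ in range(2):  # two 2x stages -> 4x total upscaling
            up += [
                nn.Conv2d(c, 4 * c, kernel_size=3, padding=1),
                nn.PixelShuffle(2),  # (B, 4c, H, W) -> (B, c, 2H, 2W)
                nn.LeakyReLU(0.1, inplace=True),
            ]
        self.upsample = nn.Sequential(*up)
        self.tail = nn.Sequential(
            nn.Conv2d(c, c, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(c, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, global_feat, init_feat):
        # Super-resolution features from the global features, plus the
        # initial features carried over from step S101.
        x = self.blocks(global_feat) + init_feat
        return self.tail(self.upsample(x))
```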
Example two
The embodiment provides a video super-resolution reconstruction system based on time domain feature fusion, which specifically comprises the following modules:
the initial feature extraction module is used for acquiring an image sequence of a video and extracting image sequence features to obtain an initial feature sequence;
the local feature fusion module is used for performing local time domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence: each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature;
the global feature fusion module is used for inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network and supplementing each feature in the local feature sequence with information from the global time domain to obtain a global feature sequence;
and the super-resolution reconstruction module is used for extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
It should be noted here that the modules of the video super-resolution reconstruction system based on time domain feature fusion in this embodiment correspond one to one to the steps of the method in the first embodiment; their specific implementation is the same and is not repeated here.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the video super-resolution reconstruction method based on temporal feature fusion as described in the first embodiment above.
Example four
The embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the steps in the video super-resolution reconstruction method based on temporal domain feature fusion as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A video super-resolution reconstruction method based on time domain feature fusion is characterized by comprising the following steps:
acquiring an image sequence of a video, and extracting image sequence features to obtain an initial feature sequence;
performing local temporal domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence, wherein each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature;
inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network, and supplementing each feature in the local feature sequence with information from the global temporal domain to obtain a global feature sequence;
extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
2. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 1, wherein the process of extracting image sequence features is as follows:
and performing first feature extraction on each frame of image through a convolution of 3 multiplied by 3 and a Leaky ReLU activation function, and then inputting the features in the extracted feature sequence into a set number of residual blocks one by one to perform further feature extraction to obtain an initial feature sequence.
3. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 1, wherein in the local temporal domain feature fusion process, features identical to the two boundary features are respectively appended at the two ends of the initial feature sequence, and for each temporal window composed of three features, the center frame in the middle is concatenated with each of the two neighboring frames along the channel dimension.
4. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 3, wherein two offset features are obtained by passing the concatenated features through two groups of "3 × 3 convolution + Leaky ReLU activation function"; the two neighboring frames are then deformably sampled through deformable convolution using the two offset features, the two deformably sampled neighboring frames and the center frame are concatenated along the channel dimension, and the concatenated features are passed through four groups of "1 × 1 convolution + Leaky ReLU activation function", thereby screening out the effective information in the two neighboring frames and supplementing the features of the center frame.
5. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 1, wherein the process of extracting the super-resolution feature of the global feature sequence comprises:
and inputting the features in the global feature sequence into a set number of residual blocks one by one to perform super-resolution feature extraction to obtain high-resolution features.
6. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 1, wherein the process of extracting the high resolution up-sampling feature of the sequence after feature addition is as follows:
and (3) after the features are added, the sequence is subjected to 2 groups of '3 × 3 convolution +2 multiplying power Pixel Shuffle upsampling + Leaky ReLU activation function' to obtain the high-resolution upsampling features.
7. The video super-resolution reconstruction method based on temporal domain feature fusion of claim 1, wherein the convolutional neural network is a group of "3 × 3 convolution + Leaky ReLU activation function + 3 × 3 convolution".
8. A video super-resolution reconstruction system based on time domain feature fusion is characterized by comprising:
the initial feature extraction module is used for acquiring an image sequence of a video and extracting image sequence features to obtain an initial feature sequence;
the local feature fusion module is used for performing local temporal domain feature fusion on the features in the initial feature sequence to obtain a local feature sequence, wherein each non-boundary feature in the initial feature sequence is fused with its two nearest neighboring features, and each of the two boundary features is fused with its single nearest neighboring feature;
the global feature fusion module is used for inputting the local feature sequence into a bidirectional deformable convolutional long short-term memory network and supplementing each feature in the local feature sequence with information from the global temporal domain to obtain a global feature sequence;
and the super-resolution reconstruction module is used for extracting super-resolution features from the global feature sequence, adding them correspondingly to the initial feature sequence, extracting high-resolution upsampling features from the summed sequence, and finally obtaining the final high-resolution reconstructed image sequence through a convolutional neural network.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for reconstructing super-resolution video based on temporal domain feature fusion according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the video super-resolution reconstruction method based on temporal domain feature fusion according to any one of claims 1-7 when executing the program.
CN202110217175.8A 2021-02-26 2021-02-26 Video super-resolution reconstruction method and system based on time domain feature fusion Active CN112950470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110217175.8A CN112950470B (en) 2021-02-26 2021-02-26 Video super-resolution reconstruction method and system based on time domain feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110217175.8A CN112950470B (en) 2021-02-26 2021-02-26 Video super-resolution reconstruction method and system based on time domain feature fusion

Publications (2)

Publication Number Publication Date
CN112950470A 2021-06-11
CN112950470B (en) 2022-07-15

Family

ID=76246438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110217175.8A Active CN112950470B (en) 2021-02-26 2021-02-26 Video super-resolution reconstruction method and system based on time domain feature fusion

Country Status (1)

Country Link
CN (1) CN112950470B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949863A (en) * 2021-10-21 2022-01-18 上海复达兴智能技术有限公司 Experience quality evaluation method, system and equipment for real-time audio and video communication
CN116452741A (en) * 2023-04-20 2023-07-18 北京百度网讯科技有限公司 Object reconstruction method, object reconstruction model training method, device and equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150104116A1 (en) * 2012-03-05 2015-04-16 Thomason Licensing Method and apparatus for performing super-resolution
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
US20180232857A1 (en) * 2015-11-04 2018-08-16 Peking University Shenzhen Graduate School Method and device for super-resolution image reconstruction based on dictionary matching
CN110969577A (en) * 2019-11-29 2020-04-07 北京交通大学 Video super-resolution reconstruction method based on deep double attention network
CN111524068A (en) * 2020-04-14 2020-08-11 长安大学 Variable-length input super-resolution video reconstruction method based on deep learning
CN111681166A (en) * 2020-06-02 2020-09-18 重庆理工大学 Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit
US20200311871A1 (en) * 2017-12-20 2020-10-01 Huawei Technologies Co., Ltd. Image reconstruction method and device
CN111860147A (en) * 2020-06-11 2020-10-30 北京市威富安防科技有限公司 Pedestrian re-identification model optimization processing method and device and computer equipment
CN112070676A (en) * 2020-09-10 2020-12-11 东北大学秦皇岛分校 Image super-resolution reconstruction method of two-channel multi-sensing convolutional neural network
CN112102163A (en) * 2020-08-07 2020-12-18 南京航空航天大学 Continuous multi-frame image super-resolution reconstruction method based on multi-scale motion compensation framework and recursive learning
US20210004935A1 (en) * 2018-04-04 2021-01-07 Huawei Technologies Co., Ltd. Image Super-Resolution Method and Apparatus
CN112215755A (en) * 2020-10-28 2021-01-12 南京信息工程大学 Image super-resolution reconstruction method based on back projection attention network

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150104116A1 (en) * 2012-03-05 2015-04-16 Thomason Licensing Method and apparatus for performing super-resolution
US20180232857A1 (en) * 2015-11-04 2018-08-16 Peking University Shenzhen Graduate School Method and device for super-resolution image reconstruction based on dictionary matching
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
US20200311871A1 (en) * 2017-12-20 2020-10-01 Huawei Technologies Co., Ltd. Image reconstruction method and device
US20210004935A1 (en) * 2018-04-04 2021-01-07 Huawei Technologies Co., Ltd. Image Super-Resolution Method and Apparatus
CN110969577A (en) * 2019-11-29 2020-04-07 北京交通大学 Video super-resolution reconstruction method based on deep double attention network
CN111524068A (en) * 2020-04-14 2020-08-11 长安大学 Variable-length input super-resolution video reconstruction method based on deep learning
CN111681166A (en) * 2020-06-02 2020-09-18 重庆理工大学 Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit
CN111860147A (en) * 2020-06-11 2020-10-30 北京市威富安防科技有限公司 Pedestrian re-identification model optimization processing method and device and computer equipment
CN112102163A (en) * 2020-08-07 2020-12-18 南京航空航天大学 Continuous multi-frame image super-resolution reconstruction method based on multi-scale motion compensation framework and recursive learning
CN112070676A (en) * 2020-09-10 2020-12-11 东北大学秦皇岛分校 Image super-resolution reconstruction method of two-channel multi-sensing convolutional neural network
CN112215755A (en) * 2020-10-28 2021-01-12 南京信息工程大学 Image super-resolution reconstruction method based on back projection attention network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIAOYU XIANG: "Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution", IEEE/CVF Conference on Computer Vision and Pattern Recognition
YAWEI LI; XIAOFENG LI; ZHIZHONG FU; TINGTING NIU; KEYU LONG: "Spatiotemporal super-resolution for multiview video in transform domain", 2016 Visual Communications and Image Processing (VCIP)
LIU CUN; LI YUANXIANG; ZHOU YONGJUN; LUO JIANHUA: "Video image super-resolution reconstruction method based on convolutional neural networks", Application Research of Computers (计算机应用研究)
LI JINHANG ET AL.: "A video super-resolution reconstruction algorithm based on spatio-temporal correlation", Computer and Digital Engineering (计算机与数字工程)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949863A (en) * 2021-10-21 2022-01-18 上海复达兴智能技术有限公司 Experience quality evaluation method, system and equipment for real-time audio and video communication
CN116452741A (en) * 2023-04-20 2023-07-18 北京百度网讯科技有限公司 Object reconstruction method, object reconstruction model training method, device and equipment
CN116452741B (en) * 2023-04-20 2024-03-01 北京百度网讯科技有限公司 Object reconstruction method, object reconstruction model training method, device and equipment

Also Published As

Publication number Publication date
CN112950470B (en) 2022-07-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant