CN112488922B - Super-resolution processing method based on optical flow interpolation - Google Patents

Super-resolution processing method based on optical flow interpolation

Info

Publication number
CN112488922B
CN112488922B · CN202011431240.9A
Authority
CN
China
Prior art keywords
optical flow
image
time
interpolation
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011431240.9A
Other languages
Chinese (zh)
Other versions
CN112488922A (en)
Inventor
陈建兵
吴丹
孙伟
田鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yijing Zhilian Suzhou Technology Co ltd
Original Assignee
Yijing Zhilian Suzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yijing Zhilian Suzhou Technology Co ltd
Priority to CN202011431240.9A
Publication of CN112488922A
Application granted
Publication of CN112488922B
Legal status: Active

Classifications

    All classifications below fall under G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL, except the final Y02T tag.
    • G06T 3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (under G06T 3/00 Geometric image transformations in the plane of the image; G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting)
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/10016 Video; image sequence (image acquisition modality)
    • G06T 2207/20081 Training; learning (special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; image merging (under G06T 2207/20212 Image combination)
    • Y02T 10/40 Engine management systems (Y GENERAL TAGGING → Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION → Y02T 10/10 Internal combustion engine [ICE] based vehicles)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Television Systems (AREA)

Abstract

The invention discloses a super-resolution processing method based on optical flow interpolation. The two input images are warped to a specific time step and then adaptively fused to generate an intermediate image, with motion interpretation and occlusion reasoning jointly modeled in a single end-to-end trainable network.

Description

Super-resolution processing method based on optical flow interpolation
Technical Field
The invention relates to the technical field of image processing, in particular to a super-resolution processing method based on optical flow interpolation.
Background
In daily life, video shot with a smartphone records many fine details that move too quickly for the naked eye to see clearly, so users must rely on a slow-playback function. In the prior art there are three main ways to raise the frame rate of a video: shooting the motion video with a high-speed camera; recording the video at the phone's standard frame rate and converting it; or processing the video with computer-vision algorithms. A high-speed camera is a device that can capture moving images with an exposure shorter than 1/1000 second or at a frame rate above 250 frames per second; in the phone-based scheme, standard-frame-rate video is converted to a higher frame rate by video-processing software; the computer-based scheme mainly uses video-interpolation algorithms to achieve smooth view transitions.
However, each existing scheme has its own defects and limitations. The high-speed-camera scheme suffers from high cost, heavy and poorly portable equipment, and a large storage-space requirement;
the mobile-phone scheme places heavy demands on the phone's memory, consumes considerable power during shooting, and the captured video can only be played back after frame-rate conversion;
and existing video-interpolation methods cannot be used directly to generate video at an arbitrary high frame rate.
Disclosure of Invention
The invention provides a super-resolution processing method based on optical flow interpolation, which can effectively solve the problems described in the background art.
In order to achieve the above purpose, the present invention provides the following technical solution: a super-resolution processing method based on optical flow interpolation, specifically comprising the following steps:
S1, given two consecutive frames, performing optical flow interpolation at any moment in any time step between the two frames;
S2, calculating the bidirectional optical flow between the input images using a U-Net architecture;
S3, linearly combining the flows at each time step to approximate the intermediate bidirectional optical flow;
S4, warping and linearly fusing the two input images to form each intermediate frame so as to synthesize an intermediate frame image.
Based on the above technical solution, preferably, in step S1, the aim of video frame interpolation is to generate intermediate frames that form a spatially and temporally coherent video sequence; an end-to-end convolutional neural network for variable-length multi-frame video interpolation is proposed, in which motion interpretation and occlusion reasoning are jointly modeled;
in step S2, the U-Net neural network model is a fully convolutional network consisting of an encoder and a decoder. In each part we adopt a structure of two convolution layers followed by one smooth ReLU layer; 6 such levels are used, and each level ends with an average-pooling layer of stride 2 to reduce the feature dimension, realizing the optical-flow computation and optical-flow interpolation networks.
Based on the above technical solution, preferably, in step S3, the approximate intermediate bidirectional optical flow works well only in locally smooth regions and produces artifacts around motion boundaries; to address this shortcoming, we use another U-Net to refine the approximate flow and predict soft visibility maps;
in step S4, the visibility maps are applied to the warped images before the intermediate frame image is synthesized, thereby excluding the contribution of occluded pixels to the interpolated intermediate frame and avoiding artifacts; the learned network parameters are independent of time, which makes it possible to generate as many intermediate frames as needed.
Based on the above technical solution, preferably, synthesizing the intermediate frame image: given the images I_0 and I_1 at the two input times and the intermediate time T = t ∈ (0, 1) that we want to predict, the goal is to predict the image frame Î_t at the intermediate time T = t. The most straightforward way is to train a direct predictor that maps (I_0, I_1, t) to Î_t. However, to predict every pixel value, such a neural network model must learn not only the motion patterns of the video content but also how to represent the contents of both images; given the rich color space of RGB images, this approach has difficulty generating a high-definition intermediate image. Following the research progress of single-frame intermediate interpolation methods, we instead fuse the two input images warped to time t to obtain the image at the intermediate time t:
suppose F_t→0 and F_t→1 are the optical flows from the image I_t to I_0 and from I_t to I_1, respectively; once these two flows are obtained, the image at the intermediate time t can be synthesized with the following formula:
Î_t = α_0 ⊙ g(I_0, F_t→0) + (1 − α_0) ⊙ g(I_1, F_t→1)    (1)
where g(·,·) is a backward warping function, which can be implemented with bilinear interpolation and is differentiable; the parameter α_0 controls the mixing ratio of the two images, its value depending on temporal consistency and spatial consistency; ⊙ denotes pixel-wise multiplication, which lets the algorithm attend to the image content. In terms of temporal consistency, the closer the time T = t is to T = 0, the greater the contribution of I_0 to Î_t.
Based on the above technical solution, preferably, in video frame interpolation an important law holds: if a pixel p is visible at time T = t, then it is visible at least at time T = 0 or at time T = 1. To handle the resulting occlusion blur in video frame interpolation, we introduce the concept of a visibility map;
assume that the visibility maps at time T = 0 and time T = 1 are V_t←0 and V_t←1, respectively, where V_t←0(p) ∈ [0, 1] indicates whether pixel p remains visible from time 0 to time t (the value 0 means completely invisible). Integrating the pixel visibility into the image-frame generation process yields the following formula:
Î_t = (1/Z) ⊙ ((1 − t) V_t←0 ⊙ g(I_0, F_t→0) + t V_t←1 ⊙ g(I_1, F_t→1))    (2)
where the normalization (regularization) parameter Z = (1 − t) V_t←0 + t V_t←1.
Based on the above technical solution, preferably, optical flow interpolation at any time: since the intermediate frame image I_t is not an input image, it is difficult to compute the optical flows F_t→0 and F_t→1 directly. To solve this problem, we can use the optical flows F_0→1 and F_1→0 between the two input images to generate the intermediate optical flows F_t→0 and F_t→1.
Based on the above technical solution, preferably, interpolation smoothing: to reduce the poor image-synthesis quality caused by 'artifact' phenomena at motion boundaries, the initial estimated result is refined by model learning. On the basis of the hierarchical optical-flow prediction method, an optical-flow interpolation prediction sub-network is designed; the input of the network comprises the two input images I_0 and I_1, the optical flows F_0→1 and F_1→0 between the input images, the flow predictions F̂_t→0 and F̂_t→1, and the two warped prediction results g(I_0, F̂_t→0) and g(I_1, F̂_t→1); the output is the optimized intermediate optical-flow fields F_t→1 and F_t→0.
Compared with the prior art, the invention has the following beneficial effects: it realizes end-to-end variable-length multi-frame video interpolation based on a convolutional neural network model, raising the frame rate of a motion video through video interpolation while allowing interpolation at any time step between two frames, so that video shot on a mobile phone can be played back in high definition at slow speed;
in addition, the missing frames in the video are predicted and filled in by a deep neural network, producing a continuous slow-playback effect, and occluded pixels in the original video frames can be excluded so that blur artifacts in the generated interpolated intermediate frames are avoided.
Secondly, the invention does not require a high-speed camera: a mobile phone or any camera device can be used to shoot the motion video;
it avoids the contribution of occluded pixels to the interpolated intermediate frame, effectively preventing the 'artifact' problem caused by motion occlusion during video frame interpolation.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of the steps of the process of the present invention;
FIG. 2 is a schematic illustration of mid-frame optical flow estimation in accordance with the present invention;
FIG. 3 is a schematic representation of the prediction of the optical flow interpolation result of the present invention;
FIG. 4 is a schematic view showing the effect of the visual image of the present invention;
fig. 5 is a schematic diagram of the U-Net network architecture of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Examples: as shown in fig. 1, the present invention provides a technical solution, a super-resolution processing method based on optical flow interpolation, which specifically includes the following steps:
s1, giving two continuous frames, and performing optical flow interpolation at any moment in any time step between the two continuous frames;
s2, calculating a bidirectional optical flow between input images by using a U-Net architecture;
s3, linearly combining the streams at each time step to approximate an intermediate bidirectional optical flow;
s4, the two input images are distorted and linearly fused to form each intermediate frame so as to synthesize an intermediate frame image.
Based on the above technical solution, in S1, the aim of video frame interpolation is to generate intermediate frames that form a spatially and temporally coherent video sequence; an end-to-end convolutional neural network for variable-length multi-frame video interpolation is proposed, in which motion interpretation and occlusion reasoning are jointly modeled;
as shown in FIG. 5, in S2, the U-Net neural network model is a full convolution network, which consists of a decoder and an encoder, and in each part, we adopt a structure consisting of two layers of convolution and one layer of smooth ReLU, 6 layers are adopted, and in each layer, an average value pooling layer with a step length of 2 is adopted at last to reduce the characteristic dimension and realize optical flow calculation and optical flow interpolation network.
Based on the above technical solution, in S3, the approximate intermediate bidirectional optical flow works well only in locally smooth regions and produces artifacts around motion boundaries; to address this shortcoming, we use another U-Net to refine the approximate flow and predict soft visibility maps;
in S4, the visibility maps are applied to the warped images before the intermediate frame image is synthesized, thereby excluding the contribution of occluded pixels to the interpolated intermediate frame and avoiding artifacts; the learned network parameters are independent of time, which makes it possible to generate as many intermediate frames as needed.
Based on the above technical scheme, synthesizing the intermediate frame image: given the images I_0 and I_1 at the two input times and the intermediate time T = t ∈ (0, 1) that we want to predict, the goal is to predict the image frame Î_t at the intermediate time T = t. The most straightforward way is to train a direct predictor that maps (I_0, I_1, t) to Î_t. However, to predict every pixel value, such a neural network model must learn not only the motion patterns of the video content but also how to represent the contents of both images; given the rich color space of RGB images, this approach has difficulty generating a high-definition intermediate image. Following the research progress of single-frame intermediate interpolation methods, we instead fuse the two input images warped to time t to obtain the image at the intermediate time t:
suppose F_t→0 and F_t→1 are the optical flows from the image I_t to I_0 and from I_t to I_1, respectively; once these two flows are obtained, the image at the intermediate time t can be synthesized with the following formula:
Î_t = α_0 ⊙ g(I_0, F_t→0) + (1 − α_0) ⊙ g(I_1, F_t→1)    (1)
where g(·,·) is a backward warping function, which can be implemented with bilinear interpolation and is differentiable; the parameter α_0 controls the mixing ratio of the two images, its value depending on temporal consistency and spatial consistency; ⊙ denotes pixel-wise multiplication, which lets the algorithm attend to the image content. In terms of temporal consistency, the closer the time T = t is to T = 0, the greater the contribution of I_0 to Î_t.
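For concreteness, one possible differentiable implementation of the backward warping function g(·,·) using PyTorch's grid_sample is sketched below; the helper name backward_warp and the border padding mode are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """g(img, flow): sample img at locations displaced by flow.

    img:  (B, C, H, W) source frame (I0 or I1)
    flow: (B, 2, H, W) flow from the target time t to the source frame,
          in pixels (flow[:, 0] = horizontal, flow[:, 1] = vertical).
    """
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing='ij')
    grid_x = xs.unsqueeze(0) + flow[:, 0]   # where each output pixel reads from
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # normalize coordinates to [-1, 1] as required by grid_sample
    grid = torch.stack((2.0 * grid_x / (w - 1) - 1.0,
                        2.0 * grid_y / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(img, grid, mode='bilinear',
                         padding_mode='border', align_corners=True)
```

With this helper, formula (1) reads Î_t = α_0 * backward_warp(I_0, F_t→0) + (1 − α_0) * backward_warp(I_1, F_t→1) for a scalar or per-pixel α_0.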
Based on the above technical scheme, in video frame interpolation an important law holds: if a pixel p is visible at time T = t, then it is visible at least at time T = 0 or at time T = 1. To handle the resulting occlusion blur in video frame interpolation, we introduce the concept of a visibility map;
assume that the visibility maps at time T = 0 and time T = 1 are V_t←0 and V_t←1, respectively, where V_t←0(p) ∈ [0, 1] indicates whether pixel p remains visible from time 0 to time t (the value 0 means completely invisible). Integrating the pixel visibility into the image-frame generation process yields the following formula:
Î_t = (1/Z) ⊙ ((1 − t) V_t←0 ⊙ g(I_0, F_t→0) + t V_t←1 ⊙ g(I_1, F_t→1))    (2)
where the normalization (regularization) parameter Z = (1 − t) V_t←0 + t V_t←1.
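A short sketch of the visibility-weighted fusion in formula (2), reusing the backward_warp helper above; the eps term is an added assumption for numerical stability where Z would otherwise vanish:

```python
def fuse_intermediate_frame(I0, I1, F_t0, F_t1, V_t0, V_t1, t, eps=1e-8):
    # formula (2): visibility-weighted fusion of the two warped input frames;
    # V_t0, V_t1 are (B, 1, H, W) soft visibility maps with values in [0, 1]
    w0 = (1.0 - t) * V_t0
    w1 = t * V_t1
    Z = w0 + w1 + eps                  # normalization parameter Z
    return (w0 * backward_warp(I0, F_t0) + w1 * backward_warp(I1, F_t1)) / Z
```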
Based on the technical scheme, optical flow interpolation at any time: since the intermediate frame image I_t is not an input image, it is difficult to compute the optical flows F_t→0 and F_t→1 directly. To solve this problem, we can use the optical flows F_0→1 and F_1→0 between the two input images to generate the intermediate optical flows F_t→0 and F_t→1.
As shown in fig. 2, each column represents a time instant and each point represents a specific pixel. For point p in the figure, we want to generate its optical flow toward time T = 1; one possible way is to borrow the optical-flow information at times T = 0 and T = 1 at the corresponding position;
thus F t→1 (p) can be calculated from the following formula:
here we calculate the optical flow of the two input images from the same or opposite directions, and similar to the temporal consistency in RGB image generation, we formulate the optical flow of the two input images in two directions to predict the optical flow of the intermediate frame as follows:
based on the technical scheme, interpolation smoothing: the problem of poor image synthesis effect caused by 'artifact' phenomenon on a motion boundary is reduced, an initial estimated result is perfected by using a model learning method, an optical flow frame insertion prediction sub-network is designed on the basis of a hierarchical optical flow prediction method, and the input of the network comprises two input images I 0 And I 1 Optical flow F between input images 0→1 And F 1→0 Optical flow predictionAnd->And two integrated optical flow predictors +.>Andoutput optimized intermediate optical flow field F t→1 And F t→0
As shown in fig. 3, in the example result (T = 0.5) the whole picture moves to the left while the motorcycle also moves to the left relative to the picture; the last row shows the optimizing effect of our optical-flow interpolation model on the motion boundary: the whiter a pixel in the figure, the better the optimization effect;
the visible graph is very effective in dealing with the blur problem, so we predict two visible graphs V simultaneously using the light flow model t←0 ,V t←1 The following constraint relation is satisfied between the two:
V t←0 =1-V t←1 (6)
in practice, V t←0 (p) =0 means V t←1 (p) =1, i.e. pixel p is occluded at time t=0, but released at time t=1, because few pixels are occluded at time t=t, while also occluded at time t=0, t=1, when a visible map is used, when pixel p is at I 0 ,I 1 When both are visible, the information of the two images is fused;
as shown in fig. 4, t=0.5, from time t=0 to time t=1, the player's arm moves downward in the figure, so that the area on the upper right of the player's arm is visible at time t=0, and also visible at time T;
if the upper right region at time t=1 is not visible at time T, the image of the fourth row in fig. 3 better reflects this point, V t←0 Near the middle armWhite area indicates I 0 Are generated by the pixel pairs in the imageThe contribution is larger.
To perform optical flow interpolation, we first need to compute the bidirectional optical flow between the two input images. Building on recent breakthroughs in deep-learning-based optical-flow computation, we train an optical-flow convolutional network that receives the two input images and simultaneously estimates the forward optical flow F_0→1 and the backward optical flow F_1→0. To implement the optical-flow computation and optical-flow interpolation networks, we employ the U-Net neural network model.
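Putting the pieces together, the sketch below shows one way steps S1 to S4 could be chained at inference time, reusing the helpers defined above. The refinement network's output layout (residual flows plus a single visibility channel, with V_t←1 derived from constraint (6)) is an assumption of this sketch:

```python
import torch

def interpolate_frame(flow_net, refine_net, I0, I1, t):
    # S2: bidirectional optical flow between the two input images
    flows = flow_net(torch.cat([I0, I1], dim=1))
    F01, F10 = flows[:, 0:2], flows[:, 2:4]

    # S3: linear combination approximating the intermediate flows (formula 5)
    F_t0, F_t1 = approximate_intermediate_flows(F01, F10, t)
    g0, g1 = backward_warp(I0, F_t0), backward_warp(I1, F_t1)

    # interpolation smoothing: refine the flows and predict visibility
    out = refine_net(torch.cat([I0, I1, F01, F10, F_t0, F_t1, g0, g1], dim=1))
    F_t0 = F_t0 + out[:, 0:2]           # residual refinement of F_t->0
    F_t1 = F_t1 + out[:, 2:4]           # residual refinement of F_t->1
    V_t0 = torch.sigmoid(out[:, 4:5])   # soft visibility map V_t<-0
    V_t1 = 1.0 - V_t0                   # constraint (6)

    # S4: warp and fuse to synthesize the intermediate frame (formula 2)
    return fuse_intermediate_frame(I0, I1, F_t0, F_t1, V_t0, V_t1, t)

# e.g. with the UNet sketch above:
# flow_net   = UNet(in_ch=6,  out_ch=4)
# refine_net = UNet(in_ch=20, out_ch=5)   # 3+3+2+2+2+2+3+3 input channels
```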
The working principle and usage flow of the invention are as described above.
finally, it should be noted that: the foregoing is merely a preferred example of the present invention, and the present invention is not limited thereto, but it is to be understood that modifications and equivalents of some of the technical features described in the foregoing embodiments may be made by those skilled in the art, although the present invention has been described in detail with reference to the foregoing embodiments. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A super-resolution processing method based on optical flow interpolation, characterized in that the method specifically comprises the following steps:
S1, given two consecutive frames, performing optical flow interpolation at any moment in any time step between the two frames;
S2, calculating the bidirectional optical flow between the input images using a U-Net architecture;
S3, linearly combining the flows at each time step to approximate the intermediate bidirectional optical flow;
S4, warping and linearly fusing the two input images to form each intermediate frame so as to synthesize an intermediate frame image;
the composite intermediate frame image: to I 0 ,I 1 Two-moment image and weIntermediate instant T e (0, 1) to be predicted, the goal being to predict image frames of intermediate instant t=tThe most straightforward way is to train a direct predictor +.>In order to predict each pixel value, the neural network model of each pixel in the image frame not only needs to learn the motion mode of a video character, but also needs to learn how to express two image contents, and because of the rich color space of RGB images, the mode is difficult to generate a high-definition intermediate image, and the research progress of a single-frame intermediate interpolation method is used for providing a method for fusing two input images at the moment to obtain an intermediate t moment image:
suppose F t→0 And F is equal to t→1 Respectively input image I t To I 0 Optical flow of (2) and input image I t To I 1 When obtaining these two optical flows, we can synthesize the image at the intermediate time t, with the following formula:
where g (·, ·) is a backward deformation function, which can be implemented using bi-directional interpolation, and is also differentiable, parameter α 0 Controlling the ratio of the two images, the size of which depends on the time sequence consistency and the space consistency, wherein, the weight of which is that the multiplication is pixel by pixel, the attention of the algorithm to the image content is realized, and the closer the time T=t and the time T=0 are in the aspect of the time sequence consistency, the more closely the time T=t and the time T=0 are, the I 0 For a pair ofThe greater the contribution of (c).
2. The super-resolution processing method based on optical flow interpolation according to claim 1, characterized in that: in step S1, the aim of video frame interpolation is to generate intermediate frames that form a spatially and temporally coherent video sequence; an end-to-end convolutional neural network for variable-length multi-frame video interpolation is provided, in which motion interpretation and occlusion reasoning are jointly modeled;
in step S2, the U-Net neural network model is a fully convolutional network consisting of an encoder and a decoder; in each part a structure of two convolution layers followed by one smooth ReLU layer is adopted; 6 such levels are used, and each level ends with an average-pooling layer of stride 2 to reduce the feature dimension, realizing the optical-flow computation and optical-flow interpolation networks.
3. The super-resolution processing method based on optical flow interpolation according to claim 1, characterized in that: in step S3, the approximate intermediate bidirectional optical flow works well only in locally smooth regions and produces artifacts around motion boundaries; to address this shortcoming, another U-Net is used to refine the approximate flow and predict soft visibility maps;
in step S4, the visibility maps are applied to the warped images before the intermediate frame image is synthesized, thereby excluding the contribution of occluded pixels to the interpolated intermediate frame and avoiding artifacts; the learned network parameters are independent of time, which makes it possible to generate as many intermediate frames as needed.
4. The super-resolution processing method based on optical flow interpolation according to claim 1, characterized in that: in video frame interpolation an important law holds: if a pixel p is visible at time T = t, then it is visible at least at time T = 0 or at time T = 1; to handle the resulting occlusion blur in video frame interpolation, the concept of a visibility map is introduced;
assume that the visibility maps at time T = 0 and time T = 1 are V_t←0 and V_t←1, respectively, where V_t←0(p) ∈ [0, 1] indicates whether pixel p remains visible from time 0 to time t (the value 0 means completely invisible); integrating the pixel visibility into the image-frame generation process yields the following formula:
Î_t = (1/Z) ⊙ ((1 − t) V_t←0 ⊙ g(I_0, F_t→0) + t V_t←1 ⊙ g(I_1, F_t→1))    (2)
where the normalization (regularization) parameter Z = (1 − t) V_t←0 + t V_t←1.
5. The super-resolution processing method based on optical flow interpolation according to claim 1, characterized in that: the optical flow interpolation at any time: since the intermediate frame image I_t is not an input image, it is difficult to compute the optical flows F_t→0 and F_t→1 directly; to solve this problem, the optical flows F_0→1 and F_1→0 between the two input images can be used to generate the intermediate optical flows F_t→0 and F_t→1.
6. The super-resolution processing method based on optical flow interpolation according to claim 1, characterized in that: the interpolation smoothing: to reduce the poor image-synthesis quality caused by 'artifact' phenomena at motion boundaries, the initial estimated result is refined by model learning; on the basis of the hierarchical optical-flow prediction method, an optical-flow interpolation prediction sub-network is designed, the input of which comprises the two input images I_0 and I_1, the optical flows F_0→1 and F_1→0 between the input images, the flow predictions F̂_t→0 and F̂_t→1, and the two warped prediction results g(I_0, F̂_t→0) and g(I_1, F̂_t→1); the output is the optimized intermediate optical-flow fields F_t→1 and F_t→0.
CN202011431240.9A 2020-12-08 2020-12-08 Super-resolution processing method based on optical flow interpolation Active CN112488922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011431240.9A CN112488922B (en) 2020-12-08 2020-12-08 Super-resolution processing method based on optical flow interpolation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011431240.9A CN112488922B (en) 2020-12-08 2020-12-08 Super-resolution processing method based on optical flow interpolation

Publications (2)

Publication Number Publication Date
CN112488922A CN112488922A (en) 2021-03-12
CN112488922B (en) 2023-09-12

Family

ID=74941024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011431240.9A Active CN112488922B (en) 2020-12-08 2020-12-08 Super-resolution processing method based on optical flow interpolation

Country Status (1)

Country Link
CN (1) CN112488922B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362230B (en) * 2021-07-12 2024-04-05 昆明理工大学 Method for realizing super-resolution of countercurrent model image based on wavelet transformation
CN114422852A (en) * 2021-12-16 2022-04-29 阿里巴巴(中国)有限公司 Video playing method, storage medium, processor and system
CN114494023B (en) * 2022-04-06 2022-07-29 电子科技大学 Video super-resolution implementation method based on motion compensation and sparse enhancement
CN116033183A (en) * 2022-12-21 2023-04-28 上海哔哩哔哩科技有限公司 Video frame inserting method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109379550A (en) * 2018-09-12 2019-02-22 上海交通大学 Video frame rate upconversion method and system based on convolutional neural networks
CN110351511A (en) * 2019-06-28 2019-10-18 上海交通大学 Video frame rate upconversion system and method based on scene depth estimation
CN110913218A (en) * 2019-11-29 2020-03-24 合肥图鸭信息科技有限公司 Video frame prediction method and device and terminal equipment
CN113342797A (en) * 2021-06-30 2021-09-03 亿景智联(北京)科技有限公司 Geographic community portrait missing prediction method based on Monte Carlo method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9230303B2 (en) * 2013-04-16 2016-01-05 The United States Of America, As Represented By The Secretary Of The Navy Multi-frame super-resolution of image sequence with arbitrary motion patterns

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109379550A (en) * 2018-09-12 2019-02-22 上海交通大学 Video frame rate upconversion method and system based on convolutional neural networks
CN110351511A (en) * 2019-06-28 2019-10-18 上海交通大学 Video frame rate upconversion system and method based on scene depth estimation
CN110913218A (en) * 2019-11-29 2020-03-24 合肥图鸭信息科技有限公司 Video frame prediction method and device and terminal equipment
CN113342797A (en) * 2021-06-30 2021-09-03 亿景智联(北京)科技有限公司 Geographic community portrait missing prediction method based on Monte Carlo method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kanaev A. V. et al., "Multi-frame super-resolution algorithm for complex motion patterns," Optics Express, vol. 21, no. 17, pp. 19850-66 *

Also Published As

Publication number Publication date
CN112488922A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN112488922B (en) Super-resolution processing method based on optical flow interpolation
US11699217B2 (en) Generating gaze corrected images using bidirectionally trained network
Jiang et al. Super slomo: High quality estimation of multiple intermediate frames for video interpolation
WO2020037965A1 (en) Method for multi-motion flow deep convolutional network model for video prediction
CN102158712B (en) Multi-viewpoint video signal coding method based on vision
CN103402098B (en) A kind of video frame interpolation method based on image interpolation
CN111008938B (en) Real-time multi-frame bit enhancement method based on content and continuity guidance
JP2009303236A (en) Adaptive image stability
KR102242343B1 (en) A Fast High Quality Video Frame Rate Conversion Method and Apparatus
CN114494050A (en) Self-supervision video deblurring and image frame inserting method based on event camera
CN114339030A (en) Network live broadcast video image stabilization method based on self-adaptive separable convolution
Isikdogan et al. Eye contact correction using deep neural networks
CN111696034A (en) Image processing method and device and electronic equipment
Choi et al. Self-supervised real-time video stabilization
CN117768774A (en) Image processor, image processing method, photographing device and electronic device
CN117576179A (en) Mine image monocular depth estimation method with multi-scale detail characteristic enhancement
JP4633595B2 (en) Movie generation device, movie generation method, and program
CN116033183A (en) Video frame inserting method and device
Zhao et al. Multiframe joint enhancement for early interlaced videos
TW536918B (en) Method to increase the temporal resolution of continuous image series
WO2014115522A1 (en) Frame rate converter, frame rate conversion method, and display device and image-capturing device provided with frame rate converter
CN114463213A (en) Video processing method, video processing device, terminal and storage medium
Choi et al. Frame Rate Up-Conversion for HDR Video Using Dual Exposure Camera
WO2023004727A1 (en) Video processing method, video processing device, and electronic device
CN112927175B (en) Single viewpoint synthesis method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 3015, 3/F, 6 Chuangye Road, Shangdi Information Industry Base, Haidian District, Beijing 100085

Applicant after: Yijing Zhilian (Suzhou) Technology Co.,Ltd.

Address before: 3015, 3/F, 6 Chuangye Road, Shangdi Information Industry Base, Haidian District, Beijing 100085

Applicant before: Yijing Zhilian (Beijing) Technology Co.,Ltd.

GR01 Patent grant