CN114066761A - Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection - Google Patents

Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection

Info

Publication number
CN114066761A
Authority
CN
China
Prior art keywords
video
frame
optical flow
foreground
foreground detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111385904.7A
Other languages
Chinese (zh)
Inventor
王海滨
纪文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Genjian Intelligent Technology Co ltd
Original Assignee
Qingdao Genjian Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Genjian Intelligent Technology Co ltd filed Critical Qingdao Genjian Intelligent Technology Co ltd
Priority to CN202111385904.7A priority Critical patent/CN114066761A/en
Publication of CN114066761A publication Critical patent/CN114066761A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and a system for enhancing the frame rate of motion video based on optical flow estimation and foreground detection. The method comprises the following steps: processing the acquired video data to be processed with a foreground detection algorithm to obtain a complete moving-object image; fitting the video data to be processed with an optical flow estimation network to obtain foreground optical flow information; and generating video interpolation frames with a generative adversarial network according to the object image obtained by foreground detection and the foreground optical flow information, then interpolating the video data to be processed with the generated frames. By combining foreground detection and optical flow estimation of the moving object with the foreground motion information of the original video, the generative adversarial network produces realistic interpolated frames, so that the motion characteristics of the moving objects in the video can be simulated and the interpolation looks more natural.

Description

Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection
Technical Field
The present disclosure relates to the field of computer vision related technologies, and in particular, to a method and a system for enhancing a frame rate of a motion video based on optical flow estimation and foreground detection.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Due to the rapid development of multimedia technology, video sources with many different frame rates are on the market, and frame rate conversion between them is inevitably needed. Frame rate up-conversion is a technique for converting low frame rate video to higher frame rate video. Meanwhile, as expectations for video quality keep rising, high-definition video is increasingly common. Research on frame rate up-conversion algorithms suitable for high-definition video is therefore very important.
Video frame interpolation has wide application in computer vision, for example in slow-motion video, novel view synthesis, frame rate up-conversion, and frame recovery in video streams. High frame rate video avoids common artifacts such as temporal jitter and motion blur and is therefore more visually appealing to viewers. Video frame interpolation aims to synthesize intermediate frames between two consecutive video frames, which can be used to raise the frame rate and enhance visual quality; it is challenging because of the complex, large, non-linear motion and illumination variation in the real world. A common prior-art pipeline is: 1) warp the input frames according to an approximated optical flow; 2) fuse and refine the warped frames with a convolutional neural network (CNN). This pipeline suffers from complex preprocessing and complex models.
Disclosure of Invention
The method combines foreground detection and optical flow estimation of moving objects with the foreground motion information of the original video, and uses a generative adversarial network (GAN) to generate realistic interpolated frames for the motion video, so that the motion characteristics of the moving objects can be simulated and the interpolated frames look more natural.
To achieve this purpose, the present disclosure adopts the following technical scheme:
one or more embodiments provide a motion video frame rate enhancement method based on optical flow estimation and foreground detection, comprising the following steps:
processing the acquired video data to be processed by using a foreground detection algorithm to obtain a complete moving target image;
fitting the video data to be processed by using an optical flow estimation network to obtain foreground optical flow information;
and generating a video compensation frame by using a generation countermeasure network according to the target image obtained by foreground detection and the foreground optical flow information, and compensating the frame of the video data to be processed according to the video compensation frame.
One or more embodiments provide a motion video frame rate enhancement system based on optical flow estimation and foreground detection, comprising a video acquisition device and a server;
the video acquisition device is configured to acquire video data to be enhanced and transmit the video data to the server;
the server is configured to perform the steps of the above method.
One or more embodiments provide a motion video frame rate enhancement system based on optical flow estimation and foreground detection, comprising:
a foreground detection module, configured to process the acquired video data to be processed with a foreground detection algorithm to obtain a complete moving-object image;
an optical flow estimation module, configured to fit the video data to be processed with an optical flow estimation network to obtain foreground optical flow information;
and a frame interpolation module, configured to generate video interpolation frames with the generative adversarial network according to the object image obtained by foreground detection and the foreground optical flow information, and to interpolate the original video data with the generated frames.
An electronic device, comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the above method.
Compared with the prior art, the beneficial effects of the present disclosure are:
The method combines foreground detection and optical flow estimation of moving objects with the foreground motion information of the original video, and uses a generative adversarial network (GAN) to generate realistic interpolated frames, so that the motion characteristics of the moving objects can be simulated and the interpolation looks more natural. Because the early-stage foreground detection and the optical flow estimation of the moving object restrict the GAN to interpolating the moving object rather than the whole video frame, the model matches the real goal of frame interpolation, namely making the moving parts smoother; focusing on local rather than global interpolation greatly reduces model complexity. Moreover, only foreground detection and optical flow estimation are applied to the video frames, which simplifies the preprocessing step.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure.
Fig. 1 is a schematic diagram of a motion video frame rate enhancement method according to embodiment 1 of the present disclosure;
fig. 2 is a structural schematic of the generative adversarial network of embodiment 1 of the present disclosure;
fig. 3 is a flowchart of a motion video frame rate enhancement method according to embodiment 1 of the present disclosure.
Detailed description of the embodiments:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and the terms "comprises" and/or "comprising" specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof. In case of no conflict, the embodiments and the features of the embodiments in the present disclosure may be combined with each other. The embodiments will be described in detail below with reference to the accompanying drawings.
Embodiment 1
In one or more embodiments, as shown in fig. 1 to 3, a method for enhancing a frame rate of a moving video based on optical flow estimation and foreground detection includes the following steps:
s1, processing the acquired video data to be processed by a foreground detection algorithm to obtain a complete moving target image;
step S2, fitting the video data to be processed by using an optical flow estimation network to obtain foreground optical flow information;
and step S3, generating a video compensation frame by using the generation countermeasure network according to the target image obtained by foreground detection and the foreground optical flow information, and compensating the original video data according to the video compensation frame.
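Read end to end, steps S1 to S3 compose as in the minimal sketch below. The three stage functions are passed in as placeholders whose names are illustrative assumptions; their concrete forms are sketched later in this embodiment.

```python
def enhance_frame_rate(frames, r_in, r_out, detect_foreground, estimate_flow, generate):
    """frames: decoded frame sequence; returns the frame-interpolated sequence."""
    n = (r_out // r_in) - 1                  # interpolated frames per adjacent pair
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        fg = detect_foreground(prev, nxt)    # step S1: foreground detection
        flow = estimate_flow(prev, nxt, fg)  # step S2: foreground optical flow
        out.extend(generate(fg, flow, n))    # step S3: GAN generates n frames
    out.append(frames[-1])
    return out
```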
In this embodiment, the input frames are preprocessed by combining foreground detection and optical flow estimation of the moving object, and a generative adversarial network generates realistic interpolated frames, thereby enhancing the frame rate of the video. Through the early-stage foreground detection and the optical flow estimation of the moving object, the adversarial network focuses its interpolation on the moving object rather than the whole video frame; this matches the real goal of frame interpolation, namely making the moving parts smoother, and focusing on local rather than global interpolation greatly reduces model complexity. Secondly, compared with existing classical interpolation algorithms such as DAIN (Depth-Aware Video Frame Interpolation), the preprocessing step is simplified. In addition, generating the interpolated frames with a generative adversarial network can simulate the motion characteristics of the moving objects, so that the interpolated frames look more natural.
Further, the method also comprises a step of preprocessing the video data: converting the original video data and the enhanced high frame rate video ground truth into frame sequences.
In this embodiment, a high frame rate video is a video whose frame rate is higher than that of the original video (typically 30 fps).
Assume the low frame rate original video is V_in with frame rate R_in, and the corresponding enhanced high frame rate ground truth video is V_out with frame rate R_out; each frame in the video has size w × h. The original video data and the high frame rate ground truth are converted into frame sequences, giving the original image sequence {f_i, i = 1, …, N_i} (N_i frames in total) and the ground-truth sequence {f_j, j = 1, …, N_o} (N_o frames in total).
The preprocessing of this embodiment converts the video into a frame sequence and performs foreground detection and optical flow estimation on adjacent input frames to meet the needs of the subsequent interpolation network; the preprocessing is simple and fast.
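The patent does not name a decoding tool; the sketch below is one minimal way to perform this video-to-frame-sequence conversion, using OpenCV with illustrative file paths and frame naming.

```python
# Minimal preprocessing sketch: convert a video file into a frame sequence.
# Paths and naming are illustrative assumptions, not specified by the patent.
import cv2
import os

def video_to_frames(video_path: str, out_dir: str) -> int:
    """Decode a video into numbered PNG frames; returns the frame count."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{count:06d}.png"), frame)
        count += 1
    cap.release()
    return count

# Example: N_i frames from the low frame rate input, N_o from the ground truth.
# n_in = video_to_frames("v_in.mp4", "frames_in")
# n_out = video_to_frames("v_out.mp4", "frames_out")
```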
In step S1, a foreground detection algorithm is used to obtain a complete moving-object image, specifically (a code sketch follows the steps):
Step S11: for two adjacent frames in the video data to be processed, model the background pixel distribution with a background modeling method to obtain a background image frame;
Step S12: subtract the background image frame from the current video frame pixel-wise to obtain the image containing the complete moving object in the current frame, thereby obtaining the moving object.
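The patent only specifies "a background modeling method"; the sketch below assumes a Gaussian-mixture background model (OpenCV's MOG2) as one concrete choice for steps S11 and S12.

```python
# Sketch of steps S11-S12 with a Gaussian-mixture background model (MOG2).
# MOG2 is an assumed choice; any background modeling method would fit here.
import cv2

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def extract_foreground(frame):
    """Return the moving-object image: the current frame masked by the
    foreground obtained from background subtraction (step S12)."""
    fg_mask = bg_model.apply(frame)        # step S11: update the background model
    fg_mask = cv2.medianBlur(fg_mask, 5)   # light cleanup of the binary mask
    return cv2.bitwise_and(frame, frame, mask=fg_mask)
```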
In step S2, an optical flow estimation network is fitted to the video data to be processed to obtain foreground optical flow information; the adopted network is PWC-Net, named for its use of Pyramid processing, Warping, and a Cost volume. A sketch of this step follows.
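PWC-Net requires a pretrained deep model; as a self-contained stand-in, the sketch below computes dense flow with OpenCV's Farneback method and masks it to the detected foreground. Swapping in a pretrained PWC-Net would only replace calc_flow; the Farneback choice is an assumption for illustration.

```python
# Foreground optical flow sketch; Farneback flow stands in for PWC-Net.
import cv2
import numpy as np

def calc_flow(prev_bgr, next_bgr):
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense flow: one (dx, dy) vector per pixel, shape (h, w, 2).
    return cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def foreground_flow(prev_bgr, next_bgr, fg_mask):
    flow = calc_flow(prev_bgr, next_bgr)
    return flow * (fg_mask[..., None] > 0)   # zero out background motion
```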
In step S3, the generative adversarial network (GAN) may comprise a generator network and a discriminator network connected in cascade, as shown in fig. 2.
The generator network is used to generate realistic interpolated frames, denoted f′. The number of interpolated frames N is determined by the original frame rate and the required output frame rate: N = (R_out / R_in) − 1.
The discriminator network measures the error between the interpolated frames output by the generator network and the high frame rate ground truth V_out. If the error is within the set range, the interpolation generated by the generator is judged valid; otherwise it is judged invalid. The original video is then interpolated with the valid frames to obtain the high frame rate video.
The error between the interpolated frames and the ground truth V_out is measured by a loss function: the sum of absolute differences between the pixel values of all generated interpolated frames and the pixel values of the corresponding frames in the required enhanced high frame rate video, normalized by the number of pixels:

loss = (1 / (c × w × h)) · Σ_p | f_i′(p) − f_j(p) |

where c × w × h is the number of pixels (c is the number of channels, w and h are the width and height of the image), p ranges over the pixels of the image, f_i′(p) is the pixel value of the generated interpolated frame, and f_j(p) is the corresponding pixel value in the required enhanced high frame rate ground truth video.
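As a concrete rendering of this loss, the PyTorch sketch below computes the normalized sum of absolute pixel differences; the function name and tensor layout are illustrative assumptions, not part of the patent.

```python
import torch

def interpolation_loss(generated: torch.Tensor, truth: torch.Tensor) -> torch.Tensor:
    """generated, truth: (N, c, h, w) stacks of the N interpolated frames and
    the corresponding ground-truth frames; returns the loss defined above."""
    c, h, w = generated.shape[1:]
    return (generated - truth).abs().sum() / (c * w * h)

# Number of frames to interpolate between two input frames:
# r_in, r_out = 60, 120
# n = (r_out // r_in) - 1   # = 1 in the worked example below
```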
Further, the method also comprises training the generative adversarial network (GAN); the steps follow, with a training-loop sketch after them:
Step S31: acquire original video data and preprocess it;
Step S32: process the acquired original video data with the foreground detection algorithm to obtain a complete moving-object image;
Step S33: fit the optical flow estimation network to the original video data to obtain foreground optical flow information;
Step S34: construct a generative adversarial network, and generate video interpolation frames with it according to the object image obtained by foreground detection and the foreground optical flow information;
Step S35: construct an objective function whose goal is to minimize the sum of absolute differences between the pixel values of the generated interpolated frames and the corresponding pixel values in the required enhanced high frame rate video;
Step S36: train the model by reducing the objective function with the backpropagation algorithm and stochastic gradient descent, optimizing and correcting the network weights based on the objective function, and iterate the training repeatedly to obtain the final video interpolation GAN.
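The sketch below renders steps S34 to S36 as a PyTorch training loop: SGD minimizing an adversarial term plus the L1 objective, as the patent states. The Generator and Discriminator stubs and the learning rates are assumed minimal placeholders, not the architecture of fig. 2.

```python
import torch
from torch import nn

class Generator(nn.Module):
    """Assumed minimal generator: foreground image (3 ch) + flow (2 ch) -> frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, fg_img, fg_flow):
        return self.net(torch.cat([fg_img, fg_flow], dim=1))

class Discriminator(nn.Module):
    """Assumed minimal discriminator: one real/fake logit per frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)   # step S36: SGD, assumed lr
opt_d = torch.optim.SGD(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(fg_img, fg_flow, truth):
    fake = G(fg_img, fg_flow)                      # step S34: generate frames

    # Discriminator: push real frames toward 1, generated frames toward 0.
    d_real, d_fake = D(truth), D(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator plus the L1 objective of step S35.
    g_out = D(fake)
    l1 = (fake - truth).abs().mean()
    g_loss = bce(g_out, torch.ones_like(g_out)) + l1
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return float(g_loss)
```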
A specific example follows.
Assume the frame rate R_in of the low frame rate original video is 60 fps and the target high frame rate after interpolation R_out is 120 fps; each frame in the video is 256 × 256.
Preprocess the video data: convert the original video data and the high frame rate ground truth into frame sequences, obtaining an original video image sequence of 1200 frames and a high frame rate video image sequence of 2400 frames;
Step S1: sequentially input two temporally adjacent frames, denoted f_i, f_{i+1} (i ∈ {1, 2, …, 1199}); model the background pixel distribution with the foreground detection algorithm (here, a background modeling method) to obtain a background image frame, and subtract the background image frame from the current video frame pixel-wise to obtain the image containing the complete moving object in the current frame;
Step S2: for the two adjacent frames f_i, f_{i+1} input in step S1, compute the foreground optical flow by fitting the optical flow estimation network;
Step S3: combining the foreground object and the foreground optical flow information obtained in steps S1 and S2, generate a realistic interpolated frame with the generator network of the GAN model shown in fig. 2, denoted f_1′; here the number of interpolated frames is N = (R_out / R_in) − 1 = 1.
Step S4: input the generated video frame f_1′ together with the high frame rate ground truth into the discriminator of the GAN model, and optimize the pixel-wise L1 reconstruction loss, i.e. the loss value given by the formula above. Here c × w × h is the number of pixels, with c = 3 channels and w × h = 256 × 256; p ranges over the pixels of the image; f_1′(p) is the pixel value of the generated interpolated frame, and f_j(p) is the corresponding pixel value in the high frame rate ground truth video.
With PWC-Net (Pyramid, Warping, and Cost volume network) as the optical flow estimation network and the frame difference method as the foreground detection algorithm, the complexity and interpolation speed of the model were analyzed qualitatively. The present method has about 500,000 model parameters, while classical video frame interpolation models have at least about 1,000,000 parameters, so the present method greatly reduces model complexity. In addition, the model generates one interpolated frame in 8 ms, while the fastest classical video frame interpolation algorithms take about 10 ms, showing that the present method has simpler preprocessing and a shorter running time.
Embodiment 2
Based on embodiment 1, this embodiment provides a motion video frame rate enhancement system based on optical flow estimation and foreground detection, which includes a video acquisition device and a server;
the video acquisition device is configured to acquire video data to be enhanced and transmit the video data to the server;
the server configured to perform the steps of the method of embodiment 1.
The video acquisition device may be a camera.
Embodiment 3
Based on embodiment 1, this embodiment provides a motion video frame rate enhancement system based on optical flow estimation and foreground detection, comprising:
a foreground detection module, configured to process the acquired video data to be processed with a foreground detection algorithm to obtain a complete moving-object image;
an optical flow estimation module, configured to fit the video data to be processed with an optical flow estimation network to obtain foreground optical flow information;
and a frame interpolation module, configured to generate video interpolation frames with the generative adversarial network according to the object image obtained by foreground detection and the foreground optical flow information, and to interpolate the original video data with the generated frames.
Embodiment 4
Based on embodiment 1, this embodiment provides an electronic device, which includes a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, implement the steps of the method of embodiment 1.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. A method for enhancing the frame rate of motion video based on optical flow estimation and foreground detection, characterized by comprising the following steps:
processing the acquired video data to be processed with a foreground detection algorithm to obtain a complete moving-object image;
fitting the video data to be processed with an optical flow estimation network to obtain foreground optical flow information;
and generating video interpolation frames with a generative adversarial network according to the object image obtained by foreground detection and the foreground optical flow information, and interpolating the video data to be processed with the generated frames.
2. The method of claim 1, further comprising a step of preprocessing the video data: converting the original video data and the enhanced high frame rate video ground truth into frame sequences.
3. The method of claim 1, wherein the foreground detection algorithm is used to obtain a complete moving-object image, specifically:
for two adjacent frames in the video data to be processed, modeling the background pixel distribution with a background modeling method to obtain a background image frame;
and subtracting the background image frame from the current video frame pixel-wise to obtain the image containing the complete moving object in the current frame, thereby obtaining the moving object.
4. The method of claim 1, wherein the generative adversarial network comprises a generator network and a discriminator network connected in cascade;
the generator network is used to generate realistic interpolated frames;
and the discriminator network is used to measure the error between the interpolated frames output by the generator network and a set video ground truth; when the error is within a set range, the original video is interpolated with the generated frames to obtain the enhanced video.
5. The method of claim 4, wherein the error between the interpolated frames and the set video ground truth is measured by a loss function, specifically: the sum of absolute differences between the pixel values of all generated interpolated frames and the pixel values of the corresponding frames in the required enhanced high frame rate video.
6. The method of claim 1, further comprising a step of training the generative adversarial network (GAN), comprising:
acquiring original video data and preprocessing it;
processing the acquired original video data with the foreground detection algorithm to obtain a complete moving-object image;
fitting the optical flow estimation network to the original video data to obtain foreground optical flow information;
constructing a generative adversarial network, and generating video interpolation frames with it according to the object image obtained by foreground detection and the foreground optical flow information;
constructing an objective function for the generative adversarial network;
and training the model by reducing the objective function with the backpropagation algorithm and stochastic gradient descent, optimizing and correcting the network weights based on the objective function, and iterating the training repeatedly to obtain the final video interpolation GAN.
7. The method of claim 6, wherein the objective function of the generative adversarial network is: minimizing the sum of absolute differences between the pixel values of the generated interpolated frames and the corresponding pixel values in the required enhanced high frame rate video.
8. A motion video frame rate enhancement system based on optical flow estimation and foreground detection, characterized by comprising a video acquisition device and a server;
the video acquisition device is configured to acquire video data to be enhanced and transmit the video data to the server;
and the server is configured to perform the steps of the method of any one of claims 1-7.
9. A motion video frame rate enhancement system based on optical flow estimation and foreground detection, characterized by comprising:
a foreground detection module, configured to process the acquired video data to be processed with a foreground detection algorithm to obtain a complete moving-object image;
an optical flow estimation module, configured to fit the video data to be processed with an optical flow estimation network to obtain foreground optical flow information;
and a frame interpolation module, configured to generate video interpolation frames with the generative adversarial network according to the object image obtained by foreground detection and the foreground optical flow information, and to interpolate the video data to be processed with the generated frames.
10. An electronic device, characterized by comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of any one of claims 1 to 7.
CN202111385904.7A 2021-11-22 2021-11-22 Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection Pending CN114066761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111385904.7A CN114066761A (en) 2021-11-22 2021-11-22 Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111385904.7A CN114066761A (en) 2021-11-22 2021-11-22 Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection

Publications (1)

Publication Number Publication Date
CN114066761A true CN114066761A (en) 2022-02-18

Family

ID=80278849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111385904.7A Pending CN114066761A (en) 2021-11-22 2021-11-22 Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection

Country Status (1)

Country Link
CN (1) CN114066761A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001887A1 (en) * 2022-06-30 2024-01-04 深圳市中兴微电子技术有限公司 Video image processing method and apparatus, electronic device and storage medium
CN117372967A (en) * 2023-12-06 2024-01-09 广东申创光电科技有限公司 Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things
CN117372967B (en) * 2023-12-06 2024-03-26 广东申创光电科技有限公司 Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination