WO2021237743A1 - Video frame interpolation method and apparatus, and computer-readable storage medium - Google Patents

Video frame interpolation method and apparatus, and computer-readable storage medium

Info

Publication number
WO2021237743A1
Authority
WO
WIPO (PCT)
Prior art keywords
optical flow
frame
maps
input frames
input
Prior art date
Application number
PCT/CN2020/093530
Other languages
English (en)
French (fr)
Inventor
卢运华
段然
陈冠男
张丽杰
刘瀚文
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to PCT/CN2020/093530 (WO2021237743A1)
Priority to US17/278,403 (US11800053B2)
Priority to CN202080000871.7A (CN114073071B)
Publication of WO2021237743A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/0137Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes dependent on presence/absence of motion, e.g. of motion zones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Definitions

  • the present disclosure relates to the field of information display technology, and in particular, to a video frame insertion method and device, computer-readable storage medium, and electronic equipment.
  • Video frame insertion is a research direction in the field of digital image processing and computer vision.
  • the use of video frame insertion technology can increase the frame rate of the video.
  • video frame interpolation methods in the prior art consist of multiple sub-modules connected in parallel, and the accuracy of each module is low; as a result, the final interpolation result is affected by the accuracy of each module and of the final synthesis module, so the accuracy of the interpolated frame image is low.
  • a video frame interpolation method including: obtaining two input frames and obtaining, from the two input frames, two initial optical flow maps corresponding to the two input frames; up-sampling the two initial optical flow maps to obtain two target optical flow maps; obtaining, from the two input frames, an interpolation kernel, two depth maps and two context feature maps; and obtaining an output frame using a frame synthesis method; the foregoing steps satisfy at least one of the following conditions:
  • a target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from a target depth map;
  • the two input frames are image frames at two different moments in a multi-frame video.
  • performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps includes: performing multiple passes of optical flow estimation on the two input frames;
  • the final output of the N-th optical flow estimation pass is used to update the input and output of the (N+1)-th optical flow estimation pass, where N is a positive integer greater than or equal to 1;
  • the final output of the last optical flow estimation pass is used as the two initial optical flow maps.
  • using the final output of the N-th optical flow estimation pass to update the input and output of the (N+1)-th optical flow estimation pass includes:
  • the two final outputs of the N-th optical flow estimation pass are respectively added to the two input frames of the first optical flow estimation to obtain the two inputs of the (N+1)-th optical flow estimation;
  • the two final outputs of the N-th optical flow estimation pass are respectively added to the two initial outputs of the (N+1)-th optical flow estimation pass to obtain the final outputs of the (N+1)-th optical flow estimation pass (see the sketch below).
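For illustration, the update rule in the bullets above can be written as a short loop. The sketch below is a minimal PyTorch rendering under stated assumptions: `flow_net` is a hypothetical placeholder for any two-frame flow estimator (for example a PWC-Net-like model), and frames and flow maps are given the same toy shape (B, 2, H, W) so that the pixel-wise additions are well defined; it is not the disclosed implementation.

```python
import torch

def iterative_residual_flow(flow_net, frame0, frame1, num_passes=3):
    """Iterative residual refinement of bidirectional optical flow.

    flow_net(a, b) is assumed to return a pair of flow maps (a->b, b->a)
    with the same spatial size as its inputs.
    """
    # Pass 1: estimate flow from the raw input frames.
    f01, f10 = flow_net(frame0, frame1)
    final01, final10 = f01, f10                      # final outputs of pass 1

    for _ in range(num_passes - 1):
        # Inputs of pass N+1: previous final outputs added to the original frames.
        in0 = frame0 + final01
        in1 = frame1 + final10
        r01, r10 = flow_net(in0, in1)                # initial outputs of pass N+1
        # Final outputs of pass N+1: previous final outputs + initial outputs.
        final01 = final01 + r01
        final10 = final10 + r10

    # The final outputs of the last pass are used as the two initial flow maps.
    return final01, final10

# Toy stand-in estimator, for shape checking only.
dummy_net = lambda a, b: (0.1 * (b - a), 0.1 * (a - b))
x0, x1 = torch.rand(1, 2, 64, 64), torch.rand(1, 2, 64, 64)
f01, f10 = iterative_residual_flow(dummy_net, x0, x1)
print(f01.shape, f10.shape)   # torch.Size([1, 2, 64, 64]) twice
```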
  • processing the two initial optical flow maps with pixel-adaptive convolution joint up-sampling according to the two input frames to obtain the two target optical flow maps includes:
  • the two input frames include a first input frame and a second input frame;
  • the two initial optical flow maps include a first initial optical flow map and a second initial optical flow map;
  • the two target optical flow maps include a first target optical flow map and a second target optical flow map;
  • the first input frame corresponds to the first initial optical flow map;
  • the second input frame corresponds to the second initial optical flow map;
  • the target depth estimation model is obtained by training an initial depth estimation model with the error loss between the reference virtual surface normal generated from the real depth maps of the two input frames and the target virtual surface normal generated from the target depth map;
  • the training method of the initial depth estimation model includes:
  • obtaining an output frame using a pixel-adaptive-convolution frame synthesis method according to the target optical flow maps, depth maps, context feature maps, and interpolation kernel includes:
  • synthesizing the two projected optical flow maps, the interpolation kernel, the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps with a pixel-adaptive-convolution frame synthesis method to obtain an output frame, including:
  • performing frame synthesis processing that includes pixel-adaptive convolution on the composite input image to obtain the output frame includes:
  • the second residual module includes at least one residual sub-module, and at least one residual sub-module includes a pixel-adaptive convolutional layer.
  • two projected optical flow maps are determined according to the two target optical flow maps and the two depth maps, and an interpolation kernel, two deformed depth maps, two deformed input frames, and two deformed context feature maps are acquired, including:
  • obtaining the inserted frame of the two input frames according to the output frame includes:
  • using the average deformed frame to update the output frame includes:
  • a video frame interpolation device including:
  • a motion estimation module configured to obtain two input frames and obtain two initial optical flow diagrams corresponding to the two input frames according to the two input frames;
  • a data optimization module configured to perform up-sampling processing on the two initial optical flow graphs to obtain two target optical flow graphs
  • a depth estimation module configured to obtain, according to the two input frames, a frame insertion core, two depth maps respectively corresponding to the two input frames, and two context feature maps corresponding to the two input frames respectively;
  • An image synthesis module that uses a frame synthesis method to obtain an output frame according to the two target optical flow maps, the two depth maps, the two context feature maps, and the frame insertion core;
  • a target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from a target depth map;
  • the two input frames are two image frames at different moments in a multi-frame video image.
  • a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the video frame insertion method as described in any one of the above is implemented.
  • an electronic device including:
  • the memory is used to store one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors implement the video frame interpolation method described in any one of the above.
  • Fig. 1 schematically shows a flowchart of a video frame interpolation method in an exemplary embodiment of the present disclosure
  • Fig. 2 schematically shows a framework diagram of optical flow estimation processing in an exemplary embodiment of the present disclosure
  • Fig. 3 schematically shows a frame diagram of a pixel adaptive convolution joint up-sampling module in an exemplary embodiment of the present disclosure
  • Fig. 4 schematically shows a frame diagram of monocular depth estimation constrained by a set of virtual surface normals in an exemplary embodiment of the present disclosure
  • Fig. 5 schematically shows an overall frame diagram of a video frame insertion method in an exemplary embodiment of the present disclosure
  • Fig. 6 schematically shows a frame diagram of a frame synthesis module with pixel adaptive convolution in an exemplary embodiment of the present disclosure
  • FIG. 7 schematically shows a schematic diagram of the composition of a video frame interpolation device in an exemplary embodiment of the present disclosure
  • FIG. 8 schematically shows a structural diagram of a computer system suitable for implementing an electronic device of an exemplary embodiment of the present disclosure
  • FIG. 9 schematically shows a schematic diagram of a computer-readable storage medium according to some embodiments of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • the example embodiments can be implemented in various forms, and should not be construed as being limited to the examples set forth herein; on the contrary, the provision of these embodiments makes the present disclosure more comprehensive and complete, and fully conveys the concept of the example embodiments To those skilled in the art.
  • the described features, structures or characteristics can be combined in one or more embodiments in any suitable way.
  • a video frame interpolation method is first provided.
  • the above video frame interpolation method may include the following steps:
  • S130 Obtain, according to the two input frames, a frame insertion core, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames;
  • S140 Obtain an output frame by using a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the frame insertion core;
  • the above steps meet at least one of the following conditions:
  • a target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from a target depth map;
  • the two input frames are two image frames at different moments in a multi-frame video image.
  • the inserted frame here refers to an image frame that can be inserted between two input frames, which can reduce video motion blur and improve video quality.
  • compared with the prior art, on the one hand, iterative residual-refined optical flow prediction is used to perform motion estimation on two adjacent input frames to obtain the initial optical flow maps, which initially improves the accuracy of the interpolation result;
  • on the other hand, the initial optical flow maps are processed with pixel-adaptive convolution joint up-sampling according to the input frames to obtain the target optical flow maps, which further improves the accuracy of the interpolation result;
  • in addition, a depth estimation method geometrically constrained by virtual surface normals is used for depth prediction, and the target optical flow maps are projected in combination with the predicted depth; in the synthesis module, pixel-adaptive convolution is used to improve the interpolation result;
  • this markedly improves the quality of the interpolated frames; the obtained interpolation result has higher accuracy and can be applied to video enhancement and to the slow-motion effects of video post-processing, which expands the usable scenarios of the video frame interpolation method.
  • in step S110, two input frames are obtained, and two initial optical flow maps corresponding to the two input frames are obtained from the two input frames.
  • the two acquired input frames may be a first input frame and a second input frame, respectively; optical flow estimation is then performed based on the first input frame and the second input frame to obtain a first initial optical flow map and a second initial optical flow map, where the first initial optical flow map may correspond to the first input frame and the second initial optical flow map may correspond to the second input frame.
  • in this example implementation, PWC-Net (CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume), retrained to obtain a new model, may be used to estimate the optical flow of the two input frames, or other models may be used to estimate the optical flow of the two input frames.
  • the optical flow estimation for the above two input frames is not specifically limited in this example implementation.
  • the optical flow estimation may be performed only once on the above-mentioned first input frame and the second input frame to obtain the first initial optical flow graph and the second initial optical flow graph.
  • the iterative residual refinement optical flow prediction can be used to perform motion estimation on two adjacent input frames to obtain the initial optical flow graph.
  • the above-mentioned first input frame and second input frame are used as input 210 to perform multiple optical flow estimation 220 processing.
  • the final output 230 of the N-th optical flow estimation pass 220 is used to update the input 210 and the output of the (N+1)-th optical flow estimation pass;
  • N can be 1 or any positive integer greater than or equal to 1, such as 2, 3, 4, etc.; it is not specifically limited in this example embodiment, and N cannot exceed the maximum number of optical flow estimation passes.
  • specifically, using the final output of the N-th optical flow estimation pass to update the input and output of the (N+1)-th pass includes: the two final outputs of the N-th pass may be respectively added to the two inputs of the first optical flow estimation to obtain the two inputs of the (N+1)-th pass; the final outputs of the N-th pass are added to the initial outputs of the (N+1)-th pass to obtain the final outputs of the (N+1)-th pass; the final outputs of the last pass may be used as the initial optical flow maps.
  • in this example implementation, taking N = 1 as an example, the server can feed the output 230 of the first optical flow estimation back to the input 210 of the second optical flow estimation; that is, the inputs of the second pass are obtained by adding the two outputs of the first pass to the first input frame and the second input frame, respectively (the pixel values of the two outputs of the first pass are added to the pixel values of the two input frames).
  • after the second optical flow estimation has been performed, the first output of the first pass is used to update the second initial output of the second pass to obtain the second target output; that is, the pixel values of the first output and the second initial output are added to obtain the second target output, where the second initial output is what the second pass produces from its updated input.
  • step S120 two target optical flow diagrams are obtained by up-sampling the two initial optical flow diagrams.
  • in the first example implementation of the present disclosure, the server may directly perform feature extraction on the two initial optical flow maps, and perform at least one up-sampling process after the feature extraction is completed to obtain the target optical flow maps.
  • the operation of a convolutional layer used in this up-sampling process can be written as
  • v_i^{l+1} = Σ_{j∈Ω(i)} W^l[p_i - p_j] · v_j^l + b^l
  • where i denotes pixel i, v^l denotes the feature map of the l-th layer of the convolutional neural network, Ω(i) denotes the convolution window around pixel i, W^l denotes the convolution kernel of the l-th layer, p_i and p_j denote pixel coordinates, and b^l denotes the bias term of the l-th layer.
  • in another example implementation of the present disclosure, referring to Fig. 3, features may be extracted from the initial optical flow map 310 through a convolutional layer 320 to obtain a reference optical flow map, and the same convolutional layer 320 may be applied to the input frame 311 to extract features and obtain a reference input map; the reference optical flow map may then be subjected to multiple passes of pixel-adaptive convolution joint up-sampling 330, with the reference input map as a constraint, to obtain the target optical flow map 340.
  • specifically, the server may perform feature extraction on the first initial optical flow map and the second initial optical flow map to obtain a first reference optical flow map and a second reference optical flow map, respectively, and perform feature extraction on the first input frame and the second input frame to obtain a first reference input map and a second reference input map; then, using the first reference input map as a guide map, at least one pass of pixel-adaptive convolution joint up-sampling is performed on the first reference optical flow map, followed by feature extraction, to obtain the first target optical flow map; using the second reference input map as a guide map, at least one pass of pixel-adaptive convolution joint up-sampling is performed on the second reference optical flow map, followed by feature extraction, to obtain the second target optical flow map.
  • in this example implementation, as shown in Fig. 3, two passes of pixel-adaptive convolution joint up-sampling 330 may be performed on the reference optical flow map; three, four or more passes may also be performed; the number of up-sampling passes can be determined from the size relationship between the target optical flow map and the two input frames together with the up-sampling factor of each pixel-adaptive pass, and is not specifically limited in this example embodiment.
  • each pass of pixel-adaptive convolution joint up-sampling 330 performed on the reference optical flow map uses the reference input map as a guide map, i.e., a constraint condition is added to the pixel-adaptive convolution up-sampling; after the multiple passes, the output is passed through the convolutional layer 320 for one more feature extraction to obtain the target optical flow map 340, which improves the accuracy of the initial optical flow map 310 and completes its optimization; a minimal sketch of this pipeline is given below.
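For illustration only, a minimal PyTorch sketch of such a guided up-sampling pipeline follows. Channel widths, the two x2 stages and the way the guide is injected (simple concatenation followed by a convolution) are assumptions made for brevity; the pixel-adaptive weighting that the disclosure actually describes is sketched after the formula that follows.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedFlowUpsampler(nn.Module):
    """Up-samples a quarter-resolution flow map to frame resolution, using
    features of the corresponding input frame as a guide (a stand-in for the
    pixel-adaptive convolution joint up-sampling module 330)."""

    def __init__(self, ch=32):
        super().__init__()
        self.flow_feat = nn.Conv2d(2, ch, 3, padding=1)   # reference flow map (310 -> 320)
        self.guide_feat = nn.Conv2d(3, ch, 3, padding=1)  # reference input map (311 -> 320)
        self.up1 = nn.Conv2d(2 * ch, ch, 3, padding=1)    # first joint up-sampling stage (330)
        self.up2 = nn.Conv2d(2 * ch, ch, 3, padding=1)    # second joint up-sampling stage (330)
        self.out = nn.Conv2d(ch, 2, 3, padding=1)         # final feature extraction (-> 340)

    def forward(self, init_flow, frame):
        x = self.flow_feat(init_flow)
        g = self.guide_feat(frame)
        for up in (self.up1, self.up2):                   # two x2 stages: 1/4 -> 1/2 -> 1/1
            x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
            g_s = F.interpolate(g, size=x.shape[-2:], mode='bilinear', align_corners=False)
            x = F.relu(up(torch.cat([x, g_s], dim=1)))    # guide-conditioned refinement
        return self.out(x)                                # target optical flow map

flow_quarter = torch.randn(1, 2, 64, 64)
frame = torch.randn(1, 3, 256, 256)
print(GuidedFlowUpsampler()(flow_quarter, frame).shape)   # torch.Size([1, 2, 256, 256])
```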
  • pixel-adaptive convolution takes an ordinary convolution and multiplies it by an adaptive kernel function K obtained from a guide feature map f; that is, the convolution operation in pixel-adaptive convolution up-sampling can be written as
  • v_i^{l+1} = Σ_{j∈Ω(i)} K(f_i, f_j) · W^l[p_i - p_j] · v_j^l + b^l
  • where i denotes pixel i, v^l denotes the feature map of the l-th layer of the convolutional neural network, Ω(i) denotes the convolution window around pixel i, W^l denotes the convolution kernel of the l-th layer, p_i and p_j denote pixel coordinates, and b^l denotes the bias term of the l-th layer; f_i and f_j denote the guide feature map, where pixel j is a pixel within a preset distance of pixel i; the preset distance can be customized as required and is not specifically limited in this example embodiment.
  • in this example implementation, the resolution of the initial optical flow map obtained from the optical flow estimation is one quarter of that of the input frame; therefore, two passes of pixel-adaptive convolution joint up-sampling with an up-sampling factor of 2, or a single pass with an up-sampling factor of 4, may be performed, which is not specifically limited in this example embodiment; when pixel-adaptive convolution joint up-sampling is used, a reference map can be introduced as a guide map, which in turn improves the accuracy of the up-sampling.
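As an illustration of the formula above, the sketch below implements a dense pixel-adaptive convolution in PyTorch. The Gaussian form K(f_i, f_j) = exp(-0.5 * ||f_i - f_j||^2) is an assumption (a common choice in pixel-adaptive convolution work), not something the disclosure fixes, and the kernel size and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def pixel_adaptive_conv(v, f, weight, bias=None):
    """Pixel-adaptive convolution over a k x k window.

    v:      input feature map, shape (B, C_in, H, W)
    f:      guide feature map,  shape (B, C_f, H, W)
    weight: spatially shared kernel W^l, shape (C_out, C_in, k, k)
    """
    B, C_in, H, W = v.shape
    C_out, _, k, _ = weight.shape
    pad = k // 2

    # Gather the k*k neighbourhood Omega(i) of every pixel for v and f.
    v_patches = F.unfold(v, k, padding=pad).view(B, C_in, k * k, H * W)
    f_patches = F.unfold(f, k, padding=pad).view(B, f.shape[1], k * k, H * W)
    f_center = f.view(B, f.shape[1], 1, H * W)

    # Adaptive kernel K(f_i, f_j): Gaussian on guide-feature distance (assumed form).
    K = torch.exp(-0.5 * ((f_patches - f_center) ** 2).sum(dim=1))      # (B, k*k, H*W)

    # Ordinary convolution weights W^l[p_i - p_j], applied after re-weighting by K.
    w = weight.view(C_out, C_in * k * k)
    v_weighted = (v_patches * K.unsqueeze(1)).reshape(B, C_in * k * k, H * W)
    out = torch.einsum('oc,bcn->bon', w, v_weighted).view(B, C_out, H, W)
    return out if bias is None else out + bias.view(1, C_out, 1, 1)

v = torch.randn(1, 8, 32, 32)                 # feature map v^l
f = torch.randn(1, 4, 32, 32)                 # guide feature map
w = torch.randn(16, 8, 3, 3) * 0.1            # W^l
print(pixel_adaptive_conv(v, f, w).shape)     # torch.Size([1, 16, 32, 32])
```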
  • step S130 a frame insertion core, two depth maps respectively corresponding to the two input frames, and two context feature maps corresponding to the two input frames are obtained according to the two input frames.
  • in one example implementation of the present disclosure, an initial depth estimation module may be used to obtain the depth maps, and the interpolation kernel, the first context feature map, and the second context feature map are obtained from the first input frame and the second input frame.
  • in this example implementation, a pre-trained model can be used to perform the spatio-temporal context feature extraction on the two input frames, and the feature map of any intermediate layer of the model can be used as the two context feature maps; the pre-trained model can be a VGG model or a residual network, which is not specifically limited in this example implementation.
  • in another example implementation of the present disclosure, the initial depth estimation model may first be trained to obtain the target depth estimation model, and the target depth estimation model may then be used to compute the first depth map and the second depth map corresponding to the first input frame and the second input frame, respectively.
  • the pre-training model of the monocular depth model MegaDepth may be used as the initial depth estimation model, or other pre-training models may be used as the initial depth estimation model, which is not specifically limited in this example embodiment.
  • specifically, the method of training the initial depth estimation model includes: first obtaining the real depth maps of the two input frames and computing a three-dimensional (3D) point cloud from each real depth map (converting the two-dimensional depth map into a three-dimensional representation, from which a 3D point cloud is obtained relatively simply); a reference virtual surface normal can then be generated from the 3D point cloud; then, referring to Fig. 4, the server can input the input frame 410 into the initial depth estimation model 420 to obtain a target depth map 430, compute a 3D point cloud 440 from the target depth map 430, and generate a target virtual surface normal 450 from the 3D point cloud 440; the parameters of the initial depth estimation model are then updated according to the error loss between the target virtual surface normal and the reference virtual surface normal to obtain the target depth estimation model, i.e., the parameters are adjusted to minimize the error loss, and the model with the minimum error loss is taken as the target depth estimation model (see the sketch below).
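For illustration, the sketch below shows one way the virtual-surface-normal loss could be computed: the depth map is back-projected to a 3D point cloud with assumed pinhole intrinsics, per-pixel normals are taken from cross products of neighbouring point differences, and the target and reference normals are compared with an L1 loss. The intrinsics, the cross-product normal estimate and the L1 form are assumptions of the sketch, not details given by the disclosure.

```python
import torch
import torch.nn.functional as F

def depth_to_normals(depth, fx, fy, cx, cy):
    """Back-project a depth map (B, 1, H, W) to a 3D point cloud (pinhole
    model, assumed intrinsics) and estimate per-pixel virtual surface normals
    from the cross product of neighbouring point differences."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    xs = xs.to(depth).expand(B, 1, H, W)
    ys = ys.to(depth).expand(B, 1, H, W)
    X = (xs - cx) / fx * depth
    Y = (ys - cy) / fy * depth
    points = torch.cat([X, Y, depth], dim=1)                  # (B, 3, H, W)

    dx = F.pad(points[:, :, :, 1:] - points[:, :, :, :-1], (0, 1, 0, 0))
    dy = F.pad(points[:, :, 1:, :] - points[:, :, :-1, :], (0, 0, 0, 1))
    normals = torch.cross(dx, dy, dim=1)
    return F.normalize(normals, dim=1)                        # unit normals (B, 3, H, W)

def surface_normal_loss(pred_depth, real_depth, intrinsics=(500.0, 500.0, 128.0, 128.0)):
    """Error loss between target normals (from the predicted depth map) and
    reference normals (from the real depth map)."""
    n_target = depth_to_normals(pred_depth, *intrinsics)
    n_ref = depth_to_normals(real_depth, *intrinsics)
    return F.l1_loss(n_target, n_ref)

pred = torch.rand(1, 1, 64, 64) + 0.5          # target depth map 430
real = torch.rand(1, 1, 64, 64) + 0.5          # real depth map
print(surface_normal_loss(pred, real))         # scalar error loss
```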
  • the first input frame and the second input frame may be input into the target depth estimation model to obtain the first depth map and the second depth map.
  • step S140 a projection optical flow map is determined according to the target optical flow map and the depth map, and an interpolation frame core, a deformed depth map, a deformed input frame, and a deformed context feature map are acquired.
  • in one example embodiment of the present disclosure, referring to Fig. 5, the server may first pass the two input frames through the optical flow estimation module 521 and the pixel-adaptive convolution joint up-sampling module 530 to obtain the target optical flow maps, and pass the input frames 510 through the monocular depth estimation 522 constrained by virtual surface normals to obtain the depth maps; depth-aware optical flow projection 540 is then applied to the target optical flow maps and the depth maps to obtain the projected optical flow maps.
  • the related description of the optical flow estimation 521 has been described above in detail with reference to FIG. 2, so it will not be repeated here, and the related content of the pixel adaptive convolution joint up-sampling module 530 has been described above with reference to FIG. 3
  • the monocular depth estimation 522 of the geometric constraint of the virtual surface normal has been described in detail with reference to FIG. 4, so it will not be repeated here.
  • in this example implementation, the first depth map may be used to perform depth-aware optical flow projection on the first target optical flow map to obtain the first projected optical flow map, and the second depth map may be used to perform depth-aware optical flow projection on the second target optical flow map to obtain the second projected optical flow map.
  • specifically, the time of the first input frame can be defined as time 0, the time of the second input frame can be defined as time 1, and a time t is defined that lies between these two moments; the projected optical flow map can then be computed, for example, as a depth-weighted aggregation of the form
  • F_{t→0}(x) = -t · Σ_{y∈S(x)} w_0(y) · F_{0→1}(y),  with  w_0(y) = (1/D_0(y)) / Σ_{y'∈S(x)} (1/D_0(y'))
  • where F_{0→1}(y) denotes the optical flow of pixel y from the first input frame to the second input frame; D_0(y) denotes the depth value of pixel y; y∈S(x) denotes the set of pixels y whose flow F_{0→1}(y) passes through pixel x at time t, in which case F_{t→0}(x) can be approximated as -t·F_{0→1}(y); and F_{t→0}(x) denotes the optical flow of pixel x from time t to the first input frame; a splatting-based sketch of this projection is given below.
  • in this example implementation, the server can pass the two input frames 510 through spatio-temporal context feature extraction 523 to obtain the two context feature maps, perform interpolation kernel estimation 524 on the two input frames to obtain the interpolation kernel, and use the interpolation kernel (together with the projected optical flow maps) to apply adaptive warping 550 to the two input frames, the two depth maps, and the two context feature maps, obtaining two deformed input frames, two deformed depth maps, and two deformed context feature maps.
  • the depth estimation may use an hourglass model
  • the context feature extraction uses a pre-trained ResNet neural network
  • the kernel estimation and the adaptive deformation layer are based on the U-Net neural network, which is not specifically limited in this exemplary embodiment.
  • in this example implementation, a classic deep-learning backbone network can be used to generate an interpolation kernel for each pixel position from the two input frames; in the adaptive warping layer, the two depth maps, the two input frames, and the two context feature maps are warped according to the interpolation kernel and the projected optical flow maps to obtain the two deformed input frames, the two deformed depth maps, and the two deformed context feature maps, as sketched below.
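For illustration, the sketch below combines the projected flow and a per-pixel interpolation kernel into one warping step, which is one plausible reading of the adaptive warping layer 550; the kernel size, bilinear sampling and the normalisation of the kernel weights are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def adaptive_warp(img, flow, kernel):
    """Warp `img` towards time t using a per-pixel flow plus a per-pixel
    interpolation kernel over a k x k window.

    img:    (B, C, H, W) frame / depth map / context features
    flow:   (B, 2, H, W) flow from time t back to the frame (x, y displacement)
    kernel: (B, k*k, H, W) per-pixel weights, assumed to sum to 1 over dim 1
    """
    B, C, H, W = img.shape
    k = int(kernel.shape[1] ** 0.5)
    r = k // 2

    ys, xs = torch.meshgrid(torch.arange(H, device=img.device),
                            torch.arange(W, device=img.device), indexing='ij')
    base_x = xs + flow[:, 0]
    base_y = ys + flow[:, 1]

    out = torch.zeros_like(img)
    n = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # Normalised sampling grid for this kernel tap.
            gx = 2.0 * (base_x + dx) / (W - 1) - 1.0
            gy = 2.0 * (base_y + dy) / (H - 1) - 1.0
            grid = torch.stack([gx, gy], dim=-1)               # (B, H, W, 2)
            sampled = F.grid_sample(img, grid, mode='bilinear',
                                    padding_mode='border', align_corners=True)
            out = out + kernel[:, n:n + 1] * sampled           # weight this tap
            n += 1
    return out

img = torch.randn(1, 3, 64, 64)
flow = torch.randn(1, 2, 64, 64)
kern = torch.softmax(torch.randn(1, 9, 64, 64), dim=1)         # 3x3 kernel per pixel
print(adaptive_warp(img, flow, kern).shape)                    # torch.Size([1, 3, 64, 64])
```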
  • in one example embodiment of the present disclosure, referring to Fig. 5, the server stacks 560 the interpolation kernel, the projected optical flow maps, the deformed input frames, the deformed depth maps, and the deformed context feature maps to obtain a composite image.
  • in this example implementation, referring to Fig. 6, the server feeds the composite image 610 into a residual network through the input layer 620, and uses the feature map output by the first residual module 630 in the residual network as both the feature guide map and the input of the second residual module; in order to accept the feature guide map, the convolutional layers in the residual modules other than the first one are replaced with pixel-adaptive convolutional layers, thereby forming the second residual module.
  • the second residual module may include at least one residual sub-module 640, where the at least one residual sub-module 640 includes a pixel-adaptive convolutional layer; the residual sub-module can be a pixel-adaptive convolution residual block.
  • specifically, the convolutional layer in the first residual module can be written as
  • v_i^{l+1} = Σ_{j∈Ω(i)} W^l[p_i - p_j] · v_j^l + b^l
  • where i denotes pixel i, v^l denotes the feature map of the l-th layer of the convolutional neural network, Ω(i) denotes the convolution window around pixel i, W^l denotes the convolution kernel of the l-th layer, p_i and p_j denote pixel coordinates, and b^l denotes the bias term of the l-th layer.
  • the pixel-adaptive convolutional layer that replaces the above convolutional layer to obtain the second residual module is
  • v_i^{l+1} = Σ_{j∈Ω(i)} K(f_i, f_j) · W^l[p_i - p_j] · v_j^l + b^l
  • where i denotes pixel i, v^l denotes the feature map of the l-th layer, Ω(i) denotes the convolution window around pixel i, W^l denotes the convolution kernel of the l-th layer, p_i and p_j denote pixel coordinates, b^l denotes the bias term of the l-th layer, and f_i and f_j denote the guide feature map, where pixel j is a pixel within a preset distance of pixel i; the preset distance can be customized as required and is not specifically limited in this example embodiment.
  • the pixel adaptive convolution layer is based on the ordinary convolution layer, multiplied by an adaptive kernel function K obtained from the guided feature map f.
  • the feature map output by the first residual module 630 is used as the guide map of the second residual module; that is, a new constraint is imposed, via this feature map, on the pixel-adaptive convolutional layers in the pixel-adaptive residual blocks, so that higher-precision output frames can be obtained.
  • the number of residual blocks in the residual network may be multiple, such as 2, 3, 4 or more, which is not specifically limited in this example embodiment.
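For illustration, the sketch below wires the pieces described above into a small guided residual trunk: a plain first residual module produces the guide feature map, and subsequent residual sub-modules use pixel-adaptive convolutions conditioned on that guide (the same operation as in the earlier sketch, re-implemented compactly here so this block is self-contained). Channel counts, the Gaussian kernel form and the number of blocks are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PACLayer(nn.Module):
    """3x3 pixel-adaptive convolution: an ordinary convolution whose taps are
    re-weighted by a Gaussian kernel on guide-feature distance (assumed form)."""
    def __init__(self, ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(ch, ch, 3, 3) * 0.05)
        self.bias = nn.Parameter(torch.zeros(ch))

    def forward(self, v, f):
        B, C, H, W = v.shape
        vp = F.unfold(v, 3, padding=1).view(B, C, 9, H * W)
        fp = F.unfold(f, 3, padding=1).view(B, f.shape[1], 9, H * W)
        K = torch.exp(-0.5 * ((fp - f.view(B, -1, 1, H * W)) ** 2).sum(1))   # (B, 9, HW)
        w = self.weight.view(C, C * 9)
        out = torch.einsum('oc,bcn->bon', w, (vp * K.unsqueeze(1)).reshape(B, C * 9, H * W))
        return out.view(B, C, H, W) + self.bias.view(1, C, 1, 1)

class GuidedResBlock(nn.Module):
    """Residual sub-module 640 whose convolutions are pixel-adaptive and
    guided by the feature map of the first residual module."""
    def __init__(self, ch):
        super().__init__()
        self.pac1, self.pac2 = PACLayer(ch), PACLayer(ch)

    def forward(self, x, guide):
        y = F.relu(self.pac1(x, guide))
        return x + self.pac2(y, guide)          # residual connection

# First residual module (plain convolutions) produces the guide feature map.
first = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 32, 3, padding=1))
x = torch.randn(1, 32, 64, 64)                  # features of the composite image 610
guide = x + first(x)                            # output of the first residual module 630
h = guide
for block in [GuidedResBlock(32), GuidedResBlock(32)]:
    h = block(h, guide)                         # guided residual sub-modules 640
print(h.shape)                                  # torch.Size([1, 32, 64, 64])
```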
  • in one example implementation of the present disclosure, referring again to Fig. 5, the server may also obtain the average deformed frame 581 of the two deformed input frames and use it to update the output frame 590 (i.e., the final output frame, which is also the interpolated frame): the average deformed frame is first computed from the deformed input frames, and it is then combined with the output frame 650 obtained by the pixel-adaptive-convolution frame synthesis to obtain the final output frame 590.
  • the pixel values of the two deformed input frames can be added and the average value is calculated to obtain the average deformed frame.
  • a new output frame 590 is obtained by adding the average deformed frame and the output frame 650, that is, the pixel values of the average deformed frame and the output frame 650 are added to obtain the new output frame 590.
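For completeness, the final update described above is a simple pixel-wise addition; the sketch below writes it out. The residual-prediction reading in the comment is a common interpretation of such an addition, not something stated by the disclosure.

```python
import torch

def merge_output(warped0, warped1, synth_out):
    """Final update of the output frame: the synthesis output 650 is added to
    the average of the two deformed (warped) input frames, so the synthesis
    network effectively refines a simple blend of the warped frames."""
    avg_warped = 0.5 * (warped0 + warped1)      # average deformed frame 581
    return avg_warped + synth_out               # final output frame 590

w0 = torch.rand(1, 3, 256, 256)                 # deformed first input frame
w1 = torch.rand(1, 3, 256, 256)                 # deformed second input frame
res = 0.1 * torch.randn(1, 3, 256, 256)         # stand-in for output frame 650
print(merge_output(w0, w1, res).shape)          # torch.Size([1, 3, 256, 256])
```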
  • the video frame interpolation device 700 includes: a motion estimation module 710, a data optimization module 720, a depth estimation module 730, and an image synthesis module 740.
  • the motion estimation module 710 can be used to obtain two input frames and to obtain, from the two input frames, two initial optical flow maps corresponding to the two input frames; the data optimization module 720 can be used to up-sample the two initial optical flow maps to obtain two target optical flow maps; the depth estimation module 730 can be used to obtain, from the two input frames, the interpolation kernel, the two depth maps respectively corresponding to the two input frames, and the two context feature maps respectively corresponding to the two input frames; the image synthesis module 740 can be used to obtain an output frame with a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel.
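To make the data flow between the four modules concrete, here is a minimal wiring sketch; the sub-modules are injected as callables, and the toy stubs below are placeholders only, standing in for the networks described in this disclosure.

```python
import torch
import torch.nn as nn

class VideoFrameInterpolator(nn.Module):
    """End-to-end wiring of the four modules of device 700; the sub-modules
    are injected as callables so placeholder stubs can stand in for the real
    networks."""
    def __init__(self, motion, optimizer, depth, synthesis):
        super().__init__()
        self.motion = motion          # motion estimation module 710
        self.optimizer = optimizer    # data optimization module 720
        self.depth = depth            # depth estimation module 730
        self.synthesis = synthesis    # image synthesis module 740

    def forward(self, frame0, frame1):
        init_flows = self.motion(frame0, frame1)                    # two initial flow maps
        target_flows = self.optimizer(init_flows, frame0, frame1)   # two target flow maps
        kernel, depths, contexts = self.depth(frame0, frame1)       # kernel, depth maps, context maps
        return self.synthesis(target_flows, depths, contexts, kernel, frame0, frame1)

# Toy stubs so the wiring can be executed; shapes are illustrative only.
motion = lambda a, b: (torch.zeros_like(a[:, :2]), torch.zeros_like(a[:, :2]))
optimizer = lambda flows, a, b: flows
depth = lambda a, b: (torch.ones(a.shape[0], 9, *a.shape[-2:]),
                      (torch.ones_like(a[:, :1]), torch.ones_like(a[:, :1])), (a, b))
synthesis = lambda flows, depths, ctxs, k, a, b: 0.5 * (a + b)

model = VideoFrameInterpolator(motion, optimizer, depth, synthesis)
out = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)   # torch.Size([1, 3, 64, 64])
```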
  • modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • an electronic device capable of implementing the aforementioned video frame insertion.
  • the electronic device 800 according to such an embodiment of the present disclosure will be described below with reference to FIG. 8.
  • the electronic device 800 shown in FIG. 8 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 800 is represented in the form of a general-purpose computing device.
  • the components of the electronic device 800 may include, but are not limited to: the aforementioned at least one processing unit 810, the aforementioned at least one storage unit 820, a bus 830 connecting different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
  • the storage unit stores program code, and the program code can be executed by the processing unit 810, so that the processing unit 810 executes the steps according to various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
  • for example, the processing unit 810 may perform step S110 shown in Fig. 1: obtaining two input frames and obtaining, from the two input frames, two initial optical flow maps corresponding to the two input frames; step S120: optimizing the initial optical flow maps to obtain target optical flow maps; step S130: obtaining, from the two input frames, the interpolation kernel, the two depth maps respectively corresponding to the two input frames, and the two context feature maps respectively corresponding to the two input frames; and step S140: obtaining an output frame with a frame synthesis method according to the target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel.
  • the electronic device can implement the steps shown in FIG. 1.
  • the storage unit 820 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 821 and/or a cache storage unit 822, and may further include a read-only storage unit (ROM) 823.
  • the storage unit 820 may also include a program/utility tool 824 having a set of (at least one) program module 825.
  • such program modules 825 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
  • the bus 830 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
  • the electronic device 800 may also communicate with one or more external devices 870 (such as keyboards, pointing devices, Bluetooth devices, etc.), and may also communicate with one or more devices that enable a user to interact with the electronic device 800, and/or communicate with Any device (eg, router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 850.
  • the electronic device 800 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 860.
  • the network adapter 860 communicates with other modules of the electronic device 800 through the bus 830. It should be understood that although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives And data backup storage system, etc.
  • the exemplary embodiments described here can be implemented by software, or by combining software with necessary hardware; therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • a computer-readable storage medium is also provided, on which a program product capable of implementing the above-mentioned method of this specification is stored.
  • various aspects of the present disclosure may also be implemented in the form of a program product, which includes program code, and when the program product runs on a terminal device, the program code is used to enable the The terminal device executes the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "Exemplary Method" section of this specification.
  • referring to FIG. 9, a program product 900 for implementing the above method according to an embodiment of the present disclosure is described; it can take the form of a portable compact disc read-only memory (CD-ROM), includes program code, and can run on a terminal device such as a personal computer.
  • the program product of the present disclosure is not limited thereto.
  • the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, device, or device.
  • the program product can use any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
  • the program code used to perform the operations of the present disclosure can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • in the case of a remote computing device, the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to the field of information display technology, and in particular to a video frame interpolation method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: obtaining, from two input frames, two initial optical flow maps corresponding to the two input frames; optimizing the initial optical flow maps to obtain target optical flow maps; obtaining an interpolation kernel, two depth maps, and two context feature maps from the two input frames; and obtaining an output frame using a frame synthesis method according to the target optical flow maps, the depth maps, the context feature maps, and the interpolation kernel. The above steps satisfy at least one of the following conditions: performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps; optimizing the initial optical flow maps with pixel-adaptive convolution joint up-sampling according to the two input frames to obtain the target optical flow maps; obtaining the two depth maps from the two input frames using a target depth estimation model; and obtaining the output frame using a pixel-adaptive-convolution frame synthesis method according to the target optical flow maps, the depth maps, the context feature maps, and the interpolation kernel.

Description

视频插帧方法及装置、计算机可读存储介质 技术领域
本公开涉及信息展示技术领域,具体而言,涉及一种视频插帧方法及装置、计算机可读存储介质及电子设备。
背景技术
视频插帧是数字图像处理与计算机视觉领域的一个研究方向,利用视频插帧技术,可以提升视频的帧率。现有技术中的视频插帧方法由多个子模块并联,且各模块的准确率均较低,导致最终插帧的结果将受到各模块准确率的影响,以及最终的合成模块的影响,得到的插帧图像的精度较低。
发明内容
根据本公开的第一方面,提供了一种视频插帧方法,包括:
获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对应的两个初始光流图;
对所述两个初始光流图进行上采样处理得到两个目标光流图;
根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图;
根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用帧合成方法得到输出帧;上述步骤至少满足以下条件之一:
对两个输入帧进行迭代残差光流估计得到与所述两个初始光流图;
根据所述两个输入帧利用像素自适应卷积联合上采样的处理所述两个初始光流图得到两个目标光流图;
根据所述两个输入帧利用目标深度估计模型得到所述两个深度图,所述目标深度估计模型是利用所述两个输入帧的真实景深图生成的参考虚拟表面法线和目标景深图生成的目标虚拟表面法线之间的误差损失来对初始深度估计模型训练得到的;
根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用像素自适应卷积帧合成方法得到输出帧;
根据所述输出帧得到所述两个输入帧的插入帧;
其中所述两个输入帧为多帧视频图像中两个不同时刻的图像帧。
在本公开的一种示例性实施例中,所述对两个输入帧进行迭代残差光流估计 得到所述两个初始光流图,包括:
对所述两个输入帧进行多次光流估计处理;
在所述多次光流估计处理中,利用第N次所述光流估计处理的最终输出更新N+1次光流估计的输入和输出,N为大于等于1的正整数;
将最后一次光流估计处理的最终输出作为所述两个初始光流图。
在本公开的一种示例性实施例中,利用第N次所述光流估计处理的最终输出更新N+1次光流估计的输入和输出,包括:
利用第N次光流估计处理的两个最终输出分别与第一次光流估计的两个输入帧进行相加得到第N+1次光流估计的两个输入;
利用第N次光流估计处理的两个最终输出分别与第N+1次光流估计的两个初始输出进行相加得到第N+1次光流估计处理的最终输出。
在本公开的一种示例性实施例中,根据所述两个输入帧利用像素自适应卷积联合上采样的处理两个初始光流图得到目标光流图,包括:
所述两个输入帧包括第一输入帧和第二输入帧，所述两个初始光流图包括第一初始光流图和第二初始光流图，所述两个目标光流图包括第一目标光流图和第二目标光流图，所述第一输入帧与所述第一初始光流图相对应，所述第二输入帧与所述第二初始光流图相对应；
利用所述第一输入帧为像素自适应卷积联合上采样的引导图,对所述第一初始光流图进行像素自适应卷积联合上采样处理得到所述第一目标光流图;
利用所述第二输入帧为像素自适应卷积联合上采样的引导图,对所述第二初始光流图进行像素自适应卷积联合上采样处理得到所述第二目标光流图。
在本公开的一种示例性实施例中,包括:
对所述第一初始光流图和第二初始光流图分别进行特征提取得到第一参考光流图和第二参考光流图,对所述第一输入帧和第二输入帧分别进行特征提取得到第一参考输入图和第二参考输入图;
以所述第一参考输入图为引导图对所述第一参考光流图进行至少一次联合上采样处理,并进行特征提取得到所述第一目标光流图;
以所述第二参考输入图为引导图对所述第二参考光流图进行至少一次联合上采样处理,并进行特征提取得到所述第二目标光流图。
在本公开的一种示例性实施例中,在所述目标深度估计模型是利用所述两个输入帧的真实景深图生成的参考虚拟表面法线和目标景深图生成的目标虚拟表面法线之间的误差损失来对初始深度估计模型训练得到的中的训练方法包括:
获取所述两个输入帧的真实景深图,并计算所述真实景深图的参考虚拟表面法线;
根据所述两个输入帧利用初始深度估计模型得到目标景深图,并计算所述目 标景深图的目标虚拟表面法线;
根据所述参考虚拟表面法线和所述目标虚拟表面法线的误差损失更新所述初始深度估计模型的参数得到目标深度估计模型。
在本公开的一种示例性实施例中,根据所述目标光流图、深度图、上下文特征图以及插帧核利用像素自适应卷积帧合成方法得到输出帧,包括:
根据两个目标光流图和两个深度图确定两个投影光流图,并获取插帧核、两个变形后的深度图、两个变形后的输入帧以及两个变形后的上下文特征图;
利用像素自适应卷积的帧合成方法将两个投影光流图、所述插帧核、两个变形后的深度图、两个变形后的输入帧以及两个变形后的上下文特征图进行合成得到输出帧。
在本公开的一种示例性实施例中,利用像素自适应卷积的帧合成方法将所述两个投影光流图、所述插帧核、所述两个变形后的深度图、所述两个变形后的输入帧以及两个变形后的上下文特征图进行合成得到输出帧,包括:
将所述两个投影光流图、两个变形后的深度图、两个变形后的输入帧、插帧核以及两个变形后的上下文特征图进行拼接得到一个合成图像;
对所述合成图像进行含像素自适应卷积的帧合成处理得到所述输出帧。
在本公开的一种示例性实施例中,对所述合成输入图像进行含像素自适应卷积的帧合成处理得到所述输出帧包括:
将所述合成图像输入第一残差模块;并以第一残差模块的输出特征图作为第二残差模块的输入和输入引导图,完成帧合成处理得到所述输出帧,
所述第二残差模块包含至少一个残差子模块,至少一个残差子模块包含像素自适应卷积层。
在本公开的一种示例性实施例中,根据所述两个目标光流图和所述两个深度图确定投影光流图,并获取插帧核、两个变形后的深度图、两个变形后的输入帧以及两个变形后的上下文特征图,包括:
根据两个深度图分别对两个目标光流图进行深度感知光流投影处理得到所述投影光流图;
对所述两个输入帧进行时空上下文特征提取处理得到两个上下文特征图,并对所述两个输入帧进行插帧核估计处理得到插帧核;
根据所述投影光流图和所述插帧核对所述两个输入帧、所述两个深度图、所述两个上下文特征图进行自适应变形处理得到所述两个变形后的深度图、所述两个变形后的输入帧以及所述两个变形后的上下文特征图。
在本公开的一种示例性实施例中,根据所述输出帧得到所述两个输入帧的插入帧,包括:
获取两个变形后输入帧的平均变形帧,并利用所述平均变形帧更新所述输出 帧;
将更新后的输出帧作为所述插入帧。
在本公开的一种示例性实施例中,利用所述平均变形帧更新所述输出帧,包括:
将所述平均变形帧和所述输出帧进行相加得到所述插入帧。
根据本公开的一个方面,提供一种视频插帧装置,包括:
运动估计模块,用于获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对应的两个初始光流图;
数据优化模块,用于对所述两个初始光流图进行上采样处理得到两个目标光流图;
深度估计模块,用于根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图;
图像合成模块,根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用帧合成方法得到输出帧;
上述模块至少满足以下条件之一:
根据所述两个输入帧利用目标深度估计模型得到所述两个深度图,所述目标深度估计模型是利用所述两个输入帧的真实景深图生成的参考虚拟表面法线和目标景深图生成的目标虚拟表面法线之间的误差损失来对初始深度估计模型训练得到的;
根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用像素自适应卷积帧合成方法得到输出帧;
根据所述输出帧得到所述两个输入帧的插入帧;
其中所述两个输入帧为多帧视频图像中两个不同时刻的图像帧。
根据本公开的一个方面,提供一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行时实现如上述任意一项所述的视频插帧方法。
根据本公开的一个方面,提供一种电子设备,包括:
处理器;以及
存储器,用于存储一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如上述任意一项所述的视频插帧方法。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性 劳动的前提下,还可以根据这些附图获得其他的附图。在附图中:
图1示意性示出本公开示例性实施例中视频插帧方法的流程图;
图2示意性示出本公开示例性实施例中光流估计处理的框架图;
图3示意性示出本公开示例性实施例中像素自适应卷积联合上采样模块的框架图;
图4示意性示出本公开示例性实施例中虚拟表面法线集合约束的单目深度估计得框架图;
图5示意性示出本公开示例性实施例中视频插帧方法的整体框架图;
图6示意性示出本公开示例性实施例中含像素自适应卷积的帧合成模块的框架图
图7示意性示出本公开示例性实施例中一种视频插帧装置的组成示意图;
图8示意性示出了适于用来实现本公开示例性实施例的电子设备的计算机系统的结构示意图;
图9示意性示出了根据本公开的一些实施例的计算机可读存储介质的示意图。
具体实施方式
现在将参照附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施方式使得本公开将更加全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施方式中。
此外,附图仅为本公开的示意性图解,并非一定是按比例绘制。图中相同的附图标记表示相同或类似的部分,因而将省略对它们的重复描述。附图中所示的一些方框图是功能实体,不一定必须与物理或逻辑上独立的实体相对应。可以采用软件形式来实现这些功能实体,或在一个或多个硬件模块或集成电路中实现这些功能实体,或在不同网络和/或处理器装置和/或微控制器装置中实现这些功能实体。
在本示例性实施例中,首先提供了一种视频插帧方法,参照图1中所示,上述的视频插帧方法可以包括以下步骤:
S110,获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对应的两个初始光流图;
S120,对所述两个初始光流图进行上采样处理得到两个目标光流图;
S130,根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图;
S140,根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用帧合成方法得到输出帧;
其中,上述步骤至少满足以下条件之一:
对两个输入帧进行迭代残差光流估计得到与所述两个输入帧分别对应的两个初始光流图;
根据所述两个输入帧利用像素自适应卷积联合上采样的处理所述两个初始光流图得到两个目标光流图;
根据所述两个输入帧利用目标深度估计模型得到所述两个深度图,所述目标深度估计模型是利用所述两个输入帧的真实景深图生成的参考虚拟表面法线和目标景深图生成的目标虚拟表面法线之间的误差损失来对初始深度估计模型训练得到的;
根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用像素自适应卷积帧合成方法得到输出帧;
根据所述输出帧得到所述两个输入帧的插入帧;
其中所述两个输入帧为多帧视频图像中两个不同时刻的图像帧。需要说明的是此处的插入帧指的是在可以插入到两个输入帧之间的图像帧,能够减少视频运动模糊,提升视频质量。
根据本示例性实施例中所提供的视频插帧方法中,相较于现有技术,一方面,采用通过迭代残差细化光流预测对相邻两个输入帧进行运动估计得到初始光流图初步提升了插帧结果的精度,另一方面,根据输入帧利用像素自适应卷积联合上采样的处理初始光流图得到目标光流图,进一步提升了插帧结果的精度,再一方面,通过虚拟表面法线几何约束的深度估计方法,进行深度预测,并结合深度预测对目标光流图进行投影;在合成模块中,利用像素自适应卷积提升插帧结果,能够很好的提升插帧结果的质量,得到的插帧结果精度较高,可应用于视频增强,视频后期处理的升格慢动作特效,扩大了视频插帧方法的可使用场景。
下面,将结合附图及实施例对本示例性实施例中的视频插帧方法的各个步骤进行更详细的说明。
在步骤S110中,获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对 应的两个初始光流图。
在本公开的一种示例实施例中,获取的两个输入帧可以分别是第一输入帧和第二输入帧,然后根据第一输入帧和第二输入帧做光流估计得到第一初始光流图和第二初始光流图,其中第一初始光流图可以和第一输入帧相对应,第二初始光流图可以和第二输入帧相对应。
在本示例实施方式中,可以使用PWC-Net(CNNs for Optical Flow Using Pyramid,Warping,and Cost Volume)通过训练得到的新模型来对上述两个输入帧进行光流估计,也可以通过其他模型来对上述两个输入帧进行光流估计,在本示例实施方式中不做具体限定。
在本公开的一种示例实施方式中,可以只对上述的第一输入帧和第二输入帧进行一次光流估计得到第一初始光流图和第二初始光流图。
在本公开的另一种示例实施方式中,参照图2所示,可通过迭代残差细化光流预测对相邻两个输入帧进行运动估计得到初始光流图,具体而言,可以以上述第一输入帧和第二输入帧作为输入210进行多次光流估计220处理,在所述多次光流估计处理中,采用第N次光流估计220处理的最终输出230,更新第N+1次的光流估计的输入210以及输出,N可以为1,也可以为如2、3、4等大于等于1的正整数,在本示例实施方式中不做具体限定,N不能超过光流估计的最大次数。
具体而言,利用第N次所述光流估计处理的最终输出更新N+1次光流估计的输入和输出,包括:可以利用第N次所述光流估计处理的两个最终输出分别与第一次光流估计的两个输入进行相加得到第N+1次光流估计的输入;利用N次所述光流估计处理的最终输出与N+1次光流估计的初始输出进行相加得到第N+1次光流估计的最终输出,可以将最后一次光流估计处理的最终输出作为所述初始光流图。
在本示例实施方式中,以N取值为1为例进行详细说明,服务器可以将第一次光流估计的输出230反馈至第2次光流估计的输入210,即第二次光流估计的输入可以由第一次光流估计的两个输出分别于第一输入帧和第二输入相加得到,即将第一次光流估计的两个输出的像素值分别于第一输入帧和第二输入帧的像素值进行相加,得到第二次光流估计的输入,同时在经过第二次光流估计处理后,利用第一次光流估计的第一输出更新第二次光流估计的第二初始输出。得到第二目标输出,即将第一输出与第二初始输出的像素值进行相加得到第二目标输出,其中第二初始输出是有第二次光流估计的输入经过光流估计处理之后得到的。
在步骤S120中,对所述两个初始光流图上采样处理得到两个目标光流图。
在本公开的第一种示例实施方式中,服务器可以直接对上述两个初始光流图分别进行特征提取,在完成特征提取后进行至少一次上采样处理来的得到目标光流图。
在本示例实施方式中的上采样处理中的卷积层的运算如下所示:
Figure PCTCN2020093530-appb-000001
其中,i可以表示像素i,v l可以表示为卷积神经网络中第l层特征图,Ω(i)可以表示为像素i周围的卷积窗口,W l可以用于表示卷积神经网络第l层的卷积核,p i可以用于表示像素坐标,b l可以用于表示卷积神经网络第l层的偏置项。
在本公开的另外一种示例实施方式中,参照图3所示,可以将初始光流图310通过卷积层320进行特征提取得到参考光流图,同时可以对输入帧311也采用相同的卷积层320进行特征提取得到参考输入图,之后可以以参考输入图为约束对参考光流图进行多次像素自适应卷积联合上采样330得到目标光流图340。
具体而言,服务器可以对第一初始光流图和第二初始光流图分别进行特征提取得到第一参考光流图和第二参考光流图,对第一输入帧和第二输入帧分别进行特征提取得到第一参考输入图和第二参考输入图;然后可以以第一参考输入图为引导图对所述第一参考光流图进行至少一次像素自适应卷积联合上采样处理,并进行特征提取得到第一目标光流图;以所述第二参考输入图为引导图对所述第二参考光流图进行至少一次像素自适应卷积联合上采样处理,并进行特征提取得到所述第二目标光流图。
在本示例实施方式中,如图3所示,可以对上述参考光流图进行两次像素自适应卷积联合上采样330。此外,也可以对上述参考光流图进行三次、四次或更多次的像素自适应卷积联合上采样330,可以根据目标光流图和两个输入帧之间的尺寸关系以及像素自适应上采样的倍数来确定上采样的次数。在本示例实施方式中不做具体限定。
在本示例实施方式中,对上述参考光流图进行每一次的像素自适应卷积联合上采样330均需要上述参考输入图作为引导图,即对上述像素自适应卷积上采样增加约束条件。在对上述参考光流图进行多次像素自适应卷积联合上采样330之后,可以对输出结果采用卷积层320在进行一次特征提取得到目标光流图340,提高初始光流图310的精度,完成对初始光流图310的优化。
具体而言,像素自适应卷积是在普通卷积的基础上,乘以一个由引导特征图f得到的自适应核函数K,即在像素自适应卷积上采样中卷积运算如下:
Figure PCTCN2020093530-appb-000002
其中,i可以表示像素i,v l可以表示为卷积神经网络中第l层特征图,Ω(i)可以表示为像素i周围的卷积窗口,W l可以用于表示卷积神经网络第l层的卷积核,p i和p j可以用于表示像素坐标,b l可以用于表示卷积神经网络第l层的偏置项。f i和f j可以表示引导特征图,具体为像素j是以像素i为中心预设距离内的像素点,其中预设距离可以根据需求进行自定义,在本示例实施方式中不做具体限定。
在本示例实施方式中，上述光流估计后得到的初始光流图的分辨率是输入帧的四分之一，因此，在本示例实施方式中可以进行两次采样倍数为2倍的像素自适应卷积联合上采样，或者进行一次上采样倍数为4倍的像素自适应卷积联合上采样，在本示例实施方式中不做具体限定，采用像素自适应卷积联合上采样时可以引入参考光流图作为引导图，进而提升上采样的精度。
在步骤S130中,根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图。
在本公开的一种示例实施方式中,可以使用初始深度估计模块得到深度图。并根据第一输入帧和第二输入帧得到插帧核和第一上下文特征图以及第二上下文特征图。
在本示例实施方式中,可以采用经过预训练模型来完成对两个输入帧的时空上下文特征提取,可以将模型中间任意一层的特征图作为得到的两个上下文特征图,上述与训练模型可以是VGG模型,也可以是残差网络,在本示例实施方式中不做具体限定。
在本公开的另一种示例实施方式中,可以首先对初始深度估计模型进行训练得到目标深度估计模型,然后利用深度估计模型分别计算第一输入帧和第二输入帧分别对应的第一深度图和第二深度图。
在本示例实施方式中,可以使用单目深度模型MegaDepth的预训练模型作为上述初始深度估计模型,也可以才用其他预训练模型作为初始深度估计模型,在本示例实施方式中不做具体限定。
具体而言,对初始深度估计模型进行训练的方法包括:首先获取两个输入帧的真实景深图,并对真实景深图进行三维(3D)点云的计算,具体而言,将二维的景深图转变为三维图,即可较为简单的得到三维(3D)点云;然后可以根据3D点云生成参考虚拟表面法线,然后参照图4所示,服务器可以将输入帧410输入初始深度估计模型420得到目标景深图430,然后对上述目标景深图430进行3D点云440的计算,并根据3D点云440生成目标虚拟表面法线450,然后根据目标虚拟表面法线和参考虚拟表面法线之间 的误差损失更新初始深度估计模型中的参数得到目标深度估计模型。具体而言,调整初始深度估计模型中的参数以使得上述误差损失达到最小,将上述误差损失最小时的初始深度估计模型最为目标深度估计模型。
在本示例实施方式中可以分别将第一输入帧和第二输入帧输入到目标深度估计模型中得到第一深度图和第二深度图。
在步骤S140中,根据所述目标光流图和所述深度图确定投影光流图,并获取插帧核、变形后的深度图、变形后的输入帧以及变形后的上下文特征图。
在本公开的一种示例实施例中,参照图5所示,服务器可以首先将两个输入帧经过光流估计模块521、像素自适应卷积联合上采样模块530得到的目标光流图,然后可以将输入帧510经过虚拟表面法线集合约束的单目深度估计522得到的深度图;对目标光流图和深度图利用深度感知光流投影540得到投影光流图。
其中,关于光流估计521的相关说明,上述已经参照图2进行了详细的说明,因此此除不再赘述,关于像素自适应卷积联合上采样模块530的相关内容上述已经参照图3进行了详细说明,关于虚拟表面法线几何约束的单目深度估计522上述已经参照考图4进行了详细说明,因此此处均不再赘述。
在本示例实施方式中,可以以第一深度图对第一目标光流图进行深度感知光流投影处理得到第一投影光流图,利用第二深度图对第二目标光流进行深度感知光流投影处理得到第二投影光流图。
具体而言,可以将上述第一输入帧的时间定义为第0时刻,将第二输入帧的时间定义为第1时刻,且定义一个t时刻,该t时刻位于第一时刻和第二时刻之间,通过以下公式即可计算上述投影光流图:
Figure PCTCN2020093530-appb-000003
Figure PCTCN2020093530-appb-000004
其中,F 0→1(y)表示像素点y从第一输入帧到第二输入帧的光流;D 0(y)像素点y的深度值;y∈S(x)表示像素点y的光流F 0→1(y),如果光流F_(0→1)(y)在t时刻经过像素点x,则可将F t→0(x)近似成-t F(0→1)(y);F t→0(x)表示像素点x从t时刻到到第一输入帧的光流。
在本示例实施方式中,服务器可以将上述两个输入帧510分别经过时空上下文特征提取523得到两个上下文特征图,并对两个输入帧进行插帧核估计524得到插帧核,并利用插帧核对上述两个输入帧、两个深度图、以及两个上下文特征图进行自适应变形550,得到两个变形后输入帧、两个变形后深度图以及两个变形后上下文特征图。
在本示例实施方式中,深度估计可以采用沙漏模型,上下文特征提取采用预训练的ResNet神经网络,核估计和自适应变形层基于U-Net神经网络,在本示例实施方式中不做具体限定。
在本示例实施方式中,可以使用深度学习经典主干网络,根据两个输入帧,生成每个像素位置的插帧核,并在自适应变形层,根据插帧核和投影光流图对两个深度图、两个输入帧、两个上下文特征图进行变形得到两个变形后输入帧、两个变形后深度图以及两个变形后上下文特征图。
在本公开的一种示例实施例中,参照图5所示,服务器将插帧核、投影光流图、变形后输入帧、变形后深度图以及变形后上下文特征图进行叠加560得到合成图像。
在本示例实施方式中,参照图6所示,服务器将合成图像610经过输入层620输入到残差网络,并将残差网络中第一残差模块630的输出特征图像作为第二残差模块的特征引导图和第二残差模块的输入,为了能够将特征引导图输入,将除第一个残差模块即第一残差模块之外的其他残差模块中的卷积层替换为像素自适应卷积层,进而形成第二残差模块,第二残差模块可以包括至少一个残差子模块640,其中所述至少一个残差子模块640包含像素自适应卷积层,所述残差子模块可以是像素自适卷积残差块;
具体而言,第一残差模块中的卷积层可以为
Figure PCTCN2020093530-appb-000005
其中,i可以表示像素i,v l可以表示为卷积神经网络中第l层特征图,Ω(i)可以表示为像素i周围的卷积窗口,W l可以用于表示卷积神经网络第l层的卷积核,p i和p j可以用于表示像素坐标,b l可以用于表示卷积神经网络第l层的偏置项。
采用像素自适应卷积层替换掉上述卷积层得到第二残差模块,像素自适应卷积层为:
Figure PCTCN2020093530-appb-000006
其中,i可以表示像素i,v l可以表示为卷积神经网络中第l层特征图,Ω(i)可以表示为像素i周围的卷积窗口,W l可以用于表示卷积神经网络第l层的卷积核,p i和p j可以用于表示像素坐标,b l可以用于表示卷积神经网络第l层的偏置项。f i和f j可以表示引导特征图,具体为像素j是以像素i为中心预设距离内的像素点,其中预设距离可以根据需求进行自定义,在本示例实施方式中不做具体限定。
像素自适应卷积层是在普通卷积层的基础上,乘以一个由引导特征图f得到的自适应核函数K。
在本示例实施方式中,将由第一残差模块630输出的特征图像作为第二残差模块的 引导图,即根据特征图像对像素自适应残差块中的像素自适应卷积层添加新的约束条件,以使得能够获取更高精度的输出帧。
在本示例实施方式中,残差网络中的残差块的数量可以是多个,如2个、3个、4个或更多,在本示例实施方式中不做具体限定。
在本公开的一种示例实施方式中,再次参照图5所示,服务器还可以获取两个变形后输入帧的平均变形帧581,并通过平均变形帧581更新输出帧590(即最终输出帧,也是插入帧),可以是首先根据输入帧计算平均变形帧,然后将平均变形帧和上述有含像素自适应卷积的帧合成得到的输出帧650进行拼接得到最终输出帧590。
具体而言，可以将两个变形后输入帧的像素值进行相加并求得平均值来计算得到平均变形帧。利用平均变形帧和输出帧650进行相加得到新的输出帧590，即将平均变形帧和输出帧650的像素值进行相加得到新的输出帧590。
以下介绍本公开的装置实施例,可以用于执行本公开上述的视频插帧方法。此外,在本公开的示例性实施方式中,还提供了一种视频插帧装置。参照图7所示,所述视频插帧装置700包括:运动估计模块710,数据优化模块720,深度估计模块730和图像合成模块740。
其中,所述运动估计模块710可以用于获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对应的两个初始光流图;数据优化模块720可以用于对所述两个初始光流图进行上采样处理得到两个目标光流图;深度估计模块730可以用于根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图;图像合成模块740可以用于根据所述两个目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用帧合成方法得到输出帧。
由于本公开的示例实施例的视频插帧装置的各个功能模块与上述视频插帧方法的示例实施例的步骤对应,因此对于本公开装置实施例中未披露的细节,请参照本公开上述的视频插帧方法的实施例。
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本公开的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。
此外,在本公开的示例性实施例中,还提供了一种能够实现上述视频插帧的电子设备。
所属技术领域的技术人员能够理解,本公开的各个方面可以实现为系统、方法或程序产品。因此,本公开的各个方面可以具体实现为以下形式,即:完全的硬件实施例、完全的软件实施例(包括固件、微代码等),或硬件和软件方面结合的实施例,这里可以统称为“电路”、“模块”或“系统”。
下面参照图8来描述根据本公开的这种实施例的电子设备800。图8显示的电子设备800仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图8所示,电子设备800以通用计算设备的形式表现。电子设备800的组件可以包括但不限于:上述至少一个处理单元810、上述至少一个存储单元820、连接不同系统组件(包括存储单元820和处理单元810)的总线830、显示单元840。
其中,所述存储单元存储有程序代码,所述程序代码可以被所述处理单元810执行,使得所述处理单元810执行本说明书上述“示例性方法”部分中描述的根据本公开各种示例性实施例的步骤。例如,所述处理单元810可以执行如图1中所示的步骤S110:获取两个输入帧并根据所述两个输入帧得到与所述两个输入帧对应的两个初始光流图;S120:对所述初始光流图进行优化得到目标光流图;S130:根据所述两个输入帧得到插帧核、所述两个输入帧分别对应的两个深度图以及所述两个输入帧分别对应的两个上下文特征图;S140:根据所述目标光流图、所述两个深度图、所述两个上下文特征图以及所述插帧核利用帧合成方法得到输出帧。
又如,所述的电子设备可以实现如图1所示的各个步骤。
存储单元820可以包括易失性存储单元形式的可读介质,例如随机存取存储单元(RAM)821和/或高速缓存存储单元822,还可以进一步包括只读存储单元(ROM)823。
存储单元820还可以包括具有一组(至少一个)程序模块825的程序/实用工具824,这样的程序模块825包括但不限于:操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。
总线830可以为表示几类总线结构中的一种或多种,包括存储单元总线或者存储单元控制器、外围总线、图形加速端口、处理单元或者使用多种总线结构中的任意总线结构的局域总线。
电子设备800也可以与一个或多个外部设备870(例如键盘、指向设备、蓝牙设备等)通信,还可与一个或者多个使得用户能与该电子设备800交互的设备通信,和/或与使得该电子设备800能与一个或多个其它计算设备进行通信的任何设备(例如路由器、 调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口850进行。并且,电子设备800还可以通过网络适配器860与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图所示,网络适配器860通过总线830与电子设备800的其它模块通信。应当明白,尽管图中未示出,可以结合电子设备800使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and which includes a number of instructions that cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the methods according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the methods described above in this specification is stored. In some possible embodiments, various aspects of the present disclosure may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification.
Referring to FIG. 9, a program product 900 for implementing the above method according to an embodiment of the present disclosure is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in combination with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
Program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Furthermore, the above drawings are merely schematic illustrations of the processes included in the methods according to the exemplary embodiments of the present disclosure and are not intended to be limiting. It is readily understood that the processes shown in the above drawings do not indicate or restrict the temporal order of these processes. It is also readily understood that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

  1. A video frame interpolation method, comprising:
    acquiring two input frames and obtaining, from the two input frames, two initial optical flow maps corresponding to the two input frames;
    upsampling the two initial optical flow maps to obtain two target optical flow maps;
    obtaining, from the two input frames, interpolation kernels, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames;
    obtaining an output frame by a frame synthesis method from the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernels; wherein the above steps satisfy at least one of the following conditions:
    performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps;
    processing the two initial optical flow maps by pixel-adaptive convolution joint upsampling based on the two input frames to obtain the two target optical flow maps;
    obtaining the two depth maps from the two input frames using a target depth estimation model, the target depth estimation model being obtained by training an initial depth estimation model with an error loss between reference virtual surface normals generated from ground-truth depth maps of the two input frames and target virtual surface normals generated from target depth maps;
    obtaining the output frame by a pixel-adaptive convolution frame synthesis method from the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernels;
    obtaining an interpolated frame for the two input frames from the output frame;
    wherein the two input frames are image frames at two different moments in a multi-frame video image.
  2. The method according to claim 1, wherein performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps comprises:
    performing optical flow estimation processing on the two input frames multiple times;
    wherein, in the multiple optical flow estimation processes, the final output of the N-th optical flow estimation process is used to update the input and output of the (N+1)-th optical flow estimation, N being a positive integer greater than or equal to 1;
    taking the final output of the last optical flow estimation process as the two initial optical flow maps.
  3. The method according to claim 2, wherein using the output of the N-th optical flow estimation process to update the input and output of the (N+1)-th optical flow estimation comprises:
    adding the two final outputs of the N-th optical flow estimation process respectively to the two input frames of the first optical flow estimation to obtain the two inputs of the (N+1)-th optical flow estimation;
    adding the two final outputs of the N-th optical flow estimation process respectively to the two initial outputs of the (N+1)-th optical flow estimation to obtain the final output of the (N+1)-th optical flow estimation process.
  4. The method according to claim 1, wherein processing the two initial optical flow maps by pixel-adaptive convolution joint upsampling based on the two input frames to obtain the target optical flow maps comprises:
    the two input frames comprising a first input frame and a second input frame, the two initial optical flow maps comprising a first initial optical flow map and a second initial optical flow map, and the two target optical flow maps comprising a first target optical flow map and a second target optical flow map, wherein the first input frame corresponds to the first initial optical flow map and the second input frame corresponds to the second initial optical flow map;
    performing pixel-adaptive convolution joint upsampling on the first initial optical flow map, using the first input frame as the guide map for the pixel-adaptive convolution joint upsampling, to obtain the first target optical flow map;
    performing pixel-adaptive convolution joint upsampling on the second initial optical flow map, using the second input frame as the guide map for the pixel-adaptive convolution joint upsampling, to obtain the second target optical flow map.
  5. The method according to claim 4, comprising:
    performing feature extraction on the first initial optical flow map and the second initial optical flow map respectively to obtain a first reference optical flow map and a second reference optical flow map, and performing feature extraction on the first input frame and the second input frame respectively to obtain a first reference input map and a second reference input map;
    performing joint upsampling at least once on the first reference optical flow map with the first reference input map as the guide map, and performing feature extraction to obtain the first target optical flow map;
    performing joint upsampling at least once on the second reference optical flow map with the second reference input map as the guide map, and performing feature extraction to obtain the second target optical flow map.
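Purely as an illustration of claims 4 and 5, the sketch below upsamples an initial flow map and refines it with a pixel-adaptive convolution guided by the corresponding full-resolution input frame, reusing the PixelAdaptiveConv2d sketch from the description. The bilinear pre-upsampling, the rescaling of flow magnitudes by the spatial factor, and the single refinement pass are assumptions, not details taken from the claims.

```python
import torch.nn.functional as F

def guided_joint_upsample(init_flow, frame, pac_layer, scale=4):
    """Sketch of pixel-adaptive convolution joint upsampling (claims 4-5).

    init_flow: (N, 2, h, w) low-resolution initial flow; frame: (N, 3, H, W)
    full-resolution input frame used as the guide; pac_layer: a pixel-adaptive
    convolution with 2 input and 2 output channels (assumed interface).
    """
    up = F.interpolate(init_flow, scale_factor=scale,
                       mode="bilinear", align_corners=False) * scale
    return pac_layer(up, frame)   # refine, guided by the input frame
```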
  6. The method according to claim 1, wherein the training by which the target depth estimation model is obtained, i.e., training the initial depth estimation model with the error loss between the reference virtual surface normals generated from the ground-truth depth maps of the two input frames and the target virtual surface normals generated from the target depth maps, comprises:
    acquiring the ground-truth depth maps of the two input frames, and computing the reference virtual surface normals of the ground-truth depth maps;
    obtaining the target depth maps from the two input frames using the initial depth estimation model, and computing the target virtual surface normals of the target depth maps;
    updating the parameters of the initial depth estimation model according to the error loss between the reference virtual surface normals and the target virtual surface normals to obtain the target depth estimation model.
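For claim 6, the sketch below shows one common way to form virtual surface normals from a depth map (via image-space depth gradients) and to penalize the discrepancy between the reference and target normals with an L1 loss; both the normal construction and the choice of L1 are assumptions, as the claim does not fix them.

```python
import torch
import torch.nn.functional as F

def virtual_surface_normals(depth):
    """Virtual surface normals from a depth map (N, 1, H, W), built from
    image-space depth gradients (one common construction, assumed here)."""
    dzdx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1, 0, 0))
    dzdy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    normals = torch.cat((-dzdx, -dzdy, torch.ones_like(depth)), dim=1)
    return F.normalize(normals, dim=1)

def surface_normal_loss(target_depth, ground_truth_depth):
    """Error loss between target and reference virtual surface normals."""
    return (virtual_surface_normals(target_depth)
            - virtual_surface_normals(ground_truth_depth)).abs().mean()
```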
  7. The method according to claim 1, wherein obtaining the output frame by the pixel-adaptive convolution frame synthesis method from the target optical flow maps, the depth maps, the context feature maps, and the interpolation kernels comprises:
    determining two projected optical flow maps from the two target optical flow maps and the two depth maps, and obtaining the interpolation kernels, two warped depth maps, two warped input frames, and two warped context feature maps;
    synthesizing the two projected optical flow maps, the interpolation kernels, the two warped depth maps, the two warped input frames, and the two warped context feature maps by a pixel-adaptive convolution frame synthesis method to obtain the output frame.
  8. The method according to claim 7, wherein synthesizing the two projected optical flow maps, the interpolation kernels, the two warped depth maps, the two warped input frames, and the two warped context feature maps by the pixel-adaptive convolution frame synthesis method to obtain the output frame comprises:
    concatenating the two projected optical flow maps, the two warped depth maps, the two warped input frames, the interpolation kernels, and the two warped context feature maps to obtain a composite image;
    performing frame synthesis processing containing pixel-adaptive convolution on the composite image to obtain the output frame.
  9. The method according to claim 8, wherein performing the frame synthesis processing containing pixel-adaptive convolution on the composite image to obtain the output frame comprises:
    inputting the composite image into a first residual module; and using the output feature map of the first residual module as the input and the input guide map of a second residual module to complete the frame synthesis processing and obtain the output frame,
    wherein the second residual module comprises at least one residual sub-module, at least one residual sub-module containing a pixel-adaptive convolutional layer.
  10. The method according to claim 7, wherein determining the projected optical flow maps from the two target optical flow maps and the two depth maps and obtaining the interpolation kernels, the two warped depth maps, the two warped input frames, and the two warped context feature maps comprises:
    performing depth-aware optical flow projection on the two target optical flow maps according to the two depth maps respectively to obtain the projected optical flow maps;
    performing spatio-temporal context feature extraction on the two input frames to obtain the two context feature maps, and performing interpolation kernel estimation on the two input frames to obtain the interpolation kernels;
    performing adaptive warping on the two input frames, the two depth maps, and the two context feature maps according to the projected optical flow maps and the interpolation kernels to obtain the two warped depth maps, the two warped input frames, and the two warped context feature maps.
  11. The method according to claim 7, wherein obtaining the interpolated frame for the two input frames from the output frame comprises:
    obtaining an average warped frame of the two warped input frames, and updating the output frame using the average warped frame;
    taking the updated output frame as the interpolated frame.
  12. The method according to claim 11, wherein updating the output frame using the average warped frame comprises:
    adding the average warped frame and the output frame to obtain the interpolated frame.
  13. A video frame interpolation apparatus, comprising:
    a motion estimation module, configured to acquire two input frames and obtain, from the two input frames, two initial optical flow maps corresponding to the two input frames;
    a data optimization module, configured to upsample the two initial optical flow maps to obtain two target optical flow maps;
    a depth estimation module, configured to obtain, from the two input frames, interpolation kernels, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames;
    an image synthesis module, configured to obtain an output frame by a frame synthesis method from the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernels;
    wherein the above modules satisfy at least one of the following conditions:
    the two depth maps are obtained from the two input frames using a target depth estimation model, the target depth estimation model being obtained by training an initial depth estimation model with an error loss between reference virtual surface normals generated from ground-truth depth maps of the two input frames and target virtual surface normals generated from target depth maps;
    the output frame is obtained by a pixel-adaptive convolution frame synthesis method from the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernels;
    an interpolated frame for the two input frames is obtained from the output frame;
    wherein the two input frames are image frames at two different moments in a multi-frame video image.
  14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the video frame interpolation method according to any one of claims 1 to 12.
  15. An electronic device, comprising:
    a processor; and
    a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video frame interpolation method according to any one of claims 1 to 12.
PCT/CN2020/093530 2020-05-29 2020-05-29 Video frame interpolation method and apparatus, and computer-readable storage medium WO2021237743A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/093530 WO2021237743A1 (zh) 2020-05-29 2020-05-29 Video frame interpolation method and apparatus, and computer-readable storage medium
US17/278,403 US11800053B2 (en) 2020-05-29 2020-05-29 Method, device and computer readable storage medium for video frame interpolation
CN202080000871.7A CN114073071B (zh) 2020-05-29 2020-05-29 Video frame interpolation method and apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093530 WO2021237743A1 (zh) 2020-05-29 2020-05-29 Video frame interpolation method and apparatus, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021237743A1 true WO2021237743A1 (zh) 2021-12-02

Family

ID=78745417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093530 WO2021237743A1 (zh) 2020-05-29 2020-05-29 Video frame interpolation method and apparatus, and computer-readable storage medium

Country Status (3)

Country Link
US (1) US11800053B2 (zh)
CN (1) CN114073071B (zh)
WO (1) WO2021237743A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11871145B2 (en) * 2021-04-06 2024-01-09 Adobe Inc. Optimization of adaptive convolutions for video frame interpolation
US11640668B2 (en) 2021-06-10 2023-05-02 Qualcomm Incorporated Volumetric sampling with correlative characterization for dense estimation
CN116546183B (zh) * 2023-04-06 2024-03-22 华中科技大学 Dynamic image generation method and system with parallax effect based on a single-frame image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151474A (zh) * 2018-08-23 2019-01-04 复旦大学 Method for generating new video frames
US20190138889A1 (en) * 2017-11-06 2019-05-09 Nvidia Corporation Multi-frame video interpolation using optical flow
WO2019168765A1 (en) * 2018-02-27 2019-09-06 Portland State University Context-aware synthesis for video frame interpolation
CN110351511A (zh) * 2019-06-28 2019-10-18 上海交通大学 Video frame rate up-conversion system and method based on scene depth estimation
CN110392282A (zh) * 2018-04-18 2019-10-29 优酷网络技术(北京)有限公司 Video frame interpolation method, computer storage medium and server
CN110738697A (zh) * 2019-10-10 2020-01-31 福州大学 Monocular depth estimation method based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010065344A1 (en) * 2008-11-25 2010-06-10 Refocus Imaging, Inc. System of and method for video refocusing
US10839573B2 (en) * 2016-03-22 2020-11-17 Adobe Inc. Apparatus, systems, and methods for integrating digital media content into other digital media content
US10091435B2 (en) * 2016-06-07 2018-10-02 Disney Enterprises, Inc. Video segmentation from an uncalibrated camera array
CN109145922B (zh) 2018-09-10 2022-03-29 成都品果科技有限公司 Automatic image matting system
CN109379550B (zh) * 2018-09-12 2020-04-17 上海交通大学 Video frame rate up-conversion method and system based on convolutional neural network
CN110913230A (zh) * 2019-11-29 2020-03-24 合肥图鸭信息科技有限公司 Video frame prediction method, apparatus and terminal device


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745545A (zh) * 2022-04-11 2022-07-12 北京字节跳动网络技术有限公司 Video frame interpolation method, apparatus, device and medium
CN115661304A (zh) * 2022-10-11 2023-01-31 北京汉仪创新科技股份有限公司 Font library generation method based on frame interpolation, electronic device, storage medium and system
CN115661304B (zh) * 2022-10-11 2024-05-03 北京汉仪创新科技股份有限公司 Font library generation method based on frame interpolation, electronic device, storage medium and system

Also Published As

Publication number Publication date
CN114073071B (zh) 2023-12-05
US11800053B2 (en) 2023-10-24
US20220201242A1 (en) 2022-06-23
CN114073071A (zh) 2022-02-18

Similar Documents

Publication Publication Date Title
WO2021237743A1 (zh) Video frame interpolation method and apparatus, and computer-readable storage medium
WO2019205852A1 (zh) Method and apparatus for determining the pose of an image capture device, and storage medium therefor
US8923392B2 (en) Methods and apparatus for face fitting and editing applications
CN112561978B (zh) Training method for a depth estimation network, image depth estimation method, and device
CN110298319B (zh) Image synthesis method and apparatus
CN112862877B (zh) Method and apparatus for training an image processing network and for image processing
CN113129352A (zh) Sparse light field reconstruction method and apparatus
CN115578515B (zh) Training method for a three-dimensional reconstruction model, and three-dimensional scene rendering method and apparatus
CN113780326A (zh) Image processing method, apparatus, storage medium and electronic device
US11836836B2 (en) Methods and apparatuses for generating model and generating 3D animation, devices and storage mediums
CN113379877B (zh) Face video generation method and apparatus, electronic device and storage medium
WO2020092051A1 (en) Rolling shutter rectification in images/videos using convolutional neural networks with applications to sfm/slam with rolling shutter images/videos
US20240177394A1 (en) Motion vector optimization for multiple refractive and reflective interfaces
US8891857B2 (en) Concave surface modeling in image-based visual hull
CN111833391A (zh) Method and apparatus for estimating image depth information
CN115272575B (zh) Image generation method and apparatus, storage medium and electronic device
US11741671B2 (en) Three-dimensional scene recreation using depth fusion
CN112348939A (zh) Texture optimization method and apparatus for three-dimensional reconstruction
US11481871B2 (en) Image-guided depth propagation for space-warping images
CN116385643B (zh) Virtual avatar generation and model training method, apparatus and electronic device
CN113628190B (zh) Depth map denoising method and apparatus, electronic device and medium
US20230140006A1 (en) Electronic apparatus and controlling method thereof
CN116310408B (zh) Method and apparatus for establishing data association between an event camera and a frame camera
US20230177722A1 (en) Apparatus and method with object posture estimating
US20240029341A1 (en) Method, electronic device, and computer program product for rendering target scene

Legal Events

Code  Title / Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 20938026; Country of ref document: EP; Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE
122   Ep: pct application non-entry in european phase
      Ref document number: 20938026; Country of ref document: EP; Kind code of ref document: A1
32PN  Ep: public notification in the ep bulletin as address of the adressee cannot be established
      Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/06/2023)
