WO2021237743A1 - Video frame interpolation method and device, and computer-readable storage medium - Google Patents
Video frame interpolation method and device, and computer-readable storage medium
- Publication number
- WO2021237743A1 (PCT/CN2020/093530)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- optical flow
- frame
- maps
- input frames
- input
- Prior art date
Classifications
- H04N7/0135: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
- H04N7/0137: Conversion of standards involving interpolation processes dependent on presence/absence of motion, e.g. of motion zones
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/269: Analysis of motion using gradient-based methods
- G06T7/50: Depth or shape recovery
- G06T2207/10016: Video; Image sequence
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/20221: Image fusion; Image merging
- H04N7/0127: Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
Definitions
- the present disclosure relates to the field of information display technology, and in particular, to a video frame insertion method and device, computer-readable storage medium, and electronic equipment.
- Video frame insertion is a research direction in the field of digital image processing and computer vision.
- the use of video frame insertion technology can increase the frame rate of the video.
- the video frame interpolation method in the prior art consists of multiple sub-modules connected in parallel, and the accuracy of each sub-module is limited. As a result, the final interpolation result is constrained by the accuracy of each sub-module and of the final synthesis module, so the accuracy of the interpolated frame image is low.
- a video frame insertion method including:
- an output frame is obtained using a frame synthesis method; the foregoing steps satisfy at least one of the following conditions:
- the target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from the target depth map
- the two input frames are two image frames at different moments in a multi-frame video image.
- the iterative residual optical flow estimation on two input frames to obtain the two initial optical flow diagrams includes:
- the final output of the N-th optical flow estimation process is used to update the input and output of the (N+1)-th optical flow estimation, where N is a positive integer greater than or equal to 1;
- the final output of the last optical flow estimation process is used as the two initial optical flow diagrams.
- using the final output of the N-th optical flow estimation process to update the input and output of the (N+1)-th optical flow estimation includes:
- the two final outputs of the N-th optical flow estimation process are added to the two input frames of the first optical flow estimation, respectively, to obtain the two inputs of the (N+1)-th optical flow estimation;
- the two final outputs of the N-th optical flow estimation process are added to the two initial outputs of the (N+1)-th optical flow estimation process, respectively, to obtain the final output of the (N+1)-th optical flow estimation process.
- processing two initial optical flow diagrams using pixel-adaptive convolution and joint up-sampling according to the two input frames to obtain the target optical flow diagram includes:
- the two input frames include a first input frame and a second input frame
- the two initial optical flow diagrams include a first initial optical flow diagram and a second initial optical flow diagram
- the two target optical flow diagrams include a first target optical flow graph and a second target optical flow graph
- the first input frame corresponds to the first initial optical flow graph
- the second input frame corresponds to the second initial optical flow graph
- the target depth estimation model is obtained by training the initial depth estimation model with the error loss between the reference virtual surface normal generated from the real depth maps of the two input frames and the target virtual surface normal generated from the target depth map
- the training method for the initial depth estimation model includes:
- using a pixel adaptive convolution frame synthesis method to obtain an output frame according to the target optical flow map, depth map, context feature map, and frame interpolation kernel includes:
- synthesizing the two projected optical flow maps, the interpolation kernel, the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps with the pixel-adaptive convolution frame synthesis method to obtain an output frame, including:
- performing frame synthesis processing including pixel adaptive convolution on the synthesized input image to obtain the output frame includes:
- the second residual module includes at least one residual sub-module, and at least one residual sub-module includes a pixel-adaptive convolutional layer.
- determining two projected optical flow maps according to the two target optical flow maps and the two depth maps, and acquiring the interpolation kernel, two deformed depth maps, two deformed input frames, and two deformed context feature maps, including:
- obtaining the inserted frame of the two input frames according to the output frame includes:
- using the average deformed frame to update the output frame includes:
- a video frame interpolation device including:
- a motion estimation module configured to obtain two input frames and obtain two initial optical flow diagrams corresponding to the two input frames according to the two input frames;
- a data optimization module configured to perform up-sampling processing on the two initial optical flow graphs to obtain two target optical flow graphs
- a depth estimation module configured to obtain, according to the two input frames, a frame insertion core, two depth maps respectively corresponding to the two input frames, and two context feature maps corresponding to the two input frames respectively;
- an image synthesis module configured to obtain an output frame with a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the frame interpolation kernel;
- the target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from the target depth map
- the two input frames are two image frames at different moments in a multi-frame video image.
- a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the video frame insertion method as described in any one of the above is implemented.
- an electronic device including:
- the memory is used to store one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors implement the video frame interpolation method described in any one of the above.
- Fig. 1 schematically shows a flowchart of a video frame interpolation method in an exemplary embodiment of the present disclosure
- Fig. 2 schematically shows a framework diagram of optical flow estimation processing in an exemplary embodiment of the present disclosure
- Fig. 3 schematically shows a frame diagram of a pixel adaptive convolution joint up-sampling module in an exemplary embodiment of the present disclosure
- Fig. 4 schematically shows a frame diagram of monocular depth estimation constrained by a set of virtual surface normals in an exemplary embodiment of the present disclosure
- Fig. 5 schematically shows an overall frame diagram of a video frame insertion method in an exemplary embodiment of the present disclosure
- Fig. 6 schematically shows a frame diagram of a frame synthesis module with pixel adaptive convolution in an exemplary embodiment of the present disclosure
- FIG. 7 schematically shows a schematic diagram of the composition of a video frame interpolation device in an exemplary embodiment of the present disclosure
- FIG. 8 schematically shows a structural diagram of a computer system suitable for implementing an electronic device of an exemplary embodiment of the present disclosure
- FIG. 9 schematically shows a schematic diagram of a computer-readable storage medium according to some embodiments of the present disclosure.
- Example embodiments will now be described more fully with reference to the accompanying drawings.
- the example embodiments can be implemented in various forms and should not be construed as being limited to the examples set forth herein; on the contrary, these embodiments are provided so that the present disclosure will be more comprehensive and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
- the described features, structures or characteristics can be combined in one or more embodiments in any suitable way.
- a video frame interpolation method is first provided.
- the above video frame interpolation method may include the following steps:
- S130 Obtain, according to the two input frames, a frame insertion core, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames;
- S140 Obtain an output frame by using a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the frame insertion core;
- the above steps meet at least one of the following conditions:
- the target depth estimation model is used to obtain the two depth maps, and the target depth estimation model is obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from the target depth map
- the two input frames are two image frames at different moments in a multi-frame video image.
- the inserted frame here refers to an image frame that can be inserted between two input frames, which can reduce video motion blur and improve video quality.
- iterative residual refinement optical flow prediction is adopted to perform motion estimation on two adjacent input frames to obtain the initial optical flow graphs, which initially improves the accuracy of the interpolation result.
- the initial optical flow graphs are then processed with pixel-adaptive convolution joint up-sampling according to the input frames to obtain the target optical flow graphs, which further improves the accuracy of the interpolation result.
- pixel-adaptive convolution is also used in frame synthesis, which markedly improves the quality of the interpolation result. The obtained interpolated frames have higher precision and can be applied to video enhancement and to upgraded slow-motion effects in video post-processing, which expands the usable scenarios of the video frame interpolation method.
- in step S110, two input frames are obtained, and two initial optical flow diagrams corresponding to the two input frames are obtained according to the two input frames.
- the two acquired input frames may be a first input frame and a second input frame, respectively; optical flow estimation is then performed based on the first input frame and the second input frame to obtain a first initial optical flow graph and a second initial optical flow graph.
- PWC-Net (CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume), or a new model obtained through training, can be used to estimate the optical flow of the above two input frames, or other models can be used to estimate the optical flow of the two input frames.
- the optical flow estimation for the above two input frames is not specifically limited in this example implementation.
- the optical flow estimation may be performed only once on the above-mentioned first input frame and the second input frame to obtain the first initial optical flow graph and the second initial optical flow graph.
- the iterative residual refinement optical flow prediction can be used to perform motion estimation on two adjacent input frames to obtain the initial optical flow graph.
- the above-mentioned first input frame and second input frame are used as input 210 to perform multiple optical flow estimation 220 processing.
- the final output 230 of the N-th optical flow estimation 220 is used to update the (N+1)-th optical flow estimation process.
- N can be 1 or any positive integer greater than 1, such as 2, 3, 4, etc.; it is not specifically limited in this example embodiment, and N cannot exceed the total number of optical flow estimations.
- using the final output of the N-th optical flow estimation process to update the input and output of the (N+1)-th optical flow estimation process includes: adding the two final outputs of the N-th optical flow estimation process to the two input frames of the first optical flow estimation, respectively, to obtain the two inputs of the (N+1)-th optical flow estimation; and adding the two final outputs of the N-th optical flow estimation process to the two initial outputs of the (N+1)-th optical flow estimation, respectively, to obtain the final output of the (N+1)-th optical flow estimation. The final output of the last optical flow estimation process may be used as the initial optical flow graphs.
- the server can feed back the output 230 of the first optical flow estimation to the input 210 of the second optical flow estimation. That is, the input of the second optical flow estimation can be obtained by adding the two outputs of the first optical flow estimation to the first input frame and the second input frame, respectively; the pixel values of the two outputs of the first optical flow estimation are added to the pixel values of the two input frames to obtain the inputs of the second optical flow estimation.
- the initial outputs of the second optical flow estimation are likewise updated with the outputs of the first optical flow estimation to obtain the second target output; that is, the pixel values of the first output and of the second initial output are added to obtain the second target output, where the second initial output is obtained after the input of the second optical flow estimation has been processed by the optical flow estimation.
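For illustration only, a minimal PyTorch-style sketch of this iterative residual refinement loop is given below; flow_net is a hypothetical callable standing in for the actual optical flow estimator, and the exact update rule of the patent (which also feeds the accumulated flows back into the inputs, as described above) is simplified to a pure residual accumulation:

```python
import torch

def iterative_residual_flow(flow_net, frame0, frame1, num_iterations=3):
    """Iterative residual refinement of bidirectional optical flow.

    flow_net is a hypothetical callable that, given the two frames and the
    current flow estimates, returns residual forward/backward flows; each
    pass adds its residuals to the running estimates.
    """
    b, _, h, w = frame0.shape
    flow_fwd = torch.zeros(b, 2, h, w, device=frame0.device)
    flow_bwd = torch.zeros(b, 2, h, w, device=frame0.device)
    for _ in range(num_iterations):
        res_fwd, res_bwd = flow_net(frame0, frame1, flow_fwd, flow_bwd)
        flow_fwd = flow_fwd + res_fwd  # residual update of the forward flow
        flow_bwd = flow_bwd + res_bwd  # residual update of the backward flow
    return flow_fwd, flow_bwd
```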
- in step S120, two target optical flow diagrams are obtained by up-sampling the two initial optical flow diagrams.
- the server may directly perform feature extraction on the two initial optical flow graphs, and perform at least one up-sampling process after the feature extraction is completed to obtain the target optical flow graph.
- the feature extraction can be performed with an ordinary convolution, which for the l-th layer can be written as

  $v_i^{l+1} = \sum_{j \in \Omega(i)} W^l \left[ p_i - p_j \right] v_j^l + b^l$

  where $i$ denotes pixel $i$, $v^l$ the feature map of the l-th layer of the convolutional neural network, $\Omega(i)$ the convolution window around pixel $i$, $W^l$ the convolution kernel weights of the l-th layer, $p_i$ and $p_j$ pixel coordinates, and $b^l$ the bias term of the l-th layer.
- features of the initial optical flow map 310 can be extracted through the convolutional layer 320 to obtain the reference optical flow map, and feature extraction can be performed on the input frame 311 with the same convolutional layer 320 to obtain the reference input image; the reference optical flow map may then undergo multiple pixel-adaptive convolution joint up-sampling operations 330, with the reference input image as a constraint, to obtain the target optical flow map 340.
- the server may perform feature extraction on the first initial optical flow graph and the second initial optical flow graph to obtain a first reference optical flow graph and a second reference optical flow graph, respectively, and perform feature extraction on the first input frame and the second input frame to obtain a first reference input image and a second reference input image, respectively; then, using the first reference input image as a guide image, at least one pixel-adaptive convolution joint up-sampling process is performed on the first reference optical flow graph, followed by feature extraction, to obtain the first target optical flow graph; using the second reference input image as a guide image, at least one pixel-adaptive convolution joint up-sampling process is performed on the second reference optical flow graph, followed by feature extraction, to obtain the second target optical flow graph.
- two pixel adaptive convolution and joint up-sampling 330 may be performed on the above-mentioned reference optical flow graph.
- three, four, or more pixel-adaptive convolution joint up-sampling operations 330 can also be performed on the above-mentioned reference optical flow graph; the number of up-sampling operations can be determined from the size relationship between the target optical flow graph and the two input frames together with the up-sampling factor of each pixel-adaptive convolution joint up-sampling, and is not specifically limited in this exemplary embodiment.
- each pixel-adaptive convolution joint upsampling 330 performed on the above-mentioned reference optical flow graph requires the above-mentioned reference input image as a guide image, that is, a constraint condition is added to the above-mentioned pixel-adaptive convolutional up-sampling.
- the output result then passes through the convolutional layer 320 for one more feature extraction to obtain the target optical flow graph 340, which improves the accuracy of the initial optical flow graph 310 and completes its optimization.
- pixel-adaptive convolution is ordinary convolution multiplied by an adaptive kernel function K obtained from the guide feature map f; that is, the convolution operation in pixel-adaptive convolution up-sampling is

  $v_i^{l+1} = \sum_{j \in \Omega(i)} K(f_i, f_j) \, W^l \left[ p_i - p_j \right] v_j^l + b^l$

  where $i$ denotes pixel $i$, $v^l$ the feature map of the l-th layer of the convolutional neural network, $\Omega(i)$ the convolution window around pixel $i$, $W^l$ the convolution kernel weights of the l-th layer, $p_i$ and $p_j$ pixel coordinates, and $b^l$ the bias term of the l-th layer. $f_i$ and $f_j$ denote the guide feature map; specifically, pixel $j$ is a pixel within a preset distance from pixel $i$, where the preset distance can be customized according to requirements and is not specifically limited in this example embodiment.
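A naive sketch of such a pixel-adaptive convolution follows; it implements the formula above with a Gaussian choice of K computed from the guide features (the Gaussian form and the shapes are assumptions, not the patent's exact design). This helper is reused by the later sketches:

```python
import torch
import torch.nn.functional as F

def pixel_adaptive_conv(v, f, weight, bias, kernel_size=3):
    """Naive pixel-adaptive convolution: an ordinary convolution whose taps
    are reweighted per pixel by K(f_i, f_j) = exp(-0.5 * ||f_i - f_j||^2),
    computed from the guide feature map f.

    v: (B, C_in, H, W) input, f: (B, C_f, H, W) guide,
    weight: (C_out, C_in, k, k), bias: (C_out,).
    """
    B, C, H, W = v.shape
    k, pad = kernel_size, kernel_size // 2
    # gather the k*k neighbourhood of every pixel for input and guide
    v_un = F.unfold(v, k, padding=pad).view(B, C, k * k, H, W)
    f_un = F.unfold(f, k, padding=pad).view(B, f.shape[1], k * k, H, W)
    # adaptive kernel K(f_i, f_j), one weight per neighbour and pixel
    K = torch.exp(-0.5 * ((f_un - f.unsqueeze(2)) ** 2).sum(dim=1, keepdim=True))
    # spatially shared weights W[p_i - p_j] applied to the reweighted taps
    w = weight.view(weight.shape[0], C, k * k)
    out = torch.einsum('bckhw,ock->bohw', v_un * K, w)
    return out + bias.view(1, -1, 1, 1)
```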
- the resolution of the initial optical flow map obtained after the above optical flow estimation is one quarter of that of the input frames. Therefore, in this exemplary embodiment, two pixel-adaptive convolution joint up-sampling operations with an up-sampling factor of 2, or one pixel-adaptive convolution joint up-sampling operation with an up-sampling factor of 4, may be performed; this is not specifically limited in this example embodiment. Introducing the reference input image as a guide image during pixel-adaptive convolution joint up-sampling in turn improves the accuracy of the up-sampling.
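Under those assumptions, the joint up-sampling stage might look like the following sketch: the quarter-resolution flow is bilinearly up-sampled twice (doubling the flow magnitudes each time) and refined after each step by the pixel_adaptive_conv helper above, guided by the resized input frame. The identity placeholder weights are illustrative; learned weights would be used in practice:

```python
import torch
import torch.nn.functional as F

def joint_upsample(init_flow, ref_input, num_stages=2):
    """Pixel-adaptive convolution joint up-sampling sketch: up-sample the
    flow by a factor of 2 per stage and refine it with a pixel-adaptive
    convolution guided by the resized input frame."""
    # identity placeholder weights (centre tap only); learned in practice
    weight = torch.zeros(2, 2, 3, 3)
    weight[0, 0, 1, 1] = 1.0
    weight[1, 1, 1, 1] = 1.0
    bias = torch.zeros(2)
    flow = init_flow
    for _ in range(num_stages):
        # flow vectors double in magnitude when the resolution doubles
        flow = 2.0 * F.interpolate(flow, scale_factor=2, mode='bilinear',
                                   align_corners=False)
        guide = F.interpolate(ref_input, size=flow.shape[-2:],
                              mode='bilinear', align_corners=False)
        flow = pixel_adaptive_conv(flow, guide, weight, bias)
    return flow
```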
- in step S130, the interpolation kernel, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames are obtained according to the two input frames.
- the initial depth estimation model may be used to obtain the depth maps, and the frame interpolation kernel, the first context feature map, and the second context feature map are obtained according to the first input frame and the second input frame.
- a pre-trained model can be used to complete the spatio-temporal context feature extraction of the two input frames, and the feature map of any intermediate layer of the model can be used as the two obtained context feature maps; the pre-trained model can be a VGG model or a residual network, which is not specifically limited in this example implementation.
- the initial depth estimation model may be trained to obtain the target depth estimation model, and the target depth estimation model may then be used to calculate the first depth map and the second depth map corresponding to the first input frame and the second input frame, respectively.
- the pre-training model of the monocular depth model MegaDepth may be used as the initial depth estimation model, or other pre-training models may be used as the initial depth estimation model, which is not specifically limited in this example embodiment.
- the method of training the initial depth estimation model includes: first obtaining the real depth maps of the two input frames and calculating the three-dimensional (3D) point cloud of each real depth map; specifically, the two-dimensional depth map is converted into a three-dimensional map, from which a 3D point cloud can be obtained relatively simply. A reference virtual surface normal is then generated from the 3D point cloud. Referring to Fig. 4, the server can input the input frame 410 into the initial depth estimation model 420 to obtain the target depth map 430, compute the 3D point cloud 440 of the target depth map 430, and generate the target virtual surface normal 450 from the 3D point cloud 440; the parameters of the initial depth estimation model are then updated according to the error loss between the target virtual surface normal and the reference virtual surface normal to obtain the target depth estimation model.
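A hedged sketch of this training signal follows: depth is back-projected to a 3D point cloud under an assumed pinhole camera (fx and fy are placeholder intrinsics), virtual surface normals are taken as cross products of local point differences, and the loss is the cosine distance between target and reference normals:

```python
import torch
import torch.nn.functional as F

def depth_to_normals(depth, fx, fy):
    """Back-project a depth map (B, 1, H, W) to a 3D point cloud under an
    assumed pinhole camera and take cross products of local point
    differences as virtual surface normals."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    xs, ys = xs.to(depth), ys.to(depth)
    X = (xs - W / 2) * depth[:, 0] / fx
    Y = (ys - H / 2) * depth[:, 0] / fy
    pts = torch.stack([X, Y, depth[:, 0]], dim=1)      # (B, 3, H, W)
    dx = pts[:, :, :, 1:] - pts[:, :, :, :-1]          # horizontal differences
    dy = pts[:, :, 1:, :] - pts[:, :, :-1, :]          # vertical differences
    n = torch.cross(dx[:, :, :-1, :], dy[:, :, :, :-1], dim=1)
    return F.normalize(n, dim=1)

def virtual_surface_normal_loss(pred_depth, gt_depth, fx=500.0, fy=500.0):
    """Error loss between the target virtual surface normals (from the
    predicted depth) and the reference ones (from the real depth)."""
    n_target = depth_to_normals(pred_depth, fx, fy)
    n_reference = depth_to_normals(gt_depth, fx, fy)
    return (1.0 - (n_target * n_reference).sum(dim=1)).mean()  # cosine distance
```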
- the first input frame and the second input frame may be input into the target depth estimation model to obtain the first depth map and the second depth map.
- in step S140, projected optical flow maps are determined according to the target optical flow maps and the depth maps, and the interpolation kernel, deformed depth maps, deformed input frames, and deformed context feature maps are acquired.
- the server may first obtain the target optical flow maps from the two input frames 510 via the optical flow estimation module 521 and the pixel-adaptive convolution joint up-sampling module 530, and obtain the depth maps of the input frames 510 via the monocular depth estimation 522 constrained by virtual surface normals; the target optical flow maps and the depth maps may then be used to obtain the projected optical flow maps via the depth-aware optical flow projection 540.
- the optical flow estimation 521 has been described above in detail with reference to Fig. 2, the pixel-adaptive convolution joint up-sampling module 530 has been described above with reference to Fig. 3, and the monocular depth estimation 522 geometrically constrained by virtual surface normals has been described in detail with reference to Fig. 4, so these are not repeated here.
- the first depth map may be used to perform depth-aware optical flow projection processing on the first target optical flow map to obtain the first projected optical flow map, and the second depth map may be used to perform depth-aware optical flow projection processing on the second target optical flow map to obtain the second projected optical flow map.
- the time of the first input frame can be defined as time 0, the time of the second input frame can be defined as time 1, and a time t is defined between time 0 and time 1.
- $F_{0 \to 1}(y)$ denotes the optical flow of pixel $y$ from the first input frame to the second input frame, and $D_0(y)$ denotes the depth value of pixel $y$; $y \in S(x)$ denotes the set of pixels $y$ whose optical flow $F_{0 \to 1}(y)$ passes through pixel $x$ at time $t$. For such a pixel, $F_{t \to 0}(x)$ can be approximated as $-t \, F_{0 \to 1}(y)$, where $F_{t \to 0}(x)$ denotes the optical flow of pixel $x$ from time $t$ to the first input frame; in the depth-aware projection, the contributions of the pixels in $S(x)$ are aggregated with weights determined by their depths $D_0(y)$.
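A simplified sketch of depth-aware optical flow projection in this spirit is shown below. DAIN-style inverse-depth weighting is assumed (closer pixels dominate), and pixels receiving no votes are left as zero flow, whereas the actual method would fill such holes, e.g. from neighbouring flows:

```python
import torch

def depth_aware_flow_projection(flow01, depth0, t):
    """Project F_{0->1} to time t: every pixel y of frame 0 votes
    -t * F_{0->1}(y) at the grid location it passes through at time t,
    weighted by inverse depth so that closer pixels dominate."""
    B, _, H, W = flow01.shape
    num = torch.zeros_like(flow01)                    # weighted flow votes
    den = torch.zeros(B, 1, H, W, device=flow01.device)
    inv_d = 1.0 / depth0.clamp(min=1e-6)
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    # grid location each source pixel reaches at time t (channel 0 = x flow)
    tx = (xs.to(flow01) + t * flow01[:, 0]).round().long().clamp(0, W - 1)
    ty = (ys.to(flow01) + t * flow01[:, 1]).round().long().clamp(0, H - 1)
    for b in range(B):
        idx = (ty[b] * W + tx[b]).view(-1)
        num[b, 0].view(-1).index_add_(0, idx, (inv_d[b, 0] * flow01[b, 0]).view(-1))
        num[b, 1].view(-1).index_add_(0, idx, (inv_d[b, 0] * flow01[b, 1]).view(-1))
        den[b, 0].view(-1).index_add_(0, idx, inv_d[b, 0].view(-1))
    return -t * num / den.clamp(min=1e-6)             # F_{t->0}; holes stay zero
```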
- the server can process the two input frames 510 with the spatio-temporal context feature extraction 523 to obtain the two context feature maps, perform the frame interpolation kernel estimation 524 on the two input frames to obtain the interpolation kernel, and use the interpolation kernel to perform adaptive deformation 550 on the aforementioned two input frames, two depth maps, and two context feature maps to obtain two deformed input frames, two deformed depth maps, and two deformed context feature maps.
- the depth estimation may use an hourglass model
- the context feature extraction uses a pre-trained ResNet neural network
- the kernel estimation and the adaptive deformation layer are based on the U-Net neural network, which is not specifically limited in this exemplary embodiment.
- a classic deep learning backbone network can be used to generate an interpolation kernel for each pixel position from the two input frames; in the adaptive deformation layer, the two depth maps, the two input frames, and the two context feature maps are deformed according to the interpolation kernel and the projected optical flow maps to obtain two deformed input frames, two deformed depth maps, and two deformed context feature maps.
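As a simplified stand-in for the adaptive deformation layer, plain bilinear backward warping by the projected flow is sketched below; the real layer additionally applies the per-pixel interpolation kernel when sampling, which is omitted here for brevity:

```python
import torch
import torch.nn.functional as F

def backward_warp(x, flow):
    """Bilinear backward warping of x (B, C, H, W) by a flow field
    (B, 2, H, W); sampling positions are normalised to [-1, 1]."""
    B, _, H, W = x.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    gx = (xs.to(x) + flow[:, 0]) / (W - 1) * 2 - 1
    gy = (ys.to(x) + flow[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)              # (B, H, W, 2)
    return F.grid_sample(x, grid, mode='bilinear', align_corners=True)
```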
- the server splices 560 the interpolation kernel, the projected optical flow maps, the deformed input frames, the deformed depth maps, and the deformed context feature maps to obtain a composite image.
- the server inputs the composite image 610 into the residual network through the input layer 620, and uses the output feature map of the first residual module 630 in the residual network as both the feature guide map and the input of the second residual module. To make it possible to feed in the feature guide map, the convolutional layers of the residual modules other than the first residual module are replaced with pixel-adaptive convolutional layers, which forms the second residual module. The second residual module may include at least one residual sub-module 640, wherein at least one residual sub-module 640 includes a pixel-adaptive convolutional layer; the residual sub-module can be a pixel-adaptive convolution residual block.
- the convolutional layer in the first residual module can be an ordinary convolution

  $v_i^{l+1} = \sum_{j \in \Omega(i)} W^l \left[ p_i - p_j \right] v_j^l + b^l$

  where $i$ denotes pixel $i$, $v^l$ the feature map of the l-th layer of the convolutional neural network, $\Omega(i)$ the convolution window around pixel $i$, $W^l$ the convolution kernel weights of the l-th layer, $p_i$ and $p_j$ pixel coordinates, and $b^l$ the bias term of the l-th layer.
- the pixel-adaptive convolutional layer is used to replace the above-mentioned convolutional layer to obtain the second residual module, and the pixel-adaptive convolutional layer is

  $v_i^{l+1} = \sum_{j \in \Omega(i)} K(f_i, f_j) \, W^l \left[ p_i - p_j \right] v_j^l + b^l$

  where the symbols are as defined above, and $f_i$ and $f_j$ denote the guide feature map; specifically, pixel $j$ is a pixel within a preset distance from pixel $i$, where the preset distance can be customized according to requirements and is not specifically limited in this example embodiment.
- the pixel adaptive convolution layer is based on the ordinary convolution layer, multiplied by an adaptive kernel function K obtained from the guided feature map f.
- the feature map output by the first residual module 630 is used as the guide map of the second residual module; that is, the feature map adds a constraint condition to the pixel-adaptive convolutional layers in the pixel-adaptive residual blocks, which makes it possible to obtain higher-precision output frames.
- the number of residual blocks in the residual network may be multiple, such as 2, 3, 4 or more, which is not specifically limited in this example embodiment.
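A sketch of one such pixel-adaptive residual sub-module, reusing the pixel_adaptive_conv helper from earlier and taking the first residual module's output as the guide, might look like this (channel counts and initialization are illustrative, not the patent's exact design):

```python
import torch
import torch.nn as nn

class PixelAdaptiveResBlock(nn.Module):
    """Residual sub-module whose convolution is the pixel-adaptive
    convolution sketched earlier; the first residual module's output
    feature map is passed in as the guide."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(channels))
        self.kernel_size = kernel_size

    def forward(self, x, guide):
        out = pixel_adaptive_conv(x, guide, self.weight, self.bias,
                                  self.kernel_size)
        return x + torch.relu(out)                    # residual connection
```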
- the server may also obtain the average deformed frame 581 of the two deformed input frames and use it to update the output frame 590 (that is, the final output frame, which is also the inserted frame). The average deformed frame may first be calculated from the deformed input frames by adding their pixel values and taking the mean; the average deformed frame is then added to the output frame 650 obtained by the aforementioned pixel-adaptive convolution frame synthesis, i.e., their pixel values are added, to obtain the new, final output frame 590.
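In code, this final residual composition is a one-liner (a sketch; warped_frame0 and warped_frame1 denote the two adaptively deformed input frames, and synthesized_output the frame synthesis network's output):

```python
def compose_inserted_frame(warped_frame0, warped_frame1, synthesized_output):
    """Inserted frame = synthesis output + average of the two adaptively
    deformed input frames (pixel-wise addition)."""
    average_deformed = 0.5 * (warped_frame0 + warped_frame1)
    return average_deformed + synthesized_output
```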
- the video frame interpolation device 700 includes: a motion estimation module 710, a data optimization module 720, a depth estimation module 730, and an image synthesis module 740.
- the motion estimation module 710 can be used to obtain two input frames and obtain two initial optical flow graphs corresponding to the two input frames according to the two input frames; the data optimization module 720 can be used to perform up-sampling processing on the two initial optical flow graphs to obtain two target optical flow graphs; the depth estimation module 730 can be used to obtain, according to the two input frames, the interpolation kernel, the two depth maps corresponding respectively to the two input frames, and the two context feature maps corresponding respectively to the two input frames; and the image synthesis module 740 can be used to obtain an output frame with a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel.
- although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
- the features and functions of two or more modules or units described above may be embodied in one module or unit.
- the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
- an electronic device capable of implementing the aforementioned video frame interpolation method is also provided.
- the electronic device 800 according to such an embodiment of the present disclosure will be described below with reference to FIG. 8.
- the electronic device 800 shown in FIG. 8 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 800 is represented in the form of a general-purpose computing device.
- the components of the electronic device 800 may include, but are not limited to: the aforementioned at least one processing unit 810, the aforementioned at least one storage unit 820, a bus 830 connecting different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
- the storage unit stores program code, and the program code can be executed by the processing unit 810, so that the processing unit 810 executes the steps of the various exemplary embodiments described in the "Exemplary Method" section of this specification.
- the processing unit 810 may perform step S110 as shown in FIG. 1; that is, the electronic device can implement the steps shown in FIG. 1.
- the storage unit 820 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 821 and/or a cache storage unit 822, and may further include a read-only storage unit (ROM) 823.
- the storage unit 820 may also include a program/utility tool 824 having a set of (at least one) program module 825.
- the program module 825 includes, but is not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
- the bus 830 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
- the electronic device 800 may also communicate with one or more external devices 870 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 850.
- the electronic device 800 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 860.
- the network adapter 860 communicates with the other modules of the electronic device 800 through the bus 830. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
- the exemplary embodiments described here can be implemented by software, or by combining software with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, mobile hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present disclosure.
- a computer-readable storage medium is also provided, on which a program product capable of implementing the above-mentioned method of this specification is stored.
- various aspects of the present disclosure may also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code is used to cause the terminal device to execute the steps according to the various exemplary embodiments of the present disclosure described in the above "Exemplary Method" section of this specification.
- a program product 900 for implementing the above method according to an embodiment of the present disclosure is described; it can adopt a portable compact disk read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
- the program product of the present disclosure is not limited thereto.
- the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, device, or device.
- the program product can use any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
- the program code used to perform the operations of the present disclosure can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, via the Internet using an Internet service provider).
Claims (15)
- 1. A video frame interpolation method, comprising: obtaining two input frames, and obtaining two initial optical flow maps corresponding to the two input frames according to the two input frames; performing up-sampling processing on the two initial optical flow maps to obtain two target optical flow maps; obtaining, according to the two input frames, an interpolation kernel, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames; and obtaining an output frame with a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel; wherein the above steps satisfy at least one of the following conditions: performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps; processing the two initial optical flow maps with pixel-adaptive convolution joint up-sampling according to the two input frames to obtain the two target optical flow maps; obtaining the two depth maps according to the two input frames with a target depth estimation model, the target depth estimation model being obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from a target depth map; obtaining the output frame with a pixel-adaptive convolution frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel; and obtaining an inserted frame of the two input frames according to the output frame; wherein the two input frames are image frames at two different moments in a multi-frame video image.
- 2. The method according to claim 1, wherein performing iterative residual optical flow estimation on the two input frames to obtain the two initial optical flow maps comprises: performing multiple optical flow estimation processes on the two input frames, wherein, in the multiple optical flow estimation processes, the final output of the N-th optical flow estimation process is used to update the input and output of the (N+1)-th optical flow estimation, N being a positive integer greater than or equal to 1; and taking the final output of the last optical flow estimation process as the two initial optical flow maps.
- 3. The method according to claim 2, wherein using the output of the N-th optical flow estimation process to update the input and output of the (N+1)-th optical flow estimation comprises: adding the two final outputs of the N-th optical flow estimation process to the two input frames of the first optical flow estimation, respectively, to obtain the two inputs of the (N+1)-th optical flow estimation; and adding the two final outputs of the N-th optical flow estimation process to the two initial outputs of the (N+1)-th optical flow estimation, respectively, to obtain the final output of the (N+1)-th optical flow estimation process.
- 4. The method according to claim 1, wherein processing the two initial optical flow maps with pixel-adaptive convolution joint up-sampling according to the two input frames to obtain the target optical flow maps comprises: the two input frames comprising a first input frame and a second input frame, the two initial optical flow maps comprising a first initial optical flow map and a second initial optical flow map, and the two target optical flow maps comprising a first target optical flow map and a second target optical flow map, wherein the first input frame corresponds to the first initial optical flow map and the second input frame corresponds to the second initial optical flow map; using the first input frame as the guide image for pixel-adaptive convolution joint up-sampling, performing pixel-adaptive convolution joint up-sampling processing on the first initial optical flow map to obtain the first target optical flow map; and using the second input frame as the guide image for pixel-adaptive convolution joint up-sampling, performing pixel-adaptive convolution joint up-sampling processing on the second initial optical flow map to obtain the second target optical flow map.
- 5. The method according to claim 4, comprising: performing feature extraction on the first initial optical flow map and the second initial optical flow map to obtain a first reference optical flow map and a second reference optical flow map, respectively, and performing feature extraction on the first input frame and the second input frame to obtain a first reference input image and a second reference input image, respectively; using the first reference input image as the guide image, performing at least one joint up-sampling process on the first reference optical flow map and performing feature extraction to obtain the first target optical flow map; and using the second reference input image as the guide image, performing at least one joint up-sampling process on the second reference optical flow map and performing feature extraction to obtain the second target optical flow map.
- 6. The method according to claim 1, wherein the training method by which the target depth estimation model is obtained by training the initial depth estimation model with the error loss between the reference virtual surface normal generated from the real depth maps of the two input frames and the target virtual surface normal generated from the target depth map comprises: obtaining the real depth maps of the two input frames, and calculating the reference virtual surface normal of the real depth maps; obtaining a target depth map from the two input frames with the initial depth estimation model, and calculating the target virtual surface normal of the target depth map; and updating the parameters of the initial depth estimation model according to the error loss between the reference virtual surface normal and the target virtual surface normal to obtain the target depth estimation model.
- 7. The method according to claim 1, wherein obtaining the output frame with the pixel-adaptive convolution frame synthesis method according to the target optical flow maps, the depth maps, the context feature maps, and the interpolation kernel comprises: determining two projected optical flow maps according to the two target optical flow maps and the two depth maps, and acquiring the interpolation kernel, two deformed depth maps, two deformed input frames, and two deformed context feature maps; and synthesizing the two projected optical flow maps, the interpolation kernel, the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps with the pixel-adaptive convolution frame synthesis method to obtain the output frame.
- 8. The method according to claim 7, wherein synthesizing the two projected optical flow maps, the interpolation kernel, the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps with the pixel-adaptive convolution frame synthesis method to obtain the output frame comprises: splicing the two projected optical flow maps, the two deformed depth maps, the two deformed input frames, the interpolation kernel, and the two deformed context feature maps to obtain a composite image; and performing frame synthesis processing including pixel-adaptive convolution on the composite image to obtain the output frame.
- 9. The method according to claim 8, wherein performing frame synthesis processing including pixel-adaptive convolution on the composite image to obtain the output frame comprises: inputting the composite image into a first residual module; and using the output feature map of the first residual module as the input and the input guide map of a second residual module to complete the frame synthesis processing and obtain the output frame, wherein the second residual module comprises at least one residual sub-module, and at least one residual sub-module comprises a pixel-adaptive convolutional layer.
- 10. The method according to claim 7, wherein determining the projected optical flow maps according to the two target optical flow maps and the two depth maps, and acquiring the interpolation kernel, the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps comprises: performing depth-aware optical flow projection processing on the two target optical flow maps according to the two depth maps, respectively, to obtain the projected optical flow maps; performing spatio-temporal context feature extraction processing on the two input frames to obtain the two context feature maps, and performing interpolation kernel estimation processing on the two input frames to obtain the interpolation kernel; and performing adaptive deformation processing on the two input frames, the two depth maps, and the two context feature maps according to the projected optical flow maps and the interpolation kernel to obtain the two deformed depth maps, the two deformed input frames, and the two deformed context feature maps.
- 11. The method according to claim 7, wherein obtaining the inserted frame of the two input frames according to the output frame comprises: obtaining the average deformed frame of the two deformed input frames, and updating the output frame with the average deformed frame; and taking the updated output frame as the inserted frame.
- 12. The method according to claim 11, wherein updating the output frame with the average deformed frame comprises: adding the average deformed frame and the output frame to obtain the inserted frame.
- 13. A video frame interpolation device, comprising: a motion estimation module configured to obtain two input frames and obtain two initial optical flow maps corresponding to the two input frames according to the two input frames; a data optimization module configured to perform up-sampling processing on the two initial optical flow maps to obtain two target optical flow maps; a depth estimation module configured to obtain, according to the two input frames, an interpolation kernel, two depth maps respectively corresponding to the two input frames, and two context feature maps respectively corresponding to the two input frames; and an image synthesis module configured to obtain an output frame with a frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel; wherein the above modules satisfy at least one of the following conditions: obtaining the two depth maps according to the two input frames with a target depth estimation model, the target depth estimation model being obtained by training an initial depth estimation model with the error loss between a reference virtual surface normal generated from the real depth maps of the two input frames and a target virtual surface normal generated from a target depth map; obtaining the output frame with a pixel-adaptive convolution frame synthesis method according to the two target optical flow maps, the two depth maps, the two context feature maps, and the interpolation kernel; and obtaining an inserted frame of the two input frames according to the output frame; wherein the two input frames are image frames at two different moments in a multi-frame video image.
- 14. A computer-readable storage medium having a computer program stored thereon, wherein, when the program is executed by a processor, the video frame interpolation method according to any one of claims 1 to 12 is implemented.
- 15. An electronic device, comprising: a processor; and a memory for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the video frame interpolation method according to any one of claims 1 to 12.
Priority Applications (3)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2020/093530 WO2021237743A1 (zh) | 2020-05-29 | 2020-05-29 | Video frame interpolation method and device, and computer-readable storage medium |
| US17/278,403 US11800053B2 (en) | 2020-05-29 | 2020-05-29 | Method, device and computer readable storage medium for video frame interpolation |
| CN202080000871.7A CN114073071B (zh) | 2020-05-29 | 2020-05-29 | Video frame interpolation method and device, and computer-readable storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2020/093530 WO2021237743A1 (zh) | 2020-05-29 | 2020-05-29 | Video frame interpolation method and device, and computer-readable storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2021237743A1 (zh) | 2021-12-02 |

Family

ID=78745417

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/093530 WO2021237743A1 (zh) | Video frame interpolation method and device, and computer-readable storage medium | 2020-05-29 | 2020-05-29 |

Country Status (3)

| Country | Link |
|---|---|
| US (1) | US11800053B2 (en) |
| CN (1) | CN114073071B (zh) |
| WO (1) | WO2021237743A1 (zh) |
Families Citing this family (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11871145B2 (en) * | 2021-04-06 | 2024-01-09 | Adobe Inc. | Optimization of adaptive convolutions for video frame interpolation |
| US11640668B2 (en) | 2021-06-10 | 2023-05-02 | Qualcomm Incorporated | Volumetric sampling with correlative characterization for dense estimation |
| CN116546183B (zh) * | 2023-04-06 | 2024-03-22 | 华中科技大学 | Method and system for generating dynamic images with a parallax effect from a single frame image |
Family Cites Families (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010065344A1 (en) * | 2008-11-25 | 2010-06-10 | Refocus Imaging, Inc. | System of and method for video refocusing |
| US10839573B2 (en) * | 2016-03-22 | 2020-11-17 | Adobe Inc. | Apparatus, systems, and methods for integrating digital media content into other digital media content |
| US10091435B2 (en) * | 2016-06-07 | 2018-10-02 | Disney Enterprises, Inc. | Video segmentation from an uncalibrated camera array |
| CN109145922B (zh) | 2018-09-10 | 2022-03-29 | 成都品果科技有限公司 | An automatic image matting system |
| CN109379550B (zh) * | 2018-09-12 | 2020-04-17 | 上海交通大学 | Video frame rate up-conversion method and system based on convolutional neural network |
| CN110913230A (zh) * | 2019-11-29 | 2020-03-24 | 合肥图鸭信息科技有限公司 | Video frame prediction method, apparatus, and terminal device |
2020

- 2020-05-29: US application US17/278,403, patent US11800053B2 (en), active
- 2020-05-29: CN application CN202080000871.7A, patent CN114073071B (zh), active
- 2020-05-29: WO application PCT/CN2020/093530, publication WO2021237743A1 (zh), application filing
Patent Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190138889A1 (en) * | 2017-11-06 | 2019-05-09 | Nvidia Corporation | Multi-frame video interpolation using optical flow |
| WO2019168765A1 (en) * | 2018-02-27 | 2019-09-06 | Portland State University | Context-aware synthesis for video frame interpolation |
| CN110392282A (zh) * | 2018-04-18 | 2019-10-29 | 优酷网络技术(北京)有限公司 | Video frame interpolation method, computer storage medium, and server |
| CN109151474A (zh) * | 2018-08-23 | 2019-01-04 | 复旦大学 | A method for generating new video frames |
| CN110351511A (zh) * | 2019-06-28 | 2019-10-18 | 上海交通大学 | Video frame rate up-conversion system and method based on scene depth estimation |
| CN110738697A (zh) * | 2019-10-10 | 2020-01-31 | 福州大学 | Monocular depth estimation method based on deep learning |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114745545A (zh) * | 2022-04-11 | 2022-07-12 | 北京字节跳动网络技术有限公司 | Video frame interpolation method, apparatus, device, and medium |
| CN115661304A (zh) * | 2022-10-11 | 2023-01-31 | 北京汉仪创新科技股份有限公司 | Font library generation method based on frame interpolation, electronic device, storage medium, and system |
| CN115661304B (zh) | 2022-10-11 | 2024-05-03 | 北京汉仪创新科技股份有限公司 | Font library generation method based on frame interpolation, electronic device, storage medium, and system |
Also Published As
Publication number | Publication date |
---|---|
CN114073071B (zh) | 2023-12-05 |
US11800053B2 (en) | 2023-10-24 |
US20220201242A1 (en) | 2022-06-23 |
CN114073071A (zh) | 2022-02-18 |
Similar Documents

| Publication | Publication Date | Title |
|---|---|---|
| WO2021237743A1 (zh) | | Video frame interpolation method and device, and computer-readable storage medium |
| WO2019205852A1 (zh) | | Method and apparatus for determining the pose of an image capture device, and storage medium therefor |
| US8923392B2 | 2014-12-30 | Methods and apparatus for face fitting and editing applications |
| CN112561978B (zh) | | Training method for a depth estimation network, image depth estimation method, and device |
| CN110298319B (zh) | | Image synthesis method and apparatus |
| CN112862877B (zh) | | Method and apparatus for training an image processing network and for image processing |
| CN113129352A (zh) | | Sparse light field reconstruction method and apparatus |
| CN115578515B (zh) | | Training method for a three-dimensional reconstruction model, and three-dimensional scene rendering method and apparatus |
| CN113780326A (zh) | | Image processing method and apparatus, storage medium, and electronic device |
| US11836836B2 | | Methods and apparatuses for generating model and generating 3D animation, devices and storage mediums |
| CN113379877B (zh) | | Face video generation method and apparatus, electronic device, and storage medium |
| WO2020092051A1 | | Rolling shutter rectification in images/videos using convolutional neural networks with applications to sfm/slam with rolling shutter images/videos |
| US20240177394A1 | | Motion vector optimization for multiple refractive and reflective interfaces |
| US8891857B2 | | Concave surface modeling in image-based visual hull |
| CN111833391A (zh) | | Method and apparatus for estimating image depth information |
| CN115272575B (zh) | | Image generation method and apparatus, storage medium, and electronic device |
| US11741671B2 | | Three-dimensional scene recreation using depth fusion |
| CN112348939A (zh) | | Texture optimization method and apparatus for three-dimensional reconstruction |
| US11481871B2 | | Image-guided depth propagation for space-warping images |
| CN116385643B (zh) | | Virtual avatar generation and model training methods and apparatus, and electronic device |
| CN113628190B (zh) | | Depth map denoising method and apparatus, electronic device, and medium |
| US20230140006A1 | | Electronic apparatus and controlling method thereof |
| CN116310408B (zh) | | Method and apparatus for establishing data association between an event camera and a frame camera |
| US20230177722A1 | | Apparatus and method with object posture estimating |
| US20240029341A1 | | Method, electronic device, and computer program product for rendering target scene |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20938026; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20938026; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/06/2023) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20938026; Country of ref document: EP; Kind code of ref document: A1 |