CN113225589B - Video frame insertion processing method - Google Patents

Video frame insertion processing method

Info

Publication number
CN113225589B
CN113225589B (application CN202110478127.4A)
Authority
CN
China
Prior art keywords
boundary
image
gray
value
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110478127.4A
Other languages
Chinese (zh)
Other versions
CN113225589A (en)
Inventor
龙图景 (Long Tujing)
刘政伟 (Liu Zhengwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kaishida Information Technology Co., Ltd.
Original Assignee
Beijing Kaishida Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kaishida Information Technology Co., Ltd.
Priority to CN202110478127.4A
Publication of CN113225589A
Application granted
Publication of CN113225589B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a video frame interpolation processing method. The method performs a simple similarity comparison between two frames of an original video and judges whether a large proportion of the pixels change between them, so as to determine whether the background changes. If no large-proportion pixel change exists, frames are supplemented with an existing optical-flow method; if it does exist, the boundary before the change and the boundary after the change are extracted, the boundary of the original image is pushed toward the changed boundary, and a number of interpolated images are extracted from this process, so that frames are supplemented between the two images.

Description

Video frame insertion processing method
Technical Field
The invention relates to the technical field of image processing, and in particular to a video frame interpolation processing method.
Background
When network conditions are poor, users often have to lower the video frame rate to preserve image quality, and video data is transmitted at a lower bit rate, so high resolution and a high frame rate cannot be achieved at the same time and the viewing experience suffers; frames therefore need to be interpolated so that the video plays clearly and smoothly. In the prior art, video frame interpolation generally requires motion estimation of objects in the scene and a motion-compensation algorithm to insert each object at the correct position in the generated frame, so the interpolation quality depends heavily on the quality of motion estimation and compensation. Motion capture in existing motion-compensation algorithms is usually realized with an optical-flow algorithm, but existing optical-flow algorithms perform poorly when a background change and a moving object are present in the video at the same time.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a video frame interpolation processing method that can effectively supplement frames for a moving object in a complex scene with light-dark changes.
The invention relates to a video frame insertion processing method, which comprises the following steps:
S1. Extract two frames adjacent on the time axis, image A and image B, from the video file, image A being earlier than image B on the time axis, and convert images A and B to grayscale to obtain the gray value of each pixel in images A and B;
S2. Establish two-dimensional coordinate systems X-Y and X'-Y' with the lower-left-corner pixel of image A and of image B as the respective origin, perform a gray-difference operation on the pixels with the same coordinates in images A and B, and combine the absolute values of the resulting differences into a difference gray image;
S3. Perform gray-value statistics on the virtual pixels generated by the differences in the difference gray image:
S31. If the ratio of virtual pixels whose gray value is greater than the gray threshold α is smaller than the threshold n, directly apply an optical-flow method to the original images A and B for motion compensation and frame interpolation;
S32. If the ratio of virtual pixels whose gray value is greater than the gray threshold α is greater than the threshold n, perform the following steps to supplement frames;
S321. Mark all virtual pixels whose gray value is greater than the gray threshold β in the difference gray image, extract their coordinates, and map them back to the corresponding pixels A(x, y) in image A; record the gray value of the pixel B(x', y') with the corresponding coordinates in image B as the target value gray2 for pixel A(x, y), and the original gray value of pixel A(x, y) as the initial value gray1, so that each marked pixel in image A is A[(x, y), gray1-gray2];
S322. Extract a first boundary of the object in image A according to the gray values, and at the same time extract second boundaries of all marked pixels in image A;
S323. If the first boundary and the second boundary intersect, the portion of the boundary of their overlapping region that belongs to the first boundary is a keeping boundary and the remaining portion of the first boundary is a motion boundary; the portion of the boundary of the overlapping region that belongs to the second boundary is a motion boundary and the remaining portion of the second boundary is a keeping boundary;
S3231. Extract the figures enclosed by the second boundary and the centroid of each figure, the centroid coordinates being A0(x0, y0); within a unit time, change the gray value of every marked pixel lying on the motion boundary from the initial value gray1 to the target value gray2; after a new motion boundary appears, the gray values of the pixels on it likewise change from gray1 to gray2 within a unit time, for a duration t; at this point retain the changed image as the first interpolated image A1 and restore the RGB values of the image;
S3232. Extract the centroid of each figure enclosed by the new, moved second boundary, repeat step S3231 several times, and retain the successive changed images as interpolated images A2-An;
S4. Insert the interpolated images obtained in step S3 between image A and image B for frame interpolation.
Further, the gray threshold ranges from 0 to 10.
Further, the threshold n should be greater than 80%.
Further, in step S3231, the closer a motion-boundary pixel lies to the keeping boundary along the boundary line, the longer its change time.
Further, in step S323, if the first boundary and the second boundary do not intersect, the first boundary is blurred, the gray values of the pixels within the first boundary change linearly from the initial value gray1 to the target value gray2 within the time T, and interpolated images are captured at the corresponding time nodes.
The invention has the beneficial effects that: in the video frame insertion processing method of the invention, a simple similarity comparison is performed between two frames of the original video to judge whether a large proportion of the pixels change between them and thus whether the background changes; if no large-proportion pixel change exists, frames are supplemented with the existing optical-flow method; if it does exist, the boundary before the change and the boundary after the change are extracted, the boundary of the original image is pushed toward the changed boundary, and a number of interpolated images are extracted from this process, so that frames are supplemented between the two images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; for a person skilled in the art, other related drawings can also be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of an original image according to the present invention;
FIG. 3 is a schematic diagram illustrating the generation of an interpolated frame image according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
As shown in FIGS. 1-3, a video frame insertion processing method according to this embodiment includes the following steps:
s1, extracting images A and B of which two frames are adjacent on a time axis from the video file, and carrying out binarization processing on the images A and B to obtain the gray value of each pixel in the images A and B, wherein the image A is earlier than the image B on the time axis, and the images A and B are converted from RGB values into a gray value reference formula: gray is 0.3+ G0.59 + B0.11, and the calculation amount can be effectively reduced after the gray scale is converted into the gray scale, so that the calculation efficiency is improved.
S2. Establish two-dimensional coordinate systems X-Y and X'-Y' with the lower-left-corner pixel of image A and of image B as the respective origin, perform a gray-difference operation on the pixels with the same coordinates in images A and B, and combine the absolute values of the resulting differences into a difference gray image.
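A minimal sketch of step S2 under the same assumptions; NumPy's top-down row order merely relabels the lower-left-origin coordinates and does not affect the per-pixel difference. The helper name difference_image is ours.

import numpy as np

# Sketch of step S2: the absolute per-pixel gray difference of two
# equal-sized grayscale frames forms the difference gray image.
def difference_image(gray_a, gray_b):
    return np.abs(gray_a.astype(np.float64) - gray_b.astype(np.float64))
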
S3. Perform gray-value statistics on the virtual pixels generated by the differences in the difference gray image. The virtual pixels reflect which pixels change when image A changes to image B, and their gray values reflect the magnitude of the change.
S31. If the ratio of virtual pixels whose gray value is greater than the gray threshold α is smaller than the threshold n, directly apply the optical-flow method to the original images A and B for motion compensation and frame interpolation. The gray threshold ranges from 0 to 10, because in some cases an image produces difference values while moving against the same background, so a certain threshold is needed to tolerate such background motion. Meanwhile, the threshold n should be greater than 80%; that is, the steps below are used instead only when the gray-value change of more than 80% of the pixels exceeds the 0-10 range.
S32. If the ratio of virtual pixels whose gray value is greater than the gray threshold α is greater than the threshold n, the following steps are performed to supplement frames.
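Steps S3-S32 condense to a single decision, sketched below; the defaults alpha = 10 and n = 0.8 follow the ranges stated above, and the function name needs_boundary_method is a hypothetical of ours.

import numpy as np

# Sketch of steps S3/S31/S32: count virtual pixels whose difference exceeds
# the gray threshold alpha; when their ratio exceeds n, the background is
# deemed changed and the boundary-based steps below are used instead of
# plain optical-flow compensation.
def needs_boundary_method(diff, alpha=10, n=0.8):
    ratio = np.count_nonzero(diff > alpha) / diff.size
    return ratio > n
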
S321. Mark all virtual pixels whose gray value is greater than the gray threshold β in the difference gray image, extract their coordinates, and map them back to the corresponding pixels A(x, y) in image A; record the gray value of the pixel B(x', y') with the corresponding coordinates in image B as the target value gray2 for pixel A(x, y), and the original gray value of pixel A(x, y) as the initial value gray1, so that each marked pixel in image A is A[(x, y), gray1-gray2].
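Step S321 sketched under the same array assumptions; beta is left as a parameter because the patent does not fix its value, and the helper name mark_pixels is ours.

import numpy as np

# Sketch of step S321: mark virtual pixels whose difference exceeds beta
# and record, for each marked pixel A(x, y), the initial value gray1 (from
# image A) and the target value gray2 (from image B).
def mark_pixels(gray_a, gray_b, diff, beta):
    mask = diff > beta                   # marked virtual-pixel positions
    gray1 = np.where(mask, gray_a, 0.0)  # initial values on marked pixels
    gray2 = np.where(mask, gray_b, 0.0)  # target values on marked pixels
    return mask, gray1, gray2
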
S322. Extract a first boundary of the object in image A according to the gray values, and at the same time extract second boundaries of all marked pixels in image A.
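The embodiment below names the Sobel operator for boundary extraction, so step S322 might be sketched with OpenCV as follows; the magnitude threshold of 50 and the name extract_boundary are illustrative assumptions.

import cv2
import numpy as np

# Sketch of step S322 with the Sobel operator (named in the embodiment):
# gradient magnitude above a threshold is taken as the boundary mask.
def extract_boundary(gray, threshold=50.0):
    g = gray.astype(np.float32)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    return cv2.magnitude(gx, gy) > threshold
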
S323. If the first boundary and the second boundary intersect, the portion of the boundary of their overlapping region that belongs to the first boundary is a keeping boundary and the remaining portion of the first boundary is a motion boundary; the portion of the boundary of the overlapping region that belongs to the second boundary is a motion boundary and the remaining portion of the second boundary is a keeping boundary. The motion boundary within the first boundary is pushed toward the interior of the first boundary, and the motion boundary within the second boundary is pushed toward the interior of the second boundary, so that the first boundary achieves a motion effect within the second boundary.
S3231. Extract the figures enclosed by the second boundary and the centroid of each figure (the centroid-extraction algorithm is an existing algorithm and is not described here); the extracted centroid coordinates are A0(x0, y0). Within a unit time, the gray value of every marked pixel lying on the motion boundary changes from the initial value gray1 to the target value gray2; after a new motion boundary appears, the gray values of the pixels on it likewise change from gray1 to gray2 within a unit time, for a duration t. At this point the changed image is retained as the first interpolated image A1 and the RGB values of the image are restored. Meanwhile, the closer a motion-boundary pixel lies to the keeping boundary, the longer its change time, so that the motion boundary keeps its original shape while it moves.
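Since the patent defers to an existing algorithm for centroid extraction, one standard choice is image moments over connected regions via OpenCV contours; this sketch is one such choice under our assumptions, not the patent's prescribed method.

import cv2
import numpy as np

# Sketch of the centroid step in S3231: find each closed figure enclosed by
# the second boundary as a contour and compute its centroid A0(x0, y0) from
# image moments.
def centroids(mask):
    m8 = mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(m8, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        mom = cv2.moments(c)
        if mom["m00"] > 0:
            points.append((mom["m10"] / mom["m00"], mom["m01"] / mom["m00"]))
    return points  # one (x0, y0) per enclosed figure
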
S3232. Extract the centroid of each figure enclosed by the new, moved second boundary, repeat step S3231 several times, and retain the successive changed images as interpolated images A2-An.
S4. Insert the interpolated images obtained in step S3 between image A and image B to complete the frame interpolation.
Meanwhile, in step S323, if the first boundary and the second boundary do not intersect, the first boundary is blurred, the gray values of the pixels within the first boundary change linearly from the initial value gray1 to the target value gray2 within the time T, and interpolated images are captured at the corresponding time nodes. If the first boundary and the second boundary do not intersect, it indicates that the object in image A has moved rapidly enough to leave its original range entirely; since the time between two frames is normally extremely short, this situation rarely occurs, or the object's own area is small enough that its gradual fade-out and gradual change have no great influence on the whole picture.
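The non-intersecting fallback amounts to a linear gray fade over T; a sketch assuming the mask/gray1/gray2 arrays from the earlier sketch and evenly spaced time nodes. The name fade_frames is ours.

import numpy as np

# Sketch of the non-intersecting case of S323: marked pixels fade linearly
# from gray1 to gray2 over T; one interpolated frame is captured per
# intermediate time node.
def fade_frames(gray_a, mask, gray1, gray2, steps):
    frames = []
    for k in range(1, steps + 1):
        t = k / (steps + 1)              # evenly spaced nodes in (0, 1)
        frame = gray_a.astype(np.float64).copy()
        frame[mask] = (1 - t) * gray1[mask] + t * gray2[mask]
        frames.append(frame)
    return frames
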
The specific implementation process is as follows:
As shown in FIGS. 2-3:
after images A and B are converted to grayscale and the difference is calculated, the gray-value change of more than 80% of the pixels falls outside the 0-10 range, so the optical-flow method is not ideal and the following algorithm is performed;
the circular object in images A and B undergoes a short-distance displacement; during processing, no object recognition is needed: boundary extraction is first performed with a Sobel operator to extract the first boundary in image A, and then image A and image B are superimposed to obtain the second boundary;
the part that belongs to the first boundary but does not coincide with the second boundary is a motion boundary;
the part that belongs to the first boundary and coincides with the second boundary is a keeping boundary;
the part that belongs to the second boundary but does not coincide with the first boundary is a keeping boundary;
the part that belongs to the second boundary and coincides with the first boundary is a motion boundary;
the motion boundary advances toward the centroids of the two closed figures enclosed by the second boundary, and the pixels of the motion boundary change from the initial value gray1 to the target value gray2, producing the advancing effect; a number of interpolated images are captured and placed between image A and image B, completing the frame supplementation.
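A deliberately simplified sketch of this advancing effect, assuming the helper outputs above: marked pixels flip from their initial to their target gray value in order of decreasing distance to the figure's centroid, so the boundary appears to advance inward frame by frame. The patent's finer timing rule (pixels nearer the keeping boundary change later) is not reproduced, and the name advance_frames is ours.

import numpy as np

# Simplified sketch of the boundary-advance step: pixels farthest from the
# centroid change first, so successive frames show the motion boundary
# advancing toward the centroid of the enclosed figure.
def advance_frames(gray_a, mask, gray2, centroid, steps):
    ys, xs = np.nonzero(mask)
    dist = np.hypot(xs - centroid[0], ys - centroid[1])
    order = np.argsort(-dist)            # farthest-first flip order
    frame = gray_a.astype(np.float64).copy()
    frames = []
    for k in range(1, steps + 1):
        flip = order[: int(len(order) * k / steps)]
        frame[ys[flip], xs[flip]] = gray2[ys[flip], xs[flip]]
        frames.append(frame.copy())
    return frames
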
Because no object recognition is needed and only boundary extraction and gray changes (while keeping the original shape of the motion boundary) are required, little computing power is needed, and the method can assist the optical-flow algorithm in supplementing frames for video images with complicated, changing backgrounds.
Finally, the above embodiments are intended only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications shall be covered by the claims of the present invention.

Claims (5)

1. A video frame insertion processing method, characterized by comprising the following steps:
S1. extracting two frames adjacent on the time axis, image A and image B, from the video file, image A being earlier than image B on the time axis, and converting images A and B to grayscale to obtain the gray value of each pixel in images A and B;
S2. establishing two-dimensional coordinate systems X-Y and X'-Y' with the lower-left-corner pixel of image A and of image B as the respective origin, performing a gray-difference operation on the pixels with the same coordinates in images A and B, and combining the absolute values of the resulting differences into a difference gray image;
S3. performing gray-value statistics on the virtual pixels generated by the differences in the difference gray image:
S31. if the ratio of virtual pixels whose gray value is greater than the gray threshold α is smaller than the threshold n, directly applying an optical-flow method to the original images A and B for motion compensation and frame interpolation;
S32. if the ratio of virtual pixels whose gray value is greater than the gray threshold α is greater than the threshold n, performing the following steps to supplement frames;
S321. marking all virtual pixels whose gray value is greater than the gray threshold β in the difference gray image, extracting their coordinates, and mapping them back to the corresponding pixels A(x, y) in image A; recording the gray value of the pixel B(x', y') with the corresponding coordinates in image B as the target value gray2 for pixel A(x, y), and the original gray value of pixel A(x, y) as the initial value gray1, so that each marked pixel in image A is A[(x, y), gray1-gray2];
S322. extracting a first boundary of the object in image A according to the gray values, and at the same time extracting second boundaries of all marked pixels in image A;
S323. if the first boundary and the second boundary intersect, the portion of the boundary of their overlapping region that belongs to the first boundary being a keeping boundary and the remaining portion of the first boundary being a motion boundary; the portion of the boundary of the overlapping region that belongs to the second boundary being a motion boundary and the remaining portion of the second boundary being a keeping boundary;
S3231. extracting the figures enclosed by the second boundary and the centroid of each figure, the centroid coordinates being A0(x0, y0); within a unit time, changing the gray value of every marked pixel lying on the motion boundary from the initial value gray1 to the target value gray2; after a new motion boundary appears, the gray values of the pixels on it likewise changing from gray1 to gray2 within a unit time, for a duration t; at this point retaining the changed image as the first interpolated image A1 and restoring the RGB values of the image;
S3232. extracting the centroid of each figure enclosed by the new, moved second boundary, repeating step S3231 several times, and retaining the successive changed images as interpolated images A2-An;
S4. inserting the interpolated images obtained in step S3 between image A and image B for frame interpolation.
2. The method of claim 1, wherein: the gray threshold ranges from 0 to 10.
3. The method of claim 1, wherein: the threshold n should be greater than 80%.
4. The method of claim 1, wherein: in step S3231, the closer a motion-boundary pixel lies to the keeping boundary along the boundary line, the longer its change time.
5. The method of claim 1, wherein: in step S323, if the first boundary and the second boundary do not intersect, the first boundary is blurred, the gray values of the pixels within the first boundary change linearly from the initial value gray1 to the target value gray2 within the time T, and interpolated images are captured at the corresponding time nodes.
CN202110478127.4A 2021-04-30 2021-04-30 Video frame insertion processing method Active CN113225589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110478127.4A CN113225589B (en) 2021-04-30 2021-04-30 Video frame insertion processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110478127.4A CN113225589B (en) 2021-04-30 2021-04-30 Video frame insertion processing method

Publications (2)

Publication Number Publication Date
CN113225589A CN113225589A (en) 2021-08-06
CN113225589B 2022-07-08

Family

ID=77090242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110478127.4A Active CN113225589B (en) 2021-04-30 2021-04-30 Video frame insertion processing method

Country Status (1)

Country Link
CN (1) CN113225589B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116431857B (en) * 2023-06-14 2023-09-05 山东海博科技信息系统股份有限公司 Video processing method and system for unmanned scene
CN117281616B (en) * 2023-11-09 2024-02-06 武汉真彩智造科技有限公司 Operation control method and system based on mixed reality


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5645636B2 (en) * 2010-12-16 2014-12-24 Mitsubishi Electric Corporation Frame interpolation apparatus and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101627417A (en) * 2007-05-28 2010-01-13 夏普株式会社 Image display device
CN101605206A (en) * 2008-06-11 2009-12-16 联发科技股份有限公司 Video process apparatus and method thereof
CN105517671A (en) * 2015-05-25 2016-04-20 北京大学深圳研究生院 Video frame interpolation method and system based on optical flow method
CN106131567A (en) * 2016-07-04 2016-11-16 西安电子科技大学 Ultraviolet aurora up-conversion method of video frame rate based on Lattice Boltzmann
CN109922231A (en) * 2019-02-01 2019-06-21 重庆爱奇艺智能科技有限公司 A kind of method and apparatus for generating the interleave image of video
CN112184779A (en) * 2020-09-17 2021-01-05 无锡安科迪智能技术有限公司 Method and device for processing interpolation image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Tianqi et al., "A survey of video frame rate up-conversion detection techniques," Chinese Journal of Network and Information Security, 2018, Vol. 4, No. 10, pp. 2018085-1 to 2018085-11. *

Also Published As

Publication number Publication date
CN113225589A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
US10630956B2 (en) Image processing method and apparatus
LU101981B1 (en) Traffic video background modeling method and system
CN113225589B (en) Video frame insertion processing method
EP3296953B1 (en) Method and device for processing depth images
CN109598744B (en) Video tracking method, device, equipment and storage medium
US9390511B2 (en) Temporally coherent segmentation of RGBt volumes with aid of noisy or incomplete auxiliary data
EP2180695B1 (en) Apparatus and method for improving frame rate using motion trajectory
CN110599400B (en) EPI-based light field image super-resolution method
WO2003036557A1 (en) Method and apparatus for background segmentation based on motion localization
CN106251348B (en) Self-adaptive multi-cue fusion background subtraction method for depth camera
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN104205826A (en) Apparatus and method for reconstructing high density three-dimensional image
CN104574331A (en) Data processing method, device, computer storage medium and user terminal
CN102457724B (en) Image motion detecting system and method
CN112233049B (en) Image fusion method for improving image definition
CN115298708A (en) Multi-view neural human body rendering
CN107295296A (en) A kind of selectively storage and restoration methods and system of monitor video
CN112785492A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112734914A (en) Image stereo reconstruction method and device for augmented reality vision
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
Wang et al. Joint framework for single image reconstruction and super-resolution with an event camera
Cho et al. Extrapolation-based video retargeting with backward warping using an image-to-warping vector generation network
Gutev et al. Exploiting depth information to increase object tracking robustness
DE102004026782A1 (en) Method and apparatus for computer-aided motion estimation in at least two temporally successive digital images, computer-readable storage medium and computer program element
CN111179281A (en) Human body image extraction method and human body action video extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant