WO2020114251A1 - Video splicing method and apparatus, electronic device, and computer storage medium - Google Patents

Video splicing method and apparatus, electronic device, and computer storage medium

Info

Publication number
WO2020114251A1
WO2020114251A1 (PCT/CN2019/119616)
Authority
WO
WIPO (PCT)
Prior art keywords
image
deformation
video
ratio
movement distance
Prior art date
Application number
PCT/CN2019/119616
Other languages
English (en)
French (fr)
Inventor
邓朔
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority application EP19892144.7A (published as EP3893513A4)
Publication of WO2020114251A1
Priority application US 17/184,258 (published as US 11,972,580 B2)

Classifications

    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 3/4038: Scaling of whole images or parts thereof; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/761: Image or video pattern matching; proximity, similarity or dissimilarity measures
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424: Splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/2743: Video hosting of uploaded data from client
    • H04N 21/44: Client-side processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
    • H04N 21/44016: Client-side splicing of one content stream with another content stream, e.g. for substituting a video clip
    • H04N 5/265: Studio circuits for mixing and special effects
    • G06T 2207/10016: Image acquisition modality: video; image sequence

Definitions

  • the present application relates to the technical field of video processing, and specifically to a video splicing method, apparatus, electronic device, and computer storage medium.
  • the cost of publishing user-shot videos has decreased, so many users upload the videos they shoot to the corresponding video platforms.
  • when a user shoots video non-linearly, that is, shoots multiple video clips with similar framing at the same location but at different times, the multiple clips are usually stitched together after shooting is completed, and the stitched video is then uploaded to the corresponding video platform.
  • between shots, the position of the photographer's smart terminal may change slightly, resulting in jitter at the junction of two video clips and greatly reducing the user's viewing experience.
  • This application provides a video splicing method, device, electronic equipment, and computer storage medium.
  • a video stitching method including:
  • the first image is the last frame of the first video to be stitched
  • the second image is the first frame of the second video to be stitched
  • at least one compensation frame between the first image and the second image is determined according to the motion vector, and the first image and the second image are spliced based on the at least one compensation frame, thereby splicing the first video to be spliced and the second video to be spliced.
  • a video splicing device including:
  • the detection module is used to detect the similarity between the first image and the second image, where the first image is the last frame image of the first video to be spliced and the second image is the first frame image of the second video to be spliced;
  • the determining module is configured to determine the motion vector of the first image relative to the second image when the similarity satisfies the preset condition;
  • the stitching module is used to determine at least one compensation frame between the first image and the second image according to the motion vector, and to stitch the first image and the second image based on the at least one compensation frame, thereby stitching the first video to be spliced and the second video to be spliced.
  • an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the program, the video stitching method described above is implemented.
  • a computer-readable storage medium is provided.
  • a computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the above video stitching method is implemented.
  • FIG. 1a is an application scenario diagram of a video splicing system according to an embodiment of this application
  • FIG. 1b is a schematic flowchart of a video splicing method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of splicing videos according to an embodiment of the present application.
  • FIG. 3a is a schematic diagram of an image deformation and motion process according to an embodiment of this application.
  • FIG. 3b is a schematic diagram of calculating a first difference value according to an embodiment of the present application.
  • FIG. 3c is a schematic diagram of calculating a second difference value according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of motion vectors according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the transition from a first image to a second image according to an embodiment of the application
  • FIG. 7 is a schematic diagram of a basic structure of a video splicing device according to yet another embodiment of the application.
  • FIG. 8 is a detailed schematic structural diagram of a video splicing device according to another embodiment of the application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to another embodiment of this application.
  • the traditional video frame-interpolation method usually involves a large amount of calculation and a long running time, and its usage scenarios lean toward traditional video post-processing; if it is applied on the mobile device side, the hardware in the mobile device needs to support this function, but only a few hardware manufacturers currently do, and the algorithm is relatively fixed, which imposes many restrictions on mobile devices and usage scenarios.
  • if the traditional frame-interpolation method is applied to a mobile device, real-time processing will be difficult to achieve due to the device's limited computing power, resulting in poor real-time performance and forcing users to wait. Therefore, there is an urgent need for a method for stitching non-linearly shot videos on the mobile device side.
  • FIG. 1a is an application scenario diagram of a video splicing system according to an embodiment of the present application.
  • the video splicing system 100 includes: a target object 101, an electronic device 102, and a user 103.
  • the user 103 uses a camera 1021 installed on the electronic device 102 at a certain location to photograph the target object 101 to obtain multiple video clips.
  • the video splicing device 1022 on the electronic device 102 is started to splice multiple video clips.
  • the method described in the embodiment of the present application is used for splicing, so as to obtain a continuous video file based on the captured multiple video clips.
  • the user can start the video application 1023 on the electronic device 102 to upload the stitched video file to the video platform for sharing.
  • An embodiment of the present application provides a video splicing method.
  • the video splicing method may be executed by an electronic device.
  • the electronic device may be the electronic device 102 shown in FIG. 1a.
  • the method includes:
  • Step S110 Detect the similarity between the first image and the second image.
  • the first image is the last frame image of the first video to be stitched
  • the second image is the first frame image of the second video to be stitched.
  • the video splicing method of the embodiment of the present application is mainly suited to situations where the boundary frames of the video clips to be spliced are similar (for example, satisfy a preset similarity condition); FIG. 2 is a schematic diagram of splicing videos according to an embodiment of the present application. If the difference between the boundary frames is too large, subsequent motion vector estimation and interpolation compensation cannot be performed. Therefore, in the preprocessing stage, a similarity evaluation of the boundary frames needs to be performed to ensure that running the stitching algorithm is meaningful.
  • a similarity estimation method may be used to evaluate the similarity of the boundary frames, for example, to detect whether the similarity between the last frame image of the first video to be stitched and the first frame image of the second video to be stitched satisfies the preset condition; if it does, the subsequent steps of the video stitching method of this application (that is, steps S120 and S130) can be performed to smoothly stitch the multiple videos.
  • the method of this application can then be used to perform anti-shake interpolation frame compensation, making the transition between video A and video B smooth, while usage scenarios that do not meet the stitching conditions are effectively filtered out.
  • Step S120 if the similarity satisfies the preset condition, determine the motion vector of the first image relative to the second image.
  • determining the motion vector of the first image relative to the second image means estimating the motion trajectory between the first image and the second image, for example the movement distance and movement direction, thereby laying the necessary foundation for smooth stitching of the first and second videos to be spliced.
  • Step S130 Determine at least one compensation frame between the first image and the second image according to the motion vector, and stitch the first image and the second image based on the at least one compensation frame to splice the first to-be-spliced video and the second to-be-spliced video.
  • at least one compensation frame between the first image and the second image may be determined according to the determined motion vector, and the first image is compensated and evolved according to the at least one compensation frame, so that the first image transitions slowly and smoothly to the second image; this realizes the splicing of the first image and the second image, thereby splicing the first video to be spliced and the second video to be spliced.
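As an illustration of how compensation frames let the first image transition gradually into the second, the following sketch generates k intermediate frames by simple linear cross-fading between the two images (represented here as flat lists of gray values). This is an assumed stand-in: the patent derives its compensation frames from the estimated motion vector, not from a plain cross-fade.

```python
def compensation_frames(img_a, img_b, k):
    """Generate k intermediate frames fading linearly from img_a to img_b.

    img_a and img_b are flat lists of gray values of equal length.
    Illustrative only: the patent's compensation frames are determined
    from the motion vector rather than by cross-fading.
    """
    frames = []
    for step in range(1, k + 1):
        t = step / (k + 1)  # interpolation weight, strictly inside (0, 1)
        frames.append([round((1 - t) * a + t * b)
                       for a, b in zip(img_a, img_b)])
    return frames

# One compensation frame halfway between two 2-pixel frames
print(compensation_frames([0, 100], [100, 200], 1))  # → [[50, 150]]
```

Inserting these frames at the junction gives the viewer a gradual transition instead of an abrupt jump between the two clips.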
  • the video stitching method above detects the similarity between the first image and the second image, determines the motion vector of the first image relative to the second image when the similarity meets the preset condition, determines at least one compensation frame between the two images according to the motion vector, and stitches the first image and the second image based on the at least one compensation frame. This provides a method for stitching non-linearly shot videos on the mobile side: multi-segment video is stitched efficiently and in real time on the mobile device, and jitter at the joint is smoothly compensated with high quality, so that the segments transition smoothly. This effectively reduces image jitter or image jumps after stitching and greatly improves the user experience of splicing, publishing, and viewing videos. Moreover, it can cover terminal devices running Android, iOS, and other operating systems without requiring specific hardware support in the terminal device, avoiding dependence on hardware manufacturers and overcoming the restrictions that traditional frame-interpolation methods impose on mobile devices and usage scenarios.
  • the embodiment of the present application provides another possible implementation manner, wherein the similarity meets a preset condition, including: the similarity is not less than a preset similarity threshold.
  • detecting the similarity between the first image and the second image includes:
  • the first to-be-spliced video is video A, and the last frame image of the first to-be-spliced video is I_a;
  • the second to-be-spliced video is video B, and the first frame image of the second to-be-spliced video is I_b;
  • a preset similarity threshold is used to determine whether the similarity between I_a and I_b satisfies the preset condition.
  • histograms H_a (i.e., the above-described first grayscale histogram) and H_b (i.e., the above-described second grayscale histogram) are determined for I_a and I_b respectively, and the similarity between I_a and I_b is determined according to the weight of each gray level of H_a, the number of pixels at each gray level of H_a, and the number of pixels at each gray level of H_b.
  • Step 1: Determine the grayscale histogram H_a of I_a and the grayscale histogram H_b of I_b.
  • the gray levels of H_a and H_b are distributed over [0, 255], representing luminance from dark to light and corresponding to colors in the image from black to white, i.e., white is 255 and black is 0.
  • a grayscale histogram counts all the pixels in a digital image according to their gray values, recording the frequency of occurrence of each gray level.
  • the grayscale histogram is a function of gray level: it gives the number of pixels in the image having each gray level and reflects how frequently each gray level occurs in the image.
  • Step 2: Calculate the weight of each gray level of H_a (denoted ω_i, where i ranges from 0 to 255).
  • the ratio of the number of pixels corresponding to the grayscale to the total number of pixels is used as the weight of the grayscale.
  • the weight ω_i of each gray level of H_a can be calculated as ω_i = H_a(i) / N, where H_a(i) is the number of pixels in I_a with gray value i and N is the total number of pixels in I_a.
  • the similarity between the grayscale histograms is used to evaluate the similarity between I_a and I_b.
  • I_a is used as the reference image; gray-level ranges holding more of the distribution carry a more concentrated share of the image's main information, and this embodiment uses this information to set the weight of each gray-level range, ensuring the stability of the video stitching method.
  • Step 3: Evaluate the similarity between I_a and I_b, where the similarity is recorded as S.
  • the similarity formula (given in the patent's equations and not reproduced in this text) is normalized so that S ∈ [0, 1]; in it, H_b(i) represents the number of pixels with gray value i in I_b, and a larger S value represents a higher degree of similarity.
  • the calculated similarity value S may be compared with a preset similarity threshold (for example, 0.87); if S is not less than the preset similarity threshold, it is determined that the similarity between I_a and I_b satisfies the preset condition.
  • the similarity evaluation of the front and back frames ensures the operational significance of the stitching algorithm and effectively filters the stitching scenes that do not meet the conditions of use.
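The three steps above can be sketched as follows. The exact similarity formula appears only in the patent's equations (not reproduced in this text), so the per-gray-level agreement term min/max used below is an illustrative assumption; the ω_i weighting from Step 2 and the normalized range S ∈ [0, 1] follow the description.

```python
def grayscale_histogram(pixels):
    """Count how many pixels fall at each gray level 0..255 (Step 1)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    return hist

def similarity(pixels_a, pixels_b):
    """Weighted histogram similarity S in [0, 1] (Steps 2 and 3).

    The weight ω_i of gray level i is its share of pixels in the
    reference image I_a; the per-level agreement term min/max is an
    assumed stand-in for the patent's exact formula.
    """
    ha, hb = grayscale_histogram(pixels_a), grayscale_histogram(pixels_b)
    total = len(pixels_a)
    s = 0.0
    for i in range(256):
        if ha[i] == 0 and hb[i] == 0:
            continue
        weight = ha[i] / total  # ω_i from Step 2
        s += weight * min(ha[i], hb[i]) / max(ha[i], hb[i])
    return s  # a larger S means a higher degree of similarity

# Identical images score 1.0; S is then compared with the threshold
img = [0, 0, 128, 255]
print(similarity(img, img))  # → 1.0
```

Since the ω_i sum to 1 and each agreement term is at most 1, S stays within [0, 1], matching the normalized range described above.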
  • In step S120, the motion vector of the first image relative to the second image is determined.
  • the motion vector is determined according to the first movement distance, the second movement distance, and the preset deformation ratio.
  • the embodiment of the present application assumes that the second image is fixed and only the first image moves, which is equivalent to assuming the second image is derived from the movement of the first image; therefore the motion vector of the first image needs to be calculated.
  • the preset deformation ratio includes a horizontal deformation ratio and a vertical deformation ratio, and the horizontal deformation ratio is the same as or different from the vertical deformation ratio.
  • the deformation specifically refers to stretching or shortening the image in a horizontal or vertical direction at a certain ratio.
  • the specific value of the preset deformation ratio will directly affect the estimation accuracy of the motion vector.
  • for example, if the lateral deformation ratio is 1, the lateral width remains unchanged from the original width; if the vertical height becomes 10, the longitudinal deformation ratio is 10 divided by the original height.
  • the original width and the original height of the image are attributes of the image itself, that is, the original width and the original height can be known after giving a frame of image.
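Assuming the deformation ratio means the deformed size divided by the original size (as the example above suggests), the deformation step might be sketched as a simple nearest-neighbour resize. The patent does not prescribe a particular resampling method, so this is only an assumption:

```python
def deform(image, x_ratio, y_ratio):
    """Nearest-neighbour rescale of a 2-D image (a list of rows) by the
    lateral ratio x_ratio and the longitudinal ratio y_ratio.

    Illustrative sketch: the resampling method is an assumption, since
    the patent only specifies stretching/shortening by a given ratio.
    """
    h, w = len(image), len(image[0])
    new_h = max(1, round(h * y_ratio))
    new_w = max(1, round(w * x_ratio))
    return [[image[int(r * h / new_h)][int(c * w / new_w)]
             for c in range(new_w)]
            for r in range(new_h)]

img = [[1, 2], [3, 4]]
# Lateral ratio 1 keeps the width; longitudinal ratio 0.5 halves the height
print(deform(img, 1, 0.5))  # → [[1, 2]]
```

The same function covers both the lateral deformation (varying x_ratio) and the longitudinal deformation (varying y_ratio) used below.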
  • first image and the second image may be respectively laterally deformed according to the lateral deformation ratio to obtain corresponding first laterally deformed images and second laterally deformed images.
  • since the first image and the second image are frames of videos with the same background shot by the same terminal device, the original width of the first image is the same as the original width of the second image, and the original height of the first image is the same as the original height of the second image;
  • accordingly, the width of the first laterally deformed image is the same as the width of the second laterally deformed image, and the height of the first laterally deformed image is the same as the height of the second laterally deformed image.
  • the obtained first laterally deformed image and second laterally deformed image can be set parallel in the horizontal direction and aligned at both ends, as shown in part (1) of FIG. 3a; this is the initial position of the first and second laterally deformed images.
  • first image and the second image may be longitudinally deformed respectively according to the longitudinal deformation ratio to obtain corresponding first longitudinally deformed images and second longitudinally deformed images.
  • since the first image and the second image are frames of videos with the same background shot by the same terminal device, the original width of the first image is the same as the original width of the second image, and the original height of the first image is the same as the original height of the second image;
  • accordingly, the width of the first longitudinally deformed image is the same as the width of the second longitudinally deformed image, and the height of the first longitudinally deformed image is the same as the height of the second longitudinally deformed image.
  • the obtained first longitudinally deformed image and second longitudinally deformed image may be set parallel in the vertical direction and aligned at both ends, as shown in part (1) of FIG. 3a.
  • the first horizontally deformed image may be moved sequentially in the horizontal direction; after each movement, the two sub-images of the first and second horizontally deformed images that correspond to each other in the vertical direction are determined, the first difference value between them is calculated, and the total number of pixels the first horizontally deformed image has moved relative to the initial position is recorded.
  • X is a positive integer.
  • the above X is the total number of pixels the first horizontally deformed image can move in the horizontal direction before the first and second horizontally deformed images no longer have corresponding parts in the vertical direction.
  • the above-mentioned movement in the horizontal direction may be a horizontal left movement, for example as shown in (2) in FIG. 3a, or a horizontal right movement, which is not limited in the embodiment of the present application.
  • if the first horizontally deformed image is shifted left in the horizontal direction m pixels at a time, 1 ≤ m ≤ X (for example, 1 pixel the first time, 2 more pixels the second time, 1 more pixel the third time, and so on), with all movements summing to X pixels, then a difference value is calculated after each movement, i.e., the first difference value between the two sub-images of the first and second horizontally deformed images that correspond to each other in the vertical direction.
  • if a total of L movements are made, a total of L difference values are calculated, where the values calculated each time may be the same or different; the difference value referred to in this paragraph is the first difference value.
  • alternatively, if the first horizontally deformed image is shifted left in the horizontal direction with a number of pixels per movement that is not fixed (for example, 1 pixel the first time, 2 more pixels the second time, 4 more pixels the third time, and so on), i.e., the number of pixels per movement changes dynamically, with all movements summing to X pixels, then the first difference value between the two vertically corresponding sub-images of the first and second horizontally deformed images is calculated after each movement.
  • if a total of Q movements are made, a total of Q difference values are calculated, where the values calculated each time may be the same or different; the difference value referred to in this paragraph is the first difference value.
  • the minimum of the first difference values is determined, and the first difference value corresponding to that minimum is recorded as the first target difference value; that is, the smallest of the calculated difference values is determined to be the first target difference value.
  • the total number of moved pixels corresponding to the first target difference value is determined, and this total is taken as the first movement distance of the first image in the horizontal direction. For example, if the first movement is 1 pixel, the second movement adds 2 pixels, the third movement adds 4 pixels, and so on until X pixels have been moved in total, and the determined first target difference value is the difference value of the vertically corresponding portions of the first and second horizontally deformed images after the third movement, then the total number of moved pixels corresponding to the first target difference value is 1 + 2 + 4 = 7, so the first movement distance of the first image in the horizontal direction is 7 pixels.
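The horizontal shift-and-compare search described above can be sketched as follows. The images are collapsed to single rows for brevity, and the sum of absolute differences over the overlapping parts is used as the difference value; the patent does not fix a particular difference metric here, so that choice is an assumption:

```python
def first_movement_distance(row_a, row_b, max_shift):
    """Slide row_a left over row_b one pixel at a time, score each
    overlap by the sum of absolute differences (an assumed metric),
    and return the total shift with the smallest difference value.

    row_a / row_b stand in for the first and second horizontally
    deformed images, collapsed to one row for brevity.
    """
    best_shift, best_diff = 0, None
    for shift in range(1, max_shift + 1):
        # Overlapping parts after moving row_a left by `shift` pixels
        overlap_a = row_a[shift:]
        overlap_b = row_b[:len(row_b) - shift]
        diff = sum(abs(a - b) for a, b in zip(overlap_a, overlap_b))
        if best_diff is None or diff < best_diff:
            best_shift, best_diff = shift, diff
    return best_shift  # first movement distance, in pixels

# row_b equals row_a shifted left by 2, so the search recovers 2
row_a = [0, 0, 10, 20, 30, 40]
row_b = [10, 20, 30, 40, 0, 0]
print(first_movement_distance(row_a, row_b, 4))  # → 2
```

The vertical search for the second movement distance is the same idea with the roles of the axes swapped.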
  • the first longitudinally deformed image may be moved sequentially in the vertical direction; after each movement, the two sub-images of the first and second longitudinally deformed images that correspond to each other in the horizontal direction are determined, the second difference value between them is calculated, and the total number of pixels the first longitudinally deformed image has moved is recorded.
  • Y is a positive integer; the above Y is the total number of pixels the first longitudinally deformed image can move in the vertical direction before the first and second longitudinally deformed images no longer have corresponding parts in the horizontal direction.
  • the above-mentioned movement in the vertical direction may be vertically downward, for example as shown in (3) in FIG. 3a, or vertically upward, which is not limited in the embodiment of the present application. The following takes vertical upward movement as an example:
  • if the first longitudinally deformed image is moved up in the vertical direction n pixels at a time, 1 ≤ n ≤ Y (for example, 1 pixel the first time, 2 more pixels the second time, 1 more pixel the third time, and so on), with all movements summing to Y pixels, then a difference value is calculated after each movement, i.e., the second difference value between the two sub-images of the first and second longitudinally deformed images that correspond to each other (i.e., can overlap) in the horizontal direction after the movement.
  • if a total of R movements are made, a total of R difference values are calculated, where the values calculated each time may be the same or different; the difference value referred to in this paragraph is the second difference value.
  • in another case, the first longitudinally deformed image is moved up step by step in the vertical direction relative to the second longitudinally deformed image, and the number of pixels per move is not fixed (for example, the first move is 1 pixel, the second move adds 2 more pixels on top of the first, the third adds 4 more on top of the second, and so on); that is, the number of pixels per move changes dynamically, and the sum over all moves is Y pixels. A difference value is then calculated after each move of the first longitudinally deformed image, that is, the difference value between the parts of the moved first longitudinally deformed image and the second longitudinally deformed image that correspond in the horizontal direction. If a total of P moves are made, a total of P difference values are calculated; they may be the same or different. The difference values referred to in this paragraph are all second difference values.
  • the minimum among the second difference values is recorded as the second target difference value; that is, the difference value with the smallest value is determined from the multiple difference values and taken as the second target difference value.
  • the total number of moved pixels corresponding to the second target difference value is determined, and this total is taken as the second movement distance of the first image in the vertical direction. Suppose the first move is 1 pixel, the second move adds 2 more pixels on top of the first, the third adds 4 more on top of the second, and so on, a total of Y pixels are moved, and the determined second target difference value is the difference value of the horizontally corresponding parts of the first longitudinally deformed image after the third move and the second longitudinally deformed image.
  • then the total number of moved pixels corresponding to the second target difference value is 7 (1 + 2 + 4, the total of the first three moves), so the second movement distance of the first image in the vertical direction is 7 pixels.
  • the motion vector is determined according to the first movement distance, the second movement distance, and the preset deformation ratio.
  • the first movement distance and the second movement distance need to be adjusted in the reverse direction according to the corresponding deformation ratios.
  • the reverse adjustment of the first movement distance is: calculating the first ratio of the first movement distance to the lateral deformation ratio;
  • the reverse adjustment of the second movement distance is: calculating the second ratio of the second movement distance to the longitudinal deformation ratio.
  • the first ratio is the first movement distance adjusted in the reverse direction according to the lateral deformation ratio;
  • the second ratio is the second movement distance adjusted in the reverse direction according to the longitudinal deformation ratio.
  • the motion vector may be determined according to the first ratio and the second ratio.
  • the motion vector is the sum of the direction vector of the first ratio and the direction vector of the second ratio.
  • the first difference value diff_m between the two sub-images of the first horizontally deformed image and the second horizontally deformed image that correspond in the vertical direction can be calculated by a formula with the following meaning:
  • width in the formula represents the width of the first image and the second image after horizontal deformation; one term denotes the sub-image of the first horizontally deformed image, after it is shifted m points to the left in the horizontal direction, that corresponds to the second horizontally deformed image in the vertical direction; the other term denotes the sub-image of the second horizontally deformed image that corresponds in the vertical direction to the first horizontally deformed image shifted m points to the left. The two sub-images have the same size, and the function computes the difference between corresponding pixels of the two sub-images and normalizes it.
  • the second difference value diff_n between the two sub-images of the first longitudinally deformed image and the second longitudinally deformed image that correspond in the horizontal direction can be calculated analogously:
  • height in the formula represents the height of the first image and the second image after longitudinal deformation; one term denotes the sub-image of the first longitudinally deformed image, after it is shifted by n points in the vertical direction, that corresponds to the second longitudinally deformed image in the horizontal direction; the other term denotes the sub-image of the second longitudinally deformed image that corresponds in the horizontal direction to the first longitudinally deformed image shifted by n points. The two sub-images have the same size, and the function computes the difference between corresponding pixels of the two sub-images and normalizes it.
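The difference formulas above reduce to a simple procedure: shift one deformed image past the other, measure a normalized per-pixel difference over the overlap at each shift, and keep the shift whose difference is smallest. A minimal Python sketch for the vertical case follows; since the patent's formula images are not reproduced in this text, the choice of a mean absolute difference as the normalized difference function is an assumption of the sketch, and the function name `best_shift` is hypothetical.

```python
def best_shift(img1, img2, max_shift):
    """Move img1 up past img2 one row at a time and return the shift whose
    horizontally corresponding sub-images differ the least.

    img1, img2: equal-sized grayscale images as lists of rows of 0-255 ints.
    max_shift: largest shift to try (Y in the text); must be < len(img1).
    """
    diffs = []
    for n in range(1, max_shift + 1):
        sub1 = img1[n:]               # img1 after moving up by n rows
        sub2 = img2[:len(img2) - n]   # the part of img2 it now overlaps
        total = sum(abs(a - b)
                    for row1, row2 in zip(sub1, sub2)
                    for a, b in zip(row1, row2))
        area = len(sub1) * len(sub1[0])
        diffs.append(total / (area * 255.0))   # normalize to [0, 1]
    # The smallest difference marks the target difference value;
    # its shift is the movement distance in pixels.
    n_best = min(range(len(diffs)), key=diffs.__getitem__) + 1
    return n_best, diffs
```

The horizontal case is identical with columns in place of rows, and the variable-step variant described in the text simply iterates over a different set of candidate shifts.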
  • the obtained multiple first difference values can also be plotted in a rectangular coordinate system and connected in sequence to form a curve; the minimum of the curve is the first target difference value.
  • likewise, the multiple second difference values can be plotted in a rectangular coordinate system and connected in sequence to form a curve; the minimum of the curve is the second target difference value, as shown in FIG. 4.
  • the motion vector of the first image can be calculated from:
  • the first ratio, representing the ratio of the first movement distance to the lateral deformation ratio, and
  • the second ratio, representing the ratio of the second movement distance to the longitudinal deformation ratio.
  • the direction the vector points in is the movement direction of the first image, its angle is recorded as θ, and its length is the movement distance, as shown in FIG. 5.
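In code, composing the two ratios into the motion vector can be sketched as follows; this is a minimal illustration, and the function name and the use of `atan2`/`hypot` to recover the angle θ and the length are choices of this sketch, not specified by the patent.

```python
import math

def motion_vector(dist_x, dist_y, ratio_x, ratio_y):
    """Combine the two per-axis movement distances into the motion vector.

    dist_x, dist_y: first and second movement distances, measured on the
    deformed images; ratio_x, ratio_y: the preset lateral and longitudinal
    deformation ratios used to deform them.
    """
    vx = dist_x / ratio_x   # first ratio: distance rescaled back to the original image
    vy = dist_y / ratio_y   # second ratio
    length = math.hypot(vx, vy)   # movement distance of the first image
    theta = math.atan2(vy, vx)    # movement direction, the angle theta of FIG. 5
    return (vx, vy), length, theta
```

Dividing by the deformation ratios is the "reverse adjustment" of the text: it undoes the scaling applied before the shift search.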
  • the summed difference between the two deformed images is calculated, which characterizes the relative distance between the vectors corresponding to the two sub-images.
  • calculating the difference value based on this relative distance increases the calculation speed.
  • An embodiment of the present application provides another possible implementation manner, in which, when determining at least one compensation frame between the first image and the second image according to the motion vector, the compensation frame in any compensation time interval of a predetermined compensation duration may be determined according to the motion vector.
  • in this way, the first image can be compensated through gradual evolution.
  • the specific compensation strategy is: first determine the first preset parameter in any compensation time interval; then calculate the first product between the first preset parameter and the vector of the second image; then calculate the second product between the second preset parameter and the vector of the third image, where the second preset parameter is the difference between a preset value and the first preset parameter, and the third image is the overlapping part between the first image, after moving according to the motion vector, and the second image; then, according to the first product and the second product, determine the image frame in the compensation time interval, and render the image frame to obtain the compensation frame.
  • in this way, corresponding compensation frames can be obtained for the multiple compensation time intervals of the predetermined compensation duration, so that the first image and the second image are stitched based on the plurality of compensation frames, thereby stitching the first video to be stitched with the second video to be stitched.
  • α in the formula is the above-mentioned first preset parameter;
  • T is a certain compensation time interval;
  • α is proportional to the time parameter T;
  • the preset value can be set to 1;
  • I_interpolation denotes the calculated compensation frame.
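Written out, the compensation rule above is a per-pixel linear blend. The sketch below assumes that the "vector" of an image means its pixel values and that α ramps linearly over the compensation duration; both are assumptions of this illustration, as is the function name.

```python
def compensation_frame(third_img, second_img, t, duration):
    """Compute the compensation frame at time t within the compensation
    duration: I_interpolation = alpha * I_second + (1 - alpha) * I_third.

    third_img: overlap of the moved first image with the second image
    second_img: the second image (first frame of the second video)
    Both are equal-sized grayscale images given as lists of rows of numbers.
    """
    alpha = t / duration    # first preset parameter, proportional to T
    beta = 1.0 - alpha      # second preset parameter: preset value 1 minus alpha
    return [[alpha * p2 + beta * p3 for p2, p3 in zip(row2, row3)]
            for row2, row3 in zip(second_img, third_img)]
```

At t = 0 this returns the moved first image; at t = duration it returns the second image, giving the gradual transition of FIG. 6.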
  • a GPU (Graphics Processing Unit) can be used to render the at least one obtained compensation frame, so that the first image transitions into the second image, that is, the moved first image is fused with the second image, as shown in FIG. 6.
  • the method in this step does not require global information, so the algorithm is parallelizable and can be combined with rendering on the mobile device side to achieve real-time rendering and encoding.
  • compared with traditional template transition effects, the method provided in the embodiments of the present application takes advantage of GPU rendering and synthesis on the mobile device side to smoothly compensate for the jitter between two videos so that the two videos transition smoothly.
  • the method of this embodiment is more adaptable; at the same time, because the GPU accelerates the algorithm, its jitter compensation is real-time and its impact on the user is small, guaranteeing the user's experience to the maximum extent.
  • the design on the mobile device side is based on rapidity and accuracy of compensation, and uses the mobile device's GPU to accelerate the algorithm.
  • this method covers Android mobile devices and iOS (Apple) mobile devices, and can cover most users as far as possible.
  • the device 70 may include a detection module 71, a determination module 72, and a splicing module 73.
  • the detection module 71 is used to detect the similarity between the first image and the second image, the first image is the last frame image of the first video to be stitched, and the second image is the first frame image of the second video to be stitched;
  • the determining module 72 is used to determine the motion vector of the first image relative to the second image when the similarity satisfies the preset condition;
  • the stitching module 73 is used to determine at least one compensation frame between the first image and the second image according to the motion vector, and to stitch the first image and the second image based on the at least one compensation frame, so as to stitch the first video to be stitched and the second video to be stitched.
  • FIG. 8 is a detailed schematic structural diagram of a video splicing processing device according to yet another embodiment of the present application.
  • the device 80 may include a detection module 81, a determination module 82, and a splicing module 83.
  • the detection module 81 in FIG. 8 implements the same function as the detection module 71 in FIG. 7
  • the determination module 82 in FIG. 8 implements the same function as the determination module 72 in FIG. 7
  • the splicing module 83 in FIG. 8 implements the same function as the splicing module 73 in FIG. 7, and details are not repeated here.
  • the video splicing device shown in FIG. 8 is described in detail below:
  • the similarity degree satisfies the preset condition, including: the similarity degree is not less than the preset similarity threshold.
  • the detection module 81 is specifically configured to determine the first grayscale histogram of the first image and the second grayscale histogram of the second image; to determine the weight of each gray level of the first grayscale histogram; and to determine the similarity between the first image and the second image according to the determined weights, the number of pixels corresponding to each gray level in the first grayscale histogram, and the number of pixels corresponding to each gray level in the second grayscale histogram.
  • the determination module 82 includes a deformation submodule 721, a first determination submodule 722, a second determination submodule 723, and a third determination submodule 724;
  • the deformation submodule 721 is configured to perform horizontal deformation and vertical deformation on the first image and the second image based on the preset deformation ratio, respectively;
  • the first determining submodule 722 is configured to determine the first movement distance of the first image in the horizontal direction according to the horizontal movement distance of the first image after the horizontal deformation relative to the second image after the horizontal deformation;
  • the second determining submodule 723 is used to determine the second movement distance of the first image in the vertical direction according to the vertical movement distance of the first image after the longitudinal deformation relative to the second image after the longitudinal deformation;
  • the third determination submodule 724 is used to determine the motion vector according to the first movement distance, the second movement distance, and the preset deformation ratio.
  • the preset deformation ratio includes a horizontal deformation ratio and a vertical deformation ratio
  • the deformation submodule 721 includes a first deformation unit 7211 and a second deformation unit 7212;
  • the first deformation unit 7211 is configured to perform lateral deformation on the first image and the second image respectively according to the lateral deformation ratio to obtain corresponding first lateral deformation images and second lateral deformation images;
  • the second deformation unit 7212 is configured to respectively perform longitudinal deformation on the first image and the second image according to the longitudinal deformation ratio to obtain corresponding first longitudinal deformation images and second longitudinal deformation images.
  • the first deformation unit 7211 is specifically configured to: set the first horizontally deformed image and the second horizontally deformed image parallel in the horizontal direction with both ends aligned; move the first horizontally deformed image step by step in the horizontal direction and, after each move, determine the two sub-images of the moved first horizontally deformed image and the second horizontally deformed image that correspond in the vertical direction, calculate the first difference value between the two sub-images, and determine the total number of pixels the first horizontally deformed image has moved; determine the minimum among the first difference values and record the first difference value corresponding to the minimum as the first target difference value; and determine the total number of moved pixels corresponding to the first target difference value as the first movement distance of the first image in the horizontal direction.
  • the second deformation unit 7212 is specifically configured to: set the first longitudinally deformed image and the second longitudinally deformed image parallel in the vertical direction with both ends aligned; move the first longitudinally deformed image step by step in the vertical direction and, after each move, determine the two sub-images of the moved first longitudinally deformed image and the second longitudinally deformed image that correspond in the horizontal direction, calculate the second difference value between the two sub-images, and determine the total number of pixels the first longitudinally deformed image has moved; determine the minimum among the second difference values and record the second difference value corresponding to the minimum as the second target difference value; and determine the total number of moved pixels corresponding to the second target difference value as the second movement distance of the first image in the vertical direction.
  • the third determination submodule 724 includes a first calculation unit 7241, a second calculation unit 7242, and a determination unit 7243;
  • the first calculation unit 7241 is used to calculate a first ratio of the first movement distance to the lateral deformation ratio
  • the second calculation unit 7242 is used to calculate a second ratio of the second movement distance to the longitudinal deformation ratio
  • the determining unit 7243 is used to determine the motion vector according to the first ratio and the second ratio.
  • the splicing module 83 includes a fourth determination submodule 831, a first calculation submodule 832, a second calculation submodule 833, and a processing submodule 834;
  • the fourth determining submodule 831 is used to determine the first preset parameter in any compensation time interval
  • the first calculation submodule 832 is used to calculate a first product between the first preset parameter and the vector of the second image
  • the second calculation sub-module 833 is used to calculate a second product between the second preset parameter and the vector of the third image, where the third image is the overlapping portion between the first image and the second image after movement according to the motion vector;
  • the processing submodule 834 is configured to determine the image frame in the compensation time interval according to the first product and the second product, and render the image frame to obtain the compensation frame.
  • the stitching module 83 is configured to render the at least one compensation frame, so that the first image transitions into the second image.
  • the device provided by the embodiments of the present application determines the motion vector of the first image relative to the second image when the preset similarity condition is satisfied, determines at least one compensation frame between the first image and the second image according to the motion vector, and stitches the first image and the second image based on the at least one compensation frame, thereby providing a method for stitching non-linear video shots on the mobile device side that can run in real time on the mobile device.
  • the electronic device 900 shown in FIG. 9 includes a processor 901 and a memory 903.
  • the processor 901 is connected to the memory 903, for example, via the bus 902.
  • the electronic device 900 may further include a transceiver 904. It should be noted that in practical applications, the transceiver 904 is not limited to one, and the structure of the electronic device 900 does not constitute a limitation on the embodiments of the present application.
  • the processor 901 is applied in the embodiment of the present application, and is used to realize the functions of the detection module, the determination module, and the splicing module shown in FIG. 7 or FIG. 8.
  • the processor 901 may be a CPU, general-purpose processor, DSP, ASIC, FPGA, or other programmable logic device, transistor logic device, hardware component, or any combination thereof. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of the present application.
  • the processor 901 may also be a combination that realizes a calculation function, for example, includes one or more microprocessor combinations, a combination of a DSP and a microprocessor, and so on.
  • the bus 902 may include a path to transfer information between the above components.
  • the bus 902 may be a PCI bus, an EISA bus, or the like.
  • the bus 902 can be divided into an address bus, a data bus, and a control bus. For ease of representation, only a thick line is used in FIG. 9, but it does not mean that there is only one bus or one type of bus.
  • the memory 903 may be a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited to these.
  • the memory 903 is used to store application program codes for executing the solution of the present application, and is controlled and executed by the processor 901.
  • the processor 901 is used to execute the application program code stored in the memory 903 to implement the actions of the video splicing apparatus provided in the embodiment shown in FIG. 7 or FIG. 8.
  • the electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the program, compared with conventional technology, the following is achieved: the motion vector of the first image relative to the second image is determined when the preset condition is satisfied; at least one compensation frame between the first image and the second image is determined according to the motion vector; and the first image and the second image are stitched based on the at least one compensation frame. This provides a method for stitching non-linear video shots on the mobile device side.
  • Multi-segment videos can transition smoothly, ensuring that the videos uploaded by users are smoother and effectively reducing image jitter or image jumps after multi-segment video stitching, which greatly improves the user's video stitching, publishing, and viewing experience. The method covers terminal devices running operating systems such as Android and iOS, does not require support from specific hardware in the terminal device, avoids dependence on hardware manufacturers, and overcomes the limitations of traditional video frame-insertion methods regarding mobile devices and usage scenarios.
  • An embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the method shown in Embodiment 1 is implemented.
  • determining the motion vector of the first image relative to the second image when the preset condition is satisfied, determining at least one compensation frame between the first image and the second image according to the motion vector, and stitching the first image and the second image based on the at least one compensation frame provides a method for stitching non-linear video shots on the mobile device side: multiple videos can be stitched in real time and efficiently on the mobile device, the jitter at the junction can be smoothly compensated with high quality, multiple videos transition smoothly, the videos uploaded by users are smoother, image jitter or image jumps after multi-segment stitching are effectively reduced, the user's video stitching, publishing, and viewing experience is greatly improved, and terminal devices running Android, iOS, and other operating systems are covered.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the field of video processing and discloses a video stitching method and apparatus, an electronic device, and a computer-readable storage medium. The video stitching method includes: detecting the similarity between a first image and a second image, where the first image is the last frame of a first video to be stitched and the second image is the first frame of a second video to be stitched; then, if the similarity satisfies a preset condition, determining a motion vector of the first image relative to the second image; and then, according to the motion vector, determining at least one compensation frame between the first image and the second image, and stitching the first image and the second image based on the at least one compensation frame, so as to stitch the first video to be stitched and the second video to be stitched.

Description

Video stitching method and apparatus, electronic device, and computer storage medium
This application claims priority to Chinese Patent Application No. 201811496469.3, entitled "Video stitching method and apparatus, electronic device, and computer storage medium", filed with the Chinese Patent Office on December 7, 2018.
Technical Field
This application relates to the technical field of video processing, and in particular to a video stitching method and apparatus, an electronic device, and a computer storage medium.
Background
As data charges have fallen, the cost for users of publishing the videos they shoot has also fallen, so many users upload the videos they shoot to corresponding video platforms. If a user shoots video non-linearly, that is, shoots at the same location multiple video clips with similar framing, differing only in shooting time, then after shooting, before uploading these clips to a video platform, the user often stitches the multiple clips together and then uploads the stitched video to the platform.
However, when two video clips with similar framing are stitched, the position of the photographer's smart terminal may change slightly, causing jitter at the junction of the two videos and greatly degrading the viewing experience.
Summary
This application provides a video stitching method and apparatus, an electronic device, and a computer storage medium.
In one aspect, a video stitching method is provided, including:
detecting the similarity between a first image and a second image, where the first image is the last frame of a first video to be stitched and the second image is the first frame of a second video to be stitched;
if the similarity satisfies a preset condition, determining a motion vector of the first image relative to the second image; and
according to the motion vector, determining at least one compensation frame between the first image and the second image, and stitching the first image and the second image based on the at least one compensation frame, so as to stitch the first video to be stitched and the second video to be stitched.
In another aspect, a video stitching apparatus is provided, including:
a detection module, configured to detect the similarity between a first image and a second image, where the first image is the last frame of a first video to be stitched and the second image is the first frame of a second video to be stitched;
a determination module, configured to determine a motion vector of the first image relative to the second image when the similarity satisfies a preset condition; and
a stitching module, configured to determine at least one compensation frame between the first image and the second image according to the motion vector, and to stitch the first image and the second image based on the at least one compensation frame, so as to stitch the first video to be stitched and the second video to be stitched.
In another aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above video stitching method when executing the program.
In another aspect, a computer-readable storage medium is provided, storing a computer program that implements the above video stitching method when executed by a processor.
Additional aspects and advantages of this application will be given in part in the following description; they will become apparent from the description below, or be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1a is an application scenario diagram of a video stitching system according to an embodiment of this application;
FIG. 1b is a schematic flowchart of a video stitching method according to an embodiment of this application;
FIG. 2 is a schematic diagram of stitching videos according to an embodiment of this application;
FIG. 3a is a schematic diagram of the image deformation and movement process according to an embodiment of this application;
FIG. 3b is a schematic diagram of calculating the first difference value according to an embodiment of this application;
FIG. 3c is a schematic diagram of calculating the second difference value according to an embodiment of this application;
FIG. 4 is a schematic diagram of a difference-value curve according to an embodiment of this application;
FIG. 5 is a schematic diagram of a motion vector according to an embodiment of this application;
FIG. 6 is a schematic diagram of the first image transitioning into the second image according to an embodiment of this application;
FIG. 7 is a basic schematic structural diagram of a video stitching apparatus according to another embodiment of this application;
FIG. 8 is a detailed schematic structural diagram of a video stitching apparatus according to another embodiment of this application;
FIG. 9 is a schematic structural diagram of an electronic device according to another embodiment of this application.
Detailed Description
Embodiments of this application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain this application; they cannot be construed as limiting this application.
Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "the", and "said" used here may also include plural forms. It should be further understood that the word "include" used in the specification of this application refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used here may include a wireless connection or wireless coupling. The term "and/or" as used here includes all or any unit of, and all combinations of, one or more of the associated listed items.
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
The technical solutions of this application and how they solve the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of this application are described below with reference to the accompanying drawings.
Traditional video stitching is often based on video frame insertion, for example turning a 30 FPS (Frames Per Second) video into a 60 FPS video through frame-insertion compensation, and is mainly used for frame insertion within a single piece of video content. For non-linear video shooting, however, when two videos are stitched, the relationship between the preceding and following frames is uncertain, so the conditions for frame insertion cannot be fully satisfied.
In addition, traditional video frame-insertion methods usually involve a large amount of computation and a long running time, and their usage scenarios lean toward traditional video post-processing. Applying them on the mobile device side requires hardware in the mobile device that supports this function, but at present only very few hardware vendors support such functions, and the algorithms are relatively fixed, imposing many restrictions on mobile devices and usage scenarios. Moreover, even if traditional frame-insertion methods were applied on the mobile device side, the limited computing power of mobile devices would make real-time processing difficult, resulting in poor real-time performance and users having to wait. Therefore, a method for stitching non-linear video shots on the mobile device side is urgently needed.
FIG. 1a is an application scenario diagram of a video stitching system according to an embodiment of this application. As shown in FIG. 1a, the video stitching system 100 includes a target object 101, an electronic device 102, and a user 103. At a certain location, the user 103 shoots the target object 101 with the camera 1021 mounted on the electronic device 102, obtaining multiple video clips. The video stitching apparatus 1022 on the electronic device 102 is then started to stitch the multiple video clips. For example, each pair of adjacent videos to be stitched, "video 1" and "video 2", is stitched using the method described in the embodiments of this application, so that one continuous video file is obtained from the multiple shot clips. The user can then start the video application 1023 on the electronic device 102 and upload the stitched video file to a video platform for sharing.
An embodiment of this application provides a video stitching method, shown in FIG. 1b, which may be executed by an electronic device such as the electronic device 102 shown in FIG. 1a. The method includes:
Step S110: detect the similarity between a first image and a second image, where the first image is the last frame of a first video to be stitched and the second image is the first frame of a second video to be stitched.
Specifically, the video stitching method of this embodiment mainly applies to cases where the adjacent frames of the video clips to be stitched are similar (for example, satisfy a preset similarity condition); FIG. 2 is a schematic diagram of stitching videos according to an embodiment of this application. If the adjacent frames differ too much, the subsequent motion vector estimation and frame-insertion compensation cannot be performed. Therefore, in the preprocessing stage, the similarity of the adjacent frames must be evaluated to ensure that running the stitching algorithm is meaningful.
Further, in a specific application, a similarity estimation method may be used to evaluate the adjacent frames, for example by detecting whether the similarity between the last frame of the first video to be stitched and the first frame of the second video to be stitched satisfies a preset condition. If the preset condition is satisfied, the subsequent steps of the video stitching method of this application (steps S120 and S130) can proceed, smoothly stitching multiple videos. In other words, when a user needs to stitch video A and video B, if the tail frame of video A is strongly correlated with the first frame of video B, the method of this application can be used to perform anti-jitter frame-insertion compensation, so that video A and video B transition smoothly, effectively filtering out usage scenarios that do not meet the stitching conditions.
Step S120: if the similarity satisfies the preset condition, determine the motion vector of the first image relative to the second image.
Specifically, when the preset similarity condition between the first image and the second image is satisfied, the motion vector of the first image relative to the second image is determined; that is, the motion trajectory between the first image and the second image, such as the movement distance and movement direction, is estimated, laying the necessary foundation for subsequently stitching the first video and the second video smoothly.
Step S130: according to the motion vector, determine at least one compensation frame between the first image and the second image, and stitch the first image and the second image based on the at least one compensation frame, so as to stitch the first video to be stitched and the second video to be stitched.
Specifically, after the motion vector of the first image relative to the second image is determined, at least one compensation frame between the first image and the second image can be determined according to the motion vector, and the first image can be compensated through gradual evolution according to the at least one compensation frame, so that the first image slowly and smoothly transitions into the second image, realizing the stitching of the first image and the second image and thereby stitching the first video and the second video.
Compared with conventional technology, the video stitching method provided in the embodiments of this application determines the motion vector of the first image relative to the second image when the preset condition is satisfied, determines at least one compensation frame between the first image and the second image according to the motion vector, and stitches the first image and the second image based on the at least one compensation frame. It provides a method for stitching non-linear video shots on the mobile device side: multiple videos can be stitched in real time and efficiently on the mobile device, the jitter at the junction can be smoothly compensated with high quality, multiple videos transition smoothly, the videos uploaded by users are smoother, image jitter or image jumps after multi-segment stitching are effectively reduced, and the user's video stitching, publishing, and viewing experience is greatly improved. Moreover, the method covers terminal devices running Android, iOS, and other operating systems, needs no support from specific hardware in the terminal device, avoids dependence on hardware manufacturers, and overcomes the limitations of traditional frame-insertion methods regarding mobile devices and usage scenarios.
An embodiment of this application provides another possible implementation, in which the similarity satisfying the preset condition includes: the similarity is not less than a preset similarity threshold.
Detecting the similarity between the first image and the second image includes:
determining a first grayscale histogram of the first image and a second grayscale histogram of the second image;
determining the weight of each gray level of the first grayscale histogram; and
determining the similarity between the first image and the second image according to the determined weights, the number of pixels corresponding to each gray level in the first grayscale histogram, and the number of pixels corresponding to each gray level in the second grayscale histogram.
Specifically, suppose the first video to be stitched is video A with tail frame image I_a, and the second video to be stitched is video B with first frame image I_b. When detecting whether the similarity between the first image and the second image satisfies the preset condition, the similarity between I_a and I_b can be determined and checked against the preset similarity threshold: if the similarity between I_a and I_b is not less than (that is, greater than or equal to) the preset similarity threshold, the similarity satisfies the preset condition and the subsequent steps can proceed; otherwise the preset condition is not satisfied and the subsequent steps cannot proceed.
Further, the similarity between I_a and I_b can be determined from their respective grayscale histograms H_a (the above first grayscale histogram) and H_b (the above second grayscale histogram); specifically, from the weight of each gray level in H_a, the number of pixels corresponding to each gray level in H_a, and the number of pixels corresponding to each gray level in H_b.
The determination of the similarity between the first image and the second image is introduced below with a specific example:
Step 1: determine the grayscale histogram H_a of I_a and the grayscale histogram H_b of I_b.
Specifically, the gray levels of H_a and H_b both range over [0, 255], representing brightness from dark to light, corresponding to colors from black to white in the image; that is, white is 255 and black is 0. A grayscale histogram counts, over all pixels of a digital image, the frequency of occurrence of each gray value. It is a function of gray level: it gives the number of pixels in the image having each gray level and reflects how frequently each gray level occurs in the image.
Step 2: calculate the weight of each gray level in H_a (denoted ε_i, with i ranging from 0 to 255).
Specifically, for each gray level, the ratio of the number of pixels with that gray level to the total number of pixels is taken as the weight of that gray level. For example, the weight ε_i of each gray level in H_a can be calculated by the following formula:
ε_i = H_a(i) / Σ_{j=0}^{255} H_a(j)
where H_a(i) denotes the number of pixels in I_a having gray value i, and Σ_{j=0}^{255} H_a(j) denotes the total number of pixels of all gray values in I_a.
Further, this embodiment evaluates the similarity between I_a and I_b by the similarity of their grayscale histograms. In this embodiment I_a is taken as the reference image: the more of its gray distribution falls in a region, the more the image's main information is concentrated in that region. This embodiment uses this information to determine the weight of each gray range, ensuring the stability of the video stitching method.
Step 3: evaluate the similarity between I_a and I_b, where the similarity is denoted S.
Specifically, the similarity between I_a and I_b can be calculated by the following formula:
Figure PCTCN2019119616-appb-000004
The above is the normalized similarity calculation formula, with S ∈ [0, 1]; H_b(i) in the formula denotes the number of pixels in I_b having gray value i, and a larger S indicates a higher degree of similarity.
Further, the calculated similarity value S can be compared with the preset similarity threshold (for example 0.87); if S is not less than the preset similarity threshold, it is determined that I_a and I_b satisfy the preset similarity condition.
In this implementation, evaluating the similarity of the adjacent frames ensures that running the stitching algorithm is meaningful and effectively filters out stitching scenarios that do not meet the usage conditions.
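As a concrete illustration of steps 1 to 3, the Python sketch below builds the histograms and the weights ε_i exactly as described; since the image of the S formula is not reproduced in this text, the per-gray-level combination rule used here (a weighted bin-overlap measure) is an assumption of the sketch, chosen only so that S ∈ [0, 1] with larger values meaning more similar.

```python
def gray_histogram(img):
    """256-bin grayscale histogram of an image given as rows of 0-255 ints."""
    hist = [0] * 256
    for row in img:
        for p in row:
            hist[p] += 1
    return hist

def similarity(img_a, img_b):
    """Weighted histogram similarity S in [0, 1]; larger means more alike."""
    ha, hb = gray_histogram(img_a), gray_histogram(img_b)
    total = sum(ha)   # total number of pixels of the reference image I_a
    s = 0.0
    for i in range(256):
        if ha[i] == 0 and hb[i] == 0:
            continue   # gray levels absent from both images contribute nothing
        eps_i = ha[i] / total   # weight of gray level i, as in step 2
        s += eps_i * (1 - abs(ha[i] - hb[i]) / max(ha[i], hb[i]))
    return s
```

Comparing an image with itself gives S = 1; the result would then be checked against the preset similarity threshold (for example 0.87).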
An embodiment of this application provides another possible implementation, in which step S120 (determining the motion vector of the first image relative to the second image) specifically includes:
performing horizontal deformation and vertical deformation on the first image and the second image respectively, based on a preset deformation ratio;
determining the first movement distance of the first image in the horizontal direction according to the horizontal movement distance of the horizontally deformed first image relative to the horizontally deformed second image;
determining the second movement distance of the first image in the vertical direction according to the vertical movement distance of the longitudinally deformed first image relative to the longitudinally deformed second image; and
determining the motion vector according to the first movement distance, the second movement distance, and the preset deformation ratio.
Specifically, this embodiment assumes that the second image is fixed and only the first image moves; equivalently, the second image results from moving the first image, so the motion vector of the first image must be calculated.
Further, before calculating the motion vector of the first image, the first image and the second image must each be deformed horizontally and vertically according to the preset deformation ratio. The preset deformation ratio includes a horizontal deformation ratio and a vertical deformation ratio, which may be the same or different.
In the embodiments of this application, deformation specifically means stretching or shrinking an image horizontally or vertically by a certain ratio. The specific value of the preset deformation ratio directly affects the estimation accuracy of the motion vector. In practice, if the horizontal deformation ratio is 1, the horizontal width keeps its original value; if the vertical height is deformed to a height of 10, the vertical deformation ratio is 10 divided by the original height. The original width and original height of an image are properties of the image itself; given a frame, its original width and original height are known.
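As a minimal illustration of such a ratio-based deformation (the text fixes only the ratios, not a resampling method, so the nearest-neighbour sampling and the function name below are assumptions of this sketch):

```python
def deform(img, ratio_x, ratio_y):
    """Stretch or shrink img (rows of 0-255 ints) by ratio_x horizontally
    and ratio_y vertically, using nearest-neighbour sampling."""
    h, w = len(img), len(img[0])
    new_h = max(1, round(h * ratio_y))   # deformed height = original height * ratio_y
    new_w = max(1, round(w * ratio_x))   # deformed width  = original width  * ratio_x
    return [[img[min(h - 1, int(i / ratio_y))][min(w - 1, int(j / ratio_x))]
             for j in range(new_w)]
            for i in range(new_h)]
```

With ratio_x = 1 the width stays unchanged, and a deformed height of 10 corresponds to ratio_y = 10 / original height, as in the text.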
Further, the first image and the second image can each be horizontally deformed according to the horizontal deformation ratio, obtaining the corresponding first horizontally deformed image and second horizontally deformed image. Since the first image and the second image come from videos of the same background shot by the same terminal device, the original width of the first image equals that of the second image and the original height of the first image equals that of the second image; hence the first and second horizontally deformed images have the same width and the same height.
Further, to facilitate the subsequent calculation of the movement distance of the first horizontally deformed image in the horizontal direction, the obtained first and second horizontally deformed images can be set parallel in the horizontal direction with both ends aligned, as shown in part (1) of FIG. 3a; this is the initial position of the first and second horizontally deformed images.
Further, the first image and the second image can each be longitudinally deformed according to the vertical deformation ratio, obtaining the corresponding first longitudinally deformed image and second longitudinally deformed image. Since the first image and the second image come from videos of the same background shot by the same terminal device, their original widths are the same and their original heights are the same; hence the first and second longitudinally deformed images have the same width and the same height.
Further, to facilitate the subsequent calculation of the movement distance of the first longitudinally deformed image relative to the second longitudinally deformed image in the vertical direction, the obtained first and second longitudinally deformed images can be set parallel in the vertical direction with both ends aligned, as shown in part (1) of FIG. 3a.
进一步地,在根据横向形变后的第一图像相对于横向形变后的第二图像的水平移动距离,确定第一图像在水平方向上的第一运动距离的过程中,可以将第一横向形变图像在水平方向上依次移动,并在每次移动后,确定该次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应的两个子图像,计算该两个子图像之间的第一差异值,并确定该次移动后第一横向形变图像产生移动的像素点总数,即第一横向形变图像相对于初始位置移动的像素点总数。
若总共移动X个像素点,X为正整数,上述的X个像素点为在水平方向上移动后的第一横向形变图像与第二横向形变图像之间,在垂直方向上不存在相对应部分的情况下,第一横向形变图像总共移动的像素点数。
上述在水平方向上的移动可以为水平左移,例如图3a中的(2)所示,也可以为水平右移,本申请实施例不对其做限制。下面以水平左移为例, 对其进行介绍,如下所示:
在一种情况下,若将第一横向形变图像在水平方向上依次左移,且每次均移动m个像素点,1≤m≤X,即每次左移m个像素点(例如第一次移动1个像素点,第二次移动是在第一次移动的基础上再移动1个像素点,第三次移动是在第二次移动的基础上再移动1个像素点等等),所有移动次数的总和为X个像素点,则每移动一次第一横向形变图像,就计算一次差异值,即计算一次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应的两个子图像之间的第一差异值。
如附图3b所示,第一横向形变图像与第二横向形变图像在水平方向上平行且两端对齐后,将第一横向形变图像向左移动m个像素点,得到虚线所示的移动后的第一横向形变图像,然后将移动后的第一横向形变图像和第二横向形变图像在垂直方向上对齐后,得到相互对应(即能够相互重叠)的两个子图像。
假如总共移动了L次,则总共计算得到L个差异值,其中,每次计算得到的差异值相同或不同。另外,本段涉及到的差异值均指第一差异值。
在另一种情况下,若将第一横向形变图像在水平方向上依次左移,且每次移动的像素点数不固定(例如第一次移动1个像素点,第二次移动是在第一次移动的基础上再移动2个像素点,第三次移动是在第二次移动的基础上再移动4个像素点等等),即每次移动的像素点数是动态改变的,所有移动次数的总和为X个像素点,则每移动一次第一横向形变图像,就计算一次差异值,即计算一次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应的两个子图像之间的差异值。假如总共移动了Q次,则总共计算得到Q个差异值,其中,每次计算得到的差异值相同或不同。另外,本段涉及到的差异值均指第一差异值。
进一步地，在计算得到每次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应部分的多个第一差异值后，确定各个第一差异值中的最小值，并将该最小值对应的第一差异值记作第一目标差异值，即从多个差异值中确定取值最小的差异值，并将该取值最小的差异值确定为第一目标差异值。
进一步地，在确定出第一目标差异值之后，确定该第一目标差异值所对应的移动的像素点总数，并将该像素点总数确定为第一图像在水平方向上的第一运动距离。假如第一次移动1个像素点，第二次移动是在第一次移动的基础上再移动2个像素点，第三次移动是在第二次移动的基础上再移动4个像素点等等，总共移动了X个像素点，且确定出的第一目标差异值为第三次移动后的第一横向形变图像与第二横向形变图像之间在垂直方向上的相对应部分的差异值，则该第一目标差异值所对应的移动的像素点总数为7，即前三次移动的像素点总数，从而得到第一图像在水平方向上的第一运动距离为7个像素点。
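上述逐次移动、逐次计算差异值并取最小值的搜索过程，可以用如下 Python 代码示意（每次固定移动1个像素，差异值采用归一化的绝对差之和，均为本示例的简化假设）：

```python
import numpy as np

def first_motion_distance(a, b):
    """对第一横向形变图像 a 依次左移，与第二横向形变图像 b 的重叠子图像
    比较，返回差异值最小时累计移动的像素点总数（示意实现）。"""
    h, w = a.shape
    best_shift, best_diff = 0, float("inf")
    for m in range(w):  # 一直移动到不存在相对应部分为止
        # a 左移 m 个像素后，a 的第 m..w-1 列与 b 的第 0..w-m-1 列相对应
        sub_a = a[:, m:].astype(float)
        sub_b = b[:, : w - m].astype(float)
        diff = np.abs(sub_a - sub_b).sum() / sub_a.size  # 求差值总和并归一化
        if diff < best_diff:
            best_diff, best_shift = diff, m
    return best_shift

b = np.tile(np.arange(16, dtype=float), (4, 1))  # 第二横向形变图像（背景固定）
a = np.roll(b, 3, axis=1)                        # a 相当于 b 向右平移 3 个像素
print(first_motion_distance(a, b))  # 3
```

垂直方向上的第二运动距离可对转置后的图像套用同样的搜索过程得到。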
进一步地,在根据纵向形变后的第一图像相对于纵向形变后的第二图像的垂直移动距离,确定第一图像在垂直方向上的第二运动距离的过程中,可以将第一纵向形变图像在垂直方向上依次移动,并在每次移动后,确定该次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应的两个子图像,计算该两个子图像之间的第二差异值,并确定该次移动时所述第一纵向形变图像产生移动的像素点总数。
若总共移动Y个像素点,Y为正整数,上述的Y个像素点为在垂直方向上移动后的第一纵向形变图像与第二纵向形变图像之间,在水平方向上不存在相对应部分的情况下,第一纵向形变图像总共移动的像素点数。
上述在垂直方向上的移动可以为垂直向下,例如图3a中的(3)所示,也可以为垂直向上,本申请实施例不对其做限制。下面以垂直向上为例,对其进行介绍,如下所示:
在一种情况下，若将第一纵向形变图像在垂直方向上依次上移，且每次均移动n个像素点，1≤n≤Y，即每次上移n个像素点（例如第一次移动1个像素点，第二次移动是在第一次移动的基础上再移动1个像素点，第三次移动是在第二次移动的基础上再移动1个像素点等等），所有移动次数的总和为Y个像素点，则每移动一次第一纵向形变图像，就计算一次差异值，即计算一次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应的两个子图像之间的第二差异值。
如附图3c所示，第一纵向形变图像与第二纵向形变图像在垂直方向上平行且两端对齐后，将第一纵向形变图像向上移动n个像素点，得到虚线所示的移动后的第一纵向形变图像，然后将移动后的第一纵向形变图像和第二纵向形变图像在水平方向上对齐后，得到相互对应（即能够相互重叠）的两个子图像。
假如总共移动了R次,则总共计算得到R个差异值,其中,每次计算得到的差异值相同或不同。另外,本段涉及到的差异值均指第二差异值。
在另一种情况下,若将第一纵向形变图像相对于第二纵向形变图像在垂直方向上依次上移,且每次移动的像素点数不固定(例如第一次移动1个像素点,第二次移动是在第一次移动的基础上再移动2个像素点,第三次移动是在第二次移动的基础上再移动4个像素点等等),即每次移动的像素点数是动态改变的,所有移动次数的总和为Y个像素点,则每移动一次第一纵向形变图像,就计算一次差异值,即计算一次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应部分的差异值。假如总共移动了P次,则总共计算得到P个差异值,其中,每次计算得到的差异值相同或不同。另外,本段涉及到的差异值均指第二差异值。
进一步地,在计算得到每次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应部分的多个第二差异值后,确定各个第二差异值中的最小值,并将该最小值对应的第二差异值记作第二目标差异值,即从多个差异值中确定取值最小的差异值,并将该取值最小的差异值确定为第二目标差异值。
进一步地，在确定出第二目标差异值之后，确定该第二目标差异值所对应的移动的像素点总数，并将该像素点总数确定为第一图像在垂直方向上的第二运动距离。假如第一次移动1个像素点，第二次移动是在第一次移动的基础上再移动2个像素点，第三次移动是在第二次移动的基础上再移动4个像素点等等，总共移动了Y个像素点，且确定出的第二目标差异值为第三次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应部分的差异值，则该第二目标差异值所对应的移动的像素点总数为7，即前三次移动的像素点总数，从而得到第一图像在垂直方向上的第二运动距离为7个像素点。
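垂直方向的搜索与水平方向完全对称，下面给出一个独立的示意实现（同样以每次固定移动1个像素、归一化绝对差作为简化假设）：

```python
import numpy as np

def second_motion_distance(a, b):
    """对第一纵向形变图像 a 依次上移，与第二纵向形变图像 b 的重叠子图像
    比较，返回差异值最小时累计移动的像素点总数（示意实现）。"""
    h, w = a.shape
    best_shift, best_diff = 0, float("inf")
    for n in range(h):
        # a 上移 n 个像素后，a 的第 n..h-1 行与 b 的第 0..h-n-1 行相对应
        diff = np.abs(a[n:, :] - b[: h - n, :]).mean()
        if diff < best_diff:
            best_diff, best_shift = diff, n
    return best_shift

b = np.tile(np.arange(12, dtype=float).reshape(12, 1), (1, 3))  # 第二纵向形变图像
a = np.roll(b, 5, axis=0)  # a 相当于 b 向下平移 5 个像素
print(second_motion_distance(a, b))  # 5
```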
进一步地，在得到上述的第一运动距离与第二运动距离后，根据第一运动距离、第二运动距离以及预设形变比率，确定运动向量。其中，在确定运动向量的过程中，由于第一运动距离与第二运动距离都是根据形变后的第一图像计算得到的，所以在计算运动向量时，需要对第一运动距离与第二运动距离分别根据相应的形变比率进行反方向的调整。
其中,第一运动距离的反方向调整为:计算第一运动距离与横向形变比率的第一比值,第二运动距离的反方向调整为:计算第二运动距离与纵向形变比率的第二比值。其中,第一比值即为根据横向形变比率反方向调整后的第一运动距离,第二比值即为根据纵向形变比率反方向调整后的第二运动距离。
进一步地，在得到第一比值与第二比值后，可以根据第一比值与第二比值，确定运动向量。例如运动向量为第一比值的方向向量与第二比值的方向向量的和。
下面给出本实现方式中计算运动向量的一种可行方式:
假如第一图像为图像A，第二图像为图像B，记第一图像与第二图像的横向形变比率为 r_w，纵向形变比率为 r_h，第一横向形变图像为 A_w，第二横向形变图像为 B_w，第一纵向形变图像为 A_h，第二纵向形变图像为 B_h。
则第一横向形变图像 A_w 在水平方向上左移m个点后，A_w 与第二横向形变图像 B_w 在垂直方向上相对应的两个子图像之间的第一差异值 diff_m，可以根据如下公式计算得到：

$$\mathrm{diff}_m=\mathrm{Norm}\left(A_w[m:width]-B_w[0:width-m]\right)\tag{3}$$

其中，上式中的 width 表示第一图像与第二图像横向形变后的宽度，A_w[m:width] 表示第一横向形变图像在水平方向上左移m个点后，与第二横向形变图像在垂直方向上相对应的子图像部分，B_w[0:width-m] 表示第二横向形变图像与在水平方向上左移m个点的第一横向形变图像在垂直方向上相对应的子图像部分，两个子图像的大小是相同的，函数 Norm(·) 表示计算两个子图像相应像素之间差值的总和并进行归一化。
类似地，第一纵向形变图像 A_h 在垂直方向上上移n个点后，A_h 与第二纵向形变图像 B_h 之间在水平方向上相对应的两个子图像之间的第二差异值 diff_n，可以根据如下公式计算得到：

$$\mathrm{diff}_n=\mathrm{Norm}\left(A_h[n:height]-B_h[0:height-n]\right)\tag{4}$$

其中，上式中的 height 表示第一图像与第二图像纵向形变后的高度，A_h[n:height] 表示第一纵向形变图像在垂直方向上移n个点后，与第二纵向形变图像在水平方向上相对应的子图像部分，B_h[0:height-n] 表示第二纵向形变图像与在垂直方向上移n个点的第一纵向形变图像在水平方向上相对应的子图像部分，两个子图像的大小是相同的，函数 Norm(·) 表示计算两个子图像相应像素之间差值的总和并进行归一化。
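式(3)与(4)中用于计算两个子图像相应像素之间差值总和并归一化的函数，正文未给出具体形式；下面按"绝对差之和除以像素个数"这一假设给出一个示意实现：

```python
import numpy as np

def norm_diff(sub1, sub2):
    """对两个等大小子图像逐像素求绝对差值之和，并按像素个数归一化
    （归一化方式为本示例的假设）。"""
    sub1 = np.asarray(sub1, dtype=float)
    sub2 = np.asarray(sub2, dtype=float)
    assert sub1.shape == sub2.shape, "两个子图像的大小必须相同"
    return float(np.abs(sub1 - sub2).sum() / sub1.size)

print(norm_diff([[0, 2], [4, 6]], [[1, 1], [1, 1]]))  # (1+1+3+5)/4 = 2.5
```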
进一步地，在具体情况中，还可以将得到的多个第一差异值分别表示在直角坐标系中，并将多个第一差异值依次连线，构成一条曲线，该曲线中的最小值即为第一目标差异值。类似地，也可以将得到的多个第二差异值分别表示在直角坐标系中，并将多个第二差异值依次连线，构成一条曲线，该曲线中的最小值即为第二目标差异值。如图4所示。
进一步地,第一图像的运动向量可以通过下式计算得到:
$$\vec{V}=\frac{d_1}{r_w}\,\vec{e}_x+\frac{d_2}{r_h}\,\vec{e}_y$$

其中，上式中的 $\vec{V}$ 表示第一图像的运动向量，$d_1/r_w$ 表示第一运动距离 $d_1$ 与横向形变比率 $r_w$ 的第一比值，$d_2/r_h$ 表示第二运动距离 $d_2$ 与纵向形变比率 $r_h$ 的第二比值，$\vec{e}_x$ 与 $\vec{e}_y$ 分别为水平方向与垂直方向上的单位向量。$\vec{V}$ 向量所指的方向即为第一图像的运动方向，其角度记为θ，其长度为运动距离，如图5所示。
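由第一比值、第二比值合成运动向量，并由此得到方向角θ与运动距离，可以这样示意（函数名与返回形式均为本示例的假设）：

```python
import math

def motion_vector(d1, d2, ratio_w, ratio_h):
    """根据第一运动距离 d1、第二运动距离 d2，按横向/纵向形变比率做反方向
    调整后合成运动向量，并给出方向角与运动距离（示意实现）。"""
    vx = d1 / ratio_w  # 第一比值：按横向形变比率反方向调整
    vy = d2 / ratio_h  # 第二比值：按纵向形变比率反方向调整
    theta = math.degrees(math.atan2(vy, vx))  # 运动方向的角度θ
    length = math.hypot(vx, vy)               # 运动距离（向量长度）
    return (vx, vy), theta, length

(vx, vy), theta, dist = motion_vector(6.0, 8.0, 2.0, 2.0)
print(vx, vy, dist)  # 3.0 4.0 5.0
```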
另外，在通过上式(3)与(4)估算第一差异值与第二差异值时，计算的是两个形变后的图像之间的差值之和，可以表征两个子图像所对应向量的相对距离，这种根据相对距离计算差异值的方式，能够提高运算速度。
本申请实施例提供了另一种可能的实现方式,其中,在根据运动向量,确定第一图像与第二图像之间的至少一个补偿帧时,可以在预定的补偿时长的任一补偿时间区间上,根据运动向量,确定第一图像与第二图像之间的补偿帧。
具体地,在获得第一图像的运动向量(包括运动方向与运动距离)后,便可以对第一图像进行演化补偿。其中,具体的补偿策略为:首先确定在任一补偿时间区间上的第一预设参数;接着计算第一预设参数与第二图像的向量之间的第一乘积;接着计算第二预设参数与第三图像的向量之间的第二乘积,该第二预设参数为预设数值与第一预设参数间的差值,其中,第三图像为根据运动向量运动后的第一图像与第二图像之间的重叠部分;接着根据第一乘积与第二乘积,确定该补偿时间区间上的图像帧,并渲染图像帧,得到补偿帧。
其中,通过上述方式可以在预定的补偿时长的多个补偿时间区间上,得到对应的多个补偿帧,从而基于该多个补偿帧拼接第一图像与第二图像,以拼接第一待拼接视频与第二待拼接视频。
进一步地,给出本实现方式中计算确定补偿帧的一种可行方式:
$$I_{\mathrm{interpolation}}=\alpha\cdot\vec{B}+(\mu-\alpha)\cdot\vec{C},\qquad \alpha\propto T$$

其中，上式中的α为上述的第一预设参数，T为某个补偿时间区间，α正比于时间参数T，$\vec{C}$ 表示根据运动向量运动后的第一图像与第二图像之间的重叠部分，$\vec{B}$ 表示第二图像，μ为预设数值，可以设置为1，$I_{\mathrm{interpolation}}$ 表示计算得到的补偿帧。
换言之，T每取一个数值，α就对应取一个值，从而可以得到一个相应的图像补偿帧 I_interpolation。其中，计算出的补偿帧有多个，而且补偿帧的个数与时间参数T的取值密切相关。
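上述由α、μ、重叠部分与第二图像生成补偿帧序列的过程，可以用如下代码示意（α随补偿时间区间T线性递增、μ取1，均为本示例的假设）：

```python
import numpy as np

def compensation_frames(overlap, second, steps=3, mu=1.0):
    """按 I = α·second + (μ−α)·overlap 生成补偿帧序列（示意实现）。
    overlap 为按运动向量运动后的第一图像与第二图像的重叠部分，
    second 为第二图像，steps 为补偿时间区间的个数。"""
    frames = []
    for t in range(1, steps + 1):
        alpha = mu * t / (steps + 1)  # α 正比于时间参数 T（此处假设为线性）
        frames.append(alpha * second + (mu - alpha) * overlap)
    return frames

overlap = np.zeros((2, 2))
second = np.full((2, 2), 8.0)
frames = compensation_frames(overlap, second, steps=3)
print([float(f[0, 0]) for f in frames])  # [2.0, 4.0, 6.0]
```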
进一步地，在得到多个补偿帧后，可以利用终端设备的GPU(Graphics Processing Unit,图形处理器)对得到的至少一个补偿帧进行渲染，将第一图像过渡为第二图像，即实现了第二图像与运动后的第一图像的融合，如图6所示。另外，本步骤的方法无需全局的信息，因此在算法上是可并行的，在移动设备端均可与渲染结合，实现实时的渲染以及编码。
本申请实施例提供的方法,可以利用移动设备端GPU渲染合成的优势,对两段视频之间的抖动进行平滑补偿,使两段视频之间可以平滑过渡,不同于传统的模板类转场特效,本实施例的方法有着更强的适应性,同时使用GPU对算法进行加速,其抖动的补偿是实时的,对用户造成的影响较小,能够最大限度的保障用户的使用体验。
另外，本申请实施例提供的方法针对移动设备端设计，从快速性以及补偿准确性出发，并利用了移动设备端的GPU对算法进行加速。(1)在性能上，选择使用可并行的算法，加快计算速度，并结合GPU渲染，实现实时拼接的效果。(2)在准确性上，针对非线性的视频拍摄拼接进行优化定制，减少多段视频拼接后的图像抖动，提供更好的视频拼接以及发布体验。(3)在适用性上，本方法覆盖Android(安卓)移动设备端以及iOS(苹果)移动设备端，能够尽量覆盖多数用户。
本申请再一实施例提供了一种视频拼接装置,其结构示意图如图7所示,该装置70可以包括检测模块71、确定模块72与拼接模块73,其中,
检测模块71用于检测第一图像与第二图像之间的相似度,第一图像为第一待拼接视频的最后一帧图像,第二图像为第二待拼接视频的首帧图像;
确定模块72用于当相似度满足预设条件时,确定第一图像相对于第二图像的运动向量;
拼接模块73用于根据运动向量,确定第一图像与第二图像之间的至少一个补偿帧,并基于至少一个补偿帧拼接第一图像与第二图像,以拼接第一待拼接视频与第二待拼接视频。
具体地，图8为本申请再一实施例提供的一种视频拼接装置的详细结构示意图，该装置80可以包括检测模块81、确定模块82与拼接模块83。其中，图8中的检测模块81所实现的功能与图7中的检测模块71相同，图8中的确定模块82所实现的功能与图7中的确定模块72相同，图8中的拼接模块83所实现的功能与图7中的拼接模块73相同，在此不再赘述。下面对图8所示的视频拼接装置进行详细介绍：
其中,相似度满足预设条件,包括:相似度不小于预设相似度阈值。
在一种实现方式中，检测模块81具体用于：确定第一图像的第一灰度直方图与第二图像的第二灰度直方图；确定所述第一灰度直方图各个灰度的权重；以及根据所确定的权重、第一灰度直方图中各个灰度对应的像素个数及第二灰度直方图中各个灰度对应的像素个数，确定第一图像与第二图像之间的相似度。
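上述基于灰度直方图的相似度检测，其最终加权计算式正文未给出；下面按"权重 = 该灰度像素个数/像素总数"（见权利要求3），并以逐灰度计数的接近程度作为相似程度这一假设，给出一个示意实现：

```python
import numpy as np

def histogram_similarity(img1, img2, bins=256):
    """以第一灰度直方图各灰度的像素占比为权重，
    对两直方图逐灰度的接近程度加权求和（接近程度的定义为本示例的假设）。"""
    h1, _ = np.histogram(img1, bins=bins, range=(0, bins))
    h2, _ = np.histogram(img2, bins=bins, range=(0, bins))
    weights = h1 / max(img1.size, 1)        # 权重 = 该灰度像素个数 / 像素总数
    denom = np.maximum(np.maximum(h1, h2), 1)
    closeness = np.minimum(h1, h2) / denom  # 逐灰度的接近程度，取值[0,1]
    return float((weights * closeness).sum())

a = np.full((4, 4), 100, dtype=np.uint8)
print(histogram_similarity(a, a))  # 1.0
```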
在另一种实现方式中,确定模块82包括形变子模块721、第一确定子模块722、第二确定子模块723与第三确定子模块724;
形变子模块721用于基于预设形变比率,对第一图像与第二图像分别进行横向形变与纵向形变;
第一确定子模块722用于根据横向形变后的第一图像相对于横向形变后的第二图像的水平移动距离,确定第一图像在水平方向上的第一运动距离;
第二确定子模块723用于根据纵向形变后的第一图像相对于纵向形变后的第二图像的垂直移动距离,确定第一图像在垂直方向上的第二运动距离;
第三确定子模块724用于根据第一运动距离、第二运动距离以及预设形变比率,确定运动向量。
在另一种实现方式中,预设形变比率包括横向形变比率与纵向形变比率,形变子模块721包括第一形变单元7211与第二形变单元7212;其中,
第一形变单元7211用于根据横向形变比率对第一图像与第二图像分别进行横向形变,得到相应的第一横向形变图像与第二横向形变图像;
第二形变单元7212用于根据纵向形变比率对第一图像与第二图像分别进行纵向形变,得到相应的第一纵向形变图像与第二纵向形变图像。
在另一种实现方式中，第一形变单元7211具体用于，将所述第一横向形变图像与所述第二横向形变图像设置为在水平方向上平行且两端对齐；将第一横向形变图像在水平方向上依次移动，并在每次移动后，确定该次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应的两个子图像，计算该两个子图像之间的第一差异值，并确定该次移动后所述第一横向形变图像产生移动的像素点总数；确定各个第一差异值中的最小值，并将该最小值对应的第一差异值记作第一目标差异值；将与所述第一目标差异值相对应的移动的像素点总数，确定为所述第一图像在水平方向上的第一运动距离。
在另一种实现方式中,第二形变单元7212具体用于,将所述第一纵向形变图像与所述第二纵向形变图像设置为在垂直方向上平行且两端对齐;将第一纵向形变图像在垂直方向上依次移动,并在每次移动后,确定该次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应的两个子图像,计算该两个子图像之间的第二差异值,并确定该次移动时所述第一纵向形变图像产生移动的像素点总数;确定各个第二差异值中的最小值,并将该最小值对应的第二差异值记作第二目标差异值;将与所述第二目标差异值相对应的移动的像素点总数,确定为所述第一图像在垂直方向上的第二运动距离。
在另一种实现方式中,第三确定子模块724包括第一计算单元7241、第二计算单元7242与确定单元7243;
第一计算单元7241用于计算第一运动距离与横向形变比率的第一比值;
第二计算单元7242用于计算第二运动距离与纵向形变比率的第二比值;
确定单元7243用于根据第一比值与第二比值,确定运动向量。
在另一种实现方式中,拼接模块83包括第四确定子模块831、第一计算子模块832、第二计算子模块833及处理子模块834;
第四确定子模块831用于确定在任一补偿时间区间上的第一预设参数;
第一计算子模块832用于计算第一预设参数与第二图像的向量之间的第一乘积;
第二计算子模块833用于计算第二预设参数与第三图像的向量之间的第二乘积，第三图像为根据运动向量运动后的第一图像与第二图像之间的重叠部分；
处理子模块834用于根据第一乘积与第二乘积,确定该补偿时间区间上的图像帧,并渲染图像帧,得到补偿帧。
在另一种实现方式中，拼接模块83用于，对所述至少一个补偿帧进行渲染，将所述第一图像过渡为所述第二图像。
本申请实施例提供的装置，与传统技术相比，通过确定满足预设相似性条件的第一图像相对于第二图像的运动向量，以及根据运动向量，确定第一图像与第二图像之间的至少一个补偿帧，并基于至少一个补偿帧拼接第一图像与第二图像，提供了一种能够在移动设备端对非线性的视频拍摄片段进行拼接的方法，不仅能够在移动设备端实时、高效地对多段视频进行拼接，还能高质量地对衔接处发生的抖动进行平滑补偿，使得多段视频之间可以平滑过渡，确保用户上传的视频更加平滑，有效减少多段视频拼接后的图像抖动或图像跳变等情况的发生，从而极大提升用户的视频拼接、发布及观看体验，而且能够覆盖安卓、苹果等操作系统的终端设备，无需终端设备中特定硬件的支撑，避免了对硬件厂商的依赖，解决了传统视频插帧方法对移动设备及使用场景的限制。
本申请另一实施例提供了一种电子设备,如图9所示,图9所示的电子设备900包括:处理器901和存储器903。其中,处理器901和存储器903相连,如通过总线902相连。进一步地,电子设备900还可以包括收发器904。需要说明的是,实际应用中收发器904不限于一个,该电子设备900的结构并不构成对本申请实施例的限定。
其中,处理器901应用于本申请实施例中,用于实现图7或图8所示的检测模块、确定模块及拼接模块的功能。
处理器901可以是CPU，通用处理器，DSP，ASIC，FPGA或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框，模块和电路。处理器901也可以是实现计算功能的组合，例如包含一个或多个微处理器的组合，DSP和微处理器的组合等。
总线902可包括一通路,在上述组件之间传送信息。总线902可以是PCI总线或EISA总线等。总线902可以分为地址总线、数据总线、控制总线等。为便于表示,图9中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
存储器903可以是ROM或可存储静态信息和指令的其他类型的静态存储设备,RAM或者可存储信息和指令的其他类型的动态存储设备,也可以是EEPROM、CD-ROM或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
存储器903用于存储执行本申请方案的应用程序代码,并由处理器901来控制执行。处理器901用于执行存储器903中存储的应用程序代码,以实现图7或图8所示实施例提供的视频拼接装置的动作。
本申请实施例提供的电子设备，包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序，处理器执行程序时，与传统技术相比，可实现：通过确定满足预设条件的第一图像相对于第二图像的运动向量，以及根据运动向量，确定第一图像与第二图像之间的至少一个补偿帧，并基于至少一个补偿帧拼接第一图像与第二图像，提供了一种能够在移动设备端对非线性的视频拍摄片段进行拼接的方法，不仅可以在移动设备端实时、高效地对多段视频进行拼接，还能高质量地对衔接处发生的抖动进行平滑补偿，使得多段视频之间可以平滑过渡，确保用户上传的视频更加平滑，有效减少多段视频拼接后的图像抖动或图像跳变等情况的发生，极大提升用户的视频拼接、发布及观看体验，而且能够覆盖安卓、苹果等操作系统的终端设备，无需终端设备中特定硬件的支撑，避免了对硬件厂商的依赖，解决了传统视频插帧方法对移动设备及使用场景的限制。
本申请实施例提供了一种计算机可读存储介质，该计算机可读存储介质上存储有计算机程序，该程序被处理器执行时实现实施例一所示的方法。与传统技术相比，通过确定满足预设条件的第一图像相对于第二图像的运动向量，以及根据运动向量，确定第一图像与第二图像之间的至少一个补偿帧，并基于至少一个补偿帧拼接第一图像与第二图像，提供了一种能够在移动设备端对非线性的视频拍摄片段进行拼接的方法，不仅可以在移动设备端实时、高效地对多段视频进行拼接，还能高质量地对衔接处发生的抖动进行平滑补偿，使得多段视频之间可以平滑过渡，确保用户上传的视频更加平滑，有效减少多段视频拼接后的图像抖动或图像跳变等情况的发生，极大提升用户的视频拼接、发布及观看体验，而且能够覆盖安卓、苹果等操作系统的终端设备，无需终端设备中特定硬件的支撑，避免了对硬件厂商的依赖，解决了传统视频插帧方法对移动设备及使用场景的限制。
本申请实施例提供的计算机可读存储介质适用于上述方法的任一实施例。在此不再赘述。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
以上所述仅是本申请的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (19)

  1. 一种视频拼接方法,由电子设备执行,包括:
    检测第一图像与第二图像之间的相似度,所述第一图像为第一待拼接视频的最后一帧图像,所述第二图像为第二待拼接视频的首帧图像;
    如果所述相似度满足预设条件,则确定所述第一图像相对于所述第二图像的运动向量;
    根据所述运动向量,确定所述第一图像与所述第二图像之间的至少一个补偿帧,并基于所述至少一个补偿帧拼接所述第一图像与所述第二图像,以拼接所述第一待拼接视频与所述第二待拼接视频。
  2. 根据权利要求1所述的方法,其中,所述检测第一图像与第二图像之间的相似度,包括:
    确定第一图像的第一灰度直方图与第二图像的第二灰度直方图;
    确定所述第一灰度直方图各个灰度的权重;
    根据所确定的权重、所述第一灰度直方图中各个灰度对应的像素个数及所述第二灰度直方图中各个灰度对应的像素个数,确定所述第一图像与所述第二图像之间的相似度。
  3. 根据权利要求2所述的方法,其中,所述确定所述第一灰度直方图各个灰度的权重,包括:
    针对每个灰度,将该灰度对应的像素的个数与像素总数的比值作为该灰度的权重。
  4. 根据权利要求1所述的方法,其中,所述确定所述第一图像相对于所述第二图像的运动向量,包括:
    基于预设形变比率,对所述第一图像与所述第二图像分别进行横向形变与纵向形变;
    根据横向形变后的第一图像相对于横向形变后的第二图像的水平移动距离,确定所述第一图像在水平方向上的第一运动距离;
    根据纵向形变后的第一图像相对于纵向形变后的第二图像的垂直移动距离,确定所述第一图像在垂直方向上的第二运动距离;
    根据所述第一运动距离、所述第二运动距离以及所述预设形变比率，确定所述运动向量。
  5. 根据权利要求4所述的方法,其中,所述预设形变比率包括横向形变比率与纵向形变比率,所述基于预设形变比率,对所述第一图像与所述第二图像分别进行横向形变与纵向形变,包括:
    根据所述横向形变比率对所述第一图像与所述第二图像分别进行横向形变,得到相应的第一横向形变图像与第二横向形变图像;
    根据所述纵向形变比率对所述第一图像与所述第二图像分别进行纵向形变,得到相应的第一纵向形变图像与第二纵向形变图像。
  6. 根据权利要求5所述的方法,其中,所述根据横向形变后的第一图像相对于横向形变后的第二图像的水平移动距离,确定所述第一图像在水平方向上的第一运动距离,包括:
    将所述第一横向形变图像与所述第二横向形变图像设置为在水平方向上平行且两端对齐;
    将第一横向形变图像在水平方向上依次移动,并在每次移动后,确定该次移动后的第一横向形变图像与第二横向形变图像在垂直方向上相对应的两个子图像,计算该两个子图像之间的第一差异值,并确定该次移动后所述第一横向形变图像产生移动的像素点总数;
    确定各个第一差异值中的最小值,并将该最小值对应的第一差异值记作第一目标差异值;
    将与所述第一目标差异值相对应的移动的像素点总数,确定为所述第一图像在水平方向上的第一运动距离。
  7. 根据权利要求5所述的方法,其中,所述根据纵向形变后的第一图像相对于纵向形变后的第二图像的垂直移动距离,确定所述第一图像在垂直方向上的第二运动距离,包括:
    将所述第一纵向形变图像与所述第二纵向形变图像设置为在垂直方向上平行且两端对齐;
    将第一纵向形变图像在垂直方向上依次移动，并在每次移动后，确定该次移动后的第一纵向形变图像与第二纵向形变图像在水平方向上相对应的两个子图像，计算该两个子图像之间的第二差异值，并确定该次移动时所述第一纵向形变图像产生移动的像素点总数；
    确定各个第二差异值中的最小值,并将该最小值对应的第二差异值记作第二目标差异值;
    将与所述第二目标差异值相对应的移动的像素点总数,确定为所述第一图像在垂直方向上的第二运动距离。
  8. 根据权利要求4所述的方法,其中,所述预设形变比率包括横向形变比率与纵向形变比率,所述根据所述第一运动距离、所述第二运动距离以及所述预设形变比率,确定所述运动向量,包括:
    计算所述第一运动距离与所述横向形变比率的第一比值;
    计算所述第二运动距离与所述纵向形变比率的第二比值;
    根据所述第一比值与所述第二比值,确定所述运动向量。
  9. 根据权利要求1所述的方法,其中,所述根据所述运动向量,确定所述第一图像与所述第二图像之间的至少一个补偿帧,包括:
    确定在任一补偿时间区间上的第一预设参数;
    计算第一预设参数与所述第二图像的向量之间的第一乘积;
    计算第二预设参数与第三图像的向量之间的第二乘积,所述第三图像为根据所述运动向量运动后的第一图像与第二图像之间的重叠部分;
    根据所述第一乘积与所述第二乘积,确定该补偿时间区间上的图像帧,并渲染所述图像帧,得到所述补偿帧。
  10. 根据权利要求1所述的方法,其中,所述基于所述至少一个补偿帧拼接所述第一图像与所述第二图像,包括:
    对所述至少一个补偿帧进行渲染,将所述第一图像过渡为所述第二图像。
  11. 一种视频拼接装置,包括:
    检测模块,用于检测第一图像与第二图像之间的相似度,所述第一图像为第一待拼接视频的最后一帧图像,所述第二图像为第二待拼接视频的首帧图像;
    确定模块,用于当所述相似度满足预设条件时,确定所述第一图像相对于所述第二图像的运动向量;
    拼接模块,用于根据所述运动向量,确定所述第一图像与所述第二图像之间的至少一个补偿帧,并基于所述至少一个补偿帧拼接所述第一图像与所述第二图像,以拼接所述第一待拼接视频与所述第二待拼接视频。
  12. 根据权利要求11所述的装置,其中,所述检测模块用于,确定第一图像的第一灰度直方图与第二图像的第二灰度直方图;确定所述第一灰度直方图各个灰度的权重;根据所确定的权重、所述第一灰度直方图中各个灰度对应的像素个数及所述第二灰度直方图中各个灰度对应的像素个数,确定所述第一图像与所述第二图像之间的相似度。
  13. 根据权利要求11所述的装置,其中,所述确定模块包括:
    形变子模块,用于基于预设形变比率,对所述第一图像与所述第二图像分别进行横向形变与纵向形变;
    第一确定子模块,用于根据横向形变后的第一图像相对于横向形变后的第二图像的水平移动距离,确定所述第一图像在水平方向上的第一运动距离;
    第二确定子模块,用于根据纵向形变后的第一图像相对于纵向形变后的第二图像的垂直移动距离,确定所述第一图像在垂直方向上的第二运动距离;
    第三确定子模块,用于根据所述第一运动距离、所述第二运动距离以及所述预设形变比率,确定所述运动向量。
  14. 根据权利要求13所述的装置，其中，所述形变子模块包括：
    第一形变单元,用于根据所述横向形变比率对所述第一图像与所述第二图像分别进行横向形变,得到相应的第一横向形变图像与第二横向形变图像;
    第二形变单元,用于根据所述纵向形变比率对所述第一图像与所述第二图像分别进行纵向形变,得到相应的第一纵向形变图像与第二纵向形变图像。
  15. 根据权利要求13所述的装置,其中,所述第三确定子模块包括:
    第一计算单元,用于计算所述第一运动距离与所述横向形变比率的第一比值;
    第二计算单元,用于计算所述第二运动距离与所述纵向形变比率的第二比值;
    确定单元,用于根据所述第一比值与所述第二比值,确定所述运动向量。
  16. 根据权利要求11所述的装置,其中,所述拼接模块包括:
    第四确定子模块,用于确定在任一补偿时间区间上的第一预设参数;
    第一计算子模块,用于计算第一预设参数与所述第二图像的向量之间的第一乘积;
    第二计算子模块,用于计算第二预设参数与第三图像的向量之间的第二乘积,所述第三图像为根据所述运动向量运动后的第一图像与第二图像之间的重叠部分;
    处理子模块,用于根据所述第一乘积与所述第二乘积,确定该补偿时间区间上的图像帧,并渲染所述图像帧,得到所述补偿帧。
  17. 根据权利要求11所述的装置,其中,所述拼接模块用于,对所述至少一个补偿帧进行渲染,将所述第一图像过渡为所述第二图像。
  18. 一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现权利要求1-10任一项所述的视频拼接方法。
  19. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,该程序被处理器执行时实现权利要求1-10任一项所述的视频拼接方法。
PCT/CN2019/119616 2018-12-07 2019-11-20 视频拼接方法、装置、电子设备及计算机存储介质 WO2020114251A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19892144.7A EP3893513A4 (en) 2018-12-07 2019-11-20 METHOD AND APPARATUS FOR CONNECTING VIDEO, ELECTRONIC DEVICE, AND COMPUTER STORAGE MEDIA
US17/184,258 US11972580B2 (en) 2018-12-07 2021-02-24 Video stitching method and apparatus, electronic device, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811496469.3 2018-12-07
CN201811496469.3A CN111294644B (zh) 2018-12-07 2018-12-07 视频拼接方法、装置、电子设备及计算机可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/184,258 Continuation US11972580B2 (en) 2018-12-07 2021-02-24 Video stitching method and apparatus, electronic device, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020114251A1 true WO2020114251A1 (zh) 2020-06-11

Family

ID=70975171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/119616 WO2020114251A1 (zh) 2018-12-07 2019-11-20 视频拼接方法、装置、电子设备及计算机存储介质

Country Status (4)

Country Link
US (1) US11972580B2 (zh)
EP (1) EP3893513A4 (zh)
CN (1) CN111294644B (zh)
WO (1) WO2020114251A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970559B (zh) * 2020-07-09 2022-07-22 北京百度网讯科技有限公司 视频获取方法、装置、电子设备及存储介质
CN111970562A (zh) * 2020-08-17 2020-11-20 Oppo广东移动通信有限公司 视频处理方法、视频处理装置、存储介质与电子设备
CN112200739A (zh) * 2020-09-30 2021-01-08 北京大米科技有限公司 一种视频处理的方法、装置、可读存储介质和电子设备
CN114724055A (zh) * 2021-01-05 2022-07-08 华为技术有限公司 视频切换方法、装置、存储介质及设备
CN114979758B (zh) * 2021-02-26 2023-03-21 影石创新科技股份有限公司 视频拼接方法、装置、计算机设备和存储介质
WO2022193090A1 (zh) * 2021-03-15 2022-09-22 深圳市大疆创新科技有限公司 视频处理方法、电子设备及计算机可读存储介质
CN113269086A (zh) * 2021-05-24 2021-08-17 苏州睿东科技开发有限公司 一种vlog剪辑方法和剪辑系统
CN113254700B (zh) * 2021-06-03 2024-03-05 北京有竹居网络技术有限公司 交互视频编辑方法、装置、计算机设备及存储介质
CN113901267A (zh) * 2021-10-18 2022-01-07 深圳追一科技有限公司 动作视频的生成方法、装置、设备及介质
CN114040179B (zh) * 2021-10-20 2023-06-06 重庆紫光华山智安科技有限公司 一种图像的处理方法及装置
CN114125324B (zh) * 2021-11-08 2024-02-06 北京百度网讯科技有限公司 一种视频拼接方法、装置、电子设备及存储介质
CN114495855B (zh) * 2022-01-24 2023-08-04 海宁奕斯伟集成电路设计有限公司 视频数据转换电路、方法及显示设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030123726A1 (en) * 2001-12-27 2003-07-03 Lg Electronics Inc. Scene change detection apparatus
CN101304490A (zh) * 2008-06-20 2008-11-12 北京六维世纪网络技术有限公司 一种拼合视频的方法和装置
CN102157009A (zh) * 2011-05-24 2011-08-17 中国科学院自动化研究所 基于运动捕获数据的三维人体骨架运动编辑方法
CN104240224A (zh) * 2013-06-20 2014-12-24 富泰华工业(深圳)有限公司 视频分析系统及方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6665450B1 (en) * 2000-09-08 2003-12-16 Avid Technology, Inc. Interpolation of a sequence of images using motion analysis
US20050008240A1 (en) * 2003-05-02 2005-01-13 Ashish Banerji Stitching of video for continuous presence multipoint video conferencing
US20100074340A1 (en) * 2007-01-08 2010-03-25 Thomson Licensing Methods and apparatus for video stream splicing
EP2659455A1 (en) * 2010-12-29 2013-11-06 Thomson Licensing Method for generating motion synthesis data and device for generating motion synthesis data
JP6030072B2 (ja) * 2011-01-28 2016-11-24 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 動くオブジェクトの動きベクトルに基づく比較
JP2013165488A (ja) * 2012-01-11 2013-08-22 Panasonic Corp 画像処理装置、撮像装置、およびプログラム
US20130290514A1 (en) * 2012-04-27 2013-10-31 Alcatel-Lucent Usa Inc. Dynamic interstitial transitions
CN103501415B (zh) * 2013-10-01 2017-01-04 中国人民解放军国防科学技术大学 一种基于重叠部分结构变形的视频实时拼接方法
US20150155009A1 (en) * 2013-12-03 2015-06-04 Nokia Corporation Method and apparatus for media capture device position estimate- assisted splicing of media
JP2016059015A (ja) * 2014-09-12 2016-04-21 株式会社東芝 画像出力装置
CN105893920B (zh) * 2015-01-26 2019-12-27 阿里巴巴集团控股有限公司 一种人脸活体检测方法和装置
WO2016180486A1 (en) * 2015-05-12 2016-11-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Composite scalable video streaming
US9635307B1 (en) * 2015-12-18 2017-04-25 Amazon Technologies, Inc. Preview streaming of video data
US10185877B2 (en) * 2016-07-08 2019-01-22 Huawei Technologies Co., Ltd. Systems, processes and devices for occlusion detection for video-based object tracking
CN106331480B (zh) * 2016-08-22 2020-01-10 北京交通大学 基于图像拼接的视频稳像方法
US10754529B2 (en) * 2016-10-28 2020-08-25 Adobe Inc. Facilitating editing of virtual-reality content using a virtual-reality headset
CN106657816A (zh) * 2016-11-07 2017-05-10 湖南源信光电科技有限公司 一种基于orb算法的图像配准和图像融合并行的多路视频快速拼接算法
US10332242B2 (en) * 2017-02-02 2019-06-25 OrbViu Inc. Method and system for reconstructing 360-degree video
US11153465B2 (en) * 2017-06-21 2021-10-19 Dell Products L.P. System and method of processing video of a tileable wall
CN107958442A (zh) * 2017-12-07 2018-04-24 中国科学院自动化研究所 多幅显微图像拼接中的灰度校正方法及装置
US10600157B2 (en) * 2018-01-05 2020-03-24 Qualcomm Incorporated Motion blur simulation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3893513A4 *

Also Published As

Publication number Publication date
EP3893513A1 (en) 2021-10-13
US11972580B2 (en) 2024-04-30
EP3893513A4 (en) 2022-01-26
CN111294644B (zh) 2021-06-25
CN111294644A (zh) 2020-06-16
US20210183013A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
WO2020114251A1 (zh) 视频拼接方法、装置、电子设备及计算机存储介质
US10600157B2 (en) Motion blur simulation
US9558543B2 (en) Image fusion method and image processing apparatus
TWI602152B (zh) 影像擷取裝置及其影像處理方法
WO2020143191A1 (en) Image frame prediction method, image frame prediction apparatus and head display apparatus
US8988529B2 (en) Target tracking apparatus, image tracking apparatus, methods of controlling operation of same, and digital camera
AU2011216119B2 (en) Generic platform video image stabilization
US9118840B2 (en) Image processing apparatus which calculates motion vectors between images shot under different exposure conditions, image processing method, and computer readable medium
US9148622B2 (en) Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection
CN104219533B (zh) 一种双向运动估计方法和视频帧率上转换方法及系统
JPWO2006137253A1 (ja) 画像生成装置および画像生成方法
JPWO2008087721A1 (ja) 画像合成装置、画像合成方法、プログラム
CN109191506B (zh) 深度图的处理方法、系统及计算机可读存储介质
JP6172935B2 (ja) 画像処理装置、画像処理方法及び画像処理プログラム
US20140375843A1 (en) Image processing apparatus, image processing method, and program
US8194141B2 (en) Method and apparatus for producing sharp frames with less blur
CN103973963A (zh) 图像获取装置及其图像处理方法
JP6270413B2 (ja) 画像処理装置、撮像装置、および画像処理方法
CN109600667B (zh) 一种基于网格与帧分组的视频重定向的方法
CN113014817B (zh) 高清高帧视频的获取方法、装置及电子设备
JP6282133B2 (ja) 撮像装置、その制御方法、および制御プログラム
WO2023093281A1 (zh) 图像处理及模型训练方法及电子设备
JP2016201037A (ja) 画像処理装置、画像処理方法及びプログラム
US11195247B1 (en) Camera motion aware local tone mapping
JP6808446B2 (ja) 画像処理装置、画像処理方法およびプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892144

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019892144

Country of ref document: EP

Effective date: 20210707