WO2019228219A1 - 一种去除视频抖动的方法及装置 - Google Patents

一种去除视频抖动的方法及装置 Download PDF

Info

Publication number
WO2019228219A1
WO2019228219A1 · PCT/CN2019/087693 · CN2019087693W
Authority
WO
WIPO (PCT)
Prior art keywords
image
pair
images
information
original
Prior art date
Application number
PCT/CN2019/087693
Other languages
English (en)
French (fr)
Inventor
陈睿智
Original Assignee
Alibaba Group Holding Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Priority to JP2020563582A (JP7383642B2)
Priority to EP19810675.9A (EP3806445A4)
Publication of WO2019228219A1
Priority to US17/106,682 (US11317008B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/213Circuitry for suppressing or minimising impulsive noise
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Definitions

  • the present application relates to the field of video processing, and in particular, to a method and a device for removing video jitter.
  • the application also relates to an electronic device and a computer-readable storage medium.
  • a length of video is formed by many frames of images that change rapidly and continuously.
  • the video de-jittering scheme in the prior art is a non-real-time de-jittering scheme, which cannot meet the requirements for real-time processing of live video and short videos.
  • This application provides a method for removing video jitter, which aims to solve the technical problem that the prior art cannot remove jitter in real time.
  • the present application proposes a method for removing video jitter.
  • the method includes: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair, and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • the method further includes: storing the original image in the first queue; and storing the position transformation information of the latter image with respect to the previous image in each pair of the original images in the second queue.
  • the step of obtaining the deformation information corresponding to the previous image in the m-th pair of original images according to the position transformation information of the next image relative to the previous image in the n pairs of original images includes: when the number of images stored in the first queue reaches a first number, and the number of pieces of position transformation information stored in the second queue reaches the first number, obtaining the deformation information corresponding to the previous image in the m-th pair of original images according to the position transformation information of the next image relative to the previous image in the n pairs of original images.
  • the method further includes: before storing another image in the first queue, taking out the image at the head of the first queue; and before storing another piece of position transformation information in the second queue, taking out the position transformation information at the head of the second queue.
  • the method further comprises: compressing each pair of original images by a first multiple; determining feature points on each image in each pair of compressed images; determining two corresponding feature points on the two images of each pair of compressed images as one feature point pair; and determining the position information of the feature point pairs in each pair of compressed images.
  • the step of determining the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images includes: enlarging the position information of the feature point pairs in each pair of compressed images by the first multiple to obtain the position information of the feature point pairs in each pair of original images.
  • the step of determining the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images includes: partitioning the two images in each pair of original images; determining, according to the position information of the feature point pairs in corresponding partitions of each pair of original images, position transformation information of each partition of the next image relative to the corresponding partition of the previous image; and determining, according to the per-partition position transformation information, the position transformation information of the next image relative to the previous image in each pair of original images.
  • the position information is coordinates, the position transformation information is a transformation matrix, and the deformation information is a deformation matrix.
  • the step of deforming the previous image in the m-th pair of original images includes: deforming the previous image of the m-th pair, partition by partition, according to the deformation matrix corresponding to it; and cropping the edges of the deformed image.
  • the present application also proposes a device for removing video jitter.
  • the device includes: a first position information acquisition unit, configured to determine position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; a position transformation information acquisition unit, configured to determine, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; a deformation information acquisition unit, configured to obtain, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and a deformation processing unit, configured to deform the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • the device further includes: an image storage unit, configured to store the original images in the first queue; and a position transformation information storage unit, configured to store the position transformation information of the next image relative to the previous image in each pair of original images in a second queue.
  • the device further includes: a compression unit, for compressing each pair of original images by a first multiple; a feature point determination unit, for determining feature points on each image in each pair of compressed images; a feature point pair determination unit, for determining two corresponding feature points on the two images of each pair of compressed images as one feature point pair; and a second position information acquisition unit, for determining the position information of the feature point pairs in each pair of compressed images.
  • the position transformation information acquisition unit includes: an image partitioning subunit, for partitioning each image in each pair of original images; a first position transformation information acquisition subunit, for determining, according to the position information of the feature point pairs in corresponding partitions of each pair of original images, position transformation information of each partition of the next image relative to the corresponding partition of the previous image; and a second position transformation information acquisition subunit, for determining, according to the per-partition position transformation information, the position transformation information of the next image relative to the previous image in each pair of original images.
  • the deformation processing unit includes: a deformation subunit, configured to deform the previous image in the m-th pair of original images, partition by partition, according to the deformation matrix corresponding to it; and a cropping subunit, configured to crop the edges of the deformed previous image of the m-th pair.
  • the present application also proposes an electronic device, which includes: a processor; and a memory for storing a program for removing video jitter, where the program, when read and executed by the processor, performs the following operations:
  • determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • the present application also proposes a computer-readable storage medium storing a program for removing video jitter.
  • when the program is read and executed by a processor, it performs the following operations: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • the technical solution for removing video jitter proposed in this application first determines the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images.
  • Because the compressed images are smaller than the originals, the device can process them quickly, so this technique allows the position information of the feature point pairs on each newly captured image to be obtained in real time.
  • Once the position information of the feature point pairs in each pair of images is obtained in real time, the position transformation information of the next image relative to the previous image in each pair of original images is correspondingly determined in real time.
  • After the position transformation information of the next image relative to the previous image has been obtained for n pairs of original images, the deformation information corresponding to the previous image in the m-th pair of original images is obtained, and that previous image is deformed according to this deformation information to obtain the de-jittered previous image.
  • By analogy, the images following that previous image are deformed and de-jittered in turn, so that de-jittering is achieved in real time.
  • At the same time, this technical solution does not rely on other auxiliary equipment, which is highly convenient and solves the technical problem in the prior art that jitter either cannot be removed in real time or requires an external gyroscope.
  • FIG. 1 is a flowchart of an embodiment of a method for removing video jitter provided by this application
  • FIG. 2 is a schematic diagram of feature points involved in a method for removing video jitter provided by the present application
  • FIG. 3 is a schematic diagram of a partition transformation matrix involved in a method for removing video jitter provided by the present application
  • FIG. 4 is a schematic diagram of a correspondence relationship between each image and a corresponding partition transformation matrix involved in the method for removing video jitter provided by the present application;
  • FIG. 5 is a schematic diagram of each matrix applied to obtaining a deformation matrix involved in the method for removing video jitter provided by the present application;
  • FIG. 6 is a schematic diagram of image deformation processing involved in a method for removing video jitter provided by the present application
  • FIG. 7 is a schematic diagram of an embodiment of a device for removing video jitter provided by this application.
  • FIG. 1 is a flowchart of an embodiment of a method for removing video jitter provided by this application. The following describes the technical solution of the method for removing video jitter provided by the present application with reference to the flow of an embodiment of the method for removing video jitter shown in FIG. 1.
  • a length of video is formed by many frames of images that change rapidly and continuously.
  • when a video is shot, relative motion between the capture device and the scene causes large displacements between these rapidly changing images, so the video exhibits a "jitter" phenomenon.
  • This application aims to solve the problem of removing the jitter of video in real time.
  • the method for removing video jitter shown in FIG. 1 includes:
  • Step S101: Determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair.
  • In step S101, the position information of the feature point pairs in each pair of original (pre-compression) images is determined from the position information of the feature point pairs in each pair of compressed images. Therefore, step S101 may be preceded by step S100: acquiring the position information of the feature point pairs in each pair of compressed images.
  • Step S100 may specifically include the following steps:
  • In step S100-1, the original images are stored in a first queue.
  • After the video capture device has captured multiple frames over a period of time, the frames are arranged in the first queue in order; every two adjacent frames form a pair of images, in which the earlier frame is the previous image and the later frame is the next image.
  • the queue may be implemented specifically in an image buffer.
  • the image buffer refers to a memory in a computer system dedicated to storing images being synthesized or displayed.
  • FIG. 4 illustrates a schematic diagram of the image buffer.
  • step S100-2 each pair of original images is compressed by a first multiple.
  • the original image may be compressed by a first multiple, and the multiple may be a preset value.
  • the compressed image is smaller than the image before compression by the first multiple, and the electronic device processes it faster, so that after each new image is captured and compressed, the subsequent steps, such as determining the feature points of the new image and the position information of each feature point, can be performed quickly; see Figure 2.
  • the two images on the right in Figure 2 are the compressed previous frame image and the compressed current frame image.
  • the width and height of the compressed current frame and previous frame on the right are smaller, by the first multiple, than the width and height of the corresponding frames before compression on the left.
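  • A minimal sketch of this compression step, assuming OpenCV is used and the "first multiple" is a hypothetical preset value s = 4 (the patent does not name a library or a specific factor):

```python
# Illustrative sketch (not from the patent): downscale a frame by an assumed
# "first multiple" s before feature detection.
import cv2

s = 4  # hypothetical preset compression multiple

def compress(frame):
    """Return a copy of `frame` whose width and height are s times smaller."""
    h, w = frame.shape[:2]
    return cv2.resize(frame, (w // s, h // s), interpolation=cv2.INTER_AREA)
```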
  • Step S100-3 determining feature points on each image before and after in each pair of images after compression.
  • Feature points refer to a series of pixel points on an image that can characterize the contours and appearance of the scene. Generally, this series of points will have obvious characteristics, such as a larger gray value, that is, the color of the image at that point is darker, and the point can be determined as a characteristic point. For example, if the point P on the compressed current frame in FIG. 2 can characterize the characteristics of the captured scene, the point P can be used as a feature point on the compressed current frame image.
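  • A sketch of feature point detection on a compressed frame. Shi-Tomasi corners (cv2.goodFeaturesToTrack) are used here only as one possible detector; the patent does not prescribe a specific feature point algorithm, and the parameter values are assumptions:

```python
# Detect feature points on a compressed grayscale frame (illustrative only).
import cv2

def detect_feature_points(gray_small, max_points=200):
    pts = cv2.goodFeaturesToTrack(gray_small, maxCorners=max_points,
                                  qualityLevel=0.01, minDistance=8)
    # pts has shape (N, 1, 2); reshape to an (N, 2) array of (u, v) coordinates
    return pts.reshape(-1, 2) if pts is not None else None
```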
  • step S100-4 two corresponding feature points on each of the images before and after each pair of compressed images are determined as one feature point pair.
  • Each image before and after has its own series of several feature points.
  • a certain feature point on the previous image may have corresponding feature points on the latter image.
  • if the two corresponding feature points both represent the same point of the captured scene, they form a feature point pair.
  • In FIG. 2, the feature point P on the compressed current frame and the feature point P on the compressed previous frame both represent the same feature point of the captured scene, so these two corresponding feature points form a feature point pair.
  • step S100-5 the position information of the feature point pairs in each pair of compressed images is determined.
  • the position information of a feature point pair refers to the positions of its two corresponding feature points in their respective images; this position information may be the coordinates of each feature point on its image.
  • For example, the position coordinates of the feature point P on the compressed current frame in FIG. 2 are (u, v).
  • the corresponding feature point P on the compressed previous frame also has coordinate values.
  • the position information of these two feature points on their respective images is the position information of one feature point pair on the pair of images. There are multiple feature point pairs on two adjacent compressed images, so the position information of multiple feature point pairs on the adjacent images can be obtained, as in the matching sketch below.
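  • A sketch of forming feature point pairs between the compressed previous frame and the compressed current frame. Pyramidal Lucas-Kanade optical flow is one common way to find the corresponding point for each detected point; the patent only requires corresponding points, so this choice and the function names are assumptions:

```python
# Match feature points across two compressed frames (illustrative sketch).
import cv2
import numpy as np

def match_pairs(prev_small, curr_small, prev_pts):
    """prev_pts: (N, 2) points detected on the compressed previous frame."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_small, curr_small,
        prev_pts.reshape(-1, 1, 2).astype(np.float32), None)
    ok = status.reshape(-1) == 1
    # Each surviving (prev, curr) row is one feature point pair together with
    # its position information on the compressed pair of images.
    return prev_pts[ok], curr_pts.reshape(-1, 2)[ok]
```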
  • After step S100, that is, after the position information of the feature point pairs in each pair of compressed images has been obtained, step S101 may be performed: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images.
  • Since the compressed current frame and previous frame are obtained by compressing the original, pre-compression images by the first multiple, once the position information of the feature point pairs in each pair of compressed images has been obtained, the position information of the feature points on each pre-compression image is obtained simply by enlarging the compressed-image position information by the first multiple; this yields the position information of the feature point pairs in each pair of original images.
  • For example, in FIG. 2, enlarging the coordinates (u, v) of the feature point P on the compressed current frame by the first multiple gives the coordinates (su, sv) of the feature point P on the current frame before compression.
  • Similarly, the coordinates of the feature point P on the previous frame before compression can be obtained.
  • The two corresponding feature points P on the compressed current frame and the compressed previous frame constitute a feature point pair P of the compressed pair of images, and the two corresponding feature points P on the pre-compression current frame and previous frame constitute a feature point pair P of the original pair of images, as in the coordinate-scaling sketch below.
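  • A one-line sketch of step S101 under the same assumed factor s: coordinates detected on the compressed frames are simply scaled back to the original resolution.

```python
# (u, v) on a compressed frame maps to (s*u, s*v) on the original frame.
def to_original_coords(pts_small, s=4):
    return pts_small * s  # pts_small is an (N, 2) NumPy array of (u, v)
```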
  • Step S102 Determine position transformation information of the next image with respect to the previous image in each pair of original images according to the position information of the feature point pair in each pair of original images.
  • In this application, step S102 may be performed by dividing each pair of original images into multiple partitions: after the position transformation information from each partition of the current frame to the corresponding partition of the previous frame is determined, the combination of the position transformation information of all the partitions is the position transformation information of the current frame relative to the previous frame in each pair of images.
  • step S102 may include the following steps:
  • In step S102-1, the two images in each pair of original images are partitioned; as shown in the example of FIG. 3, the current frame and the previous frame are each divided into six partitions.
  • Four feature points C0, C1, C2, C3 are illustrated in the upper-left partition of the current frame, and four corresponding feature points P0, P1, P2, P3 are illustrated in the previous frame.
  • Step S102-2 determining the position transformation information of the corresponding partition of the next image to the corresponding partition of the previous image in each pair according to the position information of each feature point pair in the corresponding partition in the original image;
  • Because the two images move relative to each other, the position information of a feature point on the next image differs from that of the corresponding feature point on the previous image.
  • The difference between them is the position transformation information from the feature point on the next image to the corresponding feature point on the previous image.
  • Likewise, the difference between the position information of the feature points in a partition of the next image and the position information of the corresponding feature points in the corresponding partition of the previous image is the position transformation information from that partition of the next original image to the corresponding partition of the previous original image.
  • For example, the previous frame in FIG. 3 has four feature points P0, P1, P2, P3, which correspond to the four feature points C0, C1, C2, C3 on the current frame.
  • As described above, the four feature points on the previous frame and the four on the current frame represent the same features of the captured scene, so the four points on the two images correspond to each other and form four feature point pairs.
  • In the example where the upper-left partition of the previous frame has four feature points, the position information of P0, P1, P2, P3 constitutes the matrix corresponding to the upper-left partition of the previous frame, and the position information of C0, C1, C2, C3 constitutes the matrix corresponding to the upper-left partition of the current frame.
  • There is a transformation matrix that maps the matrix of the upper-left partition of the current frame to the matrix of the upper-left partition of the previous frame; this transformation matrix is the position transformation information, or position transformation matrix, from the upper-left partition of the current frame to the upper-left partition of the previous frame.
  • FIG. 3 illustrates this matrix as H00: multiplying the position information (matrix) of the feature points in the upper-left partition of the current frame by H00 yields the position information of the corresponding feature points in the upper-left partition of the previous frame.
  • Similarly, the position transformation information from the lower-left partition of the current frame to the lower-left partition of the previous frame can be denoted H10, and the position transformation information of the other four corresponding partitions can be denoted H01, H02, H11 and H12 in turn.
  • Step S102-3 determining position transformation information of the next image with respect to the previous image in each pair of original images according to the position transformation information of the corresponding partition of the next original image in each pair with respect to the previous original image .
  • Since step S102-2 has already produced the position transformation information H00, H01, H02, H10, H11, H12 from each partition of the current frame to the corresponding partition of the previous frame, the combination of this per-partition position transformation information represents the position transformation information of the current frame relative to the previous frame; the partition transformation matrices from the current frame to the previous frame illustrated in FIG. 3 are exactly this position transformation information. A per-partition estimation sketch follows.
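  • A sketch of step S102 under assumptions: the frame is split into a 2 x 3 grid (the six partitions of FIG. 3), and cv2.findHomography stands in for whatever transform model an implementation actually fits per partition from the feature point pairs falling inside it.

```python
# Estimate one transformation matrix per partition (illustrative sketch).
import cv2
import numpy as np

ROWS, COLS = 2, 3  # six partitions, as in the FIG. 3 example

def partition_transforms(prev_pts, curr_pts, width, height):
    cell_w, cell_h = width / COLS, height / ROWS
    H = [[np.eye(3) for _ in range(COLS)] for _ in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLS):
            inside = ((curr_pts[:, 0] // cell_w == c) &
                      (curr_pts[:, 1] // cell_h == r))
            if inside.sum() >= 4:  # at least 4 pairs to fit a homography
                h, _ = cv2.findHomography(curr_pts[inside], prev_pts[inside],
                                          cv2.RANSAC)
                if h is not None:
                    H[r][c] = h  # maps current-frame partition to previous frame
    return H
```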
  • Step S102-4 storing the position transformation information of the next image with respect to the previous image in each pair of original images as a second queue.
  • the position transformation information between the pair of images may be stored in a queue, and the queue may be named a second queue.
  • the queue may be specifically stored by the partition transformation matrix buffer.
  • the partition transformation matrix buffer may be a memory specifically used to store transformation matrices in a computer system.
  • FIG. 4 illustrates a schematic diagram of the partition transformation matrix buffer.
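  • A sketch of the two fixed-length queues (image buffer and partition transformation matrix buffer). collections.deque with maxlen=n drops the head automatically, which mirrors "take out the head before storing again"; the length n = 30 is an assumed value, not taken from the patent.

```python
# Fixed-length first queue (frames) and second queue (per-pair partition
# transforms), illustrative sketch only.
from collections import deque

n = 30  # hypothetical buffer length

image_queue = deque(maxlen=n)       # first queue: original frames
transform_queue = deque(maxlen=n)   # second queue: per-pair partition matrices

def push(frame, partition_matrices):
    image_queue.append(frame)
    transform_queue.append(partition_matrices)
    # De-jittering of the head image is triggered only once both queues
    # have accumulated n entries.
    return len(image_queue) == n and len(transform_queue) == n
```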
  • Step S103 Obtain deformation information corresponding to the previous image in the mth pair of original images according to the position transformation information of the next image in the n pair of original images with respect to the previous image, where n and m are positive integers and m is not greater than n .
  • The following example illustrates how step S103 may be implemented, taking m = 1 as an example, that is, how to obtain the deformation information corresponding to the previous image in the first pair of original images.
  • To obtain this deformation information, the position information stored in the original path buffer, the optimized path register and the optimized path buffer of the deformation matrix iterative optimizer is used; the role of each buffer in this step is described below.
  • the partition transformation matrix buffer in FIG. 5 the position transformation information of the next image relative to the previous image is stored.
  • the partition transformation matrix buffer can store position transformation information between a certain number of images.
  • the position transformation information between this number of images is stored in the order in which it is generated, with the most recently generated position transformation information placed at the tail of the partition transformation matrix buffer.
  • the partition transformation matrix buffer exemplified in FIG. 5 is capable of storing position transformation information corresponding to n pairs of images, that is, storing n position transformation information or a position transformation matrix.
  • the rightmost set of partition transformation matrices in FIG. 5 represents the position transformation matrix between the first image and the second image acquired by the image collector, and so on.
  • and the leftmost set of partition transformation matrices in FIG. 5 represents the position transformation matrix from the most recently acquired image to the image immediately before it.
  • the partition transformation matrix buffer shown in FIG. 5 has a fixed length, that is, it can store at most n pieces of position transformation information.
  • Correspondingly, the image buffer in FIG. 4 also has a fixed length, equal to the length of the partition transformation matrix buffer, that is, the image buffer can store at most n images.
  • When the partition transformation matrix buffer is full with n pieces of position transformation information and the image buffer is full with n images, the following step is triggered: obtaining the deformation information corresponding to the previous image in the first pair of original images.
  • For example, the first queue in the image buffer illustrated in FIG. 4 can store n images; the first and second images acquired form the first pair of images, and their serial numbers in the image buffer are n-1 and n-2.
  • Obtaining the deformation information corresponding to the previous image in the first pair of original images therefore means obtaining the deformation information corresponding to the frame numbered n-1 in the image buffer.
  • After the step of obtaining the deformation information corresponding to the previous image in the first pair of original images, the following steps may also be performed: before storing another image in the first queue, taking out the image at the head of the first queue; and before storing another piece of position transformation information in the second queue, taking out the position transformation information at the head of the second queue. Taking the head image out of the image buffer and the head position transformation information out of the partition transformation matrix buffer makes room for storing a new image and new position transformation information.
  • In FIG. 5, H n-1,0 denotes the position transformation information of the first partition within the head entry of the second queue that stores position transformation information, H n-1,1 denotes that of the second partition, and so on up to H n-1,5, which denotes the position transformation information of the sixth partition.
  • Similarly, H 0,0 denotes the position transformation information of the first partition within the tail entry of the second queue, H 0,1 denotes that of the second partition, and so on up to H 0,5 for the sixth partition.
  • The original path buffer stores, for each partition, the product of the corresponding partition position transformation information across the entries of the second queue, that is, C i,j = H 0,j * H 1,j * ... * H i-1,j * H i,j, where C i,j is the product of the j-th partition position transformation information taken from the entries with indices 0 through i.
  • For example, when i = n-1, C n-1,j equals the product of H n-1,j, H n-2,j, ..., down to H 0,j.
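  • A sketch of filling the original path buffer. The exact multiplication order and indexing conventions follow the buffers described above and are assumptions here, since matrix products do not commute and the patent does not fix a convention:

```python
# Accumulate C[i][j] = H[i][j] @ H[i-1][j] @ ... @ H[0][j] per partition j.
import numpy as np

def original_path(H):
    """H[i][j]: 3x3 transform of partition j for pair i; returns C[i][j]."""
    n, num_parts = len(H), len(H[0])
    C = [[None] * num_parts for _ in range(n)]
    for j in range(num_parts):
        acc = np.eye(3)
        for i in range(n):
            acc = H[i][j] @ acc   # multiply in the newer transform
            C[i][j] = acc
    return C
```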
  • The optimized path register stores a weighted average value Q i,j.
  • Q i,j is obtained by taking a weighted average of the following three items: the position information of the partitions adjacent to the j-th partition on the image numbered i in the image queue, the position information of the j-th partition on the frame adjacent to image i, and C i,j in the original path buffer.
  • Each time this weighted average Q i,j is obtained, it is first held in the optimized path register and then written into the optimized path buffer, where it is recorded as P i,j.
  • In particular, when i = n-1, P n-1,j is the weighted average of: the position information of the partitions adjacent to the j-th partition on the head image of the first queue, the position information of the j-th partition on the frame immediately preceding the head frame, and C n-1,j in the original path buffer.
  • The product P n-1,j^-1 * C n-1,j is denoted B j; B j represents the deformation information corresponding to the j-th partition of the head-of-queue image.
  • For example, B 0 represents the deformation information corresponding to the first partition of the head image, B 1 the deformation information corresponding to the second partition, and so on; if the head image is divided into six partitions, B 5 represents the deformation information corresponding to the sixth partition.
  • The combination of B 0, B 1, B 2, B 3, B 4 and B 5 constitutes the deformation information corresponding to the head image in the image buffer; see FIG. 6 for the head-image deformation information obtained by the deformation matrix iterative optimizer. A smoothing sketch follows.
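  • A very rough sketch of the path optimization and of B j = P n-1,j^-1 * C n-1,j. The weights, the neighbour handling, and the single forward pass are all assumptions made only to show the shape of the computation; the patent's iterative optimizer is not specified at this level of detail.

```python
# Optimise the per-partition path and derive the head-image deformation
# matrices (illustrative sketch under stated assumptions).
import numpy as np

def optimise_path(C, w_spatial=0.25, w_temporal=0.25, w_data=0.5):
    n, m = len(C), len(C[0])
    P = [[C[i][j].copy() for j in range(m)] for i in range(n)]
    for i in range(n):
        for j in range(m):
            # weighted average of adjacent partitions, adjacent frame, and C[i][j]
            spatial = np.mean([P[i][k] for k in (j - 1, j + 1) if 0 <= k < m], axis=0)
            temporal = P[i - 1][j] if i > 0 else P[i][j]
            P[i][j] = w_spatial * spatial + w_temporal * temporal + w_data * C[i][j]
    return P

def head_deformation(C, P):
    """B[j] = inv(P[n-1][j]) @ C[n-1][j] for the head-of-queue image."""
    n = len(C) - 1
    return [np.linalg.inv(P[n][j]) @ C[n][j] for j in range(len(C[0]))]
```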
  • After the deformation information of the previous image in the first pair of images has been obtained in step S103, that previous image can be deformed using this deformation information; see step S104.
  • step S104 according to the deformation information corresponding to the previous image in the m-th pair of original images, the previous image in the m-th pair of original images is deformed to obtain the previous image in the m-th pair of original images after dither removal.
  • Continuing with the previous image in the first pair of images as the example, once the deformation information is represented by a deformation matrix, the previous image is deformed partition by partition according to the deformation matrix corresponding to it; that is, the position information of the image is adjusted using the deformation information obtained in step S103.
  • For example, the deformation matrix of the third partition of the head image in FIG. 6 contains position information for the feature point P, and this differs somewhat from the position information of the feature point P in the third partition of the head image itself.
  • To eliminate this positional difference, the point P on the head image is adjusted so that it coincides with the position of the feature point P in the deformation information of the third partition.
  • Likewise, the positions of the feature points of the other partitions of the head image are adjusted to the positions of the corresponding feature points in the deformation information, which yields the adjusted image shown in FIG. 6; after the head image has been adjusted, the parts of the image lying outside the deformation information are cropped away, which removes the positional differences. A warp-and-crop sketch follows.
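  • A sketch of step S104. For brevity a single whole-frame warp with one matrix is shown; a per-partition implementation would apply each B j to its own cell before cropping the edges. The crop ratio is an assumption.

```python
# Warp the head image with its deformation matrix and crop the border.
import cv2

def dejitter_head(frame, B, crop_ratio=0.05):
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(frame, B[0], (w, h))  # illustrative: one matrix
    dx, dy = int(w * crop_ratio), int(h * crop_ratio)
    return warped[dy:h - dy, dx:w - dx]                # trim the deformed edges
```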
  • the technical solution for removing video jitter proposed in this application first determines the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images.
  • Because the compressed images are smaller than the originals, the device can process them quickly, so this technique allows the position information of the feature point pairs on each newly captured image to be obtained in real time.
  • Once the position information of the feature point pairs in each pair of images is obtained in real time, the position transformation information of the next image relative to the previous image in each pair of original images is correspondingly determined in real time.
  • After the position transformation information of the next image relative to the previous image has been obtained for n pairs of original images, the deformation information corresponding to the previous image in the first pair of original images is obtained, and that previous image is deformed according to this deformation information to obtain the de-jittered previous image.
  • By analogy, the images following that previous image are deformed and de-jittered in turn, so that de-jittering is achieved in real time.
  • At the same time, this technical solution does not rely on other auxiliary equipment, which is highly convenient and solves the technical problem in the prior art that jitter either cannot be removed in real time or requires an external gyroscope.
  • FIG. 7 is a schematic structural diagram of an embodiment of a device for removing video jitter provided by this application.
  • This device embodiment corresponds to the method embodiment shown in FIG. 1, so it is described relatively simply.
  • the device can be applied to various electronic devices.
  • the device embodiments described below are only schematic.
  • the apparatus for removing video jitter shown in FIG. 7 includes: a first position information acquisition unit 701, configured to determine position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; a position transformation information acquisition unit 702, configured to determine, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; a deformation information acquisition unit 703, configured to obtain, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and a deformation processing unit 704, configured to deform the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • the device further includes: an image storage unit, configured to store the original images in the first queue; and a position transformation information storage unit, configured to store the position transformation information of the next image relative to the previous image in each pair of original images in a second queue.
  • the device further includes: a compression unit, for compressing each pair of original images by a first multiple; a feature point determination unit, for determining feature points on each image in each pair of compressed images; a feature point pair determination unit, for determining two corresponding feature points on the two images of each pair of compressed images as one feature point pair; and a second position information acquisition unit, for determining the position information of the feature point pairs in each pair of compressed images.
  • the position transformation information acquisition unit 702 includes: an image partitioning subunit, for partitioning each image in each pair of original images; a first position transformation information acquisition subunit, for determining, according to the position information of the feature point pairs in corresponding partitions of each pair of original images, position transformation information of each partition of the next image relative to the corresponding partition of the previous image; and a second position transformation information acquisition subunit, for determining, according to the per-partition position transformation information, the position transformation information of the next image relative to the previous image in each pair of original images.
  • the deformation processing unit 704 includes: a deformation subunit, configured to deform the previous image in the m-th pair of original images, partition by partition, according to the deformation matrix corresponding to it; and a cropping subunit, configured to crop the edges of the deformed previous image of the m-th pair.
  • the present application also provides an embodiment of an electronic device for removing video jitter.
  • the electronic device in this embodiment includes: a processor; and a memory for storing a program for removing video jitter.
  • When the program is read and executed by the processor, it performs the following operations: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair;
  • obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair. For related technical features, reference may be made to the method embodiments, and details are not described here again.
  • This application also proposes a computer-readable storage medium. Since the embodiments of the computer-readable storage medium are basically similar to the method embodiments, the description is relatively simple. For related parts, please refer to the corresponding description of the method embodiments provided above.
  • the computer-readable storage medium embodiments described below are merely exemplary.
  • the computer-readable medium may be included in the device described in the foregoing embodiments; it may also exist alone without being assembled into the device.
  • the computer-readable medium carries one or more programs; when the one or more programs are executed by the device, the device is caused to: determine position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determine, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtain, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deform the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
  • a computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
  • Memory may include non-persistent memory, random access memory (RAM), and / or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information may be stored by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method and a device for removing video jitter. The method includes: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair, and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair. Through these steps, video jitter is removed in real time.

Description

Method and device for removing video jitter
This application claims priority to Chinese Patent Application No. 201810554266.9, filed on May 31, 2018 and entitled "Method and device for removing video jitter", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of video processing, and in particular to a method and a device for removing video jitter. The application also relates to an electronic device and a computer-readable storage medium.
Background
A video of any length is formed by many frames of rapidly and continuously changing images. When a video is shot, relative motion between the video capture device and the scene causes large displacements between the rapidly changing captured images, so the video exhibits jitter.
The video de-jittering schemes in the prior art are non-real-time schemes and cannot meet the real-time processing requirements of live streaming and short videos.
Summary of the invention
The present application provides a method for removing video jitter, which aims to solve the technical problem that the prior art cannot remove jitter in real time.
The present application proposes a method for removing video jitter, the method including: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair, and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
Optionally, the method further includes: storing the original images in a first queue; and storing the position transformation information of the next image relative to the previous image in each pair of original images in a second queue.
Optionally, the step of obtaining the deformation information corresponding to the previous image in the m-th pair of original images according to the position transformation information of the next image relative to the previous image in the n pairs of original images includes: when the number of images stored in the first queue reaches a first number, and the number of pieces of position transformation information stored in the second queue reaches the first number, obtaining the deformation information corresponding to the previous image in the m-th pair of original images according to the position transformation information of the next image relative to the previous image in the n pairs of original images.
Optionally, after the step of obtaining the deformation information corresponding to the previous image in the first pair of original images, the method further includes: before storing another image in the first queue, taking out the image at the head of the first queue; and before storing another piece of position transformation information in the second queue, taking out the position transformation information at the head of the second queue.
Optionally, the method further includes: compressing each pair of original images by a first multiple; determining feature points on each image in each pair of compressed images; determining two corresponding feature points on the two images of each pair of compressed images as one feature point pair; and determining the position information of the feature point pairs in each pair of compressed images.
Optionally, the step of determining the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images includes: enlarging the position information of the feature point pairs in each pair of compressed images by the first multiple to obtain the position information of the feature point pairs in each pair of original images.
Optionally, the step of determining the position information of the feature point pairs in each pair of original images based on the position information of the feature point pairs in each pair of compressed images includes: partitioning the two images in each pair of original images; determining, according to the position information of the feature point pairs in corresponding partitions of each pair of original images, position transformation information of each partition of the next image relative to the corresponding partition of the previous image; and determining, according to the per-partition position transformation information, the position transformation information of the next image relative to the previous image in each pair of original images.
Optionally, the position information is coordinates, the position transformation information is a transformation matrix, and the deformation information is a deformation matrix.
Optionally, the step of deforming the previous image in the m-th pair of original images according to the deformation information corresponding to it includes: deforming the previous image of the m-th pair, partition by partition, according to the corresponding deformation matrix; and cropping the edges of the deformed previous image of the m-th pair.
The present application also proposes a device for removing video jitter, the device including: a first position information acquisition unit, configured to determine position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; a position transformation information acquisition unit, configured to determine, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; a deformation information acquisition unit, configured to obtain, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and a deformation processing unit, configured to deform the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
Optionally, the device further includes: an image storage unit, configured to store the original images in the first queue; and a position transformation information storage unit, configured to store the position transformation information of the next image relative to the previous image in each pair of original images in a second queue.
Optionally, the device further includes: a compression unit, for compressing each pair of original images by a first multiple; a feature point determination unit, for determining feature points on each image in each pair of compressed images; a feature point pair determination unit, for determining two corresponding feature points on the two images of each pair of compressed images as one feature point pair; and a second position information acquisition unit, for determining the position information of the feature point pairs in each pair of compressed images.
Optionally, the position transformation information acquisition unit includes: an image partitioning subunit, for partitioning each image in each pair of original images; a first position transformation information acquisition subunit, for determining, according to the position information of the feature point pairs in corresponding partitions of each pair of original images, position transformation information of each partition of the next image relative to the corresponding partition of the previous image; and a second position transformation information acquisition subunit, for determining, according to the per-partition position transformation information, the position transformation information of the next image relative to the previous image in each pair of original images.
Optionally, the deformation processing unit includes: a deformation subunit, configured to deform the previous image in the m-th pair of original images, partition by partition, according to the deformation matrix corresponding to it; and a cropping subunit, configured to crop the edges of the deformed previous image of the m-th pair.
The present application also proposes an electronic device, which includes: a processor; and a memory for storing a program for removing video jitter, where the program, when read and executed by the processor, performs the following operations: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
The present application also proposes a computer-readable storage medium storing a program for removing video jitter, where the program, when read and executed by a processor, performs the same operations: determining position information of feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair and the original images are the images before compression; determining, according to the position information of the feature point pairs in each pair of original images, position transformation information of the next image relative to the previous image in each pair; obtaining, according to the position transformation information of the next image relative to the previous image in n pairs of original images, deformation information corresponding to the previous image in the m-th pair of original images, where n and m are positive integers and m is not greater than n; and deforming the previous image in the m-th pair of original images according to that deformation information, to obtain the de-jittered previous image of the m-th pair.
In the technical solution for removing video jitter proposed in this application, the position information of the feature point pairs in each pair of original images is first determined according to the position information of the feature point pairs in each pair of compressed images. Because the compressed images are smaller than the originals, the electronic device can process them quickly, so this technique allows the position information of the feature point pairs on each newly captured image to be obtained in real time. Once the position information of the feature point pairs on each pair of images is obtained in real time, the position transformation information of the next image relative to the previous image in each pair of original images is correspondingly determined in real time. After the position transformation information of the next image relative to the previous image has been obtained for n pairs of original images, the deformation information corresponding to the previous image in the m-th pair of original images is obtained, and that previous image is deformed according to this deformation information to obtain the de-jittered previous image. By analogy, the images following that previous image are deformed and de-jittered in turn, so that de-jittering is achieved in real time. At the same time, this technical solution does not rely on other auxiliary equipment, which is highly convenient and solves the technical problem in the prior art that jitter either cannot be removed in real time or requires an external gyroscope.
Brief description of the drawings
FIG. 1 is a flowchart of an embodiment of the method for removing video jitter provided by the present application;
FIG. 2 is a schematic diagram of the feature points involved in the method for removing video jitter provided by the present application;
FIG. 3 is a schematic diagram of the partition transformation matrices involved in the method for removing video jitter provided by the present application;
FIG. 4 is a schematic diagram of the correspondence between each image and its partition transformation matrices involved in the method for removing video jitter provided by the present application;
FIG. 5 is a schematic diagram of the matrices used to obtain the deformation matrix in the method for removing video jitter provided by the present application;
FIG. 6 is a schematic diagram of the image deformation processing involved in the method for removing video jitter provided by the present application;
FIG. 7 is a schematic diagram of an embodiment of the device for removing video jitter provided by the present application.
Detailed description
Many specific details are set forth in the following description in order to facilitate a full understanding of the present application. However, the present application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from its spirit, so the present application is not limited to the specific implementations disclosed below.
The present application provides a method for removing video jitter; FIG. 1 shows the flow of an embodiment of this method. The technical solution of the method is described below with reference to the flow of the embodiment shown in FIG. 1.
A video of any length is formed by many frames of rapidly and continuously changing images. When a video is shot, relative motion between the video capture device and the scene causes large displacements between the rapidly changing captured images, so the video exhibits "jitter". This application aims to remove video jitter in real time.
The method for removing video jitter shown in FIG. 1 includes:
Step S101: Determine the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images, where one feature point pair is composed of two corresponding feature points on the two successive images in a pair.
In step S101, the position information of the feature point pairs in each pair of original (pre-compression) images is determined from the position information of the feature point pairs in each pair of compressed images. Therefore, step S101 may be preceded by step S100: acquiring the position information of the feature point pairs in each pair of compressed images.
Step S100 may specifically include the following steps:
Step S100-1: Store the original images in a first queue.
After the video capture device has captured multiple frames over a period of time, the frames are arranged in the first queue in order; every two adjacent frames form a pair of images, in which the earlier frame is the previous image and the later frame is the next image. The queue may be implemented in an image buffer, which is a memory in a computer system dedicated to storing images being synthesized or displayed; FIG. 4 illustrates the image buffer.
Step S100-2: Compress each pair of original images by a first multiple.
In the process of quickly removing jitter from the frames of a video, the original images may be compressed by a first multiple, which may be a preset value. The compressed image is smaller than the image before compression by the first multiple, and the electronic device processes it faster, so that after each new image is captured and compressed, the subsequent steps, such as determining the feature points of the new image and the position information of each feature point, can be performed quickly. See FIG. 2: the two images on the right are the compressed previous frame and the compressed current frame, whose width and height are smaller, by the first multiple, than those of the corresponding frames before compression on the left.
Step S100-3: Determine the feature points on each image in each pair of compressed images.
Feature points are a series of pixels on an image that can characterize the contour, shape and other features of the captured scene. Usually these points have distinct characteristics, for example a larger gray value, that is, the image is darker at that point, so the point can be determined as a feature point. For example, if the point P on the compressed current frame in FIG. 2 can characterize a feature of the captured scene, the point P can be used as a feature point on the compressed current frame.
Step S100-4: Determine two corresponding feature points on the two images of each pair of compressed images as one feature point pair.
Each of the two images has its own series of feature points, and a feature point on the previous image may have a corresponding feature point on the next image. If the two corresponding feature points both represent the same point of the captured scene, they form a feature point pair. In FIG. 2, the feature point P on the compressed current frame and the feature point P on the compressed previous frame both represent the same feature point of the captured scene, so they form a feature point pair.
Step S100-5: Determine the position information of the feature point pairs in each pair of compressed images.
The position information of a feature point pair refers to the positions of its two corresponding feature points in their respective images, and may be the coordinates of each feature point on its image. For example, the coordinates of the feature point P on the compressed current frame in FIG. 2 are (u, v), and the corresponding feature point P on the compressed previous frame also has coordinates. The position information of these two feature points on their respective images is the position information of one feature point pair on the pair of images. There are multiple feature point pairs on two adjacent compressed images, so the position information of multiple feature point pairs can be obtained.
After step S100, that is, after the position information of the feature point pairs in each pair of compressed images has been obtained, step S101 may be performed: determining the position information of the feature point pairs in each pair of original images according to the position information of the feature point pairs in each pair of compressed images.
Since the compressed current frame and previous frame are obtained by compressing the original images by the first multiple, once the position information of the feature point pairs in each pair of compressed images has been obtained, the position information of the feature points on each pre-compression image is obtained simply by enlarging the compressed-image position information by the first multiple, which yields the position information of the feature point pairs in each pair of original images. For example, in FIG. 2, enlarging the coordinates (u, v) of the feature point P on the compressed current frame by the first multiple gives the coordinates (su, sv) of the feature point P on the current frame before compression; similarly, the coordinates of the feature point P on the previous frame before compression can be obtained. The two corresponding feature points P on the compressed current frame and the compressed previous frame constitute a feature point pair P of the compressed pair, and the two corresponding feature points P on the pre-compression current frame and previous frame constitute a feature point pair P of the original pair.
步骤S102,根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息。
在本申请中,步骤S102可以通过将每对原图像分成多个分区,当前帧图像上某一分区到上一帧图像上相对应的某一分区的位置变换信息确定之后,被分成的若干对应的各分区的位置变换信息组合起来就是每对图像中当前帧图像到前一帧图像的位置变换信息。
具体地,步骤S102可以包括以下步骤:
步骤S102-1,将每对原图像中的前后每个图像分区。如图3示例所示,将当前帧图像与上一帧图像都均分成六个分区。其中当前帧图像中左上角分区示例了4个特征点:C0、C1、C2、C3,上一帧图像上也示例了4个相对应的特征点P0、P1、P2、P3。
步骤S102-2,根据每对原图中相应分区中各特征点对的位置信息,确定每对原图中后一个图像的相应分区到前一个图像的相应分区的位置变换信息;
由于前后两个图像存在相对移动,所以后一个图像上特征点的位置信息与前一个图像上相应特征点的位置信息不同,这一位置信息的差异,就是后一个图像上特征点到前一个图像上相应特征点的位置变换信息。后一个图像上相应分区的各特征点的位置信息到前一个图像上相应分区的各相应特征点的位置信息之间的差异,就是后一个原图像相应分区到前一个原图像相应分区的位置变换信息。例如,图3中的上一帧图像示例有4个特征点P0、P1、P2、P3,这4个特征点在当前帧图像上分别对应着相应的4个特征点C0、C1、C2、C3。如前文所述,上一帧图像上的4个特征点与当前帧图像上的4个特征点都表征所拍摄景物的相同特征,所以前后两个图像上的4个点相互对应,构成4个特征点对。在上一帧图像左上角分区示例有4个特征点的情况下,这4个特征点P0、P1、P2、P3的位置信息构成上一帧图像上左上角分区对应的矩阵。同理,当前帧图像上4个点C0、C1、C2、C3的位置信息构成当前帧图像上左上角分区对应的矩阵。由当前帧上左上角分区对应的矩阵变换到上一帧图像上左上角分区对应的矩阵,存在一个变换矩阵,这一变换矩阵就是当前帧图像上左上角分区到上一帧图像上左上角分区的位置变换信息或位置变换矩阵。图3中,例示了当前帧图像中左上角分区到上一帧图像中左上角分区的位置变换信息或位置变换矩阵H00,也就是当前帧图像中左上角分区各特征点对应的位置信息或矩阵乘以位置变换矩阵H00,便可以计算出上一帧图像中左上角各特征点对应的位置信息。当然,当前帧图像上左上角分区中的特征点C1的位置信息乘以H00中相应位置的数值,也能得到上一帧图像上左上角分区中对应的特征点P1的位置信息。同理,当前帧图像上左下角分区到上一帧图像上左下角分区的位置变换信息可以表示为H10,其他4个相互对应的分区相互之间的位置变换信息则依次可以表示为H01、H02、H11、H12。
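下面给出一段按分区估计位置变换矩阵的Python示意代码(假设将图像按图3均分成2×3共六个分区,用OpenCV的cv2.findHomography以单应矩阵表示各分区由当前帧到上一帧的位置变换信息;分区方式、矩阵形式与参数均为假设,并非本申请限定的实现):

```python
import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 2, 3  # 与图3示例一致:每个图像均分成六个分区


def partition_transforms(prev_pts, curr_pts, img_w, img_h):
    """对每个分区,用落在该分区内的特征点对估计由当前帧到上一帧的
    位置变换矩阵H(示意)。"""
    cell_w, cell_h = img_w / GRID_COLS, img_h / GRID_ROWS
    H = [[np.eye(3) for _ in range(GRID_COLS)] for _ in range(GRID_ROWS)]
    cols = np.clip((curr_pts[:, 0, 0] // cell_w).astype(int), 0, GRID_COLS - 1)
    rows = np.clip((curr_pts[:, 0, 1] // cell_h).astype(int), 0, GRID_ROWS - 1)
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            mask = (rows == r) & (cols == c)
            if mask.sum() >= 4:  # 至少4个点对才能估计单应矩阵
                h, _ = cv2.findHomography(curr_pts[mask], prev_pts[mask],
                                          method=cv2.RANSAC)
                if h is not None:
                    H[r][c] = h
    return H
```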
步骤S102-3,根据每对原图中后一个原图像的相应分区相对于前一个原图像的相应分区的位置变换信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息。
基于步骤S102-2已经得到了当前帧图像各分区到上一帧图像各相应分区的位置变换信息H00、H01、H02、H10、H11、H12,则该各分区对应的位置变换信息组合起来便可以表征当前帧图像到上一帧图像的位置变换信息,图3中已经例示的从当前帧到上一帧的分区变换矩阵就是当前帧图像到上一帧图像的位置变换信息。
步骤S102-4,将各对原图像中后一个图像相对于前一个图像的位置变换信息存储成第二队列。
在基于步骤S102-3得到当前帧图像到上一帧图像的位置变换信息之后,可以将该对图像之间的位置变换信息存储到队列中,该队列可以命名为第二队列。该队列具体可以由分区变换阵缓冲器存储。分区变换阵缓冲器可以是计算机系统中专门用来存放变换矩阵的存储器,图4例示了分区变换阵缓冲器的示意图。
步骤S103,根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n、m为正整数,m不大于n。
下面举例说明如何实现步骤S103,我们以m=1时为例,也就是如何获取第1对原图像中前一个图像对应的变形信息。要获取第1对原图像中前一个图像对应的变形信息,需要利用变形矩阵迭代优化器中的原始路径缓冲器、优化路径暂存器、优化路径缓冲器存储的位置信息进行处理,下面分别介绍各缓冲器在本步骤中所起的作用。
见图5,图5中分区变换阵缓冲器中存储的是后一个图像相对于前一个图像的位置变换信息。该分区变换阵缓冲器能够存储一定数量的图像之间的位置变换信息。该一定数量的图像之间的位置变换信息按产生的先后顺序存储,后产生的图像之间的位置变换信息排列在该分区变换阵缓冲器的队尾。图5中例示的分区变换阵缓冲器能够存储n对图像之间对应的位置变换信息,也就是存储了n个位置变换信息或位置变换矩阵。其中图5中最右侧的一组分区变换阵代表图像采集器最先采集到的第1个图像与第2个图像之间的位置变换矩阵,依次类推,图5中最左边的一组分区变换矩阵代表最后一个图像到其前一个图像之间的位置变换矩阵。
图5中所示的分区变换阵缓冲器具有固定的长度,也就是最多能存储n个位置变换信息。相应地,图4中的图像缓冲器也具有固定的长度,并且该图像缓冲器的长度与分区变换阵缓冲器的长度相同,也就是该图像缓冲器最多能存n个图像。当该分区变换阵缓冲器已经存满了n个位置变换信息或位置变换矩阵,且图像缓冲器也已经存满了n个图像时,触发以下步骤:获取第一对原图像中前一个图像对应的变形信息。例如,图4中例示的图像缓冲器中第一队列可以存储n个图像,最先获取的第1个图像与最先获取的第2个图像为第1对图像,该第1对图像在图像缓冲器中的序号为n-1与n-2。获取第一对原图像中前一个图像对应的变形信息,也就是获取图像缓冲器中序号为n-1的那帧图像对应的变形信息。在所述获取第一对原图像中前一个图像对应的变形信息步骤之后,还可以进行以下步骤:再次向第一队列存储图像前,将第一队列队首的图像取出;以及再次向第二队列存储位置变换信息前,将第二队列队首的位置变换信息取出。经过将队首图像取出图像缓冲器以及将队首位置变换信息取出分区变换阵缓冲器,可以为新图像以及新的位置变换信息的存储空出位置。
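下面用Python的collections.deque给出第一队列、第二队列这种定长缓冲器及其触发逻辑的示意(队列长度N为假设取值;deque设定maxlen后,再次入队会自动弹出队首元素,效果上等同于"再次存储前将队首取出"):

```python
from collections import deque

N = 30  # 假设的缓冲器长度,即最多缓存n帧图像 / n组分区变换阵

image_buffer = deque(maxlen=N)       # 第一队列:图像缓冲器
transform_buffer = deque(maxlen=N)   # 第二队列:分区变换阵缓冲器


def push(frame, partition_H):
    """新图像与新的位置变换信息入队;两个队列都存满n个元素时,
    触发对队首图像(即第一对图像中前一个图像)的变形信息计算(示意)。"""
    image_buffer.append(frame)
    transform_buffer.append(partition_H)
    if len(image_buffer) == N and len(transform_buffer) == N:
        head_frame = image_buffer[0]    # 队首图像
        head_H = transform_buffer[0]    # 队首位置变换信息
        # ... 在此计算队首图像对应的变形信息并去抖 ...
        return head_frame, head_H
    return None
```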
图5中,H_{n-1,0}表示存储位置变换信息的第二队列中队首位置变换信息的第1个分区位置变换信息,H_{n-1,1}表示第2个分区位置变换信息,依次类推,H_{n-1,5}表示第6个分区的位置变换信息。同理,H_{0,0}表示存储位置变换信息的第二队列中队尾位置变换信息的第1个分区位置变换信息,H_{0,1}表示第2个分区位置变换信息,依次类推,H_{0,5}表示第6个分区的位置变换信息。
图5中,原始路径缓冲器存储的是对第二队列中最新存储的位置变换信息中某个分区位置变换信息与之前在先存储的各位置变换信息中相应分区位置变换信息的乘积,也就是C_{i,j}=H_{0,j}·H_{1,j}·…·H_{i-1,j}·H_{i,j},其中C_{i,j}表示对第二队列中序号为(i+1)的位置变换信息中的第j个分区位置变换信息与序号为i的位置变换信息中的第j个分区位置变换信息……一直到序号为0的位置变换信息中的第j个分区位置变换信息取乘积得到的结果。例如,当i=n-1时,C_{n-1,j}等于H_{n-1,j}与H_{n-2,j}……一直到H_{0,j}的乘积。
图5中,优化路径暂存器存储的是加权平均值Q_{i,j},加权平均值Q_{i,j}通过对以下三者取加权平均值得到:图像队列里序号为i的图像上与第j分区相邻分区的位置信息、与序号为i的图像相邻的帧图像上第j分区的位置信息、原始路径缓冲器中的C_{i,j}。每当得到该加权平均值后,该Q_{i,j}先暂存在优化路径暂存器中,然后覆盖到优化路径缓冲器中,并记作P_{i,j}。显然,当i=n-1时,P_{n-1,j}表示该值是由以下三项加权平均得到的:第一队列里队首图像上与第j分区相邻分区的位置信息、队首帧图像前一帧图像上第j分区的位置信息、原始路径缓冲器中的C_{n-1,j}。
将P_{n-1,j}^{-1}·C_{n-1,j}这一乘积结果记作B_j,B_j就表示队首图像各分区对应的变形信息。例如,当j=0时,B0就表示队首图像第1分区对应的变形信息;同理,B1就表示队首图像第2分区对应的变形信息……依次类推,假如队首图像被分成6个分区,则B5就表示队首图像第6分区对应的变形信息。B0、B1、B2、B3、B4、B5组合起来就构成图像缓冲器中队首图像对应的变形信息,见图6所示变形矩阵迭代优化器所得到的队首图像对应的变形信息。
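下面给出一段计算队首图像各分区变形矩阵B_j的Python示意代码(按本段记号的思路:先累乘得到原始路径C_{i,j},再对相邻帧、相邻分区与C_{i,j}取加权平均得到优化路径P_{i,j},最后取B_j=P_{n-1,j}^{-1}·C_{n-1,j};其中的权重取值、一次性的平滑方式以及队列索引约定均为简化假设,与图5中的迭代优化过程并不完全一致):

```python
import numpy as np


def warp_matrices(H_queue, num_parts=6, w_self=0.6, w_time=0.2, w_space=0.2):
    """由第二队列中的n组分区变换阵,为队首图像计算各分区的变形矩阵Bj(示意)。
    H_queue按入队先后排列,每个元素是num_parts个3x3位置变换矩阵的列表。"""
    n = len(H_queue)
    # 原始路径:C[i][j] = H[0][j] @ H[1][j] @ ... @ H[i][j]
    C = [[np.eye(3) for _ in range(num_parts)] for _ in range(n)]
    for i in range(n):
        for j in range(num_parts):
            prev = C[i - 1][j] if i > 0 else np.eye(3)
            C[i][j] = prev @ H_queue[i][j]
    # 优化路径:对相邻帧、相邻分区与原始路径取加权平均(简化的一次平滑)
    P = [[np.eye(3) for _ in range(num_parts)] for _ in range(n)]
    for i in range(n):
        for j in range(num_parts):
            neighbor_t = C[i - 1][j] if i > 0 else C[i][j]
            neighbor_s = C[i][j - 1] if j > 0 else C[i][j]
            P[i][j] = w_time * neighbor_t + w_space * neighbor_s + w_self * C[i][j]
    # 队首图像各分区的变形矩阵 Bj = inv(P[n-1][j]) @ C[n-1][j]
    return [np.linalg.inv(P[n - 1][j]) @ C[n - 1][j] for j in range(num_parts)]
```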
在经过步骤S103获取到第一对图像中前一个图像的变形信息之后,就可以利用该变形信息对该前一个图像进行变形处理,参见步骤S104。
步骤S104,根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。
继续以第一对图像中前一个图像为例,当基于步骤S103示例的步骤得到第一对图像中前一个图像对应的变形信息之后,在变形信息用变形矩阵进行表示的情况下,根据第一对原图像中前一个图像对应的变形矩阵,对该前一个图像分区变形,也就是利用步骤S103所得到的变形信息对图像进行位置信息调整。例如,图6中队首图像的第3分区的变形矩阵中存在特征点P的位置信息,该位置信息与队首图像上第3分区上特征点P的位置信息存在一定的差异。为了消除该位置差异,将队首图像上的点P调整至与第3分区变形信息中的特征点P的位置重合。同理,队首图像上其他分区的特征点的位置也应调整至变形信息中相对应特征点的位置,如此便可得到图6中所示调整后的图像。在将队首图像调整位置信息后,将变形信息之外的图像裁剪掉,便可以达到去除位置差异的效果。
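下面给出一段对队首图像变形并裁剪边缘的Python示意代码(为简化起见,这里用各分区变形矩阵的均值近似整幅图像的变形,并用cv2.warpPerspective实现;与本申请按分区分别变形的做法相比是一种粗略近似,裁剪比例亦为假设取值):

```python
import cv2
import numpy as np


def dewarp_head_frame(frame, B_list, crop_ratio=0.95):
    """按各分区对应的变形矩阵对队首图像变形,并裁剪变形后图像的边缘(示意)。"""
    h, w = frame.shape[:2]
    B_mean = np.mean(np.stack(B_list), axis=0)          # 简化:取各分区变形矩阵的均值
    warped = cv2.warpPerspective(frame, B_mean, (w, h))  # 对整幅图像变形
    # 裁剪变形带来的边缘空白区域,再缩放回原尺寸
    dw, dh = int(w * (1 - crop_ratio) / 2), int(h * (1 - crop_ratio) / 2)
    cropped = warped[dh:h - dh, dw:w - dw]
    return cv2.resize(cropped, (w, h))
```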
本申请提出的去除视频抖动的技术方案,首先根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,由于原图像压缩后会变小,电子设备进行各项处理会比较快,因此采取该种技术手段可以对每个采集到的图像实时地获取该图像上各个特征点对的位置信息。当实时地获取到每个图像上特征点对的位置信息之后,相应地实时地根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像到前一个图像的位置变换信息。当获得了n对原图像中后一个图像到前一个图像的位置变换信息之后,获取第一对原图像中前一个图像对应的变形信息,并根据该第一对原图像中前一个图像对应的变形信息,对该前一个图像进行变形,得到去除抖动后的该前一个图像。以此类推,对该前一个图像之后的其他图像依次变形、去抖,从而做到实时去抖。同时,该技术方案在实时去抖的同时还不依赖于其他辅助设备,具有较大的便利性,解决了现有技术中无法实时去抖或实时去抖时需要借助外在陀螺仪的技术问题。
本申请还提供一种去除视频抖动的装置,图7是本申请提供的一种去除视频抖动的装置的一个实施例的结构示意图。该装置实施例与图1所示的方法实施例相对应,所以描述得比较简单,相关的部分请参见上述提供的方法实施例的对应说明即可。该装置具体可以应用于各种电子设备中。下面描述的装置实施例仅仅是示意性的。
图7示出的去除视频抖动的装置,包括:位置信息获取第一单元701,用于根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;位置变换信息获取单元702,用于根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;变形信息获取单元703,用于根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;变形处理单元704,用于根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。
可选的,所述装置还包括:图像存储单元,用于将原图像存储在第一队列;位置变换信息存储单元,用于将每对原图像中后一个图像相对于前一个图像的位置变换信息存储在第二队列。
可选的,所述装置还包括:压缩单元,用于将每对原图像压缩第一倍数;特征点确定单元,用于确定压缩后的每对图像中每个图像上的特征点;特征点对确定单元,用于将压缩后的每对图像中前后两个图像上对应的两个特征点确定为一个特征点对;位置信息获取第二单元,用于确定压缩后的每对图像中特征点对的位置信息。
可选的,所述位置变换信息获取单元702包括:图像分区子单元,用于将每对原图像中的前后每个图像分区;位置变换信息获取第一子单元,用于根据每对原图像中相应分区中各特征点对的位置信息,确定每对原图像中后一个图像的相应分区到前一个图像的相应分区的位置变换信息;位置变换信息获取第二子单元,用于根据每对原图像中后一个图像的相应分区到前一个图像的相应分区的位置变换信息,确定每对原图像中后一个图像到前一个图像的位置变换信息。
可选的,所述变形处理单元704包括:变形子单元,用于根据第m对原图像中前一个图像对应的变形矩阵,对第m对原图像中前一个图像分区变形;裁剪子单元,用于裁剪变形后第m对原图像中前一个图像的边缘。
本申请还提供一种去除视频抖动的电子设备的一个实施例,该实施例中的电子设备包括:处理器;存储器,用于存储去除视频抖动的程序,所述程序在被所述处理器读取执行时,执行如下操作:根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。相关技术特征可以参考方法实施例,这里不再赘述。
本申请还提出一种计算机可读存储介质,由于计算机可读存储介质实施例基本相似于方法实施例,所以描述得比较简单,相关的部分请参见上述提供的方法实施例的对应说明即可。下面描述的计算机可读存储介质实施例仅仅是示意性的。
本申请还提供一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的装置中所包含的;也可以是单独存在,而未装配入该装置中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该装置执行时,使得该装置:根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。相关技术特征可以参考方法实施例,这里不再赘述。
本申请虽然以较佳实施例公开如上,但其并不是用来限定本申请,任何本领域技术人员在不脱离本申请的精神和范围内,都可以做出可能的变动和修改。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
1、计算机可读介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
2、本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。

Claims (16)

  1. 一种去除视频抖动的方法,其特征在于,所述方法包括:
    根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;
    根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;
    根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;
    根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中的前一个图像。
  2. 根据权利要求1所述的方法,其特征在于,还包括:将原图像存储在第一队列;
    将每对原图像中后一个图像相对于前一个图像的位置变换信息存储在第二队列。
  3. 根据权利要求2所述的方法,其特征在于,所述根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息步骤包括:
    在所述第一队列存储的图像达到第一数量时,以及在所述第二队列存储的位置变换信息达到第一数量时,根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息。
  4. 根据权利要求3所述的方法,其特征在于,在所述获取第一对原图像中前一个图像对应的变形信息步骤之后,还包括:
    再次向第一队列存储图像前,将第一队列队首的图像取出;以及
    再次向第二队列存储位置变换信息前,将第二队列队首的位置变换信息取出。
  5. 根据权利要求1所述的方法,其特征在于,还包括:
    将每对原图像压缩第一倍数;
    确定压缩后的每对图像中每个图像上的特征点;
    将压缩后的每对图像中前后两个图像上对应的两个特征点确定为一个特征点对;
    确定压缩后的每对图像中特征点对的位置信息。
  6. 根据权利要求5所述的方法,其特征在于,所述根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息步骤包括:
    将压缩后的每对图像中特征点对的位置信息扩大第一倍数,得到每对原图像中特征点对的位置信息。
  7. 根据权利要求3所述的方法,其特征在于,所述根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息步骤包括:
    将每对原图像中前后两个图像分区;
    根据每对原图像中相应分区中特征点对的位置信息,确定每对原图像中后一个图像相应分区相对于前一个图像相应分区的位置变换信息;
    根据每对原图中后一个图像相应分区相对于前一个图像相应分区的位置变换信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息。
  8. 根据权利要求7所述的方法,其特征在于,所述位置信息为坐标,所述位置变换信息为变换矩阵,所述变形信息为变形矩阵。
  9. 根据权利要求8所述的方法,其特征在于,所述根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形步骤包括:
    根据第m对原图像中前一个图像对应的变形矩阵,对第m对原图像中前一个图像分区变形;
    裁剪变形后第m对原图像中前一个图像的边缘。
  10. 一种去除视频抖动的装置,其特征在于,所述装置包括:
    位置信息获取第一单元,用于根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;
    位置变换信息获取单元,用于根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;
    变形信息获取单元,用于根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;
    变形处理单元,用于根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。
  11. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    图像存储单元,用于将原图像存储在第一队列;
    位置变换信息存储单元,用于将每对原图像中后一个图像相对于前一个图像的位置变换信息存储在第二队列。
  12. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    压缩单元,用于将每对原图像压缩第一倍数;
    特征点确定单元,用于确定压缩后的每对图像中每个图像上的特征点;
    特征点对确定单元,用于将压缩后的每对图像中前后两个图像上对应的两个特征点确定为一个特征点对;
    位置信息获取第二单元,用于确定压缩后的每对图像中特征点对的位置信息。
  13. 根据权利要求10所述的装置,其特征在于,所述位置变换信息获取单元包括:
    图像分区子单元,用于将每对原图像中的前后每个图像分区;
    位置变换信息获取第一子单元,用于根据每对原图像中相应分区中各特征点对的位置信息,确定每对原图像中后一个图像的相应分区到前一个图像的相应分区的位置变换信息;
    位置变换信息获取第二子单元,用于根据每对原图中后一个图像的相应分区到前一个图像的相应分区的位置变换信息,确定每对原图像中后一个图像到前一个图像的位置变换信息。
  14. 根据权利要求10所述的装置,其特征在于,所述变形处理单元包括:
    变形子单元,用于根据第m对原图像中前一个图像对应的变形矩阵,对第m对原图像中前一个图像分区变形;
    裁剪子单元,用于裁剪变形后第m对原图像中前一个图像的边缘。
  15. 一种电子设备,其特征在于,所述电子设备包括:
    处理器;
    存储器,用于存储去除视频抖动的程序,所述程序在被所述处理器读取执行时,执行如下操作:
    根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;
    根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;
    根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;
    根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。
  16. 一种计算机可读存储介质,其上存储有去除视频抖动的程序,其特征在于,该程序被处理器读取执行时,执行如下操作:
    根据压缩后的每对图像中特征点对的位置信息,确定每对原图像中特征点对的位置信息,其中,一个特征点对由每对图像中前后两个图像上对应的两个特征点构成,所述原图像为压缩前的图像;
    根据每对原图像中特征点对的位置信息,确定每对原图像中后一个图像相对于前一个图像的位置变换信息;
    根据n对原图像中后一个图像相对于前一个图像的位置变换信息,获取第m对原图像中前一个图像对应的变形信息,其中,n和m为正整数,m不大于n;
    根据第m对原图像中前一个图像对应的变形信息,对第m对原图像中前一个图像变形,得到去除抖动后的第m对原图像中前一个图像。
PCT/CN2019/087693 2018-05-31 2019-05-21 一种去除视频抖动的方法及装置 WO2019228219A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020563582A JP7383642B2 (ja) 2018-05-31 2019-05-21 映像ジッターを除去するための方法及び装置
EP19810675.9A EP3806445A4 (en) 2018-05-31 2019-05-21 METHOD AND DEVICE FOR ELIMINATION OF VIDEO JITTER
US17/106,682 US11317008B2 (en) 2018-05-31 2020-11-30 Method and apparatus for removing video jitter

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810554266.9 2018-05-31
CN201810554266.9A CN110557522A (zh) 2018-05-31 2018-05-31 一种去除视频抖动的方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/106,682 Continuation US11317008B2 (en) 2018-05-31 2020-11-30 Method and apparatus for removing video jitter

Publications (1)

Publication Number Publication Date
WO2019228219A1 true WO2019228219A1 (zh) 2019-12-05

Family

ID=68697432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087693 WO2019228219A1 (zh) 2018-05-31 2019-05-21 一种去除视频抖动的方法及装置

Country Status (6)

Country Link
US (1) US11317008B2 (zh)
EP (1) EP3806445A4 (zh)
JP (1) JP7383642B2 (zh)
CN (1) CN110557522A (zh)
TW (1) TW202005353A (zh)
WO (1) WO2019228219A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11317008B2 (en) 2018-05-31 2022-04-26 Alibaba Group Holding Limited Method and apparatus for removing video jitter

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11134180B2 (en) * 2019-07-25 2021-09-28 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Detection method for static image of a video and terminal, and computer-readable storage medium
CN113132560B (zh) * 2019-12-31 2023-03-28 武汉Tcl集团工业研究院有限公司 一种视频处理方法及计算机设备、计算机可读存储介质

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3505199B2 (ja) * 1992-06-30 2004-03-08 株式会社リコー ビデオカメラジッタ補正装置、データ圧縮装置、データ伸長装置、データ圧縮方法及びデータ伸長方法
US6762758B2 (en) * 2001-08-23 2004-07-13 Ati Technologies Inc. System, method, and apparatus for compression of video data using offset values
AUPR899401A0 (en) * 2001-11-21 2001-12-13 Cea Technologies Pty Limited Method and apparatus for non-motion detection
JP2004343483A (ja) * 2003-05-16 2004-12-02 Acutelogic Corp 手振れ補正装置および方法、手振れ検出装置
US7369741B2 (en) * 2003-11-17 2008-05-06 Fiber Optics Network Solutions Corp. Storage adapter with dust cap posts
WO2008111169A1 (ja) * 2007-03-13 2008-09-18 Fujitsu Microelectronics Limited 画像処理装置、画像処理方法、画像処理プログラムおよび記録媒体
US8150191B2 (en) * 2008-10-14 2012-04-03 Interra Systems Inc. Method and system for calculating blur artifacts in videos using user perception threshold
EP2360669A1 (en) * 2010-01-22 2011-08-24 Advanced Digital Broadcast S.A. A digital video signal, a method for encoding of a digital video signal and a digital video signal encoder
US9083845B2 (en) * 2010-12-23 2015-07-14 Samsung Electronics Co., Ltd. Global arming method for image processing pipeline
US9277129B2 (en) * 2013-06-07 2016-03-01 Apple Inc. Robust image feature based video stabilization and smoothing
JP6192507B2 (ja) * 2013-11-20 2017-09-06 キヤノン株式会社 画像処理装置、その制御方法、および制御プログラム、並びに撮像装置
US9311690B2 (en) * 2014-03-11 2016-04-12 Adobe Systems Incorporated Video denoising using optical flow
JP6336341B2 (ja) * 2014-06-24 2018-06-06 キヤノン株式会社 撮像装置及びその制御方法、プログラム、記憶媒体
US10447926B1 (en) * 2015-06-19 2019-10-15 Amazon Technologies, Inc. Motion estimation based video compression and encoding
US10303925B2 (en) * 2016-06-24 2019-05-28 Google Llc Optimization processes for compressing media content
US9838604B2 (en) * 2015-10-15 2017-12-05 Ag International Gmbh Method and system for stabilizing video frames
US10425582B2 (en) * 2016-08-25 2019-09-24 Facebook, Inc. Video stabilization system for 360-degree video data
US10534837B2 (en) * 2017-11-13 2020-01-14 Samsung Electronics Co., Ltd Apparatus and method of low complexity optimization solver for path smoothing with constraint variation
CN109905590B (zh) * 2017-12-08 2021-04-27 腾讯科技(深圳)有限公司 一种视频图像处理方法及装置
CN110599549B (zh) * 2018-04-27 2023-01-10 腾讯科技(深圳)有限公司 界面显示方法、装置及存储介质
CN110493488B (zh) * 2018-05-15 2021-11-26 株式会社理光 视频稳像方法、视频稳像装置和计算机可读存储介质
CN110557522A (zh) 2018-05-31 2019-12-10 阿里巴巴集团控股有限公司 一种去除视频抖动的方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102473294A (zh) * 2010-04-30 2012-05-23 松下电器产业株式会社 摄像装置、图像处理装置和图像处理方法
CN103927731A (zh) * 2014-05-05 2014-07-16 武汉大学 无需pos辅助的低空遥感影像快速自动拼接方法
CN106878612A (zh) * 2017-01-05 2017-06-20 中国电子科技集团公司第五十四研究所 一种基于在线全变差优化的视频稳定方法
CN107705288A (zh) * 2017-09-04 2018-02-16 武汉工程大学 伪目标运动强干扰下的危险气体泄漏红外视频检测方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HENG GUO ET AL: "Joint Video Stitching and Stabilization From Moving Cameras", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 25, no. 11, 30 November 2016 (2016-11-30), pages 5491 - 5503, XP011624280 *
See also references of EP3806445A4

Also Published As

Publication number Publication date
TW202005353A (zh) 2020-01-16
JP2021524960A (ja) 2021-09-16
EP3806445A1 (en) 2021-04-14
JP7383642B2 (ja) 2023-11-20
US11317008B2 (en) 2022-04-26
CN110557522A (zh) 2019-12-10
US20210084198A1 (en) 2021-03-18
EP3806445A4 (en) 2022-03-23

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19810675

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020563582

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019810675

Country of ref document: EP

Effective date: 20210111