US20210084198A1 - Method and apparatus for removing video jitter - Google Patents

Method and apparatus for removing video jitter

Info

Publication number
US20210084198A1
Authority
US
United States
Prior art keywords
pair
images
raw images
feature point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/106,682
Other versions
US11317008B2 (en)
Inventor
Ruizhi CHEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Assigned to ALIBABA GROUP HOLDING LIMITED. Assignment of assignors interest (see document for details). Assignors: CHEN, RUIZHI
Publication of US20210084198A1
Application granted
Publication of US11317008B2
Status: Active
Anticipated expiration

Classifications

    • G06T5/70
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/213 Circuitry for suppressing or minimising impulsive noise
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20201 Motion blur correction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Definitions

  • In step S103, the deformation information corresponding to the preceding image in a pair of raw images is acquired from buffered position change information, using the buffers described below. The partition transformation matrix buffer shown in FIG. 5 has a fixed length; that is, it can store at most n pieces of position change information.
  • The image buffer in FIG. 4 also has a fixed length, equal to the length of the partition transformation matrix buffer; that is, the image buffer can store at most n images.
  • Since the image buffer in FIG. 4 can store n images, the first image and the second image acquired in the first place form the first pair of images, and the sequence numbers of this first pair in the image buffer are n−1 and n−2: the first image is image n−1 and the second image is image n−2.
  • Acquiring the deformation information corresponding to the preceding image in the first pair of raw images thus means acquiring the deformation information corresponding to the frame of image with sequence number n−1 in the image buffer.
  • The following steps can further be performed: before new images are stored in the first queue, the image at the head of the first queue is taken out; and before position change information of the new images is stored in the second queue, the position change information at the head of the second queue is taken out. After the image at the head of the queue is taken out of the image buffer and the position change information at the head of the queue is taken out of the partition transformation matrix buffer, positions are freed for storing the new images and the new position change information.
  • H_{n−1,0} represents the position change information of the first partition in the entry at the head of the second queue, H_{n−1,1} that of the second partition, and so on through H_{n−1,5} for the sixth partition. Likewise, H_{0,0} represents the position change information of the first partition in the entry at the tail of the second queue, H_{0,1} that of the second partition, and H_{0,5} that of the sixth partition.
  • The original path buffer stores cumulative products of these matrices: C_{n−1,j} is equal to the product H_{n−1,j} · H_{n−2,j} · … · H_{0,j}.
  • The optimized path temporary register stores a weighted average Q_{i,j}. The weighted average Q_{i,j} is obtained by taking the weighted average of the following three quantities: the position information of the partition adjacent to the j-th partition on the image with sequence number i in the image queue, the position information of the j-th partition on the frame of image adjacent to the image with sequence number i, and C_{i,j} in the original path buffer.
  • Q_{i,j} is held temporarily in the optimized path temporary register, then written into the optimized path buffer, where it is recorded as P_{i,j}.
  • Accordingly, P_{n−1,j} is the weighted average of the following three: the position information of the partition adjacent to the j-th partition on the image at the head of the first queue, the position information of the j-th partition on the preceding frame of the image at the head of the queue, and C_{n−1,j} in the original path buffer.
  • B_j represents the deformation information corresponding to the j-th partition of the image at the head of the queue: B_0 corresponds to the first partition, B_1 to the second partition, and so on, through B_5 for the sixth partition. B_0 through B_5 are shown in FIG. 6.
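  • Putting the buffer bookkeeping above into code: the sketch below accumulates the original path C from the H matrices, iteratively smooths it into the optimized path P using the weighted average described for Q, and derives a deformation matrix B_j per partition. This is a hedged illustration in Python/NumPy: the weights, the neighbour choice, the iteration count, and the final formula B_j = inv(C_{n−1,j}) · P_{n−1,j} are assumptions, since the text names the inputs to the weighted average but not the weights or the exact derivation of B.

```python
import numpy as np

def original_path(H_seq):
    # H_seq[i][j]: 3x3 matrix H_{i,j} for image pair i, partition j (0..5).
    # Original path: C_{i,j} = H_{i,j} @ H_{i-1,j} @ ... @ H_{0,j}.
    n, parts = len(H_seq), len(H_seq[0])
    C = np.empty((n, parts, 3, 3))
    C[0] = np.asarray(H_seq[0])
    for i in range(1, n):
        for j in range(parts):
            C[i, j] = H_seq[i][j] @ C[i - 1, j]
    return C

def optimize_path(C, w=(0.25, 0.25, 0.5), iters=5):
    # Q_{i,j}: weighted average of a neighbouring partition on the same
    # image, the same partition on the adjacent image, and C_{i,j};
    # the weights w and the neighbour choice are assumed, not from the text.
    P = C.copy()
    n, parts = P.shape[0], P.shape[1]
    for _ in range(iters):
        Q = np.empty_like(P)
        for i in range(n):
            for j in range(parts):
                nb_part = P[i, (j + 1) % parts]
                nb_frame = P[max(i - 1, 0), j]
                Q[i, j] = w[0] * nb_part + w[1] * nb_frame + w[2] * C[i, j]
        P = Q  # overlay the optimized path buffer with the new estimate
    return P

def deformation(C, P):
    # Assumed formula: B_j = inv(C_{n-1,j}) @ P_{n-1,j}, a warp taking the
    # head image from its original camera path onto the optimized path.
    return np.array([np.linalg.inv(C[-1, j]) @ P[-1, j]
                     for j in range(C.shape[1])])
```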
  • After the deformation information of the preceding image in the first pair of images is acquired through step S103, the deformation information can be used to perform deformation processing on the preceding image, as described in step S104.
  • In step S104, the preceding image in the m-th pair of raw images is deformed according to the deformation information corresponding to the preceding image in the m-th pair of raw images, to obtain the preceding image in the m-th pair of raw images with jitter removed.
  • The preceding image in the first pair of images continues to serve as the example.
  • The deformation information corresponding to the preceding image in the first pair of images was obtained in step S103. When the deformation information is represented by a deformation matrix, each partition of the preceding image is deformed according to the deformation matrix corresponding to the preceding image in the first pair of raw images; that is, the position information of the image is adjusted using the deformation information obtained in step S103.
  • The deformation information of the third partition illustrated in FIG. 6 contains the position information of the feature point P, and there are some differences between the position of P in the deformation matrix and the position of P on the third partition of the image at the head of the queue.
  • The point P on the image at the head of the queue is adjusted to coincide with the position of the feature point P in the deformation information of the third partition, so that the position difference is eliminated.
  • The positions of the feature points of the other partitions on the image at the head of the queue are likewise adjusted to the positions of the corresponding feature points in the deformation information, so that the adjusted image shown in FIG. 6 is obtained.
  • The image outside the deformation information is then cropped, achieving the effect of eliminating the position difference.
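  • As a concrete illustration of this deformation-and-cropping step, the sketch below warps each partition of the head image by its deformation matrix and then crops a border margin. Python with OpenCV is assumed, as are the warp-per-partition stitching and the 5% margin; the disclosure itself only requires adjusting positions per the deformation information and cropping the excess.

```python
import cv2
import numpy as np

def deform_and_crop(image, B, rows=2, cols=3, margin=0.05):
    # B: six 3x3 deformation matrices B0..B5, one per partition.
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    cell_w, cell_h = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            # Warp the whole image with this partition's matrix, then
            # keep only the pixels belonging to the partition.
            warped = cv2.warpPerspective(image, B[r * cols + c], (w, h))
            y0, x0 = r * cell_h, c * cell_w
            out[y0:y0 + cell_h, x0:x0 + cell_w] = \
                warped[y0:y0 + cell_h, x0:x0 + cell_w]
    # Crop a border margin so content pushed outside the frame
    # ("the image outside the deformation information") is discarded.
    my, mx = int(h * margin), int(w * margin)
    return out[my:h - my, mx:w - mx]
```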
  • Embodiments of the current disclosure provide technical solutions for removing video jitter.
  • The position information of the feature point pairs in each pair of raw images is determined according to the position information of the feature point pairs in each pair of compressed images.
  • Because the raw images become smaller after compression, the electronic device can perform the various processing steps relatively quickly; accordingly, the position information of each feature point pair can be acquired in real time for each image captured.
  • The position change information of the subsequent image relative to the preceding image in each pair of raw images is then determined in real time according to the position information of the feature point pairs in each pair of raw images.
  • FIG. 7 is a schematic diagram of an exemplary apparatus for removing video jitter, consistent with some embodiments of the present disclosure.
  • The method shown in FIG. 1 can be performed by the exemplary apparatus shown in FIG. 7.
  • The apparatus can include: a position information acquisition first unit 701, a position change information acquisition unit 702, a deformation information acquisition unit 703, and a deformation processing unit 704.
  • Position information acquisition first unit 701 is configured to determine position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images.
  • Position change information acquisition unit 702 is configured to determine position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images.
  • Deformation information acquisition unit 703 is configured to acquire deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n.
  • Deformation processing unit 704 is configured to deform the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images to obtain the preceding image in the m-th pair of raw images with jitter removed.
  • The apparatus further includes: an image storage unit, configured to store the raw images into a first queue; and a position change information storage unit, configured to store the position change information of the subsequent image relative to the preceding image in each pair of raw images into a second queue.
  • The apparatus further includes: a compression unit, configured to compress each pair of raw images by a factor of a number; a feature point determination unit, configured to determine feature points on each image in each pair of compressed images; a feature point pair determination unit, configured to determine two corresponding feature points on the two consecutive images in each pair of compressed images as a feature point pair; and a position information acquisition second unit, configured to determine position information of the feature point pairs in each pair of compressed images.
  • Position change information acquisition unit 702 includes: an image partitioning subunit, configured to partition consecutive images in each pair of raw images; a position change information acquisition first subunit, configured to determine position change information of a corresponding partition of the subsequent image relative to a corresponding partition of the preceding image in each pair of raw images according to the position information of the feature point pairs in the corresponding partition of each pair of raw images; and a position change information acquisition second subunit, configured to determine the position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position change information of the corresponding partition of the subsequent image relative to the corresponding partition of the preceding image in each pair of raw images.
  • Deformation processing unit 704 includes: a deformation subunit, configured to deform the partition of the preceding image in the m-th pair of raw images according to a deformation matrix corresponding to the preceding image in the m-th pair of raw images; and a cutting subunit, configured to cut an edge of the preceding image in the m-th pair of raw images after deformation.
  • Embodiments of the present disclosure provide an electronic device for removing video jitter.
  • The electronic device in the embodiment includes: a processor; and a memory for storing a program for removing video jitter, wherein, when the program is read and executed by the processor, the following operations are performed: determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determining position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquiring deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n; and deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
  • Embodiments of the present disclosure provide a computer-readable medium.
  • The computer-readable medium can be included in the apparatus described in the above-mentioned embodiment, or it can exist alone without being assembled into the apparatus.
  • The above computer-readable medium carries one or more programs. When executed by the apparatus, the above one or more programs cause the apparatus to: determine position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determine position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquire deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n; and deform the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
  • Embodiments of the present disclosure also provide a non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a computer to cause the computer to perform the above-mentioned methods.
  • Non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape or any other magnetic data storage medium, a CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, an NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same.
  • The device may include one or more processors (CPUs), an input/output interface, a network interface, or a memory.
  • The above-described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor, can perform the disclosed methods.
  • The computing units and other functional units described in this disclosure can be implemented by hardware, or software, or a combination of hardware and software. It is understood that multiple ones of the above-described modules/units may be combined as one module/unit, and each of the above-described modules/units may be further divided into a plurality of sub-modules/sub-units.
  • The term "or" encompasses all possible combinations, except where infeasible. For example, if it is stated that a component may include A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component may include A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.

Abstract

Embodiments of the present disclosure provide methods and apparatuses for removing video jitter. The method can include: determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determining position change information of a subsequent image relative to a preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquiring deformation information corresponding to the preceding image in an m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, wherein n and m are positive integers, and m is not greater than n; and deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present disclosure claims the benefits of priority to International Application No. PCT/CN2019/087693, filed on May 21, 2019, which claims priority to Chinese Patent Application No. 201810554266.9, filed on May 31, 2018, both of which are incorporated herein by reference in their entireties.
  • BACKGROUND
  • A video with a length of a period of time is formed by many frames of images that change rapidly and continuously. When a video is taken, relative movement between the video capture device and the scene can cause a relatively large displacement between the rapidly changing captured images, making the video jittery. Conventional video jitter removal solutions cannot meet the requirements for real-time processing of live video and short videos.
  • SUMMARY
  • Embodiments of the present disclosure provide methods and apparatuses for removing video jitter. The method can include: determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determining position change information of a subsequent image relative to a preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquiring deformation information corresponding to the preceding image in an m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, wherein n and m are positive integers, and m is not greater than n; and deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings described herein are used to provide further understanding of the present disclosure and constitute a part of the present disclosure. Exemplary embodiments of the present disclosure and descriptions of the exemplary embodiments are used to explain the present disclosure and are not intended to constitute inappropriate limitations to the present disclosure. In the accompanying drawings:
  • FIG. 1 is a flowchart of an exemplary method for removing video jitter, consistent with some embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram of exemplary feature points, consistent with some embodiments of the present disclosure.
  • FIG. 3 is a schematic diagram of an exemplary partition transformation matrix, consistent with some embodiments of the present disclosure.
  • FIG. 4 is a schematic diagram of an exemplary relationship between each image and corresponding partition transformation matrices, consistent with some embodiments of the present disclosure.
  • FIG. 5 is a schematic diagram of exemplary matrices used for acquiring a deformation matrix, consistent with some embodiments of the present disclosure.
  • FIG. 6 is a schematic diagram of exemplary image deformation processing, consistent with some embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram of an exemplary apparatus for removing video jitter, consistent with some embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • To facilitate understanding of the solutions in the present disclosure, the technical solutions in some of the embodiments of the present disclosure will be described with reference to the accompanying drawings. It is appreciated that the described embodiments are merely a part of rather than all the embodiments of the present disclosure. Consistent with the present disclosure, other embodiments can be obtained without departing from the principles disclosed herein. Such embodiments shall also fall within the protection scope of the present disclosure.
  • As stated above, conventional video jitter removing solutions cannot meet the requirements for real-time processing of live video and short videos. Embodiments of the present application overcome these issues by removing video jitter in a manner to allow for real-time processing of live video and short videos.
  • FIG. 1 is a flowchart of an exemplary method for removing video jitter, consistent with some embodiments of the present disclosure. The technical solution provided by the embodiments of the current disclosure aims to address a jitter problem of a video in real time. The method can include the following steps.
  • In step S101, position information of feature point pairs in each pair of raw images is determined according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images.
  • In step S101, the position information of the feature point pairs in each pair of uncompressed raw images is determined through the position information of the feature point pairs in each pair of compressed images. The method can include step S100 (not shown) prior to S101. In step S100, the position information of the feature point pairs in each pair of compressed images is acquired.
  • Step S100 can specifically include the following steps.
  • In step S100-1, the raw images are stored into a first queue. When multiple frames of images are taken within a period of time using a video capture device, the frames are arranged in the first queue in sequence, and every two adjacent frames form a pair of images, where the first one is a preceding image and the last one is a subsequent image. The queue can be implemented in an image buffer. The image buffer refers to a memory, in a computer system, dedicated to storing images being synthesized or displayed. An exemplary image buffer is shown in FIG. 4.
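  • The fixed-length, head-evicting queue described here maps naturally onto a bounded double-ended queue. A minimal sketch in Python, assuming `collections.deque`; the capacity n = 30 is a placeholder, since the disclosure does not fix n.

```python
from collections import deque
from itertools import islice

n = 30  # assumed buffer capacity; the disclosure leaves n unspecified
image_buffer = deque(maxlen=n)  # first queue: raw frames in arrival order

def push_frame(frame):
    # With maxlen set, appending to a full deque drops the oldest
    # element, so the head is freed before the new frame is stored.
    image_buffer.append(frame)

def image_pairs():
    # Every two adjacent frames form a (preceding, subsequent) pair.
    return list(zip(image_buffer, islice(image_buffer, 1, None)))
```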
  • In step S100-2, each pair of raw images is compressed by a factor of a number.
  • In the process of quickly removing the jitter of the several frames of images in a video of a period of time, the raw images can be compressed by a factor of a preset number. For example, each pair of raw images can be compressed by a factor of 3. Compared with the uncompressed image, the compressed image is smaller by that factor (e.g., smaller by a factor of 3), and an electronic device can process it faster, so that every time a new image is captured and compressed, the subsequent steps, such as determining feature points on the new image and the position information of each feature point, can be performed quickly. The two images on the right in FIG. 2 are the compressed preceding frame of image and the compressed current frame of image. The width and height of the compressed current and preceding frames of images on the right are less than the width and height of the uncompressed frames on the left by a factor of the number (e.g., less by a factor of 3).
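  • A minimal sketch of the compression step, assuming Python with OpenCV (`cv2`) and the factor of 3 from the example above; the disclosure does not mandate a particular resizing routine.

```python
import cv2

FACTOR = 3  # preset compression factor from the example above

def compress(raw_frame):
    # Shrink width and height by FACTOR so feature detection runs on
    # roughly 1/9 of the original pixels.
    return cv2.resize(raw_frame, None, fx=1.0 / FACTOR, fy=1.0 / FACTOR,
                      interpolation=cv2.INTER_AREA)
```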
  • In step S100-3, feature points on each of consecutive images in each pair of compressed images are determined.
  • The feature points refer to a series of pixels on the images that can characterize the contours, appearance and other features of the scene taken. Usually, these points have relatively obvious features; for example, if the gray value at a point is relatively large, that is, the image at the point is relatively dark, the point can be determined as a feature point. For example, if a point P on the compressed current frame in FIG. 2 can characterize the features of the scene taken, then the point P can be used as a feature point on the compressed current frame of image.
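  • One hedged realization of this step is corner detection on the compressed frame. The sketch below uses Shi-Tomasi corners via OpenCV as an illustrative stand-in, since the disclosure does not name a specific detector, and the parameter values are assumptions.

```python
import cv2

def detect_feature_points(compressed_frame):
    gray = cv2.cvtColor(compressed_frame, cv2.COLOR_BGR2GRAY)
    # Corners are pixels with locally distinctive intensity structure,
    # matching the "relatively obvious features" described above.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    return pts  # shape (N, 1, 2): one (u, v) coordinate per feature point
```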
  • In step S100-4, two corresponding feature points on each of consecutive images in each pair of compressed images are determined as a feature point pair.
  • Each of the consecutive images has its own series of several feature points, where a certain feature point on the preceding image can have a corresponding feature point on the subsequent image. For example, if the two corresponding feature points both characterize a certain point of the taken scene on the image, the two corresponding feature points constitute a feature point pair. As shown in FIG. 2, the feature point P on the compressed current frame and the feature point P on the compressed preceding frame of image both characterize the same feature point of the scene taken, then these two corresponding feature points constitute a feature point pair.
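  • Pairing corresponding feature points across the two compressed frames can be sketched with pyramidal Lucas-Kanade tracking; this particular matcher, again, is an assumption rather than the method prescribed by the disclosure.

```python
import cv2

def match_feature_pairs(prev_gray, curr_gray, prev_pts):
    # Track each feature point from the compressed preceding frame into
    # the compressed current frame; points tracked successfully form the
    # feature point pairs.
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok], curr_pts[ok]  # corresponding (u, v) coordinates
```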
  • In step S100-5, the position information of the feature point pairs in each pair of compressed images is determined.
  • The position information of a feature point pair refers to the positions of its two corresponding feature points in their respective images, and the position information can be the coordinates of the feature points on those images. For example, the position coordinates of the feature point P on the compressed current frame in FIG. 2 are (u, v). The corresponding feature point P on the compressed preceding frame of image likewise has coordinate values. The position information of the two feature points on their respective images is the position information of one feature point pair on the pair of images. There are multiple feature point pairs on two adjacent compressed images, so the position information of multiple feature point pairs on adjacent images can be acquired.
  • After step S100 is performed, that is, after the step of acquiring the position information of the feature point pairs in each pair of compressed images, step S101 of FIG. 1 can be performed: the position information of the feature point pairs in each pair of raw images is determined according to the position information of the feature point pairs in each pair of compressed images.
  • Since the compressed current and preceding frames of images are smaller than the uncompressed raw images by a factor of the number, once the position information of the feature point pairs in each pair of compressed images is acquired, that is, once the position information of the feature points on each image in each pair of compressed images is obtained, the position information of the feature points on each image in each pair of uncompressed images can be obtained simply by expanding the compressed-image position information by the same factor (e.g., by a factor of 3). This is the position information of the feature point pairs formed by the feature points in each pair of raw images. For example, in FIG. 2, the coordinates (u, v) of the feature point P on the compressed current frame are expanded by the factor s (e.g., 3), giving the coordinates (su, sv) of the feature point P on the uncompressed current frame of image. In the same way, the coordinates of the feature point P on the compressed preceding frame of image are expanded by the same factor to obtain the coordinates of the feature point P on the uncompressed preceding frame of image. The two corresponding feature points P in the compressed current frame and the compressed preceding frame constitute a feature point pair P in the compressed pair of images, and the two corresponding feature points P in the uncompressed current and preceding frames constitute a feature point pair P in the uncompressed pair.
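  • Recovering raw-image coordinates is then a single multiplication by the compression factor, matching the (u, v) to (su, sv) example; a trivial sketch:

```python
import numpy as np

def to_raw_coordinates(pts_compressed, s=3):
    # (u, v) on a compressed image corresponds to (s*u, s*v) on the raw
    # image when width and height were shrunk by the factor s.
    return np.asarray(pts_compressed, dtype=np.float32) * s
```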
  • In step S102, position change information of the subsequent image relative to the preceding image in each pair of raw images is determined according to the position information of the feature point pairs in each pair of raw images.
  • In step S102, each pair of raw images can be divided into multiple partitions. The position change information from a certain partition on the current frame of image to a corresponding partition on the preceding frame of image is determined. The position change information of the divided several corresponding partitions combined is the position change information from the current frame of image to the preceding frame of image in each pair of images.
  • Specifically, step S102 can include the following steps:
  • In step S102-1, each of the consecutive images in each pair of raw images is partitioned. As shown in the example of FIG. 3, both the current frame of image and the preceding frame of image are divided into six partitions. Among them, four feature points C0, C1, C2, C3 are illustrated in the partition at the upper left corner of the current frame of image, and four corresponding feature points P0, P1, P2, P3 are illustrated on the preceding frame of image.
  • In step S102-2, the position change information from the corresponding partition of the subsequent image to the corresponding partition of the preceding image in each pair of raw images is determined according to the position information of each feature point pair in the corresponding partition of each pair of raw images.
• Due to the relative movement between the two consecutive images, the position information of the feature points on the subsequent image differs from the position information of the corresponding feature points on the preceding image, and this difference is the position change information from the feature points on the subsequent image to the corresponding feature points on the preceding image. Likewise, the difference between the position information of each feature point of a partition on the subsequent image and the position information of each corresponding feature point of the corresponding partition on the preceding image is the position change information from the partition of the subsequent raw image to the corresponding partition of the preceding raw image. For example, the preceding frame of image in FIG. 3 has 4 feature points P0, P1, P2, and P3, and these 4 feature points respectively correspond to the 4 feature points C0, C1, C2, and C3 on the current frame of image. As mentioned earlier, the 4 feature points on the preceding frame of image and the 4 feature points on the current frame of image all characterize the same features of the scene being captured, so the 4 points on each of the two consecutive images correspond to each other and constitute 4 feature point pairs. In the case where these 4 feature points lie in the example partition at the upper left corner of the preceding frame of image, the position information of the 4 feature points P0, P1, P2, and P3 constitutes a matrix corresponding to the partition at the upper left corner of the preceding frame of image. Similarly, the position information of the 4 points C0, C1, C2, and C3 on the current frame of image constitutes the corresponding matrix of the partition at the upper left corner of the current frame of image. A transformation matrix can be determined to represent the transformation from the matrix corresponding to the partition at the upper left corner of the current frame of image to the matrix corresponding to the partition at the upper left corner of the preceding frame of image. This transformation matrix is the position change information, or position change matrix, from the partition at the upper left corner of the current frame of image to the partition at the upper left corner of the preceding frame of image. FIG. 3 illustrates the position change matrix H00 from the partition at the upper left corner of the current frame of image to the partition at the upper left corner of the preceding frame of image. That is, the position information or matrix corresponding to the feature points in the upper left corner of the current frame of image can be multiplied by the position change matrix H00 to calculate the position information of the corresponding feature points in the upper left corner of the preceding frame of image. Accordingly, the feature point C1 in the partition at the upper left corner of the current frame of image can be multiplied by the value at the corresponding position in H00 to obtain the position information of the corresponding feature point P1 in the partition at the upper left corner of the preceding frame of image. Similarly, the position change information from the partition at the lower left corner of the current frame of image to the partition at the lower left corner of the preceding frame of image can be expressed as H10, and the position change information between the other four pairs of corresponding partitions can be expressed as H01, H02, H11, and H12 in turn.
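• To make the partition-wise estimation above concrete, the following is a minimal sketch, not the patented implementation, of how a position change matrix such as H00 could be estimated from the matched feature point pairs of one partition. OpenCV's findHomography is used here; the function name partition_transform and the variables curr_pts and prev_pts are illustrative assumptions.

```python
import cv2
import numpy as np

def partition_transform(curr_pts, prev_pts):
    """Estimate the position change matrix from a partition of the current
    frame to the corresponding partition of the preceding frame.

    curr_pts, prev_pts: (N, 2) arrays of matched feature point coordinates
    (e.g., C0..C3 and P0..P3). Returns a 3x3 matrix H such that, in
    homogeneous coordinates, prev ~ H @ curr."""
    curr = np.asarray(curr_pts, dtype=np.float32)
    prev = np.asarray(prev_pts, dtype=np.float32)
    # At least 4 pairs are required; with more, RANSAC discards mismatches.
    H, _ = cv2.findHomography(curr, prev, cv2.RANSAC, 3.0)
    return H

# Illustrative data: four feature point pairs in the upper left partition.
C = [(10, 12), (40, 15), (12, 48), (45, 50)]   # current frame (C0..C3)
P = [(11, 10), (41, 13), (13, 46), (46, 48)]   # preceding frame (P0..P3)
H00 = partition_transform(C, P)
```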
  • In step S102-3, according to the position change information of the corresponding partition of the subsequent raw image relative to the corresponding partition of the preceding raw image in each pair of raw images, the position change information of the subsequent image relative to the preceding image in each pair of raw images is determined.
• Based on step S102-2, the position change information H00, H01, H02, H10, H11, and H12 from each partition of the current frame of image to each corresponding partition of the preceding frame of image has been obtained. The position change information of the individual partitions can be combined to characterize the position change information from the current frame of image to the preceding frame of image; the partition transformation matrix from the current frame to the preceding frame illustrated in FIG. 3 is this position change information.
  • In step S102-4, the position change information of the subsequent image relative to the preceding image in each pair of raw images is stored into a second queue.
• After the position change information from the current frame of image to the preceding frame of image is obtained in step S102-3, the position change information between the pair of images can be stored into a queue, which can be referred to as the second queue. The queue can be implemented by a partition transformation matrix buffer, which can be a memory in a computer system dedicated to storing transformation matrices. An exemplary partition transformation matrix buffer is shown in FIG. 4.
  • In step S103, deformation information corresponding to the preceding image in the m-th pair of raw images is acquired according to the position change information of the subsequent image relative to the preceding image in the n pairs of raw images, where n and m are positive integers, and m is not greater than n.
• The following example illustrates how step S103 can be implemented, taking m=1 as an example: acquiring the deformation information corresponding to the preceding image in the first pair of raw images. Doing so requires the position information stored in an original path buffer, an optimized path temporary register, and an optimized path buffer within a deformation matrix iterative optimizer; the role of each buffer in this step is introduced below.
• As shown in FIG. 5, the position change information of the subsequent image relative to the preceding image is stored in the partition transformation matrix buffer, which can store the position change information between a certain number of images. This position change information is stored in order of generation: position change information generated later is arranged at the end of the buffer. The partition transformation matrix buffer illustrated in FIG. 5 can store the position change information between n pairs of images, that is, n pieces of position change information or position change matrices. The rightmost set of partition transformation matrices in FIG. 5 represents the position change matrices between the first image and the second image, the first two images collected by the image collector. The leftmost set represents the position change matrices between the last image and its preceding image.
• The partition transformation matrix buffer shown in FIG. 5 has a fixed length; that is, it can store at most n pieces of position change information. Correspondingly, the image buffer in FIG. 4 also has a fixed length, equal to that of the partition transformation matrix buffer; that is, the image buffer can store at most n images. When the partition transformation matrix buffer holds n pieces of position change information (position change matrices) and the image buffer holds n images, the following step is triggered: acquiring the deformation information corresponding to the preceding image in the first pair of raw images. For example, the first queue in the image buffer illustrated in FIG. 4 can store n images; the first image and the second image acquired first constitute the first pair of images, and their sequence numbers in the image buffer are n−1 and n−2, respectively: the first image is image no. n−1 and the second image is image no. n−2. Acquiring the deformation information corresponding to the preceding image in the first pair of raw images therefore means acquiring the deformation information corresponding to the frame of image with the sequence number n−1 in the image buffer.
• After the step of acquiring the deformation information corresponding to the preceding image in the first pair of raw images, the following steps can further be performed: before new images are stored into the first queue, the image at the head of the first queue is taken out; and before position change information of the new images is stored into the second queue, the position change information at the head of the second queue is taken out. Taking the head entries out of the image buffer and the partition transformation matrix buffer frees positions for storing the new images and the new position change information.
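• The behavior of the two fixed-length queues can be modeled with ordinary bounded containers. The sketch below is an assumption-level illustration (the container choice, the names image_buffer and transform_buffer, and the processing hook are not specified by the disclosure): new entries enter at the tail, and once both buffers hold n entries the head entries are taken out and the head image is processed.

```python
from collections import deque

N = 30  # buffer length n; an assumed value

image_buffer = deque()      # first queue: raw images
transform_buffer = deque()  # second queue: partition transformation matrices

def push_frame(image, partition_matrices):
    """Store a new raw image and the position change information of that
    image relative to its preceding image; once both fixed-length buffers
    are full, take out the head entries before storing the new ones."""
    if len(image_buffer) == N and len(transform_buffer) == N:
        head_image = image_buffer.popleft()
        head_transforms = transform_buffer.popleft()
        # ... acquire deformation information for head_image and deform it,
        # freeing one position in each buffer for the new entries ...
    image_buffer.append(image)
    transform_buffer.append(partition_matrices)
```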
• In FIG. 5, Hn-1,0 represents the first partition position change information in the entry at the head of the second queue, Hn-1,1 represents the second partition position change information, and so on, up to Hn-1,5, which represents the sixth. Similarly, H0,0 represents the first partition position change information in the entry at the tail of the second queue, H0,1 represents the second, and so on, up to H0,5, which represents the sixth.
• In FIG. 5, the original path buffer stores the product of each piece of partition position change information in the newly stored entry of the second queue and the corresponding partition position change information in the previously stored entries, that is, Ci,j=H0,j*H1,j* . . . *Hi-1,j*Hi,j, where Ci,j represents the product of the j-th partition position change information of the entries with sequence numbers i, i−1, . . . , down to 0 in the second queue. For example, when i=n−1, Cn-1,j is equal to the product of Hn-1,j, Hn-2,j, . . . , down to H0,j.
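• As a short sketch of this original path computation, assuming each entry of the second queue is a list of 3x3 partition position change matrices and that the entries are indexed by sequence number with 0 first:

```python
import numpy as np

def original_path(transform_queue):
    """transform_queue[i][j]: 3x3 position change matrix H_{i,j} of the j-th
    partition in the entry with sequence number i. Returns C with
    C[i][j] = H_{0,j} @ H_{1,j} @ ... @ H_{i,j}."""
    n = len(transform_queue)
    k = len(transform_queue[0])  # number of partitions, e.g. 6
    C = [[None] * k for _ in range(n)]
    for j in range(k):
        acc = np.eye(3)
        for i in range(n):
            acc = acc @ transform_queue[i][j]  # running product up to H_{i,j}
            C[i][j] = acc
    return C
```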
• In FIG. 5, the optimized path temporary register stores a weighted average Qi,j, obtained by taking the weighted average of the following three: the position information of the partitions adjacent to the j-th partition on the image with the sequence number i in the image queue, the position information of the j-th partition on the frames of image adjacent to the image with the sequence number i, and Ci,j in the original path buffer. Whenever a weighted average is obtained, Qi,j is temporarily held in the optimized path temporary register and then overlaid into the optimized path buffer, where it is recorded as Pi,j. Accordingly, when i=n−1, Pn-1,j is obtained by the weighted average of the following three: the position information of the partitions adjacent to the j-th partition on the image at the head of the first queue, the position information of the j-th partition on the frame of image preceding the frame at the head of the queue, and Cn-1,j in the original path buffer.
• The product Pn-1,j^−1*Cn-1,j, that is, the inverse of Pn-1,j multiplied by Cn-1,j, is recorded as Bj, and Bj represents the deformation information corresponding to each partition of the image at the head of the queue. For example, when j=0, B0 represents the deformation information corresponding to the first partition of the image at the head of the queue; similarly, B1 represents the deformation information corresponding to the second partition, and so on. If the image at the head of the queue is divided into 6 partitions, B5 represents the deformation information corresponding to the sixth partition. B0, B1, B2, B3, B4, and B5 are combined to form the deformation information corresponding to the image at the head of the queue in the image buffer. Exemplary deformation information corresponding to the image at the head of the queue obtained by the deformation matrix iterative optimizer is shown in FIG. 6.
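• The optimization and the head-of-queue deformation can be sketched as follows. The neighbor weights, the element-wise averaging of matrices, and the number of sweeps are assumptions made for illustration; the disclosure specifies only that Qi,j is a weighted average of the spatially adjacent partitions, the temporally adjacent frames, and Ci,j, and that Bj equals the inverse of Pn-1,j multiplied by Cn-1,j.

```python
import numpy as np

def optimize_paths(C, w_space=0.25, w_time=0.25, w_orig=0.5, sweeps=5):
    """Iteratively smooth the original paths C[i][j] (3x3 matrices) into
    optimized paths P[i][j] by repeated weighted averaging."""
    n, k = len(C), len(C[0])
    P = [[C[i][j].copy() for j in range(k)] for i in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(k):
                space = [P[i][jj] for jj in (j - 1, j + 1) if 0 <= jj < k]
                time = [P[ii][j] for ii in (i - 1, i + 1) if 0 <= ii < n]
                # Q_{i,j}: weighted average of adjacent partitions, adjacent
                # frames, and the original path value C_{i,j}.
                Q = (w_space * sum(space) / max(len(space), 1)
                     + w_time * sum(time) / max(len(time), 1)
                     + w_orig * C[i][j])
                P[i][j] = Q  # overlay into the optimized path buffer
    return P

def deformation_for_head(P, C):
    """B[j] = inv(P[n-1][j]) @ C[n-1][j]: deformation matrix of the (j+1)-th
    partition of the image at the head of the first queue."""
    n = len(C)
    return [np.linalg.inv(P[n - 1][j]) @ C[n - 1][j] for j in range(len(C[0]))]
```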
• After the deformation information of the preceding image in the first pair of images is acquired through step S103, the deformation information can be used to perform deformation processing on the preceding image, as described in step S104.
  • In step S104, the preceding image in the m-th pair of raw images is deformed according to the deformation information corresponding to the preceding image in the m-th pair of raw images to obtain the preceding image in the m-th pair of raw images with jitter removed.
• The preceding image in the first pair of images continues to serve as the example. After the deformation information corresponding to the preceding image in the first pair of images is obtained in step S103, and when the deformation information is represented by a deformation matrix, each partition of the preceding image is deformed according to the deformation matrix corresponding to the preceding image in the first pair of raw images; that is, the position information of the image is adjusted using the deformation information obtained in step S103. For example, the deformation matrix of the third partition of the image at the head of the queue in FIG. 6 contains the position information of the feature point P, which differs somewhat from the position information of the feature point P on the third partition of the image at the head of the queue. To eliminate this difference, the point P on the image at the head of the queue is adjusted to coincide with the position of the feature point P in the deformation information of the third partition. Similarly, the positions of the feature points of the other partitions on the image at the head of the queue are adjusted to the positions of the corresponding feature points in the deformation information, yielding the adjusted image shown in FIG. 6. After the position information of the image at the head of the queue is adjusted, the image regions outside the deformation information are cropped, achieving the effect of eliminating the position difference.
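• A minimal sketch of this deformation and cropping, simplified to warp the whole head-of-queue image with a single 3x3 deformation matrix (the disclosure deforms each partition with its own matrix) and to crop a fixed frame edge afterward; the border width is an assumed parameter:

```python
import cv2

def deform_and_crop(image, B, border=20):
    """Warp the image with deformation matrix B so its feature points move
    to the positions given by the deformation information, then crop the
    frame edge left over after the warp."""
    h, w = image.shape[:2]
    warped = cv2.warpPerspective(image, B, (w, h))
    # Cropping removes the image regions outside the deformation
    # information, eliminating the residual position difference.
    return warped[border:h - border, border:w - border]
```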
• Embodiments of the present disclosure provide technical solutions for removing video jitter. First, the position information of the feature point pairs in each pair of raw images is determined according to the position information of the feature point pairs in each pair of compressed images. Because the raw images become smaller after compression, the electronic device can process them relatively quickly, so the position information of each feature point pair can be acquired in real time for each image captured. After the position information of the feature point pairs on each image is acquired in real time, the position change information of the subsequent image relative to the preceding image in each pair of raw images is correspondingly determined in real time according to the position information of the feature point pairs in each pair of raw images. After the position change information of the subsequent image relative to the preceding image in the n pairs of raw images is acquired, the deformation information corresponding to the preceding image in the first pair of raw images is acquired, and the preceding image is deformed according to that deformation information to obtain the preceding image with jitter removed. Similarly, the images after the preceding image are sequentially deformed and jitter-removed, thereby achieving real-time jitter removal. At the same time, the technical solutions provided by the embodiments do not rely on other auxiliary devices while removing jitter in real time, and thus offer greater convenience. In contrast, some conventional systems cannot achieve real-time jitter removal or need external gyroscopes to do so.
  • FIG. 7 is a schematic diagram of an exemplary apparatus for removing video jitter, consistent with some embodiments of the present disclosure. The method shown in FIG. 1 can be performed by the exemplary apparatus shown in FIG. 7.
  • The apparatus can include: a position information acquisition first unit 701, a position change information acquisition unit 702, a deformation information acquisition unit 703, and a deformation processing unit 704.
  • Position information acquisition first unit 701 is configured to determine position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images.
  • Position change information acquisition unit 702 is configured to determine position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images.
  • Deformation information acquisition unit 703 is configured to acquire deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n.
  • Deformation processing unit 704 is configured to deform the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images to obtain the preceding image in the m-th pair of raw images with jitter removed.
  • Optionally, the apparatus further includes: an image storage unit, configured to store the raw images into a first queue; and a position change information storage unit, configured to store the position change information of the subsequent image relative to the preceding image in each pair of raw images into a second queue.
  • Optionally, the apparatus further includes: a compression unit, configured to compress each pair of raw images by a factor of a number; a feature point determination unit, configured to determine feature points on each image in each pair of compressed images; a feature point pair determination unit, configured to determine two corresponding feature points on the two consecutive images in each pair of compressed images as a feature point pair; and a position information acquisition second unit, configured to determine position information of the feature point pairs in each pair of compressed images.
  • Optionally, position change information acquisition unit 702 includes: an image partitioning subunit, configured to partition consecutive images in each pair of raw images; a position change information acquisition first subunit, configured to determine position change information of a corresponding partition of the subsequent image relative to a corresponding partition of the preceding image in each pair of raw images according to the position information of the feature point pairs in the corresponding partition of each pair of raw images; and a position change information acquisition second subunit, configured to determine the position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position change information of the corresponding partition of the subsequent image relative to the corresponding partition of the preceding image in each pair of raw images.
  • Optionally, deformation processing unit 704 includes: a deformation subunit, configured to deform the partition of the preceding image in the m-th pair of raw images according to a deformation matrix corresponding to the preceding image in the m-th pair of raw images; and a cutting subunit, configured to cut an edge of the preceding image in the m-th pair of raw images after deformation.
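• A sketch of how the optional compression unit, feature point determination unit, feature point pair determination unit, and position information acquisition second unit described above could cooperate is given below. The compression factor, the corner detector, and pyramidal Lucas-Kanade tracking are illustrative choices; the disclosure only requires compressing each pair of raw images by a factor, determining feature point pairs on the compressed images, and expanding their position information by the same factor.

```python
import cv2

def feature_point_pairs(prev_raw, curr_raw, factor=4):
    """Return matched feature point coordinates on a pair of raw (BGR)
    images, computed on images compressed by `factor` and scaled back up."""
    small_prev = cv2.resize(prev_raw, None, fx=1.0 / factor, fy=1.0 / factor)
    small_curr = cv2.resize(curr_raw, None, fx=1.0 / factor, fy=1.0 / factor)
    gray_prev = cv2.cvtColor(small_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(small_curr, cv2.COLOR_BGR2GRAY)
    pts_prev = cv2.goodFeaturesToTrack(gray_prev, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    # Expanding the positions by the compression factor yields the position
    # information of the feature point pairs on the raw images.
    return pts_prev[ok] * factor, pts_curr[ok] * factor
```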
  • Embodiments of the present disclosure provide an electronic device for removing video jitter. The electronic device in the embodiment includes: a processor; and a memory for storing a program for removing video jitter, and when the program is read and executed by the processor, the following operations are performed: determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determining position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquiring deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n; and deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images to obtain the preceding image in the m-th pair of raw images with jitter removed.
  • Embodiments of the present disclosure provide a computer-readable medium. The computer-readable medium can be included in the apparatus described in the above-mentioned embodiment; or it can exist alone without being assembled into the apparatus. The above computer-readable medium carries one or more programs. When executed by the apparatus, the above one or more programs cause the apparatus to: determine position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images; determine position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images; acquire deformation information corresponding to the preceding image in the m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, where n and m are positive integers, and m is not greater than n; and deform the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images to obtain the preceding image in the m-th pair of raw images with jitter removed.
• It is appreciated that terms “first,” “second,” and so on used in the specification, claims, and the drawings of the present disclosure are used to distinguish similar objects. These terms do not necessarily describe a particular order or sequence. The objects described using these terms can be interchanged in appropriate circumstances. That is, the procedures described in the exemplary embodiments of the present disclosure could be implemented in an order other than those shown or described herein. In addition, terms such as “comprise,” “include,” and “have” as well as their variations are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device including a series of steps or units is not necessarily limited to the steps or units clearly listed. In some embodiments, it may include other steps or units that are not clearly listed or inherent to the process, method, product, or device.
• Embodiments of the present disclosure also provide a non-transitory computer readable medium that stores a set of instructions that is executable by at least one processor of a computer to cause the computer to perform the above-mentioned methods. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The device may include one or more processors (CPUs), an input/output interface, a network interface, or a memory.
  • It is appreciated that the above described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. The computing units and other functional units described in this disclosure can be implemented by hardware, or software, or a combination of hardware and software. It is understood that multiple ones of the above described modules/units may be combined as one module/unit, and each of the above described modules/units may be further divided into a plurality of sub-modules/sub-units.
  • It is appreciated that the above descriptions are only exemplary embodiments provided in the present disclosure. Consistent with the present disclosure, those of ordinary skill in the art may incorporate variations and modifications in actual implementation, without departing from the principles of the present disclosure. Such variations and modifications shall all fall within the protection scope of the present disclosure.
  • Unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a component may include A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component may include A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
• In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in the figures is only for illustrative purposes and is not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method. In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation, the scope of the embodiments being defined by the following claims.

Claims (27)

1. A method for removing video jitter, comprising:
determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images;
determining position change information of a subsequent image relative to a preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images;
acquiring deformation information corresponding to the preceding image in an m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, wherein n and m are positive integers, and m is not greater than n; and
deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
2. The method according to claim 1, further comprising:
storing the raw images into a first queue; and
storing the position change information of the subsequent image relative to the preceding image in each pair of raw images into a second queue.
3. (canceled)
4. (canceled)
5. The method according to claim 1, prior to determining the position information of feature point pairs in each pair of raw images according to the position information of feature point pairs in each pair of compressed images, further comprising:
compressing each pair of raw images by a factor of a number;
determining feature points on each image in each pair of compressed images;
determining two corresponding feature points on the two consecutive images in each pair of compressed images as a feature point pair; and
determining position information of the feature point pairs in each pair of compressed images.
6. The method according to claim 5, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
expanding the position information of the feature point pairs in each pair of compressed images by a factor of the number to obtain the position information of the feature point pairs in each pair of raw images.
7. The method according to claim 1, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
partitioning the two consecutive images in each pair of raw images;
determining position change information of a corresponding partition of the subsequent image relative to a corresponding partition of the preceding image in each pair of raw images according to the position information of the feature point pairs in the corresponding partition of each pair of raw images; and
determining the position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position change information of the corresponding partition of the subsequent image relative to the corresponding partition of the preceding image in each pair of raw images.
8. The method according to claim 7, wherein the position information is coordinates, the position change information is a transformation matrix, and the deformation information is a deformation matrix.
9. The method according to claim 8, wherein deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images comprises:
deforming the partition of the preceding image in the m-th pair of raw images according to the deformation matrix corresponding to the preceding image in the m-th pair of raw images; and
cropping a frame edge of the preceding image in the m-th pair of raw images after deformation.
10. An apparatus for removing video jitter, comprising:
a memory storing a set of instructions; and
one or more processors configured to execute the set of instructions to cause the apparatus to perform:
determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images;
determining position change information of a subsequent image relative to a preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images;
acquiring deformation information corresponding to the preceding image in an m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, wherein n and m are positive integers, and m is not greater than n; and
deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
11. The apparatus according to claim 10, wherein the one or more processors are configured to execute the set of instructions to cause the apparatus to further perform:
storing the raw images into a first queue; and
storing the position change information of the subsequent image relative to the preceding image in each pair of raw images into a second queue.
12. (canceled)
13. (canceled)
14. The apparatus according to claim 10, wherein the one or more processors are configured to execute the set of instructions to cause the apparatus to further perform:
compressing each pair of raw images by a factor of a number;
determining feature points on each image in each pair of compressed images;
determining two corresponding feature points on the two consecutive images in each pair of compressed images as a feature point pair; and
determining position information of the feature point pairs in each pair of compressed images.
15. The apparatus according to claim 14, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
expanding the position information of the feature point pairs in each pair of compressed images by a factor of the number to obtain the position information of the feature point pairs in each pair of raw images.
16. The apparatus according to claim 10, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
partitioning the two consecutive images in each pair of raw images;
determining position change information of a corresponding partition of the subsequent image relative to a corresponding partition of the preceding image in each pair of raw images according to the position information of the feature point pairs in the corresponding partition of each pair of raw images; and
determining the position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position change information of the corresponding partition of the subsequent image relative to the corresponding partition of the preceding image in each pair of raw images.
17. The apparatus according to claim 16, wherein the position information is coordinates, the position change information is a transformation matrix, and the deformation information is a deformation matrix.
18. (canceled)
19. A computer-readable storage medium that stores a set of instructions that is executable by at least one processor of a computer to cause the computer to perform a method for removing video jitter, the method comprising:
determining position information of feature point pairs in each pair of raw images according to position information of feature point pairs in each pair of compressed images, wherein one feature point pair is composed of two corresponding feature points on two consecutive images in each pair of images, and the raw images are uncompressed images;
determining position change information of a subsequent image relative to a preceding image in each pair of raw images according to the position information of the feature point pairs in each pair of raw images;
acquiring deformation information corresponding to the preceding image in an m-th pair of raw images according to the position change information of the subsequent image relative to the preceding image in n pairs of raw images, wherein n and m are positive integers, and m is not greater than n; and
deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images for removing jitter in the preceding image in the m-th pair of raw images.
20. The non-transitory computer readable medium of claim 19, wherein the at least one processor is configured to execute the set of instructions to cause the computer to further perform:
storing the raw images into a first queue; and
storing the position change information of the subsequent image relative to the preceding image in each pair of raw images into a second queue.
21. (canceled)
22. (canceled)
23. The non-transitory computer readable medium of claim 19, wherein prior to determining the position information of feature point pairs in each pair of raw images according to the position information of feature point pairs in each pair of compressed images, the at least one processor is configured to execute the set of instructions to cause the computer to further perform:
compressing each pair of raw images by a factor of a number;
determining feature points on each image in each pair of compressed images;
determining two corresponding feature points on the two consecutive images in each pair of compressed images as a feature point pair; and
determining position information of the feature point pairs in each pair of compressed images.
24. The non-transitory computer readable medium of claim 23, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
expanding the position information of the feature point pairs in each pair of compressed images by a factor of the number to obtain the position information of the feature point pairs in each pair of raw images.
25. The non-transitory computer readable medium of claim 19, wherein determining the position information of the feature point pairs in each pair of raw images according to the position information of the feature point pairs in each pair of compressed images comprises:
partitioning the two consecutive images in each pair of raw images;
determining position change information of a corresponding partition of the subsequent image relative to a corresponding partition of the preceding image in each pair of raw images according to the position information of the feature point pairs in the corresponding partition of each pair of raw images; and
determining the position change information of the subsequent image relative to the preceding image in each pair of raw images according to the position change information of the corresponding partition of the subsequent image relative to the corresponding partition of the preceding image in each pair of raw images.
26. The non-transitory computer readable medium of claim 25, wherein the position information is coordinates, the position change information is a transformation matrix, and the deformation information is a deformation matrix.
27. The non-transitory computer readable medium of claim 26, wherein deforming the preceding image in the m-th pair of raw images according to the deformation information corresponding to the preceding image in the m-th pair of raw images comprises:
deforming the partition of the preceding image in the m-th pair of raw images according to the deformation matrix corresponding to the preceding image in the m-th pair of raw images; and
cropping a frame edge of the preceding image in the m-th pair of raw images after deformation.
US17/106,682 2018-05-31 2020-11-30 Method and apparatus for removing video jitter Active US11317008B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810554266.9A CN110557522A (en) 2018-05-31 2018-05-31 Method and device for removing video jitter
CN201810554266.9 2018-05-31
PCT/CN2019/087693 WO2019228219A1 (en) 2018-05-31 2019-05-21 Method and device for removing video jitter

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087693 Continuation WO2019228219A1 (en) 2018-05-31 2019-05-21 Method and device for removing video jitter

Publications (2)

Publication Number Publication Date
US20210084198A1 true US20210084198A1 (en) 2021-03-18
US11317008B2 US11317008B2 (en) 2022-04-26

Family

ID=68697432

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/106,682 Active US11317008B2 (en) 2018-05-31 2020-11-30 Method and apparatus for removing video jitter

Country Status (6)

Country Link
US (1) US11317008B2 (en)
EP (1) EP3806445A4 (en)
JP (1) JP7383642B2 (en)
CN (1) CN110557522A (en)
TW (1) TW202005353A (en)
WO (1) WO2019228219A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11134180B2 (en) * 2019-07-25 2021-09-28 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Detection method for static image of a video and terminal, and computer-readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110557522A (en) 2018-05-31 2019-12-10 阿里巴巴集团控股有限公司 Method and device for removing video jitter
CN113132560B (en) * 2019-12-31 2023-03-28 武汉Tcl集团工业研究院有限公司 Video processing method, computer equipment and computer readable storage medium

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6762758B2 (en) * 2001-08-23 2004-07-13 Ati Technologies Inc. System, method, and apparatus for compression of video data using offset values
AUPR899401A0 (en) * 2001-11-21 2001-12-13 Cea Technologies Pty Limited Method and apparatus for non-motion detection
JP2004343483A (en) 2003-05-16 2004-12-02 Acutelogic Corp Device and method for correcting camera-shake and device for detecting camera shake
US7369741B2 (en) * 2003-11-17 2008-05-06 Fiber Optics Network Solutions Corp. Storage adapter with dust cap posts
WO2008111169A1 (en) 2007-03-13 2008-09-18 Fujitsu Microelectronics Limited Image processing apparatus, method of image processing, image processing program and recording medium
US8150191B2 (en) * 2008-10-14 2012-04-03 Interra Systems Inc. Method and system for calculating blur artifacts in videos using user perception threshold
EP2360669A1 (en) * 2010-01-22 2011-08-24 Advanced Digital Broadcast S.A. A digital video signal, a method for encoding of a digital video signal and a digital video signal encoder
JP5184574B2 (en) * 2010-04-30 2013-04-17 パナソニック株式会社 Imaging apparatus, image processing apparatus, and image processing method
US20120162449A1 (en) * 2010-12-23 2012-06-28 Matthias Braun Digital image stabilization device and method
US9277129B2 (en) * 2013-06-07 2016-03-01 Apple Inc. Robust image feature based video stabilization and smoothing
JP6192507B2 (en) 2013-11-20 2017-09-06 キヤノン株式会社 Image processing apparatus, control method thereof, control program, and imaging apparatus
US9311690B2 (en) * 2014-03-11 2016-04-12 Adobe Systems Incorporated Video denoising using optical flow
CN103927731B (en) * 2014-05-05 2017-01-11 武汉大学 Low-altitude remote sensing image rapid and automatic splicing method without POS assisting
JP6336341B2 (en) 2014-06-24 2018-06-06 キヤノン株式会社 Imaging apparatus, control method therefor, program, and storage medium
US10447926B1 (en) * 2015-06-19 2019-10-15 Amazon Technologies, Inc. Motion estimation based video compression and encoding
US10303925B2 (en) * 2016-06-24 2019-05-28 Google Llc Optimization processes for compressing media content
US9838604B2 (en) * 2015-10-15 2017-12-05 Ag International Gmbh Method and system for stabilizing video frames
US10425582B2 (en) * 2016-08-25 2019-09-24 Facebook, Inc. Video stabilization system for 360-degree video data
CN106878612B (en) * 2017-01-05 2019-05-31 中国电子科技集团公司第五十四研究所 A kind of video stabilizing method based on the optimization of online total variation
CN107705288B (en) * 2017-09-04 2021-06-01 武汉工程大学 Infrared video detection method for dangerous gas leakage under strong interference of pseudo-target motion
US10740431B2 (en) * 2017-11-13 2020-08-11 Samsung Electronics Co., Ltd Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
CN109905590B (en) * 2017-12-08 2021-04-27 腾讯科技(深圳)有限公司 Video image processing method and device
CN108682036B (en) * 2018-04-27 2022-10-25 腾讯科技(深圳)有限公司 Pose determination method, pose determination device and storage medium
CN110493488B (en) * 2018-05-15 2021-11-26 株式会社理光 Video image stabilization method, video image stabilization device and computer readable storage medium
CN110557522A (en) 2018-05-31 2019-12-10 阿里巴巴集团控股有限公司 Method and device for removing video jitter

Also Published As

Publication number Publication date
WO2019228219A1 (en) 2019-12-05
CN110557522A (en) 2019-12-10
JP7383642B2 (en) 2023-11-20
TW202005353A (en) 2020-01-16
US11317008B2 (en) 2022-04-26
EP3806445A1 (en) 2021-04-14
JP2021524960A (en) 2021-09-16
EP3806445A4 (en) 2022-03-23

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ALIBABA GROUP HOLDING LIMITED, CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHEN, RUIZHI;REEL/FRAME:055070/0027

Effective date: 20210125

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE