CN114205521A - Image stabilizing method, device, equipment and storage medium for motion video image - Google Patents


Info

Publication number
CN114205521A
CN114205521A
Authority
CN
China
Prior art keywords
image
motion vector
images
adjacent frames
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111336933.4A
Other languages
Chinese (zh)
Inventor
杜伟
陈金玉
程海涛
陈伯建
邹彪
吴文斌
张伟豪
林鸿伟
吴晓杰
叶剑锋
王佳颖
王淼
朱松涛
任伟达
朱晓康
孙鸿博
孔祥玉
林凡雨
崔书刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuandu Internet Technology Co ltd
Sgcc General Aviation Co ltd
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Original Assignee
Beijing Yuandu Internet Technology Co ltd
Sgcc General Aviation Co ltd
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuandu Internet Technology Co ltd, Sgcc General Aviation Co ltd, State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd filed Critical Beijing Yuandu Internet Technology Co ltd
Priority to CN202111336933.4A priority Critical patent/CN114205521A/en
Publication of CN114205521A publication Critical patent/CN114205521A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for stabilizing a moving video image. The method comprises: acquiring images of two adjacent frames in a motion video; respectively determining the feature points of the previous frame image and the next frame image in the two adjacent frames of images; calculating the overall motion vector of the current two adjacent frames of images according to the feature points of the previous frame image and the feature points of the next frame image; calculating the jitter motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images; and performing motion compensation on the previous frame image in the current two adjacent frames of images through Kalman filtering and the jitter motion vector of the current two adjacent frames of images to obtain the stabilized image of the next frame image. Through the embodiments herein, the motion characteristics of the picture content are maintained while the jitter of the moving video image is removed.

Description

Image stabilizing method, device, equipment and storage medium for motion video image
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method, an apparatus, a device, and a storage medium for stabilizing a moving video image.
Background
With the development of image and computer technology, video has become an important and direct medium for transmitting information. The growing and diversifying demand for video has brought handheld, vehicle-mounted, airborne, and other camera platforms into everyday life. Sometimes a camera has to be used on a vibrating platform; the vibration environment inevitably causes mechanical shaking and thus video image jitter. Such jitter is often difficult to eliminate, is particularly obvious under a high-magnification lens, and seriously affects observation and monitoring of the video.
Image stabilization aims to eliminate the various kinds of jitter in video images, including mechanical jitter and jitter induced by random noise. Generally, there are three image stabilization approaches: optical, mechanical, and electronic. Optical image stabilization achieves its effect by adding special optical elements; mechanical image stabilization estimates mechanical motion through a sensor and then eliminates jitter through motion compensation; electronic image stabilization estimates inter-frame motion through digital signal processing and then compensates by image transformation. However, prior-art methods stabilize moving picture content poorly: while eliminating high-frequency jitter, they tend to disturb the normal motion of the picture content.
There is therefore a need for a method for stabilizing moving video images that eliminates high-frequency jitter without affecting the normal motion of the moving picture content.
Disclosure of Invention
To solve the problem in the prior art that the normal motion of the picture content is affected while eliminating the high-frequency jitter, embodiments herein provide an image stabilization method, apparatus, device, and storage medium for a moving video image, which can solve the problem in the prior art that the normal motion of the moving picture is affected while eliminating the high-frequency jitter.
Embodiments herein provide a method of image stabilization of a moving video image, the method comprising,
acquiring images of two adjacent frames in a motion video;
respectively determining the characteristic points of a previous frame image and a next frame image in the images of the two adjacent frames;
calculating the overall motion vector of the two current adjacent frames of images according to the characteristic points of the previous frame of image and the characteristic points of the next frame of image, wherein the overall motion vector comprises a picture content motion vector and a jitter motion vector;
calculating a jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images;
and performing motion compensation on the previous frame image in the current two adjacent frame images through Kalman filtering and a jitter motion vector in the overall motion vector of the current two adjacent frame images to obtain the image stabilization of the next frame image.
Embodiments herein also provide an image stabilization apparatus for a moving video image, including,
the image acquisition unit is used for acquiring images of two adjacent frames in the motion video;
the image characteristic point determining unit is used for respectively determining the characteristic points of the image of the previous frame and the image of the next frame in the images of the two adjacent frames;
the overall motion vector calculation unit is used for calculating the overall motion vectors of two current adjacent frames of images according to the characteristic points of the previous frame of image and the characteristic points of the next frame of image, and the overall motion vectors comprise picture content motion vectors and shake motion vectors;
the jitter motion vector calculation unit is used for calculating a jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images;
and the motion compensation unit is used for performing motion compensation on the previous frame image in the current two adjacent frame images through Kalman filtering and the jitter motion vector in the overall motion vector of the current two adjacent frame images to obtain the stable image of the next frame image.
Embodiments herein also provide a computer device comprising a memory, a processor, and a computer program stored on the memory, the processor implementing the above-described method when executing the computer program.
Embodiments herein also provide a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor of a computer device, performing the above-described method.
By utilizing the embodiments herein, the calculated overall motion vector of the current two adjacent frames of images comprises a picture content motion vector and a jitter motion vector. The jitter motion vector in the overall motion vector of the current two adjacent frames of images is calculated from the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images, and image stabilization is finally performed according to that jitter motion vector. The high-frequency jitter of the video image is thereby eliminated while the motion characteristics of the picture content are kept, solving the prior-art problem that eliminating high-frequency jitter affects the normal motion of the picture content.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation system of an image stabilization method for a moving video image according to an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a method for image stabilization of a moving video image according to an embodiment of the present disclosure;
FIG. 3 illustrates a process of calculating a jitter motion vector in the overall motion vector of the current two adjacent frames of images according to an embodiment of the present disclosure;
FIG. 4 illustrates a process of performing motion compensation on the previous frame image in the current two adjacent frames of images according to an embodiment of the present disclosure;
FIG. 5 illustrates a process for determining an image cropping area according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating an exemplary image stabilization apparatus for motion video images;
FIG. 7 is a schematic diagram illustrating a process for image stabilization of a moving video image according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating a motion relationship between a previous frame image and a next frame image in two adjacent frames of images according to the embodiment.
[ description of reference ]:
101. an image pickup unit;
102. moving an object;
103. a presentation unit;
601. an image acquisition unit;
602. an image feature point determination unit;
603. an overall motion vector calculation unit;
604. a jitter motion vector calculation unit;
605. a motion compensation unit;
802. a computer device;
804. a processing device;
806. a storage resource;
808. a drive mechanism;
810. an input/output module;
812. an input device;
814. an output device;
816. a presentation device;
818. a graphical user interface;
820. a network interface;
822. a communication link;
824. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments herein described are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
Fig. 1 is a schematic diagram of an implementation system of an image stabilization method for a moving video image according to an embodiment of the present disclosure, including: an image pickup unit 101, a moving object 102, and a presentation unit 103. In this embodiment, the image capturing unit 101 captures the moving object 102 and presents the video image on the presentation unit 103. When the image capturing unit 101 captures the moving object 102, mechanical shaking or random noise of the image capturing unit 101 causes the captured video image to jitter, so the video image presented on the presentation unit 103 also jitters, resulting in a poor user experience. In the prior art, when the video image presented on the presentation unit 103 is stabilized, the normal motion of the picture content of the moving object 102 captured by the image capturing unit 101 is affected, which reduces the quality of the video image on the presentation unit 103.
The method, the device, the equipment and the storage medium for stabilizing the moving video image, which are described in the embodiments of the present disclosure, can be applied to the image capturing unit 101 shown in fig. 1, so as to solve the problem that the normal motion of the moving picture content is affected while eliminating the high-frequency jitter in the prior art.
In particular, embodiments herein provide an image stabilization method for a moving video image, which can effectively solve the problem in the prior art that normal motion of a moving picture is affected while high-frequency jitter is eliminated. Fig. 2 is a flowchart illustrating an image stabilization method for a moving video image according to an embodiment of the disclosure. The figure describes the image stabilization of a moving video image; based on routine or non-creative effort, the method may include more or fewer operational steps. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. An actual system or apparatus product may execute sequentially or in parallel according to the method shown in the embodiment or the figures. As shown in fig. 2, the method comprises:
step 201: acquiring images of two adjacent frames in a motion video;
step 202: respectively determining the characteristic points of a previous frame image and a next frame image in the images of the two adjacent frames;
step 203: calculating the overall motion vector of the two current adjacent frames of images according to the characteristic points of the previous frame of image and the characteristic points of the next frame of image, wherein the overall motion vector comprises a picture content motion vector and a jitter motion vector;
step 204: calculating a jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images;
step 205: and performing motion compensation on the previous frame image in the current two adjacent frame images through Kalman filtering and a jitter motion vector in the overall motion vector of the current two adjacent frame images to obtain the image stabilization of the next frame image.
According to the method of this embodiment, the calculated overall motion vector of the current two adjacent frames of images comprises a picture content motion vector and a jitter motion vector. The jitter motion vector in the overall motion vector of the current two adjacent frames of images is calculated from the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images, and image stabilization is finally performed according to that jitter motion vector. The high-frequency jitter of the video image is thereby eliminated while the motion characteristics of the picture content are maintained, solving the prior-art problem that eliminating high-frequency jitter affects the normal motion of the picture content.
In this embodiment, images of any two adjacent frames in an offline motion video may be acquired to stabilize the offline video, or images of any two adjacent frames in a video captured in real time may be acquired to stabilize the motion video in real time. The feature points described in the embodiments herein are corner points (Harris corners) of the images; the motion vector between two adjacent frames of images can be calculated from the motion magnitude and direction of the same corner point across the two frames. In addition, a certain motion relationship exists between the previous frame image and the next frame image of the two adjacent frames. As shown in fig. 9, the direction indicated by the arrow is the overall motion vector of a pixel point in the next frame image relative to the same pixel in the previous frame image; the overall motion vector comprises the displacement and the rotation of the pixel point, where the displacement is measured along the x and y directions of the image plane. Further, the overall motion vector comprises a picture content motion vector and a jitter motion vector: the picture content motion vector indicates the normal motion of the picture content in the moving video image, i.e., the motion of an object in the captured video, while the jitter motion vector indicates the jitter of the moving video image caused by, e.g., mechanical motion of the capturing device.
It should be noted that the image stabilization method for the moving video image described in the embodiment of this specification may be applied to a camera in a ceiling cabin of an unmanned aerial vehicle, and may also be applied to other image pickup apparatuses, and this specification is not limited.
According to an embodiment of the present disclosure, in order to increase the image processing speed, the image stabilization method described herein is preferably executed in the GPU of the image capturing unit 101, since processing in the GPU is faster than processing in the CPU.
According to one embodiment herein, the step 202 of determining the feature points of the previous frame image and the next frame image in the images of the two adjacent frames further comprises respectively,
converting the images of the two adjacent frames into a gray-scale map from an RGBA format;
calculating the characteristic points of the previous frame of image in the two adjacent frames of images by using an angular point detection method;
and calculating the characteristic points of the next frame image in the two adjacent frame images by an LK optical flow estimation method.
The RGBA format is a color image format; in this embodiment, the color image is converted into a grayscale map to reduce the amount of calculation. The feature points of the previous frame image in the two adjacent frames are then calculated by a corner detection method. Specifically, exploiting the fact that the image brightness changes strongly in both the x and y directions when a corner region moves, the corner detection method calculates, for each pixel of the previous frame image, the change in intensity after a slight shift in the x and y directions, sorts all pixels in descending order of this change value, and selects a certain number of top-ranked pixels as the feature points of the previous frame image according to the calculation requirement.
The feature points of the next frame image are then calculated by the LK (Lucas-Kanade) optical flow method. Specifically, a Gaussian pyramid is first built to obtain images at different resolutions, and a coarse-to-fine layering strategy is adopted: at each pyramid level of matching resolution, the LK optical flow method predicts the positions of the feature points of the previous frame image in the next frame image, thereby obtaining the feature points of the next frame image. The corner detection method and the LK optical flow method are mature techniques and are not described in detail in the embodiments of the present description.
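The corner-response computation behind the detection step above can be sketched as follows. This is a minimal illustration of the Harris response, not the patent's implementation: the 3x3 smoothing window, the constant k = 0.04, and the use of central differences are assumptions of this example.

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris corner response for a grayscale image (2-D float array)."""
    # Brightness change in the x and y directions via central differences.
    ix = np.gradient(gray, axis=1)
    iy = np.gradient(gray, axis=0)

    # Structure-tensor components, smoothed over a 3x3 window.
    def box3(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3)) / 9.0

    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    # Response det(M) - k * trace(M)^2: large and positive at corners,
    # negative along edges, near zero in flat regions.
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
```

Feature points would then be taken as the top-ranked pixels of this response map, matching the descending sort described above.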
According to one embodiment of the present disclosure, the step 203 of calculating the global motion vector of the two current adjacent frames of images according to the feature points of the previous frame of image and the feature points of the next frame of image further comprises,
respectively calculating the displacement and the rotation angle of the position of each feature point of the next frame image relative to the position of the corresponding feature point of the previous frame image in the two adjacent current frame images to obtain the motion vector of each feature point;
and obtaining the overall motion vector of the current two adjacent frames of images according to a least square method and the motion vector of each characteristic point.
In this embodiment, because the current two adjacent frames of images may be distorted, the motion vectors obtained for the individual feature points differ from one another. To reduce the discrepancy between the overall motion vector and the per-feature motion vectors, the least squares method minimizes this discrepancy, thereby improving the calculation accuracy of the overall motion vector.
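A minimal sketch of the least-squares fit described above, assuming the overall motion is a rotation plus translation linearised in the unknowns (a, b, tx, ty) with a = cos θ, b = sin θ. This parametrisation is an assumption of the example; the patent only states that least squares is used.

```python
import numpy as np

def estimate_overall_motion(prev_pts, next_pts):
    """Least-squares overall motion (tx, ty, theta) mapping the feature
    points of the previous frame onto those of the next frame.
    prev_pts, next_pts: (N, 2) arrays of matched feature points."""
    n = prev_pts.shape[0]
    a_mat = np.zeros((2 * n, 4))
    rhs = np.empty(2 * n)
    # Row pairs encode: x' = a*x - b*y + tx,  y' = b*x + a*y + ty
    a_mat[0::2, 0] = prev_pts[:, 0]
    a_mat[0::2, 1] = -prev_pts[:, 1]
    a_mat[0::2, 2] = 1.0
    a_mat[1::2, 0] = prev_pts[:, 1]
    a_mat[1::2, 1] = prev_pts[:, 0]
    a_mat[1::2, 3] = 1.0
    rhs[0::2] = next_pts[:, 0]
    rhs[1::2] = next_pts[:, 1]
    a, b, tx, ty = np.linalg.lstsq(a_mat, rhs, rcond=None)[0]
    return tx, ty, np.arctan2(b, a)
```

Because the fit is over all feature points at once, per-point discrepancies (e.g. from image distortion) average out in the least-squares sense.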
According to an embodiment herein, as shown in fig. 3, the step 204 of calculating the jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images further comprises,
step 301: calculating the picture content motion vector in the overall motion vector of the two current adjacent frames of images according to the picture content motion vector in the overall motion vector of the two previous adjacent frames of images, the overall motion vector of the two current adjacent frames of images and a preset weight, wherein the preset weight is a decimal between 0 and 1;
step 302: and calculating the jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the current two adjacent frames of images.
In this embodiment, the next image in the two previous adjacent images is the previous image in the two current adjacent images. The overall motion vector represents the motion situation of a next frame image relative to a previous frame image in the current two adjacent frame images, and the overall motion vector comprises the motion situation of normal motion of picture content in the current two adjacent frame images and the shaking situation of the images.
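Step 302's decomposition can be written directly as a difference, since the overall motion vector is the sum of the picture content motion vector and the jitter motion vector (a one-line sketch, using scalar values per motion component for brevity):

```python
def jitter_vector(overall, content):
    """Jitter motion vector of the current two adjacent frames: the overall
    motion vector minus the picture content motion vector (step 302)."""
    return overall - content
```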
According to one embodiment of the present disclosure, in step 301 the picture content motion vector in the overall motion vector of the current two adjacent frames of images is calculated by formula (1),
mean_V′ = beta * mean_V + (1.0 - beta) * measure    (1)
wherein mean_V′ represents the picture content motion vector in the overall motion vector of the current two adjacent frames of images, mean_V represents the picture content motion vector in the overall motion vector of the last two adjacent frames of images, measure represents the overall motion vector of the current two adjacent frames of images, and beta represents the preset weight;
when measure is the overall motion vector of the first two adjacent frame images in the motion video, the picture content motion vector mean_V′ in the overall motion vector of the first two adjacent frame images is equal to the overall motion vector measure of the first two adjacent frame images.
In this embodiment, the preset weight may be obtained from practical experience, and formula (1) is iterated over all pairs of consecutive adjacent frames in the motion video. For example, suppose the motion video has n frames. If the overall motion vector of the 2nd frame relative to the 1st frame is A, the picture content motion vector of the 2nd frame relative to the 1st frame is also A. If the overall motion vector of the 3rd frame relative to the 2nd frame is B and the preset weight is 0.8, formula (1) gives the picture content motion vector of the 3rd frame relative to the 2nd frame as the sum of 0.8 times A and 0.2 times B. Iterating in this way, the picture content motion vector of the nth frame relative to the (n-1)th frame is the sum of 0.8 times the picture content motion vector of the (n-1)th frame relative to the (n-2)th frame and 0.2 times the overall motion vector of the nth frame relative to the (n-1)th frame.
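The iteration of formula (1) over a sequence of overall motion vectors can be sketched as follows (scalar values for brevity; in practice each component of the motion vector is smoothed the same way):

```python
def smooth_content_motion(overall_vectors, beta=0.8):
    """Iterate formula (1): mean_V' = beta * mean_V + (1 - beta) * measure.
    overall_vectors[i] is the overall motion vector (measure) between
    frame i+1 and frame i+2; returns the picture content motion vectors."""
    content = []
    mean_v = None
    for measure in overall_vectors:
        if mean_v is None:
            # First pair of adjacent frames: the picture content motion
            # vector equals the overall motion vector.
            mean_v = measure
        else:
            mean_v = beta * mean_v + (1.0 - beta) * measure
        content.append(mean_v)
    return content
```

With A = 10 and B = 20 this reproduces the worked example above: the content motion of the 3rd frame relative to the 2nd is 0.8 * 10 + 0.2 * 20 = 12.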
According to an embodiment of the present disclosure, as shown in fig. 4, step 205, in which motion compensation is performed on the previous frame image of the current two adjacent frames of images through Kalman filtering and the jitter motion vector in the overall motion vector of the current two adjacent frames of images to obtain the stabilized image of the next frame image, further comprises,
step 401: obtaining a motion compensation matrix corresponding to a jitter motion vector in the overall motion vectors of the current two adjacent frames of images by adopting Kalman filtering;
step 402: and performing motion compensation on the previous frame image in the two current adjacent frame images according to the motion compensation matrix and the previous frame image in the two current adjacent frame images to obtain a stable image of the next frame image.
In this embodiment, the specific calculation process of step 401 is as follows. The accumulated jitter motion vector of the next frame image of the current two adjacent frames relative to the first frame of the motion video is predicted by the Kalman filtering method. The jitter motion vector in the overall motion vector of the current two adjacent frames is accumulated with the jitter motion vectors in the overall motion vectors of all preceding pairs of adjacent frames to obtain the measured accumulated jitter motion vector. The motion compensation matrix is then calculated from the predicted accumulated jitter motion vector, the measured accumulated jitter motion vector, and the Kalman gain; the motion compensation matrix comprises the translation and the rotation angle of the image.
The predicted accumulated jitter motion vector, the measured accumulated jitter motion vector, and the next frame image of the current two adjacent frames are then stored. The specific calculation process of step 402 is to copy the previous frame image of the current two adjacent frames, perform motion compensation on the copy according to the motion compensation matrix, and use the motion-compensated copy as the stabilized image of the next frame image.
When image stabilization is performed on the following pair of adjacent frames, the previous frame image of that following pair is the next frame image of the current pair. The accumulated jitter motion vector of the next frame image of the following pair relative to the first frame of the motion video is predicted by the Kalman filtering method, and the jitter motion vector in the overall motion vector of the following pair is accumulated with the stored accumulated jitter motion vector, updating the measured accumulated jitter motion vector. The motion compensation matrix is then calculated from the predicted accumulated jitter motion vector, the measured accumulated jitter motion vector, and the Kalman gain. The stored next frame image of the current pair is copied, motion compensation is performed on the copy according to the motion compensation matrix, and the motion-compensated copy is used as the stabilized image of the next frame image of the following pair.
For example, suppose the previous frame image of the current two adjacent frame images is the (n-1)th frame image of the motion video and the next frame image is the nth frame image, while in the following pair the previous frame image is the nth frame image and the next frame image is the (n+1)th frame image. Following the method shown in fig. 4, the Kalman filtering method first predicts the accumulated shake motion vector A of the nth frame image relative to the first frame image of the motion video; the shake motion vector in the overall motion vector of the nth frame image relative to the (n-1)th frame image is accumulated with the shake motion vectors in the overall motion vectors of all pairs of adjacent frame images before the (n-1)th frame image in the motion video, yielding a measured accumulated shake motion vector B; a motion compensation matrix C is then calculated from the predicted accumulated shake motion vector A, the measured accumulated shake motion vector B, and the Kalman gain. The predicted accumulated shake motion vector A, the measured accumulated shake motion vector B, and the nth frame image are then stored. The (n-1)th frame image is copied, motion compensation is performed on the copy according to the motion compensation matrix C, and the motion-compensated copy is used as the stable image of the nth frame image.
When the (n+1)th frame image is stabilized, the accumulated shake motion vector A' of the (n+1)th frame image relative to the first frame image of the motion video is predicted by the Kalman filtering method; the shake motion vector in the overall motion vector of the (n+1)th frame image relative to the nth frame image is accumulated with the stored accumulated shake motion vector B, yielding the measured accumulated shake motion vector B'; a motion compensation matrix C' is then calculated from the predicted accumulated shake motion vector A', the measured accumulated shake motion vector B', and the Kalman gain. The stored nth frame image is copied, motion compensation is performed on the copy according to the motion compensation matrix C', and the motion-compensated copy is used as the stable image of the (n+1)th frame image.
Because Kalman filtering is a mature technique, it is not described in detail in the embodiments of this specification.
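Purely as an illustration of the smoothing described above, the following sketch runs a minimal scalar Kalman update on the accumulated shake along one axis. The real embodiment applies this per component of the translation and rotation, and the noise parameters q and r, as well as the sample shake values, are invented for the example:

```python
def kalman_smooth_step(x_pred, p_pred, z_meas, q=1e-4, r=0.25):
    """One scalar Kalman update: blend the predicted accumulated shake
    with the measured accumulated shake via the Kalman gain."""
    k = p_pred / (p_pred + r)            # Kalman gain
    x_est = x_pred + k * (z_meas - x_pred)
    p_est = (1.0 - k) * p_pred + q
    return x_est, p_est

# Per-frame shake (e.g. horizontal translation): noisy oscillation about 0.
# The measurement B is the running sum of these values; the prediction A
# is simply the previous smoothed estimate (constant-state model).
jitter_per_frame = [1.2, -0.8, 1.1, -1.0, 0.9]
x_est, p_est, B = 0.0, 1.0, 0.0
compensations = []
for d in jitter_per_frame:
    B += d                               # measured accumulated shake B
    A = x_est                            # predicted accumulated shake A
    x_est, p_est = kalman_smooth_step(A, p_est, B)
    # the compensation shifts the frame from its measured position B
    # back toward the smoothed trajectory x_est
    compensations.append(x_est - B)
```

The compensation for each frame is the gap between the smoothed and the measured accumulated shake, which is what the motion compensation matrix encodes as a translation and rotation.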
In conventional electronic image stabilization algorithms, severe shake can leave black borders around the image after motion compensation is applied to the picture. According to an embodiment herein, to prevent black borders from appearing in the final stabilized picture, as shown in fig. 5, step 402, in which motion compensation is performed on the previous frame image of the current two adjacent frame images according to the motion compensation matrix to obtain the stable image of the next frame image, is further followed by:
step 501: determining an image clipping area according to the motion compensation matrix and the size of the motion video image;
step 502: and cutting the stable image of the next frame of image according to the image cutting area.
In this embodiment, the image cropping area represents the width of the black border that appears around the stable image of the next frame image after the previous frame image of the current two adjacent frame images is motion-compensated. After the previous frame image is motion-compensated according to the translation size and the rotation angle in the motion compensation matrix, the widths of the black borders on the different sides of the stable image may differ. The image cropping area is therefore determined from the maximum black border width; the stable image of the next frame image is cropped by that maximum width on all sides according to the image cropping area, and the cropped image is finally enlarged to the original size, thereby eliminating the black borders around the stabilized image.
For example, if the previous frame image is translated upward by 10 pixels according to the motion compensation matrix, a black border 10 pixels wide appears at the lower boundary of the stable image of the next frame image after motion compensation; this border must be cropped, and the cropped image enlarged to the size of the original image. However, if only the 10-pixel border at the lower boundary is cropped, the image becomes distorted after enlargement. The image cropping area is therefore a 10-pixel-wide strip around the entire image; the stable image of the next frame image after motion compensation is cropped according to this area and then enlarged to the size of the original next frame image, eliminating the black border from the stabilized picture.
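The crop-and-enlarge step can be sketched as follows. This is a minimal NumPy illustration with a synthetic frame, using nearest-neighbour rescaling so that it stays dependency-free; a real implementation would normally use a proper image-resize routine:

```python
import numpy as np

def crop_border_and_rescale(img, border):
    """Remove a uniform border of `border` pixels on all four sides,
    then rescale the crop back to the original size (nearest neighbour)."""
    h, w = img.shape[:2]
    crop = img[border:h - border, border:w - border]
    ch, cw = crop.shape[:2]
    # nearest-neighbour index maps from the original grid into the crop
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[np.ix_(ys, xs)]

frame = np.arange(100 * 100, dtype=np.float32).reshape(100, 100)
stabilized = crop_border_and_rescale(frame, border=10)
```

The output has the same dimensions as the input, with the 10-pixel border (where black pixels would appear after compensation) removed and the remaining content stretched to fill the frame.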
According to one embodiment herein, in order to reduce the impact of cropping on the sharpness of the video picture, after the image cropping area is determined according to the motion compensation matrix and the size of the motion video image in step 501, the method further comprises,
determining an image cutting proportion according to the cutting area and the resolution of the motion video image;
and cutting the stable image of the next frame of image according to the image cutting proportion.
In this embodiment, the resolution of the motion video image indicates how many pixels fall within a given extent of the image; the more pixels within the same extent, the higher the resolution. Because cropping and enlarging the image changes the number of pixels within that extent, the image cropping proportion must be determined dynamically from the cropping area and the resolution. Specifically, taking a rectangular image as an example, the remaining width of the long side is calculated from the width to be cropped from the long side, the cropping width of the short side is then calculated from the remaining width of the long side and the resolution, and finally the image cropping proportion is determined from the size of the cropped image relative to the size of the original image, thereby reducing the impact of cropping on the sharpness of the video picture.
For example, if the image cropping area removes 0.5 cm around a 15 cm by 10 cm image and the resolution of the image is 640 by 480 pixels per inch, the image cropping proportion is determined as cropping 5% of the entire image size, with the borders removed. Preferably, the image cropping proportion in this embodiment ranges from 5% to 20%.
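The bookkeeping behind such a proportion can be sketched as follows. The 15 cm by 10 cm and 0.5 cm figures mirror the example above, but the exact rule by which the embodiment arrives at its 5% proportion is not spelled out here, so this sketch only shows the generic per-side and per-area fractions implied by a uniform border crop:

```python
def crop_fractions(width_cm, height_cm, border_cm):
    """Fractions removed when a uniform border of border_cm is cropped
    from all four sides of a width_cm x height_cm image."""
    fx = 2 * border_cm / width_cm      # fraction of the long side removed
    fy = 2 * border_cm / height_cm     # fraction of the short side removed
    remaining = (width_cm - 2 * border_cm) * (height_cm - 2 * border_cm)
    area_removed = 1 - remaining / (width_cm * height_cm)
    return fx, fy, area_removed

fx, fy, removed = crop_fractions(15.0, 10.0, 0.5)
```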
According to an embodiment herein, in order to reduce the influence of abrupt zooming on the user's viewing experience, cropping the stable image of the next frame image according to the image cropping proportion further comprises,
dividing the image cropping scale into a number of sub-image cropping scales,
sequencing the sub-image clipping proportion according to a progressive order from small to large;
and cutting the stable image of the next frame image and the stable image of the subsequent frame image according to the ordered sub-image cutting proportion.
In this embodiment, the values of the plurality of sub-image cropping proportions increase gradually. For example, if the determined image cropping proportion is 5%, it may be divided into 5 sub-image cropping proportions, sorted in increasing order as 1%, 2%, 3%, 4%, and 5%; the stable image of the motion-compensated next frame image and the stable images of the subsequent frame images are then cropped according to these sub-image cropping proportions. For example, the stable image of the nth frame image is cropped at the sub-image cropping proportion of 1%, the stable image of the (n+1)th frame image at 2%, and so on until the stable image of the (n+4)th frame image is cropped at 5%. The influence of abrupt picture zooming on the user's viewing experience is thereby reduced. Preferably, the number of sub-image cropping proportions in this embodiment is 30.
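The division into gradually increasing sub-image cropping proportions described above can be sketched as:

```python
def progressive_crop_ratios(total_ratio, steps):
    """Split a target crop ratio into `steps` gradually increasing
    per-frame ratios, e.g. 5% over 5 frames -> 1%, 2%, 3%, 4%, 5%."""
    return [total_ratio * (i + 1) / steps for i in range(steps)]

ratios = progressive_crop_ratios(0.05, 5)   # -> [0.01, 0.02, 0.03, 0.04, 0.05]
```

Applying each frame's crop at the next ratio in this list reaches the full cropping proportion over several frames instead of in a single jump.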
Based on the same inventive concept, the embodiments of the present specification further provide an image stabilization apparatus for a motion video image. As shown in fig. 6, the apparatus includes an image acquisition unit 601, an image feature point determination unit 602, an overall motion vector calculation unit 603, a shake motion vector calculation unit 604, and a motion compensation unit 605. Specifically,
an image obtaining unit 601, configured to obtain images of two adjacent frames in a moving video;
an image feature point determining unit 602, configured to determine feature points of a previous frame image and a next frame image in the images of the two adjacent frames respectively;
a global motion vector calculating unit 603, configured to calculate a global motion vector of two current adjacent frames of images according to the feature point of the previous frame of image and the feature point of the next frame of image, where the global motion vector includes a picture content motion vector and a shake motion vector;
a dithering motion vector calculating unit 604, configured to calculate a dithering motion vector in the overall motion vectors of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the previous two adjacent frames of images;
and a motion compensation unit 605, configured to perform motion compensation on the previous frame image in the current two adjacent frame images through Kalman filtering and the shake motion vector in the overall motion vector of the current two adjacent frame images, so as to obtain the stable image of the next frame image.
The advantages obtained by the apparatus are consistent with those of the foregoing method and are not repeated in the embodiments of this specification.
Fig. 7 is a schematic flow chart of image stabilization for a motion video image according to an embodiment of the present disclosure. The figure describes one process for stabilizing a motion video image; the steps shown are clearly not the only way to achieve such stabilization, and a person skilled in the art could, without creative effort, derive other step sequences that achieve the same result. Specifically, the method comprises the following steps.
step 701: and acquiring images of two adjacent frames in the motion video.
In this step, images of two adjacent frames in the moving video image are first acquired. In this embodiment, images of any two adjacent frames in an offline motion video may be acquired to stabilize the offline motion video, or images of any two adjacent frames in a real-time captured video may be acquired to stabilize the motion video in real time.
Step 702: and carrying out format conversion on the two adjacent frames of images.
In this step, the motion video image is converted into a grayscale image in order to reduce the amount of calculation.
Step 703: and calculating the characteristic points of the previous frame of image by a corner point detection method.
In this step, the corner detection method exploits the fact that the image brightness changes sharply in both the x and y directions of the video image when a corner region moves. It calculates, for each pixel of the previous frame image, the change in value after a slight shift in the x and y directions, sorts all pixels of the previous frame image in descending order of this change value, and selects a certain number of pixels from the sorted result, according to the calculation requirements, as the feature points of the previous frame image.
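One common corner detector matching this description is the Harris detector; the choice of Harris is an assumption, since the text does not name a concrete method. The NumPy sketch below computes a windowed structure tensor from the image gradients, scores every pixel, and keeps the pixels with the largest response, mirroring the descending sort described above:

```python
import numpy as np

def box3(a):
    """Sum each value over its 3x3 neighbourhood (zero-padded)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Corner response R = det(M) - k*trace(M)^2, where M is the 3x3
    windowed structure tensor of the gradients: R is large only where
    brightness changes strongly for shifts in both x and y."""
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

def top_feature_points(img, n):
    """Pixels sorted by descending corner response; keep the top n."""
    r = harris_response(img)
    order = np.argsort(r, axis=None)[::-1][:n]
    return np.column_stack(np.unravel_index(order, r.shape))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0     # bright square: its four vertices are corners
pts = top_feature_points(img, 8)
```

On this synthetic frame the strongest responses cluster around the four vertices of the square, which is exactly the behaviour the feature-point selection relies on.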
Step 704: and calculating the characteristic points of the next frame of image by an LK optical flow estimation method.
In this step, a Gaussian pyramid is first built to obtain images at different resolutions, and a coarse-to-fine layered strategy is adopted: at each pyramid level, the LK optical flow estimation method predicts the positions, in the next frame image, of the feature points of the previous frame image at the same resolution, thereby obtaining the feature points of the next frame image.
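The core LK computation at a single pyramid level can be sketched as one least-squares solve over a window around a feature point. The frames, window size, and feature position below are synthetic; the second frame is the first shifted right by one pixel, so the recovered displacement should be close to (1, 0):

```python
import numpy as np

# Synthetic pair of frames: smooth 2-D pattern, then a 1-pixel shift in x.
x = np.arange(64)
frame0 = np.sin(0.3 * x)[None, :] + np.cos(0.2 * np.arange(64))[:, None]
frame1 = np.roll(frame0, 1, axis=1)      # content moves +1 px in x

def lk_displacement(i0, i1, y0, x0, half=7):
    """Solve the Lucas-Kanade normal equations over one window:
    find d = (u, v) minimising ||[Ix Iy] d + It||^2."""
    iy, ix = np.gradient(i0)
    it = i1 - i0
    sl = (slice(y0 - half, y0 + half + 1), slice(x0 - half, x0 + half + 1))
    A = np.column_stack([ix[sl].ravel(), iy[sl].ravel()])
    b = -it[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                              # (u, v) displacement in x and y

u, v = lk_displacement(frame0, frame1, 32, 32)
```

In the pyramidal scheme this solve is repeated from the coarsest level down, with each level's estimate warped into the next finer level as its starting point.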
Step 705: and calculating the overall motion vector of the current two adjacent frames of images.
In this embodiment, the overall motion vector represents the motion of the next frame image relative to the previous frame image in the current two adjacent frame images, and comprises a picture content motion vector and a shake motion vector: the picture content motion vector represents the motion of the picture content of the motion video image, while the shake motion vector represents the shake imparted to the motion video picture by mechanical vibration of the camera unit or by random noise. In this step, the displacement and rotation angle of each feature point of the next frame image relative to the corresponding feature point of the previous frame image are calculated to obtain the motion vector of each feature point. The overall motion vector of the current two adjacent frame images is then obtained from the motion vectors of the feature points by the least squares method, which minimizes the difference between the overall motion vector and the individual feature-point motion vectors and thereby improves the calculation accuracy of the overall motion vector.
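A least-squares fit of a translation-plus-rotation model to the feature-point correspondences, in the spirit of this step, can be sketched as follows. The four-parameter similarity model is an assumption for the illustration; the text only states that a translation size and rotation angle are estimated by least squares:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of x' = a*x - b*y + tx, y' = b*x + a*y + ty
    (rotation/scale + translation) to feature-point pairs."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
    rhs = dst.reshape(-1)                 # interleaved x'0, y'0, x'1, ...
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    angle = np.degrees(np.arctan2(b, a))  # rotation component in degrees
    return a, b, tx, ty, angle

# Synthetic correspondences: 30 points rotated 2 degrees and translated.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(30, 2))
theta = np.radians(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([3.0, -1.5])
a, b, tx, ty, angle = fit_similarity(src, dst)
```

Because every feature point contributes two equations, the solve averages out per-point noise, which is the accuracy benefit the least squares method provides here.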
Step 706: and calculating the picture content motion vector of the current two adjacent frame images.
In this step, the picture content motion vector of the current two adjacent frame images is calculated by formula (1) of this specification. Specifically, the picture content motion vector in the overall motion vector of the current two adjacent frame images is calculated from the picture content motion vector in the overall motion vector of the previous two adjacent frame images, the overall motion vector of the current two adjacent frame images, and a preset weight. It should be noted that the picture content motion vector of the first two adjacent frame images in the motion video is equal to their overall motion vector.
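The recursion of formula (1) can be sketched directly; the weight beta = 0.9 and the sample motion values below are invented for the illustration:

```python
def picture_content_motion(measure_seq, beta=0.9):
    """Recursive estimate mean_V' = beta*mean_V + (1-beta)*measure,
    initialised to the first pair's overall motion vector as the
    text specifies (beta is a preset weight between 0 and 1)."""
    mean_v = measure_seq[0]
    out = [mean_v]
    for measure in measure_seq[1:]:
        mean_v = beta * mean_v + (1.0 - beta) * measure
        out.append(mean_v)
    return out

# overall x-translation per frame pair: a steady 2 px pan plus shake
overall = [2.0, 2.4, 1.7, 2.2, 1.8]
content = picture_content_motion(overall)
shake = [m - c for m, c in zip(overall, content)]   # step 707's input
```

The smoothed sequence tracks the deliberate pan while the residual, the shake motion vector, carries the high-frequency component that the compensation later removes.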
Step 707: and calculating the shake motion vector of the current two adjacent frames of images.
In this step, a dithering motion vector in the overall motion vectors of the current two adjacent frames of images is calculated according to the overall motion vectors of the current two adjacent frames of images and the picture content motion vectors in the overall motion vectors of the current two adjacent frames of images.
Step 708: and calculating a motion compensation matrix corresponding to the jitter motion vector.
In this step, Kalman filtering is used to obtain the motion compensation matrix corresponding to the shake motion vector in the overall motion vector of the current two adjacent frame images. Because Kalman filtering is a mature technique, it is not described in detail in the embodiments of this specification.
Step 709: and performing motion compensation on the previous frame image to obtain a stable image of the next frame image.
In this step, motion compensation is performed according to the motion compensation matrix obtained in step 708 and the previous image in the two current adjacent frames of images to obtain a stable image of the next frame of image. Specifically, a previous frame image of the current two adjacent frame images is copied, then the copied previous frame image of the current two adjacent frame images is subjected to motion compensation according to the motion compensation matrix, and the previous frame image of the copied current two adjacent frame images subjected to motion compensation is used as a stable image of a next frame image in the current two adjacent frame images.
When image stabilization is performed on the next pair of adjacent frame images, the previous frame image of that pair is the next frame image of the current pair. The Kalman filtering method of step 708 predicts the accumulated shake motion vector of the next frame image in the new pair relative to the first frame image of the motion video; the shake motion vector in the overall motion vector of the new pair is accumulated with the stored accumulated shake motion vector to update the measured accumulated shake motion vector; a motion compensation matrix is then calculated from the predicted accumulated shake motion vector, the measured accumulated shake motion vector, and the Kalman gain. The stored next frame image of the current pair is copied, motion compensation is performed on the copy according to the motion compensation matrix, and the motion-compensated copy is used as the stable image of the next frame image of the new pair.
Step 710: and determining an image cropping area and an image cropping proportion.
In this step, an image cropping area is determined according to the motion compensation matrix obtained in step 708 and the size of the motion video image, and an image cropping ratio is determined according to the cropping area and the resolution of the motion video image, so that the stabilized image of the compensated next frame image obtained in step 709 is cropped according to the image cropping ratio, thereby reducing the influence of cropping on the definition of the video image.
Step 711: and dividing the image cropping scale into a plurality of sub-image cropping scales.
In this step, the image cropping proportion determined in step 710 is divided into a plurality of gradually increasing sub-image cropping proportions. For example, if the image cropping proportion determined in step 710 is 5%, it may be divided into 5 sub-image cropping proportions, sorted in increasing order as 1%, 2%, 3%, 4%, and 5%.
Step 712: and cutting the stable image of the next frame image and the stable image of the subsequent frame image according to the sub-image cutting proportion.
In this step, the stable image of the next frame image and the stable images of the subsequent frame images are cropped according to the plurality of sub-image cropping proportions obtained in step 711. For example, with the sub-image cropping proportions of step 711 sorted in increasing order as 1%, 2%, 3%, 4%, and 5%, if the next frame image is the nth frame image of the motion video, its stable image is cropped at the sub-image cropping proportion of 1%, the stable image of the (n+1)th frame image at 2%, and so on until the stable image of the (n+4)th frame image is cropped at 5%, thereby reducing the influence of abrupt picture zooming on the user's viewing experience.
Step 713: and outputting the cut image.
In this step, the cropped image is output to a presentation unit, so that the shake of the motion video image is eliminated while the motion characteristics of the picture content are preserved. In addition, through the processing of steps 710 to 712, the black borders around the stabilized image are eliminated and the stabilized picture is scaled imperceptibly.
Fig. 8 is a schematic structural diagram of a computer apparatus according to an embodiment of the present disclosure; the motion video image stabilization device herein may be the computer apparatus of this embodiment, which performs the methods herein. The computer device 802 may include one or more processing devices 804, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device 802 may also include any storage resources 806 for storing any kind of information, such as code, settings, and data. For example, and without limitation, the storage resources 806 may include any one or more of the following, in any combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, and so on. More generally, any storage resource may use any technology to store information, may provide volatile or non-volatile retention of information, and may represent a fixed or removable component of the computer device 802. When the processing device 804 executes associated instructions stored in any storage resource or combination of storage resources, the computer device 802 can perform any of the operations of those instructions. The computer device 802 also includes one or more drive mechanisms 808, such as a hard disk drive mechanism or an optical disk drive mechanism, for interacting with any storage resource.
Computer device 802 may also include an input/output module 810(I/O) for receiving various inputs (via input device 812) and for providing various outputs (via output device 814). One particular output mechanism may include a presentation device 816 and an associated Graphical User Interface (GUI) 818. In other embodiments, input/output module 810(I/O), input device 812, and output device 814 may also be excluded, as just one computer device in a network. Computer device 802 may also include one or more network interfaces 820 for exchanging data with other devices via one or more communication links 822. One or more communication buses 824 couple the above-described components together.
Communication link 822 may be implemented in any manner, such as over a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. The communication link 822 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
Corresponding to the methods in figs. 2-5 and 7, the embodiments herein also provide a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, performs the above steps.
Embodiments herein also provide computer-readable instructions which, when executed by a processor, cause the processor to perform the methods shown in figs. 2-5 and 7.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" merely describes an association between objects and indicates that three relations may exist. For example, A and/or B may represent: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relation between the preceding and following objects.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional units in the embodiments herein may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments herein are explained using specific examples, which are presented only to aid in understanding the methods and their core concepts. Meanwhile, those of ordinary skill in the art may, following the ideas herein, make changes to the specific implementation and application scope. In summary, the contents of this specification should not be construed as limiting this document.

Claims (10)

1. A method for image stabilization of a moving video image, the method comprising,
acquiring images of two adjacent frames in a motion video;
respectively determining the characteristic points of a previous frame image and a next frame image in the images of the two adjacent frames;
calculating the overall motion vector of the two current adjacent frames of images according to the characteristic points of the previous frame of image and the characteristic points of the next frame of image, wherein the overall motion vector comprises a picture content motion vector and a jitter motion vector;
calculating a jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images;
and performing motion compensation on the previous frame image in the current two adjacent frame images through Kalman filtering and a jitter motion vector in the overall motion vector of the current two adjacent frame images to obtain the image stabilization of the next frame image.
2. The method for image stabilization of a moving video image according to claim 1, wherein calculating the jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the last two adjacent frames of images further comprises,
calculating the picture content motion vector in the overall motion vector of the two current adjacent frames of images according to the picture content motion vector in the overall motion vector of the two previous adjacent frames of images, the overall motion vector of the two current adjacent frames of images and a preset weight, wherein the preset weight is a decimal between 0 and 1;
and calculating the jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture content motion vector in the overall motion vector of the current two adjacent frames of images.
3. The method according to claim 2, wherein the formula for calculating the motion vector of the picture content in the global motion vector of the two adjacent frames according to the motion vector of the picture content in the global motion vector of the two adjacent frames, the global motion vector of the two adjacent frames and the preset weight is as follows,
mean_V′=beta*mean_V+(1.0-beta)*measure
wherein mean _ V' represents a picture content motion vector in the global motion vector of the current two adjacent frames of images, mean _ V represents a picture content motion vector in the global motion vector of the last two adjacent frames of images, mean represents a global motion vector of the current two adjacent frames of images, and beta represents the preset weight;
when measure is the overall motion vector of the first two adjacent frame images in the motion video, the picture content motion vector mean_V′ in the overall motion vector of the first two adjacent frame images is equal to the overall motion vector measure of the first two adjacent frame images.
4. The image stabilization method for a motion video image according to claim 1, wherein performing motion compensation on the previous frame of image in the current two adjacent frames of images through Kalman filtering and the jitter motion vector in the overall motion vector of the current two adjacent frames of images to obtain the stabilized image of the next frame of image further comprises:
obtaining, through Kalman filtering, a motion compensation matrix corresponding to the jitter motion vector in the overall motion vector of the current two adjacent frames of images;
and performing motion compensation on the previous frame of image in the current two adjacent frames of images according to the motion compensation matrix to obtain the stabilized image of the next frame of image.
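Claim 4's smoothing step can be sketched with a scalar Kalman filter per axis feeding a translation-only compensation matrix. The noise parameters q and r, the constant-position state model, and the 2x3 matrix layout are assumptions for illustration; the claims do not fix them, and a real pipeline would apply such a matrix to the frame with an affine warp (e.g. OpenCV's cv2.warpAffine):

```python
class Kalman1D:
    """Minimal constant-position Kalman filter for one motion axis."""
    def __init__(self, q=1e-3, r=1e-1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise

    def update(self, z):
        self.p += self.q                # predict: variance grows by q
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

def compensation_matrix(jitter_dx, jitter_dy, kf_x, kf_y):
    """2x3 affine translation that shifts the frame against the smoothed jitter."""
    sx, sy = kf_x.update(jitter_dx), kf_y.update(jitter_dy)
    return [[1.0, 0.0, -sx], [0.0, 1.0, -sy]]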
5. The image stabilization method for a motion video image according to claim 4, wherein after performing motion compensation on the previous frame of image in the current two adjacent frames of images according to the motion compensation matrix to obtain the stabilized image of the next frame of image, the method further comprises:
determining an image cropping area according to the motion compensation matrix and the size of the motion video image;
and cropping the stabilized image of the next frame of image according to the image cropping area.
6. The image stabilization method for a motion video image according to claim 5, further comprising, after determining the image cropping area according to the motion compensation matrix and the size of the motion video image:
determining an image cropping ratio according to the cropping area and the resolution of the motion video image;
and cropping the stabilized image of the next frame of image according to the image cropping ratio.
7. The image stabilization method for a motion video image according to claim 6, wherein cropping the stabilized image of the next frame of image according to the image cropping ratio further comprises:
dividing the image cropping ratio into a plurality of sub-cropping ratios;
sorting the sub-cropping ratios in ascending order;
and cropping the stabilized image of the next frame of image and the stabilized images of the subsequent frames of images according to the sorted sub-cropping ratios.
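The progressive cropping of claim 7 can be sketched as follows; the even split into equal increments and the centre-crop geometry are assumptions, since the claim only requires ascending sub-ratios:

```python
def progressive_crop_ratios(target_ratio, steps):
    """Split the cropping ratio into ascending sub-ratios so the zoom-in
    is spread over `steps` consecutive frames instead of one jump."""
    return [target_ratio * (i + 1) / steps for i in range(steps)]

def crop_window(width, height, ratio):
    """Centre-crop rectangle (x, y, w, h) after removing `ratio`
    of the frame's width and height (half from each side)."""
    dx, dy = int(width * ratio / 2), int(height * ratio / 2)
    return dx, dy, width - 2 * dx, height - 2 * dy
```

Applying the sorted sub-ratios one per frame ramps the crop up gradually, which hides the resolution change from the viewer.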
8. An image stabilization apparatus for a motion video image, comprising:
an image acquisition unit, configured to acquire two adjacent frames of images in a motion video;
an image feature point determination unit, configured to respectively determine feature points of the previous frame of image and of the next frame of image in the two adjacent frames of images;
an overall motion vector calculation unit, configured to calculate the overall motion vector of the current two adjacent frames of images according to the feature points of the previous frame of image and the feature points of the next frame of image, the overall motion vector comprising a picture-content motion vector and a jitter motion vector;
a jitter motion vector calculation unit, configured to calculate the jitter motion vector in the overall motion vector of the current two adjacent frames of images according to the overall motion vector of the current two adjacent frames of images and the picture-content motion vector in the overall motion vector of the previous two adjacent frames of images;
and a motion compensation unit, configured to perform motion compensation on the previous frame of image in the current two adjacent frames of images through Kalman filtering and the jitter motion vector in the overall motion vector of the current two adjacent frames of images to obtain the stabilized image of the next frame of image.
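The overall motion vector calculation unit of claim 8 is not pinned to a specific estimator; one common choice (an assumption here, not stated in the claims) is the mean displacement of feature points matched between the two adjacent frames, e.g. corners tracked by optical flow:

```python
def global_motion_vector(pts_prev, pts_next):
    """Overall motion vector as the mean displacement of matched feature
    points, given as equal-length lists of (x, y) pairs."""
    n = len(pts_prev)
    dx = sum(xn - xp for (xp, _), (xn, _) in zip(pts_prev, pts_next)) / n
    dy = sum(yn - yp for (_, yp), (_, yn) in zip(pts_prev, pts_next)) / n
    return dx, dy
```

A production implementation would typically obtain the point pairs from cv2.goodFeaturesToTrack followed by cv2.calcOpticalFlowPyrLK, and might use a robust average to discard outlier matches on moving objects.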
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory, wherein the computer program, when executed by the processor, implements the method of any one of claims 1 to 7.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when run by a processor of a computer device, implements the method of any one of claims 1 to 7.
CN202111336933.4A 2021-11-12 2021-11-12 Image stabilizing method, device, equipment and storage medium for motion video image Pending CN114205521A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111336933.4A CN114205521A (en) 2021-11-12 2021-11-12 Image stabilizing method, device, equipment and storage medium for motion video image

Publications (1)

Publication Number Publication Date
CN114205521A (en) 2022-03-18

Family

ID=80647418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111336933.4A Pending CN114205521A (en) 2021-11-12 2021-11-12 Image stabilizing method, device, equipment and storage medium for motion video image

Country Status (1)

Country Link
CN (1) CN114205521A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692692A (en) * 2009-11-02 2010-04-07 彭健 Method and system for electronic image stabilization
JP2012034361A (en) * 2010-07-30 2012-02-16 Fujitsu Ltd Camera shake correction method and camera shake correction device
JP2015139139A (en) * 2014-01-23 2015-07-30 キヤノン株式会社 Video processing apparatus and video processing method
CN109743495A (en) * 2018-11-28 2019-05-10 深圳市中科视讯智能系统技术有限公司 Video image electronic stability augmentation method and device
CN111539872A (en) * 2020-04-23 2020-08-14 南京理工大学 Real-time electronic image stabilization method for video image under random jitter interference

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114945107A (en) * 2022-04-15 2022-08-26 北京奕斯伟计算技术股份有限公司 Video processing method and related device
CN114945107B (en) * 2022-04-15 2024-02-02 北京奕斯伟计算技术股份有限公司 Video processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 102209 7th floor, block C, No.18, Binhe Avenue, future science and Technology City, Changping District, Beijing

Applicant after: State Grid Power Space Technology Co.,Ltd.

Applicant after: STATE GRID FUJIAN ELECTRIC POWER Co.,Ltd.

Applicant after: STATE GRID FUJIAN ELECTRIC POWER Research Institute

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: Beijing Yuandu Internet Technology Co.,Ltd.

Address before: 102209 7th floor, block C, No.18, Binhe Avenue, future science and Technology City, Changping District, Beijing

Applicant before: SGCC GENERAL AVIATION Co.,Ltd.

Applicant before: STATE GRID FUJIAN ELECTRIC POWER Co.,Ltd.

Applicant before: STATE GRID FUJIAN ELECTRIC POWER Research Institute

Applicant before: STATE GRID CORPORATION OF CHINA

Applicant before: Beijing Yuandu Internet Technology Co.,Ltd.
