CN105657432A - Video image stabilizing method for micro unmanned aerial vehicle - Google Patents

Video image stabilizing method for micro unmanned aerial vehicle

Info

Publication number
CN105657432A
CN105657432A (application CN201610018259.8A)
Authority
CN
China
Prior art keywords
frame
image
dot
video
present frame
Prior art date
Legal status
Pending
Application number
CN201610018259.8A
Other languages
Chinese (zh)
Inventor
黄俊仁
Current Assignee
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201610018259.8A priority Critical patent/CN105657432A/en
Publication of CN105657432A publication Critical patent/CN105657432A/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/56: Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/142: Edging; Contouring

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video image stabilizing method for a micro unmanned aerial vehicle. The method consists of two parts: the first part computes the rotation angle of the current frame and removes rotation from the video; the second part computes the global motion of the current frame with a bit-plane matching algorithm and fits a curve to the global motion to remove video jitter. The method can perform video stabilization efficiently in real time, handles video containing rotation at arbitrary angles, and effectively removes video jitter, so it meets well the demands of the specific application field of UAV video stabilization.

Description

A video image stabilization method for micro unmanned aerial vehicles
Technical field
The present invention relates to the technical field of image processing, and specifically to a video image stabilization method for micro unmanned aerial vehicles (micro UAVs).
Background technology
In recent years, with the continuous improvement of automation technology, unmanned aerial vehicles (UAVs) have developed rapidly in military, industrial and civil areas, and processing of video shot by UAVs has become an important branch of computer vision. During flight a UAV inevitably suffers airframe shake and sudden changes of motion, so the captured video images appear blurred and shaky, which severely degrades subsequent video processing. Stabilizing such blurred, shaky video therefore has excellent application prospects in the field of UAV video analysis.
Video stabilization refers to processing the blurred, shaky video produced by an unstable shooting platform so as to obtain a stable, smooth video image sequence. Stabilization methods fall roughly into three classes: mechanical stabilization, optical stabilization and electronic stabilization. Mechanical and optical stabilization are limited in application by drawbacks such as difficult device fabrication, high cost and large volume, whereas electronic stabilization is low-cost, easy to operate and flexible, and is the current research hotspot in the stabilization field.
Common electronic image stabilization methods include block matching, gray projection and feature matching.
Block matching is the most common motion vector estimation method: with a suitable search path it quickly and accurately finds the best matching block and so obtains the motion vector. However, block matching rests on the assumption of consistent motion within a block and can only estimate translation; when the image contains rotation, the results are inaccurate and mismatches may occur, which limits the practical application of the method.
Gray projection processes images quickly, but it places high demands on image quality: if the processed image is of low quality, the gray projection curves change too little to yield accurate motion vectors.
Feature matching selects typical image features, such as edges, contours and corner points, and estimates motion by matching them. Its key problems are how to extract features and how to match them correctly. Because it approximates human visual perception and exploits a large amount of useful image information, it usually gives good stabilization results. To achieve that effect, however, it often uses complex features, so feature extraction and matching are computationally expensive and unsuited to real-time processing. In practice this class of method can generally only stabilize video with small rotation angles, and its stabilization of video containing both large-angle rotation and jitter is poor.
As a special flight platform, a UAV has complex motion characteristics, so the captured video often contains large amounts of large-angle rotation and violent jitter. Moreover, UAV video is usually fed into real-time systems, for instance tracking systems, which require the stabilization method to process the video in real time. How to design a stabilization method that can handle video containing both large-angle rotation and jitter, with high real-time processing capability and without excessive resource consumption, is a problem that none of the existing stabilization methods solves well.
Summary of the invention
The object of the present invention is to provide a video image stabilization method for micro UAVs, solving the technical problem in the prior art that no stabilization method can process, efficiently and in real time, video containing both large-angle rotation and jitter.
The invention provides a video image stabilization method for a micro UAV, comprising the following steps:
Step S100: take each pair of adjacent frames in the video obtained by the UAV as a reference frame and a current frame; uniformly divide the reference frame and the current frame into subregions; extract one region feature point in each subregion; describe each region feature point by a 128-dimensional feature vector; apply the bidirectional nearest-neighbour distance ratio matching method to the current-frame feature point set F_1 and the reference-frame feature point set F_2 to obtain the current-to-reference match set {<dot, dot'>: dot ∈ F_1, dot' ∈ F_2} and the reference-to-current match set {<dot', dot>: dot' ∈ F_2, dot ∈ F_1}; take the intersection of the two as the match set M;
Step S200: obtain the estimated rotation angle between the reference frame and the current frame from a similarity transformation model; accumulate the estimated rotation angles over the adjacent frame pairs to obtain the rotation angle of the current frame relative to the 1st frame; rotate the current frame back by this angle to obtain a de-rotated video;
Step S300: compute the global motion μ_1^t of the de-rotated current frame by bit-plane matching; fit a curve to μ_1^t to obtain the main motion μ_s^t; obtain the motion compensation of the de-rotated current frame as μ_c^t = μ_1^t − μ_s^t; compensate the de-rotated current frame by μ_c^t to obtain a stable video.
Further, step S100 comprises the following steps:
Step S110: uniformly divide both the reference frame image and the current frame image into subregions and run the FAST corner detection algorithm in each subregion to obtain detected feature points;
Step S120: if several feature points are detected in a subregion, pick one of them at random as the feature point of that subregion; if no feature point is detected, take the centre point of the subregion as its feature point.
Further, in step S100 each region feature point is described as a feature vector by the following steps:
Step S130: take the image in a neighbourhood of radius 8 centred at the region feature point as the image block P(x, y) corresponding to that feature point; the block P(x, y) is of size 16×16;
Step S140: compute the gradient magnitude G(x, y) and the direction θ(x, y) of each pixel of P(x, y) according to formulas (1)–(2):
G(x, y) = √(G_x(x, y)² + G_y(x, y)²)   (1)
θ(x, y) = arctan(G_x(x, y) / G_y(x, y))   (2)
where G_x(x, y) is the horizontal gradient magnitude of P(x, y) and G_y(x, y) is the vertical gradient magnitude, computed as follows:
G_x(x, y) = (−1, 0, 1) ∗ P(x, y)   (3)
G_y(x, y) = (−1, 0, 1)ᵀ ∗ P(x, y)   (4)
Step S150: uniformly divide P(x, y) into 4×4 sub-blocks, giving 16 image sub-blocks; divide the gradient direction space 0–2π into 8 directions; within each sub-block accumulate a histogram over the 8 directions, using the gradient magnitude G(x, y) as the weight; each sub-block thus yields an 8-dimensional vector, and the 16 sub-blocks together give a 128-dimensional vector.
Further, in step S100 the bidirectional nearest-neighbour distance ratio matching method comprises the following steps:
Step S160: take any region feature point dot0 in the current-frame set F_1, with corresponding feature vector ft_dot0; find its nearest neighbour dot1 and second nearest neighbour dot2 in the reference-frame set F_2; denote the distance between dot0 and dot1 as Dist1 and the distance between dot0 and dot2 as Dist2, computed by formulas (5)–(6):
Dist1 = ‖ft_dot0 − ft_dot1‖ = min_{dot ∈ F_2} ‖ft_dot0 − ft_dot‖   (5)
Dist2 = ‖ft_dot0 − ft_dot2‖ = min_{dot ∈ F_2 − {dot1}} ‖ft_dot0 − ft_dot‖   (6)
If Dist1/Dist2 < 0.8, record dot0 and dot1 as a matched pair <dot0, dot1>;
Step S170: traverse all region feature points of the current-frame set F_1 by the procedure of step S160, recording every point that satisfies the condition Dist1/Dist2 < 0.8 in the current-to-reference match set {<dot, dot'>: dot ∈ F_1, dot' ∈ F_2};
Step S180: traverse all region feature points of the reference-frame set F_2 by the procedure of step S160, finding the match of each point of F_2 in the current-frame set F_1, to obtain the reference-to-current match set {<dot', dot>: dot' ∈ F_2, dot ∈ F_1};
Step S190: compute the intersection of the current-to-reference match set and the reference-to-current match set as the match set M.
Further, step S200 comprises the following steps:
Step S210: take the similarity transformation as the transformation model between two adjacent frame images; let A(x, y) and A'(x', y') be any matched pair in the match set M; the similarity transformation model is formula (7):
⎡ x' ⎤   ⎡ ε·cosθ   −ε·sinθ   t_x ⎤   ⎡ x ⎤
⎢ y' ⎥ = ⎢ ε·sinθ    ε·cosθ   t_y ⎥ · ⎢ y ⎥   (7)
⎣ 1  ⎦   ⎣   0          0      1  ⎦   ⎣ 1 ⎦
where θ is the estimated rotation angle of the video, ε is the zoom factor, (x, y) are the coordinates of A and (x', y') the coordinates of A' in the matched pair, t_x is the translation in the horizontal direction and t_y is the translation in the vertical direction;
Step S220: solve the similarity transformation model equation on the match set M by the random sample consensus (RANSAC) algorithm, obtaining the estimated rotation angle θ, the zoom factor ε, the horizontal translation t_x and the vertical translation t_y;
Step S230: starting from the 1st frame image of the video, repeat steps S210–S220 for every two adjacent frames to obtain the estimated rotation angle θ of every pair of adjacent frame images in the video;
let θ_{t−1}^t be the rotation angle of frame t relative to frame t−1; then the rotation angle of frame t relative to the 1st frame is θ_1^t = Σ_{i=1}^{t−1} θ_i^{i+1}; rotate the current frame back by the obtained angle θ_1^t to obtain the de-rotated video frame.
Further, step S300 comprises the following steps:
Step S310: express the pixel value at position (x, y) of any frame of the de-rotated video, whose gray levels range over 0–255, as:
f(x, y) = a_7·2⁷ + a_6·2⁶ + … + a_0·2⁰   (8)
where a_k takes the value 0 or 1, 0 ≤ k ≤ 7, and is the initial bit value;
rewrite a_k as g_k, the improved bit value:
g_k = a_k ⊕ a_{k+1}, 0 ≤ k ≤ 6;  g_7 = a_7   (9)
where ⊕ denotes the XOR operation;
Step S320: each pixel of any frame image has 8 bit values g_k; the k-th bit values of all pixels form the k-th order bit plane b_k(x, y), so any frame image has 8 bit-plane images b_0(x, y)–b_7(x, y); choose the 4th bit-plane images for matching, obtaining the 4th bit-plane image of the reference frame and that of the current frame, denoted Cb_4 and Db_4 respectively;
select one subimage of size M×N at each of the four corners and at the centre of image Cb_4, giving 5 subimages Csub1, …, Csub5; slide an M×N window over image Db_4, each slide yielding an M×N subimage of Db_4 denoted Dsub_i; compute the matching degree of subimage Csub_i and subimage Dsub_i:
DT = (1/(M·N)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} Dsub_i(x, y) ⊕ Csub_i(x, y)   (10)
the two images corresponding to the minimum DT value are the best match; compute on Db_4 the best matching block image of each of Csub1, …, Csub5, denoted Dsub1, …, Dsub5 respectively; the coordinate offset of each best-matching pair {Csub_i, Dsub_i | i = 1, …, 5} is denoted (m_i, n_i) and is the motion vector of subimage Csub_i;
Step S330: for each of the 5 subimages of the reference frame, compute the motion vector of subimage Csub_i and subimage Dsub_i by steps S310–S320; the 5 motion vectors are median-filtered to obtain the motion vector of the current frame;
starting from the 1st frame of the video, compute the motion vector of every two adjacent frames by steps S310–S330; let μ_i^{i+1} be the motion vector of frame i+1 relative to frame i; accumulating the motion vectors of the frame images gives the global motion μ_1^t of the current frame relative to the 1st frame, computed as follows:
μ_1^t = Σ_{i=1}^{t−1} μ_i^{i+1}   (11)
fit a curve to the global motion μ_1^t by the least squares method; denote the smoothed vector, the main motion, as μ_s^t; the motion compensation μ_c^t of the current frame is computed as follows:
μ_c^t = μ_1^t − μ_s^t   (12)
compensate the current frame according to the obtained motion compensation μ_c^t to obtain a stable video.
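The smoothing and compensation of formulas (11)–(12) can be sketched per motion component as follows (a minimal sketch: the quadratic fitting order and the function name are illustrative assumptions; the patent only specifies least-squares curve fitting):

```python
import numpy as np

def motion_compensation(global_motion, degree=2):
    """Fit the accumulated global motion mu_1^t with a low-order polynomial
    by least squares and take the residual as the per-frame compensation:
    mu_c^t = mu_1^t - mu_s^t. `global_motion` is one component (x or y) of
    the accumulated motion, indexed by frame number."""
    global_motion = np.asarray(global_motion, dtype=float)
    t = np.arange(len(global_motion))
    coeffs = np.polyfit(t, global_motion, degree)   # least-squares fit
    smooth = np.polyval(coeffs, t)                  # main (intended) motion mu_s^t
    return global_motion - smooth                   # jitter to subtract, mu_c^t
```

A perfectly smooth motion curve yields zero compensation, so only the high-frequency jitter around the fitted trajectory is removed.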
Technical effects of the present invention:
The present invention provides a video image stabilization method for micro UAVs that splits stabilization into two parts: the first part computes the rotation angle of the current frame and removes rotation from the video; the second part computes the global motion of the current frame with the bit-plane matching algorithm and fits a curve to the global motion, thereby achieving the goal of processing video containing large-angle rotation and jitter efficiently and in real time.
The video image stabilization method for micro UAVs provided by the invention can perform video stabilization efficiently in real time. In particular, it separates the rotation of the video from its jitter and treats the two different kinds of motion individually, so the method can handle jittery video containing rotation at arbitrary angles and can effectively remove the jitter of the video, meeting well the requirements of the specific application field of UAV video stabilization.
The above and other aspects of the present invention will become more apparent from the following description of the various embodiments of the video image stabilization method for micro UAVs proposed by the present invention.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the video image stabilization method for micro UAVs according to the preferred embodiment of the present invention.
Detailed description of the invention
The accompanying drawings, which constitute a part of this application, are provided for further understanding of the present invention; the schematic embodiments of the present invention and their description serve to explain the present invention and do not constitute an undue limitation of it.
When the method provided by the present invention is used to process UAV-shot video containing rotation and jitter, a feature point matching algorithm and a suitable transformation model are first used to compute the rotation angle between two adjacent frames (the former frame is denoted the reference frame and the latter frame the current frame); the current frame is then rotated back by the same angle, thereby removing the rotation of the video.
Referring to Fig. 1, the invention provides a video image stabilization method for a micro UAV, comprising the following steps:
Step S100: take each pair of adjacent frames in the video obtained by the UAV as a reference frame and a current frame; uniformly divide the reference frame and the current frame into subregions; extract one region feature point in each subregion; describe each region feature point by a 128-dimensional feature vector; apply the bidirectional nearest-neighbour distance ratio matching method to the current-frame feature point set F_1 and the reference-frame feature point set F_2 to obtain the current-to-reference match set {<dot, dot'>: dot ∈ F_1, dot' ∈ F_2} and the reference-to-current match set {<dot', dot>: dot' ∈ F_2, dot ∈ F_1}; take the intersection of the two as the match set M;
Step S200: obtain the estimated rotation angle between the reference frame and the current frame from a similarity transformation model; accumulate the estimated rotation angles over the adjacent frame pairs to obtain the rotation angle of the current frame relative to the 1st frame; rotate the current frame back by this angle to obtain the de-rotated video frame;
Step S300: compute the global motion μ_1^t of the current frame by bit-plane matching; fit a curve to μ_1^t to obtain the main motion μ_s^t; obtain the motion compensation of the current frame as μ_c^t = μ_1^t − μ_s^t; compensate the current frame by μ_c^t to obtain a stable video.
The bidirectional nearest-neighbour distance ratio matching, the similarity transformation model and the bit-plane matching used here may all follow conventional methods. The method removes the rotation of the existing UAV video frame by frame and then uses bit-plane matching to remove the jitter of the de-rotated current frame image. It thereby exploits the efficiency and good jitter-removal performance of bit-plane matching while avoiding its weakness, namely poor jitter removal on images that still contain rotation, improving the removal quality on the basis of efficient image processing.
Preferably, so that the correspondence between sequence images can be computed efficiently, the feature points chosen by the method have the following properties: they are distributed fairly evenly over the processed image, and their number is moderate.
Preferably, step S100 comprises the following steps:
Step S110: uniformly divide both the reference frame image and the current frame image into subregions and run the FAST corner detection algorithm in each subregion to obtain detected feature points;
Step S120: if several feature points are detected in a subregion, pick one of them at random as the feature point of that subregion; if no feature point is detected, take the centre point of the subregion as its feature point.
The above steps select the feature points. This extraction procedure guarantees that every subregion of both the reference frame and the current frame images has exactly one feature point. For ease of subsequent description, the feature point set of the current frame is denoted F_1 and the feature point set of the reference frame is denoted F_2.
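The selection rule of steps S110–S120 can be sketched as follows (a hypothetical sketch: the grid dimensions, the `detected` map standing in for the per-subregion FAST detections, and the fixed random seed are illustrative assumptions, not part of the patent):

```python
import numpy as np

def select_subregion_points(h, w, rows, cols, detected):
    """One feature point per subregion (steps S110-S120): a randomly chosen
    detected corner if the detector fired in that subregion, otherwise the
    subregion's centre point. `detected` maps (row, col) -> [(x, y), ...]."""
    rng = np.random.default_rng(0)          # fixed seed, illustrative only
    sub_h, sub_w = h // rows, w // cols
    points = []
    for r in range(rows):
        for c in range(cols):
            cands = detected.get((r, c), [])
            if cands:
                # several detections: pick one at random as the region feature point
                points.append(cands[rng.integers(len(cands))])
            else:
                # no detection: fall back to the centre of the subregion
                points.append((c * sub_w + sub_w // 2, r * sub_h + sub_h // 2))
    return points
```

This guarantees exactly rows × cols feature points per frame, evenly spread over the image, as the paragraph above requires.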
Preferably, each region feature point is described as a feature vector in step S100 by the following steps:
Step S130: take the image in a neighbourhood of radius 8 centred at the region feature point as the image block P(x, y) corresponding to that feature point; the block P(x, y) is of size 16×16;
Step S140: compute the gradient magnitude G(x, y) and the direction θ(x, y) of each pixel of P(x, y) according to the following formulas (1)–(2):
G(x, y) = √(G_x(x, y)² + G_y(x, y)²)   (1)
θ(x, y) = arctan(G_x(x, y) / G_y(x, y))   (2)
where G_x(x, y) is the horizontal gradient magnitude of P(x, y) and G_y(x, y) is the vertical gradient magnitude, computed as follows:
G_x(x, y) = (−1, 0, 1) ∗ P(x, y)   (3)
G_y(x, y) = (−1, 0, 1)ᵀ ∗ P(x, y)   (4)
Step S150: uniformly divide P(x, y) into 4×4 sub-blocks, giving 16 image sub-blocks; divide the gradient direction space 0–2π into 8 directions; within each sub-block accumulate a histogram over the 8 directions, using the gradient magnitude G(x, y) as the weight; each sub-block thus yields an 8-dimensional vector, and the 16 sub-blocks together give a 128-dimensional vector.
Each region feature point thus obtains a 128-dimensional vector, recorded as the feature vector corresponding to that feature point. This feature vector effectively characterises the gradient, direction and other properties of the feature point's neighbourhood, has a certain noise immunity and is fairly robust.
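Steps S130–S150 amount to a SIFT-like cell-histogram descriptor. A minimal NumPy sketch, assuming the 16×16 patch is already cut out around the feature point and using zero gradients on the patch border (a border-handling assumption the patent does not specify):

```python
import numpy as np

def describe_patch(patch):
    """128-D descriptor of a 16x16 patch (steps S130-S150): split into 4x4
    cells, build an 8-bin orientation histogram per cell, weighted by the
    gradient magnitude of each pixel."""
    patch = np.asarray(patch, dtype=float)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]   # (-1 0 1) horizontal kernel
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]   # (-1 0 1)^T vertical kernel
    mag = np.hypot(gx, gy)                       # gradient magnitude G(x, y)
    ang = np.arctan2(gy, gx) % (2 * np.pi)       # direction in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * 8).astype(int), 7)
    desc = []
    for i in range(4):
        for j in range(4):
            cell = (slice(4 * i, 4 * i + 4), slice(4 * j, 4 * j + 4))
            hist = np.bincount(bins[cell].ravel(),
                               weights=mag[cell].ravel(), minlength=8)
            desc.append(hist)
    return np.concatenate(desc)                  # 16 cells x 8 bins = 128 dims
```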
Preferably, in step S100 the nearest-neighbour distance ratio matching method comprises the following steps:
Step S160: take any region feature point dot0 in the current-frame set F_1, with corresponding feature vector ft_dot0; find its nearest neighbour dot1 and second nearest neighbour dot2 in the reference-frame set F_2; denote the distance between dot0 and dot1 as Dist1 and the distance between dot0 and dot2 as Dist2, computed by formulas (5)–(6):
Dist1 = ‖ft_dot0 − ft_dot1‖ = min_{dot ∈ F_2} ‖ft_dot0 − ft_dot‖   (5)
Dist2 = ‖ft_dot0 − ft_dot2‖ = min_{dot ∈ F_2 − {dot1}} ‖ft_dot0 − ft_dot‖   (6)
If Dist1/Dist2 < 0.8, record dot0 and dot1 as a matched pair <dot0, dot1>;
Step S170: traverse all region feature points of the current-frame set F_1 by the procedure of step S160, recording every point that satisfies the condition Dist1/Dist2 < 0.8 in the current-to-reference match set {<dot, dot'>: dot ∈ F_1, dot' ∈ F_2};
Step S180: traverse all region feature points of the reference-frame set F_2 by the procedure of step S160, finding the match of each point of F_2 in the current-frame set F_1, to obtain the reference-to-current match set {<dot', dot>: dot' ∈ F_2, dot ∈ F_1};
Step S190: compute the intersection of the current-to-reference match set and the reference-to-current match set as the match set M.
The match set M obtained by the above steps has high matching precision and rejects a large number of mismatches.
Dist1/Dist2 < 0.8 indicates that the two region feature points to be matched have been matched successfully.
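The bidirectional ratio test of steps S160–S190 can be sketched as follows (the function names and the dictionary representation of the match sets are illustrative; the 0.8 threshold is the one from the text):

```python
import numpy as np

def ratio_matches(F1, F2, ratio=0.8):
    """One-directional nearest-neighbour ratio test (steps S160-S170):
    keep (i, j) when the nearest descriptor in F2 is markedly closer
    than the second nearest, i.e. Dist1/Dist2 < 0.8."""
    out = {}
    for i, f in enumerate(F1):
        d = np.linalg.norm(F2 - f, axis=1)   # distances to every descriptor in F2
        j1, j2 = np.argsort(d)[:2]           # nearest and second nearest neighbour
        if d[j1] / d[j2] < ratio:
            out[i] = j1
    return out

def bidirectional_matches(F1, F2, ratio=0.8):
    """Step S190: intersect the forward and reverse match sets to get M."""
    fwd = ratio_matches(F1, F2, ratio)
    rev = ratio_matches(F2, F1, ratio)
    return {(i, j) for i, j in fwd.items() if rev.get(j) == i}
```

Intersecting the forward and reverse sets is what rejects most of the remaining mismatches.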
Preferably, step S200 comprises the following steps:
Step S210: select the similarity transformation as the transformation model between two adjacent frame images; let A(x, y) and A'(x', y') be any matched pair in the match set M; the similarity transformation model is formula (7):
⎡ x' ⎤   ⎡ ε·cosθ   −ε·sinθ   t_x ⎤   ⎡ x ⎤
⎢ y' ⎥ = ⎢ ε·sinθ    ε·cosθ   t_y ⎥ · ⎢ y ⎥   (7)
⎣ 1  ⎦   ⎣   0          0      1  ⎦   ⎣ 1 ⎦
The similarity transformation model of formula (7) contains 4 parameters: the estimated rotation angle θ of the video, the zoom factor ε, the horizontal translation t_x and the vertical translation t_y; (x, y) and (x', y') are the coordinates of A and A' in any matched pair.
Step S220: solve the similarity transformation model equation on the match set M by the random sample consensus (RANSAC) algorithm, obtaining the estimated rotation angle θ, the zoom factor ε, the horizontal translation t_x and the vertical translation t_y. RANSAC may be applied in the conventional way.
Step S230: starting from the 1st frame image of the video, repeat steps S210–S220 for every two adjacent frames to obtain the estimated rotation angle θ of every pair of adjacent frame images in the video;
let θ_{t−1}^t be the rotation angle of frame t relative to frame t−1; then the rotation angle of frame t relative to the 1st frame is θ_1^t = Σ_{i=1}^{t−1} θ_i^{i+1}; rotate the current frame back by the obtained angle θ_1^t to obtain the de-rotated video frame.
This method removes the rotation of any frame image relative to the first frame image, eliminating the image processing error caused by rotation in the UAV-acquired video.
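A sketch of the angle estimation and accumulation of steps S210–S230. The patent solves model (7) inside RANSAC to reject outliers; this minimal version assumes the match set is already outlier-free and uses plain linear least squares, which is the estimation step RANSAC would run on each sample:

```python
import numpy as np

def estimate_rotation(src, dst):
    """Estimate the inter-frame rotation angle theta of similarity model (7)
    by linear least squares over matched points (src[i] -> dst[i]).
    Parametrise a = eps*cos(theta), b = eps*sin(theta), so the model is
    linear: x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    p, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)   # p = [a, b, tx, ty]
    return np.arctan2(p[1], p[0])                          # theta

def accumulated_angles(frame_pair_angles):
    """Step S230: the rotation of frame t relative to frame 1 is the sum of
    the adjacent-frame angles theta_i^{i+1} up to t."""
    return np.cumsum(frame_pair_angles)
```

Rotating each frame back by its accumulated angle gives the de-rotated video described above.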
After the rotation in the video images has been removed, the de-rotated video is next processed to remove jitter. The present invention uses bit-plane matching with a diamond search strategy to estimate and compensate the translation between every two adjacent frames, achieving fast removal of video jitter. The basic idea of bit-plane matching is to replace the gray-level image by bit planes, enabling fast matching of image blocks. Bit-plane matching has the advantages of low computation and high accuracy, but its jitter removal is poor on video images whose rotation has not been removed. The present invention therefore removes the rotation in the video first, so that bit-plane matching plays to its strengths while its weakness is avoided.
Preferably, step S300 comprises the following steps:
Step S310: express the pixel value at position (x, y) of any frame image with gray levels 0–255 as:
$F(x,y) = a_7 2^7 + a_6 2^6 + \cdots + a_0 2^0$    (8)
where a_k (0 ≤ k ≤ 7) takes the value 0 or 1 and is the initial bit value. Since a small change in the pixel gray value can cause large fluctuations in a_k, a_k is rewritten as g_k, the improved bit value:
$g_k = a_k \oplus a_{k+1},\quad 0 \le k \le 6;\qquad g_7 = a_7$    (9)
where ⊕ denotes the XOR operation;
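The improved coding of Eqs. (8)–(9) is exactly the binary-to-Gray-code conversion, which a short sketch in Python makes concrete (the helper name `gray_bits` is illustrative, not from the patent):

```python
def gray_bits(value):
    """Improved bit values g_k of Eq. (9) for one pixel value (0-255):
    g_k = a_k XOR a_(k+1) for k = 0..6 and g_7 = a_7, which is the
    standard binary-to-Gray-code conversion v ^ (v >> 1)."""
    gray = value ^ (value >> 1)
    return [(gray >> k) & 1 for k in range(8)]
```

For example, the pixel values 127 and 128 differ in all eight plain bits a_k but in only one Gray bit g_k, which is why the g_k planes are robust to small gray-level changes.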
Step S320: each pixel of any frame has 8 bit values g_k (0 ≤ k ≤ 7); the k-th bit values of all pixels form the k-th-order bit plane b_k(x, y), so any frame image has 8 bit-plane images b_0(x, y)–b_7(x, y). For computational efficiency, only the 4th bit-plane image is selected for matching. Compute the 4th bit-plane image of the reference frame and the 4th bit-plane image of the current frame, denoted Cb4 and Db4 respectively.
Select one sub-image of size M × N at each corner and at the centre of image Cb4, giving 5 sub-images Csub1, …, Csub5 in total. Slide an M × N window over image Db4; each slide yields an M × N sub-image of Db4, denoted Dsubi. Compute the matching degree of sub-image Csubi and sub-image Dsubi:
$DT = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \mathrm{Dsub}i(x,y)\oplus \mathrm{Csub}i(x,y)$    (10)
When the DT value is minimal, the two corresponding images are the best match. Compute on image Db4 the best matching block images for Csub1, …, Csub5, denoted Dsub1, …, Dsub5. The coordinate offset of each best-matching image pair {Csubi, Dsubi | i = 1, …, 5} is denoted (mi, ni) and is the motion vector of sub-image Csubi.
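The matching of Eq. (10) and the search for the best offset can be sketched as follows (plain Python over nested lists of 0/1 values; an exhaustive search over a small offset range stands in for the diamond search strategy, and all names are illustrative):

```python
def match_degree(csub, dsub):
    """Eq. (10): fraction of mismatched bits between two equal-size
    binary blocks; 0.0 is a perfect match."""
    m, n = len(csub), len(csub[0])
    return sum(csub[i][j] ^ dsub[i][j]
               for i in range(m) for j in range(n)) / (m * n)

def best_offset(cplane, dplane, x0, y0, m, n, radius):
    """Slide an m x n window over dplane around (x0, y0) and return the
    offset (dx, dy) minimising Eq. (10): the motion vector of the
    sub-image of cplane whose top-left corner is (x0, y0)."""
    csub = [row[y0:y0 + n] for row in cplane[x0:x0 + m]]
    best = None
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x and x + m <= len(dplane) and 0 <= y and y + n <= len(dplane[0]):
                dsub = [row[y:y + n] for row in dplane[x:x + m]]
                d = match_degree(csub, dsub)
                if best is None or d < best[0]:
                    best = (d, (dx, dy))
    return best[1]
```

Because the blocks are binary, the inner comparison is a single XOR per pixel, which is what makes bit-plane matching cheap.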
Step S330: for each of the 5 sub-images of the reference frame, the motion vector between sub-image Csubi and sub-image Dsubi is computed by steps S310–S320; the 5 motion vectors are median-filtered to obtain the motion vector μ of the current frame;
Starting from the 1st frame of the video, the motion vector of every two adjacent frames is computed by steps S310–S330 for each frame image. Let μ_{t-1}^t be the motion vector of frame t relative to frame t−1; accumulating the motion vectors over the frames yields the global motion μ_1^t of the current frame relative to the 1st frame, computed as:
$\mu_1^t = \sum_{i=1}^{t-1} \mu_i^{i+1}$    (11)
Processing by the above method achieves fast and accurate matching of image blocks and thus correct motion vectors; it is particularly well suited to video stabilization with rotation compensation.
Preferably, to improve computational efficiency while preserving accuracy, only the 4th bit plane of each frame image is used for block matching.
The least squares method is applied to fit a curve to the motion vector μ_1^t; the smoothed vector is denoted the main motion μ_s^t. The motion compensation μ_c^t of the current frame is computed as:
$\mu_c^t = \mu_1^t - \mu_s^t$    (12)
The current frame is compensated according to the obtained motion compensation μ_c^t, yielding a stabilized current frame.
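The chain of Eqs. (11)–(12) (accumulate, smooth, subtract) can be sketched as below (plain Python; the patent does not fix the degree of the fitted curve, so a straight-line least-squares fit per component is assumed here, and all names are illustrative):

```python
def global_motion(adjacent_vectors):
    """Eq. (11): running sum of the adjacent-frame motion vectors (dx, dy),
    giving each frame's global motion relative to the 1st frame."""
    gx = gy = 0.0
    out = []
    for dx, dy in adjacent_vectors:
        gx += dx
        gy += dy
        out.append((gx, gy))
    return out

def least_squares_line(ys):
    """Fit y = a + b*t by ordinary least squares and return the fitted
    (smoothed) values: the main motion of the text."""
    n = len(ys)
    ts = list(range(n))
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    b = (n * sty - st * sy) / (n * stt - st * st)
    a = (sy - b * st) / n
    return [a + b * t for t in ts]

def compensation(global_vecs):
    """Eq. (12): per-frame compensation = global motion - main motion."""
    sx = least_squares_line([g[0] for g in global_vecs])
    sy = least_squares_line([g[1] for g in global_vecs])
    return [(g[0] - x, g[1] - y) for g, x, y in zip(global_vecs, sx, sy)]
```

For a steadily panning camera the global motion is already linear, so the compensation is near zero and the intentional pan is preserved; only deviations from the fitted line (the jitter) are removed.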
In step S320, take sub-image Csub1 with upper-left vertex coordinate (x1, y1) as an example; an M × N window is slid over image Db4. To improve computational efficiency, the upper-left vertex of the sliding window slides only within a neighbourhood of radius 10 centred on (x1, y1). Each slide yields an M × N sub-image of Db4, denoted Dsub1; compute the matching degree of sub-image Csub1 and sub-image Dsub1:
$D(m,n) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \mathrm{Dsub1}(x,y)\oplus \mathrm{Csub1}(x,y)$    (10)
where m and n denote the offset of the sliding window relative to the coordinate (x1, y1), so |m| < 10 and |n| < 10;
This matching degree counts the number of mismatched bits, so a smaller value indicates a better match. Compute the matching degree for every sliding position; the position where D(m, n) is minimal is the best match of the two images, and the corresponding offset, denoted (m1, n1), is the motion vector sought for sub-image Csub1.
If the video sequence were stable, its motion vector would be smooth. The video here, however, usually jitters, so the motion vector is not smooth either. The present invention fits a curve to the motion vector μ_1^t by the least squares method, which effectively smooths the motion vector. The smoothed vector is denoted the main motion μ_s^t, and the motion compensation μ_c^t of the current frame is computed by formula (12).
The method specifically comprises the following steps:
For a UAV-captured video containing rotation and jitter, the video rotation is eliminated first. The present invention uses a feature point matching algorithm with a suitably chosen transformation model to compute the rotation angle between two adjacent frames (the earlier frame is denoted the reference frame and the later frame the current frame), and then rotates the current frame back by the same angle, thereby eliminating the video rotation.
To compute the correspondence between sequence images efficiently, the chosen feature points should have the following properties: they should be distributed fairly evenly over the image, and their number should be moderate. Accordingly, the present invention first divides the reference frame and the current frame uniformly into a number of subregions and performs feature point detection with the FAST corner detection algorithm in each subregion. If several feature points are detected, only one is selected, at random; if no feature point is detected, the midpoint of the subregion is chosen as the feature point. With this extraction method, every subregion of both the reference frame and the current frame contains exactly one feature point. The feature point set of the current frame is denoted F1 and that of the reference frame F2.
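The per-subregion selection rule can be sketched as follows (Python; FAST itself is out of scope here, so the corner detector is passed in as a callable, and all names are illustrative):

```python
import random

def grid_feature_points(height, width, rows, cols, detect):
    """One feature point per subregion: run the detector on each cell,
    pick one detected corner at random, and fall back to the cell's
    midpoint when nothing is detected (the rule of the text)."""
    ch, cw = height // rows, width // cols
    points = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * ch, c * cw
            corners = detect(y0, x0, ch, cw)   # corners inside this cell
            if corners:
                points.append(random.choice(corners))
            else:
                points.append((y0 + ch // 2, x0 + cw // 2))
    return points
```

With a detector that never fires, every cell contributes its midpoint, so a 4×4 grid on a 64×64 frame yields exactly 16 evenly spread points, which is the uniform coverage the text asks for.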
Each feature point is then described by a vector, as follows:
(1) Centred on the feature point, take the image within a neighbourhood of radius 8 as the image block P(x, y) corresponding to this feature point; the block size is 16×16;
(2) Compute the gradient magnitude G(x, y) and direction θ(x, y) of each pixel of the image block P(x, y):
$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$
$\theta(x,y) = \arctan\!\left(G_x(x,y)/G_y(x,y)\right)$
where the horizontal gradient Gx and the vertical gradient Gy are obtained by convolving the image P(x, y) with a horizontal filter and a vertical filter respectively, computed as follows:
$G_x(x,y) = (-1\ \ 0\ \ 1) * P(x,y)$
$G_y(x,y) = (-1\ \ 0\ \ 1)^T * P(x,y)$
(3) Divide the image block P(x, y) uniformly into 4×4 sub-blocks, yielding 16 image sub-blocks. Divide the gradient direction space 0–2π into 8 directions and accumulate a direction histogram within each sub-block, using the gradient magnitude as the weight; each sub-block thus yields an 8-dimensional vector, and the 16 sub-blocks together yield a 128-dimensional vector.
Each feature point thus obtains a 128-dimensional vector, which is recorded as the feature vector corresponding to this feature point.
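Steps (1)–(3) can be sketched as below (plain Python on a 16×16 patch given as nested lists; the (−1 0 1) filters are applied with replicated borders, and the orientation is computed with atan2 so that it covers the full 0–2π range, a slight variation on the arctan form quoted above):

```python
import math

def descriptor_128(patch):
    """128-D descriptor for a 16x16 patch: 4x4 grid of sub-blocks times
    8 orientation bins, gradient-magnitude weighted, concatenated."""
    vec = [0.0] * 128
    for y in range(16):
        for x in range(16):
            # (-1 0 1) filters with replicated borders
            gx = patch[y][min(x + 1, 15)] - patch[y][max(x - 1, 0)]
            gy = patch[min(y + 1, 15)][x] - patch[max(y - 1, 0)][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            bin8 = min(int(ang * 8 / (2 * math.pi)), 7)   # one of 8 directions
            block = (y // 4) * 4 + (x // 4)               # one of 16 sub-blocks
            vec[block * 8 + bin8] += mag
    return vec
```

This is essentially a SIFT-style descriptor without scale or rotation normalisation, which matches the simpler construction described in the text.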
For the feature point sets F1 and F2 of the current frame and the reference frame, bidirectional matching is performed by the nearest-neighbour distance ratio of the feature vectors, as follows:
(1) Take any feature point dot0 in feature point set F1, with corresponding feature vector ft_dot0. Find its nearest neighbour dot1 and second nearest neighbour dot2 in feature point set F2, and denote the distance between dot0 and the nearest neighbour dot1 as Dist1 and the distance between dot0 and the second neighbour dot2 as Dist2, computed as follows:
$\mathrm{Dist1} = \|ft_{dot0} - ft_{dot1}\| = \min_{dot \in F_2}\|ft_{dot0} - ft_{dot}\|$
$\mathrm{Dist2} = \|ft_{dot0} - ft_{dot2}\| = \min_{dot \in F_2 - \{dot1\}}\|ft_{dot0} - ft_{dot}\|$
If Dist1/Dist2 < 0.8, the match is declared successful and recorded as a matching pair <dot0, dot1>;
(2) Traverse all feature points dot in feature point set F1 and record all successful matches, obtaining the match point set {<dot, dot'>};
(3) Reversely, find for every feature point dot in feature point set F2 its match point in feature point set F1, recorded as the second match point set;
(4) Take the intersection of the two match point sets to obtain the final match point set, denoted M.
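Steps (1)–(4) can be sketched as follows (plain Python, brute-force nearest neighbours over lists of feature vectors, with list indices standing in for the points dot; assumes at least two candidate points in each set, and all names are illustrative):

```python
import math

def one_way_matches(f1, f2, ratio=0.8):
    """Ratio-test matches from f1 to f2 (steps (1)-(2)): keep (i, j)
    when the nearest neighbour is clearly closer than the second one."""
    out = []
    for i, a in enumerate(f1):
        ranked = sorted(range(len(f2)), key=lambda j: math.dist(a, f2[j]))
        d1 = math.dist(a, f2[ranked[0]])
        d2 = math.dist(a, f2[ranked[1]])
        if d1 < ratio * d2:
            out.append((i, ranked[0]))
    return out

def bidirectional_matches(f1, f2):
    """Steps (3)-(4): intersect the forward and reverse match sets -> M."""
    forward = set(one_way_matches(f1, f2))
    backward = {(i, j) for (j, i) in one_way_matches(f2, f1)}
    return forward & backward
```

The intersection discards one-sided matches, so a point in one frame that happens to be the nearest neighbour of several points in the other frame cannot produce conflicting pairs in M.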
Since UAV-captured video mainly contains rotation, translation and scale change, the present invention selects the similarity transformation as the transformation model between two frame images. Let F and F' denote the reference frame and the current frame, and let A(x, y) and A'(x', y') be a pair of match points in the match point set M; the transformation equation is as follows:
$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \varepsilon\cos\theta & -\varepsilon\sin\theta & t_x \\ \varepsilon\sin\theta & \varepsilon\cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$    (1)
This transformation model has 4 parameters in total: the rotation angle θ, the zoom factor ε, and the translations tx and ty in the horizontal and vertical directions.
Solving the transformation equation over the match point set M yields the values of these 4 parameters.
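A closed-form least-squares solve of the 4 parameters from point pairs can be sketched as below (plain Python; in the patent this solve would sit inside the RANSAC loop of step S220, which is omitted here, and the function name is illustrative; at least two distinct points are assumed):

```python
import math

def fit_similarity(src, dst):
    """Least-squares similarity transform of Eq. (1) from point pairs
    (x, y) -> (x', y'); returns (theta, eps, tx, ty).  Uses the centred
    closed form with a = eps*cos(theta), b = eps*sin(theta)."""
    n = len(src)
    xc = sum(p[0] for p in src) / n;  yc = sum(p[1] for p in src) / n
    uc = sum(p[0] for p in dst) / n;  vc = sum(p[1] for p in dst) / n
    sa = sb = ss = 0.0
    for (x, y), (u, v) in zip(src, dst):
        dx, dy, du, dv = x - xc, y - yc, u - uc, v - vc
        sa += dx * du + dy * dv
        sb += dx * dv - dy * du
        ss += dx * dx + dy * dy
    a, b = sa / ss, sb / ss
    theta, eps = math.atan2(b, a), math.hypot(a, b)
    tx = uc - (a * xc - b * yc)
    ty = vc - (b * xc + a * yc)
    return theta, eps, tx, ty
```

Centering removes the translation from the normal equations, so rotation and scale are estimated first and tx, ty follow from the centroids.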
Starting from the 1st frame, the rotation angle of every two adjacent frames is computed by the above method. Let θ_{t-1}^t be the rotation angle of frame t relative to frame t−1; then the rotation angle of frame t relative to the 1st frame is $\theta_1^t = \sum_{i=1}^{t-1}\theta_i^{i+1}$. Rotating the current frame back according to the obtained rotation angle achieves the purpose of eliminating the video rotation.
The overall steps of the rotation-elimination part of the present invention are as follows:
(1) Uniformly divide each pair of adjacent images into a number of subregions and extract one feature point from each subregion;
(2) Describe each feature point to obtain its corresponding feature vector, then obtain the match point set between the two images by the bidirectional nearest-neighbour distance ratio matching method;
(3) Estimate the rotation angle between adjacent frames with the similarity transformation model, and obtain the rotation angle of the current frame relative to the 1st frame by accumulating the rotation angles between adjacent frames;
(4) Rotate the current frame by that rotation angle, yielding the de-rotated video frame.
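The accumulation of step (3) is a running sum, sketched minimally (the name is illustrative):

```python
def cumulative_rotation(adjacent_angles):
    """Rotation of frame t relative to frame 1 as the running sum of the
    adjacent-frame angles theta_(t-1)^t (step (3)); in step (4) frame t
    is then rotated back by the corresponding cumulative angle."""
    total, out = 0.0, []
    for a in adjacent_angles:
        total += a
        out.append(total)
    return out
```

Accumulating the small adjacent-frame angles is what removes the slow drift relative to the first frame, not just the frame-to-frame wobble.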
After the rotation has been eliminated, the video is next de-jittered. The present invention uses bit-plane matching with a diamond search strategy to estimate and compensate the inter-frame translational motion, achieving fast elimination of video jitter.
Bit-plane matching has the advantages of low computational cost and high accuracy, but performs poorly on rotated video images. Since the rotation has already been eliminated here, this defect is effectively avoided and the advantages of the method can be brought into full play.
The basic idea of bit-plane matching is to replace the grayscale image with bit planes, thereby achieving fast matching of image blocks.
The bit planes of an image are computed as follows:
(1) The pixel value at position (x, y) of a grayscale image in the range 0–255 can be represented as:
$F(x,y) = a_7 2^7 + a_6 2^6 + \cdots + a_0 2^0$
where a_k (0 ≤ k ≤ 7) takes the value 0 or 1;
(2) Because a small change in the pixel gray value can substantially change the values a_k, the above coding is improved as follows:
$g_k = a_k \oplus a_{k+1},\quad 0 \le k \le 6;\qquad g_7 = a_7$
where ⊕ denotes the XOR operation.
(3) Each pixel has 8 bit values g_k (0 ≤ k ≤ 7); the k-th bit values of all pixels form the k-th-order bit plane b_k(x, y), so one image has 8 bit planes b_0(x, y)–b_7(x, y).
For computational efficiency, only the 4th bit plane of each image is used for block matching. Five sub-images are chosen at the corners and the centre of the 4th bit-plane image of the reference frame, and the corresponding matching block image of each is computed on the 4th bit-plane image of the current frame.
Let the chosen sub-image be of size M × N and the search window in the current frame be (M+2P) × (N+2P), where P is the maximum displacement of the search window; the matching degree of the two images is then computed as:
$D_i(m,n) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} g_4^t(x,y)\oplus g_4^{t-1}(x+m,\,y+n)$
where $g_4^t$ and $g_4^{t-1}$ denote the encoded 4th-order bit planes of the current frame and the previous frame respectively.
$D_i(m,n)$ counts the number of mismatched bits between the two images; the position where its value is minimal is the best match position.
Each sub-image of the reference frame, together with its computed matching block image, yields one motion vector, so the 5 sub-images finally yield 5 motion vectors μ_1, …, μ_5. The 5 motion vectors are median-filtered to obtain the motion vector μ of the current frame.
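The median filtering of the 5 sub-image motion vectors can be sketched as follows (component-wise median, assuming an odd number of vectors; the name is illustrative):

```python
def median_vector(vectors):
    """Component-wise median of the five sub-image motion vectors, which
    rejects outlier blocks (e.g. a block lying on a moving object)."""
    xs = sorted(v[0] for v in vectors)
    ys = sorted(v[1] for v in vectors)
    mid = len(vectors) // 2
    return (xs[mid], ys[mid])
```

A mean would be dragged toward an outlier block; the median ignores up to two disagreeing blocks out of five, which is why it is used here.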
Starting from the 1st frame of the video, the motion vector of every two adjacent frames is computed. Let μ_{t-1}^t be the motion vector of frame t relative to frame t−1; accumulation yields the global motion μ_1^t of the current frame relative to the 1st frame. If the video sequence were stable, its motion vector would be smooth; the video here, however, usually jitters, so the motion vector is not smooth either. The present invention fits a curve to the motion vector by the least squares method, which effectively smooths it. The smoothed vector is denoted the main motion μ_s^t, and the motion compensation μ_c^t of the current frame is computed as:
$\mu_c^t = \mu_1^t - \mu_s^t$
The steps of the second part of the present invention, eliminating video jitter, are as follows:
(1) Starting from the 1st frame of the video, compute the motion vector of every two adjacent frames, and from these the global motion of the current frame relative to the 1st frame;
(2) Fit a curve to the global motion by the least squares method to estimate the main motion of the current frame;
(3) Compute the motion compensation of the current frame from its global motion and main motion;
(4) Compensate the current frame with the motion compensation, thereby obtaining a stabilized video.
Those skilled in the art will understand that the scope of the present invention is not restricted to the examples discussed above, and that changes and modifications may be made to them without departing from the scope of the present invention as defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the description, such illustration and description are illustrative or schematic only, and not restrictive. The present invention is not limited to the disclosed embodiments.
From a study of the drawings, the specification and the appended claims, those skilled in the art can understand and implement variations of the disclosed embodiments when practising the present invention. In the claims, the term "comprising" does not exclude other steps or elements, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope of the present invention.

Claims (6)

1. A video image stabilization method for a micro unmanned aerial vehicle, characterised by comprising the following steps:
Step S100: take the images of every two adjacent frames in the video obtained by the unmanned aerial vehicle as a reference frame and a current frame; divide the reference frame and the current frame uniformly into a number of subregions; extract one region feature point in each subregion; describe each region feature point as a 128-dimensional feature vector; for the current-frame region feature point set F1 and the reference-frame region feature point set F2, obtain by the bidirectional nearest-neighbour distance ratio matching method the match point set {<dot, dot'>, dot ∈ F1, dot' ∈ F2} from the current frame to the reference frame and the match point set {<dot', dot>, dot' ∈ F2, dot ∈ F1} from the reference frame to the current frame, and take their intersection as the match point set M;
Step S200: obtain the estimated rotation angle between the reference frame and the current frame from the similarity transformation model; accumulate the estimated rotation angles over the adjacent frame pairs to obtain the rotation angle of the current frame relative to the 1st frame; rotate the current frame by that rotation angle, yielding the de-rotated video;
Step S300: compute by bit-plane matching the global motion $\mu_1^t$ of the de-rotated current frame in the de-rotated video; fit a curve to the global motion $\mu_1^t$ to obtain the main motion $\mu_s^t$; obtain the motion compensation of the de-rotated current frame as $\mu_c^t = \mu_1^t - \mu_s^t$; and compensate the de-rotated current frame according to the motion compensation $\mu_c^t$, yielding a stabilized video.
2. The video image stabilization method for a micro unmanned aerial vehicle according to claim 1, characterised in that step S100 comprises the following steps:
Step S110: divide the two images of the reference frame and the current frame uniformly into a number of subregions, and apply the FAST corner detection algorithm within each subregion to obtain detected feature points;
Step S120: if several feature points are detected in a subregion, select one of them at random as the feature point of that subregion; if no feature point is detected, choose the midpoint of the subregion as its feature point.
3. The video image stabilization method for a micro unmanned aerial vehicle according to claim 2, characterised in that describing a region feature point as a feature vector in step S100 comprises the following steps:
Step S130: centred on the region feature point, take the image within a neighbourhood of radius 8 as the image block P(x, y) corresponding to the region feature point, the image block P(x, y) being of size 16×16;
Step S140: compute the gradient magnitude G(x, y) and the direction θ(x, y) of each pixel of the image block P(x, y) according to formulas (1)–(2),
$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$    (1)
$\theta(x,y) = \arctan\!\left(G_x(x,y)/G_y(x,y)\right)$    (2)
where Gx(x, y) is the horizontal gradient magnitude of the image block P(x, y) and Gy(x, y) is the vertical gradient magnitude of the image block P(x, y), computed as follows:
$G_x(x,y) = (-1\ \ 0\ \ 1) * P(x,y)$    (3)
$G_y(x,y) = (-1\ \ 0\ \ 1)^T * P(x,y)$    (4)
Step S150: divide the image block P(x, y) uniformly into 4×4 sub-blocks to obtain 16 image sub-blocks; divide the gradient direction space 0–2π into 8 directions; accumulate the histogram of the directions within each sub-block, using the gradient magnitude G(x, y) as the weight; each sub-block yields an 8-dimensional vector, and the 16 sub-blocks together yield a 128-dimensional vector.
4. The video image stabilization method for a micro unmanned aerial vehicle according to claim 3, characterised in that the bidirectional nearest-neighbour distance ratio matching method in step S100 comprises the following steps:
Step S160: take any region feature point dot0 in the current-frame region feature point set F1, with corresponding feature vector ft_dot0; find its nearest neighbour dot1 and second nearest neighbour dot2 in the reference-frame region feature point set F2; denote the distance between the region feature point dot0 and the nearest neighbour dot1 as Dist1 and the distance between the region feature point dot0 and the second neighbour dot2 as Dist2, where Dist1 and Dist2 are computed by formulas (5)–(6):
$\mathrm{Dist1} = \|ft_{dot0} - ft_{dot1}\| = \min_{dot \in F_2}\|ft_{dot0} - ft_{dot}\|$    (5)
$\mathrm{Dist2} = \|ft_{dot0} - ft_{dot2}\| = \min_{dot \in F_2 - \{dot1\}}\|ft_{dot0} - ft_{dot}\|$    (6)
if Dist1/Dist2 < 0.8, dot0 and dot1 are recorded as a matching pair <dot0, dot1>;
Step S170: traverse all region feature points in the current-frame region feature point set F1 by the procedure of step S160, and record all region feature points satisfying the condition Dist1/Dist2 < 0.8 in the current-frame-to-reference-frame match point set {<dot, dot'>, dot ∈ F1, dot' ∈ F2};
Step S180: traverse all region feature points in the reference-frame region feature point set F2 by the procedure of step S160, finding for the points of F2 their match points in the current-frame region feature point set F1, and obtain the reference-frame-to-current-frame match point set {<dot', dot>, dot' ∈ F2, dot ∈ F1};
Step S190: compute the intersection of the current-frame-to-reference-frame match point set {<dot, dot'>, dot ∈ F1, dot' ∈ F2} and the reference-frame-to-current-frame match point set {<dot', dot>, dot' ∈ F2, dot ∈ F1} as the match point set M.
5. The video image stabilization method for a micro unmanned aerial vehicle according to claim 4, characterised in that step S200 comprises the following steps:
Step S210: use the similarity transformation as the transformation model between two adjacent frame images; let A(x, y) and A'(x', y') be any pair of match points in the match point set M; the similarity transformation model equation is formula (7):
$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \varepsilon\cos\theta & -\varepsilon\sin\theta & t_x \\ \varepsilon\sin\theta & \varepsilon\cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$    (7)
where θ is the estimated rotation angle of the video, ε is the zoom factor, (x, y) are the coordinates of the match point A(x, y) and (x', y') those of its match point A'(x', y'), tx is the horizontal translation and ty is the vertical translation;
Step S220: solve the similarity transformation model equation over the match point set M with the random sample consensus (RANSAC) algorithm, obtaining the estimated rotation angle θ, the zoom factor ε, the horizontal translation tx and the vertical translation ty;
Step S230: starting from the 1st frame image of the video, repeat steps S210–S220 for every two consecutive frames to obtain the estimated rotation angle θ of all adjacent frame images in the video;
let θ_{t-1}^t be the rotation angle of frame t relative to frame t−1; then the rotation angle of frame t relative to the 1st frame is $\theta_1^t = \sum_{i=1}^{t-1}\theta_i^{i+1}$; the current frame is rotated according to the obtained rotation angle θ_1^t, yielding the de-rotated video frame.
6. The video image stabilization method for a micro unmanned aerial vehicle according to claim 4, characterised in that step S300 comprises the following steps:
Step S310: express the pixel value at position (x, y) of the image of any frame, with gray levels 0–255, in the de-rotated video as:
$F(x,y) = a_7 2^7 + a_6 2^6 + \cdots + a_0 2^0$    (8)
where a_k takes the value 0 or 1, 0 ≤ k ≤ 7, and is the initial bit value;
rewrite a_k as g_k, the improved bit value:
$g_k = a_k \oplus a_{k+1},\quad 0 \le k \le 6;\qquad g_7 = a_7$    (9)
where ⊕ denotes the XOR operation;
Step S320: each pixel of any frame image has 8 bit values g_k; the k-th bit values of all pixels form the k-th-order bit plane b_k(x, y), and any frame image has 8 bit-plane images b_0(x, y)–b_7(x, y); choose the 4th bit-plane images for matching, obtaining the 4th bit-plane image of the reference frame and the 4th bit-plane image of the current frame, denoted Cb4 and Db4 respectively;
select one sub-image of size M × N at each corner and at the centre of image Cb4, giving 5 sub-images Csub1, …, Csub5; slide an M × N window over image Db4, each slide yielding an M × N sub-image of Db4, denoted Dsubi; compute the matching degree of sub-image Csubi and sub-image Dsubi:
$DT = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \mathrm{Dsub}i(x,y)\oplus \mathrm{Csub}i(x,y)$    (10)
when the DT value is minimal, the two corresponding images are the best match; compute on image Db4 the best matching block images corresponding to Csub1, …, Csub5, denoted Dsub1, …, Dsub5 respectively; the coordinate offset of each best-matching image pair {Csubi, Dsubi | i = 1, …, 5} is denoted (mi, ni) and is the motion vector of sub-image Csubi;
Step S330: for each of the 5 sub-images of the reference frame, compute by steps S310–S320 the motion vector between sub-image Csubi and sub-image Dsubi; the 5 motion vectors are median-filtered to obtain the motion vector μ of the current frame;
starting from the 1st frame of the video, compute by steps S310–S330 the motion vector of every two adjacent frames for each frame image; let μ_{t-1}^t be the motion vector of frame t relative to frame t−1; accumulating the motion vectors over the frames yields the global motion μ_1^t of the current frame relative to the 1st frame, computed as:
$\mu_1^t = \sum_{i=1}^{t-1} \mu_i^{i+1}$    (11);
fit a curve to the motion vector μ_1^t by the least squares method; the smoothed vector is denoted the main motion μ_s^t; the motion compensation μ_c^t of the current frame is computed as:
$\mu_c^t = \mu_1^t - \mu_s^t$    (12)
compensate the current frame according to the obtained motion compensation μ_c^t, yielding a stabilized video.
CN201610018259.8A 2016-01-12 2016-01-12 Video image stabilizing method for micro unmanned aerial vehicle Pending CN105657432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610018259.8A CN105657432A (en) 2016-01-12 2016-01-12 Video image stabilizing method for micro unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN105657432A true CN105657432A (en) 2016-06-08

Family

ID=56484270

Country Status (1)

Country Link
CN (1) CN105657432A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383899A (en) * 2008-09-28 2009-03-11 北京航空航天大学 Video image stabilizing method for space based platform hovering
CN101924874A (en) * 2010-08-20 2010-12-22 北京航空航天大学 Matching block-grading realtime electronic image stabilizing method
CN102427505A (en) * 2011-09-29 2012-04-25 深圳市万兴软件有限公司 Video image stabilization method and system on the basis of Harris Corner
CN103079037A (en) * 2013-02-05 2013-05-01 哈尔滨工业大学 Self-adaptive electronic image stabilization method based on long-range view and close-range view switching
CN103179399A (en) * 2013-03-11 2013-06-26 哈尔滨工程大学 Method of quick bit-plane electronic image stabilization based on FPGA (field programmable gate array) platform

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
M. Tico, et al.: "Method of Motion Estimation for Image Stabilization", 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings *
Sun Peng: "Research on Electronic Image Stabilization Algorithms for High-Definition UAV Video", Master's thesis, Shenyang University *
Wang Chuncai, et al.: "Improvement of the Corner Detection Algorithm in UAV Electronic Image Stabilization", Imaging Technology *
Ferdinand P. Beer: "Vector Mechanics for Engineers: Statics, 3rd edition", 30 June 2003 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018027340A1 (en) * 2016-08-06 2018-02-15 SZ DJI Technology Co., Ltd. Systems and methods for mobile platform imaging
US10659690B2 (en) 2016-08-06 2020-05-19 SZ DJI Technology Co., Ltd. Systems and methods for mobile platform imaging
WO2018053809A1 (en) * 2016-09-23 2018-03-29 Qualcomm Incorporated Adaptive image processing in an unmanned autonomous vehicle
CN109792530A (en) * 2016-09-23 2019-05-21 高通股份有限公司 Adapting to image processing in nobody the autonomous vehicles
CN107343145A (en) * 2017-07-12 2017-11-10 中国科学院上海技术物理研究所 A kind of video camera electronic image stabilization method based on robust features point
WO2019196475A1 (en) * 2018-04-09 2019-10-17 华为技术有限公司 Method and device for acquiring globally matching patch
CN109462717A (en) * 2018-12-12 2019-03-12 深圳市至高通信技术发展有限公司 Electronic image stabilization method and terminal
CN113128573A (en) * 2021-03-31 2021-07-16 北京航天飞腾装备技术有限责任公司 Infrared-visible light heterogeneous image matching method

Similar Documents

Publication Publication Date Title
CN105657432A (en) Video image stabilizing method for micro unmanned aerial vehicle
CN102006425B (en) Method for splicing video in real time based on multiple cameras
CN103458261B (en) Video scene variation detection method based on stereoscopic vision
CN102184540B (en) Sub-pixel level stereo matching method based on scale space
CN108280804B (en) Multi-frame image super-resolution reconstruction method
CN109584282B (en) Non-rigid image registration method based on SIFT (scale invariant feature transform) features and optical flow model
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN104869387A (en) Method for acquiring binocular image maximum parallax based on optical flow method
CN110211169B (en) Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation
CN104504652A (en) Image denoising method capable of quickly and effectively retaining edge and directional characteristics
CN103402045A (en) Image de-spin and stabilization method based on subarea matching and affine model
CN102098440A (en) Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN102156995A (en) Video movement foreground dividing method in moving camera
Hua et al. Extended guided filtering for depth map upsampling
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN107197121A (en) Electronic image stabilization method based on on-board equipment
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN107360377B (en) Vehicle-mounted video image stabilization method
CN103024247A (en) Electronic image stabilization method based on improved block matching
CN103514587B (en) Ship-based image stabilization method based on sea-sky line detection
CN103679740A (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN103632372A (en) Video saliency image extraction method
CN103914807B (en) Non-locality image super-resolution method and system for zoom scale compensation
CN107767393B (en) Scene flow estimation method for mobile hardware
CN112598604A (en) Blind face restoration method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160608