CN103414853A - Device and method for stabilizing video image sequence capable of doing multi-degree of freedom movement in real time - Google Patents


Info

Publication number
CN103414853A
Authority
CN
China
Prior art keywords
image
sequence
point
video
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103207979A
Other languages
Chinese (zh)
Other versions
CN103414853B (en)
Inventor
钟平
张康
胡睿
张秀云
庞家玉
黄凡霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201310320797.9A priority Critical patent/CN103414853B/en
Publication of CN103414853A publication Critical patent/CN103414853A/en
Application granted granted Critical
Publication of CN103414853B publication Critical patent/CN103414853B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a device for stabilizing, in real time, a video image sequence undergoing multi-degree-of-freedom motion. The device comprises a programmable logic device, an analog-to-digital converter and a digital-to-analog converter connected to it, two digital signal processors, an image sequence buffer and a parameter-configuration static memory. The invention further provides a method for stabilizing such a video image sequence in real time: an inter-frame epipole transformation model of the image sequence is adopted to predict the exact position of each feature point in every image frame; the multi-degree-of-freedom motion between frames is described by building a family of virtual curves from the predicted feature point sets of the video images; a dimensionality-reduction image data processing method is used to obtain the exact position of each predicted feature point in the corresponding stabilized image; a constraint matrix is then built, and a stable output video is obtained by reconstructing the output images. With the device and method, electronic image stabilization can be achieved quickly and with high precision, and the processing efficiency of the system is greatly improved.

Description

Device and method for real-time stabilization of video image sequences undergoing multi-degree-of-freedom motion
Technical field
The present invention relates to a device and method for real-time stabilization of video image sequences undergoing multi-degree-of-freedom motion, and belongs to the field of digital video processing technology.
Background art
At present, with the needs of modern military technology and industrial development, reconnaissance and surveillance information plays an increasingly prominent role in warfare, counter-terrorism and anti-theft applications. Because the reconnaissance and surveillance field of view of a fixed-base imaging platform is limited by the field angle of the optical system and by the environment, the amount of information the imaging system can acquire is constrained. To make up for this deficiency, countries around the world mount reconnaissance and surveillance equipment on mobile platforms (mobile carriers such as ground vehicles, ships, aircraft and satellites) to dynamically enlarge the field of view of the optical system and increase the amount of information acquired. What follows, however, are the threats posed to optical imaging quality by carrier vibration, airflow disturbance and changes in the velocity of the moving carrier. When spaceborne, airborne or vehicle-mounted platforms are used for reconnaissance and surveillance, the various motions and random vibrations of the moving base severely affect the stability of the video image. Mechanical or optical image stabilization can achieve a certain stabilizing effect, but as the required accuracy rises the hardware cost grows geometrically, which an image stabilization system can hardly bear. Using electronic image stabilization as a follow-on stage after a mechanical or optical stabilization platform can further improve stabilization accuracy. Electronic image stabilization has the advantages of small size, light weight, low power consumption and intelligence, but the contradiction between algorithm complexity and stabilization accuracy remains the bottleneck restricting its application.
Because the motion of the imaging platform while continuously acquiring an image sequence is random in multiple degrees of freedom, the decisive factor for the performance of an electronic image stabilization system is the estimation of the global, multi-degree-of-freedom compound motion parameters between frames of the image sequence. According to the stabilization model, current global motion parameter estimation techniques can be roughly divided into two-dimensional detection of translational random motion and detection of complex multi-degree-of-freedom random motion. Translational motion parameter estimation has relatively low complexity, the video sequence instability it handles is easier to deal with, and its algorithms are simple, easy to implement and real-time; however, it cannot detect multi-degree-of-freedom motion between video frames. When the sequence contains non-planar geometric motion, or the camera vibrates freely in 3D space and introduces parallax, a two-dimensional model can hardly be effective and is unsuitable for high-precision electronic image stabilization. Techniques for detecting complex multi-degree-of-freedom random motion mostly adopt a similarity-transform motion model; only a few algorithms use an affine transform model or a perspective transform model. Because these techniques handle the above difficulties with more complex 3D image models, the amount of image data to be processed is large. Current 3D-model digital image stabilization methods track a sparse feature point set through the dynamic image changes, use the inter-frame correspondences of the feature points to recover the 3D pose of the camera, and build new stable image frames by smoothing the camera motion path and re-projecting the 3D points. However, introducing a more complex stabilization model brings various defects to the hardware that implements the algorithm. For example, SFM (structure-from-motion) methods actually deal with a nonlinear problem, typically solved by bundle adjustment; processing large numbers of tracked points and their three-dimensional information degrades the efficiency of the algorithm and affects the real-time performance of the system. At the same time, when feature points are mismatched, or when the scene changes are too complex or fall below what the model requires, it is difficult to extract enough feature information, the stabilization system can hardly play its role, and motion vector detection errors may even occur. In addition, when the camera motion is small, or the scene geometry is almost planar (distant objects), it is also difficult to determine the changes in three-dimensional position.
Summary of the invention
The technical problem to be solved by the present invention is to provide a device and method for high-precision, real-time stabilization of video image sequences that supports multi-degree-of-freedom motion.
To solve the first technical problem above, the technical solution of the present invention is to provide a real-time video image sequence stabilization device supporting multi-degree-of-freedom motion, characterized in that it comprises a programmable logic device FPGA, to which are connected an analog-to-digital converter A/D, a digital-to-analog converter D/A, a digital signal processor DSP1#, a digital signal processor DSP2#, an image sequence buffer DDR3 and a parameter-configuration static memory SRAM;
The programmable logic device FPGA is responsible for partitioning the image into regions; it reassembles the reconstructed image frames from digital signal processor DSP2# into a new video and, using the TCP/IP communication protocol, outputs the new stabilized video image sequence as a standard digital video stream;
Digital signal processor DSP1# and digital signal processor DSP2# process the video image sequence in a parallel pipeline. DSP1# analyzes and processes the image information, including detecting and selecting feature points in each image region, computing the inter-frame constraint matrices, establishing the epipolar transformation model between image frames and predicting feature point positions. DSP2# uses the predicted feature point sequence provided by DSP1# to build the virtual curve cluster of the video image feature point sets; it applies a dimensionality-reduction image data processing method to filter the virtual curve cluster, obtains the exact position of each predicted feature point in the corresponding stabilized image, builds the constraint matrix used to compute the exact positions of the original input image feature points and of the corresponding stabilized image feature point set, reconstructs the current image frame, and sends the reconstructed image to the programmable logic device FPGA;
The image sequence buffer DDR3 temporarily stores the current frame, associated frames and intermediate data during the video sequence stabilization process;
The parameter-configuration static memory SRAM stores the system parameters set before video stabilization processing.
To solve the second technical problem above, the technical solution of the present invention is to provide a real-time video image sequence stabilization method supporting multi-degree-of-freedom motion, characterized in that: an inter-frame epipole transformation model of the image sequence is adopted to predict the exact position of each feature point in every image frame; a virtual curve cluster of the predicted feature point sets of the video images is built to describe the multi-degree-of-freedom motion between frames of the image sequence; at the same time, a dimensionality-reduction image processing method is used to obtain the exact position of each predicted feature point in the corresponding stabilized image, so as to construct the constraint matrix used to compute the exact positions, in the corresponding stabilized images, of the feature point sets of the original input images; finally, the input images are reconstructed to obtain a stable output video.
Preferably, multi-frame epipole transformation and an averaging decision algorithm are used to achieve high-precision prediction of the feature point set positions in the current image frame, and the multi-degree-of-freedom motion information of the camera during shooting is obtained from the changes of the predicted feature point positions in each frame of the image sequence.
Preferably, the motion state between frames of the image sequence acquired by a camera moving with multiple degrees of freedom in 3D space is represented by building a family of virtual trajectory curves from the predicted feature point sets of the image sequence; a dimensionality-reduction method is adopted, projecting the family of virtual trajectory curves onto the X and Y directions respectively and applying 2D smoothing to each projected curve, so as to obtain smooth changes of the predicted feature point set coordinates in each frame of the video sequence, thereby determining the epipolar constraint matrix relating the original input image feature points to their positions in the stabilized images.
Preferably, for static feature points in the scene, the epipolar constraint matrix relating the original input image feature points to their positions in the stabilized images is used, together with the epipole transformation and averaging decision algorithm, to accurately estimate the change between the feature point positions in the original input images and in the corresponding stabilized images.
Preferably, dynamic feature points are handled as follows: based on the smoothness of the motion trajectory of a moving object in the scene across frames of the image sequence, which reflects the property that the object is subject to minimal external force, a dynamics mathematical model of the dynamic feature points is established and solved to obtain the position of each moving feature point in every frame of the image sequence.
Preferably, during image stabilization, key feature points are selected and tracked according to the rules of uniform distribution and saliency over the image regions; the feature point replacement strategy is based on the continuity of the virtual trajectory curves and on dynamic prediction of feature point positions.
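A minimal sketch of such region-based key-point selection, assuming OpenCV's Shi-Tomasi corner detector (goodFeaturesToTrack) as a stand-in saliency measure; the 4×4 grid and ten points per region echo the embodiment described below, while all function and parameter choices here are illustrative rather than the patent's implementation:

```python
import cv2
import numpy as np

def select_features_by_region(gray, grid=(4, 4), points_per_region=10):
    """Pick salient, uniformly distributed corners: a 4x4 grid of regions,
    up to `points_per_region` Shi-Tomasi corners per region (an assumed
    stand-in for the saliency criterion described in the text)."""
    h, w = gray.shape
    rows, cols = grid
    features = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            corners = cv2.goodFeaturesToTrack(
                gray[y0:y1, x0:x1],
                maxCorners=points_per_region,
                qualityLevel=0.01,
                minDistance=7)
            if corners is not None:
                # shift region-local coordinates back to full-image coordinates
                corners = corners.reshape(-1, 2) + np.array([x0, y0], np.float32)
                features.append(corners)
    return np.concatenate(features, axis=0) if features else np.empty((0, 2), np.float32)
```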
Preferably, stabilization of the video image sequence is achieved with the following steps:
Step 1: Let f_s be the video image sequence to be processed and f_t the current image. Each frame of f_s is first divided by the programmable logic device FPGA into 16 image regions and passed to digital signal processor DSP#1 for processing; in each region, ten feature points are selected with the KLT point tracking algorithm, and in subsequent images all selected feature points are tracked by the feature point tracking technique, forming a feature point set sequence Ψ_s^i of f_s, where Ψ_s^i denotes the i-th feature point selected in frame s of the video sequence;
Step 2: For the video sequence f_s, an associated processing unit is formed from 17 consecutive frames centred on the current image f_t, and the accurate coordinate positions of the current frame's feature points in the image are predicted. For the currently processed image frame f_t, the tracking algorithm yields the feature point set sequence of this frame and of the 8 frames immediately before and after it, denoted Ψ_s^i = {Ψ_{t−8}^i, …, Ψ_{t−1}^i, Ψ_t^i, Ψ_{t+1}^i, …, Ψ_{t+8}^i}, where t−8 ≤ s ≤ t+8;
Step 3: Based on region distribution and saliency, in the feature point set sequence Ψ_s^i of the processing unit, eight key feature points are selected from the feature point set of each frame, and the epipolar constraint transformation matrices F_{s,t} between the currently processed image frame f_t and each of the eight frames before and after it are computed;
Step 4: According to the epipolar geometric constraint principle, the epipole transformation method is applied: from Ψ_s^i and F_{s,t}, the projection (epipolar) line l_s^i of each frame's feature point Ψ_s^i in image f_t is computed, and the intersection of the lines l_s^i, or the mean of all their intersection positions, is taken as the prediction of the position of feature point i in image f_t, thus forming the predicted feature point set Ω_t^i of the current image;
Step 5: Each time a newly acquired image enters the video sequence, let t = t+1 and repeat Steps 2 to 4 to obtain the predicted feature point sets Ω_t^i; for 1 < s < 8 let Ω_s^i = Ψ_s^i, so that a continuous predicted feature point set sequence Ω_s^i = {Ω_1^i, Ω_2^i, …, Ω_t^i, …, Ω_n^i, …} is obtained;
Step 6: Digital signal processor DSP#1 passes the continuous predicted point set sequence Ω_s^i to digital signal processor DSP#2. DSP#2 takes the frame number s as the time Z-axis and the two-dimensional image coordinates of Ω_s^i as the X-axis and Y-axis, forming a three-dimensional coordinate system; according to the frame number of the point sequence Ω_s^i and its coordinate values (x_i, y_i) in each frame, the position of each predicted feature point in the newly built coordinate system is determined, and the corresponding points of successive image frames are connected into curves, forming a virtual curve cluster along the time axis; as new video images are acquired over time, the virtual curves keep extending. The smoothness of the curve formed by a predicted feature point over the frames reflects the stability of that feature point in the video sequence, and the whole virtual curve cluster reflects the stability of the camera during shooting;
Step 7: Along the Z-axis, digital signal processor DSP#2 projects each curve of the virtual curve cluster onto the X direction and the Y direction respectively; taking the current image as the processing centre and the adjacent frames before and after it (11 frames in total) as a processing unit, a convolution operation is applied in turn to smooth and filter the curves, yielding the smoothed sequence (denoted Ω̃_s^i below) corresponding to the predicted feature point set Ω_s^i;
Step 8: According to the correspondence between the original image feature point set sequence Ψ_s^i and the smoothed sequence Ω̃_s^i, and again according to region distribution and saliency, eight key feature points are selected in Ψ_s^i and Ω̃_s^i, and the epipolar constraint transformation matrices F̃_{s,t} between the feature points Ψ_t^i of the current original image frame f_t and the corresponding Ω̃_s^i of the five frames before and after it are computed, t−5 < s < t+5;
Step 9: According to the epipolar geometric constraint relation, the epipole transformation method and the averaging decision algorithm are applied: from F̃_{s,t}, the positions of the current image feature points Ψ_t^i in the stabilized frame are estimated, giving the stabilized point set (denoted Ω̂_t^i);
Step 10: For the image frame being processed, if a feature point i corresponds to a moving object rather than to the static image content, its motion characteristics are analysed and the dynamics mathematical model method is used to solve for its smoothed motion state position, and the position of the corresponding point in Ω̂_t^i is adjusted accordingly;
Step 11: For the currently processed frame f_t, the positional relationship between the initially selected feature set Ψ_t^i and the corresponding feature set Ω̂_t^i obtained after the above processing is used to reconstruct the current image f_t, and the reconstructed image is passed to the programmable logic device FPGA;
Step 12: The programmable logic device FPGA reassembles the reconstructed images into a new video and, using the TCP/IP communication protocol, outputs the new stabilized video image sequence as a standard digital video stream.
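As a concrete illustration of Steps 3 and 4 (see also the epipolar geometry discussion in the detailed description below), the following is a minimal sketch, assuming OpenCV and NumPy, of predicting a feature point's position in the current frame by intersecting the epipolar lines induced by its tracked positions in neighbouring frames; the data layout and the 8-point estimation via cv2.findFundamentalMat are stand-ins, not the patent's implementation.

```python
import cv2
import numpy as np

def predict_point_in_current_frame(neighbor_pts, current_pts_8, neighbor_pts_8, query_idx):
    """neighbor_pts: dict {s: (N,2) tracked points in frame s}.
    current_pts_8 / neighbor_pts_8: dicts {s: (8,2)} of key-point
    correspondences used to estimate F_{s,t} with the 8-point algorithm.
    Returns the averaged intersection of the epipolar lines of point
    `query_idx` from all neighbouring frames, i.e. its predicted
    position in the current frame f_t."""
    lines = []
    for s, pts_s in neighbor_pts.items():
        # F_{s,t} maps points of frame s to epipolar lines in frame t
        F, _ = cv2.findFundamentalMat(neighbor_pts_8[s], current_pts_8[s],
                                      method=cv2.FM_8POINT)
        p_s = np.append(pts_s[query_idx], 1.0)            # homogeneous point
        lines.append(F @ p_s)                              # epipolar line in f_t
    # intersect every pair of epipolar lines (cross product of line vectors)
    intersections = []
    for a in range(len(lines)):
        for b in range(a + 1, len(lines)):
            x = np.cross(lines[a], lines[b])
            if abs(x[2]) > 1e-9:                           # skip near-parallel lines
                intersections.append(x[:2] / x[2])
    return np.mean(intersections, axis=0) if intersections else None
```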
Compared with the prior art, the device and method for real-time stabilization of video image sequences supporting multi-degree-of-freedom motion provided by the present invention have the following beneficial effects:
1. An inter-frame epipolar geometric constraint and epipole transformation model is proposed, in which several adjacent frames jointly participate in predicting the feature point positions of the current frame. This significantly improves the matching accuracy of the feature points, overcomes the errors caused by image quality, random factors and the camera's own parameters, and improves the robustness of the algorithm.
2. A dimensionality-reduction method is adopted: the family of virtual trajectory curves of the predicted feature point sets is projected onto the X and Y directions respectively, and each projected curve is smoothed in 2D to obtain smooth changes of the feature point set positions in every frame of the video sequence. This elegantly removes the inter-frame instability caused by the camera's multi-degree-of-freedom motion while significantly improving the processing efficiency of the system.
3. Based on the smoothness of a moving object's trajectory between image frames, which reflects the property that the object is subject to minimal external force, a mathematical model is proposed for solving the positions of moving feature points. This avoids the stabilization error introduced by traditional interpolation of moving objects under a static-scene assumption, and stabilizes the video sequence even when both the camera and objects in the image are moving.
4. The complexity and real-time problems of the stabilization algorithm are effectively solved, achieving fast, high-precision electronic image stabilization.
The device and method provided by the present invention overcome the deficiencies of the prior art: the feature point matching accuracy is high, the algorithm is robust, fast and high-precision electronic image stabilization is achieved, and the processing efficiency of the system is significantly improved.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the real-time video image sequence stabilization device supporting multi-degree-of-freedom motion provided by the present invention;
Fig. 2 illustrates the epipolar constraint principle between image frames;
Fig. 3 is a schematic diagram of feature point position prediction;
Fig. 4 is a schematic diagram of the smoothed position change of a dynamic feature point.
Detailed description of the embodiments
To make the present invention clearer, a preferred embodiment is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the real-time video image sequence stabilization device supporting multi-degree-of-freedom motion provided by the present invention. The device comprises the following main components: a programmable logic device FPGA, an image signal analog-to-digital converter A/D and a digital-to-analog converter D/A, two digital signal processors DSP#1 and DSP#2, an image buffer dynamic memory DDR3 and a parameter-configuration static memory SRAM.
The FPGA programmable logic device is the central control unit of the device: it controls the workflow of the whole stabilization device and is responsible for partitioning the image into regions; finally, it reassembles the reconstructed image frames from DSP2# into a new video and, using the TCP/IP communication protocol, outputs the new stabilized video image sequence as a standard digital video stream. Processors DSP1# and DSP2# process the video image sequence in a parallel pipeline. DSP1# analyzes and processes the image information, including detecting and selecting feature points in each region, computing the inter-frame constraint matrices, establishing the epipolar transformation model between image frames and predicting feature point positions. DSP2# uses the predicted feature point sequence provided by DSP1# to build the virtual curve cluster of the video image feature point sets; it applies a dimensionality-reduction image data processing method to filter the virtual curve cluster, obtains the exact position of each predicted feature point in the corresponding stabilized image, builds the constraint matrix used to compute the exact positions of the original input image feature points and of the corresponding stabilized image feature point set, reconstructs the current frame, and sends the reconstructed image to the FPGA. DDR3 forms the image sequence buffer and temporarily stores the current image frame, the associated video image sequence and intermediate data during the stabilization process. The SRAM stores the system parameters set before stabilization.
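Purely to illustrate the division of labour described above (the patent targets two DSP chips, not a software host), the following is a minimal Python sketch in which threads and queues stand in for the DSP1#/DSP2# parallel pipeline; the helper functions are hypothetical placeholders for the actual processing stages.

```python
import threading
import queue

frames_in = queue.Queue()       # partitioned frames from the FPGA stage
predictions = queue.Queue()     # DSP1# -> DSP2# predicted feature point sets
reconstructed = queue.Queue()   # DSP2# -> FPGA reconstructed frames

def predict_feature_points(frame):
    # placeholder: in the device this is the epipolar prediction on DSP1#
    return {"frame": frame, "points": []}

def reconstruct_stable_frame(pred):
    # placeholder: in the device this is curve smoothing + reconstruction on DSP2#
    return pred["frame"]

def dsp1_stage():
    """Stand-in for DSP1#: feature detection/selection, inter-frame
    constraint matrices, epipolar model, feature position prediction."""
    while True:
        frame = frames_in.get()
        if frame is None:
            predictions.put(None)
            break
        predictions.put(predict_feature_points(frame))

def dsp2_stage():
    """Stand-in for DSP2#: virtual curve cluster, dimensionality-reduction
    smoothing, constraint matrix, frame reconstruction."""
    while True:
        pred = predictions.get()
        if pred is None:
            reconstructed.put(None)
            break
        reconstructed.put(reconstruct_stable_frame(pred))

threads = [threading.Thread(target=dsp1_stage), threading.Thread(target=dsp2_stage)]
```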
The real-time video image sequence stabilization method supporting multi-degree-of-freedom motion uses multi-frame epipole transformation and an averaging decision algorithm to achieve high-precision prediction of the feature point set positions of the current image frame, and obtains the multi-degree-of-freedom motion information of the camera during shooting from the changes of the predicted feature point positions in each frame of the image sequence. Unlike the search strategy of traditional feature point matching, the present invention proposes to use the epipolar constraint relation between frames of the image sequence, epipole transformation and a multi-frame averaging decision algorithm to achieve high-accuracy estimation of the feature point positions in each image frame. As shown in Fig. 2, let P be a point in the three-dimensional space of the photographed scene. If the same camera shoots from two different positions, this point projects to p and p′ on the imaging planes I and I′, respectively, so that p and p′ are corresponding matched points. The intersections of the line through the two camera viewpoints C and C′ with the imaging planes are e and e′, respectively, called the epipoles of the two image planes. The ray l′_p is called the epipolar line of point p in image plane I′, and the epipolar line l_p is defined similarly. According to the epipolar geometry between image frames, the epipolar line of point p in image plane I′ satisfies a linear transformation, namely:
l′_p = F p (1)
where F is a 3 × 3 matrix of rank 2 (the fundamental matrix), and the matched point p′ of p lies on its epipolar line l′_p, so that:
p′^T F p = 0 (2)
Based on the above epipolar transform principle, consider the sequence of image frames shown in Fig. 3. Suppose p_t is the projection on the imaging plane, at time t, of a static 3D scene point; the projections of the corresponding point on the imaging planes at times s = t−1 and s = t+1 are p_{t−1} and p_{t+1}, and their epipolar lines in the imaging plane at time t are l_{t−1} and l_{t+1}, respectively, where F_{s,t} is the epipolar constraint fundamental matrix between frames of the image sequence, expressing the constraint relation between the current image f_t and its adjacent image f_s. For an image stabilization system, because of the high correlation between consecutive video frames, when the epipolar lines of adjacent image frames share the same object point in the 3D scene they intersect at a single point in the current image: for every s ≠ t the point is given by l_t × l_s, and this point is precisely the projection (image point) of the 3D scene point on the current image. Therefore, based on the epipolar geometric relation, it is not necessary to extract the concrete 3D positions of the feature points in the scene or the camera parameters; using the constraint relation between the current image frame and its adjacent frames, the coordinate position in the current frame can be predicted accurately. In general, predicting the coordinates of a current-frame feature point with the epipole transformation only requires two adjacent frames. When more images participate, their epipolar lines should still meet at one point in the current image; however, noise in the images and errors in the adopted tracking algorithm and model affect the accuracy of the obtained position coordinates. To predict the feature point positions more accurately, the present invention uses several adjacent images, finds the intersection points of their epipolar lines, and averages the position coordinates of all the intersection points to compute the exact position of each feature point, improving the reliability and robustness of the prediction. In the implementation of the feature point position prediction, a set of feature points Ψ_s^i is first selected for the image sequence f_s to be processed; this initial feature point set is then tracked along the video stream with the KLT point tracking algorithm, and 8 key feature points are chosen based on the uniformity of the feature point distribution and on saliency; the correspondences of the key feature points between frames of the image sequence are used to estimate the fundamental constraint matrix F_{s,t} between the current image frame f_t of the video sequence and its temporally adjacent frames f_s (t−8 < s < t+8). For each feature point, the mean position of the epipolar line intersection points is used to accurately predict the feature point set position in the current frame, thereby obtaining the more accurately positioned predicted feature point set Ω_t^i.
In the present invention, the KLT (Kanade-Lucas-Tomasi) point tracking algorithm is adopted. This algorithm performs gray-level matching with a 2D translation model to track the feature points, and the feature points are selected based on the tracking algorithm to improve tracking quality. Feature points are selected in the start image with the feature point selection algorithm; the translation model is then used for feature point tracking, and for points tracked over N images a continuity check with an affine model is performed, rejecting incorrectly tracked feature points. In the KLT algorithm, the matching problem between different images is handled by computing the gray-level residual between two translated windows and finding the match that minimizes the residual.
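A minimal sketch of this selection-plus-tracking scheme, assuming OpenCV's pyramidal Lucas-Kanade tracker as a practical stand-in for the KLT tracker described above (the window size, thresholds and forward-backward check are illustrative, not taken from the patent):

```python
import cv2
import numpy as np

def track_klt(prev_gray, next_gray, prev_pts):
    """Track feature points from prev_gray to next_gray with pyramidal
    Lucas-Kanade; a forward-backward check rejects unreliable tracks,
    loosely playing the role of the continuity check in the text."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None, **lk)
    p0r, st_b, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, p1, None, **lk)
    fb_err = np.linalg.norm(p0 - p0r, axis=2).reshape(-1)
    good = (st.reshape(-1) == 1) & (st_b.reshape(-1) == 1) & (fb_err < 1.0)
    return p1.reshape(-1, 2), good
```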
After the predicted feature point set has been obtained accurately, a family of virtual trajectory curves of the predicted feature point sets of the image sequence is built to represent the inter-frame motion state of the sequence acquired by a camera moving with multiple degrees of freedom in 3D space; by projecting and smoothing the trajectory curve family, the stable positions of the predicted feature points of the video sequence are obtained. In the present invention, a transformation constraint matrix F̃_{s,t} between the original input image feature points and the stabilized image sequence feature points is established to estimate the position of the original input image feature point set in the corresponding stabilized image, and the input image sequence is reconstructed from this positional change relation of the feature point sets. The transformation constraint matrix F̃_{s,t} is obtained by building the virtual motion trajectory cluster and by the associated processing.
When the system starts working, to represent precisely the motion state of the image sequence acquired by a camera moving with multiple degrees of freedom in 3D space, a sufficient number of feature points Ψ_s^i is selected in every frame of the image sequence, and in the first few frames the predicted positions are initialized with the coordinates of the tracked feature points, e.g. for 0 < s < 8 let Ω_s^i = Ψ_s^i. For subsequent image frames the coordinate positions of the tracked feature points are predicted by the above method, continuously building the virtual trajectory line cluster. It should be noted that feature points are chosen by dividing the processed image into blocks and selecting some feature points in each block for tracking and computation, so that the tracked points are evenly distributed over the frame and the trajectory lines formed in the 2D plane effectively express the motion changes projected onto the image by the 3D scene. For a camera moving with multiple degrees of freedom in 3D space, Ω_s^i contains the shake of the feature point set sequence obtained by the unstable camera on the sequence images, because it is converted from that feature point set sequence through F_{s,t}. As long as Ψ_s^i exists in the video sequence images (i.e. it is not occluded by objects or leaves the view), Ω_s^i will continue along with it.
To obtain a stable predicted feature point set sequence, the present invention smooths, in real time, the projections of the Ω_s^i tracks on the 2D plane with a blur template (with suitably chosen parameters); the smoothing is carried out independently in the horizontal and vertical directions, and after the virtual trajectory lines formed by all corresponding predicted feature points are processed by the blur template, a stable smoothed sequence Ω̃_s^i is formed. As long as they remain relevant to the captured image sequence, the generation and computation of the trajectory lines continue; once a tracked feature point leaves the processed image frames, the virtual line formed by its predicted points stops, and its computation stops as well. In this process, a selection and retirement mechanism and strategy for feature points can be built by setting a threshold on the epipole generation error.
Using the feature point set Ψ_s^i selected on the original input images and the smoothed set Ω̃_s^i, another fundamental matrix F̃_{s,t} is computed. This fundamental matrix F̃_{s,t} associates and constrains the feature points Ψ_s^i of the unstable raw input video image f_s with those of the new stabilized image. Similarly, the fundamental matrix F̃_{s,t} can be computed with the 8-point algorithm from the corresponding points in Ψ_s^i and Ω̃_s^i. Then, using the epipole transformation method described above and the obtained matrices F_{s,t} and F̃_{s,t}, the exact positions of these feature points in the corresponding stabilized frame are estimated. To improve precision, the present invention again uses the epipole transformation with the condition |t−s| ≤ 5, averages the intersection points of the epipolar lines, and determines the accurate estimate Ω̂_t^i of the current-frame feature point coordinates in the stabilized image frame. Finally, the mapping relation between Ψ_t^i and Ω̂_t^i supplies the image stabilization system with a constraint from the source input frame to the stable output frame, so that the pixels of the input image frame can be physically corrected in position.
Because, in the stabilization process, the present invention builds the trajectory cluster from the computed Ω_s^i and performs the blur-smoothing independently in the 2D plane, the resulting coordinates do not need to correspond to the projection of scene 3D points by a physically realizable camera. The smoothed point positions are not yet the final stable positions: a fundamental transition matrix F̃ must also be defined to estimate precisely the positions of the input image feature points in the corresponding stabilized images, and finally the new image frame is reconstructed. Since each such two-dimensional matrix defines a valid geometric image, the matrix F̃ in fact simulates a physically realizable camera view, relating the shaky point set Ψ to the smoothly moving point set Ω̃, so that the video is corrected as if obtained by a camera moving smoothly in 3D space.
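A minimal NumPy sketch of the dimensionality-reduction smoothing described above: the virtual curve cluster is stored as per-point (x, y) series over frames, and the X and Y projections are filtered independently along the time axis. The Gaussian kernel and the 11-frame window are illustrative choices echoing Step 7, not the patent's exact blur template.

```python
import numpy as np

def smooth_trajectories(traj, window=11, sigma=2.0):
    """traj: array of shape (num_points, num_frames, 2) holding the
    predicted feature point positions over time (the virtual curve cluster).
    Returns the smoothed cluster, with X and Y projections filtered
    independently along the time axis by a normalized Gaussian kernel."""
    half = window // 2
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()

    smoothed = np.empty_like(traj)
    for p in range(traj.shape[0]):
        for axis in (0, 1):                              # 0: X projection, 1: Y projection
            series = traj[p, :, axis]
            padded = np.pad(series, half, mode='edge')   # handle sequence ends
            smoothed[p, :, axis] = np.convolve(padded, kernel, mode='valid')
    return smoothed
```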
In the present invention, because dynamic feature points in the scene do not satisfy the epipolar constraint relation between frames of the image sequence, dynamic feature points are handled as follows: based on the smoothness of the moving object's trajectory across frames, which reflects the property that the object is subject to minimal external force, a dynamics mathematical model of the dynamic feature points is established, and the positions of the moving feature points in every frame of the image sequence are solved. This solves the problem of stabilizing moving objects in non-static scenes and avoids the errors that traditional stabilization algorithms introduce by using background information unrelated to the object's motion and by interpolation. In general, when a tracked point corresponds to a moving object in the scene, traditional stabilization algorithms discard it, because such points do not reflect the camera motion, and the image region containing the moving object then obtains its stabilization parameters from nearby image points belonging to the static scene. When the distance between the moving-object region and those static scene points is large, or when the difference in depth between the moving object and its background is significant, this approach can hardly stabilize the moving object in the image frame. In the present invention, the relevant properties of dynamics are used: for the feature points of a moving object in the scene, if the trajectory across the image sequence is smooth, the object must be subject to minimal external force, which provides the condition for predicting the positions of the dynamic feature points in every frame of the image sequence.
Consider a dynamic point p_s in an image frame of the time sequence. Its epipolar line projected into image frame t is still determined by time s, namely l_s = F_{s,t} p_s. According to the epipolar constraint of the image sequence, we would expect the point p_t in frame t matching the dynamic point p_s to lie on this epipolar line l_s. In fact, because the feature point is moving, the epipolar constraint cannot be satisfied; Fig. 4 shows the situation for three consecutive frames. In the present invention, the method for estimating the position coordinates of dynamic feature points in the current image is based on the dynamic behaviour of those points. Suppose the dynamic feature point moves smoothly between frames of the image sequence, so that its motion results from a small force acting on it; then, according to the relevant properties of dynamics, within a unit time interval its velocity u_{s,t} = q_{s+1,t} − q_{s,t} must change as little as possible. This gives us enough constraints to derive q_{s,t}. Let E denote the following time-difference matrix:
E · (…, q_{s−1,t}, q_{s,t}, q_{s+1,t}, …)^T = (…, u_{s−1}, u_s, u_{s+1}, …)^T (3)
where E is the banded first-difference matrix whose successive rows are (−1, 1, 0, …, 0), (0, −1, 1, …, 0), …, (0, …, 0, −1, 1).
If D denotes the matrix that differentiates E once more, that is, if the same difference operation is applied to the velocity vector u_{s,t}, the resulting difference matrix produces the acceleration; here u_s is the velocity vector at time s. Therefore, to obtain the motion trajectory caused by a minimal force, such that every point q_{s,t} lies on its corresponding epipolar line l_s, the following constrained problem must be solved:
min_{q⃗_{s,t}} ||Γ q⃗_{s,t}||², subject to: (l_s)^T q_{s,t} = 0 for s ≠ t, and q_{s,t} = p_t for s = t (4)
where Γ denotes the second-derivative matrix DE. The problem can be simplified with the Lagrange multiplier rule, turning it into the following linear system to be solved:
(Γ^T Γ) · q⃗_{s,t} + C^T · λ = 0, C · q⃗_{s,t} = b (5)
In this system of equations, C and b represent the linear constraint matrix and vector of equation (4). With this system, the horizontal and vertical coordinates of each point q_{s,t} can be solved for.
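A minimal NumPy sketch of solving system (5) for one dynamic feature point, under the simplifying assumptions that the unknowns are stacked as (x_1, y_1, …, x_N, y_N), the second-difference smoothness penalty is applied to x and y separately, and the constraints are the epipolar-line equations for s ≠ t plus fixing the point at frame t; this is an illustrative formulation, not the patent's exact matrix layout.

```python
import numpy as np

def solve_dynamic_point(epilines, t_idx, p_t):
    """epilines: list of N homogeneous lines (a, b, c) with a*x + b*y + c = 0,
    one per frame s (the entry at t_idx is unused).
    p_t: observed (x, y) of the point in frame t (index t_idx).
    Returns an (N, 2) array of smoothed positions q_{s,t}."""
    N = len(epilines)
    n = 2 * N                                   # unknowns: x_0, y_0, ..., x_{N-1}, y_{N-1}

    # Second-difference operator Gamma = D*E, applied to x and y independently.
    rows = []
    for s in range(1, N - 1):
        for axis in range(2):
            r = np.zeros(n)
            r[2 * (s - 1) + axis] = 1.0
            r[2 * s + axis] = -2.0
            r[2 * (s + 1) + axis] = 1.0
            rows.append(r)
    Gamma = np.array(rows)

    # Constraints C q = b: epipolar lines for s != t, and q_t fixed to p_t.
    C_rows, b = [], []
    for s, (a, bb, c) in enumerate(epilines):
        if s == t_idx:
            continue
        r = np.zeros(n)
        r[2 * s], r[2 * s + 1] = a, bb
        C_rows.append(r)
        b.append(-c)
    for axis in range(2):
        r = np.zeros(n)
        r[2 * t_idx + axis] = 1.0
        C_rows.append(r)
        b.append(p_t[axis])
    C, b = np.array(C_rows), np.array(b)

    # KKT system:  [Gamma^T Gamma  C^T] [q]     [0]
    #              [      C         0 ] [lam] = [b]
    m = C.shape[0]
    KKT = np.block([[Gamma.T @ Gamma, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.lstsq(KKT, rhs, rcond=None)[0]
    return sol[:n].reshape(N, 2)
```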
During the stabilization process, feature points are selected and tracked according to the rules of uniform distribution and saliency over the image regions; considering factors such as scene changes and occlusion, the replacement strategy is based on the continuity of the virtual trajectory curves and on dynamic prediction of feature point positions. Because the KLT-based feature tracking algorithm identifies each point by the small pixel window surrounding it, matching these points between frames can be realized by comparing the similarity of the windows. To represent the motion state of a frame, several hundred such feature points can be selected and tracked in every frame of the video sequence. Processing these feature points provides enough information to recover the fundamental matrices F_{s,t} and F̃_{s,t}.
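Since the text relies on the 8-point algorithm to recover the fundamental matrices from point correspondences, here is a minimal sketch of the classical normalized 8-point algorithm in NumPy, a textbook formulation offered for illustration; the patent does not specify this exact implementation.

```python
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale so the mean distance
    from the origin is sqrt(2); returns homogeneous points and the transform."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(pts1, pts2):
    """Normalized 8-point estimate of F such that x2^T F x1 = 0,
    from >= 8 correspondences pts1 -> pts2 (each of shape (N, 2))."""
    x1, T1 = normalize(np.asarray(pts1, float))
    x2, T2 = normalize(np.asarray(pts2, float))
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo normalization.
    F = T2.T @ F @ T1
    return F / F[2, 2]
```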
Based on the above methods, the real-time video image sequence stabilization method supporting multi-degree-of-freedom motion proposed by the present invention is realized with the following steps:
(1) Let the video image sequence to be processed be f_s and f_t the current image. Each frame of f_s is divided by the FPGA into 16 image regions and passed to DSP#1 for processing; in each region, ten feature points are selected with the KLT point tracking algorithm, and in subsequent images all selected feature points are tracked by the feature point tracking technique, forming a feature point set sequence Ψ_s^i of f_s, where Ψ_s^i denotes the i-th feature point selected in frame s of the video sequence.
(2) For the video sequence f_s, 17 consecutive frames form an associated processing unit used to predict the accurate coordinate positions of the current frame's feature points in the image. For the currently processed image frame f_t, the tracking algorithm yields the feature point set sequence of this frame and of the 8 frames immediately before and after it, denoted Ψ_s^i = {Ψ_{t−8}^i, …, Ψ_{t−1}^i, Ψ_t^i, Ψ_{t+1}^i, …, Ψ_{t+8}^i}, where t−8 ≤ s ≤ t+8.
(3) Based on region distribution and saliency, in the feature point set sequence Ψ_s^i of the processing unit, eight key feature points are selected from the feature point set of each frame, and the epipolar constraint transformation matrices F_{s,t} between the currently processed image frame f_t and each of the eight frames before and after it are computed.
(4) According to the epipolar geometric constraint principle, the epipole transformation method is applied: from Ψ_s^i and F_{s,t}, the epipolar line l_s^i of each frame's feature point Ψ_s^i in image f_t is computed, and the intersection of the lines l_s^i, or the mean of all their intersection positions, is taken as the prediction of the position of feature point i in image f_t, forming the predicted feature point set Ω_t^i of the current image.
(5) Each time a newly acquired image enters the video sequence, let t = t+1 and repeat steps (2) to (4) to obtain the predicted feature point sets Ω_t^i; for 1 < s < 8 let Ω_s^i = Ψ_s^i, so that a continuous predicted feature point set sequence Ω_s^i = {Ω_1^i, Ω_2^i, …, Ω_t^i, …, Ω_n^i, …} is obtained.
(6) DSP#1 passes the continuous predicted point set sequence Ω_s^i to DSP#2. DSP#2 takes the frame number s as the time Z-axis and the two-dimensional image coordinates of Ω_s^i as the X-axis and Y-axis, forming a three-dimensional coordinate system; according to the frame number of the point sequence Ω_s^i and its coordinate values (x_i, y_i) in each frame, the position of each predicted feature point in the coordinate system is determined, and the corresponding points of successive image frames are connected into curves, forming a virtual curve cluster along the time axis; as new video images are acquired over time, the virtual curves keep extending. The smoothness of the curve formed by a predicted feature point over the frames reflects the stability of that feature point in the video sequence, and the whole virtual curve cluster reflects the stability of the camera during shooting.
(7) Along the Z-axis, the DSP#2 processor projects each curve of the virtual curve cluster onto the X direction and the Y direction respectively; taking the current image as the processing centre and the adjacent frames before and after it (11 frames in total) as a processing unit, a convolution operation is applied in turn to smooth and filter the curves, yielding the smoothed sequence Ω̃_s^i corresponding to the predicted feature point set Ω_s^i.
(8) According to the correspondence between the original image feature point set sequence Ψ_s^i and the smoothed sequence Ω̃_s^i, and again according to region distribution and saliency, eight key feature points are selected in Ψ_s^i and Ω̃_s^i, and the epipolar constraint transformation matrices F̃_{s,t} between the feature points Ψ_t^i of the current original image frame f_t and the corresponding Ω̃_s^i of the five frames before and after it are computed, t−5 < s < t+5.
(9) According to the epipolar geometric constraint relation, the epipole transformation method and the averaging decision algorithm are applied: from F̃_{s,t}, the positions of the current image feature points Ψ_t^i in the stabilized frame are estimated, giving the stabilized point set Ω̂_t^i.
(10) For the image frame being processed, if a feature point i corresponds to a moving object rather than to the static image content, its motion characteristics are analysed and the dynamics mathematical model method is used to solve for its smoothed motion state position, and the position of the corresponding point in Ω̂_t^i is adjusted accordingly.
(11) For the currently processed frame f_t, the positional relationship between the initially selected feature set Ψ_t^i and the corresponding feature set Ω̂_t^i obtained after the above processing is used to reconstruct the current image f_t, and the reconstructed image is passed to the FPGA.
(12) The FPGA reassembles the reconstructed images into a new video and, using the TCP/IP communication protocol, outputs the new stabilized video image sequence as a standard digital video stream.
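The patent leaves the frame reconstruction in step (11) at the level of rebuilding the image from the positional relationship between the two feature sets; one plausible realization, shown as an assumption-laden sketch below, is to fit a robust perspective warp from the original feature positions to their stabilized counterparts with OpenCV and apply it to the whole frame.

```python
import cv2
import numpy as np

def rebuild_stabilized_frame(frame, orig_pts, stab_pts):
    """frame: current image f_t; orig_pts: (N, 2) original feature positions
    (the Psi_t set); stab_pts: (N, 2) their estimated positions in the
    stabilized frame. Fits a robust homography (an assumed warp model, not
    mandated by the patent) and warps the frame accordingly."""
    H, inliers = cv2.findHomography(orig_pts.astype(np.float32),
                                    stab_pts.astype(np.float32),
                                    method=cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```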
The implementation uses the following main devices:
(1) Programmable logic device FPGA: a Xilinx XC6SLX16-3 CSG324 is used. It has 324 pins, of which 232 are usable, and 2,278 slices; each slice comprises 4 CLBs and each CLB comprises four 6-input look-up tables (LUTs), for a total of 36,448 6-input LUTs. The built-in hardware resources of this FPGA include 32 DSP48A1 blocks, two MCBs (memory controller blocks) and 576K of built-in RAM.
(2) Digital signal processor DSP: a TMS320DM6446 is used, an SoC (System on Chip) embedded processor with both a DSP (DM64x) core and an ARM core. This class of processor has an ARM9 core that can run Windows or a Linux operating system, and a high-clock-frequency DSP core that can quickly run various complex video processing algorithms such as audio/video encoding and decoding and pattern recognition. The DaVinci processor is also low-power and can be widely used in various battery-powered environments.
(3) Dedicated video A/D: an AD9826 image signal A/D converter is used, with three R/G/B input channels, 16-bit precision and a maximum sampling frequency of 15 MSPS.
(5) Static random access memory SRAM: a K6R4008V1D SRAM is used, with a capacity of 512 KB, an 8-bit data bus, an 8 ns access cycle and a maximum throughput of 1 Gb/s.
(6) Dynamic memory DDR3: an MT41J64M16 is used, with a capacity of 1 Gb in a 96-ball FBGA package, organized as 8 Meg × 16 × 8 banks; this DDR3 has a burst transfer size of 512 bits, a burst length of 8 and a maximum operating frequency of 533 MHz, achieving fast access through the double data rate. The design clock uses a 400 MHz differential clock input, and the data access rate can reach 800 Mb/s.
(7) Ethernet controller: an 88E1111 network adapter is used, which can work in 1000M mode with a data transfer rate of up to 1 Gb/s.
The real-time video image sequence stabilization device supporting multi-degree-of-freedom motion provided by the present invention uses an embedded image processing unit, applies the epipolar geometric constraint relation between image frames to predict feature point positions accurately, and adopts a new dimensionality-reduction image data processing method to obtain the constraint relation between the original input image sequence and the stabilized image sequence and to reconstruct the input images, thereby achieving real-time stabilization of video image sequences with multi-degree-of-freedom motion.
Through the epipolar geometric constraint relation and the epipole transformation model between frames of the image sequence, multiple frames participate in the feature point position prediction, improving the matching accuracy of the feature points; at the same time, the dimensionality-reduction processing technique simplifies the complexity of the image data processing and improves the real-time performance of the system. The algorithm does not need to recover explicit 3D point positions or the shooting pose of a 3D camera: by building the family of virtual trajectory curves of the predicted feature point sets, the inter-frame motion state of the sequence acquired by a camera moving with multiple degrees of freedom in 3D space is represented, and by projecting the 2D family of virtual trajectory curves and applying simple filtering, the epipolar geometric constraint relation between the source input images and the stable output images is obtained, completing the reconstruction of the original input image sequence and thus the physical correction of the 3D scene points on the image plane. The method proposed by the present invention can physically correct the multi-degree-of-freedom motion of the image sequence while avoiding some of the drawbacks of 3D image stabilization methods, and, by adopting a two-CPU parallel-pipeline data processing approach, offers a solution to the contradiction between accuracy and real-time performance.
From the viewpoint of video stabilization requirements, the main purpose of stabilization is to improve the stability of the video image as much as possible while guaranteeing real-time performance, so as to meet the requirements of human vision. In traditional electronic image stabilization methods, improving stabilization accuracy brings complicated computation models and a heavy computational load. The present invention uses the epipolar geometry theory of image sequences, which not only greatly improves the prediction accuracy and stability of the feature point coordinates but also, by applying dimensionality reduction to the image data computations, reduces the computational load and improves the processing efficiency of the system. In terms of hardware design, a multi-pipeline processing mode is adopted, and a unified basic image/video representation model and corresponding processing methods are built, so that complicated multi-degree-of-freedom camera shake can be handled with simpler models, avoiding tedious computation and improving the accuracy and robustness of the image stabilization system. The multi-degree-of-freedom electronic image stabilization method proposed by the present invention is key to obtaining high-quality video images with mobile-carrier imaging systems, helps to improve the information-gathering capability of cameras on moving-base carriers, and has important research significance for national defence, counter-terrorism and anti-theft applications.

Claims (8)

1. A real-time video image sequence stabilization device supporting multi-degree-of-freedom motion, characterized in that it comprises a programmable logic device FPGA, to which are connected an analog-to-digital converter A/D, a digital-to-analog converter D/A, a digital signal processor DSP1#, a digital signal processor DSP2#, an image sequence buffer DDR3 and a parameter-configuration static memory SRAM;
the programmable logic device FPGA is responsible for partitioning the image into regions; it reassembles the reconstructed image frames from digital signal processor DSP2# into a new video and, using the TCP/IP communication protocol, outputs the new stabilized video image sequence as a standard digital video stream;
digital signal processor DSP1# and digital signal processor DSP2# process the video image sequence in a parallel pipeline; DSP1# analyzes and processes the image information, including detecting and selecting feature points in each image region, computing the inter-frame constraint matrices, establishing the epipolar transformation model between image frames and predicting feature point positions; DSP2# uses the predicted feature point sequence provided by DSP1# to build the virtual curve cluster of the video image feature point sets, applies a dimensionality-reduction image data processing method to filter the virtual curve cluster, obtains the exact position of each predicted feature point in the corresponding stabilized image, builds the constraint matrix used to compute the exact positions of the original input image feature points and of the corresponding stabilized image feature point set, reconstructs the current frame, and sends the reconstructed image to the programmable logic device FPGA;
the image sequence buffer DDR3 temporarily stores the current frame, associated frames and intermediate data during the video sequence stabilization process;
the parameter-configuration static memory SRAM stores the system parameters set before video stabilization processing.
2. A real-time video image sequence stabilization method supporting multi-degree-of-freedom motion, characterized in that: an inter-frame epipole transformation model of the image sequence is adopted to predict the exact position of each feature point in every image frame; a virtual curve cluster of the predicted feature point sets of the video images is built to describe the multi-degree-of-freedom motion between frames of the image sequence; at the same time, a dimensionality-reduction image processing method is used to obtain the exact position of each predicted feature point in the corresponding stabilized image, so as to construct the constraint matrix used to compute the exact positions, in the corresponding stabilized images, of the feature point sets of the original input images; finally, the input images are reconstructed to obtain a stable output video.
3. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that: multi-frame epipole transformation and a mean decision algorithm are used to achieve high-precision prediction of the feature point set positions in the current image frame, and the multi-degree-of-freedom motion information of the camera during shooting is obtained from the variation of the predicted feature point positions over the frames of the image sequence.
4. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that: the inter-frame motion state of the image sequence obtained by a camera undergoing multi-degree-of-freedom motion in 3D space is represented by building a virtual trajectory curve cluster of the predicted feature point sets of the image sequence; using the dimension-reduction processing method, the virtual trajectory curve cluster of the predicted feature point sets is projected onto the X and Y directions respectively, and each projection curve is smoothed in 2D, giving a smooth variation of the coordinates of the predicted feature point sets in every frame of the video image sequence, from which the epipolar constraint matrix relating the original input image feature points to their positions in the stabilized image is determined.
5. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that: for static feature points in the scene, the epipolar constraint matrix relating the original input image feature points to their positions in the stabilized image is used, together with the epipole transformation and mean decision algorithm, to accurately estimate the change of feature point positions between the original input image and the corresponding stabilized image.
6. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that: for the processing of dynamic feature points, based on the smoothness of the inter-frame motion trajectories of moving objects in the scene and the characteristic that such objects are subject to a minimum external force constraint, a dynamics mathematical model of the dynamic feature points is established and used to solve for the positions of the moving feature points of the scene in every image frame of the sequence.
7. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that: in the image stabilization process, the key feature points are selected and tracked according to the rule of uniformity and saliency of their distribution over the image region, and the feature point replacement policy is based on the continuity of the virtual trajectory curves and on dynamic prediction of the feature point positions.
8. The real-time stabilization method for a video image sequence supporting multi-degree-of-freedom motion as claimed in claim 2, characterized in that the stabilization of the video image sequence is realized with the following steps:
Step 1: let f_s denote the video image sequence to be processed and f_t the current image. For each frame of f_s, the programmable logic device FPGA first divides the frame into 16 image regions and passes them to the digital signal processor DSP#1 for processing; in each region ten feature points are selected with the KLT feature point tracking algorithm, and in the subsequent images all selected feature points are tracked with the feature point tracking technique, forming a feature point set sequence {Ψ_s^i} for f_s, where Ψ_s^i denotes the i-th feature point selected in the s-th frame of the video sequence;
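The following sketch (not part of the claims; OpenCV usage, the 4×4 grid as one way of obtaining the 16 regions, and the quality parameters are assumptions) illustrates the kind of region-wise selection and KLT tracking described in step 1:

    import cv2
    import numpy as np

    def select_region_features(gray, grid=(4, 4), pts_per_region=10):
        """Pick feature points separately in each of the grid[0] x grid[1] image regions."""
        h, w = gray.shape
        points = []
        for r in range(grid[0]):
            for c in range(grid[1]):
                y0, y1 = r * h // grid[0], (r + 1) * h // grid[0]
                x0, x1 = c * w // grid[1], (c + 1) * w // grid[1]
                corners = cv2.goodFeaturesToTrack(gray[y0:y1, x0:x1],
                                                  maxCorners=pts_per_region,
                                                  qualityLevel=0.01, minDistance=7)
                if corners is not None:
                    # shift region-local coordinates back into full-image coordinates
                    corners = corners.reshape(-1, 2) + np.array([x0, y0], np.float32)
                    points.append(corners)
        return np.concatenate(points).astype(np.float32)

    def track_features(prev_gray, next_gray, prev_pts):
        """Track the selected points into the next frame with pyramidal Lucas-Kanade (KLT)."""
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                       prev_pts.reshape(-1, 1, 2), None)
        status = status.ravel().astype(bool)
        return prev_pts[status], next_pts.reshape(-1, 2)[status]

Selecting points per region rather than over the whole image is one way of obtaining the uniform spatial distribution of key feature points that claim 7 relies on.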
Step 2: for the video sequence f_s, the 17 consecutive frames centred on the current image f_t form an associated processing unit used to predict the exact coordinate positions of the current frame's feature points in the image; for the currently processed image frame f_t, the tracking algorithm yields the feature point set sequence of this frame and of the 8 consecutively processed frames before and after it, denoted Ψ_s^i = {Ψ_{t-8}^i, …, Ψ_{t-1}^i, Ψ_t^i, Ψ_{t+1}^i, …, Ψ_{t+8}^i}, where t−8 ≤ s ≤ t+8;
Step 3: based on the region distribution and saliency characteristics, eight key feature points are selected from the feature point set of each image frame in the feature point set sequence {Ψ_s^i} of the processing unit, and the epipolar constraint transformation matrices F_{s,t} between the currently processed image frame f_t and each of the eight frames before and after it are calculated;
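As a hedged illustration of step 3 (the claim does not prescribe a particular routine), the epipolar constraint transformation matrix F_{s,t} can be estimated from the eight key correspondences with the classical 8-point algorithm, for example via OpenCV:

    import cv2
    import numpy as np

    def epipolar_constraint_matrix(pts_s, pts_t):
        """Estimate the fundamental matrix F such that x_t^T F x_s = 0
        from eight (or more) corresponding key feature points."""
        F, _mask = cv2.findFundamentalMat(np.asarray(pts_s, np.float32),
                                          np.asarray(pts_t, np.float32),
                                          cv2.FM_8POINT)
        return F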
Step 4: according to the epipolar geometry constraint principle of images, using the epipole transformation method, the projection line of each frame's feature point Ψ_s^i in image f_t is computed from Ψ_s^i and F_{s,t}, and the intersection of these projection lines, or the mean value of their pairwise intersection points, is taken as the prediction of the position of feature point i in image f_t, thereby forming the predicted feature point set Ω_t^i of the current image;
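A minimal sketch of the epipole-transformation prediction in step 4, assuming the convention that F_{s,t} maps a point of frame s to its epipolar line in frame t; the prediction is taken as the mean of the pairwise intersections of the projection lines:

    import numpy as np
    from itertools import combinations

    def predict_point(feature_in_neighbours, F_matrices):
        """Predict a feature point's position in the current frame f_t.

        feature_in_neighbours: list of (x, y) positions of the same feature
            in the neighbouring frames s.
        F_matrices: list of 3x3 matrices F_{s,t} mapping frame-s points to
            epipolar lines in frame t.
        """
        # Each neighbouring observation defines a projection (epipolar) line in f_t.
        lines = []
        for (x, y), F in zip(feature_in_neighbours, F_matrices):
            l = F @ np.array([x, y, 1.0])
            lines.append(l / np.linalg.norm(l[:2]))      # normalise the line
        # Intersect every pair of lines (cross product in homogeneous coordinates)
        # and average the intersection points.
        intersections = []
        for l1, l2 in combinations(lines, 2):
            p = np.cross(l1, l2)
            if abs(p[2]) > 1e-9:                         # skip near-parallel lines
                intersections.append(p[:2] / p[2])
        return np.mean(intersections, axis=0)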
Step 5: every time a newly acquired image enters the video sequence, let t = t+1 and repeat steps 2 to 4 to obtain the predicted feature point set of the new current frame; and for 1 < s < 8, where no complete prediction unit is yet available, let Ω_s^i = Ψ_s^i, so that a continuous predicted feature point set sequence Ω_s^i = {Ω_1^i, Ω_2^i, …, Ω_t^i, …, Ω_n^i, …} is obtained;
Step 6: the digital signal processor DSP#1 passes the continuous predicted point set sequence Ω_s^i to the digital signal processor DSP#2; DSP#2 takes the frame number s as the time axis (Z axis) and the two-dimensional image coordinates of the points in Ω_s^i as the X axis and Y axis, forming a three-dimensional coordinate system; according to the frame number of the point sequence Ω_s^i and its coordinate values (x_i, y_i) in each frame, the position of each predicted feature point in the newly built coordinate system is determined, the corresponding points of the successive image frames are joined into curves, and a virtual curve cluster along the time axis is formed; as new video images are continually acquired, the virtual curves keep extending; the smoothness of the curve formed by a predicted feature point over the frames reflects the degree of stability of that feature point in the video image sequence, and the whole virtual curve cluster reflects the degree of stability of the camera during shooting;
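To make the coordinate construction of step 6 concrete, the curve cluster can be stored as a simple array indexed by feature point, frame number and coordinate; this layout is an illustrative assumption, not the patent's data structure:

    import numpy as np

    def build_curve_cluster(predicted_sets):
        """Stack per-frame predicted feature point sets into a virtual curve cluster.

        predicted_sets: list over frame numbers s of arrays of shape (num_points, 2),
            holding the predicted (x, y) coordinates of each feature point in frame s.
        Returns an array of shape (num_points, num_frames, 2): the first index selects
            a virtual curve, the second is the frame number (the time / Z axis), and
            the last holds the X and Y coordinates.
        """
        return np.stack(predicted_sets, axis=1)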
Step 7: the digital signal processor DSP#2 projects each curve of the virtual curve cluster onto the X direction and the Y direction along the Z axis; taking the current image as the processing centre, the current frame together with the adjacent frames before and after it, 11 frames in total, forms a processing unit, and a convolution operation is applied in turn to perform the smoothing filtering of the curves, yielding the smoothed key point sequence corresponding to the predicted feature point set Ω_s^i;
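A minimal sketch of the dimension-reduced smoothing of step 7; the moving-average kernel is an assumption, since the claim only specifies an 11-frame processing unit and a convolution operation applied to the X and Y projections:

    import numpy as np

    def smooth_curve_cluster(curves, window=11):
        """Smooth each predicted-feature-point trajectory.

        curves: array of shape (num_points, num_frames, 2) holding the (x, y)
            coordinates of each predicted feature point in every frame
            (the virtual curve cluster, with the frame index as time axis).
        Returns an array of the same shape with both projections smoothed.
        """
        kernel = np.ones(window) / window              # simple moving-average kernel
        pad = window // 2
        smoothed = np.empty_like(curves)
        for i in range(curves.shape[0]):
            for axis in range(2):                      # X projection, then Y projection
                signal = curves[i, :, axis]
                padded = np.pad(signal, pad, mode="edge")
                smoothed[i, :, axis] = np.convolve(padded, kernel, mode="valid")
        return smoothed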
Step 8: according to the correspondence between the original image feature point set sequence Ψ_s^i and the smoothed key point sequence, and according to their region distribution and saliency characteristics, eight key feature points are again selected in both sets, and the epipolar constraint transformation matrices between the feature points of the current original image frame and the corresponding points of the five frames before and after it are calculated, for t−5 < s < t+5;
Step 9: according to the epipolar geometry constraint relation, using the epipole transformation method and the mean decision algorithm, the position of each feature point of the current image in the stabilized frame is estimated from these constraint matrices;
Step 10: for the currently processed image frame, if a feature point i corresponds to a moving object rather than to the static scene, its motion characteristics are analysed and the dynamics mathematical model method is used to solve for its smoothed motion state position, and the position of the corresponding point in the stabilized feature set is adjusted accordingly;
Step 11: for the currently processed frame f_t, the current image f_t is reconstructed from the positional relationship between the initially selected feature set Ψ_t^i and the corresponding feature set obtained after the above processing, and the reconstructed image is passed to the programmable logic device FPGA;
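One possible, hedged realisation of the reconstruction in step 11 estimates a global warp from the original feature positions to their stabilized counterparts and applies it to the current frame; the claim itself does not restrict the reconstruction to a homography:

    import cv2
    import numpy as np

    def reconstruct_frame(frame, original_pts, stabilized_pts):
        """Warp the current frame so its feature points move to the stabilized positions."""
        H, _ = cv2.findHomography(np.asarray(original_pts, np.float32),
                                  np.asarray(stabilized_pts, np.float32),
                                  cv2.RANSAC, 3.0)
        h, w = frame.shape[:2]
        return cv2.warpPerspective(frame, H, (w, h))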
Step 12: the programmable logic device FPGA reassembles the reconstructed images into a new video and, using the TCP/IP communication protocol, generates and outputs the new stable video image sequence as a standard digital video stream.
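Purely for orientation, the illustrative helpers sketched above can be strung together into a per-frame pipeline as follows; buffering, the epipole-based prediction of steps 2 to 5, the DSP partitioning and the dynamic-feature handling of step 10 are omitted, and the function names reused here are the hypothetical ones defined in the earlier sketches:

    import cv2

    def stabilize_frame(frames, t):
        """Illustrative pipeline reusing select_region_features, build_curve_cluster,
        smooth_curve_cluster and reconstruct_frame from the sketches above."""
        gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
        tracks = [select_region_features(gray[0])]            # step 1: region-wise selection
        for s in range(1, len(gray)):
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(gray[s - 1], gray[s],
                                                      tracks[-1].reshape(-1, 1, 2), None)
            keep = status.ravel().astype(bool)                # drop lost points in every frame
            tracks = [p[keep] for p in tracks] + [nxt.reshape(-1, 2)[keep]]
        cluster = build_curve_cluster(tracks)                 # step 6: virtual curve cluster
        smoothed = smooth_curve_cluster(cluster)              # step 7: X/Y projection smoothing
        return reconstruct_frame(frames[t], cluster[:, t, :], smoothed[:, t, :])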
CN201310320797.9A 2013-07-26 2013-07-26 Support the sequence of video images real-time stabilization apparatus and method of multifreedom motion Expired - Fee Related CN103414853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310320797.9A CN103414853B (en) 2013-07-26 2013-07-26 Support the sequence of video images real-time stabilization apparatus and method of multifreedom motion

Publications (2)

Publication Number Publication Date
CN103414853A true CN103414853A (en) 2013-11-27
CN103414853B CN103414853B (en) 2016-11-09

Family

ID=49607835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310320797.9A Expired - Fee Related CN103414853B (en) 2013-07-26 2013-07-26 Support the sequence of video images real-time stabilization apparatus and method of multifreedom motion

Country Status (1)

Country Link
CN (1) CN103414853B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7779265B2 (en) * 2005-12-13 2010-08-17 Microsoft Corporation Access control list inheritance thru object(s)
CN101729763A (en) * 2009-12-15 2010-06-09 中国科学院长春光学精密机械与物理研究所 Electronic image stabilizing method for digital videos
CN102724387A (en) * 2012-05-26 2012-10-10 安科智慧城市技术(中国)有限公司 Electronic image stabilizing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yang, Wang Xuanyin: "Video stabilization method based on the characteristics of the human eye", Journal of Jilin University (Engineering and Technology Edition) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885465A (en) * 2014-04-02 2014-06-25 中国电影器材有限责任公司 Method for generating dynamic data of dynamic seat based on video processing
CN104811588B (en) * 2015-04-10 2018-04-20 浙江工业大学 A kind of boat-carrying based on gyroscope is surely as control method
CN104811588A (en) * 2015-04-10 2015-07-29 浙江工业大学 Shipborne image stabilization control method based on gyroscope
CN105137428A (en) * 2015-07-28 2015-12-09 南京航空航天大学 Dechirp signal polar format imaging algorithm FPGA (Field Programmable Gate Array) realization method
CN105137428B (en) * 2015-07-28 2018-09-04 南京航空航天大学 Go the FPGA implementation method of the polar coordinates format image-forming algorithm of slope signal
CN105472373A (en) * 2015-11-18 2016-04-06 中国兵器工业计算机应用技术研究所 Bionic electronic image stabilizing method based on vestibular reflection mechanism and device
CN106101535A (en) * 2016-06-21 2016-11-09 北京理工大学 A kind of based on local and the video stabilizing method of mass motion disparity compensation
CN106101535B (en) * 2016-06-21 2019-02-19 北京理工大学 A kind of video stabilizing method based on part and mass motion disparity compensation
CN107135331A (en) * 2017-03-29 2017-09-05 北京航空航天大学 The UAV Video antihunt means and device of low-latitude flying scene
CN107135331B (en) * 2017-03-29 2019-12-03 北京航空航天大学 The UAV Video antihunt means and device of low-latitude flying scene
CN109799363A (en) * 2017-11-16 2019-05-24 台利斯公司 Mobile engine pseudo-velocity vector determines method, storage medium and determines system
CN109799363B (en) * 2017-11-16 2022-05-24 台利斯公司 Method, storage medium, and system for determining virtual velocity vector of mobile engine
CN108318506A (en) * 2018-01-23 2018-07-24 深圳大学 A kind of pipeline intelligent detection method and detecting system
CN110246224A (en) * 2018-03-08 2019-09-17 北京京东尚科信息技术有限公司 The surface denoising method and system of grid model
CN110246224B (en) * 2018-03-08 2024-05-24 北京京东尚科信息技术有限公司 Surface denoising method and system of grid model
CN111047579A (en) * 2019-12-13 2020-04-21 中南大学 Characteristic quality evaluation method and image characteristic uniform extraction method
CN111047579B (en) * 2019-12-13 2023-09-05 中南大学 Feature quality assessment method and image feature uniform extraction method
CN112769937A (en) * 2021-01-12 2021-05-07 济源职业技术学院 Medical treatment solid waste supervisory systems

Also Published As

Publication number Publication date
CN103414853B (en) 2016-11-09

Similar Documents

Publication Publication Date Title
CN103414853A (en) Device and method for stabilizing video image sequence capable of doing multi-degree of freedom movement in real time
Lyu et al. Chipnet: Real-time lidar processing for drivable region segmentation on an fpga
CN108537876A (en) Three-dimensional rebuilding method, device, equipment based on depth camera and storage medium
CN115100339B (en) Image generation method, device, electronic equipment and storage medium
Hirschmüller et al. Memory efficient semi-global matching
CN109493375A (en) The Data Matching and merging method of three-dimensional point cloud, device, readable medium
CN106251395A (en) A kind of threedimensional model fast reconstructing method and system
CN109754459A (en) It is a kind of for constructing the method and system of human 3d model
CN103136775A (en) KINECT depth map cavity filling method based on local restriction reconstruction
CN113706587B (en) Rapid point cloud registration method, device and equipment based on space grid division
CN110096993A (en) The object detection apparatus and method of binocular stereo vision
CN115205463A (en) New visual angle image generation method, device and equipment based on multi-spherical scene expression
CN112513713B (en) System and method for map construction
Deng et al. ToF and stereo data fusion using dynamic search range stereo matching
CN111899326A (en) Three-dimensional reconstruction method based on GPU parallel acceleration
CN115100382B (en) Nerve surface reconstruction system and method based on hybrid characterization
CN117058334A (en) Method, device, equipment and storage medium for reconstructing indoor scene surface
CN116486038A (en) Three-dimensional construction network training method, three-dimensional model generation method and device
Liu et al. Lightweight real-time stereo matching algorithm for AI chips
CN103716639B (en) Search algorithm of frame image motion estimation
Qiu et al. A camera self-calibration method based on parallel QPSO
CN112819849A (en) Mark point-free visual motion capture method based on three eyes
CN111784579A (en) Drawing method and device
Zhang et al. Dense reconstruction for tunnels based on the integration of double-line parallel photography and deep learning
Liao et al. VI-NeRF-SLAM: a real-time visual–inertial SLAM with NeRF mapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161109

Termination date: 20190726

CF01 Termination of patent right due to non-payment of annual fee