CN103426182A - Electronic image stabilization method based on a visual attention mechanism - Google Patents

Electronic image stabilization method based on a visual attention mechanism

Info

Publication number: CN103426182A (application CN201310287353.XA)
Authority: CN (China)
Prior art keywords: sub-block, image, pixel, reference frame
Other languages: Chinese (zh)
Other versions: CN103426182B (en)
Inventors: 朱娟娟 (Zhu Juanjuan), 郭宝龙 (Guo Baolong)
Original assignee: Xidian University
Current assignee: Xihang Sichuang Intelligent Technology (Xi'an) Co., Ltd.
History: application CN201310287353.XA filed by Xidian University; publication of CN103426182A; application granted; publication of CN103426182B
Legal status: Granted; Active

Abstract

An electronic image stabilization method based on a visual attention mechanism comprises the following steps: detecting the foreground motion region of a reference frame and marking the foreground motion sub-blocks; extracting globally salient feature points from the reference frame; matching feature point pairs; rejecting mismatched feature point pairs; obtaining the motion parameters; performing motion filtering; performing fast motion compensation; and reconstructing the undefined boundary information to obtain a panoramic image. The method obtains the compensation parameters through the extraction, matching, verification and motion-parameter computation of globally salient feature point pairs, and smooths the motion by adaptive filtering; it improves the visual stability and sharpness between video frames, removes or reduces the instability of video sequences, and improves the observation effect of a video surveillance or tracking system.

Description

Electronic image stabilization method based on a visual attention mechanism
Technical field
The invention belongs to the field of digital image processing, and in particular relates to an electronic image stabilization method based on a visual attention mechanism.
Background art
Because of the persistence of vision of the human visual system, when a camera jitters at a relatively high frequency the visual system readily perceives the shaking of the picture, so the image appears blurred and objects become hard to observe. High-frequency camera jitter makes the video sequence unstable. The motion in a video sequence can be divided into target motion and camera motion: the former is local motion, the latter global motion. For ease of observation, the motion in the video sequence must be smoothed and stabilized, and electronic image stabilization arose for this purpose. Electronic image stabilization uses image processing to estimate the offset between video frames and to compensate for it. It is now widely used in fields such as object detection and tracking, walking robots, video compression and image mosaicking.
Research on electronic image stabilization systems concentrates on two key techniques: global motion estimation and image motion compensation. The task of global motion estimation is to determine the inter-frame motion offset; the current research focus is on feature matching methods that can handle translation, rotation and zoom. In the prior art, the commonly used feature matching methods fall into the following classes:
(1) Feature matching based on Hough lines. For example, the electronic image stabilization method disclosed in Chinese invention patent application No. 201010528024.6, entitled "Ship-borne camera system electronic image stabilization method based on characteristic straight lines", extracts the sea-horizon line feature in the image with the Hough transform and then matches straight-line segments by their parameters and positions. The idea of this class of methods is simple and easy to implement, but the applicable scenes are rather limited: in simple scenes it may be difficult or even impossible to extract a line, while in complex scenes too many short, cluttered lines are extracted, which makes line matching difficult.
(2) Matching based on SIFT interest points. Based on scale-space theory, this method extracts the SIFT feature point sets of the reference frame and the current frame and registers them. It can handle translation, rotation, affine and view transformations between two images, obtains the motion parameters of the image accurately, and is quite robust. However, it extracts too many SIFT feature points and the matching process is complex, so it cannot run in real time.
(3) Matching based on Harris corners. For example, the electronic image stabilization method disclosed in Chinese invention patent No. 201110178881.2, entitled "Electronic image stabilization method based on feature matching", selects a small number of corner features in the image as the elementary units for motion estimation and performs feature tracking. Because feature points are extracted over the whole image, they easily fall on locally moving objects, so feature verification and iteration are usually needed to reject the local feature points, which reduces the speed and precision of global motion estimation.
Image motion compensation, the other key technique of an electronic image stabilization system, filters the original motion-parameter sequence to obtain the jitter component, and uses the jitter component as the compensation parameter to compensate the current frame. While removing jitter, it must preserve intentional scanning motion. Commonly used filtering methods include motion damping, mean filtering and Kalman filtering. The attenuation coefficient in motion damping is set empirically and does not suit all video sequences; mean filtering uses a simple average and introduces unnecessary low-frequency noise; Kalman filtering requires the process noise and observation noise to be known a priori and to obey zero-mean Gaussian distributions, which real systems cannot guarantee.
With the motion estimation and compensation methods used in the above electronic image stabilization systems, the speed of the algorithm depends on the extraction, matching and iterative computation of feature information. When a locally moving object exists in a compound motion scene, traditional feature point detection runs over the whole image and cannot prevent feature points from being selected on foreground targets, so the precision of global motion estimation drops. At the same time, such algorithms have difficulty handling complex random jitter and camera scanning motion simultaneously: the filter easily diverges, so that the false scene being output differs greatly from the real scanned scene and the observation effect suffers. In addition, computing the transformation parameters pixel by pixel for the current frame during compensation consumes time and hurts the real-time capability of the system, and the loss of boundary information also degrades visual observation.
Summary of the invention
In view of the above deficiencies, the object of the present invention is to provide an electronic image stabilization method based on a visual attention mechanism that can eliminate the image blur and instability produced by carrier motion, effectively stabilize the output video, and improve the observation effect of a video surveillance or tracking system.
To achieve this object, the present invention adopts the following technical solution:
An electronic image stabilization method based on a visual attention mechanism comprises the following steps:
Step 1, a step of detecting the foreground motion region of the reference frame and marking the foreground motion sub-blocks;
Sub-step 1a, average a segment of consecutive frames of the video sequence to obtain the background image B(x, y), where x and y denote the x-axis and y-axis coordinates of a pixel;
Sub-step 1b, define the image at time k-1 as the reference frame f_{k-1}(x, y) and the image at time k as the current frame f_k(x, y), and compute their difference images against the background image B(x, y):
reference frame difference image D_{k-1}(x, y) = abs[f_{k-1}(x, y) - B(x, y)],
current frame difference image D_k(x, y) = abs[f_k(x, y) - B(x, y)];
Sub-step 1c, divide the reference frame difference image D_{k-1}(x, y) and the current frame difference image D_k(x, y) each into M × N non-overlapping sub-blocks of I × J pixels, and compute the mean absolute error in each sub-block:
reference frame sub-block mean absolute error B_{k-1}(m, n) = (1/(I×J)) Σ_{i=1}^{I} Σ_{j=1}^{J} D_{k-1}(i, j),
current frame sub-block mean absolute error B_k(m, n) = (1/(I×J)) Σ_{i=1}^{I} Σ_{j=1}^{J} D_k(i, j),
where i = 1, …, I and j = 1, …, J index the pixels within sub-block (m, n), m = 1, …, M, n = 1, …, N;
Sub-step 1d, compute the mean of the reference frame sub-block differences and the mean of the current frame sub-block differences as the thresholds Th1 and Th2 respectively:
Th1 = Σ B_{k-1}(m, n)/(M×N),
Th2 = Σ B_k(m, n)/(M×N);
Sub-step 1e, judge preliminarily by binarization whether each sub-block is a motion sub-block; define MO_{k-1}(m, n) as a reference frame motion sub-block and MO_k(m, n) as a current frame motion sub-block, with the decision rules:
MO_{k-1}(m, n) = 1 if B_{k-1}(m, n) > Th1, 0 otherwise,
MO_k(m, n) = 1 if B_k(m, n) > Th2, 0 otherwise;
Sub-step 1f, perform spatial-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n) and delete the sub-blocks that do not belong to the moving foreground;
Sub-step 1g, perform temporal-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n) and delete the sub-blocks that do not belong to the moving foreground;
after the spatial- and temporal-domain similarity detection, the finally retained motion sub-blocks constitute the moving foreground region;
Step 2, a step of extracting the globally salient feature points of the reference frame;
Sub-step 2a, compute the gradient images of the reference frame f_{k-1}(x, y):
X = f_{k-1} ⊗ (-1, 0, 1), Y = f_{k-1} ⊗ (-1, 0, 1)^T;
where ⊗ denotes convolution, X is the gradient image in the horizontal direction, Y is the gradient image in the vertical direction, and [·]^T denotes matrix transposition;
Sub-step 2b, construct the autocorrelation matrix R:
R = [X²⊗w, XY⊗w; XY⊗w, Y²⊗w];
where w is the Gaussian smoothing window function and σ is the standard deviation of the window function;
Sub-step 2c, compute the Harris corner response R_H:
R_H = λ_1 × λ_2 - 0.05·(λ_1 + λ_2)²;
where λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix R;
Sub-step 2d, divide the reference frame f_{k-1}(x, y) into M × N non-overlapping sub-blocks of I × J pixels, and take the maximum Harris corner response within each sub-block of f_{k-1}(x, y) as the feature response value R_HMAX(m, n) of that sub-block;
Sub-step 2e, sort the feature response values R_HMAX(m, n) from high to low, take the top 20% of the values, and record the corresponding positions as the reference frame feature points (x_i, y_i);
Sub-step 2f, use the result of sub-step 1g to check each reference frame feature point (x_i, y_i): judge whether the reference frame motion sub-block MO_{k-1}(m, n) corresponding to the feature point, or any of its 8 neighbors, equals 1; if so, the feature point belongs to a moving target or to the unreliable region at a motion boundary, and it is deleted;
Step 3, a feature point pair matching step;
Sub-step 3a, in the reference frame f_{k-1}(x, y), build a feature window of P × Q pixels centered on the reference frame feature point (x_i, y_i);
Sub-step 3b, use a full-search strategy with the minimum-error sum of absolute differences (SAD) criterion to find the corresponding matching window in the current frame f_k(x, y); the matching window has a size of (P+2T) × (Q+2T) pixels, and the center point of the matching window is the current frame matching feature point (x̂_i, ŷ_i),
where T denotes the maximum pixel offset in the horizontal and vertical directions, and the SAD criterion is computed as:
SAD(x, y) = Σ_{p=1}^{P} Σ_{q=1}^{Q} |f_{k-1}(p, q) - f_k(p+x, q+y)|, p = 1, …, P, q = 1, …, Q, x, y = -T, …, T;
Step 4: a step of rejecting mismatched feature point pairs;
according to the Euclidean distance formula d_i = sqrt((x̂_i - x_i)² + (ŷ_i - y_i)²), compute the translation distance of the i-th feature point pair of the reference frame and the current frame in the horizontal and vertical directions; verify the matched feature point pairs by distance using the normal distribution property of the distances, reject the mismatched feature point pairs, and obtain C correctly matched feature point pairs;
Step 5: a step of obtaining the motion parameters;
Sub-step 5a, establish the motion parameter model relating the reference frame feature points (x_i, y_i) and the current frame matching feature points (x̂_i, ŷ_i):
[x̂; ŷ] = [1 -θ; θ 1][x; y] + [u; v],
where θ is the image rotation angle, u is the vertical pixel translation, and v is the horizontal pixel translation; θ, u and v constitute the motion parameters;
Sub-step 5b, substitute the C correctly matched feature point pairs into the motion parameter model and arrange it into the motion parameter matrix equation, with
B = [x̂_1 ŷ_1 x̂_2 ŷ_2 … x̂_C ŷ_C]^T, A formed from the rows (x_i, y_i, 1), and m = [θ u v]^T;
Sub-step 5c, solve the overdetermined linear equation B = Am; the least-squares solution of the motion parameter matrix m is m = (A^T A)^{-1} A^T B, thereby obtaining the motion parameters;
Step 6: a motion filtering step;
Sub-step 6a, let the state vector be S(k) = [u(k), v(k), du(k), dv(k)]^T and the measurement vector be Z(k) = [u(k), v(k)]^T, where u(k) is the vertical pixel translation at time k, v(k) is the horizontal pixel translation at time k, du(k) is the instantaneous velocity corresponding to the vertical pixel translation at time k, and dv(k) is the instantaneous velocity corresponding to the horizontal pixel translation at time k;
Sub-step 6b, establish the linear discrete system model to obtain the state equation and the observation equation:
state equation: S(k) = F·S(k-1) + δ,
observation equation: Z(k) = H·S(k) + η;
where F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1] is the state transition matrix, H = [1 0 0 0; 0 1 0 0] is the observation matrix, δ and η are mutually independent white noises with δ ~ N(0, Φ) and η ~ N(0, Γ), Φ being the variance matrix of the process noise and Γ the variance matrix of the observation noise;
Sub-step 6c, establish the system state prediction equation, and predict and update its covariance matrix to complete the motion filtering:
the system state prediction equation is: S(k|k-1) = F·S(k-1|k-1);
the covariance matrix P(k|k-1) of S(k|k-1) is predicted by: P(k|k-1) = F·P(k-1)·F^T + Φ(k-1), where Φ is the variance matrix of the process noise;
the system state update equation is: S(k|k) = S(k|k-1) + K_g(k)·ε(k);
the filtering variance matrix of S(k|k) at time k is updated by: P(k|k) = (Ψ - K_g(k)·H)·P(k|k-1);
where K_g(k) = P(k|k-1)·H^T·(H·P(k|k-1)·H^T + Γ(k))^{-1} is the Kalman gain, ε(k) = Z(k) - H·S(k|k-1) is the innovation sequence, Γ is the variance matrix of the observation noise, and Ψ is the identity matrix of the same order;
Step 7, a fast motion compensation step;
Sub-step 7a, take the differences between the translation components before and after filtering, u_jitter = u - u_filter and v_jitter = v - v_filter, combined with the image rotation angle θ, as the compensation parameters (θ, u_jitter, v_jitter), where u is the vertical pixel translation, v is the horizontal pixel translation, u_filter and v_filter are the filtered translations, and u_jitter and v_jitter are the vertical and horizontal jitter components;
Sub-step 7b, use the motion parameter model to compute the rotation result of the first pixel [x, y] of the first row of the current frame f_k(x, y): [x'; y'] = [1 -θ; θ 1][x; y] + [u_jitter; v_jitter];
Sub-step 7c, according to the linear structure of the image coordinates, compute the pixels of the remaining rows and columns of the current frame f_k(x, y) by additions and subtractions, obtain the new coordinates [x', y'] of the current frame pixels, and realize the compensation of the current frame;
Step 8, a step of reconstructing the undefined boundary information to obtain a panoramic image;
take the reference frame f_{k-1}(x, y) as the initial panorama and use image mosaicking to fuse the reference frame and the current frame; determine the gray value of each pixel (x', y') of the fused image according to the image fade-in/fade-out strategy to obtain the compensated image f(x', y') and realize the panoramic image output:
f(x', y') = f_{k-1}(x', y') if (x', y') ∈ f_{k-1} only; τ·f_{k-1}(x', y') + ξ·f_k(x', y') if (x', y') ∈ (f_{k-1} ∩ f_k); f_k(x', y') if (x', y') ∈ f_k only;
in the above formula τ and ξ denote weights given by the ratio of the pixel's positional difference from the boundary point to the width of the overlap region, with τ + ξ = 1 and 0 < τ, ξ < 1; within the overlap region τ decreases gradually from 1 to 0 and ξ increases gradually from 0 to 1.
In a further refinement, step 6 also includes a correction step for the covariance matrix; after sub-step 6c the following steps are carried out:
Sub-step 6d, use the property of the innovation sequence ε(k) to judge whether the filter is diverging:
ε(k)^T·ε(k) ≤ γ·Trace[H·P(k|k-1)·H^T + Γ(k)];
where γ is an adjustable coefficient with γ > 1;
Sub-step 6e, when the inequality of sub-step 6d holds, the filter is in its normal working state and the optimal estimate of the current state is obtained directly; when it does not hold, the actual error exceeds γ times the theoretical estimate and the filter is about to diverge; the covariance matrix P(k|k-1) of sub-step 6c is then corrected with the weighting coefficient C(k), and after the correction the adaptive filtering of the motion parameters is completed;
the correction formulas are:
P(k|k-1) = C(k)·F·P(k-1)·F^T + Φ(k),
C(k) = (ε(k)^T·ε(k) - Trace[H·Φ(k)·H^T + Γ(k)]) / Trace[H·F·P(k)·F^T·H^T].
In a further refinement, the concrete steps of the spatial-domain similarity detection of the reference frame motion sub-blocks MO_{k-1}(m, n) in sub-step 1f are: count the number of motion sub-blocks among the 8 neighbors of the reference frame motion sub-block MO_{k-1}(m, n); if there are fewer than 3 motion sub-blocks, the block is an isolated sub-block that differs greatly from the background, does not belong to the moving foreground, and is deleted; otherwise the sub-block is similar to its neighborhood and they all belong to the foreground motion region:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_{k-1}(m±l, n±l) ≥ 3, 0 otherwise, where l is a variable taking the values 0 and 1.
In a further refinement, the concrete steps of the temporal-domain similarity detection of the reference frame motion sub-blocks MO_{k-1}(m, n) in sub-step 1g are: judge whether a motion sub-block exists among the 8 motion sub-blocks neighboring the current frame motion sub-block MO_k(m, n); if so, the target is continuous in time and is a real moving foreground; otherwise it is regarded as an occasional false detection and must be deleted; the finally retained motion sub-blocks constitute the moving foreground region:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_k(m±l, n±l) ≥ 1, 0 otherwise, where l is a variable taking the values 0 and 1.
In a further refinement, the distance verification of the matched feature point pairs in step 4 is: judge whether the translation distance d_i of the i-th feature point pair of the reference frame and the current frame in the horizontal and vertical directions satisfies the condition
|d_i - μ| > 3σ, where μ and σ are the mean and standard deviation of d_i respectively;
when this condition is satisfied, the feature point pair is regarded as a mismatched pair and is rejected.
It can be seen from the above technical solution that the method of the invention uses the motion feature information of the image to segment the moving foreground and the background of the video sequence based on their motion differences, and extracts and registers globally salient feature points in the background region from which foreground targets have been removed, improving the precision of global motion estimation. Guided by human visual attention and simulating the smoothness property of vision, it builds a low-frequency uniform motion model of the camera over the continuous imaging time and filters the high-frequency components of the global motion vector sequence to obtain the compensation parameters. During compensation, the linear storage structure of the image is exploited so that only the transformation parameters of the first row of the image need to be computed, improving the real-time performance of the system; combined with image mosaicking, the boundary is reconstructed and a stable, clear panoramic image is obtained.
Compared with the prior art, the present invention has the following technical effects:
(1) The Harris corner operator is improved, guaranteeing that the feature points are visually salient points carrying unique information: the invention divides the image into blocks and extracts a fixed number of feature points with larger corner responses as the salient feature points; the fixed number prevents too many feature points being extracted in complex regions, and the saliency of the points avoids mismatches in regions of simple repeated texture;
(2) Through foreground motion region marking and exclusion, the extracted feature points are guaranteed to lie on the background: the feature extraction of the invention is not performed directly on the whole image; the motion regions are first judged, marked and excluded, and feature detection is carried out only in the background region, so the feature points well represent the motion of the camera, i.e. the global motion information;
(3) Combining the distance verification with feature matching further improves the precision of global motion estimation: when a mismatch occurs, the distance criterion quickly identifies and rejects the pair, so that the feature points participating in the motion parameter computation are correct matches, improving the estimation precision;
(4) The adaptive Sage-Husa filtering algorithm simulates the smoothness property of vision: on the one hand it smooths the motion vectors well to reduce video jitter, and on the other hand it effectively follows the intentional scanning motion of the camera system. Compared with traditional Kalman filtering, which has low accuracy and may even diverge, adaptive Sage-Husa filtering continually corrects the prediction with the observed data while estimating and correcting the statistics of the process noise and observation noise in the standard Kalman filter in real time, enhancing the tracking ability for abrupt changes, reducing the model error and improving the filtering accuracy;
(5) Fast compensation and reconstruction of the undefined region guarantee the smoothness and completeness of the output video for visual observation: the invention exploits the rotational invariance of the relative positions between pixels and adopts a fast compensation method for image rotation, improving the efficiency of the coordinate computation and guaranteeing the real-time performance of the system; at the same time, to prevent the undefined dark border produced by image compensation from harming the visual effect, the invention uses mosaicking and fade-in/fade-out blending to reconstruct the undefined region and output a panoramic image sequence.
Brief description of the drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is the flowchart of the method of the invention;
Fig. 2 is the normal distribution of the feature point distances;
Fig. 3a is the reference frame image with the motion sub-blocks marked;
Fig. 3b is the current frame image with the motion sub-blocks marked;
Fig. 3c is the reference frame image after spatial-domain similarity detection;
Fig. 3d is the current frame image after temporal-domain similarity detection;
Fig. 4a is the reference frame image with all feature points extracted;
Fig. 4b is the reference frame image with the salient feature points extracted;
Fig. 4c is the result after removing the local feature points;
Fig. 4d is the result of the registered feature points in the current frame;
Fig. 5a is the image before rotation;
Fig. 5b is the image after rotation;
Fig. 6a compares the horizontal offset component after processing by the adaptive filtering of the preferred embodiment of the method of the invention and by Kalman filtering;
Fig. 6b compares the vertical offset component after processing by the adaptive filtering of the preferred embodiment of the method of the invention and by Kalman filtering.
Embodiment
To make the above and other objects, features and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
From sensitivity analysis of motion attention in the human visual system it is known that shaking of the background weakens the attention of the visual system to foreground targets, while inconsistent motion of foreground targets interferes with the detection of the global motion of the image. To eliminate or alleviate the instability of the video sequence and improve the observation effect of a video surveillance or tracking system, the basic ideas of the method of the invention are:
First, in the motion estimation module, the background image is obtained by averaging consecutive frames, and the adjacent reference frame and current frame are each differenced with the background image; temporal- and spatial-domain similarity detection based on image blocks then yields the foreground motion region and separates foreground from background; feature points with larger Harris corner responses are extracted in the background region of the reference frame and registered, and the least-squares solution of the overdetermined linear equation yields the global motion parameters;
Second, in the motion compensation module, improved Sage-Husa filtering preferably estimates and corrects the statistics of the process noise and observation noise in real time, performing on-line estimation of both noises to obtain the final compensation parameters; a linear fast compensation algorithm for the image together with image mosaicking realizes real-time panoramic stabilization, outputting a visually complete and smooth real scene while guaranteeing the real-time operation of the system.
Through the extraction, matching, verification and motion-parameter computation of globally salient feature point pairs, and adaptive filtering to smooth the motion, the compensation parameters are obtained and the visual stability and sharpness between video frames are improved.
The above is the core idea of the present invention. The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in other ways different from those described here, and those skilled in the art can make similar generalizations without departing from the spirit of the present invention, so the present invention is not limited by the specific embodiments disclosed below.
With reference to Fig. 1, the flowchart of the method of the invention, the concrete steps of the method are as follows:
Step 1, detect the foreground motion region of the reference frame and mark the foreground motion sub-blocks;
In this step, after the background image is extracted, two consecutive frames (the reference frame and the current frame) are each differenced with the background image; the reference frame difference image and the current frame difference image are divided into a number of sub-blocks, and the mean difference value within the sub-blocks is used as the threshold to determine the motion sub-blocks; block-wise temporal-spatial similarity detection then finds and marks the foreground motion region, realizing fast segmentation of the foreground and background of the video sequence and removing the interference of moving targets with visual attention;
The concrete sub-steps of step 1 are as follows:
Sub-step 1a, average a segment of consecutive frames of the video sequence to obtain the background image B(x, y); for example, the first 25 frames (1 second) of the video sequence are averaged to obtain the background image; x and y denote the x-axis and y-axis coordinates of a pixel;
Sub-step 1b, define the image at time k-1 as the reference frame f_{k-1}(x, y) and the image at time k as the current frame f_k(x, y), and compute their difference images against the background image B(x, y):
reference frame difference image D_{k-1}(x, y) = abs[f_{k-1}(x, y) - B(x, y)],
current frame difference image D_k(x, y) = abs[f_k(x, y) - B(x, y)];
Sub-step 1c, divide the reference frame difference image D_{k-1}(x, y) and the current frame difference image D_k(x, y) each into M × N non-overlapping sub-blocks of I × J pixels, for example 16 × 16 pixels, and compute the mean absolute error in each sub-block:
reference frame sub-block mean absolute error B_{k-1}(m, n) = (1/(I×J)) Σ_{i=1}^{I} Σ_{j=1}^{J} D_{k-1}(i, j),
current frame sub-block mean absolute error B_k(m, n) = (1/(I×J)) Σ_{i=1}^{I} Σ_{j=1}^{J} D_k(i, j),
where i = 1, …, I and j = 1, …, J index the pixels within sub-block (m, n), m = 1, …, M, n = 1, …, N;
Sub-step 1d, compute the mean of the reference frame sub-block differences and the mean of the current frame sub-block differences as the thresholds Th1 and Th2 respectively:
Th1 = Σ B_{k-1}(m, n)/(M×N),
Th2 = Σ B_k(m, n)/(M×N);
Sub-step 1e, judge preliminarily by binarization whether each sub-block is a motion sub-block (Moving Object, abbreviated MO); define MO_{k-1}(m, n) as a reference frame motion sub-block and MO_k(m, n) as a current frame motion sub-block, with the decision rules:
MO_{k-1}(m, n) = 1 if B_{k-1}(m, n) > Th1, 0 otherwise,
MO_k(m, n) = 1 if B_k(m, n) > Th2, 0 otherwise;
Sub-step 1f, perform spatial-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n): count the number of motion sub-blocks among the 8 neighbors of MO_{k-1}(m, n); if there are fewer than 3 motion sub-blocks, the block is an isolated sub-block that differs greatly from the background, does not belong to the moving foreground, and is deleted; otherwise the sub-block is similar to its neighborhood and they all belong to the foreground motion region:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_{k-1}(m±l, n±l) ≥ 3, 0 otherwise, where l is a variable taking the values 0 and 1;
Sub-step 1g, perform temporal-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n): judge whether a motion sub-block exists among the 8 motion sub-blocks neighboring the corresponding current frame motion sub-block MO_k(m, n); if so, the target is continuous in time and is a real moving foreground; otherwise it is regarded as an occasional false detection and must be deleted:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_k(m±l, n±l) ≥ 1, 0 otherwise, where l is a variable taking the values 0 and 1;
After the spatial- and temporal-domain similarity detection, the finally retained motion sub-blocks constitute the moving foreground region.
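The block-wise decisions of step 1 map naturally onto array operations. The following NumPy sketch illustrates sub-steps 1a-1g under stated assumptions: all function and variable names are illustrative rather than from the patent, the frames are grayscale float arrays whose dimensions are multiples of the block size, and the neighbor sums read the spatial rule as "at least 3 motion blocks among the 8 neighbors" and the temporal rule as "at least one motion block in the 3 × 3 neighborhood".

```python
import numpy as np

def neighbour_sum(mask, include_center=False):
    """Sum of each block's 3x3 neighbourhood on a zero-padded grid."""
    pad = np.pad(mask.astype(int), 1)
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if include_center or (dy, dx) != (0, 0)]
    total = sum(np.roll(np.roll(pad, dy, axis=0), dx, axis=1)
                for dy, dx in shifts)
    return total[1:-1, 1:-1]

def foreground_blocks(frames, ref, cur, block=16):
    """Sub-steps 1a-1g: mark the foreground motion sub-blocks of the reference frame."""
    B = frames.mean(axis=0)                             # 1a: background by averaging
    D_ref, D_cur = np.abs(ref - B), np.abs(cur - B)     # 1b: difference images

    M, N = ref.shape[0] // block, ref.shape[1] // block
    mae = lambda D: D[:M * block, :N * block].reshape(M, block, N, block).mean(axis=(1, 3))
    B_ref, B_cur = mae(D_ref), mae(D_cur)               # 1c: per-block mean absolute error

    MO_ref = B_ref > B_ref.mean()                       # 1d-1e: threshold = mean of blocks
    MO_cur = B_cur > B_cur.mean()

    spatial = MO_ref & (neighbour_sum(MO_ref) >= 3)              # 1f: drop isolated blocks
    temporal = neighbour_sum(MO_cur, include_center=True) >= 1   # 1g: temporal support
    return spatial & temporal                           # retained moving-foreground blocks
```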
Step 2, extract the globally salient feature points of the reference frame;
In this step the Harris corner response is computed first, and the maximum Harris corner response in each sub-block of the reference frame f_{k-1}(x, y) is taken as the feature response value of that sub-block; the values are then sorted, and the positions corresponding to the top 20% of the feature response values are taken as feature points, i.e. visually salient feature points carrying unique information; according to the marking result of the foreground motion region in step 1, each feature point is checked for lying in the foreground motion region and deleted if it does; what remains are the visually observed globally salient feature points;
The concrete sub-steps of step 2 are as follows:
Sub-step 2a, compute the gradient images of the reference frame f_{k-1}(x, y):
X = f_{k-1} ⊗ (-1, 0, 1), Y = f_{k-1} ⊗ (-1, 0, 1)^T;
where ⊗ denotes convolution, X is the gradient image in the horizontal direction, Y is the gradient image in the vertical direction, and [·]^T denotes matrix transposition;
Sub-step 2b, construct the autocorrelation matrix R:
R = [X²⊗w, XY⊗w; XY⊗w, Y²⊗w];
where w is the Gaussian smoothing window function and σ is the standard deviation of the window function;
Sub-step 2c, compute the Harris corner response R_H:
R_H = λ_1 × λ_2 - 0.05·(λ_1 + λ_2)²;
where λ_1 and λ_2 are the two eigenvalues of the autocorrelation matrix R;
Sub-step 2d, divide the reference frame f_{k-1}(x, y) into M × N non-overlapping sub-blocks of I × J pixels, and take the maximum Harris corner response within each sub-block of f_{k-1}(x, y) as the feature response value R_HMAX(m, n) of that sub-block;
Sub-step 2e, sort the feature response values R_HMAX(m, n) from high to low, take the top 20% of the values, and record the corresponding positions as the reference frame feature points (x_i, y_i);
Sub-step 2f, use the result of sub-step 1g to check each reference frame feature point (x_i, y_i): judge whether the reference frame motion sub-block MO_{k-1}(m, n) corresponding to the feature point, or any of its 8 neighbors, equals 1; if so, the feature point belongs to a moving target or to the unreliable region at a motion boundary, and it is deleted.
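As an illustration of step 2, the sketch below computes the Harris response with the (-1, 0, 1) gradient kernels and a Gaussian window, then keeps one feature point per sub-block for the top 20% of block responses. It is a minimal reading of sub-steps 2a-2e, assuming SciPy for the convolutions; names such as salient_points are illustrative, not the patent's.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def salient_points(ref, block=16, k=0.05, sigma=1.0, keep=0.2):
    """Block-wise salient Harris points (sub-steps 2a-2e)."""
    kern = np.array([[-1.0, 0.0, 1.0]])
    X = convolve(ref, kern)                 # 2a: horizontal gradient
    Y = convolve(ref, kern.T)               # 2a: vertical gradient

    # 2b: autocorrelation terms smoothed by a Gaussian window w
    A = gaussian_filter(X * X, sigma)
    B = gaussian_filter(Y * Y, sigma)
    C = gaussian_filter(X * Y, sigma)

    # 2c: R_H = det(R) - k*trace(R)^2 = l1*l2 - k*(l1 + l2)^2
    R_H = A * B - C * C - k * (A + B) ** 2

    M, N = ref.shape[0] // block, ref.shape[1] // block
    blocks = R_H[:M * block, :N * block].reshape(M, block, N, block)
    R_max = blocks.max(axis=(1, 3))          # 2d: per-block maximum response

    # 2e: keep the blocks whose response lies in the top 20%
    thresh = np.quantile(R_max, 1.0 - keep)
    points = []
    for m, n in zip(*np.where(R_max >= thresh)):
        dy, dx = np.unravel_index(blocks[m, :, n, :].argmax(), (block, block))
        points.append((m * block + dy, n * block + dx))
    return points
```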
Step 3, the feature point pair matching step;
Based on the consistency of the surrounding pixel-block information of correctly matched point pairs under visual judgment, a feature window is built around each feature point in the reference frame and the corresponding matching window is found in the current frame; the center point of the matching window is the matching feature point, and a reference frame feature point together with its current frame matching feature point constitutes a feature point pair.
The concrete sub-steps of step 3 are as follows:
Sub-step 3a, in the reference frame f_{k-1}(x, y), build a feature window of P × Q pixels centered on the reference frame feature point (x_i, y_i);
Sub-step 3b, use a full-search strategy with the minimum-error sum of absolute differences (SAD) criterion to find the corresponding matching window in the current frame f_k(x, y); the matching window has a size of (P+2T) × (Q+2T) pixels, and the center point of the matching window is the current frame matching feature point (x̂_i, ŷ_i),
where T denotes the maximum pixel offset in the horizontal and vertical directions, and the SAD criterion is computed as:
SAD(x, y) = Σ_{p=1}^{P} Σ_{q=1}^{Q} |f_{k-1}(p, q) - f_k(p+x, q+y)|, p = 1, …, P, q = 1, …, Q, x, y = -T, …, T;
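A direct transcription of the full-search SAD matching of sub-step 3b might look as follows; border handling is omitted for brevity, and the window sizes are illustrative defaults, not values prescribed by the patent.

```python
import numpy as np

def match_point(ref, cur, x, y, P=15, Q=15, T=8):
    """Full-search SAD matching around feature point (x, y) (sub-step 3b)."""
    hp, hq = P // 2, Q // 2
    tmpl = ref[x - hp:x + hp + 1, y - hq:y + hq + 1]   # P x Q window in f_{k-1}
    best, best_xy = np.inf, (x, y)
    for dx in range(-T, T + 1):                        # displacements up to T
        for dy in range(-T, T + 1):
            cand = cur[x + dx - hp:x + dx + hp + 1,
                       y + dy - hq:y + dy + hq + 1]
            sad = np.abs(tmpl - cand).sum()            # SAD criterion
            if sad < best:
                best, best_xy = sad, (x + dx, y + dy)
    return best_xy                                     # matching point (x̂, ŷ)
```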
Step 4: the step of rejecting mismatched feature point pairs;
Collect, for the feature points of the consecutive frames (reference frame and current frame), the distances of the horizontal and vertical translations; verify the matched feature point pairs by distance using the normal distribution property of the distances, reject the mismatched pairs, and finally obtain C correctly matched feature point pairs;
According to the Euclidean distance formula, d_i = sqrt((x̂_i - x_i)² + (ŷ_i - y_i)²), where d_i is the translation distance of the i-th feature point pair of the reference frame and the current frame in the horizontal and vertical directions; when |d_i - μ| > 3σ, the pair is regarded as a mismatched feature point pair and is rejected, μ and σ being the mean and standard deviation of d_i respectively.
As shown in Fig. 2, experimental statistics show that d_i approximately obeys a normal distribution, d_i ~ N(μ, σ²).
According to the "3σ rule" of the normal distribution, the data in the interval [μ-3σ, μ+3σ] account for 99.7% of all data (Fig. 2); therefore, when |d_i - μ| > 3σ, the feature point pair is regarded as a mismatch.
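The 3σ verification of step 4 reduces to a few lines; this sketch assumes the matched points are given as parallel arrays, with illustrative names.

```python
import numpy as np

def reject_mismatches(pts_ref, pts_cur):
    """3-sigma distance verification of matched pairs (step 4)."""
    pr = np.asarray(pts_ref, dtype=float)
    pc = np.asarray(pts_cur, dtype=float)
    d = np.hypot(pc[:, 0] - pr[:, 0], pc[:, 1] - pr[:, 1])  # Euclidean distances
    mu, sigma = d.mean(), d.std()
    keep = np.abs(d - mu) <= 3 * sigma       # keep pairs inside mu +/- 3 sigma
    return pr[keep], pc[keep]
```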
Step 5: the step of obtaining the motion parameters;
In this step the motion parameter model is established, the correctly matched feature point pairs are substituted into the model and arranged into the motion parameter matrix equation, and the motion parameters are obtained by solving for the least-squares solution of the overdetermined system of linear equations;
The concrete sub-steps of step 5 are as follows:
Sub-step 5a, establish the motion parameter model relating the reference frame feature points (x_i, y_i) and the current frame matching feature points (x̂_i, ŷ_i):
[x̂; ŷ] = [1 -θ; θ 1][x; y] + [u; v],
where θ is the image rotation angle, u is the vertical pixel translation, and v is the horizontal pixel translation; θ, u and v constitute the motion parameters (i.e. the rotation and translation parameters);
Sub-step 5b, substitute the C correctly matched feature point pairs obtained by the verification of step 4 into the motion parameter model and arrange it into the motion parameter matrix equation, with
B = [x̂_1 ŷ_1 x̂_2 ŷ_2 … x̂_C ŷ_C]^T, A formed from the rows (x_i, y_i, 1), and m = [θ u v]^T;
Sub-step 5c, solve the overdetermined linear equation B = Am; the least-squares solution of the motion parameter matrix m is m = (A^T A)^{-1} A^T B, thereby obtaining the motion parameters.
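For sub-steps 5a-5c, the sketch below writes the small-angle model x̂ = x - θy + u, ŷ = θx + y + v as one overdetermined system and solves it by least squares; this particular stacking of the design matrix is one convenient arrangement, not necessarily the patent's exact layout of A and B.

```python
import numpy as np

def motion_params(pr, pc):
    """Least-squares estimate of (theta, u, v) from matched points (step 5).

    pr, pc: (C, 2) arrays of reference and current frame points.
    """
    x, y = pr[:, 0], pr[:, 1]
    xh, yh = pc[:, 0], pc[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    # Stack both coordinate equations into one overdetermined system A m = b:
    # xh - x = -theta*y + u ; yh - y = theta*x + v
    A = np.block([
        [-y[:, None], one[:, None], zero[:, None]],
        [ x[:, None], zero[:, None], one[:, None]],
    ])
    b = np.concatenate([xh - x, yh - y])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)   # m = [theta, u, v]
    return m
```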
Step 6: the motion filtering step;
Accumulating the motion parameters gives the translation motion curve; the translation motion curve is smoothed by filtering, simulating the smoothness of visual motion and strengthening the tracking ability for abrupt changes;
The concrete sub-steps of step 6 are as follows:
Sub-step 6a, let the state vector be S(k) = [u(k), v(k), du(k), dv(k)]^T and the measurement vector be Z(k) = [u(k), v(k)]^T, where u(k) is the vertical pixel translation at time k, v(k) is the horizontal pixel translation at time k, du(k) is the instantaneous velocity corresponding to the vertical pixel translation at time k, and dv(k) is the instantaneous velocity corresponding to the horizontal pixel translation at time k;
Sub-step 6b, establish the linear discrete system model to obtain the state equation and the observation equation:
state equation: S(k) = F·S(k-1) + δ,
observation equation: Z(k) = H·S(k) + η;
where F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1] is the state transition matrix, H = [1 0 0 0; 0 1 0 0] is the observation matrix, δ and η are mutually independent white noises with δ ~ N(0, Φ) and η ~ N(0, Γ), Φ being the variance matrix of the process noise and Γ the variance matrix of the observation noise;
Sub-step 6c, establish the system state prediction equation, and predict and update its covariance matrix to complete the motion filtering:
the system state prediction equation is: S(k|k-1) = F·S(k-1|k-1);
the covariance matrix P(k|k-1) of S(k|k-1) is predicted by: P(k|k-1) = F·P(k-1)·F^T + Φ(k-1);
the system state update equation is: S(k|k) = S(k|k-1) + K_g(k)·ε(k);
the filtering variance matrix of S(k|k) at time k is updated by: P(k|k) = (Ψ - K_g(k)·H)·P(k|k-1), where K_g(k) = P(k|k-1)·H^T·(H·P(k|k-1)·H^T + Γ(k))^{-1} is the Kalman gain, ε(k) = Z(k) - H·S(k|k-1) is the innovation sequence, Γ is the variance matrix of the observation noise, and Ψ is the identity matrix of the same order.
As a further preferred embodiment, the present invention improves the aforementioned Sage-Husa filtering by adding a correction step for the covariance matrix P(k|k-1): the statistics of the process noise and observation noise are estimated and corrected in real time, and the translation motion curve is smoothed adaptively. The concrete steps are as follows:
Sub-step 6d, use the property of the innovation sequence ε(k) to judge whether the filter is diverging:
ε(k)^T·ε(k) ≤ γ·Trace[H·P(k|k-1)·H^T + Γ(k)];
where γ is an adjustable coefficient with γ > 1;
Sub-step 6e, when the inequality of sub-step 6d holds, the filter is in its normal working state and the optimal estimate of the current state is obtained directly; when it does not hold, the actual error exceeds γ times the theoretical estimate and the filter is about to diverge; the covariance matrix P(k|k-1) of sub-step 6c is then corrected with the weighting coefficient C(k), and after the correction the adaptive filtering of the motion parameters is completed;
the correction formulas are:
P(k|k-1) = C(k)·F·P(k-1)·F^T + Φ(k),
C(k) = (ε(k)^T·ε(k) - Trace[H·Φ(k)·H^T + Γ(k)]) / Trace[H·F·P(k)·F^T·H^T].
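One predict/update cycle of the filter of step 6, including the divergence test and covariance rescaling of sub-steps 6d-6e, can be sketched as follows. Phi, Gamma and gamma are assumed to be supplied by the caller, and the clamping of C(k) to at least 1 is a practical safeguard added here, not part of the patent.

```python
import numpy as np

F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])          # state transition (sub-step 6b)
Hm = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])         # observation matrix

def filter_step(S, P, z, Phi, Gamma, gamma=1.5):
    """One predict/update cycle with the divergence test of sub-steps 6c-6e."""
    S_pred = F @ S                                      # state prediction
    P_pred = F @ P @ F.T + Phi                          # covariance prediction
    eps = z - Hm @ S_pred                               # innovation sequence

    # sub-step 6d: divergence test on the innovation
    bound = gamma * np.trace(Hm @ P_pred @ Hm.T + Gamma)
    if eps @ eps > bound:
        # sub-step 6e: rescale the predicted covariance with C(k)
        num = eps @ eps - np.trace(Hm @ Phi @ Hm.T + Gamma)
        den = np.trace(Hm @ F @ P @ F.T @ Hm.T)
        C = max(num / den, 1.0)                         # clamp: safeguard, not in patent
        P_pred = C * (F @ P @ F.T) + Phi

    K = P_pred @ Hm.T @ np.linalg.inv(Hm @ P_pred @ Hm.T + Gamma)
    S_new = S_pred + K @ eps                            # state update
    P_new = (np.eye(4) - K @ Hm) @ P_pred               # covariance update
    return S_new, P_new
```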
Step 7, the fast motion compensation step;
The current frame f_k(x, y) is transformed according to the compensation parameters; exploiting the linear storage structure of the image, additions and subtractions of a linear computation replace per-pixel matrix multiplication, realizing fast compensation of the current frame image f_k(x, y);
The concrete sub-steps of step 7 are as follows:
Sub-step 7a, take the differences between the translation components before and after filtering, u_jitter = u - u_filter and v_jitter = v - v_filter, combined with the image rotation angle θ, as the compensation parameters (θ, u_jitter, v_jitter),
where u is the vertical pixel translation, v is the horizontal pixel translation, and u_jitter and v_jitter are the vertical and horizontal jitter components;
Sub-step 7b, use the motion parameter model to compute the rotation result of the first pixel [x, y] of the first row of the current frame f_k(x, y): [x'; y'] = [1 -θ; θ 1][x; y] + [u_jitter; v_jitter];
Sub-step 7c, according to the linear structure of the coordinates, compute the pixels of the remaining rows and columns of the current frame f_k(x, y) by additions and subtractions, obtain the new coordinates [x', y'] of the current frame pixels, and realize the compensation of the current frame.
Step 8, reconstruct the undefined boundary information and obtain the panoramic image;
After the coordinates of the pixels of the current frame f_k(x, y) have been transformed with the compensation parameters (θ, u_jitter, v_jitter) of step 7, take the reference frame f_{k-1}(x, y) as the initial panorama and use image mosaicking to fuse the reference frame and the current frame; determine the gray value of each pixel (x', y') of the fused image according to the image fade-in/fade-out strategy to obtain the compensated image f(x', y') and realize the panoramic image output:
f(x', y') = f_{k-1}(x', y') if (x', y') ∈ f_{k-1} only; τ·f_{k-1}(x', y') + ξ·f_k(x', y') if (x', y') ∈ (f_{k-1} ∩ f_k); f_k(x', y') if (x', y') ∈ f_k only;
in the above formula τ and ξ denote weights given by the ratio of the pixel's positional difference from the boundary point to the width of the overlap region of the current frame and the reference frame, with τ + ξ = 1 and 0 < τ, ξ < 1; within the overlap region τ decreases gradually from 1 to 0 and ξ increases gradually from 0 to 1. The overlap region thus transitions smoothly from the reference frame f_{k-1}(x, y) to the current frame f_k(x, y), making the blended image look natural, leaving the observation of the whole video unaffected and improving visual completeness.
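As a simple reading of the fade-in/fade-out fusion of step 8, the sketch below blends two images already placed on a common canvas, assuming the overlap runs horizontally (as in a left-right mosaic); the masks and names are illustrative.

```python
import numpy as np

def blend(prev, warped, mask_prev, mask_cur):
    """Fade-in/fade-out fusion of step 8 on a shared canvas.

    prev, warped: reference frame and compensated current frame on the canvas
    mask_prev, mask_cur: boolean masks of where each image is defined
    """
    out = np.where(mask_prev, prev, 0.0) + np.where(mask_cur & ~mask_prev, warped, 0.0)
    overlap = mask_prev & mask_cur
    cols = np.where(overlap.any(axis=0))[0]
    if cols.size:
        x0, x1 = cols.min(), cols.max() + 1
        # tau falls from 1 to 0 across the overlap width, xi = 1 - tau
        tau = np.linspace(1.0, 0.0, x1 - x0)[None, :]
        mix = tau * prev[:, x0:x1] + (1.0 - tau) * warped[:, x0:x1]
        region = overlap[:, x0:x1]
        out[:, x0:x1] = np.where(region, mix, out[:, x0:x1])
    return out
```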
The effect of the present invention can be illustrated by the following experiments.
According to human visual perception of motion, the human eye cannot observe the difference of a single pixel but judges motion from the continuous change of a whole region. As shown in Fig. 3a and Fig. 3b, Fig. 3a is the reference frame image with the motion sub-blocks marked and Fig. 3b is the current frame image with the motion sub-blocks marked; in both figures the "+" symbol indicates that the sub-block is a motion sub-block.
Spatial-domain similarity analysis is performed on the 8 neighboring blocks of each motion sub-block in the reference frame image: if no fewer than 3 motion sub-blocks exist around a block, the block is similar to its neighborhood and they all belong to the foreground motion region; otherwise the block is judged to be a relatively isolated residual block in the background and is deleted. The candidate foreground sub-blocks retained after deletion are shown in Fig. 3c.
Temporal-domain similarity analysis is then performed on the motion sub-blocks of the reference frame image: if a motion sub-block appears at a similar 8-neighborhood position in the subsequent current frame, it is judged to be real motion; otherwise it is regarded as an occasional false detection and is deleted. The finally retained sub-blocks form the detected moving foreground region, as shown in Fig. 3d.
With reference to Figs. 4a to 4d, which show the process of globally salient feature point extraction and feature point pair matching:
Fig. 4a shows all feature points extracted from the sub-blocks of the reference frame image. The feature points are distributed evenly over the whole image; some have been selected on the moving target, and a large number of others lie in background regions of repeated texture (such as the sky and the ground). Both kinds of feature points cause mismatches and thus reduce the precision of global motion estimation.
Fig. 4b shows the result of salient feature point extraction: the top 20% of feature points by feature response are retained, improving the uniqueness of the feature information.
Fig. 4c shows the result after removing mismatch-prone feature points: combining the experimental result of Fig. 3d, each feature point position is checked for lying in or near the foreground motion or unreliable region and deleted if so, while the globally salient feature points of the background region are retained, which favors correct matching of the feature points.
Fig. 4d shows the registered corresponding feature points in the current frame image; the feature points are registered successfully, and the processing can run in real time.
With reference to Figs. 5a and 5b, the principle of the linear fast compensation of the image mentioned above is explained as follows:
The linear storage structure of the image guarantees that the relative positions between pixels are rotation-invariant. Fig. 5a shows the pixel positions before image rotation. The image is defined on the rectangular domain ABCD; an arbitrary pixel E(x, y), the first-row pixel E1(x_1, y_1) in the same column, the first-column pixel E2(x_2, y_2) in the same row, and the first pixel A(x_A, y_A) of the first row are geometrically the four vertices of a rectangle. In any coordinate system, the vertex coordinate E(x, y) of the rectangle can be determined from the other three vertex coordinates:
x = x_1 + (x_2 - x_A), y = y_1 + (y_2 - y_A)    (1)
The rotation transform is linear, and the shape of the rectangle does not change under rotation. Fig. 5b shows the pixel positions after image rotation; the four vertices of the rectangle become A'(x'_A, y'_A), E1'(x'_1, y'_1), E2'(x'_2, y'_2) and E'(x', y'), and their coordinates still satisfy the relation:
x' = x'_1 + (x'_2 - x'_A), y' = y'_1 + (y'_2 - y'_A)    (2)
Therefore, from the rotation results of the pixels of the first row and the first column of the image, the rotation results of all other pixels can be computed.
The concrete steps are: apply the coordinate transform of the similarity model to the pixels of the first row and the first column, the model being [x'; y'] = [1 -θ; θ 1][x; y] + [u; v]; the pixels of the remaining rows and columns are then obtained from formula (2) by additions and subtractions. This avoids a matrix multiplication for every pixel of the whole image, effectively saving computation time and improving the efficiency of the coordinate computation.
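The saving claimed here is easy to see in code: only the first row and first column are transformed with the full model, and every other coordinate follows from equation (2) by additions. The sketch below, with illustrative names, assumes pixel coordinates (row p, column q) and transforms coordinates rather than intensities.

```python
import numpy as np

def fast_warp_coords(H, W, theta, u, v):
    """New coordinates [x', y'] for every pixel using only the first row and
    first column (step 7, equations (1)-(2))."""
    R = np.array([[1.0, -theta], [theta, 1.0]])
    t = np.array([u, v])

    # Full model applied to the first row (0, q) and first column (p, 0) only
    first_row = (R @ np.stack([np.zeros(W), np.arange(W, dtype=float)])).T + t
    first_col = (R @ np.stack([np.arange(H, dtype=float), np.zeros(H)])).T + t
    corner = first_row[0]                       # transform of pixel (0, 0)

    # Equation (2): E' = E1' + (E2' - A'), pure additions for all other pixels
    coords = first_col[:, None, :] + (first_row[None, :, :] - corner)
    return coords                               # (H, W, 2) new coordinates
```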
With reference to Figs. 6a and 6b, taking the filtering of the horizontal and vertical offset components as an example, a comparison is given between the adaptive filtering of the preferred embodiment of the method of the invention and the Kalman filtering of the prior art. Because there is a nearly uniform camera scanning motion in the horizontal direction, the experimental curve of Fig. 6a rises steadily; the camera only jitters randomly in the vertical direction, so the experimental curve of Fig. 6b fluctuates around position 0. The figures show that the choice of the process noise Q in Kalman filtering strongly affects the compensation result: when Q is large, the filtered curve stays close to the original curve and there is no obvious filtering effect; when Q is small, a smooth filtered curve is obtained, but as observed in Fig. 6b, after the original jitter ends at frame 52 the filtering result departs from the zero vector, causing filter divergence. The adaptive filtering method of the invention effectively avoids this divergence phenomenon, smooths the jitter component well, and at the same time effectively tracks the real scanning motion of the camera.
In summary, the method of the invention first proposes a detection and registration method for globally salient feature points in moving scenes containing moving targets, improving the speed and precision of global motion estimation; it then addresses the jitter that occurs during intentional camera scanning by improving Sage-Husa filtering so as to distinguish scanning from jitter adaptively, tracking the real scanned scene while smoothing the motion; finally, it avoids the time-consuming per-pixel matrix multiplication of the image transform by proposing a linear computation based on the linear storage structure of the image to realize fast compensation, and reconstructs the boundary lost during compensation through image fusion, eliminating or alleviating the instability of the video sequence and improving the observation effect of a video surveillance or tracking system.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Anyone skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make small changes or modifications into equivalent embodiments of equivalent variation; any simple modification, equivalent variation or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.

Claims (5)

1. the electronic image stabilization method based on vision noticing mechanism, is characterized in that, comprises the following steps:
Step 1, reference frame is carried out to foreground moving zone detect, mark foreground moving sub-block step;
Sub-step 1a, one section sequential frame image of video sequence is averaged, obtain background image B (x, y), x, y mean x axle and the y axial coordinate of pixel;
Sub-step 1b, definition k-1 image constantly are reference frame f K-1(x, y), k image constantly is present frame f k(x, y), calculate respectively the difference image of they and background image B (x, y):
Reference frame difference image D K-1(x, y)=abs[f K-1(x, y)-B (x, y)],
Present frame difference image D k(x, y)=abs[f k(x, y)-B (x, y)];
Sub-step 1c, with reference to frame difference image D K-1(x, y) and present frame difference image D k(x, y) is divided into respectively M * N sub-block of non-overlapping copies, and described sub-block is of a size of I * J pixel, calculates the mean absolute error in each sub-block:
Reference frame image sub-block mean absolute error
Figure FDA00003484644700011
Current frame image sub-block mean absolute error
Figure FDA00003484644700012
Wherein, i=1 ..., I, j=1 ..., J, m=1 ..., M, n=1 ..., N;
Sub-step 1d, computing reference frame sub-block difference mean value and present frame sub-block difference mean value, respectively as threshold value Th1 and Th2:
Th1=∑B k-1(m,n)/(M×N),
Th2=∑B k(m,n)/(M×N);
Sub-step 1e: preliminarily judge by binarization whether each sub-block is a motion sub-block, defining MO_{k-1}(m, n) as a reference frame motion sub-block and MO_k(m, n) as a current frame motion sub-block, with the decision conditions:
MO_{k-1}(m, n) = 1 if B_{k-1}(m, n) > Th1, and 0 otherwise;
MO_k(m, n) = 1 if B_k(m, n) > Th2, and 0 otherwise;
Sub-step 1f: perform spatial-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n), and delete the sub-blocks that do not belong to the moving foreground;
Sub-step 1g: perform temporal-domain similarity detection on the reference frame motion sub-blocks MO_{k-1}(m, n), and delete the sub-blocks that do not belong to the moving foreground;
after the spatial-domain and temporal-domain similarity detection, the finally retained motion sub-blocks constitute the moving foreground region;
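Purely for illustration, the following is a minimal NumPy sketch of sub-steps 1a-1e; the function name, the default 16 × 16 block size, and a grayscale float frame stack `frames` of shape (T, H, W) are assumptions, not part of the claim:

    import numpy as np

    def foreground_motion_blocks(frames, k, I=16, J=16):
        # Sub-step 1a: average a run of consecutive frames as the background image.
        B = frames.mean(axis=0)
        # Sub-step 1b: absolute difference images of reference and current frames.
        D_ref = np.abs(frames[k - 1] - B)
        D_cur = np.abs(frames[k] - B)
        H, W = B.shape
        M, N = H // I, W // J
        # Sub-step 1c: mean absolute error of each non-overlapping I x J sub-block.
        block_mae = lambda D: D[:M * I, :N * J].reshape(M, I, N, J).mean(axis=(1, 3))
        B_ref, B_cur = block_mae(D_ref), block_mae(D_cur)
        # Sub-step 1d: sub-block difference means as thresholds Th1 and Th2.
        Th1, Th2 = B_ref.mean(), B_cur.mean()
        # Sub-step 1e: binarize into preliminary motion sub-block masks.
        return (B_ref > Th1).astype(np.uint8), (B_cur > Th2).astype(np.uint8)

The spatial and temporal similarity checks of sub-steps 1f-1g (detailed in claims 3 and 4) are then applied to these preliminary masks.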
Step 2: extract the globally salient feature points in the reference frame;
Sub-step 2a: compute the gradient images of the reference frame f_{k-1}(x, y) using:
X = f_{k-1} ⊗ (-1, 0, 1), Y = f_{k-1} ⊗ (-1, 0, 1)^T;
where ⊗ denotes convolution, X denotes the gradient image in the horizontal direction, Y denotes the gradient image in the vertical direction, and [·]^T denotes matrix transposition;
Sub-step 2b: construct the autocorrelation matrix R:
R = [X² ⊗ w, XY ⊗ w; XY ⊗ w, Y² ⊗ w];
where w = exp(-(x² + y²)/(2σ²)) is the Gaussian smoothing window function and σ is the standard deviation of the window function;
Sub-step 2c: compute the Harris corner response R_H:
R_H = λ₁ × λ₂ - 0.05·(λ₁ + λ₂)²;
where λ₁ and λ₂ are the two eigenvalues of the autocorrelation matrix R;
Sub-step 2d: divide the reference frame f_{k-1}(x, y) into M × N non-overlapping sub-blocks of size I × J pixels, and take the maximum Harris corner response within each sub-block of f_{k-1}(x, y) as the characteristic response value R_HMAX(m, n) of that sub-block;
Sub-step 2e: sort the characteristic response values R_HMAX(m, n) from high to low, take the top 20% of values, and record the positions corresponding to these characteristic response values as the reference frame feature points (x_i, y_i);
Sub-step 2f: use the result of sub-step 1g to screen the reference frame feature points (x_i, y_i): judge whether the reference frame motion sub-block MO_{k-1}(m, n) corresponding to a feature point, or any of its 8 surrounding neighbors, equals 1; if so, the feature point belongs to a moving target or lies in the unreliable region of a motion boundary, and the feature point is deleted;
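A sketch of step 2 under stated assumptions: SciPy is used for the convolutions, the σ default and function name are hypothetical, and sub-step 2f is approximated by testing the 3 × 3 block neighborhood of the motion mask `MO_ref` from the previous sketch:

    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter

    def salient_feature_points(f_ref, MO_ref, I=16, J=16, sigma=1.0, keep=0.2):
        # Sub-step 2a: horizontal and vertical gradients via (-1, 0, 1) kernels.
        X = convolve(f_ref, np.array([[-1.0, 0.0, 1.0]]))
        Y = convolve(f_ref, np.array([[-1.0], [0.0], [1.0]]))
        # Sub-step 2b: Gaussian-smoothed entries of the autocorrelation matrix R.
        A = gaussian_filter(X * X, sigma)
        C = gaussian_filter(Y * Y, sigma)
        D = gaussian_filter(X * Y, sigma)
        # Sub-step 2c: R_H = det(R) - 0.05*trace(R)^2 = l1*l2 - 0.05*(l1 + l2)^2.
        R_H = A * C - D * D - 0.05 * (A + C) ** 2
        H, W = f_ref.shape
        M, N = H // I, W // J
        blocks = R_H[:M * I, :N * J].reshape(M, I, N, J).transpose(0, 2, 1, 3).reshape(M, N, -1)
        R_max, flat = blocks.max(axis=2), blocks.argmax(axis=2)        # sub-step 2d
        points = []
        for idx in np.argsort(R_max, axis=None)[::-1][: int(keep * M * N)]:  # sub-step 2e: top 20%
            m, n = divmod(int(idx), N)
            m0, n0 = max(m - 1, 0), max(n - 1, 0)
            if MO_ref[m0:m + 2, n0:n + 2].any():   # sub-step 2f: drop points in/near motion blocks
                continue
            di, dj = divmod(int(flat[m, n]), J)
            points.append((m * I + di, n * J + dj))
        return points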
Step 3: feature point pair matching;
Sub-step 3a: in the reference frame f_{k-1}(x, y), build a window of size P × Q pixels centered on the reference frame feature point (x_i, y_i);
Sub-step 3b: using a full-search strategy and the minimum sum of absolute differences (SAD) criterion, find the corresponding matching window in the current frame f_k(x, y) within a search area of (P+2T) × (Q+2T) pixels; the center point of the matching window is the current frame matched feature point (x̂_i, ŷ_i), where T denotes the maximum pixel offset in the horizontal and vertical directions, and the SAD criterion is computed as:
SAD(x, y) = Σ_{p=1}^{P} Σ_{q=1}^{Q} |f_{k-1}(p, q) - f_k(p + x, q + y)|, p = 1, …, P, q = 1, …, Q, x, y = -T, …, T;
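A minimal sketch of the full-search SAD matching of step 3, assuming float grayscale frames and a feature point far enough from the image border that every candidate window stays in bounds (function name and defaults are hypothetical):

    import numpy as np

    def match_point(f_ref, f_cur, xi, yi, P=16, Q=16, T=7):
        # P x Q reference window centered on the feature point (sub-step 3a).
        ref = f_ref[xi - P // 2: xi + P // 2, yi - Q // 2: yi + Q // 2]
        best_sad, best = np.inf, (0, 0)
        # Full search over all offsets up to +/- T pixels (sub-step 3b).
        for dx in range(-T, T + 1):
            for dy in range(-T, T + 1):
                cand = f_cur[xi + dx - P // 2: xi + dx + P // 2,
                             yi + dy - Q // 2: yi + dy + Q // 2]
                sad = np.abs(ref - cand).sum()        # least-error SAD criterion
                if sad < best_sad:
                    best_sad, best = sad, (dx, dy)
        return xi + best[0], yi + best[1]             # matched point in the current frame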
Step 4: reject the mismatched feature point pairs;
according to the Euclidean distance formula d_i = √((x̂_i - x_i)² + (ŷ_i - y_i)²), compute for the i-th feature point pair of the reference frame and current frame the distance of the translation in the horizontal and vertical directions; verify the matched feature point pairs by distance using the normal distribution characteristics of the distances, reject the mismatched feature point pairs, and obtain C correctly matched feature point pairs;
Step 5: obtain the motion parameters;
Sub-step 5a: establish the motion parameter model describing the relation between the reference frame feature points (x_i, y_i) and the current frame matched feature points (x̂_i, ŷ_i):
[x̂; ŷ] = [1, -θ; θ, 1][x; y] + [u; v],
where θ is the image rotation angle, u is the pixel vertical translation, v is the pixel horizontal translation, and θ, u, and v constitute the motion parameters;
Sub-step 5b: substitute the C correctly matched feature point pairs into the motion parameter model and arrange the motion parameter matrix equation with:
B = [x̂₁, ŷ₁, x̂₂, ŷ₂, …, x̂_C, ŷ_C]^T, A = [x₁, y₁, 1; x₂, y₂, 1; …; x_C, y_C, 1], m = [θ, u, v]^T;
Sub-step 5c: solve the overdetermined linear equation B = Am; the least-squares solution of the motion parameter matrix m is m = (A^T A)^{-1} A^T B, thereby obtaining the motion parameters;
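A sketch of the least-squares solve of sub-steps 5b-5c. The claim's stacked B = Am layout is rewritten here with both coordinate equations expanded row by row (an equivalent rearrangement of the same model; the function name is hypothetical):

    import numpy as np

    def estimate_motion(ref_pts, cur_pts):
        rows_A, rows_B = [], []
        for (x, y), (xh, yh) in zip(ref_pts, cur_pts):
            # x_hat - x = -theta*y + u ;  y_hat - y = theta*x + v
            rows_A.append([-y, 1.0, 0.0]); rows_B.append(xh - x)
            rows_A.append([x, 0.0, 1.0]);  rows_B.append(yh - y)
        A, B = np.array(rows_A), np.array(rows_B)
        # Least-squares solution m = (A^T A)^-1 A^T B of the overdetermined system.
        theta, u, v = np.linalg.lstsq(A, B, rcond=None)[0]
        return theta, u, v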
Step 6: motion filtering;
Sub-step 6a: let the state vector be S(k) = [u(k), v(k), du(k), dv(k)]^T and the measurement vector be Z(k) = [u(k), v(k)]^T, where u(k) is the pixel vertical translation at time k, v(k) is the pixel horizontal translation at time k, du(k) is the instantaneous velocity corresponding to the pixel vertical translation at time k, and dv(k) is the instantaneous velocity corresponding to the pixel horizontal translation at time k;
Sub-step 6b: establish the linear discrete system model and obtain the state equation and observation equation:
state equation: S(k) = F·S(k-1) + δ,
observation equation: Z(k) = H·S(k) + η;
where F = [1, 0, 1, 0; 0, 1, 0, 1; 0, 0, 1, 0; 0, 0, 0, 1] is the state transition matrix, H = [1, 0, 0, 0; 0, 1, 0, 0] is the observation matrix, δ and η are mutually independent white noises with δ ~ N(0, Φ) and η ~ N(0, Γ), Φ is the process noise variance matrix, and Γ is the observation noise variance matrix;
Sub-step 6c: establish the system state prediction equation, predict and update its covariance matrix, and complete the motion filtering:
system state prediction equation: S(k|k-1) = F·S(k-1|k-1);
prediction of the covariance matrix P(k|k-1) of S(k|k-1): P(k|k-1) = F·P(k-1)·F^T + Φ(k-1), where Φ is the process noise variance matrix;
system state update equation: S(k|k) = S(k|k-1) + K_g(k)·ε(k);
update of the filtering variance matrix of S(k|k) at time k: P(k|k) = (Ψ - K_g(k)·H)·P(k|k-1);
where K_g(k) = P(k|k-1)·H^T·(H·P(k|k-1)·H^T + Γ(k))^{-1} is the Kalman gain, ε(k) = Z(k) - H·S(k|k-1) is the innovation sequence, Γ is the observation noise variance matrix, and Ψ is the identity matrix of the same order;
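A minimal sketch of one predict/update cycle of sub-steps 6b-6c on the state [u, v, du, dv]^T (function and variable names are hypothetical; Phi and Gamma are the noise variance matrices supplied by the caller):

    import numpy as np

    F = np.array([[1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])      # state transition matrix (constant velocity)
    Hm = np.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.]])     # observation matrix H

    def kalman_step(S, P, z, Phi, Gamma):
        S_pred = F @ S                                   # state prediction
        P_pred = F @ P @ F.T + Phi                       # covariance prediction
        eps = z - Hm @ S_pred                            # innovation sequence
        Kg = P_pred @ Hm.T @ np.linalg.inv(Hm @ P_pred @ Hm.T + Gamma)
        S_new = S_pred + Kg @ eps                        # state update
        P_new = (np.eye(4) - Kg @ Hm) @ P_pred           # covariance update
        return S_new, P_new, eps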
Step 7: fast motion compensation;
Sub-step 7a: take the differences between the translational motion components before and after filtering, u_jitter = u - u_filter and v_jitter = v - v_filter, combined with the image rotation angle θ, as the compensation parameters (θ, u_jitter, v_jitter);
where u is the pixel vertical translation, v is the pixel horizontal translation, u_jitter is the vertical translation jitter component remaining after filtering, and v_jitter is the horizontal translation jitter component remaining after filtering;
Sub-step 7b: use the motion parameter model to compute the rotation result [x′, y′] of the first pixel [x, y] of the first row of the current frame f_k(x, y):
[x′; y′] = [1, -θ; θ, 1][x; y] + [u_jitter; v_jitter];
Sub-step 7c: according to the linear structure of the image coordinates, compute the pixels of the remaining rows and columns of the current frame f_k(x, y) by additions and subtractions, obtain the new coordinates [x′, y′] of the current frame pixels, and realize the compensation of the current frame;
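A sketch of sub-steps 7b-7c under the assumption that (x, y) = (row, column): once the first pixel is transformed exactly, stepping along a row or column changes (x′, y′) by a fixed increment, so all other coordinates follow from additions alone, avoiding per-pixel matrix products:

    import numpy as np

    def fast_coords(H, W, theta, u_jitter, v_jitter):
        # Sub-step 7b: exact transform of the first pixel [0, 0] of the first row.
        x0, y0 = u_jitter, v_jitter
        # Sub-step 7c: one column step adds (-theta, 1); one row step adds (1, theta).
        rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
        x_new = x0 + rows * 1.0 + cols * (-theta)   # x' = x - theta*y + u_jitter
        y_new = y0 + rows * theta + cols * 1.0      # y' = theta*x + y + v_jitter
        return x_new, y_new

The vectorized arithmetic here plays the role of the claim's linear scan over the image's storage order: each coordinate is an accumulated sum rather than a fresh rotation.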
Step 8: reconstruct the undefined boundary information and obtain the panoramic image;
taking the reference frame f_{k-1}(x, y) as the initial panorama, use the image mosaicking technique to fuse the reference frame and the current frame; the gray value of each pixel (x′, y′) of the fused image is determined according to a gradual fade-in/fade-out strategy, giving the compensated image f(x′, y′) and realizing panoramic image output:
f(x′, y′) = f_{k-1}(x′, y′) if (x′, y′) ∈ f_{k-1};
f(x′, y′) = τ·f_{k-1}(x′, y′) + ξ·f_k(x′, y′) if (x′, y′) ∈ (f_{k-1} ∩ f_k);
f(x′, y′) = f_k(x′, y′) if (x′, y′) ∈ f_k;
in the above formula, τ and ξ denote weights representing the ratio of the pixel's relative position, i.e., the positional difference between the pixel and the boundary point, to the width of the overlap region, with τ + ξ = 1 and 0 < τ, ξ < 1; within the overlap region, τ changes gradually from 1 to 0 while ξ changes gradually from 0 to 1.
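A sketch of the fade-in/fade-out fusion for the simplified case of a purely horizontal overlap band [x_left, x_right) between two aligned frames (the 1-D overlap geometry and function name are assumptions; the claim's overlap region is general):

    import numpy as np

    def blend_overlap(f_prev, f_cur, x_left, x_right):
        out = f_prev.astype(float).copy()          # region covered only by f_{k-1}
        width = float(x_right - x_left)
        for x in range(x_left, x_right):
            tau = 1.0 - (x - x_left) / width       # tau fades from 1 to 0 across the overlap
            xi = 1.0 - tau                         # xi fades from 0 to 1, tau + xi = 1
            out[:, x] = tau * f_prev[:, x] + xi * f_cur[:, x]
        out[:, x_right:] = f_cur[:, x_right:]      # region covered only by the current frame
        return out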
2. The electronic image stabilization method based on a visual attention mechanism according to claim 1, characterized in that step 6 further comprises a correction step for the covariance matrix, carried out after sub-step 6c:
Sub-step 6d: use the properties of the innovation sequence ε(k) to judge whether the filtering diverges: ε(k)^T·ε(k) ≤ γ·Trace[H·P(k|k-1)·H^T + Γ(k)];
where γ is an adjustable coefficient with γ > 1;
Sub-step 6e: when the inequality in sub-step 6d holds, the filter is in a normal working state and the optimal estimate of the current state is obtained directly; when it does not hold, the actual error exceeds γ times the theoretical estimate and the filtering will diverge, so the covariance matrix P(k|k-1) in sub-step 6c is corrected by a weighting coefficient C(k), completing the adaptive filtering of the motion parameters after the correction;
the correction formulas are as follows:
P(k|k-1) = C(k)·F·P(k-1)·F^T + Φ(k),
C(k) = (ε(k)^T·ε(k) - Trace[H·Φ(k)·H^T + Γ(k)]) / Trace[H·F·P(k)·F^T·H^T].
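A sketch of the divergence test and covariance correction of sub-steps 6d-6e; F and Hm are the matrices from sub-step 6b (passed in explicitly here), and the default γ = 1.5 is only an assumption since the claim merely requires γ > 1:

    import numpy as np

    def corrected_p_pred(eps, P_prev, Phi, Gamma, F, Hm, gamma=1.5):
        P_pred = F @ P_prev @ F.T + Phi
        # Sub-step 6d: divergence test on the innovation sequence.
        if eps @ eps <= gamma * np.trace(Hm @ P_pred @ Hm.T + Gamma):
            return P_pred                          # filter healthy: no correction needed
        # Sub-step 6e: weighting coefficient C(k) and corrected covariance.
        C = (eps @ eps - np.trace(Hm @ Phi @ Hm.T + Gamma)) \
            / np.trace(Hm @ F @ P_prev @ F.T @ Hm.T)
        return C * (F @ P_prev @ F.T) + Phi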
3. The electronic image stabilization method based on a visual attention mechanism according to claim 1 or 2, characterized in that the spatial-domain similarity detection of the reference frame motion sub-blocks MO_{k-1}(m, n) in sub-step 1f specifically comprises: counting the number of motion sub-blocks among the 8 neighbors of the reference frame motion sub-block MO_{k-1}(m, n); if the number of motion sub-blocks is less than 3, the motion sub-block is an isolated sub-block that merely differs greatly from the background, does not belong to the moving foreground, and is deleted; otherwise the sub-block is similar to its neighboring blocks and they all belong to the foreground motion region:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_{k-1}(m ± l, n ± l) ≥ 3, and 0 otherwise, where l is a variable taking the values 0 and 1.
4. The electronic image stabilization method based on a visual attention mechanism according to claim 1 or 2, characterized in that the temporal-domain similarity detection of the reference frame motion sub-blocks MO_{k-1}(m, n) in sub-step 1g specifically comprises: judging whether a motion sub-block exists at the current frame motion sub-block MO_k(m, n) or among its 8 surrounding neighbors; if so, the target is continuous in time and is a real moving foreground; otherwise it is regarded as an occasional false detection and is deleted; the finally retained motion sub-blocks constitute the moving foreground region:
MO_{k-1}(m, n) = 1 if Σ_{l=0}^{1} MO_k(m ± l, n ± l) ≥ 1, and 0 otherwise, where l is a variable taking the values 0 and 1.
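A sketch combining the spatial check of claim 3 and the temporal check of claim 4 on the block masks from the step 1 sketch, assuming SciPy is available and zero padding at the block-grid border (border handling is not specified by the claims):

    import numpy as np
    from scipy.ndimage import convolve

    def similarity_filter(MO_ref, MO_cur):
        eight = np.ones((3, 3)); eight[1, 1] = 0    # 8-neighborhood kernel
        # Claim 3: count moving blocks among the 8 neighbors in the reference frame.
        n_spatial = convolve(MO_ref.astype(float), eight, mode='constant')
        # Claim 4: any moving block in the 3 x 3 neighborhood of the current frame.
        n_temporal = convolve(MO_cur.astype(float), np.ones((3, 3)), mode='constant')
        keep = (n_spatial >= 3) & (n_temporal >= 1)
        return (MO_ref.astype(bool) & keep).astype(np.uint8)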
5. The electronic image stabilization method based on a visual attention mechanism according to claim 1 or 2, characterized in that the distance verification of the matched feature point pairs in step 4 specifically comprises: judging whether the distance d_i of the translation in the horizontal and vertical directions of the i-th feature point pair of the reference frame and the current frame satisfies the condition:
|d_i - μ| > 3σ, where μ and σ are the mean and standard deviation of d_i, respectively;
when the above condition is satisfied, the feature point pair is regarded as a mismatched feature point pair and is rejected.
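A minimal sketch of the 3σ distance check of step 4 / claim 5 (hypothetical function name; `ref_pts` and `cur_pts` are the matched coordinate lists):

    import numpy as np

    def reject_mismatches(ref_pts, cur_pts):
        ref, cur = np.asarray(ref_pts, float), np.asarray(cur_pts, float)
        d = np.linalg.norm(cur - ref, axis=1)        # Euclidean displacement of each pair
        mu, sigma = d.mean(), d.std()
        keep = np.abs(d - mu) <= 3 * sigma           # reject pairs with |d_i - mu| > 3*sigma
        return ref[keep], cur[keep]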
CN201310287353.XA 2013-07-09 2013-07-09 Electronic image stabilization method based on visual attention mechanism Active CN103426182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310287353.XA CN103426182B (en) 2013-07-09 2013-07-09 Electronic image stabilization method based on visual attention mechanism

Publications (2)

Publication Number Publication Date
CN103426182A 2013-12-04
CN103426182B 2016-01-06

Family

ID=49650872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310287353.XA Active CN103426182B (en) Electronic image stabilization method based on visual attention mechanism

Country Status (1)

Country Link
CN (1) CN103426182B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135303A1 (en) * 2007-11-28 2009-05-28 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program
CN101316368A (en) * 2008-07-18 2008-12-03 西安电子科技大学 Full view stabilizing method based on global characteristic point iteration
CN101729763A (en) * 2009-12-15 2010-06-09 中国科学院长春光学精密机械与物理研究所 Electronic image stabilizing method for digital videos
CN103024247A (en) * 2011-09-28 2013-04-03 中国航天科工集团第二研究院二〇七所 Electronic image stabilization method based on improved block matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU, Juanjuan: "Electronic Image Stabilization Theory and Its Applications", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905826A (en) * 2014-04-10 2014-07-02 北京工业大学 Self-adaptation global motion estimation method
CN104853064A (en) * 2015-04-10 2015-08-19 海视英科光电(苏州)有限公司 Electronic image-stabilizing method based on infrared thermal imager
CN104853064B (en) * 2015-04-10 2018-04-17 海视英科光电(苏州)有限公司 Electronic image stabilization method based on thermal infrared imager
CN105263026A (en) * 2015-10-12 2016-01-20 西安电子科技大学 Global vector acquisition method based on probability statistics and image gradient information
CN105263026B (en) * 2015-10-12 2018-04-17 西安电子科技大学 Global vector acquisition methods based on probability statistics and image gradient information
CN105758867A (en) * 2016-03-11 2016-07-13 伍祥辰 High-speed microscopic defect review method
CN106375659B (en) * 2016-06-06 2019-06-11 中国矿业大学 Electronic image stabilization method based on multiresolution Gray Projection
CN106375659A (en) * 2016-06-06 2017-02-01 中国矿业大学 Electronic image stabilization method based on multi-resolution gray projection
CN106128007A (en) * 2016-08-26 2016-11-16 宁波圣达精工智能科技有限公司 Intelligent compact shelf controls guard system
CN106210447A (en) * 2016-09-09 2016-12-07 长春大学 Video image stabilization method based on background characteristics Point matching
CN106210447B (en) * 2016-09-09 2019-05-14 长春大学 Based on the matched video image stabilization method of background characteristics point
CN106357958A (en) * 2016-10-10 2017-01-25 山东大学 Region-matching-based fast electronic image stabilization method
CN106357958B (en) * 2016-10-10 2019-04-16 山东大学 A kind of swift electron digital image stabilization method based on Region Matching
CN106846297A (en) * 2016-12-21 2017-06-13 深圳市镭神智能系统有限公司 Pedestrian's flow quantity detecting system and method based on laser radar
CN108881668A (en) * 2017-06-02 2018-11-23 北京旷视科技有限公司 Video increases steady method, apparatus, system and computer-readable medium
CN107197121A (en) * 2017-06-14 2017-09-22 长春欧意光电技术有限公司 A kind of electronic image stabilization method based on on-board equipment
CN107197121B (en) * 2017-06-14 2019-07-26 长春欧意光电技术有限公司 A kind of electronic image stabilization method based on on-board equipment
CN107423409A (en) * 2017-07-28 2017-12-01 维沃移动通信有限公司 A kind of image processing method, image processing apparatus and electronic equipment
CN107423409B (en) * 2017-07-28 2020-03-31 维沃移动通信有限公司 Image processing method, image processing device and electronic equipment
CN107578428A (en) * 2017-08-31 2018-01-12 成都观界创宇科技有限公司 Method for tracking target and panorama camera applied to panoramic picture
CN108174087A (en) * 2017-12-26 2018-06-15 北京理工大学 A kind of steady reference frame update method and the system as in of Gray Projection
CN108174087B (en) * 2017-12-26 2019-07-02 北京理工大学 A kind of steady reference frame update method and system as in of Gray Projection
CN108492328A (en) * 2018-03-23 2018-09-04 云南大学 Video interframe target matching method, device and realization device
CN108492328B (en) * 2018-03-23 2021-02-26 云南大学 Video inter-frame target matching method and device and implementation device
CN108765532A (en) * 2018-05-04 2018-11-06 北京物灵智能科技有限公司 Children paint this method for establishing model, reading machine people and storage device
CN108765532B (en) * 2018-05-04 2023-08-22 卢卡(北京)智能科技有限公司 Child drawing model building method, reading robot and storage device
US11490010B2 (en) 2018-12-18 2022-11-01 Arashi Vision Inc. Panoramic video anti-shake method and portable terminal
CN109561253A (en) * 2018-12-18 2019-04-02 深圳岚锋创视网络科技有限公司 A kind of method, apparatus and portable terminal of panoramic video stabilization
CN109816006A (en) * 2019-01-18 2019-05-28 深圳大学 A kind of sea horizon detection method, device and computer readable storage medium
CN109816006B (en) * 2019-01-18 2020-11-13 深圳大学 Sea-sky-line detection method and device and computer-readable storage medium
CN109922258B (en) * 2019-02-27 2020-11-03 杭州飞步科技有限公司 Electronic image stabilizing method and device for vehicle-mounted camera and readable storage medium
CN109922258A (en) * 2019-02-27 2019-06-21 杭州飞步科技有限公司 Electronic image stabilization method, device and the readable storage medium storing program for executing of in-vehicle camera
CN110046555A (en) * 2019-03-26 2019-07-23 合肥工业大学 Endoscopic system video image stabilization method and device
CN110120023A (en) * 2019-05-14 2019-08-13 浙江工大盈码科技发展有限公司 A kind of image feedback antidote
CN110473229B (en) * 2019-08-21 2022-03-29 上海无线电设备研究所 Moving object detection method based on independent motion characteristic clustering
CN110473229A (en) * 2019-08-21 2019-11-19 上海无线电设备研究所 A kind of moving target detecting method based on self-movement feature clustering
CN110856014B (en) * 2019-11-05 2023-03-07 北京奇艺世纪科技有限公司 Moving image generation method, moving image generation device, electronic device, and storage medium
CN110856014A (en) * 2019-11-05 2020-02-28 北京奇艺世纪科技有限公司 Moving image generation method, moving image generation device, electronic device, and storage medium
CN115004680A (en) * 2019-12-11 2022-09-02 Lg伊诺特有限公司 Image processing apparatus, image processing method, and program
CN111583151A (en) * 2020-05-09 2020-08-25 浙江大华技术股份有限公司 Video denoising method and device, and computer readable storage medium
CN111583151B (en) * 2020-05-09 2023-05-12 浙江大华技术股份有限公司 Video noise reduction method and device, and computer readable storage medium
CN112633298A (en) * 2020-12-28 2021-04-09 深圳大学 Method for measuring similarity of image/image block
CN112633298B (en) * 2020-12-28 2023-07-18 深圳大学 Method for measuring similarity of image/image block
CN113256679A (en) * 2021-05-13 2021-08-13 湖北工业大学 Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system
WO2022262386A1 (en) * 2021-06-18 2022-12-22 哲库科技(上海)有限公司 Image processing apparatus and method, processing chip, and electronic device

Also Published As

Publication number Publication date
CN103426182B (en) 2016-01-06

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210330

Address after: Floor 5, block D, Boyuan science and Technology Plaza, No.99, Yanxiang Road, Yanta District, Xi'an City, Shaanxi Province, 710000

Patentee after: Xijiao Sichuang Intelligent Technology Research Institute (Xi'an) Co.,Ltd.

Address before: 710071 No. 2 Taibai South Road, Shaanxi, Xi'an

Patentee before: XIDIAN University

TR01 Transfer of patent right
CP02 Change in the address of a patent holder

Address after: Room 709, 7th Floor, Building B, No. 168 Kechuang Road, Yanta District, Xi'an City, Shaanxi Province (Xi'an University of Electronic Science and Technology Science Park), 710071

Patentee after: Xijiao Sichuang Intelligent Technology Research Institute (Xi'an) Co.,Ltd.

Address before: Floor 5, block D, Boyuan science and Technology Plaza, No.99, Yanxiang Road, Yanta District, Xi'an City, Shaanxi Province, 710000

Patentee before: Xijiao Sichuang Intelligent Technology Research Institute (Xi'an) Co.,Ltd.

CP02 Change in the address of a patent holder
CP03 Change of name, title or address

Address after: Room 709, 7th Floor, Building B, No. 168 Kechuang Road, Yanta District, Xi'an City, Shaanxi Province (Xi'an University of Electronic Science and Technology Science Park), 710071

Patentee after: Xihang Sichuang Intelligent Technology (Xi'an) Co.,Ltd.

Country or region after: China

Address before: Room 709, 7th Floor, Building B, No. 168 Kechuang Road, Yanta District, Xi'an City, Shaanxi Province (Xi'an University of Electronic Science and Technology Science Park), 710071

Patentee before: Xijiao Sichuang Intelligent Technology Research Institute (Xi'an) Co.,Ltd.

Country or region before: China

CP03 Change of name, title or address