CN105976330B - Embedded real-time video stabilization method for foggy weather - Google Patents

Embedded real-time video stabilization method for foggy weather

Info

Publication number
CN105976330B
CN105976330B (publication) · CN201610272906.8A (application)
Authority
CN
China
Prior art keywords
image
point
frame
present frame
harris
Prior art date
Legal status
Active
Application number
CN201610272906.8A
Other languages
Chinese (zh)
Other versions
CN105976330A (en)
Inventor
王洪玉
杨梦雯
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology
Priority: CN201610272906.8A
Publication of CN105976330A
Application granted
Publication of CN105976330B
Legal status: Active


Classifications

    • G06T5/73
    • G06T5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an embedded method for real-time video stabilization in foggy weather, solving the problems of jitter and reduced contrast in fog-degraded video. The method of the present invention includes the following steps: A, combine Harris-corner-based video stabilization with a color-restoration Retinex algorithm in an embedded system for processing foggy-weather video; apply, in YUV space, stabilization based on Harris corners and defogging based on a colored multi-scale Retinex algorithm to the images acquired in real time, recovering clear, jitter-free images; B, realize the effective combination of defogging and stabilization in YUV space, and optimize the algorithm for the characteristics of the embedded system. The present invention can effectively eliminate jitter in the captured video, recover clear scene information, and improve the visual quality of the image, while running at real-time speed, thereby providing good application value for real-time monitoring systems.

Description

Embedded real-time video stabilization method for foggy weather
Technical field
The present invention provides an embedded real-time video stabilization method for foggy weather, suitable for jittery video shot in fog, and belongs to the field of image information processing technology.
Background technique
Video shot by outdoor surveillance in fog not only suffers from blur caused by the scattering of fine atmospheric particles, but also from jitter; besides degrading the viewing quality, both problems affect any further processing or feature extraction performed on the video. Fast defogging and de-jitter processing of such video therefore has important practical significance.
Defogging methods fall mainly into processing methods based on image enhancement and image-restoration methods based on a physical model. Restoration-based methods mostly use the atmospheric scattering model proposed by McCartney: by assuming certain prior information, they compensate the degraded image to estimate the pixel values of the fog-free image. Because such methods are based on a physical model of fog degradation, the recovered images look relatively natural; mainstream algorithms include He's dark channel prior and Tarel's median filtering algorithm. Among image-enhancement algorithms, the current mainstream approaches are variants or improvements of the center/surround Retinex algorithm. The Retinex algorithm is based on the color constancy theory: it filters out the low-frequency component with Gaussian filtering and takes the high-frequency component of the image as the enhanced result, which improves contrast and highlights detail; however, because the algorithm stretches the gray values of the whole image, it can disturb the original color ratios, so gray-value stretching has been proposed to mitigate this problem.
Paper: "A new color-preserving color enhancement algorithm", Journal of Computer Applications, 2008. Zhao Quanyou et al. improved the multi-scale Retinex algorithm by introducing a color restoration factor, eliminating the grayish color cast left in the image after MSR processing. Paper: "Retinex night image enhancement algorithm based on the YUV color space", Science Technology and Engineering, 2014. Zhang Hongying et al. proposed a Retinex algorithm in YUV space as an enhancement algorithm for night images.
The two most fundamental problems in electronic image stabilization are motion estimation and motion compensation between frames of the image sequence. Depending on the motion estimation method used, electronic stabilization methods can be divided into the following classes: gray projection, block matching, bit-plane methods, representative-point methods, and feature matching.
Paper: "Electronic image stabilization based on Harris corners and improved Hu moments", Computer Engineering, 2013. That paper uses Harris corners as feature points and computes improved Hu moments of the neighborhood image as the corresponding feature vectors; the transformation model is affine, the compensation parameters are computed by feature-point matching, and a stable video sequence is recovered, effectively handling rotation, translation, and slight scaling in the video. Paper: "Video Stabilization Based on Feature Trajectory Augmentation and Selection and Robust Mesh Grid Warping", IIar Journal, 2015. That paper proposes estimating motion trajectories on a mesh built from robust features, which yields a more stable result but at higher algorithmic complexity.
Patent: "An image analysis and processing method for a water-surface mobile platform vision system", patent No. CN103902972A, 2014. It discloses a method that realizes defogging, stabilization, and recognition of video images in adverse weather such as sea fog; stabilization and recognition are based on SIFT features, and defogging is realized with the dark channel method. Patent: "A video signal preprocessing method for traffic incident monitoring under severe weather conditions", patent No. CN103458156A, 2013. It discloses a processing method for traffic incident monitoring signals under bad weather that realizes defogging and jitter removal under strong wind and fog, using SIFT features for stabilization and histogram equalization for defogging.
More and more video is now shot by mobile devices, but it almost always contains some jitter and is easily affected by outdoor weather during shooting; video shot in fog in particular suffers from blur and reduced contrast. Combining the characteristics of the algorithms above, developing a real-time video stabilization method for foggy weather is therefore of great significance for practical application.
Summary of the invention
The purpose of the present invention is to solve the blur and jitter problems of video shot under adverse weather and to substantially improve its visual quality, thereby providing broader application scenarios for real-time monitoring, video tracking, and stitching.
Technical solution of the present invention:
An embedded real-time video stabilization method for foggy weather, with the following steps:
A. Judge whether the current frame is the first frame. If so, detect the Harris corners in the current frame, select the N strongest Harris corners, store the positions of the Harris corners in Cornerx and Cornery respectively, and pass the output image to step E for processing; if not, proceed directly to step B;
A1. The captured video is in YUV space and the Y-channel pixels are gray values, so Harris corner detection is performed directly on the current frame image; the steps of Harris corner detection are as follows:
(1) Compute the gradients Ix, Iy of the current frame image L(x, y) in the x and y directions using the VLIB library function VLIB_xyGradientsAndMagnitude();
(2) With the gradients Ix, Iy as input, compute the Harris corner score of the image using the VLIB library function VLIB_harrisScore_7x7();
(3) Apply non-maximum suppression in a 3 × 3 neighborhood to the Harris corner scores with VLIB_nonMaxSuppress_3x3_S16(), i.e. delete Harris corners whose score is below the threshold thresh, set to 12000;
(4) Traverse the Harris corners remaining after step (3) and select the N corners whose pairwise Euclidean distance exceeds the 5 × 5 neighborhood, yielding the Harris corners of the current frame image;
A2. Store the abscissas and ordinates of the Harris corners detected in A1 in Cornerx and Cornery;
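The VLIB routines named above are TI DSP library calls. As an illustrative sketch only (not the patent's implementation), the same pipeline — gradients, Harris score, 3 × 3 non-maximum suppression, and minimum-spacing corner selection — can be written in plain NumPy; the function name and defaults here are assumptions:

```python
import numpy as np

def harris_corners(img, k=0.04, thresh_ratio=0.01, win=3, min_dist=5, n_max=256):
    """Harris detection sketch: gradients -> structure tensor -> score ->
    3x3 non-max suppression -> keep strongest corners spaced > min_dist apart."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)                      # step (1): x/y gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):  # windowed sum over a (2*win+1)^2 neighbourhood via integral image
        p = np.pad(a, win + 1)[: a.shape[0] + 2 * win + 1, : a.shape[1] + 2 * win + 1]
        c = p.cumsum(0).cumsum(1)
        w = 2 * win + 1
        return c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2   # step (2): Harris score
    thresh = thresh_ratio * R.max()                     # step (3): NMS + threshold
    keep = []
    H, W = R.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = R[y, x]
            if v > thresh and v == R[y - 1:y + 2, x - 1:x + 2].max():
                keep.append((v, x, y))
    keep.sort(reverse=True)                             # step (4): strongest first,
    cx, cy = [], []                                     # enforce minimum spacing
    for v, x, y in keep:
        if all(max(abs(x - a), abs(y - b)) > min_dist for a, b in zip(cx, cy)):
            cx.append(x); cy.append(y)
        if len(cx) >= n_max:
            break
    return cx, cy
```

The returned `cx`, `cy` lists play the role of Cornerx and Cornery in step A2.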
B. Using the previous frame as the reference frame, track the corners of the reference frame with the Lucas-Kanade optical flow method to obtain the Harris corner coordinates of the current frame; then reject mismatched points between the reference frame and the current frame with the random sample consensus (RANSAC) algorithm, and solve the transformation matrix detaH between the two adjacent frames;
B1. By the continuity between successive video frames, the Harris corners of each frame are also continuous; starting from the image coordinates of the reference frame, estimate with the function VLIB_trackFeaturesLucasKanade_7x7() the Harris corner coordinates Tracex and Tracey of the current frame image;
B2. Use the random sample consensus algorithm to reject mismatched points among the Harris corners detected in the reference frame and the current frame; the detailed process is as follows:
(1) Each iteration, randomly select 4 matched point pairs from the Harris corners of the reference frame image and the current frame image and substitute them into the following formula:
(2) Solve the transformation model M by singular value decomposition, transform the current frame with M, and compute the Euclidean distance to the coordinates of the corresponding matched points in the reference frame. If the Euclidean distance is below the threshold, the feature point fits the model and is added to the feature set consensus; count the feature-set size in, and when in exceeds the optimal feature-set size in_max, update the optimal feature set and the transformation model M;
(3) Increment the iteration count k; while the current error probability p exceeds the allowed error probability p_badxform, repeat step (2), and stop iterating once p falls below p_badxform. After this screening, the final transformation model M and the optimal feature point set are determined, and the optimal point-set size in_max is obtained;
B3. Return the transformation model M as the transformation matrix detaH;
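A minimal sketch of the RANSAC loop in step B2, under two simplifying assumptions labeled here: the model is fitted by least squares rather than SVD (the embodiment itself notes this substitution in B3), and a fixed iteration count replaces the p/p_badxform stopping rule. The function names are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ [x, y, 1] @ A, returned as a 2x3 matrix.
    (The patent text notes SVD can be replaced by least squares to cut complexity.)"""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T  # 2x3: [linear part | translation]

def ransac(src, dst, n_iter=200, inlier_thresh=2.0, seed=0):
    """RANSAC sketch: sample 4 matches, fit, count inliers by Euclidean
    residual, keep the best consensus set, refit on its inliers."""
    rng = np.random.default_rng(seed)
    best_in = -1
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)    # 4 random matched pairs
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        d = np.linalg.norm(pred - dst, axis=1)          # Euclidean residual
        n_in = int((d < inlier_thresh).sum())
        if n_in > best_in:                              # update optimal feature set
            best_in = n_in
            inliers = d < inlier_thresh
    return fit_affine(src[inliers], dst[inliers]), best_in
```

The refit on the final inlier set corresponds to determining the last transformation model M from the optimal feature point set.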
C. Compute the compensation parameters of the current frame from its boundary points and the transformation matrix detaH, and smooth the compensation parameters with Kalman filtering to obtain the compensation matrix motionH of the current frame;
C1. Take the four boundary points (0, 0), (0, width-1), (height-1, 0), (width-1, height-1) of the current frame, labeled CornerA; multiply the boundary points CornerA by the transformation matrix detaH to obtain CornerB, the boundary points of the current frame mapped into the reference frame;
C2. From the boundary points CornerB, obtain the horizontal and vertical offsets detax, detay, the deflection angle detaAngle, and the scaling factor zoom of the current frame;
C3. Smooth detax, detay, detaAngle, and zoom with Kalman filtering to obtain the compensation matrix motionH of the current frame;
D. Obtain the global transformation matrix H of the current frame from its transformation matrix detaH and compensation matrix motionH, and apply a perspective transform to the Y and UV channels of the current frame to obtain the stabilized image of the current frame. Judge whether the current frame number is divisible by ten; if so, update the corner coordinates by the method of A1 and A2 in step A; if not, save the corner coordinates of the current frame;
D1. Multiply the transformation matrix detaH by the compensation matrix motionH to obtain the global transformation matrix H of the current frame; transform the image coordinates of the current frame with H, so that (up to the homogeneous scale factor) [x', y', 1]^T = H · [x, y, 1]^T,
where (x, y) are the current image coordinates and (x', y') are the corresponding coordinates after the current frame is transformed;
D2. The (x', y') obtained after the transformation are floating-point coordinates, so bilinear interpolation is applied to them to obtain the pixel values of the final image. Let the floating-point coordinate of the image be (u + α, v + β), where u, v are integers and α, β are floating-point numbers in [0, 1]; with u0 = u + α, v0 = v + β, the pixel value f(u0, v0) at (u0, v0) is expressed as:
f(u0, v0) = (1-α)(1-β)f(u, v) + α(1-β)f(u+1, v) + (1-α)β f(u, v+1) + αβ f(u+1, v+1)
Judge whether the current frame number is divisible by ten; if so, update the corner coordinates by the method of A1 and A2 in step A; if not, save the corner coordinates of the current frame;
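The bilinear interpolation of step D2 can be sketched directly; this is an illustrative helper (boundary handling by clamping is an assumption not stated in the text):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear sample at floating-point (x, y) = (u + alpha, v + beta),
    as used to resample the warped frame in step D2."""
    u, v = int(np.floor(x)), int(np.floor(y))
    a, b = x - u, y - v
    u1 = min(u + 1, img.shape[1] - 1)   # clamp at the image border (assumption)
    v1 = min(v + 1, img.shape[0] - 1)
    return ((1 - a) * (1 - b) * img[v, u] + a * (1 - b) * img[v, u1]
            + (1 - a) * b * img[v1, u] + a * b * img[v1, u1])
```

Applied at every warped coordinate (x', y'), this produces the stabilized Y (and, at half resolution, UV) planes.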
E. Down-sample the Y channel of the stabilized image, then apply spatial-domain Gaussian filtering to obtain the filtered image Yf; convert the filtered image Yf to the log domain to obtain the illumination image Y' of the Y channel;
E1. Down-sample the Y channel of the stabilized image, and let the sampled image be I(x, y). By adjusting the width parameter c of the Gaussian kernel, choose three different scales — a small scale c < 50, a medium scale 50 < c < 100, and a large scale c > 100 — corresponding to three Gaussian kernel functions F1, F2, F3, and use the arithmetic mean of the three Gaussian kernels in place of the geometric mean; for the Y channel this gives:
where F(x, y) = F1(x, y) + F2(x, y) + F3(x, y). According to formula (5), the spatial-domain filtering is transferred to the frequency domain: Gaussian filtering of the sampled image I(x, y) yields If(x, y) by first applying the Fourier transform to I(x, y), multiplying by the Gaussian kernel function, and then applying the inverse Fourier transform;
where F(u, v) is the expression of F(x, y) after the frequency-domain transform and I(u, v) is that of I(x, y).
E2. Apply nearest-neighbor interpolation to the filtered image If(x, y) to obtain the image Yf, then perform the log-domain conversion to obtain the illumination image Y'(x, y):
Y'(x, y) = log(Y(x, y)) - log(Yf(x, y))    (6)
where Y'(x, y) is the illumination image and Yf(x, y) is the result of interpolating If(x, y).
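Step E — multi-scale Gaussian blur in the frequency domain followed by the log-domain difference of formula (6) — can be sketched as follows. This is a minimal NumPy illustration (circular convolution via FFT; the down-sampling/interpolation and the exact kernel normalization are omitted, and the function names are assumptions):

```python
import numpy as np

def gaussian_kernel_wrapped(shape, c):
    """Origin-centered, wrap-around Gaussian of width c, normalized to sum 1,
    laid out for circular convolution via FFT."""
    h, w = shape
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * c ** 2))
    return g / g.sum()

def msr_illumination(Y, scales=(27, 85, 125)):
    """Average three Gaussian kernels (arithmetic mean replacing the per-scale
    geometric mean), blur in the frequency domain, return Y' = log Y - log Yf."""
    Y = Y.astype(np.float64) + 1.0                    # avoid log(0)
    F = sum(gaussian_kernel_wrapped(Y.shape, c) for c in scales) / 3.0
    Yf = np.real(np.fft.ifft2(np.fft.fft2(Y) * np.fft.fft2(F)))  # circular blur
    return np.log(Y) - np.log(np.maximum(Yf, 1e-6))
```

On a uniform image the blur reproduces the input, so Y' is zero; a bright spot against a dim background yields a positive Y' at the spot, which is the detail the Retinex step keeps.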
F. Stretch the gray values of the illumination image Y' to obtain YR, and equalize the chroma channels uv of the stabilized image to obtain uv', yielding the clear image;
F1. Compute the mean μ and standard deviation σ of the illumination image Y', and obtain the bounds of the dynamic range of its pixel values: the lower saturation point Rmin = μ - 3σ and the upper saturation point Rmax = μ + 3σ.
F2. The illumination image Y' obtained after filtering may be dim, with insufficiently distinct details, so the contrast must be enhanced by linear stretching; the Y channel is processed as follows:
YR(x, y) = 255 · (Y'(x, y) - Rmin) / (Rmax - Rmin)    (7)
where Y'(x, y) is the illumination image and YR(x, y) is the Y-channel image of the final clear image.
F3. Since the pixel values of a foggy image in the chroma channels are all close to 128, applying a linear chroma adjustment to the stabilized chroma-channel image uv(x, y) according to the following formula improves the contrast of the image and better matches the actual scene:
uv'(x, y) = uv(x, y) × 3 - 256    (8)
where uv(x, y) is the chroma channel of the stabilized image and uv'(x, y) is the adjusted chroma value.
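Step F amounts to two element-wise maps, sketched below under labeled assumptions: the stretch of formula (7) is written with clipping to [0, 255] (clipping outside the saturation points is implied but not stated), and the function name is illustrative:

```python
import numpy as np

def stretch_and_adjust(Yp, uv):
    """Step F sketch: linear-stretch the illumination image between
    mu - 3*sigma and mu + 3*sigma (formula (7)), and apply the chroma
    adjustment uv' = uv * 3 - 256 (formula (8))."""
    mu, sigma = Yp.mean(), Yp.std()
    rmin, rmax = mu - 3 * sigma, mu + 3 * sigma       # saturation points
    YR = np.clip((Yp - rmin) / (rmax - rmin) * 255.0, 0, 255)
    uvp = np.clip(uv.astype(np.float64) * 3 - 256, 0, 255)
    return YR, uvp
```

Note that a fog-gray chroma value of 128 is a fixed point of formula (8): 128 × 3 − 256 = 128, so neutral areas keep their hue while saturated colors are amplified.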
Beneficial effects of the present invention:
(1) Real-time defogging and stabilization of foggy-weather video are realized;
(2) The complexity of the algorithm is low, so it can be applied further in practical real-time systems;
(3) The present invention lays a good theoretical foundation for subsequent algorithm processing.
Detailed description of the invention
Fig. 1 is a schematic diagram of the main algorithm flow.
Fig. 2 is a flow diagram of the image stabilization processing.
Fig. 3 is a flow diagram of the multi-scale Retinex defogging with color restoration.
Fig. 4 is a flow chart of the random sample consensus algorithm in the stabilization process.
Specific embodiment
A specific embodiment of the present invention is further described below in combination with the drawings and the technical solution.
An embedded real-time video stabilization method for foggy weather, with the following steps:
A. Judge whether the current frame is the first frame; if so, detect the Harris corners in the current frame image, select the 256 strongest Harris corners, store the positions of the strong Harris corners in Cornerx and Cornery respectively, and jump to step E after outputting the image; if not, proceed directly to step B;
A1. For Harris corner detection, compute the gradients with the VLIB library function VLIB_xyGradientsAndMagnitude(); with the gradients as input to VLIB_harrisScore_7x7(), detect the Harris corners of the current frame and obtain the corner scores; apply non-maximum suppression to the Harris corner scores in a 3 × 3 neighborhood with VLIB_nonMaxSuppress_3x3_S16() and delete corners whose score is below 12000; traverse the remaining corners and select the 256 Harris corners whose Euclidean distance to the surrounding corners exceeds 5, yielding the strong Harris corners. The number of strong Harris corners, the non-maximum-suppression threshold, and the Euclidean distance can be adjusted according to the image size;
A2. Save the abscissas and ordinates of the detected strong Harris corners in Cornerx and Cornery respectively.
B. Using the previous frame as the reference frame, track the corners of the reference frame with the Lucas-Kanade optical flow method to obtain the Harris corner coordinates of the current frame; then reject mismatched points between the reference frame and the current frame with the random sample consensus algorithm, and solve the transformation matrix detaH between the two adjacent frames:
B1. With the previous frame as the reference frame, estimate the positions of the current frame's Harris corners by tracking the reference-frame corners with the VLIB optical flow function VLIB_trackFeaturesLucasKanade_7x7(); the pyramid search window size is set to 10 × 10, the maximum number of pyramid levels to 5, the maximum number of iterations to 20, the result accuracy to 0.03, and the search range to global search; the resulting positions of the current frame's Harris corners are stored in Tracex and Tracey respectively;
B2. Use the random sample consensus algorithm to reject mismatched points among the strong Harris corners detected in the reference frame and the current frame. The minimum number of feature point pairs needed to compute the transformation is set to 4, and the allowed error probability p_badxform is set to 0.005. The detailed algorithm flow is shown in Fig. 4;
B3. While rejecting mismatched points, the singular value decomposition in the transformation-matrix solution can be replaced by least squares to reduce complexity.
C. Compute the compensation parameters of the current frame from its boundary points and the transformation matrix detaH, and smooth the compensation parameters with Kalman filtering to obtain the compensation matrix motionH of the current frame;
C1. In actual operation, three Kalman filters are used and initialized separately, as follows: a 6 × 2 Kalman filter is defined to filter the horizontal and vertical displacements, with the diagonal elements of the observation matrix set to 1, elements (0,2), (1,3), (2,4), (3,5) of the state matrix set to 1/30, and elements (0,4), (1,5) set to 1/1800; two 3 × 1 Kalman filters are defined to filter the rotation angle and the scaling factor, with the diagonal elements of the observation matrix set to 1, elements (0,1), (1,2) of the state matrix set to 1/30, and element (0,2) set to 1/1800;
C2. Take the four boundary points (0, 0), (0, width-1), (height-1, 0), (width-1, height-1) of the current frame, labeled CornerA; multiply CornerA by the transformation matrix detaH to obtain CornerB, the boundary point set of the current frame mapped into the reference frame;
C3. From CornerB, obtain the horizontal and vertical offsets detax, detay, the deflection angle detaAngle, and the scaling factor zoom of the current frame;
C4. Smooth detax, detay, detaAngle, and zoom with Kalman filtering to obtain the compensation matrix motionH of the current frame.
D. Obtain the global transformation matrix H of the current frame from the transformation matrix detaH and the compensation matrix motionH, and apply a perspective transform separately to the Y and UV channels of the current frame to obtain the stabilized image;
D1. Transform the coordinates of the current frame according to formula (2) and store the computed compensated coordinate values of the current frame in an array in advance;
D2. Interpolate the coordinates of the current frame image according to formula (3) to obtain the stabilized image of the current frame. Judge whether the current frame number is divisible by ten; if so, update the corner coordinates by the method of A1 and A2 in step A; if not, save the corner coordinates of the current frame;
E. Down-sample the Y channel of the stabilized image to obtain the image I(x, y), then apply Gaussian filtering, transferring the time-domain Gaussian filtering to the frequency domain: apply the Fourier transform to the image I(x, y), multiply by the Gaussian kernel function, then apply the inverse Fourier transform to obtain If(x, y); apply nearest-neighbor interpolation to If(x, y) to obtain the image Yf; convert Yf to the log domain to obtain the illumination image Y' of the Y channel:
E1. Down-sample the stabilized image by a factor of 2 to obtain the image I(x, y); choose Gaussian kernels F1, F2, F3 at three different scales c1 = 27, c2 = 85, c3 = 125 and average them by formula (4) to obtain the convolution kernel F; apply the Fourier transform to the image I(x, y) with DSP_fft16x32() from DSPLIB, multiply by the Gaussian kernel function, then apply the inverse transform with DSP_ifft16x32() to obtain If(x, y); apply nearest-neighbor interpolation to obtain Yf(x, y), as detailed in formula (5).
E2. Convert the filtered image to the log domain to obtain the illumination image Y'(x, y), computed as in formula (6). In the actual computation, since the pixel values are known to lie in 0–256, the range of the logarithm conversion is also 0–256; to reduce the algorithm complexity, two lookup tables for the logarithm, LogMapr[256*256] and LogMapa[256*768], are usually built first, so the result can be looked up directly from the filtered value.
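The lookup-table idea in E2 can be shown with a deliberately simplified single table; the exact layout of the LogMapr/LogMapa tables on the DSP is not specified in the text, so the table below (one log value per integer in 1..256) is an assumption for illustration only:

```python
import numpy as np

# Simplified stand-in for the LogMapr/LogMapa tables: precompute log(1)..log(256)
# once, so the per-pixel log-domain conversion becomes two array lookups.
LOG_TABLE = np.log(np.arange(1, 257, dtype=np.float64))

def log_diff(y, yf):
    """Y'(x, y) = log(Y) - log(Yf) via table lookup; y, yf hold integers in 1..256."""
    return LOG_TABLE[y - 1] - LOG_TABLE[yf - 1]
```

On the DSP the same trade-off applies: a few kilobytes of table memory replace a transcendental-function call per pixel.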
F. Stretch the gray values of the illumination image Y'(x, y) and adjust the chroma of the chroma image uv(x, y) to finally obtain the clear image:
F1. Compute the mean μ and standard deviation σ of the illumination image, and from them the bounds of the dynamic range of its pixel values: the lower saturation point Rmin = μ - 3σ and the upper saturation point Rmax = μ + 3σ. In the actual computation the dynamic range can be adjusted appropriately; the coefficient on σ can be between 1.5 and 3.5;
F2. Process the illumination image Y'(x, y) according to formula (7) to obtain the Y-channel image YR(x, y) of the clear image;
F3. Adjust the chroma image uv(x, y) according to formula (8) to obtain uv'(x, y).
G. According to actual needs, the video processing can be optimized with the following methods:
G1. Optimize with compiler options: set the optimization options -O3, -g, -ms;
G2. Optimize multiplication and division with intrinsic functions: _mpy() implements fixed-point multiplication, _mpysp2dp() implements floating-point multiplication, and _max2() and _min2() compute the maximum and minimum respectively;
G3. Use the "restrict" keyword: for pointers or arrays in the code whose memory regions do not overlap, adding the restrict keyword enables code optimization;
G4. Use #pragma directives to enable software pipelining: add #pragma MUST_ITERATE() to the inner loop of a two-level loop;
G5. Reduce computation with lookup tables: in the perspective transform, store the transformed width coordinates in three arrays of size 720 and the transformed height coordinates in three arrays of size 576; this saves the space of point-by-point storage and reduces the number of multiplications.
The implementation platform of the above method is a TI DaVinci-series DM8168 evaluation board, with a DSP clock of 1 GHz and an ARM clock of 1.2 GHz, running Ubuntu 14.0.
For images of size 720 × 576, the present invention performs Harris corner detection once every 10 frames, giving an average processing time of 90 ms per frame; if the defogging filtering is likewise performed once every 10 frames, the average processing time is 65 ms, which essentially achieves real-time processing.

Claims (1)

1. An embedded real-time video stabilization method for foggy weather, characterized by the following steps:
A. Judge whether the current frame is the first frame; if so, detect the Harris corners in the current frame, select the N strongest Harris corners, store the positions of the strong Harris corners in Cornerx and Cornery respectively, and pass the output image to step E for processing; if not, proceed directly to step B;
A1. The captured video is in YUV space and the Y-channel pixels are gray values, so Harris corner detection is performed directly on the current frame image; the steps of Harris corner detection are as follows:
(1) Compute the gradients Ix, Iy of the current frame image L(x, y) in the x and y directions using the VLIB library function VLIB_xyGradientsAndMagnitude();
(2) With the gradients Ix, Iy as input, compute the Harris corner score of the image using the VLIB library function VLIB_harrisScore_7x7();
(3) Apply non-maximum suppression in a 3 × 3 neighborhood to the Harris corner scores with VLIB_nonMaxSuppress_3x3_S16(), i.e. delete Harris corners whose score is below the threshold thresh, set to 12000;
(4) Traverse the Harris corners remaining after step (3) and select the N corners whose pairwise Euclidean distance exceeds the 5 × 5 neighborhood, yielding the strong Harris corners of the current frame image;
A2. Store the abscissas and ordinates of the strong Harris corners detected in A1 in Cornerx and Cornery;
B. Using the previous frame as the reference frame, track the corners of the reference frame with the Lucas-Kanade optical flow method to obtain the Harris corner coordinates of the current frame; then reject mismatched points between the reference frame and the current frame with the random sample consensus algorithm, and solve the transformation matrix detaH between the two adjacent frames;
B1. By the continuity between successive video frames, the Harris corners of each frame are also continuous; starting from the image coordinates of the reference frame, estimate with the function VLIB_trackFeaturesLucasKanade_7x7() the Harris corner coordinates Tracex and Tracey of the current frame image;
B2. Use the random sample consensus algorithm to reject mismatched points among the Harris corners detected in the reference frame and the current frame; the detailed process is as follows:
(1) Each iteration, randomly select 4 corresponding matched point pairs from the Harris corners of the reference frame image and the current frame image and substitute them into the following formula:
(2) Solve the transformation model M by singular value decomposition, transform the current frame with M, and compute the Euclidean distance to the coordinates of the corresponding matched points in the reference frame. If the Euclidean distance is below the threshold, the feature point fits the model and is added to the feature set consensus; count the feature-set size in, and when in exceeds the optimal feature-set size in_max, update the optimal feature set and the transformation model M;
(3) Increment the iteration count k; while the current error probability p exceeds the allowed error probability p_badxform, repeat step (2), and stop iterating once p falls below p_badxform. After this screening, the final transformation model M and the optimal feature point set are determined, and the optimal point-set size in_max is obtained;
B3, transformation model M is returned as transformation matrix detaH;
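Steps (1)–(3) of B2 amount to a standard RANSAC loop; a minimal NumPy sketch under the assumption of a similarity transform estimated from two point pairs (the function names, threshold, and p_badxform value are illustrative, not from the patent; np.linalg.lstsq solves the model via SVD, matching the singular-value-decomposition step):

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform: x' = a*x - b*y + tx,
    y' = b*x + a*y + ty (np.linalg.lstsq uses SVD internally)."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.array([[a, -b, tx], [b, a, ty], [0.0, 0.0, 1.0]])

def ransac(src, dst, thresh=2.0, p_badxform=1e-3, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(src)
    best_M, in_max, k, p = None, 0, 0, 1.0
    while p > p_badxform and k < max_iter:
        idx = rng.choice(n, size=2, replace=False)      # two random pairs
        M = estimate_similarity(src[idx], dst[idx])
        proj = (np.c_[src, np.ones(n)] @ M.T)[:, :2]
        inliers = np.hypot(*(proj - dst).T) < thresh     # consensus set
        if inliers.sum() > in_max:
            in_max = int(inliers.sum())
            best_M = estimate_similarity(src[inliers], dst[inliers])
        k += 1
        w = max(in_max / n, 1e-6)       # inlier ratio of the best model so far
        p = (1.0 - w ** 2) ** k         # prob. that every sample so far was bad
    return best_M, in_max

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(20, 2))
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([5.0, -3.0])
dst[0] += 40.0                          # inject one gross mismatch
M, in_max = ransac(src, dst)
```

The injected mismatch is rejected, and the refitted model recovers the true rotation and translation from the 19 consensus points.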
C, compute the compensation parameters of the current frame from its boundary points and the transformation matrix detaH, then smooth the compensation parameters with a Kalman filter to obtain the compensation matrix motionH of the current frame;
C1, take the four boundary points of the current frame, (0,0), (0, width-1), (height-1, 0) and (width-1, height-1), and label them CornerA; multiply CornerA by the transformation matrix detaH to obtain CornerB, the boundary points of the current frame expressed in the reference frame;
C2, from the boundary points CornerB, derive the horizontal and vertical offsets detax and detay of the current frame, the rotation angle detaAngle, and the scaling factor zoom;
C3, smooth detax, detay, detaAngle and zoom with a Kalman filter and obtain the compensation matrix motionH of the current frame;
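C3's smoothing can be sketched with a scalar Kalman filter applied independently to each of detax, detay, detaAngle and zoom; a minimal sketch assuming a random-walk state model with illustrative noise parameters q and r (the patent does not specify its filter model):

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.25):
    """Scalar Kalman filter, random-walk state model:
    x_k = x_{k-1} + w (var q),  z_k = x_k + v (var r)."""
    x, p = z[0], 1.0
    out = [x]
    for zk in z[1:]:
        p += q                       # predict: state variance grows
        k = p / (p + r)              # Kalman gain
        x += k * (zk - x)            # update with the new measurement
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

detax = np.array([0.0, 2.0, -1.5, 2.2, -1.8, 2.1])   # jittery frame offsets
smooth = kalman_smooth(detax)
```

The filtered trajectory varies far less than the raw offsets, which is the desired separation of hand jitter from intentional motion.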
D, obtain the global transformation matrix H of the current frame from its transformation matrix detaH and compensation matrix motionH, and apply a perspective transform to the Y and UV channels of the current frame to obtain the stabilized image. Judge whether the current frame number is exactly divisible by ten: if so, update the corner coordinates by the method of A1 and A2 in step A; if not, save the corner coordinates of the current frame;
D1, multiply the transformation matrix detaH by the compensation matrix motionH to obtain the global transformation matrix H of the current frame; transforming the image coordinates of the current frame with H gives:
where (x, y) are the coordinates in the current image and (x', y') are the corresponding coordinates after the current frame is transformed;
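The coordinate mapping of D1 (the formula image is not reproduced in this text) is the usual homogeneous-coordinate product with perspective division; a minimal sketch with an illustrative, translation-only H:

```python
import numpy as np

H = np.array([[1.0, 0.0,  3.0],      # illustrative global transform:
              [0.0, 1.0, -2.0],      # a pure translation by (3, -2)
              [0.0, 0.0,  1.0]])

def warp_coord(H, x, y):
    """Map (x, y) through the 3x3 matrix H with perspective division."""
    xh, yh, w = H @ np.array([x, y, 1.0])
    return xh / w, yh / w

xp, yp = warp_coord(H, 10.0, 20.0)
```

For a pure translation the perspective division by w = 1 is a no-op; for a general H with a non-trivial last row it performs the projective normalization.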
D2, the coordinates (x', y') obtained after the transformation are floating-point values; apply bilinear interpolation to them to obtain the final image pixel values. Let the floating-point coordinates be (u+α, v+β), where u and v are integers and α and β are floating-point numbers in [0,1], and let u0 = u+α, v0 = v+β; the pixel value f(u0, v0) at (u0, v0) is then expressed as:
Judge whether the current frame number is exactly divisible by ten: if so, update the corner coordinates by the method of A1 and A2 in step A; if not, save the corner coordinates of the current frame;
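The interpolation of D2 (its formula image is not reproduced here) uses the standard bilinear weights (1-α)(1-β), α(1-β), (1-α)β and αβ; a minimal sketch, assuming row-major indexing with (u, v) as (row, column):

```python
import numpy as np

def bilinear(img, u0, v0):
    """Sample img at floating-point coordinates (u0, v0) = (u+a, v+b),
    u, v integers and a, b in [0, 1), by weighting the 2x2 neighborhood."""
    u, v = int(np.floor(u0)), int(np.floor(v0))
    a, b = u0 - u, v0 - v
    return ((1 - a) * (1 - b) * img[u, v]
            + a * (1 - b)     * img[u + 1, v]
            + (1 - a) * b     * img[u, v + 1]
            + a * b           * img[u + 1, v + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
val = bilinear(img, 0.5, 0.5)   # midpoint of the 2x2 neighborhood
```

At the midpoint all four weights are 0.25, so the result is the average of the four neighbors.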
E, down-sample the Y-channel image after stabilization, then apply spatial-domain Gaussian filtering to obtain the filtered image Yf; convert the filtered image Yf to the log domain to obtain the illumination image Y' of the Y channel;
E1, down-sample the Y-channel image after stabilization and denote the sampled image by I(x, y). By adjusting the width parameter c of the Gaussian kernel, choose three different scales: a small scale c < 50, a medium scale 50 < c < 100 and a large scale c > 100, corresponding to three Gaussian kernel functions F1, F2 and F3. Replacing the geometric mean with the arithmetic mean of the three Gaussian kernels, the Y channel satisfies:
where F(x, y) = F1(x, y) + F2(x, y) + F3(x, y). According to formula (5), the spatial-domain filtering is implemented in the frequency domain: to Gaussian-filter the sampled image I(x, y) into If(x, y), first apply the Fourier transform to I(x, y), multiply the result by the Gaussian kernel function, and then apply the inverse Fourier transform to obtain If(x, y);
where F(u, v) is the expression of F(x, y) after the frequency-domain transform and I(u, v) is the expression of I(x, y) after the frequency-domain transform;
E2, apply nearest-neighbor interpolation to the filtered image If(x, y) to obtain the image Yf, then convert to the log domain to obtain the illumination image Y'(x, y):
Y'(x, y) = log(Y(x, y)) - log(Yf(x, y))    (6)
where Y'(x, y) is the illumination image and Yf(x, y) is the result of interpolating If(x, y);
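E1 and E2 can be sketched as follows, with the down-sampling and nearest-neighbor interpolation steps omitted and illustrative scale values 25, 75 and 150 for c (within the small/medium/large bands stated in E1); the Gaussian filtering is done in the frequency domain as described: FFT, multiply by the Gaussian transfer function, inverse FFT:

```python
import numpy as np

def gaussian_lowpass(shape, c):
    """Frequency-domain Gaussian transfer function with width parameter c."""
    h, w = shape
    u = np.fft.fftfreq(h)[:, None] * h
    v = np.fft.fftfreq(w)[None, :] * w
    return np.exp(-(u ** 2 + v ** 2) / (2.0 * c ** 2))

def illumination(Y, scales=(25, 75, 150)):
    """Average three Gaussian-filtered copies of Y (small, medium, large
    scale), then take the log-domain difference Y' = log Y - log Yf."""
    spec = np.fft.fft2(Y)
    Yf = np.mean([np.fft.ifft2(spec * gaussian_lowpass(Y.shape, c)).real
                  for c in scales], axis=0)
    eps = 1e-6                      # guard against log(0)
    return np.log(Y + eps) - np.log(np.clip(Yf, eps, None))

Y = np.full((32, 32), 100.0)        # flat background
Y[8:24, 8:24] = 150.0               # brighter central patch
Yp = illumination(Y)
```

Because the spatial kernel of a Gaussian transfer function is non-negative with unit DC gain, Yf stays within the range of Y, so the log difference is small and bounded.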
F, stretch the gray values of the illumination image Y' to obtain YR, and equalize the chrominance channels uv of the stabilized image to obtain uv', yielding the clear image;
F1, compute the mean μ and standard deviation σ of the illumination image Y', and obtain the boundary values of the dynamic range of its pixel values: the lower saturation point Rmin = μ - 3σ and the upper saturation point Rmax = μ + 3σ;
F2, the illumination image Y' obtained after filtering may be dim and lack distinct detail, so its contrast needs to be enhanced by linear stretching; the Y channel is processed as follows:
where Y'(x, y) is the illumination image and YR(x, y) is the Y-channel image of the final clear image;
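The linear stretch of F2 (formula (7) is not reproduced in this text) in its standard form, consistent with the saturation points of F1, maps [Rmin, Rmax] onto [0, 255]; a minimal sketch (the patent's exact formula may differ):

```python
import numpy as np

def stretch(Yp):
    """Linearly stretch the illumination image to [0, 255] using the
    saturation points Rmin = mu - 3*sigma and Rmax = mu + 3*sigma."""
    mu, sigma = Yp.mean(), Yp.std()
    rmin, rmax = mu - 3.0 * sigma, mu + 3.0 * sigma
    YR = (Yp - rmin) / (rmax - rmin) * 255.0
    return np.clip(YR, 0.0, 255.0)   # saturate values outside the range

Yp = np.array([[-0.4, -0.1],
               [ 0.1,  0.4]])        # a small log-domain illumination image
YR = stretch(Yp)
```

Values at the mean map to mid-gray, and anything beyond three standard deviations saturates at 0 or 255.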
F3, since the pixel values of a foggy image in the chrominance channels are all close to 128, applying the following linear chroma adjustment to the stabilized chrominance-channel image uv(x, y) improves the contrast of the image and better matches the actual scene:
uv'(x, y) = uv(x, y) × 3 - 256    (8)
where uv(x, y) is the chrominance channel of the stabilized image and uv'(x, y) is the adjusted chrominance value.
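Formula (8) can be applied directly; the clipping to the valid 8-bit range below is an added assumption (the patent does not state how out-of-range values are handled):

```python
import numpy as np

def adjust_chroma(uv):
    """Apply uv' = uv * 3 - 256 (formula (8)), clipped to 8-bit range."""
    return np.clip(uv.astype(np.int32) * 3 - 256, 0, 255).astype(np.uint8)

uv = np.array([[120, 128, 136]], dtype=np.uint8)   # foggy chroma hovers near 128
uvp = adjust_chroma(uv)
```

Note that the neutral value 128 maps to itself (128 × 3 - 256 = 128), so gray pixels keep their hue while deviations from neutral are tripled.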
CN201610272906.8A 2016-04-27 2016-04-27 A kind of embedded greasy weather real time video image stabilization Active CN105976330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610272906.8A CN105976330B (en) 2016-04-27 2016-04-27 A kind of embedded greasy weather real time video image stabilization

Publications (2)

Publication Number Publication Date
CN105976330A CN105976330A (en) 2016-09-28
CN105976330B true CN105976330B (en) 2019-04-09

Family

ID=56993294

Country Status (1)

Country Link
CN (1) CN105976330B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550174B (en) * 2016-10-28 2019-04-09 大连理工大学 A kind of real time video image stabilization based on homography matrix
CN106780365B (en) * 2016-11-25 2020-03-17 阿依瓦(北京)技术有限公司 Image de-jittering system based on heterogeneous computation
CN106780370B (en) * 2016-11-25 2019-12-20 阿依瓦(北京)技术有限公司 Image de-jittering device and method thereof
CN106846280A (en) * 2017-03-08 2017-06-13 沈阳工业大学 Image defogging method based on discrete warp wavelet
CN107729807A (en) * 2017-09-05 2018-02-23 南京理工大学 Integrated external force damage prevention target identification and intelligent early-warning system
CN108090885B (en) * 2017-12-20 2021-08-27 百度在线网络技术(北京)有限公司 Method and apparatus for processing image
CN108053382B (en) * 2017-12-25 2019-04-16 北京航空航天大学 A kind of visual characteristic defogging is steady as detection system
CN108347549B (en) * 2018-02-26 2020-11-10 华东理工大学 Method for improving video jitter based on time consistency of video frames
CN108765309B (en) * 2018-04-26 2022-05-17 西安汇智信息科技有限公司 Image defogging method for improving global atmospheric light in linear self-adaption mode based on dark channel
CN108765317B (en) * 2018-05-08 2021-08-27 北京航空航天大学 Joint optimization method for space-time consistency and feature center EMD self-adaptive video stabilization
CN109102013B (en) * 2018-08-01 2022-03-15 重庆大学 Improved FREAK characteristic point matching image stabilization method suitable for tunnel environment characteristics
CN110120023A (en) * 2019-05-14 2019-08-13 浙江工大盈码科技发展有限公司 A kind of image feedback antidote
CN110677578A (en) * 2019-08-14 2020-01-10 北京理工大学 Mixed image stabilization method and device based on bionic eye platform
CN111192210B (en) * 2019-12-23 2023-05-26 杭州当虹科技股份有限公司 Self-adaptive enhanced video defogging method
CN113905147B (en) * 2021-09-30 2023-10-03 桂林长海发展有限责任公司 Method and device for removing tremble of marine monitoring video picture and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103458156A (en) * 2013-08-27 2013-12-18 宁波海视智能系统有限公司 Preprocessing method for traffic accident detection video signals on severe weather conditions
CN103902972A (en) * 2014-03-21 2014-07-02 哈尔滨工程大学 Water surface moving platform visual system image analyzing and processing method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP1892666A1 (en) * 2006-08-16 2008-02-27 Toyota Motor Europe NV A method, an apparatus and a computer-readable medium for processing an image dataset

Non-Patent Citations (3)

Title
On-Line Digital Image Stabilization for Translational and Rotational Motion; Qingle Zeng et al.; 2011 International Conference on Image Analysis and Signal Processing; 2011-10-23; pp. 266-270
Retinex night-time image enhancement algorithm based on the YUV color space; Zhang Hongying et al.; Science Technology and Engineering; 2014-10-31; vol. 14, no. 30, pp. 71-75
Electronic image stabilization algorithm based on corner matching; Yao Jun et al.; Optics & Optoelectronic Technology; 2009-08-31; vol. 7, no. 4, pp. 37-40

Similar Documents

Publication Publication Date Title
CN105976330B (en) A kind of embedded greasy weather real time video image stabilization
CN108596849B (en) Single image defogging method based on sky region segmentation
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
CN110163818A (en) A kind of low illumination level video image enhancement for maritime affairs unmanned plane
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN109658447B (en) Night image defogging method based on edge detail preservation
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN110047055A (en) A kind of enhancing of infrared image details and denoising method
CN107798670A (en) A kind of dark primary prior image defogging method using image wave filter
CN109064402A (en) Based on the single image super resolution ratio reconstruction method for enhancing non local total variation model priori
CN115187688A (en) Fog map reconstruction method based on atmospheric light polarization orthogonal blind separation and electronic equipment
Chen et al. Scene segmentation of remotely sensed images with data augmentation using U-net++
Wang et al. Low-light-level image enhancement algorithm based on integrated networks
CN113344810A (en) Image enhancement method based on dynamic data distribution
Lv et al. Two adaptive enhancement algorithms for high gray-scale RAW infrared images based on multi-scale fusion and chromatographic remapping
Yang et al. CSDM: A cross-scale decomposition method for low-light image enhancement
Yao et al. A multi-expose fusion image dehazing based on scene depth information
Lee et al. Joint defogging and demosaicking
Song et al. An adaptive real-time video defogging method based on context-sensitiveness
Zhu et al. Near-infrared and visible fusion for image enhancement based on multi-scale decomposition with rolling WLSF
CN113436123A (en) High-resolution SAR and low-resolution multispectral image fusion method based on cloud removal-resolution improvement cooperation
Mu et al. Color image enhancement method based on weighted image guided filtering
Elhefnawy et al. Effective visibility restoration and enhancement of air polluted images with high information fidelity
Liu et al. Research on image enhancement algorithm based on artificial intelligence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant