CN109376641A - A kind of moving vehicle detection method based on unmanned plane video - Google Patents

A moving vehicle detection method based on UAV video

Info

Publication number
CN109376641A
CN109376641A CN201811203391.1A
Authority
CN
China
Prior art keywords
image
feature
vehicle
level
registration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811203391.1A
Other languages
Chinese (zh)
Other versions
CN109376641B (en)
Inventor
朱旭
孙思琦
徐伟
闫茂德
杨盼盼
左磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd
CHECC Data Co Ltd
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201811203391.1A priority Critical patent/CN109376641B/en
Publication of CN109376641A publication Critical patent/CN109376641A/en
Application granted granted Critical
Publication of CN109376641B publication Critical patent/CN109376641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Abstract

The invention discloses a moving vehicle detection method based on UAV video. First, SURF feature point matching and outlier rejection are applied to the images, and a UAV image registration algorithm combining global and local homography matrices is used to obtain a transition matrix, compensating for the adverse effects of onboard camera motion. Next, a two-frame difference method narrows the region to be detected, which is then traversed according to superpixel centers, further improving the efficiency of moving vehicle detection. Then, a multi-channel HOG feature algorithm extracts the low-order features of the vehicle, the contextual information of the vehicle yields its high-order features, and the two are fused into the multi-order feature of the target vehicle. Finally, the multi-order feature is combined with a dictionary-learning algorithm to detect moving vehicles. The method suppresses the influence of onboard camera motion, handles vehicle deformation and background interference in the images, and improves the robustness and real-time performance of moving vehicle detection.

Description

A moving vehicle detection method based on UAV video
Technical field
The present invention relates to methods for detecting moving vehicles, and in particular to a moving vehicle detection method based on UAV video.
Background art
As a novel means of acquiring remote-sensing data, UAVs offer unique advantages: flexible deployment, a large monitoring range, fine-grained information collection, and freedom from ground-traffic interference. Their flight speed and altitude are adjustable, their viewing angle is flexible, and they collect ground traffic imagery efficiently, at low cost, and at low risk, enabling traffic monitoring from local areas up to wide areas. With the continued development and integration of UAV and image-processing technology, the rational use and analysis of UAV imagery has broad application prospects in traffic planning, design, and management.
Common moving vehicle detection methods include background subtraction and optical flow. Background subtraction, however, is extremely sensitive to changes in illumination and background, and optical flow is computationally expensive. To improve the robustness of moving vehicle detection, some researchers have built dynamic Bayesian networks and detected vehicles with a sliding-window method; although this achieves some success, the sliding-window computation remains too heavy, which limits its use.
Thus, although many moving vehicle detection algorithms exist, each with some detection capability, the stability, robustness, and real-time performance of moving vehicle detection from UAV video still leave room for improvement.
Summary of the invention
The purpose of the present invention is to provide a moving vehicle detection method based on UAV video that overcomes the deficiencies of the prior art.
To achieve the above purpose, the present invention adopts the following technical scheme:
A moving vehicle detection method based on UAV video, comprising the following steps:
Step 1): obtain aerial video of moving vehicles and extract its consecutive image sequence; extract the SURF feature points of the reference image and the image to be registered; match the feature points; and reject outliers from the matched feature points using the RANSAC algorithm;
Step 2): from the feature points remaining after outlier rejection, obtain the image transition matrix via a UAV image registration algorithm;
Step 3): for the images processed in step 2), determine the region to be detected for moving vehicles using a two-frame difference method; apply superpixel segmentation to the image and determine scan boxes from the superpixel centers, thereby traversing the region to be detected;
Step 4): from the images processed in step 3), extract the low-order feature of the vehicle, composed of vehicle texture and color; introduce the contextual information of the vehicle to extract its high-order feature; and, having obtained both, fuse the low-order and high-order features into the multi-order feature of the target vehicle;
Step 5): train a dictionary on the acquired multi-order vehicle features using a dictionary-learning algorithm, and complete moving vehicle detection with the trained dictionary.
Further, SURF feature points are extracted from the reference image and the image to be registered using Haar features and the integral image.
Further, for each SURF feature point in the reference image, its Euclidean distance to the feature points in the image to be registered is computed; the smaller the distance, the higher the similarity, and a match is declared when the distance falls below a set threshold. If a SURF feature point in the image to be registered matches multiple feature points in the reference image, the match is deemed unsuccessful.
Further, after outlier rejection, an image pyramid is introduced and, in top-down fashion, the global and local homography matrices are determined from the feature point matches. First, (L+1)-level pyramids of the reference image and the image to be registered are established. To determine the global homography matrix, one starts from the level-L global homography matrix and increases the resolution level by level down to level 0, thereby obtaining the corresponding level-0 global homography matrix.
Further, let (x̂_L, ŷ_L) and (x_L, y_L) denote the corresponding coordinates in the level-L reference image and image to be registered, respectively, where x̂_L and ŷ_L are the x and y coordinates in the level-L reference image, and x_L and y_L are the x and y coordinates in the level-L image to be registered.
The level-L global homography matrix H^G_L is then determined by:
w_L (x̂_L, ŷ_L, 1)^T = H^G_L (x_L, y_L, 1)^T,
where w_L is an intermediate variable with w_L = h_31 x_L + h_32 y_L + h_33, and the elements of H^G_L are h_ij (i, j = 1, 2, 3).
For brevity, the relation is abbreviated as p̂_L = H^G_L p_L.
H^G_L is determined as follows: four groups of feature point matches are randomly selected each time to determine a candidate homography matrix, and the remaining matched feature points are screened with the l2 norm:
||p̂_L − H^G_L p_L||_2 < t_r,
where t_r is the outlier-screening threshold. A remaining matched point that satisfies this inequality is deemed a valid feature match; otherwise it is deemed invalid. The candidate with the largest number of valid feature matches is taken as the final level-L global homography matrix H^G_L.
The level L−1 homography matrix is obtained by increasing the image resolution. Introducing a scale factor μ, the corresponding pixels of the level L−1 reference image and image to be registered can be expressed as (x̂_{L−1}, ŷ_{L−1}) = μ (x̂_L, ŷ_L) and (x_{L−1}, y_{L−1}) = μ (x_L, y_L), where x̂_{L−1} and ŷ_{L−1} are the x and y coordinates in the level L−1 reference image, and x_{L−1} and y_{L−1} are those in the level L−1 image to be registered. To obtain the level L−1 homography matrix, substitute into the level-L relation with S = diag(μ, μ, 1):
w_{L−1} (x̂_{L−1}, ŷ_{L−1}, 1)^T = S H^G_L S^{−1} (x_{L−1}, y_{L−1}, 1)^T.
Letting H^G_{L−1} = S H^G_L S^{−1}, this can be rewritten as:
w_{L−1} (x̂_{L−1}, ŷ_{L−1}, 1)^T = H^G_{L−1} (x_{L−1}, y_{L−1}, 1)^T,
where H^G_{L−1} is the level L−1 global homography matrix.
Applying the level-L to level L−1 derivation repeatedly and increasing the resolution step by step yields the corresponding level-0 global homography matrix H^G_0, i.e.:
w_0 (x̂_0, ŷ_0, 1)^T = H^G_0 (x_0, y_0, 1)^T,
where x̂_0 and ŷ_0 are the x and y coordinates in the level-0 reference image, x_0 and y_0 are those in the level-0 image to be registered, and μ_L is the scale factor of the level-0 homography matrix.
Further, F(k−1) and F(k) denote frames k−1 and k of the UAV image sequence, and F_r(k−1) and F_r(k) the corresponding registered images; the region to be detected is determined from F_r(k−1) and F_r(k) with a two-frame difference method.
Further, the image is first divided into small connected regions, i.e. cell units; the histogram of gradient directions (or edge orientations) of the pixels in each cell unit is then computed; finally, combining the features of these cell units forms the HOG feature descriptor. Specifically, the image is converted from the RGB color space to the HSV color space, the H, S, and V channel data templates of the image are extracted and saved as two-dimensional matrices M_H, M_S, and M_V, and the HOG features H_H, H_S, and H_V of the three matrices are computed separately.
Further, the three-channel HOG features are fused by weighting: H_l = w_H H_H + w_S H_S + w_V H_V, where H_l denotes the low-order feature of the vehicle, and w_H, w_S, and w_V are the weights of H_H, H_S, and H_V, with w_H + w_S + w_V = 1. The weights of the three channels are determined adaptively from the respective channel data templates.
The low-order feature of the vehicle is thus determined by fusing the HOG features of the H, S, and V channels.
Further, when determining the high-order feature, the contextual information of the vehicle is introduced. Positive and negative samples are chosen manually to initialize the positive and negative dictionaries; the final positive dictionary D_p and negative dictionary D_n are then determined through dictionary learning and an autonomous selection strategy. The high-order feature is obtained by computing the reconstruction error of the target region and of the other image blocks in its neighborhood.
For a vehicle t_v, the reconstruction error is denoted e(t_v) = [e(t_v, D_p), e(t_v, D_n)]^T, where e(t_v, D_p) and e(t_v, D_n) are the reconstruction errors of t_v on the positive and negative dictionaries, respectively. For a neighborhood image block a_ι of the vehicle, the reconstruction error is e(a_ι) = [e(a_ι, D_p), e(a_ι, D_n)]^T, where the subscript ι indexes the image blocks in the neighborhood of the target vehicle t_v, and e(a_ι, D_p) and e(a_ι, D_n) are the reconstruction errors of a_ι on the positive and negative dictionaries. For neighborhood image block a_ι, the high-order feature of the target vehicle t_v is defined as the difference of the reconstruction errors of t_v and a_ι, expressed as H(t_v, a_ι) = ||e(t_v) − e(a_ι)||_2, where H(t_v, a_ι) is the high-order feature of t_v relative to the neighborhood block a_ι.
When the neighborhood of the target vehicle t_v contains M image blocks, the high-order feature of t_v is: H_h = [H(t_v, a_1), H(t_v, a_2), …, H(t_v, a_M)]^T.
Fusing the obtained high-order feature with the low-order feature gives the multi-order feature of the target vehicle: F_v = [H_l, H_h]. The low-order and high-order features of the vehicle are thus combined into the multi-order feature of the target vehicle.
Further, in the correlation-based dictionary-learning algorithm, the dictionary-update stage first identifies the atoms involved in the sparse representation of the new sample and updates only those atoms; the sparsity level is also introduced into the update stage. The update process iterates until convergence, achieving fast and efficient dictionary training, after which moving vehicle detection is completed.
Compared with the prior art, the present invention has the following beneficial technical effects:
The invention discloses a moving vehicle detection method based on UAV video. First, SURF feature point matching and outlier rejection are applied to the images, and a UAV image registration algorithm combining global and local homography matrices yields the transition matrix, compensating for the adverse effects of onboard camera motion. Next, a two-frame difference method narrows the region to be detected, which is traversed according to superpixel centers, further improving detection efficiency. Then, a multi-channel HOG feature algorithm extracts the vehicle's low-order features, the vehicle's contextual information yields its high-order features, and the two are fused into the multi-order feature of the target vehicle. Finally, the multi-order feature is combined with a dictionary-learning algorithm to detect moving vehicles. The method suppresses the influence of onboard camera motion, handles vehicle deformation and background interference in the images, and improves the robustness and real-time performance of moving vehicle detection. The invention compensates for the adverse effects of onboard camera motion, laying a foundation for moving vehicle detection; traversal combining the two-frame difference method with superpixel centers improves the efficiency of obtaining the region to be detected; for the obtained region, the multi-channel HOG feature extraction used for the vehicle's low-order features reduces false and missed detections; and introducing the vehicle's contextual information when extracting high-order features effectively suppresses vehicle deformation and background interference, improving the accuracy of moving vehicle detection. The UAV-video moving vehicle detection method of the present invention can accurately detect vehicles traveling on highways.
Further, in top-down fashion and based on the feature point matches, an image registration algorithm combining global and local homography matrices is proposed: the global homography matrix describes global position changes, while the local homography matrices describe local position changes.
Further, the two-frame difference method reduces the region to be detected, and superpixel segmentation is introduced so that the scanned region is determined from superpixel centers, effectively reducing the computational load of moving vehicle detection.
Further, when extracting the vehicle's high-order feature, positive and negative samples are first chosen manually to initialize the positive and negative dictionaries; then, following dictionary learning and the autonomous sample-selection strategy, the final positive and negative dictionaries are determined, and the high-order feature is obtained from the reconstruction error of the target region and of the other image blocks in its neighborhood. This reduces the computational load of dictionary learning and achieves fast, efficient dictionary training.
Brief description of the drawings
Fig. 1 is a flow diagram of the detection method in an embodiment of the present invention.
Fig. 2 shows the image pyramid used in an embodiment of the present invention.
Fig. 3 shows the framework of the moving vehicle detection method based on image registration and superpixel segmentation in an embodiment of the present invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the accompanying drawings:
The main purpose of this moving vehicle detection method based on UAV video is to suppress the influence of onboard camera motion, handle vehicle deformation and background interference in the images, and improve the robustness and real-time performance of moving vehicle detection.
Fig. 1 shows the flow diagram of the detection method of the invention; the specific embodiment is as follows:
Step 1): vehicles on a highway are filmed with the UAV's onboard camera to obtain aerial video, and its consecutive image sequence is extracted. The SURF feature points of the reference image and the image to be registered are then extracted and matched. Mismatches may remain among the matched feature points, so the RANSAC algorithm is further applied to reject outliers.
Specifically, SURF feature points are extracted from the reference image and the image to be registered using Haar features and the integral image. Correctly matched feature points between the two images are found according to the following two rules:
1) For each SURF feature point in the reference image, compute its Euclidean distance to the feature points in the image to be registered; the smaller the distance, the higher the similarity, and a match is declared when the distance falls below a set threshold, here taken as 6.
2) If a SURF feature point in the image to be registered matches multiple feature points in the reference image, the match is deemed unsuccessful.
Mismatches may still exist after feature point matching; to eliminate them, outliers are rejected with the RANSAC algorithm.
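As a rough illustration, the two matching rules above can be sketched in NumPy on synthetic descriptors (the descriptor values and helper names are invented for illustration; real SURF descriptors would come from a feature extractor, and the threshold of 6 is the one stated above):

```python
import numpy as np

def match_features(desc_ref, desc_reg, threshold=6.0):
    """Match descriptors by Euclidean distance (rule 1) and reject any
    point in the image to be registered that is close to several
    reference points (rule 2)."""
    matches = []
    for j, d in enumerate(desc_reg):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        hits = np.flatnonzero(dists < threshold)
        if hits.size == 1:           # exactly one reference point close enough
            matches.append((int(hits[0]), j))
    return matches

# Tiny synthetic example: 3 reference descriptors, 3 candidates.
ref = np.array([[0., 0.], [10., 0.], [0., 10.]])
reg = np.array([[1., 0.],            # close to ref[0] only -> matched
                [5., 0.],            # close to ref[0] and ref[1] -> rejected
                [50., 50.]])         # close to nothing -> rejected
print(match_features(ref, reg))      # [(0, 0)]
```

In practice the surviving matches would then be passed to RANSAC for outlier rejection, as described above.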
Step 2): from the feature points remaining after outlier rejection, the image transition matrix is obtained via a UAV image registration algorithm, compensating for the adverse effect of onboard camera motion on the images at capture time.
After outlier rejection, an image pyramid is introduced and, in top-down fashion, the global and local homography matrices are determined from the feature point matches. First, as shown in Fig. 2, (L+1)-level pyramids of the reference image and the image to be registered are established. Level 0 is the reference image or the image to be registered, at the highest resolution; moving up the pyramid, image size and resolution decrease, and at the top of the pyramid, level L, the resolution is lowest. To determine the global homography matrix, one starts from the level-L global homography matrix and increases the resolution level by level down to level 0, thereby obtaining the corresponding level-0 global homography matrix.
Let (x̂_L, ŷ_L) and (x_L, y_L) denote the corresponding coordinates in the level-L reference image and image to be registered, respectively, where x̂_L and ŷ_L are the x and y coordinates in the level-L reference image, and x_L and y_L are the x and y coordinates in the level-L image to be registered.
The level-L global homography matrix H^G_L is then determined by:
w_L (x̂_L, ŷ_L, 1)^T = H^G_L (x_L, y_L, 1)^T,
where w_L is an intermediate variable with w_L = h_31 x_L + h_32 y_L + h_33, and the elements of H^G_L are h_ij (i, j = 1, 2, 3).
For convenience, the relation is abbreviated as p̂_L = H^G_L p_L.
H^G_L is determined as follows: four groups of feature point matches are randomly selected each time to determine a candidate homography matrix, and the remaining matched feature points are screened with the l2 norm:
||p̂_L − H^G_L p_L||_2 < t_r,
where t_r is the outlier-screening threshold. A remaining matched point that satisfies this inequality is deemed a valid feature match; otherwise it is deemed invalid. The candidate with the largest number of valid feature matches is taken as the final level-L global homography matrix H^G_L.
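A minimal sketch of this screening loop, assuming a plain direct-linear-transform (DLT) fit from each 4-point sample and synthetic point pairs (the iteration count, threshold value, and helper names are illustrative, not the patent's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_homography(src, dst):
    """Direct linear transform from 4+ point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        rows.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

def ransac_homography(src, dst, t_r=1.0, iters=200):
    """Keep the candidate whose l2 reprojection error passes t_r for the
    most matches (the 'valid feature matches' of the screening rule)."""
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        n = int(np.sum(err < t_r))
        if n > best_inliers:
            best_H, best_inliers = H, n
    return best_H, best_inliers

# Points related by a known homography, plus one bad match.
H_true = np.array([[1.0, 0.1, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
src = rng.uniform(0, 100, (30, 2))
dst = project(H_true, src)
dst[0] += 40                 # one outlier
H, inliers = ransac_homography(src, dst)
print(inliers)               # 29 of 30 matches survive the screening
```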
The level L−1 homography matrix can be obtained by increasing the image resolution. Introducing a scale factor μ, the corresponding pixels of the level L−1 reference image and image to be registered can be expressed as (x̂_{L−1}, ŷ_{L−1}) = μ (x̂_L, ŷ_L) and (x_{L−1}, y_{L−1}) = μ (x_L, y_L), where x̂_{L−1} and ŷ_{L−1} are the x and y coordinates in the level L−1 reference image, and x_{L−1} and y_{L−1} are those in the level L−1 image to be registered. To obtain the level L−1 homography matrix, substitute into the level-L relation with S = diag(μ, μ, 1):
w_{L−1} (x̂_{L−1}, ŷ_{L−1}, 1)^T = S H^G_L S^{−1} (x_{L−1}, y_{L−1}, 1)^T.
Letting H^G_{L−1} = S H^G_L S^{−1}, this can be rewritten as:
w_{L−1} (x̂_{L−1}, ŷ_{L−1}, 1)^T = H^G_{L−1} (x_{L−1}, y_{L−1}, 1)^T,
where H^G_{L−1} is the level L−1 global homography matrix.
Applying the level-L to level L−1 derivation repeatedly and increasing the resolution step by step yields the corresponding level-0 global homography matrix H^G_0, i.e.:
w_0 (x̂_0, ŷ_0, 1)^T = H^G_0 (x_0, y_0, 1)^T,
where x̂_0 and ŷ_0 are the x and y coordinates in the level-0 reference image, x_0 and y_0 are those in the level-0 image to be registered, and μ_L is the scale factor of the level-0 homography matrix.
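Under the coordinate scaling above, propagating the homography one pyramid level down amounts to conjugating it with S = diag(μ, μ, 1). A small NumPy check of that relation (function names are illustrative; the level-L matrix is a made-up example):

```python
import numpy as np

def lift_homography(H_L, mu=2.0):
    """Map a level-L homography to level L-1 (resolution grows by mu):
    H_{L-1} = S @ H_L @ inv(S), with S = diag(mu, mu, 1)."""
    S = np.diag([mu, mu, 1.0])
    return S @ H_L @ np.linalg.inv(S)

def apply_h(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

H_L = np.array([[1.0, 0.0, 3.0],
                [0.0, 1.0, -2.0],
                [0.0, 0.0, 1.0]])      # a pure shift at level L
H_Lm1 = lift_homography(H_L, mu=2.0)

p_L = np.array([4.0, 5.0])             # a level-L point
p_Lm1 = 2.0 * p_L                      # the same point at level L-1
# The lifted matrix maps the upscaled point to the upscaled result:
print(apply_h(H_Lm1, p_Lm1))           # [14. 6.] == 2 * apply_h(H_L, p_L)
```

Repeating `lift_homography` L times gives the level-0 matrix, consistent with the step-by-step resolution increase described above.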
Taking level L−1 as an example, image registration combining the global and local homography matrices proceeds as follows, with scale factor μ = 2. As shown in Fig. 2, the level L−1 image is evenly divided into four blocks; the homography matrix of each sub-block is defined as a local homography matrix, denoted H^l_{L−1,ζ} for the ζ-th image block of level L−1. The local homography matrices are computed with the same algorithm as the global one, again rejecting invalid feature matches, and the local homography matrices are thereby determined.
For image block 1 of level L−1 in Fig. 2, combining the level L−1 global homography matrix H^G_{L−1} with the level L−1 local homography matrix H^l_{L−1,1} gives the coordinate transformation between image block 1 of the reference image and of the image to be registered, where (x̂_{L−1,1}, ŷ_{L−1,1}) and (x_{L−1,1}, y_{L−1,1}) denote the corresponding coordinates of image block 1 in the level L−1 reference image and image to be registered, and w^l_{L−1,1} and w^G_{L−1} denote the corresponding local and global intermediate variables. Writing F_{L−1,1} for the transition matrix of image block 1 of the level L−1 image, the relation can be abbreviated as p̂_{L−1,1} = F_{L−1,1} p_{L−1,1}.
Similarly, for image blocks 2, 3, and 4 of level L−1 in Fig. 2:
p̂_{L−1,ζ} = F_{L−1,ζ} p_{L−1,ζ}, ζ = 2, 3, 4,
where F_{L−1,2}, F_{L−1,3}, F_{L−1,4} are the transition matrices of image blocks 2, 3, and 4 of the level L−1 image, and p_{L−1,ζ} and p̂_{L−1,ζ} are the corresponding coordinates of block ζ in the level L−1 image to be registered and reference image, respectively.
Combining the transition matrices F_{L−1,1}, F_{L−1,2}, F_{L−1,3}, F_{L−1,4} of the four level L−1 image blocks, the coordinate transformation between the level L−1 reference image and image to be registered is p̂_{L−1} = F*_{L−1} p_{L−1}, with F*_{L−1} = Σ_ζ λ_{L−1,ζ} F_{L−1,ζ}, where F*_{L−1} is the joint transition matrix of the level L−1 image, F_{L−1,ζ} is the transition matrix of the ζ-th image block of level L−1, and λ_{L−1,ζ} is the weight of that block's transition matrix.
The resolution is stepped up until level 0 of the image pyramid is reached, yielding the coordinate transformation between the reference image and the image to be registered: p̂_0 = F*_0 p_0, where F*_0 is the joint transition matrix of the level-0 image, i.e. the final transition matrix combining the global and local homography matrices, p_0 denotes the corresponding coordinates of the level-0 image to be registered, and p̂_0 those of the level-0 reference image.
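Reading the joint matrix as a weighted sum of the per-block transition matrices, a toy sketch (the weights and the block matrices are invented for illustration; in the method they would come from the fitted global and local homographies):

```python
import numpy as np

def joint_transition(block_matrices, weights):
    """Combine per-block transition matrices F_{L-1,z} into the joint
    matrix F*_{L-1} = sum_z lambda_z * F_{L-1,z}, weights summing to 1."""
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return sum(w * F for w, F in zip(weights, block_matrices))

# Four hypothetical block matrices that agree globally except for
# small local shifts (a global x-shift of 10 plus per-block tweaks).
blocks = []
for dx in (0.0, 0.2, -0.2, 0.0):
    F = np.eye(3)
    F[0, 2] = 10.0 + dx
    blocks.append(F)

F_joint = joint_transition(blocks, [0.25, 0.25, 0.25, 0.25])
print(round(F_joint[0, 2], 6))     # 10.0 -- local deviations average out
```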
Step 3): for the images processed in step 2), the region to be detected for moving vehicles is determined with the two-frame difference method; superpixel segmentation is applied to the image and scan boxes are determined from the superpixel centers, thereby traversing the region to be detected.
As shown in Fig. 3, F(k−1) and F(k) denote frames k−1 and k of the UAV image sequence, and F_r(k−1) and F_r(k) the registered images. To reduce the computational load of moving vehicle detection, the region to be detected is determined from F_r(k−1) and F_r(k) with the two-frame difference method; see the rectangular boxes in the "two-frame difference" panel of Fig. 3. Taking two moving vehicles as an example, the two-frame difference produces four blocks of region to be detected.
After the region to be detected is determined by the two-frame difference, superpixel segmentation is applied to the image, scan boxes are determined from the superpixel centers, and the region to be detected is traversed to detect moving vehicles. Because the target vehicles may rotate, translate, and so on, an affine transformation is applied to the scan box during traversal to reduce the miss rate of moving vehicle detection.
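A minimal two-frame-difference sketch on a synthetic pair of registered frames (the threshold value and the single bounding box are simplifications; the method produces one region per moving vehicle, and superpixel traversal then operates inside these regions):

```python
import numpy as np

def frame_difference_region(prev, curr, thresh=25):
    """2-frame difference: threshold |F(k) - F(k-1)| and return the
    bounding box (r0, c0, r1, c1) of the changed pixels, or None."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    if not diff.any():
        return None
    rows = np.flatnonzero(diff.any(axis=1))
    cols = np.flatnonzero(diff.any(axis=0))
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

# A 'vehicle' (bright block) moves two pixels to the right between frames.
prev = np.zeros((20, 20), np.uint8)
curr = np.zeros((20, 20), np.uint8)
prev[5:9, 3:8] = 200
curr[5:9, 5:10] = 200
print(frame_difference_region(prev, curr))   # (5, 3, 8, 9)
```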
Step 4): from the images processed in step 3), the low-order feature of the vehicle, composed of vehicle texture and color, is extracted; the contextual information of the vehicle is introduced to extract its high-order feature; and, once both are obtained, the low-order and high-order features are fused into the multi-order feature of the target vehicle.
Specifically, the image is first divided into small connected regions, called cell units. The histogram of gradient directions (or edge orientations) of the pixels in each cell unit is then computed. Finally, combining the features of these cell units forms the HOG feature descriptor. The image is converted from the RGB color space to the HSV color space, the H, S, and V channel data templates of the image are extracted and saved as two-dimensional matrices M_H, M_S, and M_V, and the HOG features H_H, H_S, and H_V of the three matrices are computed separately. The three-channel HOG features are fused by weighting: H_l = w_H H_H + w_S H_S + w_V H_V, where H_l denotes the low-order feature of the vehicle, w_H, w_S, and w_V are the weights of H_H, H_S, and H_V, and w_H + w_S + w_V = 1; the weights of the three channels are determined adaptively from the respective channel data templates.
At this point the low-order feature of the vehicle is determined, fusing the HOG features of the H, S, and V channels.
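A toy version of the three-channel fusion, with a deliberately simplified single-block HOG and a stand-in weight rule (the patent's adaptive weight formula is not reproduced on this page; weights proportional to channel gradient energy are an assumption for illustration only, as are the synthetic channels):

```python
import numpy as np

def hog_histogram(channel, bins=9):
    """Minimal HOG-style descriptor: one orientation histogram over the
    whole channel, weighted by gradient magnitude and l1-normalised."""
    gy, gx = np.gradient(channel.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def fuse_hog(H, S, V):
    """Fused low-order feature H_l = w_H*H_H + w_S*H_S + w_V*H_V.
    Stand-in rule: weights proportional to channel gradient energy,
    so that w_H + w_S + w_V = 1."""
    feats = [hog_histogram(c) for c in (H, S, V)]
    energy = np.array([np.hypot(*np.gradient(c.astype(float))).sum()
                       for c in (H, S, V)])
    w = energy / energy.sum()
    return sum(wi * f for wi, f in zip(w, feats)), w

rng = np.random.default_rng(1)
Hc, Sc, Vc = (rng.uniform(0, 1, (16, 16)) for _ in range(3))
feat, w = fuse_hog(Hc, Sc, Vc)
assert np.isclose(w.sum(), 1.0)        # convex combination of channels
assert np.isclose(feat.sum(), 1.0, atol=1e-6)
```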
When determining high-order feature, the contextual information of vehicle is introduced.Positive negative sample is chosen manually to initialize positive dictionary and bear Dictionary determines final positive dictionary D then according to dictionary learning and autonomous selection strategypWith negative dictionary Dn.Next, passing through meter The reconstructed error of other image blocks in the reconstructed error and neighborhood of target area is calculated to determine high-order feature.
For vehicle tv, reconstructed error is denoted as e (tv), and e (tv)=[e (tv,Dp),e(tv,Dn)]T, wherein e (tv, Dp) and e (tv,Dn) it is respectively tvReconstructed error on positive dictionary and negative dictionary.For some neighborhood image block a of vehicleι, Its reconstructed error is e (aι), and e (aι)=[e (aι,Dp),e(aι,Dn)]T, wherein subscript t is target vehicle tvImage in neighborhood The number of block.Wherein e (aι,Dp) and e (aι,Dn) it is respectively aιReconstructed error on positive dictionary and negative dictionary.For Neighborhood Graph As block a ι, target vehicle t is definedvHigh-order feature be tvWith aιReconstructed error difference, be represented by H (tv,aι)=| | e (tv)-e(aι)2, wherein H (tv,aι) it is target vehicle tvRelative to neighborhood aιHigh-order feature.
When there are M image blocks in the neighborhood of the target vehicle t_v, the high-order feature of t_v is: H_h = [H(t_v, a_1), H(t_v, a_2), …, H(t_v, a_M)]^T.
Fusing the obtained vehicle high-order feature with the low-order feature yields the multi-order feature of the target vehicle: F_v = [H_l, H_h].
Thus far, the low-order and high-order features of the vehicle have been combined into the multi-order feature of the target vehicle.
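The reconstruction-error construction of the high-order feature and the final fusion F_v = [H_l, H_h] can be sketched as follows. The patent codes samples sparsely over the learned dictionaries D_p and D_n; for brevity a plain least-squares code is substituted, so `recon_error` is a simplified stand-in rather than the patent's exact sparse-coding step, and `multi_order_feature` is a hypothetical helper name:

```python
import numpy as np

def recon_error(x, D):
    # Reconstruction error of sample x on dictionary D (columns = atoms).
    # Simplified stand-in: least-squares coding instead of sparse coding.
    alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ alpha)

def multi_order_feature(h_l, t_v, blocks, D_p, D_n):
    # e(t) = [e(t, D_p), e(t, D_n)]^T for the target and each
    # neighborhood block; H(t_v, a_i) = ||e(t_v) - e(a_i)||_2.
    e_t = np.array([recon_error(t_v, D_p), recon_error(t_v, D_n)])
    h_h = np.array([
        np.linalg.norm(e_t - np.array([recon_error(a, D_p),
                                       recon_error(a, D_n)]))
        for a in blocks
    ])
    # F_v = [H_l, H_h]: concatenate low- and high-order features.
    return np.concatenate([h_l, h_h])
```

A neighborhood block identical to the target yields a high-order component of zero, matching the definition H(t_v, a_ι) as a reconstruction-error difference.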
Step 5): for the obtained multi-order vehicle feature, a dictionary is trained using a dictionary learning algorithm, and the trained dictionary is used to complete the detection of moving vehicles.
Specifically, in the correlation-based dictionary learning algorithm, during the dictionary update stage, the atoms relevant to the sparse representation of the new samples are first identified, and only these atoms are updated, which reduces the computational cost of dictionary learning. In addition, the sparsity level is introduced into the dictionary update stage. The above process is iterated until convergence, thereby achieving fast and efficient dictionary training and finally completing the detection of moving vehicles.
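The selective atom update described above can be illustrated with a K-SVD-style sketch: only atoms that actually participate in the sparse representation of the new samples (nonzero rows of the code matrix) are refit. The rank-1 refit rule below is an illustrative assumption, not the patent's exact correlation-based update, and `update_used_atoms` is a hypothetical helper name:

```python
import numpy as np

def update_used_atoms(D, X, A):
    # D: (d, K) dictionary, X: (d, N) new samples, A: (K, N) sparse codes.
    # Only atoms used by the sparse representation (rows of A with any
    # nonzero entry) are updated; unused atoms are left untouched, which
    # is what cuts the update cost.
    D = D.copy()
    used = np.flatnonzero(np.any(A != 0, axis=1))
    for k in used:
        others = [j for j in range(D.shape[1]) if j != k]
        idx = np.flatnonzero(A[k] != 0)
        # Residual of the samples that use atom k, with atom k removed.
        E = X[:, idx] - D[:, others] @ A[np.ix_(others, idx)]
        a = A[k, idx]
        d = E @ a
        n = np.linalg.norm(d)
        if n > 0:
            D[:, k] = d / n  # refit atom k to its residual, unit norm
    return D
```

Iterating sparse coding and this partial update until convergence corresponds to the training loop described in the text.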
In step 2), an image pyramid is introduced and, based on the feature point matching results, an image registration algorithm jointly estimating the global and local homography matrices is proposed in a top-down manner. The global homography matrix describes the global position change, while the local homography matrices describe local position changes.
Step 3) introduces a two-frame difference method and superpixel segmentation: the two-frame difference method reduces the region to be detected, and superpixel segmentation determines the scanning region to be detected from the superpixel centers, effectively reducing the computational cost of moving vehicle detection.
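A minimal sketch of the two-frame difference step on registered frames follows; superpixel segmentation (e.g. SLIC) is assumed to be provided by a separate library and is not shown, and `frame_diff_mask` and `candidate_bbox` are hypothetical helper names:

```python
import numpy as np

def frame_diff_mask(f_prev, f_curr, thresh=25):
    # Two-frame difference on registered grayscale frames: pixels whose
    # absolute intensity change exceeds the threshold become candidate
    # moving-vehicle pixels. Superpixel centers inside this mask would
    # then seed the scan boxes that traverse the region to be detected.
    diff = np.abs(f_curr.astype(np.int32) - f_prev.astype(np.int32))
    return diff > thresh

def candidate_bbox(mask):
    # Bounding box (y0, x0, y1, x1) of the changed region, or None if
    # nothing moved between the two frames.
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1
```

Restricting the subsequent feature extraction to this bounding box is what reduces the detection workload relative to scanning the full frame.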
Step 4) is specifically implemented as follows: when extracting the vehicle high-order feature, positive and negative samples are first selected manually to initialize the positive and negative dictionaries; then, according to dictionary learning and the autonomous sample-selection strategy, the final positive and negative dictionaries are determined; the high-order feature is then determined by computing the reconstruction error of the target region and the reconstruction errors of the other image blocks in its neighborhood.

Claims (10)

1. A moving vehicle detection method based on unmanned aerial vehicle (UAV) aerial video, characterized by comprising the following steps:
Step 1): obtain an aerial video of moving vehicles and extract its consecutive image sequence; extract the SURF feature points of the reference image and of the image to be registered; perform feature point matching, and eliminate outliers from the matched feature points using the RANSAC algorithm;
Step 2): for the feature points remaining after outlier elimination, obtain the transformation matrix of the image through a UAV image registration algorithm;
Step 3): for the image processed in step 2), determine the region to be detected for moving vehicles using a two-frame difference method; perform superpixel segmentation on the image, and determine scan boxes from the superpixel centers so as to traverse the region to be detected;
Step 4): using the image processed in step 3), extract the texture and color of the vehicle to constitute the low-order feature of the vehicle; introduce the contextual information of the vehicle to extract the high-order feature of the vehicle; after the low-order and high-order features of the target vehicle have been obtained, fuse them to obtain the multi-order feature of the target vehicle;
Step 5): for the obtained multi-order vehicle feature, train a dictionary using a dictionary learning algorithm, and complete the detection of moving vehicles using the trained dictionary.
2. The moving vehicle detection method based on UAV aerial video according to claim 1, characterized in that SURF feature point extraction is performed on the reference image and the image to be registered using Haar features and the integral image concept.
3. The moving vehicle detection method based on UAV aerial video according to claim 2, characterized in that, for any SURF feature point in the reference image, its Euclidean distance to the feature points in the image to be registered is computed; the smaller the Euclidean distance, the higher the similarity, and when the Euclidean distance is less than a given threshold the match is judged successful; if a SURF feature point in the image to be registered matches multiple feature points in the reference image, the matches are all regarded as unsuccessful.
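The matching rule of claims 2–3 — nearest neighbour by Euclidean distance, a fixed threshold, and rejection of ambiguous many-to-one matches — can be sketched as follows, with the SURF descriptors assumed to be precomputed vectors (e.g. 64-dimensional) and `match_features` a hypothetical helper name:

```python
import numpy as np

def match_features(ref_desc, tgt_desc, thresh):
    # ref_desc: (Nr, d) reference descriptors; tgt_desc: (Nt, d)
    # descriptors of the image to be registered. A reference point
    # matches its nearest target point when the Euclidean distance is
    # below the threshold.
    dists = np.linalg.norm(ref_desc[:, None, :] - tgt_desc[None, :, :],
                           axis=2)
    nearest = dists.argmin(axis=1)
    ok = dists[np.arange(len(ref_desc)), nearest] < thresh
    matches = [(i, j) for i, (j, good) in enumerate(zip(nearest, ok))
               if good]
    # A target point claimed by multiple reference points is ambiguous;
    # all such matches are regarded as unsuccessful, as in claim 3.
    counts = {}
    for _, j in matches:
        counts[j] = counts.get(j, 0) + 1
    return [(i, j) for i, j in matches if counts[j] == 1]
```

The surviving matches would then be passed to RANSAC for outlier elimination as in claim 1.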
4. The moving vehicle detection method based on UAV aerial video according to claim 1, characterized in that, after outlier elimination is completed, an image pyramid is introduced and the global and local homography matrices are determined from the feature point matching results in a top-down manner: first, (L+1)-level pyramids of the reference image and of the image to be registered are established; when determining the global homography matrix, starting from the level-L global homography matrix, the resolution is increased level by level down to level 0, thereby obtaining the corresponding level-0 global homography matrix.
5. The moving vehicle detection method based on UAV aerial video according to claim 4, characterized in that [x_r^L, y_r^L]^T and [x_w^L, y_w^L]^T are defined as the corresponding coordinates in the level-L reference image and level-L image to be registered, where x_r^L and y_r^L are the x- and y-coordinates in the level-L reference image, and x_w^L and y_w^L are the x- and y-coordinates in the level-L image to be registered;
the level-L global homography matrix is then determined by:
w^L · [x_w^L, y_w^L, 1]^T = H_g^L · [x_r^L, y_r^L, 1]^T
where w^L is an intermediate variable and H_g^L is the level-L global homography matrix, whose elements are defined as:
H_g^L = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33]
abbreviated in homogeneous coordinates as p_w^L = H_g^L · p_r^L;
four groups of feature point matching results are randomly selected each time to determine a candidate homography matrix, and the remaining matched feature points are screened with the l2 norm according to:
||p_w^L − H_g^L · p_r^L||_2 < t_r
where t_r is the outlier-screening threshold; a remaining matched feature point satisfying the above formula is regarded as a valid matching point, otherwise as an invalid matching point; the homography matrix with the largest number of valid matching points is taken as the finally determined level-L global homography matrix H_g^L;
the level-(L−1) homography matrix is obtained by increasing the image resolution: introducing the scale factor μ, the corresponding pixels of the level-(L−1) reference image and image to be registered can be expressed as:
x_r^{L−1} = μ·x_r^L, y_r^{L−1} = μ·y_r^L, x_w^{L−1} = μ·x_w^L, y_w^{L−1} = μ·y_w^L
where x_r^{L−1} and y_r^{L−1} are the x- and y-coordinates of the level-(L−1) reference image, x_w^{L−1} and y_w^{L−1} are the x- and y-coordinates of the level-(L−1) image to be registered, and μ is the scale factor; to obtain the level-(L−1) homography matrix, with S = diag(μ, μ, 1) there is:
w^{L−1} · [x_w^{L−1}, y_w^{L−1}, 1]^T = S · H_g^L · S^{−1} · [x_r^{L−1}, y_r^{L−1}, 1]^T
letting H_g^{L−1} = S · H_g^L · S^{−1}, the above formula can be rewritten as:
w^{L−1} · [x_w^{L−1}, y_w^{L−1}, 1]^T = H_g^{L−1} · [x_r^{L−1}, y_r^{L−1}, 1]^T
where H_g^{L−1} is the level-(L−1) global homography matrix;
applying the same level-L to level-(L−1) derivation and increasing the resolution step by step yields the corresponding level-0 global homography matrix H_g^0, namely:
w^0 · [x_w^0, y_w^0, 1]^T = H_g^0 · [x_r^0, y_r^0, 1]^T, with H_g^0 = S_0 · H_g^L · S_0^{−1} and S_0 = diag(μ^L, μ^L, 1)
where x_r^0 and y_r^0 are the x- and y-coordinates of the level-0 reference image, x_w^0 and y_w^0 are the x- and y-coordinates of the level-0 image to be registered, and μ^L is the scale factor of the level-0 homography matrix.
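The level-by-level refinement in claim 5 can be sketched as conjugating the homography with the coordinate scaling, assuming the standard relation H^{L−1} = S · H^L · S^{−1} implied by the pixel scaling x^{L−1} = μ·x^L (the claim's formula images are absent from this text, so this is a reconstruction, and both function names are hypothetical):

```python
import numpy as np

def homography_to_finer_level(H_L, mu):
    # When both images are upsampled by scale factor mu from level L to
    # level L-1 (x^{L-1} = mu * x^L), the global homography transforms
    # by conjugation with S = diag(mu, mu, 1).
    S = np.diag([mu, mu, 1.0])
    return S @ H_L @ np.linalg.inv(S)

def level0_homography(H_L, mu, L):
    # Applying the refinement L times accumulates the scale factor mu**L,
    # giving the level-0 global homography directly.
    S = np.diag([mu**L, mu**L, 1.0])
    return S @ H_L @ np.linalg.inv(S)
```

For a pure translation homography the conjugation simply scales the translation components by μ, which matches the intuition that pixel offsets grow with resolution.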
6. The moving vehicle detection method based on UAV aerial video according to claim 1, characterized in that F(k−1) and F(k) respectively denote the (k−1)-th and k-th frames of the UAV image sequence, and F_r(k−1) and F_r(k) are the corresponding registered images; the region to be detected is determined from the registered images F_r(k−1) and F_r(k) using the two-frame difference method.
7. The moving vehicle detection method based on UAV aerial video according to claim 1, characterized in that the image is first divided into small connected regions, i.e. cell units; the gradient or edge orientation histogram of each pixel within a cell unit is then computed; finally, the features of these cell units are combined to form the HOG feature descriptor: the image is first converted from the RGB color space to the HSV color space, HOG features are extracted from each of the three channels, and feature fusion is then performed; the H, S and V channel data templates of the image are extracted and saved as two-dimensional matrices M_H, M_S and M_V, and the HOG features H_H, H_S and H_V of the three matrices are computed.
8. The moving vehicle detection method based on UAV aerial video according to claim 7, characterized in that the three-channel HOG features are fused by weighting, namely H_l = w_H·H_H + w_S·H_S + w_V·H_V, where H_l denotes the low-order feature of the vehicle, w_H, w_S and w_V are the weights of the HOG features H_H, H_S and H_V respectively, and w_H + w_S + w_V = 1; the three channel weights are determined adaptively from the respective channel data templates;
the low-order feature of the vehicle is thus determined, namely the fusion of the HOG features of the H, S and V channels.
9. The moving vehicle detection method based on UAV aerial video according to claim 7, characterized in that the contextual information of the vehicle is introduced when determining the high-order feature; positive and negative samples are selected manually to initialize the positive and negative dictionaries, and the final positive dictionary D_p and negative dictionary D_n are then determined according to dictionary learning and an autonomous selection strategy; the high-order feature is determined by computing the reconstruction error of the target region and the reconstruction errors of the other image blocks in its neighborhood;
for a vehicle t_v, the reconstruction error is denoted e(t_v), with e(t_v) = [e(t_v, D_p), e(t_v, D_n)]^T, where e(t_v, D_p) and e(t_v, D_n) are the reconstruction errors of t_v on the positive and negative dictionaries respectively; for a neighborhood image block a_ι of the vehicle, the reconstruction error is e(a_ι) = [e(a_ι, D_p), e(a_ι, D_n)]^T, where the subscript ι numbers the image blocks in the neighborhood of the target vehicle t_v, and e(a_ι, D_p) and e(a_ι, D_n) are the reconstruction errors of a_ι on the positive and negative dictionaries respectively; for the neighborhood image block a_ι, the high-order feature of the target vehicle t_v is defined as the difference between the reconstruction errors of t_v and a_ι, expressed as H(t_v, a_ι) = ||e(t_v) − e(a_ι)||_2, where H(t_v, a_ι) is the high-order feature of the target vehicle t_v relative to the neighborhood block a_ι;
when there are M image blocks in the neighborhood of the target vehicle t_v, the high-order feature of t_v is: H_h = [H(t_v, a_1), H(t_v, a_2), …, H(t_v, a_M)]^T;
fusing the obtained vehicle high-order feature with the low-order feature yields the multi-order feature of the target vehicle: F_v = [H_l, H_h]; the low-order and high-order features of the vehicle are thus combined into the multi-order feature of the target vehicle.
10. The moving vehicle detection method based on UAV aerial video according to claim 1, characterized in that, specifically, in the correlation-based dictionary learning algorithm, during the dictionary update stage, the atoms relevant to the sparse representation of the new samples are first identified and only these atoms are updated; the sparsity level is introduced into the dictionary update stage; the above update process is iterated until convergence, thereby achieving fast and efficient dictionary training and finally completing the detection of moving vehicles.
CN201811203391.1A 2018-10-16 2018-10-16 Moving vehicle detection method based on unmanned aerial vehicle aerial video Active CN109376641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811203391.1A CN109376641B (en) 2018-10-16 2018-10-16 Moving vehicle detection method based on unmanned aerial vehicle aerial video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811203391.1A CN109376641B (en) 2018-10-16 2018-10-16 Moving vehicle detection method based on unmanned aerial vehicle aerial video

Publications (2)

Publication Number Publication Date
CN109376641A true CN109376641A (en) 2019-02-22
CN109376641B CN109376641B (en) 2021-04-27

Family

ID=65400009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811203391.1A Active CN109376641B (en) 2018-10-16 2018-10-16 Moving vehicle detection method based on unmanned aerial vehicle aerial video

Country Status (1)

Country Link
CN (1) CN109376641B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136104A (en) * 2019-04-25 2019-08-16 上海交通大学 Image processing method, system and medium based on unmanned aerial vehicle station
CN110598613A (en) * 2019-09-03 2019-12-20 长安大学 Expressway agglomerate fog monitoring method
CN111552269A (en) * 2020-04-27 2020-08-18 武汉工程大学 Multi-robot safety detection method and system based on attitude estimation
CN111612966A (en) * 2020-05-21 2020-09-01 广东乐佳印刷有限公司 Bill certificate anti-counterfeiting detection method and device based on image recognition
CN111881853A (en) * 2020-07-31 2020-11-03 中北大学 Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN112749779A (en) * 2019-10-30 2021-05-04 北京市商汤科技开发有限公司 Neural network processing method and device, electronic equipment and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106800A1 (en) * 2009-10-29 2012-05-03 Saad Masood Khan 3-d model based method for detecting and classifying vehicles in aerial imagery
CN105554456A (en) * 2015-12-21 2016-05-04 北京旷视科技有限公司 Video processing method and apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106800A1 (en) * 2009-10-29 2012-05-03 Saad Masood Khan 3-d model based method for detecting and classifying vehicles in aerial imagery
CN105554456A (en) * 2015-12-21 2016-05-04 北京旷视科技有限公司 Video processing method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN Z 等: "Vehicle detection in high-resolution aerial images based on fast sparse representation classification and multiorder feature", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》 *
王素琴 等: "无人机航拍视频中的车辆检测方法", 《系统仿真学报》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136104A (en) * 2019-04-25 2019-08-16 上海交通大学 Image processing method, system and medium based on unmanned aerial vehicle station
CN110136104B (en) * 2019-04-25 2021-04-13 上海交通大学 Image processing method, system and medium based on unmanned aerial vehicle ground station
CN110598613A (en) * 2019-09-03 2019-12-20 长安大学 Expressway agglomerate fog monitoring method
CN112749779A (en) * 2019-10-30 2021-05-04 北京市商汤科技开发有限公司 Neural network processing method and device, electronic equipment and computer storage medium
CN111552269A (en) * 2020-04-27 2020-08-18 武汉工程大学 Multi-robot safety detection method and system based on attitude estimation
CN111612966A (en) * 2020-05-21 2020-09-01 广东乐佳印刷有限公司 Bill certificate anti-counterfeiting detection method and device based on image recognition
CN111612966B (en) * 2020-05-21 2021-05-07 广东乐佳印刷有限公司 Bill certificate anti-counterfeiting detection method and device based on image recognition
CN111881853A (en) * 2020-07-31 2020-11-03 中北大学 Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN111881853B (en) * 2020-07-31 2022-09-16 中北大学 Method and device for identifying abnormal behaviors in oversized bridge and tunnel

Also Published As

Publication number Publication date
CN109376641B (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN109376641A (en) A kind of moving vehicle detection method based on unmanned plane video
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN105245841B (en) A kind of panoramic video monitoring system based on CUDA
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
US6671399B1 (en) Fast epipolar line adjustment of stereo pairs
CN109903313B (en) Real-time pose tracking method based on target three-dimensional model
CN104134200B (en) Mobile scene image splicing method based on improved weighted fusion
CN109584156A (en) Micro- sequence image splicing method and device
CN108257089B (en) A method of the big visual field video panorama splicing based on iteration closest approach
CN114724120B (en) Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion
CN104200461A (en) Mutual information image selected block and sift (scale-invariant feature transform) characteristic based remote sensing image registration method
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN108960267A (en) System and method for model adjustment
EP0895189B1 (en) Method for recovering radial distortion parameters from a single camera image
CN109191416A (en) Image interfusion method based on sparse dictionary study and shearing wave
CN106910208A (en) A kind of scene image joining method that there is moving target
CN107154017A (en) A kind of image split-joint method based on SIFT feature Point matching
CN110322403A (en) A kind of more supervision Image Super-resolution Reconstruction methods based on generation confrontation network
CN110009670A (en) The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN110120013A (en) A kind of cloud method and device
CN109711420B (en) Multi-affine target detection and identification method based on human visual attention mechanism
CN113239828B (en) Face recognition method and device based on TOF camera module
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211224

Address after: 908, block a, floor 8, No. 116, Zizhuyuan Road, Haidian District, Beijing 100089

Patentee after: ZHONGZI DATA CO.,LTD.

Patentee after: China Highway Engineering Consulting Group Co., Ltd.

Address before: 710064 middle section of South Second Ring Road, Beilin District, Xi'an City, Shaanxi Province

Patentee before: CHANG'AN University
