CN101930609B - Approximate target object detecting method and device - Google Patents


Info

Publication number
CN101930609B
CN101930609B · CN201010266886 (application)
Authority
CN
China
Prior art keywords
frame image
optical flow
feature point
target object
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201010266886
Other languages
Chinese (zh)
Other versions
CN101930609A (en)
Inventor
刘威
于红菲
袁淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shanghai Co Ltd
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN 201010266886 priority Critical patent/CN101930609B/en
Publication of CN101930609A publication Critical patent/CN101930609A/en
Application granted granted Critical
Publication of CN101930609B publication Critical patent/CN101930609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an approaching target object detection method and device. The method comprises the following steps: acquiring a current frame image and an adjacent frame image from a camera device; dividing the current frame image and the adjacent frame image into the same number of image blocks, and differencing the image blocks at corresponding positions of the two frames to obtain target object candidate area blocks of the current frame image; performing feature point detection and tracking on the candidate area blocks respectively to obtain a set of feature optical flow vectors; clustering the feature optical flow vectors in the set, and screening approaching target object area blocks from the candidate area blocks according to the clustering result; and connecting the screened area blocks to obtain the approaching target object region of the current frame image. Because a block-wise image analysis method is adopted for moving target detection, the type of approaching object is not limited: the method is suitable for detecting both rigid and non-rigid target objects, and has the advantages of simple computation, accurate detection and high detection speed.

Description

Approaching object detection method and device
Technical field
The present application relates to the technical field of image processing, and in particular to an approaching object detection method and device.
Background art
During driving, because of vehicle blind zones or driver inattention, objects approaching the vehicle (for example, other vehicles or motorcycles) easily become safety hazards; if the driver does not notice such an object, an accident can easily occur while driving. To eliminate this hazard, objects in the region around the vehicle need to be detected so that the driver can be warned according to the detection result.
In the prior art, an optical flow method can be used to detect objects approaching the vehicle: feature point detection is performed on the whole image, optical flow is computed for the extracted feature points, feature points whose flow vectors point in an approaching direction are selected from the left and right view-field images as approaching features, and objects approaching the vehicle are detected from these features. In researching the prior art, the inventors found the following problems. Because feature point detection is applied to the entire image, the computation is relatively slow. Because of image noise and feature-tracking errors, the optical flow direction of some feature points is easily wrong, so using such optical flow results as the basis for judging whether an object is approaching the vehicle is inaccurate. Moreover, for a non-rigid object approaching the vehicle, the optical flow directions and magnitudes over the object are not identical and may even differ greatly, which makes the judgment still less accurate.
Summary of the invention
The purpose of the embodiments of the present application is to provide an approaching object detection method and device, so as to solve the problem in the prior art that the optical flow algorithm detects objects approaching the vehicle inaccurately.
To solve the above technical problem, the embodiments of the present application provide the following technical solutions:
An approaching object detection method comprises:
acquiring, from a camera device, a current frame image and an adjacent frame image of the current frame image;
dividing the current frame image and the adjacent frame image into the same number of image blocks, and differencing the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target object candidate area blocks of the current frame image;
performing feature point detection and tracking on the target object candidate area blocks respectively to obtain a set of feature optical flow vectors;
clustering the feature optical flow vectors in the set, and screening approaching target object area blocks from the candidate area blocks according to the clustering result;
connecting the screened target object area blocks to obtain the approaching target object region of the current frame image.
Performing feature point detection and tracking on the target object candidate area blocks respectively to obtain the set of feature optical flow vectors comprises:
performing feature point detection on the candidate area blocks in the current frame image through a feature point detection algorithm;
tracking the detected feature points to obtain, in the adjacent frame image, matched feature points matching the detected feature points;
computing the feature optical flow vectors of the candidate area blocks according to the detected feature points and the matched feature points.
After performing feature point detection and tracking on the target object candidate area blocks respectively to obtain the set of feature optical flow vectors, the method further comprises:
taking a detected feature point as a first feature point, and tracking the first feature point to obtain a matching second feature point in the adjacent frame image;
tracking the second feature point to obtain a matching third feature point in the current frame image;
when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold, deleting the feature optical flow vector corresponding to the first feature point.
Clustering the feature optical flow vectors in the set and screening approaching target object area blocks from the candidate area blocks according to the clustering result comprises:
computing the optical flow direction of each feature optical flow vector in the set, and obtaining the feature optical flow vectors whose direction lies within the preset direction range of approaching objects;
clustering the obtained feature optical flow vectors to generate a number of cluster subsets, where the feature optical flow vectors in each cluster subset satisfy a preset clustering rule;
judging whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold;
according to the judgment result, keeping the cluster subsets whose size is greater than the threshold and deleting the cluster subsets whose size is less than the threshold;
taking the target object candidate area blocks that contain a retained cluster subset as the approaching target object area blocks.
When the method is applied to a vehicle detecting objects approaching as it drives toward a crossroad, the camera device is installed at the head of the vehicle.
When the method is applied to a vehicle detecting objects approaching on a crossing lane while reversing, the camera device is installed at the rear of the vehicle.
An approaching object detection device comprises:
an acquiring unit, configured to acquire, from a camera device, a current frame image and an adjacent frame image of the current frame image;
a difference unit, configured to divide the current frame image and the adjacent frame image into the same number of image blocks, and difference the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target object candidate area blocks of the current frame image;
a detecting unit, configured to perform feature point detection and tracking on the target object candidate area blocks respectively to obtain a set of feature optical flow vectors;
a clustering unit, configured to cluster the feature optical flow vectors in the set, and screen approaching target object area blocks from the candidate area blocks according to the clustering result;
a connecting unit, configured to connect the screened target object area blocks to obtain the approaching target object region of the current frame image.
The detecting unit comprises:
a feature point detection unit, configured to perform feature point detection on the candidate area blocks in the current frame image through a feature point detection algorithm;
a feature point tracking unit, configured to track the detected feature points to obtain, in the adjacent frame image, matched feature points matching the detected feature points;
an optical flow vector computing unit, configured to compute the feature optical flow vectors of the candidate area blocks according to the detected feature points and the matched feature points.
The device further comprises:
a tracking unit, configured to take a detected feature point as a first feature point, track the first feature point to obtain a matching second feature point in the adjacent frame image, and track the second feature point to obtain a matching third feature point in the current frame image;
a deleting unit, configured to delete the feature optical flow vector corresponding to the first feature point when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold.
The clustering unit comprises:
an optical flow direction computing unit, configured to compute the optical flow direction of each feature optical flow vector in the set, and obtain the feature optical flow vectors whose direction lies within the preset direction range of approaching objects;
a subset generating unit, configured to cluster the obtained feature optical flow vectors to generate a number of cluster subsets, where the feature optical flow vectors in each cluster subset satisfy a preset clustering rule;
a threshold judging unit, configured to judge whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold;
an executing unit, configured to keep, according to the judgment result, the cluster subsets whose size is greater than the threshold and delete the cluster subsets whose size is less than the threshold;
a region block determining unit, configured to take the target object candidate area blocks that contain a retained cluster subset as the approaching target object area blocks.
It can be seen that in the embodiments of the present application, a current frame image and an adjacent frame image of the current frame image are acquired from a camera device; the two frames are divided into the same number of image blocks, and the image blocks at corresponding positions are differenced to obtain target object candidate area blocks of the current frame image; feature point detection and tracking are performed on the candidate area blocks respectively to obtain a set of feature optical flow vectors; the vectors in the set are clustered, and approaching target object area blocks are screened from the candidate area blocks according to the clustering result; finally, the screened area blocks are connected to obtain the approaching target object region of the current frame image. Because the embodiments adopt a block-wise image analysis method for moving object detection, compared with existing optical flow methods they do not limit the type of approaching object, are suitable for detecting both rigid and non-rigid objects, are computationally simple and fast, detect accurately, and are not easily affected by erroneous optical flow caused by feature-tracking mistakes.
Description of drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the first embodiment of the approaching object detection method of the present application;
Fig. 2A is a flowchart of the second embodiment of the approaching object detection method of the present application;
Fig. 2B is a schematic diagram of the vehicle in the embodiment of Fig. 2A detecting approaching objects;
Fig. 2C is a schematic diagram of the feature points obtained by the backward tracking method in the embodiment of Fig. 2A;
Fig. 2D is a schematic diagram of the image coordinate system in the embodiment of Fig. 2A;
Fig. 2E is a schematic diagram of the feature point optical flow directions in the image captured by the camera installed on the right side of the vehicle in Fig. 2B;
Fig. 2F is a schematic diagram of the result of clustering the feature point optical flow vectors of Fig. 2E;
Fig. 2G is a schematic diagram of the generated approaching target object area blocks;
Fig. 2H is a schematic diagram of connecting 8-neighborhood blocks to generate the approaching target object region;
Fig. 3 is a schematic diagram of another vehicle applying an embodiment of the present application to detect approaching objects;
Fig. 4A is a schematic diagram of yet another vehicle applying an embodiment of the present application to detect approaching objects;
Fig. 4B is a schematic diagram of the image coordinate system when detecting in the situation shown in Fig. 4A;
Fig. 5 is a block diagram of the first embodiment of the approaching object detection device of the present application;
Fig. 6 is a block diagram of the second embodiment of the approaching object detection device of the present application.
Embodiments
The embodiments of the present application provide an approaching object detection method and device, which detect objects approaching a vehicle through a feature optical flow method based on image blocks.
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, and to make the above purposes, features and advantages of the embodiments more apparent, the technical solutions are described in further detail below in conjunction with the accompanying drawings.
Referring to Fig. 1, a flowchart of the first embodiment of the approaching object detection method of the present application:
Step 101: acquire a current frame image and an adjacent frame image of the current frame image from a camera device.
Step 102: divide the current frame image and the adjacent frame image into the same number of image blocks, and difference the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target object candidate area blocks of the current frame image.
Concretely, the current frame image and the adjacent frame image are divided into the same number of image blocks, a difference operation is performed on each pair of mutually corresponding image blocks of the two frames, and when the result of the difference operation is greater than a preset threshold, the corresponding image block is selected as an approaching target object candidate area block.
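As a minimal pure-Python sketch of this block-differencing step (the function name, the nested-list frame representation, and the single threshold `thre` are illustrative assumptions; the full method defines the threshold in terms of the noise variance σ, as described in the second embodiment):

```python
def candidate_blocks(cur, prev, n, thre):
    """Split two equal-size grayscale frames into n x n blocks, compute the
    mean squared difference S for each pair of co-located blocks, and keep
    the block positions whose S exceeds the threshold `thre`."""
    h, w = len(cur), len(cur[0])
    cands = []
    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            s = 0.0
            for y in range(by, by + n):
                for x in range(bx, bx + n):
                    d = cur[y][x] - prev[y][x]
                    s += d * d
            s /= n * n            # N = n^2 pixels per block
            if s > thre:
                cands.append((by, bx))
    return cands
```

On a pair of 4x4 frames that differ only in the top-left 2x2 block, only that block is reported as a candidate.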
Step 103: perform feature point detection and tracking on the target object candidate area blocks respectively to obtain a set of feature optical flow vectors.
Concretely, feature point detection is performed on the candidate area blocks in the current frame image through a feature point detection algorithm (the Harris algorithm can be adopted in the embodiments of the invention); the detected feature points are tracked to obtain, in the adjacent frame image, matched feature points matching the detected feature points; and the feature optical flow vectors of the candidate area blocks are computed according to the detected feature points and the matched feature points.
Step 104: cluster the feature optical flow vectors in the set, and screen approaching target object area blocks from the candidate area blocks according to the clustering result.
Concretely, the optical flow direction of each feature optical flow vector in the set is computed, and the vectors whose direction lies within the preset direction range of approaching objects are obtained; these vectors are clustered to generate a number of cluster subsets, with the vectors in each subset satisfying a preset clustering rule; whether the number of vectors in each subset is greater than a preset threshold is judged; according to the judgment result, subsets larger than the threshold are kept and subsets smaller than the threshold are deleted; and the target object candidate area blocks containing a retained cluster subset are taken as the approaching target object area blocks.
Step 105: connect the screened target object area blocks to obtain the approaching target object region of the current frame image.
Concretely, for each screened region block, judge whether any of its eight neighborhood blocks also belongs to the screened region blocks, and connect each screened region block with those of its eight neighborhood blocks that do.
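The 8-neighborhood connection can be sketched as a flood fill over block coordinates (an illustrative implementation; the function name and set-of-regions return value are assumptions, not part of the patent disclosure):

```python
def connect_blocks(blocks):
    """Group candidate block coordinates into connected regions, where two
    blocks are connected if one lies in the 8-neighbourhood of the other."""
    blocks = set(blocks)
    regions = []
    while blocks:
        seed = blocks.pop()
        region, stack = {seed}, [seed]
        while stack:                      # flood fill from the seed block
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in blocks:      # unvisited 8-neighbour
                        blocks.remove(nb)
                        region.add(nb)
                        stack.append(nb)
        regions.append(region)
    return regions
```

Diagonally adjacent blocks merge into one region, while an isolated block forms its own region.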
It can be seen from the above embodiment that a block-wise image analysis method is adopted for moving object detection. Compared with existing optical flow methods, the computation is simple, both rigid and non-rigid objects can be detected, the result is not easily affected by erroneous optical flow caused by feature-tracking mistakes, the computation speed is faster, and the detection result is more accurate.
Further, between the above step 103 and step 104, a step may be performed of backward-tracking the detected feature points and deleting the feature optical flow vectors corresponding to feature points whose tracking error exceeds a threshold. Concretely: take a detected feature point as a first feature point; track the first feature point to obtain a matching second feature point in the adjacent frame image; track the second feature point to obtain a matching third feature point in the current frame image; when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold, delete the feature optical flow vector corresponding to the first feature point. Because backward tracking further removes erroneous optical flow vectors, false detections decrease and the detection result becomes more accurate.
Referring to Fig. 2A, a flowchart of the second embodiment of the approaching object detection method of the present application; this embodiment shows the approaching object detection process in detail:
Step 201: acquire a current frame image and an adjacent frame image of the current frame image from a camera device.
To realize the object detection of the embodiments of the present application, camera devices, for example cameras, can be installed on both sides of the vehicle to capture in real time sequence images of objects near the vehicle. For each sequence captured by a camera device, a current frame image and an adjacent frame image whose interval from the current frame is greater than or equal to 1 frame are acquired from the sequence. Referring to Fig. 2B, a schematic diagram of a vehicle detecting objects in an embodiment of the present application, camera devices are installed at positions 1 and 2 on the vehicle and are used to detect the objects A and B approaching the vehicle from its two sides.
Step 202: divide the current frame image and the adjacent frame image into the same number of image blocks.
The current frame image and the adjacent frame image are two frames equal in size and pixel count. The two frames are divided into the same number of image blocks; image blocks occupying the same position in the two frames correspond to each other, so each pair of frames can be divided into several groups of mutually corresponding image blocks.
Step 203: perform a difference operation on each pair of mutually corresponding image blocks of the current frame image and the adjacent frame image.
The difference operation is performed on the image blocks at the same position in the two frames. When differencing the image blocks, either a color image difference or a gray-level image difference can be adopted.
Taking the gray-level image difference as an example in this embodiment, the computation follows this formula:
S = (1/N) · Σ_{(x,y)∈I} (s(x,y) − s′(x,y))²
In the above formula, N is the number of pixels in the image block; for example, N = n² when the image block is a square, where n denotes the number of pixels of the block horizontally or vertically; s(x, y) and s′(x, y) are the gray values of a pair of corresponding pixels in the image blocks at the same position of the current frame image and the adjacent frame image; and I denotes the image block.
Step 204: when the result of the difference operation is greater than a preset threshold, select the corresponding image block as an approaching target object candidate area block.
For example, an image block can be confirmed as a candidate area block that exhibits a difference if the value S computed in step 203 satisfies a condition defined in terms of σ and Thre. Here σ is the variance of the noise, which is assumed to be zero-mean, Gaussian-distributed and constant; I denotes the block region; and Thre denotes the preset threshold, with range Thre ∈ (0, 1], for example Thre = 0.8.
Step 205: perform feature point detection on the candidate area blocks in the current frame image through a feature point detection algorithm.
Step 204 has obtained the approaching target object candidate area blocks; feature point detection is now performed on the candidate area blocks located in the current frame image. The detection can adopt a prior-art algorithm such as the Harris algorithm, the SUSAN algorithm or the SIFT algorithm, which are not repeated here.
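The Harris detector scores each pixel by the response R = det(M) − k·trace(M)², where M is the structure tensor of the image gradients summed over a local window. A minimal pure-Python sketch under stated assumptions (central-difference gradients, an unweighted 3×3 window, and the conventional k = 0.04; production code would use an optimized library implementation such as OpenCV's):

```python
def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 at each interior
    pixel, using central-difference gradients and a 3x3 window sum."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0   # structure tensor: a = sum Ix^2, b = sum IxIy, c = sum Iy^2
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            R[y][x] = (a * c - b * b) - k * (a + c) ** 2
    return R
```

On a bright square, the response is positive at a corner (gradients in two directions) and negative along an edge (gradients in one direction only), which is exactly the property used to pick feature points.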
Step 206: track the detected feature points to obtain, in the adjacent frame image, the matched feature points matching the detected feature points.
In step 205, feature point detection was performed on the approaching target object candidate area blocks in the current frame image, yielding the detected feature points. The Lucas-Kanade feature point tracking method can then be adopted to track the detected feature points and obtain the matched feature points in the adjacent frame image, where two mutually matching feature points constitute a matched feature point pair. For example, if p[k] is the k-th feature point detected in the t-th frame image and p′[k] is the feature point in the (t−1)-th frame image matching p[k], then p[k] and p′[k] constitute a matched pair.
Step 207: compute the feature optical flow vectors of the candidate area blocks according to the detected feature points and the matched feature points.
Each matched pair p[k] and p′[k] obtained in step 206 forms the corresponding feature optical flow vector, and the vectors of all matched pairs form the feature optical flow vector set.
Performing feature point detection and tracking on each candidate area block separately when forming the feature optical flow vectors avoids the problem of whole-image feature detection, in which an overly complex texture in some image region causes a large number of detected feature points to concentrate in that region while few feature points are detected elsewhere; it thus effectively avoids incomplete detection of approaching target objects.
Step 208: backward-track the detected feature points, and delete the feature optical flow vectors corresponding to feature points whose tracking error is greater than a threshold.
Because of image noise, self-similar textured background in the image (such as lane lines) or errors of the tracking algorithm, some erroneous matched feature point pairs may appear, and the feature optical flow vectors computed from these erroneous pairs are correspondingly wrong. To reduce the occurrence of erroneous optical flow vectors, the backward tracking method can be used to remove them.
For example, during backward tracking, the feature point p detected in the current frame image is tracked to obtain its matched feature point p′ in the adjacent frame image; the same tracking method is then applied to p′ to obtain its matched feature point p″ back in the current frame image. In theory, the coordinate position of p″ should be identical to, or differ very little from, that of the originally detected point p. Therefore, if the coordinate distance between the two points p and p″ obtained by backward tracking is large, an error occurred during tracking, and the feature optical flow vector computed from p and p′ is confirmed as invalid; otherwise the tracking is considered correct and reliable, and the vector is kept. Besides judging by the coordinate distance between p″ and p, the correctness of tracking can also be judged by the angle between the vector formed by p and p′ and the vector formed by p′ and p″: if the angle is very small, for example less than 5 degrees, the tracking is considered correct, the result is reliable, and the feature optical flow vector is kept.
Referring to Fig. 2C, point 1 is the feature point originally detected in the current frame image; point 2 is the matched feature point in the adjacent frame image obtained by the first (forward) tracking; point 3 is the matched feature point in the current frame image obtained by backward-tracking point 2. If the coordinate distance d between point 1 and point 3 exceeds a certain threshold, this tracking has failed and the feature optical flow vector computed from this feature point is discarded; otherwise the matched feature point pair is kept and the optical flow vector is computed from it.
Step 209: compute the optical flow direction of each feature optical flow vector in the set, and obtain the feature optical flow vectors whose direction lies within the preset direction range of approaching objects.
The side image regions captured while the vehicle is driving may contain static background such as trees and guardrails. Such background also forms feature optical flow, but its direction differs from that of approaching objects: the feature flow of static background is caused by the vehicle's own motion, while the feature flow of an approaching object is caused by the object's displacement. The feature flow of static background can therefore be removed by judging the feature optical flow vectors (a vector, also called a motion vector, comprises magnitude and direction).
Taking the image captured by the camera installed on the right side of the vehicle as an example: in the image coordinate system, the feature optical flow direction of an object entering the side region of the vehicle lies in the range [π/2, π] ∪ [−π, −π/2], while the optical flow direction of static objects on the road surface lies in the range [−π/2, π/2]. The coordinate system is shown in Fig. 2D. The optical flow direction is computed according to the following formula:
d = arctan((p[k].v − p′[k].v) / (p[k].u − p′[k].u))
In the formula above, p[k].v and p[k].u are the v-axis and u-axis coordinates of the k-th feature point detected in the current frame (e.g., frame t) image, and p′[k].v and p′[k].u are the v-axis and u-axis coordinates of its matched feature point in the previous frame (e.g., frame t−1) image obtained by tracking the k-th feature point detected in the current frame image. Fig. 2E is a schematic diagram of the feature-point optical flow directions in the image captured by the camera installed on the right side of the vehicle in Fig. 2B.
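The direction computation can be sketched as follows (a hedged Python illustration; `atan2` is used in place of the plain `arctan` of the formula so that the signs of both coordinate differences are kept and the full [−π, π] range is recovered — the function names and the right-side-camera test are assumptions):

```python
import math

def flow_direction(p_cur, p_prev):
    """Optical flow direction of the k-th feature point: the angle of
    the vector from the previous-frame match (p'[k].u, p'[k].v) to the
    current-frame point (p[k].u, p[k].v), in the image (u, v) frame."""
    return math.atan2(p_cur[1] - p_prev[1], p_cur[0] - p_prev[0])

def in_approaching_range(d):
    """True when d lies in [pi/2, pi] U [-pi, -pi/2], the range the
    text associates with approaching objects for a right-side camera."""
    return abs(d) >= math.pi / 2
```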
Step 210: cluster the obtained feature optical flow vectors into a number of cluster subsets, where the feature optical flow vectors in each cluster subset satisfy a preset clustering rule.
The optical flow formed by the feature points of a rigid moving target, such as a vehicle, has similar direction and magnitude, while the flow formed by the feature points of a non-rigid moving target, such as a pedestrian, varies considerably in both magnitude and direction. However, whether the target is rigid or non-rigid, the flow formed by its feature points has similar magnitude and direction within a local range.
Therefore, the present embodiment clusters the candidate feature optical flow vectors obtained above, producing the clustered set of feature optical flow vectors. Clustering the optical flow helps to form the object; in addition, isolated feature optical flow vectors in the candidate region blocks can be removed, since such isolated vectors may be erroneous flow vectors that the backward-tracking method failed to eliminate. Fig. 2F is a schematic diagram of the result of clustering the feature-point optical flow vectors of Fig. 2E.
Clustering proceeds as follows:
First, if the i-th feature optical flow vector r_i among the candidates satisfies the clustering rule with any feature optical flow vector in an existing cluster subset C_j (j = 1, ..., m), then r_i is added to that subset C_j; otherwise r_i is placed in a new cluster subset C_{m+1}.
Second, the above operation is repeated until every feature optical flow vector has been assigned to some cluster subset.
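The single-pass clustering procedure above can be sketched as follows (an illustrative Python sketch; `similar` stands for the pairwise clustering rule and is passed in as a parameter, since the patent leaves its exact implementation to the rules that follow):

```python
def greedy_cluster(vectors, similar):
    """Each vector joins the first existing cluster subset containing
    at least one vector it is similar to; otherwise it starts a new
    subset, until every vector has been assigned."""
    clusters = []
    for v in vectors:
        for c in clusters:
            if any(similar(v, w) for w in c):
                c.append(v)
                break
        else:
            # no existing subset accepts v: open a new subset C_{m+1}
            clusters.append([v])
    return clusters
```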
To judge whether two feature optical flow vectors can belong to the same cluster subset, let the i-th candidate feature optical flow vector be r_i, with length L_i and direction d_i, and the j-th be r_j, with length L_j and direction d_j. Two candidate flow vectors belong to the same cluster subset when they satisfy the following rules:
First, judge whether the magnitudes of the two vectors are similar by the following formula:
abs(L_i − L_j) / mean(L_i, L_j) ≤ T_1
If the result is less than or equal to the preset threshold T_1, the vectors r_i and r_j are similar in magnitude, where 0 ≤ T_1 ≤ 0.4; in this embodiment T_1 = 0.3.
Second, judge whether the directions of the two vectors are similar by the following formulas:
if abs(d_i − d_j) < π, the angle between the two vectors is Δd = abs(d_i − d_j);
otherwise, the angle between the two vectors is Δd = 2π − abs(d_i − d_j).
If the angle Δd between the two vectors is less than a preset threshold T_2, the directions of r_i and r_j are similar, where 0 ≤ T_2 ≤ 0.5236; in this embodiment T_2 = 0.1 radian.
Finally, judge the distance between the two feature optical flow vectors: of the four line segments formed by connecting the start and end points of r_i and r_j, the shortest must be less than a threshold T_3. If it is less than T_3, the vectors r_i and r_j are close, where 0 ≤ T_3 ≤ 20; in this embodiment T_3 = 8 pixels.
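The three rules can be combined into one pairwise predicate, sketched below (a Python illustration; representing each flow vector as a ((u0, v0), (u1, v1)) start/end point pair is an assumption, and the default thresholds are the example values T_1 = 0.3, T_2 = 0.1 rad, T_3 = 8 px from the text):

```python
import math

def similar_flows(ri, rj, T1=0.3, T2=0.1, T3=8.0):
    """Pairwise clustering rule combining the magnitude, direction and
    distance tests; ri and rj are ((u0, v0), (u1, v1)) pairs."""
    def length(r):
        return math.hypot(r[1][0] - r[0][0], r[1][1] - r[0][1])

    def direction(r):
        return math.atan2(r[1][1] - r[0][1], r[1][0] - r[0][0])

    Li, Lj = length(ri), length(rj)
    mean = (Li + Lj) / 2.0
    # Rule 1: relative length difference no greater than T1
    if mean == 0 or abs(Li - Lj) / mean > T1:
        return False
    # Rule 2: angle between directions, wrapped into [0, pi], below T2
    dd = abs(direction(ri) - direction(rj))
    if min(dd, 2 * math.pi - dd) >= T2:
        return False
    # Rule 3: the shortest of the four segments joining the start and
    # end points of ri and rj must be below T3 pixels
    dists = [math.hypot(a[0] - b[0], a[1] - b[1]) for a in ri for b in rj]
    return min(dists) < T3
```

A predicate of this shape is exactly what the clustering pass needs to compare a candidate vector against the members of each subset.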
Step 211: judge whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold; if so, execute step 212; otherwise, execute step 213.
After the cluster subsets have been determined by the above procedure, for each cluster subset, if the number of feature optical flow vectors in the subset is greater than a certain threshold, the feature optical flow vectors in the subset are retained; otherwise, the subset is deleted.
Step 212: retain the corresponding cluster subset, then execute step 214.
Step 213: delete the corresponding cluster subset.
Step 214: take the candidate region blocks that contain a cluster subset as the approaching-object region blocks.
For each candidate region block, count whether it contains any successfully clustered optical flow; if it does, retain the block as an approaching-object region block. Fig. 2G is a schematic diagram of the generated approaching-object region blocks.
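Step 214 can be sketched as follows (an illustrative Python sketch; representing each block as an (x0, y0, x1, y1) rectangle and the function name are assumptions):

```python
def approaching_blocks(candidates, clustered_points):
    """Retain a candidate region block if at least one successfully
    clustered flow point (u, v) falls inside it."""
    kept = []
    for (x0, y0, x1, y1) in candidates:
        if any(x0 <= u < x1 and y0 <= v < y1
               for (u, v) in clustered_points):
            kept.append((x0, y0, x1, y1))
    return kept
```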
Step 215: judge whether the eight-neighbourhood blocks of each region block among the screened region blocks also belong to the screened region blocks.
Step 216: connect each screened region block with those of its eight-neighbourhood blocks that also belong to the screened set.
If an image block has been determined to be an approaching-object region block, and its 8-neighbourhood blocks are likewise approaching-object region blocks, the block is connected with those neighbourhood blocks, and together they constitute the approaching-object region, thereby detecting the object. Fig. 2H is a schematic diagram of the approaching-object region generated by connecting eight-neighbourhood blocks.
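The eight-neighbourhood connection of steps 215–216 amounts to finding connected components on the block grid; a breadth-first sketch (representing blocks by (row, col) grid indices is an assumption):

```python
from collections import deque

def connect_blocks(blocks):
    """Group the retained approaching-object blocks into regions by
    8-neighbourhood connectivity; `blocks` is a set of (row, col)
    indices, and each returned list is one connected region."""
    blocks = set(blocks)
    seen, regions = set(), []
    for start in blocks:
        if start in seen:
            continue
        region, queue = [], deque([start])
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            region.append((r, c))
            # visit all 8 neighbours (and harmlessly the cell itself)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in blocks and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        regions.append(region)
    return regions
```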
The above embodiments of approaching-object detection can further be applied to detecting approaching vehicles on an intersecting lane, for example when the vehicle drives toward a crossroad or is reversing. Fig. 3 is a schematic diagram of detecting an approaching vehicle when driving toward a crossroad, with the camera device installed at position 3; Fig. 4A is a schematic diagram of detecting an approaching vehicle on an intersecting lane while reversing, with the camera device installed at position 4.
Taking the detection of an approaching vehicle shown in Fig. 4A as an example, when the foregoing second embodiment is applied, it is only necessary, in step 209, when computing the optical flow direction and obtaining the feature optical flow vectors whose direction lies within the coordinate range of an approaching object, to replace the coordinate system of Fig. 2D with that of Fig. 4B, to divide the image in this coordinate system into equally sized left and right sub-images, and then to perform the direction judgment and screening of the feature optical flow vectors. For the right sub-image, the feature flow direction of a vehicle driving toward the host vehicle lies in [π/2, π] ∪ [−π, −π/2], while the flow direction of objects static relative to the world lies in [−π/2, π/2]; for the left sub-image, the feature flow direction of a vehicle driving toward the host vehicle lies in [−π/2, π/2], while the flow direction of objects static relative to the world lies in [π/2, π] ∪ [−π, −π/2].
Corresponding to the embodiments of the approaching-object detection method of the present application, the present application also provides embodiments of an approaching-object detection device.
Fig. 5 is a block diagram of the first embodiment of the approaching-object detection device of the present application:
The approaching-object detection device comprises: an acquiring unit 510, a difference unit 520, a detecting unit 530, a cluster unit 540 and a connecting unit 550.
The acquiring unit 510 is configured to acquire, from a camera device, a current frame image and a frame image adjacent to the current frame image;
the difference unit 520 is configured to divide the current frame image and the adjacent frame image into the same number of image blocks, and to difference the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target-object candidate region blocks of the current frame image;
the detecting unit 530 is configured to perform feature point detection and tracking on the target-object candidate region blocks, respectively, to obtain a set of feature optical flow vectors;
the cluster unit 540 is configured to cluster the feature optical flow vectors in the set and to screen approaching-object region blocks from the candidate region blocks according to the clustering result;
the connecting unit 550 is configured to connect the screened region blocks to obtain the approaching-object region of the current frame image.
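The block division and differencing performed by the difference unit can be sketched as follows (a pure-Python illustration on grayscale frames stored as nested lists; the grid size, threshold, and mean-absolute-difference criterion are assumptions — the patent does not fix a particular difference measure):

```python
def candidate_blocks(cur, prev, rows, cols, threshold):
    """Split two equally sized grayscale frames into a rows x cols
    grid and keep the indices of blocks whose mean absolute
    difference exceeds `threshold` (candidate moving regions)."""
    h, w = len(cur), len(cur[0])
    bh, bw = h // rows, w // cols
    kept = []
    for br in range(rows):
        for bc in range(cols):
            diff = 0
            for y in range(br * bh, (br + 1) * bh):
                for x in range(bc * bw, (bc + 1) * bw):
                    diff += abs(cur[y][x] - prev[y][x])
            if diff / (bh * bw) > threshold:
                kept.append((br, bc))
    return kept
```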
Fig. 6 is a block diagram of the second embodiment of the approaching-object detection device of the present application:
The approaching-object detection device comprises: an acquiring unit 610, a difference unit 620, a detecting unit 630, a tracking unit 640, a deleting unit 650, a cluster unit 660 and a connecting unit 670.
The acquiring unit 610 is configured to acquire, from a camera device, a current frame image and a frame image adjacent to the current frame image;
the difference unit 620 is configured to divide the current frame image and the adjacent frame image into the same number of image blocks, and to difference the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target-object candidate region blocks of the current frame image;
the detecting unit 630 is configured to perform feature point detection and tracking on the target-object candidate region blocks, respectively, to obtain a set of feature optical flow vectors;
the tracking unit 640 is configured to take a detected feature point as a first feature point, track the first feature point to obtain a matched second feature point in the adjacent frame image, and track the second feature point to obtain a matched third feature point in the current frame image;
the deleting unit 650 is configured to delete the feature optical flow vector corresponding to the first feature point when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold;
the cluster unit 660 is configured to cluster the feature optical flow vectors in the set and to screen approaching-object region blocks from the candidate region blocks according to the clustering result;
the connecting unit 670 is configured to connect the screened region blocks to obtain the approaching-object region of the current frame image.
Specifically, the detecting unit 630 may comprise (not shown in Fig. 6): a feature point detection unit, configured to perform feature point detection on the candidate region blocks of the current frame image by a feature point detection algorithm; a feature point tracking unit, configured to track the detected feature points to obtain the matched feature points in the adjacent frame image; and an optical flow vector computing unit, configured to compute the feature optical flow vectors of the candidate region blocks from the detected feature points and the matched feature points.
Specifically, the cluster unit 660 may comprise (not shown in Fig. 6): a flow direction computing unit, configured to compute the optical flow direction of each feature optical flow vector in the set and to obtain the feature optical flow vectors whose direction lies within the preset direction range of an approaching object; a subset generating unit, configured to cluster the obtained feature optical flow vectors into a number of cluster subsets, the feature optical flow vectors in each subset satisfying a preset clustering rule; a threshold judging unit, configured to judge whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold; an executing unit, configured to retain the corresponding cluster subset when the number is greater than the threshold, and to delete it otherwise; and a region block determining unit, configured to take the target-object candidate region blocks containing a retained cluster subset as approaching-object region blocks.
As can be seen from the above description of the embodiments, in the embodiments of the present application a current frame image and a frame image adjacent to it are acquired from a camera device; the current frame image and the adjacent frame image are divided into the same number of image blocks, and the image blocks at corresponding positions are differenced to obtain the target-object candidate region blocks of the current frame image; feature point detection and tracking are performed on the candidate region blocks to obtain a set of feature optical flow vectors; the vectors in the set are clustered, and approaching-object region blocks are screened from the candidate region blocks according to the clustering result; and the screened region blocks are connected to obtain the approaching-object region of the current frame image. The embodiments of the present application detect moving objects by image-frame block analysis; compared with existing optical flow methods, they do not restrict the type of approaching object, are applicable to both rigid and non-rigid objects, are computationally simple and fast, detect accurately, and are less affected by the erroneous optical flow caused by feature-tracking mistakes.
From the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, or the part thereof contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disk or optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in certain parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and relevant parts may refer to the description of the method embodiments.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components and data structures that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
Although the present application has been described through embodiments, those of ordinary skill in the art will appreciate that the present application has many variations and modifications that do not depart from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (8)

1. An approaching-object detection method, characterized by comprising:
acquiring, from a camera device, a current frame image and a frame image adjacent to the current frame image;
dividing the current frame image and the adjacent frame image into the same number of image blocks, and differencing the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target-object candidate region blocks of the current frame image;
performing feature point detection and tracking on the target-object candidate region blocks, respectively, to obtain a set of feature optical flow vectors;
clustering the feature optical flow vectors in the set, and screening approaching-object region blocks from the candidate region blocks according to the clustering result;
connecting the screened region blocks to obtain an approaching-object region of the current frame image;
wherein said clustering the feature optical flow vectors in the set and screening approaching-object region blocks from the candidate region blocks according to the clustering result comprises:
computing the optical flow direction of each feature optical flow vector in the set, and obtaining the feature optical flow vectors whose direction lies within a preset direction range of an approaching object;
clustering the obtained feature optical flow vectors into a number of cluster subsets, the feature optical flow vectors in each cluster subset satisfying a preset clustering rule;
judging whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold;
according to the result of the judgment, retaining the corresponding cluster subset when the number is greater than the threshold, and deleting the corresponding cluster subset when it is less;
taking the target-object candidate region blocks containing a retained cluster subset as approaching-object region blocks.
2. The method according to claim 1, characterized in that performing feature point detection and tracking on the target-object candidate region blocks, respectively, to obtain a set of feature optical flow vectors comprises:
performing feature point detection on the candidate region blocks of the current frame image by a feature point detection algorithm;
tracking the detected feature points to obtain matched feature points in the adjacent frame image matching the detected feature points;
computing the feature optical flow vectors of the candidate region blocks from the detected feature points and the matched feature points.
3. The method according to claim 2, characterized in that, after performing feature point detection and tracking on the target-object candidate region blocks, respectively, and obtaining the set of feature optical flow vectors, the method further comprises:
taking a detected feature point as a first feature point, and tracking the first feature point to obtain a matched second feature point in the adjacent frame image;
tracking the second feature point to obtain a matched third feature point in the current frame image;
when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold, deleting the feature optical flow vector corresponding to the first feature point.
4. The method according to claim 1, characterized in that, when the method is applied to detecting an approaching object while a vehicle drives toward a crossroad, the camera device is installed at the head of the vehicle.
5. The method according to claim 1, characterized in that, when the method is applied to detecting an approaching object on an intersecting lane while a vehicle is reversing, the camera device is installed at the rear of the vehicle.
6. An approaching-object detection device, characterized by comprising:
an acquiring unit, configured to acquire, from a camera device, a current frame image and a frame image adjacent to the current frame image;
a difference unit, configured to divide the current frame image and the adjacent frame image into the same number of image blocks, and to difference the image blocks at corresponding positions of the current frame image and the adjacent frame image to obtain target-object candidate region blocks of the current frame image;
a detecting unit, configured to perform feature point detection and tracking on the target-object candidate region blocks, respectively, to obtain a set of feature optical flow vectors;
a cluster unit, configured to cluster the feature optical flow vectors in the set and to screen approaching-object region blocks from the candidate region blocks according to the clustering result;
a connecting unit, configured to connect the screened region blocks to obtain an approaching-object region of the current frame image;
wherein the cluster unit comprises:
a flow direction computing unit, configured to compute the optical flow direction of each feature optical flow vector in the set, and to obtain the feature optical flow vectors whose direction lies within a preset direction range of an approaching object;
a subset generating unit, configured to cluster the obtained feature optical flow vectors into a number of cluster subsets, the feature optical flow vectors in each cluster subset satisfying a preset clustering rule;
a threshold judging unit, configured to judge whether the number of feature optical flow vectors in each cluster subset is greater than a preset threshold;
an executing unit, configured to retain the corresponding cluster subset when the number is greater than the threshold, and to delete the corresponding cluster subset when it is less;
a region block determining unit, configured to take the target-object candidate region blocks containing a retained cluster subset as approaching-object region blocks.
7. The device according to claim 6, characterized in that the detecting unit comprises:
a feature point detection unit, configured to perform feature point detection on the candidate region blocks of the current frame image by a feature point detection algorithm;
a feature point tracking unit, configured to track the detected feature points to obtain matched feature points in the adjacent frame image matching the detected feature points;
an optical flow vector computing unit, configured to compute the feature optical flow vectors of the candidate region blocks from the detected feature points and the matched feature points.
8. The device according to claim 6, characterized by further comprising:
a tracking unit, configured to take a detected feature point as a first feature point, track the first feature point to obtain a matched second feature point in the adjacent frame image, and track the second feature point to obtain a matched third feature point in the current frame image;
a deleting unit, configured to delete the feature optical flow vector corresponding to the first feature point when the distance between the first feature point and the third feature point is greater than a preset threshold, or when the angle between the vector formed by the first and second feature points and the vector formed by the second and third feature points is greater than a preset threshold.
CN 201010266886 2010-08-24 2010-08-24 Approximate target object detecting method and device Active CN101930609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010266886 CN101930609B (en) 2010-08-24 2010-08-24 Approximate target object detecting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010266886 CN101930609B (en) 2010-08-24 2010-08-24 Approximate target object detecting method and device

Publications (2)

Publication Number Publication Date
CN101930609A CN101930609A (en) 2010-12-29
CN101930609B true CN101930609B (en) 2012-12-05

Family

ID=43369768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010266886 Active CN101930609B (en) 2010-08-24 2010-08-24 Approximate target object detecting method and device

Country Status (1)

Country Link
CN (1) CN101930609B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102654916A (en) * 2011-03-04 2012-09-05 联咏科技股份有限公司 Block comparison method
CN103196550A (en) * 2012-01-09 2013-07-10 西安智意能电子科技有限公司 Method and equipment for screening and processing imaging information of launching light source
WO2014131193A1 (en) * 2013-03-01 2014-09-04 Harman International Industries, Incorporated Road region detection
CN103391441B (en) * 2013-07-23 2016-05-11 武汉大学 A kind of monitor video object based on difference energy is deleted and is distorted detection algorithm
CN105243674B (en) * 2015-09-16 2018-09-11 阔地教育科技有限公司 One kind preventing moving target flase drop device and method
JP6618767B2 (en) * 2015-10-27 2019-12-11 株式会社デンソーテン Image processing apparatus and image processing method
CN107953827A (en) * 2016-10-18 2018-04-24 杭州海康威视数字技术股份有限公司 A kind of vehicle blind zone method for early warning and device
CN106683114B (en) * 2016-12-16 2019-08-20 河海大学 Fluid motion vector estimating method based on characteristic light stream
CN107273779A (en) * 2017-06-12 2017-10-20 广州川鸿电子有限公司 A kind of bar code image recognition methods, system and device
CN108063915B (en) * 2017-12-08 2019-09-17 浙江大华技术股份有限公司 A kind of image-pickup method and system
CN108595600B (en) * 2018-04-18 2023-12-15 努比亚技术有限公司 Photo classification method, mobile terminal and readable storage medium
CN108710828B (en) * 2018-04-18 2021-01-01 北京汽车集团有限公司 Method, device and storage medium for identifying target object and vehicle
CN109165540B (en) * 2018-06-13 2022-02-25 深圳市感动智能科技有限公司 Pedestrian searching method and device based on prior candidate box selection strategy
CN111275743B (en) * 2020-01-20 2024-03-12 深圳奇迹智慧网络有限公司 Target tracking method, device, computer readable storage medium and computer equipment
CN111739064B (en) * 2020-06-24 2022-07-29 中国科学院自动化研究所 Method for tracking target in video, storage device and control device
CN112801077B (en) * 2021-04-15 2021-11-05 智道网联科技(北京)有限公司 Method for SLAM initialization of autonomous vehicles and related device
CN115511919B (en) * 2022-09-23 2023-09-19 北京乾图科技有限公司 Video processing method, image detection method and device
CN116228834B (en) * 2022-12-20 2023-11-03 阿波罗智联(北京)科技有限公司 Image depth acquisition method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method
CN101295405A (en) * 2008-06-13 2008-10-29 西北工业大学 Portrait and vehicle recognition alarming and tracing method
CN101320478A (en) * 2008-07-23 2008-12-10 北京蓝色星际软件技术发展有限公司 Multi-frame morphology area detection method based on frame difference
CN101777185A (en) * 2009-12-09 2010-07-14 中国科学院自动化研究所 Target tracking method for modeling by integrating description method and discriminant method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3633706B2 (en) * 1996-03-06 2005-03-30 日産ディーゼル工業株式会社 Vehicle group running control device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP特开平9-245287A 1997.09.19

Also Published As

Publication number Publication date
CN101930609A (en) 2010-12-29

Similar Documents

Publication Publication Date Title
CN101930609B (en) Approximate target object detecting method and device
Hur et al. Multi-lane detection in urban driving environments using conditional random fields
Jung et al. A lane departure warning system using lateral offset with uncalibrated camera
Zhao et al. A novel multi-lane detection and tracking system
CN104008371A (en) Regional suspicious target tracking and recognizing method based on multiple cameras
CN103679691A (en) Method and device for detecting continuous road segmentation object
Yu et al. Motion pattern interpretation and detection for tracking moving vehicles in airborne video
Lim et al. River flow lane detection and Kalman filtering‐based B‐spline lane tracking
Musleh et al. Uv disparity analysis in urban environments
Nguyen et al. Real-time validation of vision-based over-height vehicle detection system
Phan et al. Occlusion vehicle detection algorithm in crowded scene for traffic surveillance system
CN110555423B (en) Multi-dimensional motion camera-based traffic parameter extraction method for aerial video
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
Hultqvist et al. Detecting and positioning overtaking vehicles using 1D optical flow
Gökçe et al. Recognition of dynamic objects from UGVs using Interconnected Neuralnetwork-based Computer Vision system
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
Fakhfakh et al. Weighted v-disparity approach for obstacles localization in highway environments
US9183448B2 (en) Approaching-object detector, approaching object detecting method, and recording medium storing its program
Lee et al. Feature-based lateral position estimation of surrounding vehicles using stereo vision
Imad et al. Navigation system for autonomous vehicle: A survey
Seo et al. A vision system for detecting and tracking of stop-lines
Hsu et al. Detecting drivable space in traffic scene understanding
Kim et al. Robust lane detection for video-based navigation systems
Hu et al. Automatic detection and evaluation of 3D pavement defects using 2D and 3D information at the high speed
Bendjaballah et al. A classification of on-road obstacles according to their relative velocities

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211222

Address after: Room 1703, 888 Moyu South Road, Anting Town, Jiading District, Shanghai, 201805

Patentee after: NEUSOFT REACH AUTOMOTIVE TECHNOLOGY (SHANGHAI) Co.,Ltd.

Address before: Hunnan rookie street Shenyang city Liaoning province 110179 No. 2

Patentee before: NEUSOFT Corp.

TR01 Transfer of patent right