CN104318206B - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN104318206B
CN104318206B (application CN201410520226.4A)
Authority
CN
China
Prior art keywords
characteristic point
unit
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410520226.4A
Other languages
Chinese (zh)
Other versions
CN104318206A (en)
Inventor
于红绯
刘威
袁淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shanghai Co Ltd
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201410520226.4A
Publication of CN104318206A
Application granted
Publication of CN104318206B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The embodiment of the invention discloses an obstacle detection method. The method includes: obtaining a first frame image and a second frame image; detecting feature points of the first frame image; matching the feature points with feature points at corresponding positions in the second frame image; clustering the successfully matched feature points of the first frame image to form feature point classes; and selecting the feature point classes that represent approach toward the ego vehicle and displaying them as obstacles, thereby detecting the obstacles. The embodiment of the invention also discloses an obstacle detection device. The obstacle detection method and device provided by the embodiments of the present invention enable the ego vehicle to detect approaching obstacles in any driving state, improving the safety of the driver and passengers.

Description

Obstacle detection method and device
Technical field
The present invention relates to the field of image processing, and in particular to an obstacle detection method and device.
Background art
When a driver is driving, obstacles (such as vehicles or pedestrians) located in the blind-spot regions behind and to the sides of the ego vehicle easily pose a safety hazard, especially when the driver changes lanes without noticing that an obstacle is present. Therefore, detecting whether obstacles exist in the blind-spot regions and feeding the obstacle information back to the driver is extremely important.
Existing obstacle detection methods fall into two classes: methods based on obstacle appearance, and methods based on the direction of motion.
Appearance-based methods use the appearance features of a specific type of obstacle to judge whether an object is an obstacle of that type, and if so, report the relevant information of the obstacle to the driver. For example, if the specific obstacle type is a vehicle, appearance features of vehicles (such as wheels, license plates, and the shadow under the vehicle body) are used to judge whether the object is a vehicle; if the specific obstacle type is a pedestrian, appearance features of people (such as the head, torso and limbs) are used to judge whether the object is a person.
The advantage of appearance-based methods is that they are relatively insensitive to illumination changes, so the detection results are comparatively stable. However, because the specific obstacle types must be defined in advance, the range of detectable objects is limited. From another angle, since such a method must test a detected object against each specific type in turn (e.g. first judging whether the obstacle is a vehicle, and if not, then judging whether it is a person), the detection time grows with the number of specific types. The driver may then receive the obstacle information too late to take corresponding measures, possibly leading to a traffic accident; to prevent this, the number of specific obstacle types must be limited.
The basic principle of motion-based methods is the following: when the vehicle drives straight ahead, taking the vehicle as the reference object, the objects behind it that are stationary relative to the ground, the objects moving opposite to the vehicle's forward direction, and the objects moving in the same direction as the vehicle but more slowly all move away from the vehicle. Only an object moving in the same direction as the vehicle and faster than the vehicle advances toward it, i.e. the object is catching up with the vehicle, and such an object is therefore detected as a potential obstacle.
The advantage of motion-based methods is that the type of the detection target need not be considered, overcoming the appearance-based methods' limitation of detecting only specific types of obstacles. However, while such methods detect obstacles relatively easily when the vehicle drives straight, when the vehicle turns they need motion parameters such as the vehicle's rotation angle. Estimating these parameters from images requires objects that are stationary relative to the ground as references, and in a typical driving environment the ground texture is usually not rich enough for this computation. Motion-based methods are therefore generally limited to detecting obstacles while the vehicle drives straight and cannot guarantee the driver's safety while the vehicle is turning.
Summary of the invention
To solve the problems in the prior art, the embodiments of the present invention provide an obstacle detection method and device, so that a vehicle can detect obstacles in any driving state, improving the safety of the driver.
The embodiments of the invention provide a kind of obstacle detection method, methods described includes:
Obtain the first two field picture and the second two field picture;
Detect the characteristic point of first two field picture;
The characteristic point of correspondence position in the characteristic point and the second two field picture is matched;
Characteristic point to the first two field picture that the match is successful is clustered, and forms characteristic point class;
Choose and represent to be shown to represent barrier close to from the characteristic point class of car, realize the detection to barrier.
Preferably, detecting the feature points of the first frame image specifically includes:
generating a pixel gradient image from the first frame image;
selecting the points whose gradient value in the pixel gradient image is a local maximum along the x-axis or y-axis direction;
setting the gradient values of the unselected points to zero and regenerating the pixel gradient image;
traversing the points of the regenerated pixel gradient image and, centered on each traversed candidate point, counting within a preset search window the number of points whose gradient value in the regenerated pixel gradient image exceeds a gradient threshold;
judging whether the number of such points is greater than or equal to a first threshold;
if so, selecting the point with the maximum gradient value within the search window as a feature point of the first frame image.
Preferably, after matching the feature points with the feature points at corresponding positions in the second frame image, the method further includes:
performing feature point matching over k consecutive frame images including the first frame image to obtain feature point sequences, a feature point sequence referring to the set of vectors formed by the matched feature points in adjacent frame images, with k greater than or equal to 4;
judging whether each pair of adjacent vectors in the vector set satisfies at least one of the following two conditions: the length change between the two adjacent vectors is within a first preset range; the angle change between the two adjacent vectors is within a second preset range;
if not, removing all feature points that constitute the feature point sequence.
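As an illustration, the smoothness check above can be sketched as a small filter over one tracked point's positions across k frames; the function name and the concrete range bounds below are assumptions, not values from the patent:

```python
import math

def is_smooth_sequence(points, max_len_change=0.5, max_angle_change=math.pi / 6):
    """Check one feature point sequence (one (x, y) position per frame).

    Adjacent positions form the motion vectors of the sequence.  A pair of
    adjacent vectors passes if its length change OR its angle change stays
    within the preset range; if any pair fails both conditions, the whole
    sequence (all its points) is to be removed.  Thresholds are illustrative.
    """
    vectors = [(points[i + 1][0] - points[i][0], points[i + 1][1] - points[i][1])
               for i in range(len(points) - 1)]
    for v1, v2 in zip(vectors, vectors[1:]):
        len1, len2 = math.hypot(*v1), math.hypot(*v2)
        length_ok = abs(len1 - len2) <= max_len_change
        if len1 == 0 or len2 == 0:
            angle_ok = False
        else:
            cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (len1 * len2)
            angle_ok = math.acos(max(-1.0, min(1.0, cos_a))) <= max_angle_change
        if not (length_ok or angle_ok):
            return False
    return True
```

A steadily translating point passes; a point that jumps erratically between frames (typical of a mismatch) fails and its whole sequence is discarded before clustering.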
Preferably, clustering the successfully matched feature points of the first frame image specifically includes:
Step A: selecting a feature point of the first frame image that has not been numbered and marked as a seed point, and numbering and marking the seed point;
Step B: searching for all neighbor points of the seed point according to a preset rule;
Step C: numbering all the neighbor points, the number of each neighbor point being the same as the number of the seed point;
Step D: selecting any unmarked neighbor point, taking it as the seed point and marking it, and performing Steps B and C until all neighbor points satisfying the preset rule have been numbered and marked;
Step E: classifying the feature points with the same number into one feature point class.
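Steps A to E amount to a region-growing clustering over the matched feature points. A minimal sketch, with the preset neighbor rule abstracted as a caller-supplied predicate (names and structure are illustrative, not the patent's implementation):

```python
def cluster_feature_points(points, is_neighbor):
    """Region-growing clustering following steps A-E.

    `points` is a list of feature points; `is_neighbor(p, q)` implements the
    preset rule deciding whether q is a neighbor of seed point p.
    Returns one label (class number) per point; equal labels form one class.
    """
    labels = [None] * len(points)
    next_label = 0
    for start in range(len(points)):
        if labels[start] is not None:
            continue
        labels[start] = next_label        # step A: number and mark the seed
        stack = [start]
        while stack:
            seed = stack.pop()
            for j in range(len(points)):  # step B: find all neighbors
                if labels[j] is None and is_neighbor(points[seed], points[j]):
                    labels[j] = labels[seed]  # step C: same number as the seed
                    stack.append(j)           # step D: reuse neighbor as seed
        next_label += 1                   # step E: same number, same class
    return labels
```

With a simple proximity predicate, two well-separated groups of points receive two distinct class numbers.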
Preferably, the preset rule is:
let the pixel coordinate of the seed point in the first frame image be (x_i, y_i), and the pixel coordinate of the feature point matching the seed point in the second frame image be (x_i', y_i'); let the pixel coordinate of a neighbor point of the seed point be (x_j, y_j), and the pixel coordinate of the feature point matching that neighbor point in the second frame image be (x_j', y_j'), where j ≠ i, and r is the number of frames between the first frame image and the second frame image;
the seed point and the neighbor point need to satisfy at least one of the following relation conditions:
sqrt((x_i - x_j)^2 + (y_i - y_j)^2) ≤ R;
| |v_i| - |v_j| | ≤ λ;
angle(v_i, v_j) ≤ C;
where v_i = (x_i' - x_i, y_i' - y_i) and v_j = (x_j' - x_j, y_j' - y_j) are the motion vectors of the two points between the two frame images, R is the distance threshold, λ is the length change threshold, and C is the angle threshold.
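Under the definitions above, the preset rule can be sketched as a predicate over two matched point pairs. Since the original inequalities did not survive extraction, the concrete forms below (Euclidean distance against R, absolute motion-vector length difference against λ, inter-vector angle against C) and the threshold values are assumptions:

```python
import math

def satisfies_preset_rule(seed, seed_match, cand, cand_match,
                          R=20.0, lam=3.0, C=math.pi / 6):
    """True if `cand` may be a neighbor of `seed` (clustered into its class).

    Each argument is an (x, y) pixel coordinate; `*_match` is the matched
    position in the second frame image.  The candidate qualifies if at least
    one condition holds: the points are near each other in the first frame,
    or their frame-to-frame motion vectors have similar length or direction.
    """
    close = math.hypot(seed[0] - cand[0], seed[1] - cand[1]) <= R
    v1 = (seed_match[0] - seed[0], seed_match[1] - seed[1])
    v2 = (cand_match[0] - cand[0], cand_match[1] - cand[1])
    len1, len2 = math.hypot(*v1), math.hypot(*v2)
    length_ok = abs(len1 - len2) <= lam
    if len1 > 0 and len2 > 0:
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (len1 * len2)
        angle_ok = math.acos(max(-1.0, min(1.0, cos_a))) <= C
    else:
        angle_ok = False
    return close or length_ok or angle_ok
```

A predicate like this can be passed directly to the region-growing clustering of steps A to E after binding the matched positions to each point.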
Preferably, selecting the feature point classes representing approach toward the ego vehicle specifically includes:
calculating the distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio of the total vector length of a feature point class in the first frame image to the total vector length of the corresponding feature point class in the second frame image, where the total vector length refers to the sum of the lengths of the vectors between all pairs of feature points in the class;
when the first frame image was acquired earlier than the second frame image, taking the feature point classes whose distance change ratio is less than 1 as the feature point classes approaching the ego vehicle;
when the first frame image was acquired later than the second frame image, taking the feature point classes whose distance change ratio is greater than 1 as the feature point classes approaching the ego vehicle.
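The selection rule can be sketched as follows; an object approaching the camera spreads out in the image over time, so its class has the smaller total pairwise length in the earlier frame. Function names and structure are illustrative:

```python
import math
from itertools import combinations

def total_pairwise_length(points):
    """Sum of distances between all pairs of feature points in one class."""
    return sum(math.hypot(p[0] - q[0], p[1] - q[1])
               for p, q in combinations(points, 2))

def approaches_ego_vehicle(class_frame1, class_frame2, frame1_is_earlier):
    """Apply the distance change ratio test to one feature point class.

    `class_frame1` / `class_frame2` hold the matched (x, y) positions of the
    class in the first / second frame image.
    """
    ratio = total_pairwise_length(class_frame1) / total_pairwise_length(class_frame2)
    # earlier spread smaller than later spread means the object is approaching
    return ratio < 1 if frame1_is_earlier else ratio > 1
```

Using all pairwise distances within a class makes the test depend only on the apparent size of the object, not on where it sits in the image, so it works regardless of whether the ego vehicle is driving straight or turning.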
The embodiment of the present invention further provides an obstacle detection device, the device including: an image acquisition unit, a feature point detection unit, a feature point matching unit, a feature point clustering unit, a feature point class selection unit and a display unit;
the image acquisition unit is connected with the feature point detection unit, the feature point detection unit is connected with the feature point matching unit, the feature point matching unit is connected with the feature point clustering unit, the feature point clustering unit is connected with the feature point class selection unit, and the feature point class selection unit is connected with the display unit;
wherein the image acquisition unit is configured to obtain a first frame image and a second frame image;
the feature point detection unit is configured to detect the feature points of the first frame image;
the feature point matching unit is configured to match the feature points with the feature points at corresponding positions in the second frame image;
the feature point clustering unit is configured to cluster the successfully matched feature points of the first frame image to form feature point classes;
the feature point class selection unit is configured to select the feature point classes representing approach toward the ego vehicle;
the display unit is configured to display the selected feature point classes as obstacles, thereby detecting the obstacles.
Preferably, the feature point detection unit includes: a gradient image generation unit, a first feature point selection unit, an image regeneration unit, a statistics unit, a first judgment unit and a second feature point selection unit;
the image acquisition unit is connected with the gradient image generation unit, the gradient image generation unit is connected with the first feature point selection unit, the first feature point selection unit is connected with the image regeneration unit, the image regeneration unit is connected with the statistics unit, the statistics unit is connected with the first judgment unit, the first judgment unit is connected with the second feature point selection unit, and the second feature point selection unit is connected with the feature point matching unit;
wherein the gradient image generation unit is configured to generate a pixel gradient image from the first frame image;
the first feature point selection unit is configured to select the points whose gradient value in the pixel gradient image is a local maximum along the x-axis or y-axis direction;
the image regeneration unit is configured to set to zero the gradient values of the points not selected by the first feature point selection unit and regenerate the pixel gradient image;
the statistics unit is configured to traverse the points of the regenerated pixel gradient image and, centered on each traversed candidate point, count within a preset search window the number of points whose gradient value in the regenerated pixel gradient image exceeds a gradient threshold;
the first judgment unit is configured to judge whether that number is greater than or equal to a first threshold and, if so, activate the second feature point selection unit;
the second feature point selection unit is configured to select the point with the maximum gradient value within the search window as a feature point of the first frame image.
Preferably, the device further includes a feature point sequence acquisition unit, a second judgment unit and a feature point removal unit;
the feature point matching unit is connected with the feature point sequence acquisition unit, the feature point sequence acquisition unit is connected with the second judgment unit, the second judgment unit is connected with the feature point removal unit, and the feature point removal unit is connected with the feature point clustering unit;
wherein the feature point sequence acquisition unit is configured to perform feature point matching over k consecutive frame images including the first frame image to obtain feature point sequences, a feature point sequence referring to the set of vectors formed by the matched feature points in adjacent frame images, with k greater than or equal to 4;
the second judgment unit is configured to judge whether each pair of adjacent vectors in the vector set satisfies at least one of the following two conditions: the length change between the two adjacent vectors is within a first preset range; the angle change between the two adjacent vectors is within a second preset range; and if not, to activate the feature point removal unit;
the feature point removal unit is configured to remove all feature points constituting the feature point sequence.
Preferably, the feature point class selection unit includes: a distance change ratio calculation unit and a selection unit;
the feature point clustering unit is connected with the distance change ratio calculation unit, the distance change ratio calculation unit is connected with the selection unit, and the selection unit is connected with the display unit;
wherein the distance change ratio calculation unit is configured to calculate the distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio of the total vector length of a feature point class in the first frame image to the total vector length of the corresponding feature point class in the second frame image, the total vector length referring to the sum of the lengths of the vectors between all pairs of feature points in the class;
the selection unit is configured to: when the first frame image was acquired earlier than the second frame image, take the feature point classes whose distance change ratio is less than 1 as the classes approaching the ego vehicle; or, when the first frame image was acquired later than the second frame image, take the feature point classes whose distance change ratio is greater than 1 as the classes approaching the ego vehicle.
Compared with the prior art, the present invention has the following advantages:
The present invention obtains a first frame image and a second frame image, detects the feature points of the first frame image, matches them with the feature points at corresponding positions in the second frame image, clusters the successfully matched feature points of the first frame image into feature point classes, and selects the classes representing approach toward the ego vehicle for display as obstacles, thereby detecting the obstacles; the ego vehicle can therefore detect obstacles whether it is driving straight or turning. In fact, the present invention can detect approaching obstacles not only while the ego vehicle moves forward but also while it reverses or is parked. The obstacle detection method and device provided by the present invention thus allow the ego vehicle to detect approaching obstacles in any state, improving the safety of the driver and passengers.
Brief description of the drawings
In order to explain the embodiments of the present application or the technical schemes in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments described in the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a flow chart of embodiment one of the obstacle detection method provided by the present invention;
Fig. 2 is a flow chart of embodiment two of the obstacle detection method provided by the present invention;
Fig. 3 is a schematic diagram of the first frame image in embodiment two of the obstacle detection method provided by the present invention;
Fig. 4 is a flow chart of the image feature point detection method in embodiment two of the obstacle detection method provided by the present invention;
Fig. 5 is a schematic diagram of the detected feature points of the first frame image in embodiment two of the obstacle detection method provided by the present invention;
Fig. 6 is a schematic diagram of feature point matching in embodiment two of the obstacle detection method provided by the present invention;
Fig. 7(a) is a schematic diagram of a smooth feature point sequence in embodiment two of the obstacle detection method provided by the present invention;
Fig. 7(b) is a schematic diagram of a non-smooth feature point sequence in embodiment two of the obstacle detection method provided by the present invention;
Fig. 7(c) is another schematic diagram of a non-smooth feature point sequence in embodiment two of the obstacle detection method provided by the present invention;
Fig. 8 is a flow chart of a feature point clustering method in embodiment two of the obstacle detection method provided by the present invention;
Fig. 9 is a schematic diagram of feature point clustering in embodiment two of the obstacle detection method provided by the present invention;
Fig. 10 is a schematic diagram of the feature point classes of approaching obstacles in embodiment two of the obstacle detection method provided by the present invention;
Fig. 11 is a structural block diagram of embodiment one of the obstacle detection device provided by the present invention;
Fig. 12 is a structural block diagram of embodiment two of the obstacle detection device provided by the present invention;
Fig. 13 is a structural block diagram of the feature point detection unit in embodiment two of the obstacle detection device provided by the present invention;
Fig. 14 is a structural block diagram of the feature point class selection unit in embodiment two of the obstacle detection device provided by the present invention.
Detailed description of the embodiments
In order that those skilled in the art may better understand the scheme of the present invention, the technical scheme in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Method embodiment one:
Referring to Fig. 1, the figure is a flow chart of embodiment one of the obstacle detection method provided by the present invention.
The obstacle detection method provided by the embodiment of the present invention includes the following steps:
Step S101: obtain a first frame image and a second frame image.
During driving, images of the blind-spot regions behind and to the sides of the vehicle generally need to be captured so that the driver can be informed of the obstacles there.
In the embodiments of the present invention, the terms "first frame image" and "second frame image" serve only to distinguish the two frames and place no restriction on the order in which they were acquired; that is, if the first frame image is the image obtained at the current time point, the second frame image may be an image obtained either before or after it.
Step S102: detect the feature points of the first frame image.
Step S103: match the feature points with the feature points at corresponding positions in the second frame image.
In practical applications, there are many methods for matching the feature points of two frame images, such as the template matching method and the LK feature point tracking method.
The so-called template matching method refers to the following:
suppose the feature point to be matched in image I has coordinates M(x, y) and pixel brightness value m; then if the brightness value of a feature point N near coordinates (x, y) in image J differs from m by no more than a certain range, M and N can be matched. This method is called the template matching method.
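A minimal sketch of the template matching idea just described; the neighborhood radius and brightness tolerance are assumed values, not parameters from the patent:

```python
def match_feature_point(image_j, x, y, m, search_radius=2, max_diff=10):
    """Find a match in image J for a feature point at (x, y) with brightness m.

    Scans a small neighborhood of (x, y) in `image_j` (a 2-D list of
    brightness values) and returns the coordinates of the pixel whose
    brightness is closest to m, provided the difference stays within
    max_diff; otherwise returns None.
    """
    h, w = len(image_j), len(image_j[0])
    best = None
    best_diff = max_diff + 1  # anything larger than max_diff is rejected
    for ny in range(max(0, y - search_radius), min(h, y + search_radius + 1)):
        for nx in range(max(0, x - search_radius), min(w, x + search_radius + 1)):
            diff = abs(image_j[ny][nx] - m)
            if diff < best_diff:
                best, best_diff = (nx, ny), diff
    return best
```

In practice the comparison is usually done over a small patch around the point rather than a single pixel, which is exactly where this sketch shades into the Lucas-Kanade method mentioned next.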
The so-called LK feature point tracking method is short for the Lucas-Kanade feature point tracking method; its basic principle is similar to that of the template matching method, and since the method is common knowledge in the art, this embodiment does not describe it further.
Of course, the invention is not limited to these two feature point matching methods; any method that achieves the feature point matching purpose of the invention falls within the scope of the invention.
Step S104: cluster the successfully matched feature points of the first frame image to form feature point classes.
In most cases, the probability that the first frame image is identical to the second frame image is very small, especially while driving. After the feature point matching step above, some feature points of the first frame image will have matches among the feature points of the second frame image while the remaining feature points will not. In this embodiment, a feature point of the first frame image that can be matched with a feature point of the second frame image is said to be successfully matched.
This embodiment clusters the successfully matched feature points of the first frame image into feature point classes in order to distinguish the objects that each class of feature points represents, such as buildings, vehicles and pedestrians.
Step S105: select the feature point classes representing approach toward the ego vehicle and display them as obstacles, thereby detecting the obstacles.
The prior art detects obstacles with motion-based methods, which can only detect obstacles while the vehicle moves on a straight course. When the vehicle turns, determining an obstacle's direction of motion relative to the vehicle requires motion parameters such as the vehicle's rotation angle, and since these parameters are often hard to obtain, the obstacle information cannot be acquired either, posing a safety hazard to the driver.
This embodiment, by contrast, obtains a first frame image and a second frame image, detects the feature points of the first frame image, matches them with the feature points at corresponding positions in the second frame image, clusters the successfully matched feature points of the first frame image into feature point classes, and selects the classes representing approach toward the ego vehicle for display as obstacles, thereby detecting the obstacles. Because the method selects and displays the feature point classes approaching the ego vehicle, obstacles can be detected whether the ego vehicle is driving straight or turning. In fact, this embodiment can detect approaching obstacles not only while the ego vehicle moves forward but also while it reverses or is parked; the obstacle detection method provided by this embodiment therefore allows the ego vehicle to detect approaching obstacles in any state, improving the safety of the driver and passengers.
Furthermore, since this embodiment performs obstacle detection using the feature points in the image, the type of the obstacle need not be considered, overcoming the technical disadvantage of the appearance-based detection methods in the prior art.
Method embodiment two:
Referring to Fig. 2, the figure is a flow chart of embodiment two of the obstacle detection method provided by the present invention.
The obstacle detection method provided by the embodiment of the present invention includes the following steps:
Step S201: obtain a first frame image and a second frame image.
For example, Fig. 3 is a schematic diagram of the first frame image. As can be seen from Fig. 3, the image was captured by a vehicle on the road and shows the buildings on both sides, cars, a bus, zebra crossings, vehicle shadows, a fence and so on.
Step S202: detect the feature points of the first frame image.
The prior art generally uses the Harris corner detection algorithm to detect image feature points. The corners detected by that algorithm usually lie where texture is rich, such as object edges, license plate edges and wheel contours; in regions with little texture, or along straight-line structures such as the outline of a car body, corners are hard to detect, so the type and exact position of an obstacle often cannot be judged, which greatly confuses the driver. Moreover, the Harris corner detection algorithm requires many convolution steps and is therefore time-consuming.
To address the technical defects of the prior art, namely the inability to judge the relevant information of an obstacle and the long computation time, this embodiment provides a method of detecting image feature points, shown in Fig. 4; that is, step S202 specifically includes:
Step S202a: Generate a pixel gradient image from the first frame image.
Generating the pixel gradient image from the first frame image first requires calculating a pixel gradient image function. This embodiment provides one algorithm for calculating the pixel gradient image function:
Let the first frame image be It(x, y), whose value is the pixel at the point with coordinates (x, y); let the pixel gradient image function be Mt(x, y), whose value is the gradient value at the point with coordinates (x, y).
Then Mt(x, y) = |∂It(x, y)/∂x| + |∂It(x, y)/∂y|, where ∂It(x, y)/∂x and ∂It(x, y)/∂y respectively denote the derivatives of the first frame image It(x, y) in the x-axis and y-axis directions.
Of course, the algorithm for calculating the pixel gradient image function provided by this embodiment does not limit the present solution; those skilled in the art can design their own algorithm for calculating the pixel gradient image according to the actual application requirements.
After the pixel gradient image function has been calculated, the pixel gradient image is generated according to that function.
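As a non-limiting illustration, the gradient image function described above can be sketched as follows. The central-difference scheme, the function name, and the use of NumPy are assumptions of this sketch, not part of the embodiment:

```python
import numpy as np

def pixel_gradient_image(frame):
    """Compute Mt(x, y) = |dI/dx| + |dI/dy| with simple finite differences.

    `frame` is a 2-D grayscale array; the central-difference scheme used for
    the derivatives is an assumption of this sketch.
    """
    img = frame.astype(np.float64)
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # derivative along x
    dy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # derivative along y
    return np.abs(dx) + np.abs(dy)
```

On a horizontal intensity ramp the interior gradient values are exactly the ramp's slope, which makes the function easy to sanity-check.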
Step S202b: Select the feature points whose gradient values in the pixel gradient image are local maxima in the x-axis or y-axis direction.
A so-called local maximum is a point satisfying:
Mt(x, y) > Mt(x − 1, y) and Mt(x, y) > Mt(x + 1, y),
or
Mt(x, y) > Mt(x, y − 1) and Mt(x, y) > Mt(x, y + 1).
Using only the feature points whose gradient values are local maxima reduces, while preserving the detection quality, the workload of subsequent steps such as feature point matching and clustering, thereby improving the efficiency of obstacle detection.
Step S202c: Set the gradient values of the unselected points to zero and regenerate the pixel gradient image.
Step S202d: Traverse the points of the regenerated pixel gradient image; taking each traversed feature point as the center, count within a preset search window the number of feature points whose gradient values in the regenerated pixel gradient image are greater than a gradient threshold.
The preset search window is a virtual window; those skilled in the art can set its shape and size by themselves, and the present invention does not specifically limit them. In this embodiment, the preset search window is preferably a square with a side length of 5 pixels.
In this embodiment, a gradient threshold is set so as to remove feature points that need not be considered. Such feature points are often far from object edges, or differ very little in brightness from the surrounding feature points; removing them has very little effect on the detection quality, yet greatly improves the efficiency of subsequent steps such as feature point matching and clustering.
Based on experience, this embodiment sets the gradient threshold to 15. Of course, this value does not limit the present solution; those skilled in the art can set it by themselves according to the actual application requirements.
Step S202f: Judge whether the counted number of feature points is greater than or equal to a first threshold; if so, perform step S202e.
The present invention does not specifically limit the value of the first threshold; those skilled in the art can set it by themselves according to the actual application requirements. In this embodiment, the first threshold is the sum of the side length of the preset search window and a control deviation.
The control deviation is set by those skilled in the art according to actual needs: if more feature points are desired within the preset window, the control deviation is set to a positive value; otherwise, it is set to a negative value.
For example, if the side length of the preset search window is 5 pixels and the control deviation is 0, the first threshold is 5 feature points.
Step S202e: Select the point with the maximum gradient value within the preset search window as a feature point of the first frame image.
In practical applications, even after the unconcerned feature points are removed, the number of remaining feature points may still be large; if the subsequent matching and clustering steps were carried out on all of them, the amount of calculation would be very large and time-consuming. For the driver, however, obtaining obstacle information in time is extremely important. To reduce the calculation time of the subsequent steps, this embodiment therefore considers only the search windows in which the number of strong feature points reaches the first threshold, and within each such window selects the feature point with the maximum gradient value as a feature point of the first frame image, so as to improve the efficiency of obstacle detection. Referring to Fig. 5, the figure is a schematic diagram of selecting the point with the maximum gradient inside the preset search window.
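Steps S202b through S202e can be sketched together as follows. The window side of 5 pixels and the gradient threshold of 15 follow the embodiment; the border handling, the requirement that the chosen point be its own window maximum, and the function name are assumptions of this sketch:

```python
import numpy as np

def select_feature_points(M, grad_thresh=15, win=5, first_thresh=5):
    """Sketch of steps S202b-S202e on a pixel gradient image M."""
    H, W = M.shape
    # S202b: keep only local maxima along the x-axis or y-axis direction.
    keep = np.zeros_like(M, dtype=bool)
    keep[:, 1:-1] |= (M[:, 1:-1] > M[:, :-2]) & (M[:, 1:-1] > M[:, 2:])
    keep[1:-1, :] |= (M[1:-1, :] > M[:-2, :]) & (M[1:-1, :] > M[2:, :])
    # S202c: set the gradient of unselected points to zero.
    G = np.where(keep, M, 0.0)
    r = win // 2
    points = []
    # S202d-S202f: count strong points in each window; if the count reaches
    # the first threshold, S202e keeps the window's maximum-gradient point.
    for y in range(r, H - r):
        for x in range(r, W - r):
            if G[y, x] <= 0:
                continue
            patch = G[y - r:y + r + 1, x - r:x + r + 1]
            if np.count_nonzero(patch > grad_thresh) >= first_thresh:
                my, mx = np.unravel_index(np.argmax(patch), patch.shape)
                if (my, mx) == (r, r):  # centre is the window maximum
                    points.append((x, y))
    return points
```

With a synthetic column of five gradient spikes, only the spike whose centred window contains all five strong points survives, which matches the intent of the first-threshold test.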
Step S203: Match the feature points with the feature points at corresponding positions in the second frame image.
Referring to Fig. 6, the figure is a schematic diagram after feature point matching is performed using step S203, in which the red points represent the feature points selected from the first frame image, and the green points are the matched corresponding feature points in the second frame image.
Step S204: Perform feature point matching on k consecutive frame images that include the first frame image, obtaining feature point sequences.
A feature point sequence is the vector set formed by the feature points matched in adjacent frame images, and k is greater than or equal to 4.
In practical applications, the feature point matching result obtained in step S203 may contain some feature point pairs that were wrongly matched because of similar texture, imaging deformation, and similar causes; such feature points would degrade the final obstacle detection quality and should therefore be removed.
In this embodiment, steps S204 to S206 are the specific steps for removing wrongly matched feature points.
Step S205: Judge whether each pair of adjacent vectors in the vector set satisfies at least one of the following two conditions: the length change of the two adjacent vectors is within a first preset range; the angle change of the two adjacent vectors is within a second preset range. If not, perform step S206.
For an image capture device fixed on a vehicle, as an obstacle moves relative to the vehicle, the position changes it presents across multiple frame images have a certain smoothness. Accordingly, a feature point sequence representing obstacle motion should also be smooth, as in Fig. 7(a). Using this principle, non-smooth feature point sequences can be removed. Fig. 7(b) and Fig. 7(c) are two kinds of non-smooth sequence diagrams: Fig. 7(b) shows a sequence containing a vector with a large direction change, and Fig. 7(c) shows a sequence containing a vector with a large length change.
For a feature point sequence {(xi, yi) | t − k ≤ i ≤ t}, let the length change of two adjacent vectors be A and their angle change be B, where, denoting by vi the vector from (xi−1, yi−1) to (xi, yi), A = |(|vi| − |vi+1|) / max(|vi|, |vi+1|)| and B is the absolute difference between the direction angles of vi and vi+1.
If at least one of the following two conditions is satisfied, the feature point sequence is a smooth sequence; the two conditions are:
A ∈ (0, T1]
B ∈ (0, T2]
Here (0, T1] is the first preset range and (0, T2] is the second preset range; the values of T1 and T2 can be set as the case requires, for example T1 = 0.2 and T2 = 80°.
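The smoothness test of step S205 can be sketched as follows. The exact form of the length-change ratio and the wrap-around handling of the angle difference are assumptions of this sketch; the default thresholds follow the example values T1 = 0.2 and T2 = 80°:

```python
import math

def is_smooth(seq, T1=0.2, T2=80.0):
    """Smoothness test for one feature point sequence (step S205 sketch).

    `seq` lists the (x, y) positions of a matched feature point over
    consecutive frames.  Each pair of adjacent displacement vectors must
    satisfy at least one condition: relative length change <= T1, or
    direction change <= T2 degrees.
    """
    vecs = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(seq, seq[1:])]
    for (ux, uy), (vx, vy) in zip(vecs, vecs[1:]):
        lu, lv = math.hypot(ux, uy), math.hypot(vx, vy)
        length_ok = abs(lu - lv) / max(lu, lv, 1e-9) <= T1
        d = abs(math.degrees(math.atan2(uy, ux) - math.atan2(vy, vx))) % 360.0
        angle_ok = min(d, 360.0 - d) <= T2  # wrap angle difference to [0, 180]
        if not (length_ok or angle_ok):
            return False
    return True
```

A sequence of constant displacements is accepted, while a sequence whose motion both turns sharply and changes length abruptly, as in Fig. 7(b) and Fig. 7(c), is rejected.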
Step S206: Remove all the feature points that constitute that feature point sequence.
Step S207: Cluster the successfully matched feature points of the first frame image to form feature point classes.
In this embodiment, after the removal in step S206, the remaining successfully matched feature points are clustered.
This embodiment provides a feature point clustering method; referring to Fig. 8, the method includes:
Step S207a: Choose a feature point of the first frame image that has not yet been numbered or marked as a seed point, and number and mark that seed point.
Step S207b: Search for all neighbour points of the seed point according to preset rules.
In this embodiment, the preset rules are:
The pixel coordinates of the seed point in the first frame image are (x_t^i, y_t^i); the pixel coordinates of the feature point matching the seed point in the second frame image are (x_{t−r}^i, y_{t−r}^i); the pixel coordinates of a neighbour point of the seed point are (x_t^j, y_t^j); and the pixel coordinates of the feature point matching that neighbour point in the second frame image are (x_{t−r}^j, y_{t−r}^j), where j ≠ i and r is the number of frames between the first frame image and the second frame image;
The seed point and a neighbour point need to satisfy at least one of the following relation conditions:
|x_t^j − x_t^i| + |y_t^j − y_t^i| < R (1)
|(|x_t^i − x_{t−r}^i| + |y_t^i − y_{t−r}^i| − |x_t^j − x_{t−r}^j| − |y_t^j − y_{t−r}^j|) / max(|x_t^i − x_{t−r}^i| + |y_t^i − y_{t−r}^i|, |x_t^j − x_{t−r}^j| + |y_t^j − y_{t−r}^j|)| < λ (2)
|arctan((y_t^j − y_{t−r}^j) / (x_t^j − x_{t−r}^j)) − arctan((y_t^i − y_{t−r}^i) / (x_t^i − x_{t−r}^i))| < C (3)
Here R is a distance threshold, λ is a length change threshold, and C is an angle threshold.
It should be understood that the preset rules provided by this embodiment do not limit the technical solution of the present invention; those skilled in the art can design their own rules according to the actual situation.
Step S207c: Number all the neighbour points; the numbering of each neighbour point is the same as the numbering of the seed point.
Step S207d: Choose any unmarked neighbour point, take that neighbour point as the new seed point, and mark it.
Step S207e: Judge whether the first frame image still contains neighbour points that satisfy the preset rules; if so, perform step S207b; if not, perform step S207f.
Step S207f: Classify the feature points with the same numbering into one feature point class.
It should be noted that the neighbour points that satisfy the preset rules in step S207e refer to the set consisting of all neighbour points of the seed point of step S207a, the neighbour points of those neighbour points, and so on.
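The region-growing clustering of steps S207a to S207f can be sketched as follows. The breadth-first traversal order and the default threshold values R, λ, and C are assumptions of this sketch; the three neighbour relations follow the preset rules above:

```python
import math
from collections import deque

def cluster_feature_points(curr, prev, R=20, lam=0.2, C=20.0):
    """Region-growing clustering (steps S207a-S207f sketch).

    `curr` and `prev` are parallel lists of (x, y) positions of the matched
    feature points in the first and second frame images.
    """
    def neighbours_ok(i, j):
        (xi, yi), (xj, yj) = curr[i], curr[j]
        (pxi, pyi), (pxj, pyj) = prev[i], prev[j]
        if abs(xj - xi) + abs(yj - yi) < R:                        # relation (1): nearby
            return True
        li = abs(xi - pxi) + abs(yi - pyi)                         # L1 motion lengths
        lj = abs(xj - pxj) + abs(yj - pyj)
        if max(li, lj) > 0 and abs(li - lj) / max(li, lj) < lam:   # relation (2): similar length
            return True
        d = abs(math.degrees(math.atan2(yj - pyj, xj - pxj)
                             - math.atan2(yi - pyi, xi - pxi))) % 360.0
        return min(d, 360.0 - d) < C                               # relation (3): similar direction

    labels = [None] * len(curr)
    label = 0
    for seed in range(len(curr)):
        if labels[seed] is not None:
            continue
        labels[seed] = label            # S207a: number and mark the seed point
        queue = deque([seed])
        while queue:                    # S207b-S207e: grow over all neighbours
            i = queue.popleft()
            for j in range(len(curr)):
                if labels[j] is None and neighbours_ok(i, j):
                    labels[j] = label   # S207c: neighbour gets the seed's number
                    queue.append(j)
        label += 1
    return labels                       # S207f: equal numbers form one class
```

Two well-separated groups with clearly different motions end up with different labels, while points within each group are merged through relation (1).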
Fig. 9 is a schematic diagram after feature point clustering, in which colors represent feature point classes. It can be seen that different colors represent different objects: for example, the purple feature point class represents the bus, the pink feature point class represents the building on the left, the brown feature point class represents the building on the right, and so on.
Of course, the feature point clustering method provided by this embodiment does not limit the present solution; those skilled in the art can design their own method according to the practical application requirements.
Step S208: Choose the feature point classes that represent obstacles approaching the ego vehicle and display them as representing obstacles, realizing the detection of obstacles.
For an approaching obstacle, its image in the human eye shows an enlarging trend over time, while the image of a receding obstacle shows a shrinking trend over time; imaging in a camera follows a similar rule.
In this embodiment, choosing the feature point classes that represent obstacles approaching the ego vehicle specifically includes:
calculating the distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio between the vector length between any two feature points in a given feature point class of the first frame image and the vector length between the two corresponding feature points at matching positions in the second frame image;
when the acquisition time of the first frame image is earlier than the acquisition time of the second frame image, taking the feature point classes whose distance change ratio is less than 1 as the feature point classes approaching the ego vehicle;
when the acquisition time of the first frame image is later than the acquisition time of the second frame image, taking the feature point classes whose distance change ratio is greater than 1 as the feature point classes approaching the ego vehicle.
For a common plane camera, consider a feature point class {(x_k, y_k) | k ∈ 1, 2, ..., C_k}, where C_k is the number of feature points of the k-th class, and define the distance change ratio as v_k. Then v_k is the ratio of the sum of the distances between all pairs of feature points of the class in the first frame image to the sum of the distances between the corresponding pairs in the second frame image.
When v_k > 1, C_k is a feature point class representing approach to the ego vehicle.
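For the plane-camera case, v_k can be sketched as follows. The use of Euclidean distance between feature points and the function name are assumptions of this sketch:

```python
import math
from itertools import combinations

def distance_change_ratio(curr, prev):
    """v_k for one feature point class (plane-camera sketch).

    `curr` and `prev` hold the class's feature point positions in the first
    and second frame images; v_k is the ratio of the summed pairwise
    distances between the two frames.
    """
    def pair_sum(pts):
        return sum(math.dist(p, q) for p, q in combinations(pts, 2))
    return pair_sum(curr) / pair_sum(prev)
```

When the first frame is the later one, a class whose points spread apart gives v_k > 1, i.e. an obstacle approaching the ego vehicle; doubling every pairwise distance yields exactly v_k = 2.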
For a fisheye camera, imaging deformation breaks the above rule. To solve this problem, the feature points can be projected into the unit sphere space according to the camera imaging model: a point P(x, y, z) in space and its image point (x, y) are related by (x, y) = P(x, y, z), where P is the mapping function describing the camera imaging model, and different cameras have different mapping functions P. Since a point (x, y) in the image can correspond to countless points (x, y, z) in space, this embodiment performs sphere normalization on the spatial point (x, y, z) to obtain the point (xs, ys, zs), where (xs, ys, zs) = (x, y, z) / sqrt(x² + y² + z²), so that (xs, ys, zs) and the point (x, y) in the image are in one-to-one correspondence.
Therefore, for a feature point class {(x_k, y_k) | k ∈ 1, 2, ..., C_k} in a fisheye camera image (C_k being the number of feature points of the k-th class), v_k is computed in the same way as above, but using the distances between the sphere-normalized points (xs, ys, zs) in place of the image distances.
For a fisheye camera it can likewise be concluded that when v_k > 1, C_k is a feature point class representing approach to the ego vehicle.
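The sphere normalization used in the fisheye case can be sketched as follows; the camera-specific mapping function P that recovers (x, y, z) from the image point is not modelled here:

```python
import math

def sphere_normalize(x, y, z):
    """Project a space point onto the unit sphere (fisheye case sketch).

    Returns (xs, ys, zs) = (x, y, z) / ||(x, y, z)||, which is in one-to-one
    correspondence with the image point under the camera's mapping function.
    """
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

The distance change ratio v_k for a fisheye camera is then computed over these normalized coordinates instead of raw image coordinates.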
Referring to Fig. 10, the red points in the figure represent the feature point classes approaching the ego vehicle; it is easy to see from the figure that the obstacles represented by these feature point classes are the bus and the car. By displaying these feature point classes, the driver can be reminded that these obstacles are approaching the ego vehicle, so that the driver can make corresponding judgments and reactions.
In summary, the obstacle detection method provided by this embodiment improves both the quality and the efficiency of obstacle detection, enhancing the safety of the driver and passengers.
Based on the obstacle detection method provided by the above embodiments, an embodiment of the present invention further provides an obstacle detection device, whose operating principle is described in detail below with reference to the accompanying drawings.
Device embodiment one:
Referring to Fig. 11, the figure is a structural block diagram of embodiment one of an obstacle detection device provided by this embodiment.
The obstacle detection device provided by this embodiment includes:
an image acquisition unit 301, a feature point detection unit 302, a feature point matching unit 303, a feature point clustering unit 304, a feature point class choosing unit 305, and a display unit 306;
The image acquisition unit 301 is connected with the feature point detection unit 302, the feature point detection unit 302 is connected with the feature point matching unit 303, the feature point matching unit 303 is connected with the feature point clustering unit 304, the feature point clustering unit 304 is connected with the feature point class choosing unit 305, and the feature point class choosing unit 305 is connected with the display unit 306;
Wherein, the image acquisition unit 301 is configured to obtain a first frame image and a second frame image;
the feature point detection unit 302 is configured to detect the feature points of the first frame image;
the feature point matching unit 303 is configured to match the feature points with the feature points at corresponding positions in the second frame image;
the feature point clustering unit 304 is configured to cluster the successfully matched feature points of the first frame image to form feature point classes;
the feature point class choosing unit 305 is configured to choose the feature point classes representing approach to the ego vehicle;
the display unit 306 is configured to display the chosen feature point classes as representing obstacles, realizing the detection of obstacles.
This embodiment obtains a first frame image and a second frame image, detects the feature points of the first frame image, matches those feature points with the feature points at corresponding positions in the second frame image, clusters the successfully matched feature points of the first frame image to form feature point classes, and chooses the feature point classes representing approach to the ego vehicle to display as representing obstacles, realizing the detection of obstacles. Because the feature point classes representing approach to the ego vehicle are chosen and displayed as representing obstacles, obstacles can be detected whether the ego vehicle is going straight or turning. In fact, this embodiment can detect obstacles not only when the ego vehicle is moving forward but also when it is reversing or parking; therefore, the obstacle detection device provided by this embodiment enables the ego vehicle to detect obstacles approaching it in any state, improving the safety of the driver and passengers.
Furthermore, because this embodiment performs obstacle detection using the feature points in the image, the type of the obstacle need not be considered, overcoming the technical disadvantages of prior-art appearance-based detection methods.
Device embodiment two:
Referring to Fig. 12, the figure is a structural block diagram of embodiment two of an obstacle detection device provided by this embodiment.
On the basis of device embodiment one, the obstacle detection device provided by this embodiment further includes:
a feature point sequence acquisition unit 307, a second judging unit 308, and a feature point removal unit 309;
The feature point matching unit 303 is connected with the feature point sequence acquisition unit 307, the feature point sequence acquisition unit 307 is connected with the second judging unit 308, the second judging unit 308 is connected with the feature point removal unit 309, and the feature point removal unit 309 is connected with the feature point clustering unit 304;
Wherein, the feature point sequence acquisition unit 307 is configured to perform feature point matching on k consecutive frame images including the first frame image to obtain feature point sequences, a feature point sequence being the vector set formed by the feature points matched in adjacent frame images, with k greater than or equal to 4;
the second judging unit 308 is configured to judge whether two adjacent vectors in the vector set satisfy at least one of the following two conditions: the length change of the two adjacent vectors is within a first preset range; the angle change of the two adjacent vectors is within a second preset range; if not, the feature point removal unit is activated;
the feature point removal unit 309 is configured to remove all the feature points constituting the feature point sequence.
In this embodiment, the feature point detection unit 302 includes: a gradient image generation unit 3021, a first feature point choosing unit 3022, an image regeneration unit 3023, a statistics unit 3024, a first judging unit 3025, and a second feature point choosing unit 3026;
The image acquisition unit 301 is connected with the gradient image generation unit 3021, the gradient image generation unit 3021 is connected with the first feature point choosing unit 3022, the first feature point choosing unit 3022 is connected with the image regeneration unit 3023, the image regeneration unit 3023 is connected with the statistics unit 3024, the statistics unit 3024 is connected with the first judging unit 3025, the first judging unit 3025 is connected with the second feature point choosing unit 3026, and the second feature point choosing unit 3026 is connected with the feature point matching unit 303;
Wherein, the gradient image generation unit 3021 is configured to generate a pixel gradient image from the first frame image;
the first feature point choosing unit 3022 is configured to choose the feature points whose gradient values in the pixel gradient image are local maxima in the x-axis or y-axis direction;
the image regeneration unit 3023 is configured to set to zero the gradient values of the points not chosen by the first feature point choosing unit and regenerate the pixel gradient image;
the statistics unit 3024 is configured to traverse the points of the regenerated pixel gradient image and, taking each traversed feature point as the center, count within a preset search window the number of feature points whose gradient values in the regenerated pixel gradient image are greater than a gradient threshold;
the first judging unit 3025 is configured to judge whether the number of feature points is greater than or equal to a first threshold and, if so, activate the second feature point choosing unit;
the second feature point choosing unit 3026 is configured to choose the point with the maximum gradient value within the search window as a feature point of the first frame image.
Referring to Fig. 14, in this embodiment the feature point class choosing unit 305 includes: a distance change ratio calculation unit 3051 and a choosing unit 3052;
The feature point clustering unit 304 is connected with the distance change ratio calculation unit 3051, the distance change ratio calculation unit 3051 is connected with the choosing unit 3052, and the choosing unit 3052 is connected with the display unit 306;
Wherein, the distance change ratio calculation unit 3051 is configured to calculate the distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio between the vector length sum of a given feature point class in the first frame image and the vector length sum of the corresponding feature points at matching positions in the second frame image, where the vector length sum is the sum of the vector lengths between all pairs of feature points in the class;
the choosing unit 3052 is configured to: when the acquisition time of the first frame image is earlier than the acquisition time of the second frame image, take the feature point classes whose distance change ratio is less than 1 as the feature point classes approaching the ego vehicle; or, when the acquisition time of the first frame image is later than the acquisition time of the second frame image, take the feature point classes whose distance change ratio is greater than 1 as the feature point classes approaching the ego vehicle.
The obstacle detection device provided by this embodiment improves both the quality and the efficiency of obstacle detection, enhancing the safety of the driver and passengers.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The method embodiments described above are merely illustrative, and the units and modules described as separate components may or may not be physically separate. Some or all of the units and modules may be selected according to actual needs to achieve the purpose of the embodiment's solution, which those of ordinary skill in the art can understand and implement without creative effort.
The above are only embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. An obstacle detection method, characterized in that the method includes:
obtaining a first frame image and a second frame image;
detecting the feature points of the first frame image;
matching the feature points with the feature points at corresponding positions in the second frame image;
clustering the successfully matched feature points of the first frame image to form feature point classes;
choosing the feature point classes representing approach to the ego vehicle to display as representing obstacles, realizing the detection of obstacles;
wherein choosing the feature point classes representing approach to the ego vehicle specifically includes:
calculating the distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio between the vector length sum of a given feature point class in the first frame image and the vector length sum of the corresponding feature points at matching positions in the second frame image, where the vector length sum is the sum of the vector lengths between all pairs of feature points in the class;
when the acquisition time of the first frame image is earlier than the acquisition time of the second frame image, taking the feature point classes whose distance change ratio is less than 1 as the feature point classes approaching the ego vehicle;
when the acquisition time of the first frame image is later than the acquisition time of the second frame image, taking the feature point classes whose distance change ratio is greater than 1 as the feature point classes approaching the ego vehicle.
2. The obstacle detection method according to claim 1, characterized in that detecting the feature points of the first frame image specifically includes:
generating a pixel gradient image from the first frame image;
choosing the feature points whose gradient values in the pixel gradient image are local maxima in the x-axis or y-axis direction;
setting the gradient values of the unselected points to zero and regenerating the pixel gradient image;
traversing the points of the regenerated pixel gradient image and, taking each traversed feature point as the center, counting within a preset search window the number of feature points whose gradient values in the regenerated pixel gradient image are greater than a gradient threshold;
judging whether the number of feature points is greater than or equal to a first threshold;
if so, choosing the point with the maximum gradient value within the search window as a feature point of the first frame image.
3. The obstacle detection method according to claim 1, characterized in that after matching the feature points with the feature points at corresponding positions in the second frame image, the method further includes:
performing feature point matching on k consecutive frame images including the first frame image to obtain feature point sequences, a feature point sequence being the vector set formed by the feature points matched in adjacent frame images, with k greater than or equal to 4;
judging whether two adjacent vectors in the vector set satisfy at least one of the following two conditions: the length change of the two adjacent vectors is within a first preset range; the angle change of the two adjacent vectors is within a second preset range;
if not, removing all the feature points constituting the feature point sequence.
4. The obstacle detection method according to claim 1, characterized in that clustering the successfully matched feature points of the first frame image specifically includes:
step A: choosing a feature point of the first frame image that has not been numbered or marked as a seed point, and numbering and marking the seed point;
step B: searching for all neighbour points of the seed point according to preset rules;
step C: numbering all the neighbour points, the numbering of each neighbour point being the same as the numbering of the seed point;
step D: choosing any neighbour point, taking that neighbour point as the seed point and marking it, and performing steps B and C until all the neighbour points satisfying the preset rules have been numbered and marked;
step E: classifying the feature points with the same numbering into one feature point class.
5. The obstacle detection method according to claim 4, characterized in that the preset rules are:
the pixel coordinates of the seed point in the first frame image are (x_t^i, y_t^i); the pixel coordinates of the feature point matching the seed point in the second frame image are (x_{t−r}^i, y_{t−r}^i); the pixel coordinates of a neighbour point of the seed point are (x_t^j, y_t^j); the pixel coordinates of the feature point matching the neighbour point in the second frame image are (x_{t−r}^j, y_{t−r}^j), where j ≠ i and r is the number of frames between the first frame image and the second frame image;
the seed point and the neighbour point need to satisfy at least one of the following relation conditions:
<mrow> <mo>|</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>|</mo> <mo>+</mo> <mo>|</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>|</mo> <mo>&lt;</mo> <mi>R</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <mo>|</mo> <mfrac> <mrow> <mo>|</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> <mo>|</mo> <mo>+</mo> <mo>|</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> <mo>|</mo> <mo>-</mo> <mo>|</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> <mo>|</mo> <mo>-</mo> <mo>|</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> <mo>|</mo> </mrow> <mrow> <mi>max</mi> <mrow> <mo>(</mo> <mrow> <mo>|</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> <mo>|</mo> <mo>+</mo> <mo>|</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> <mo>|</mo> <mo>,</mo> <mo>|</mo> <msubsup> <mi>x</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> <mo>|</mo> <mo>+</mo> <mo>|</mo> <msubsup> <mi>y</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> <mo>|</mo> </mrow> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>|</mo> <mo>&lt;</mo> <mi>&amp;lambda;</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow>
<mrow> <mo>|</mo> <mi>arctan</mi> <mfrac> <mrow> <msubsup> <mi>y</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> </mrow> <mrow> <msubsup> <mi>x</mi> <mi>t</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>j</mi> </msubsup> </mrow> </mfrac> <mo>-</mo> <mi>arctan</mi> <mfrac> <mrow> <msubsup> <mi>y</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>y</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> </mrow> <mrow> <msubsup> <mi>x</mi> <mi>t</mi> <mi>i</mi> </msubsup> <mo>-</mo> <msubsup> <mi>x</mi> <mrow> <mi>t</mi> <mo>-</mo> <mi>r</mi> </mrow> <mi>i</mi> </msubsup> </mrow> </mfrac> <mo>|</mo> <mo>&lt;</mo> <mi>C</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow>
Wherein R is a distance threshold, λ is a length-change threshold, and C is an angle threshold.
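Conditions (2) and (3) can be read as two pairwise-consistency tests on the motion vectors of feature points i and j between frames t−r and t. The sketch below is a minimal illustration, assuming points are given as (x, y) tuples; the threshold values for λ and C are hypothetical (the patent does not fix them), and `math.atan2` stands in for the formula's arctan(Δy/Δx) as a quadrant-safe equivalent:

```python
import math

def length_change_ok(pi_t, pi_tr, pj_t, pj_tr, lam=0.2):
    """Condition (2): the relative difference between the L1 motion-vector
    lengths of feature points i and j must stay below lambda."""
    li = abs(pi_t[0] - pi_tr[0]) + abs(pi_t[1] - pi_tr[1])
    lj = abs(pj_t[0] - pj_tr[0]) + abs(pj_t[1] - pj_tr[1])
    return abs((li - lj) / max(li, lj)) < lam

def angle_change_ok(pi_t, pi_tr, pj_t, pj_tr, c=0.3):
    """Condition (3): the difference between the motion-vector angles of
    feature points i and j must stay below the angle threshold C (radians)."""
    ai = math.atan2(pi_t[1] - pi_tr[1], pi_t[0] - pi_tr[0])
    aj = math.atan2(pj_t[1] - pj_tr[1], pj_t[0] - pj_tr[0])
    return abs(aj - ai) < c
```

Two feature points that move almost identically (e.g. i: (0,0)→(10,10), j: (0,0)→(11,10)) pass both tests; a point moving much less far, or in a clearly different direction, fails the corresponding test.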
6. An obstacle detection device, characterised in that the device comprises: an image acquisition unit, a feature point detection unit, a feature point matching unit, a feature point clustering unit, a feature point class selection unit, and a display unit;
The image acquisition unit is connected to the feature point detection unit, the feature point detection unit is connected to the feature point matching unit, the feature point matching unit is connected to the feature point clustering unit, the feature point clustering unit is connected to the feature point class selection unit, and the feature point class selection unit is connected to the display unit;
Wherein the image acquisition unit is configured to acquire a first frame image and a second frame image;
The feature point detection unit is configured to detect feature points of the first frame image;
The feature point matching unit is configured to match the feature points against the feature points at corresponding positions in the second frame image;
The feature point clustering unit is configured to cluster the successfully matched feature points of the first frame image into feature point classes;
The feature point class selection unit is configured to select the feature point classes representing objects approaching the ego vehicle;
The display unit is configured to display the selected feature point classes as obstacles, thereby realizing obstacle detection;
Wherein the feature point class selection unit comprises: a distance change ratio calculation unit and a selection unit;
The feature point clustering unit is connected to the distance change ratio calculation unit, the distance change ratio calculation unit is connected to the selection unit, and the selection unit is connected to the display unit;
Wherein the distance change ratio calculation unit is configured to calculate a distance change ratio between the first frame image and the second frame image, the distance change ratio being the ratio of the sum of vector lengths of a feature point class in the first frame image to the sum of vector lengths of the correspondingly positioned feature point class in the second frame image, where the sum of vector lengths refers to the sum of the lengths of the vectors formed by any two feature points within the feature point class;
The selection unit is configured such that, when the acquisition time of the first frame image is earlier than that of the second frame image, a feature point class whose distance change ratio is less than 1 represents an object approaching the ego vehicle; or, when the acquisition time of the first frame image is later than that of the second frame image, a feature point class whose distance change ratio is greater than 1 represents an object approaching the ego vehicle.
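The distance change ratio of claim 6 can be sketched as follows. The data layout (feature point classes as lists of (x, y) tuples) and the use of Euclidean pairwise distances are illustrative assumptions; only the ratio-against-1 decision rule comes from the claim:

```python
import math
from itertools import combinations

def pairwise_length_sum(points):
    """Sum of the lengths of the vectors formed by every pair of
    feature points in one feature point class."""
    return sum(math.dist(p, q) for p, q in combinations(points, 2))

def is_approaching(class_1, class_2, first_earlier=True):
    """Distance change ratio: pairwise-length sum of the class in the
    first frame divided by that of the corresponding class in the
    second frame.  With the first frame earlier, a ratio below 1 means
    the class spreads out over time, i.e. the object approaches the
    ego vehicle; with the frame order reversed, the test flips."""
    ratio = pairwise_length_sum(class_1) / pairwise_length_sum(class_2)
    return ratio < 1.0 if first_earlier else ratio > 1.0
```

For example, a class that doubles in apparent size between an earlier and a later frame yields a ratio of 0.5 and is classified as approaching.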
7. The obstacle detection device according to claim 6, characterised in that the feature point detection unit comprises: a gradient image generation unit, a first feature point selection unit, an image regeneration unit, a statistics unit, a first judgment unit, and a second feature point selection unit;
The image acquisition unit is connected to the gradient image generation unit, the gradient image generation unit is connected to the first feature point selection unit, the first feature point selection unit is connected to the image regeneration unit, the image regeneration unit is connected to the statistics unit, the statistics unit is connected to the first judgment unit, the first judgment unit is connected to the second feature point selection unit, and the second feature point selection unit is connected to the feature point matching unit;
Wherein the gradient image generation unit is configured to generate a pixel gradient image from the first frame image;
The first feature point selection unit is configured to select, in the pixel gradient image, the points whose gradient values are local maxima in the x-axis or y-axis direction as feature points;
The image regeneration unit is configured to set to zero the gradient values of the points not selected by the first feature point selection unit, and to regenerate the pixel gradient image;
The statistics unit is configured to traverse the points of the regenerated pixel gradient image and, taking each traversed feature point as a center, count within a preset search window the number of feature points whose gradient values in the regenerated pixel gradient image exceed a gradient threshold;
The first judgment unit is configured to judge whether the counted number of feature points is greater than or equal to a first threshold, and if so, to activate the second feature point selection unit;
The second feature point selection unit is configured to select the point with the maximum gradient value within the search window as a feature point of the first frame image.
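The detection pipeline of claim 7 can be sketched as below. The finite-difference gradient, the window size, and the two threshold values are hypothetical stand-ins; the claim only fixes the sequence of steps (gradient image, local-maximum selection, regeneration with zeroed values, windowed counting, and selection of the window maximum):

```python
import numpy as np

def detect_feature_points(img, grad_thresh=10.0, first_thresh=1, win=3):
    """Sketch of claim 7: keep local maxima of the pixel gradient image,
    zero everything else, then for each surviving point count strong
    points in a search window and keep the window's maximum-gradient point."""
    img = img.astype(float)
    # Pixel gradient image (magnitude of forward differences).
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    grad = gx + gy
    # Keep only points that are local maxima along the x- or y-axis ...
    local_max = np.zeros_like(grad, dtype=bool)
    local_max[:, 1:-1] |= (grad[:, 1:-1] >= grad[:, :-2]) & (grad[:, 1:-1] >= grad[:, 2:])
    local_max[1:-1, :] |= (grad[1:-1, :] >= grad[:-2, :]) & (grad[1:-1, :] >= grad[2:, :])
    # ... and regenerate the gradient image with all other values zeroed.
    grad = np.where(local_max, grad, 0.0)
    h, w = grad.shape
    r = win // 2
    points = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            if grad[y, x] <= 0:
                continue
            window = grad[y - r:y + r + 1, x - r:x + r + 1]
            # Count feature points above the gradient threshold in the window,
            # and keep only the window's maximum-gradient point.
            if np.count_nonzero(window > grad_thresh) >= first_thresh:
                if grad[y, x] == window.max():
                    points.append((x, y))
    return points
```

On a blank image the sketch returns no points; on an image containing a high-contrast patch it returns the strongest edge points of that patch.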
8. The obstacle detection device according to claim 6, characterised in that the device further comprises a feature point sequence acquisition unit, a second judgment unit, and a feature point removal unit;
The feature point matching unit is connected to the feature point sequence acquisition unit, the feature point sequence acquisition unit is connected to the second judgment unit, the second judgment unit is connected to the feature point removal unit, and the feature point removal unit is connected to the feature point clustering unit;
Wherein the feature point sequence acquisition unit is configured to perform feature point matching on k consecutive frame images including the first frame image to obtain a feature point sequence, the feature point sequence being the set of vectors formed by the feature points matched in each pair of adjacent frame images, where k is greater than or equal to 4;
The second judgment unit is configured to judge whether every two adjacent vectors in the vector set satisfy at least one of the following two conditions: the length change between the two adjacent vectors lies within a first preset range; the angle change between the two adjacent vectors lies within a second preset range; and if not, to activate the feature point removal unit;
The feature point removal unit is configured to remove all the feature points constituting the feature point sequence.
CN201410520226.4A 2014-09-30 2014-09-30 A kind of obstacle detection method and device Active CN104318206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410520226.4A CN104318206B (en) 2014-09-30 2014-09-30 A kind of obstacle detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410520226.4A CN104318206B (en) 2014-09-30 2014-09-30 A kind of obstacle detection method and device

Publications (2)

Publication Number Publication Date
CN104318206A CN104318206A (en) 2015-01-28
CN104318206B true CN104318206B (en) 2017-09-29

Family

ID=52373436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410520226.4A Active CN104318206B (en) 2014-09-30 2014-09-30 A kind of obstacle detection method and device

Country Status (1)

Country Link
CN (1) CN104318206B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109716255A (en) * 2016-09-18 2019-05-03 深圳市大疆创新科技有限公司 For operating movable object with the method and system of avoiding barrier
CN106815550B (en) * 2016-11-25 2020-02-07 中国科学院自动化研究所 Emergency obstacle avoidance method based on visual fear reaction brain mechanism
CN106791407B (en) * 2016-12-27 2021-02-23 宇龙计算机通信科技(深圳)有限公司 Self-timer control method and system
CN107980138B (en) 2016-12-28 2021-08-17 达闼机器人有限公司 False alarm obstacle detection method and device
CN106983454B (en) * 2017-05-12 2020-11-20 北京小米移动软件有限公司 Sweeping robot sweeping method and sweeping robot
CN109101874B (en) * 2018-06-21 2022-03-18 南京大学 Library robot obstacle identification method based on depth image
CN109559519A (en) * 2018-12-18 2019-04-02 广东中安金狮科创有限公司 Monitoring device and its parking offense detection method, device, readable storage medium storing program for executing
CN111612812B (en) * 2019-02-22 2023-11-03 富士通株式会社 Target object detection method, detection device and electronic equipment
CN111123255A (en) * 2019-12-13 2020-05-08 意诺科技有限公司 Method, device and system for positioning moving target
CN111311656B (en) * 2020-02-21 2023-06-27 辽宁石油化工大学 Moving object detection method and device suitable for vehicle-mounted fisheye camera
CN111738127B (en) * 2020-06-17 2023-08-25 安徽淘云科技股份有限公司 Entity book in-place detection method and device, electronic equipment and storage medium
CN112529335B (en) * 2020-12-25 2021-12-31 广州文远知行科技有限公司 Model detection method, device, equipment and storage medium
CN112733718B (en) * 2021-01-11 2021-08-06 深圳市瑞驰文体发展有限公司 Foreign matter detection-based billiard game cheating identification method and system
CN112734810B (en) * 2021-04-06 2021-07-02 北京三快在线科技有限公司 Obstacle tracking method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408978A (en) * 2008-11-27 2009-04-15 东软集团股份有限公司 Method and apparatus for detecting barrier based on monocular vision
CN103231708A (en) * 2013-04-12 2013-08-07 安徽工业大学 Intelligent vehicle obstacle avoiding method based on binocular vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123722B (en) * 2011-11-18 2016-04-27 株式会社理光 Road object detection method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408978A (en) * 2008-11-27 2009-04-15 东软集团股份有限公司 Method and apparatus for detecting barrier based on monocular vision
CN103231708A (en) * 2013-04-12 2013-08-07 安徽工业大学 Intelligent vehicle obstacle avoiding method based on binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on obstacle avoidance strategy algorithms based on binocular vision; Zhang Huaixiang et al.; Journal of Hangzhou Dianzi University; 2013-08-31; Vol. 33, No. 4; pp. 31-34 *

Also Published As

Publication number Publication date
CN104318206A (en) 2015-01-28

Similar Documents

Publication Publication Date Title
CN104318206B (en) A kind of obstacle detection method and device
US10452931B2 (en) Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
Pinggera et al. Lost and found: detecting small road hazards for self-driving vehicles
CN110765922B (en) Binocular vision object detection obstacle system for AGV
CN104778690B (en) A kind of multi-target orientation method based on camera network
CN103413308B (en) A kind of obstacle detection method and device
US9286678B2 (en) Camera calibration using feature identification
DE112020002697T5 (en) MODELING A VEHICLE ENVIRONMENT WITH CAMERAS
CN112036210B (en) Method and device for detecting obstacle, storage medium and mobile robot
CN104751146B (en) A kind of indoor human body detection method based on 3D point cloud image
EP3398158B1 (en) System and method for identifying target objects
Barrois et al. 3D pose estimation of vehicles using a stereo camera
CN110619674B (en) Three-dimensional augmented reality equipment and method for accident and alarm scene restoration
McDaniel et al. Ground plane identification using LIDAR in forested environments
CN106778633B (en) Pedestrian identification method based on region segmentation
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
Huang et al. Robust lane marking detection under different road conditions
CN111079675A (en) Driving behavior analysis method based on target detection and target tracking
CN111145211B (en) Method for acquiring pixel height of head of upright pedestrian of monocular camera
CN111444891A (en) Unmanned rolling machine operation scene perception system and method based on airborne vision
Huang et al. Lane marking detection based on adaptive threshold segmentation and road classification
Dai et al. A vehicle detection method via symmetry in multi-scale windows
CN107609468B (en) Class optimization aggregation analysis method for active safety detection of unmanned aerial vehicle landing area and application
CN103177248A (en) Rapid pedestrian detection method based on vision
Petrovai et al. Obstacle detection using stereovision for Android-based mobile devices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211118

Address after: 201801 room 1703, No. 888, Moyu South Road, Anting Town, Jiading District, Shanghai

Patentee after: NEUSOFT REACH AUTOMOTIVE TECHNOLOGY (SHANGHAI) Co.,Ltd.

Address before: Hunnan rookie street Shenyang city Liaoning province 110179 No. 2

Patentee before: NEUSOFT Corp.