CN106203381B - Obstacle detection method and device for driving - Google Patents


Info

Publication number
CN106203381B
CN106203381B (application CN201610576441.5A)
Authority
CN
China
Prior art keywords
subregion
point
ttc
frame
observation point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610576441.5A
Other languages
Chinese (zh)
Other versions
CN106203381A (en)
Inventor
余道明
陈强
兴军亮
张康
董健
黄君实
杨浩
龙鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201610576441.5A
Publication of CN106203381A
Application granted
Publication of CN106203381B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses an obstacle detection method and device for driving. The method specifically includes: obtaining a current video frame; performing key-point detection on an obstacle based on the current video frame; partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each resulting subregion; tracking the observation points and selecting, for each subregion, the observation points with the best tracking response; obtaining the centroid of each subregion from its best-response observation points; and calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame. With only a monocular camera, and without needing to obtain vehicle speed information, the invention calculates the obstacle collision time accurately and in real time.

Description

Obstacle detection method and device for driving
Technical field
The present invention relates to the technical field of intelligent driving, and in particular to an obstacle detection method and device for driving.
Background art
With the continuous development of science and technology, intelligent navigation and driving technologies have begun to be developed for motor vehicles. How to detect the obstacles ahead while driving is a very important branch of intelligent navigation. In the prior art there are two schemes. One scheme uses ranging sensors such as lidar and millimeter-wave radar. Its advantage is that it directly obtains an accurate distance between the obstacle and the driving vehicle, the algorithm is simple to implement, and the measurement accuracy is high; however, the equipment is expensive, installation is complex, and the vehicle's appearance must be altered. The other scheme is vision-based ranging, where the vision may be monocular or binocular. The advantage of a vision-based monocular FCWS (forward collision warning system) is that it needs only an ordinary camera, so it is cheap, simple to install, and does not change the vehicle's appearance; its disadvantage is that obtaining an accurate relative distance or TTC (time to contact) places relatively high demands on the algorithm. For example, an existing algorithm, Mobileye's FCW, needs to perform fairly accurate spatial localization of obstacles or pedestrians, and also needs vehicle speed information. The advantage of the binocular vision scheme is that the algorithm is intuitive: the distance between the obstacle and the vehicle is computed from the disparity between the two video streams read in from the two cameras, and the TTC is computed from the change in that distance between consecutive frames; its disadvantage is that the computation is complex, cannot run in real time, and requires dedicated computing equipment.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide an obstacle detection method and device for driving that overcome the above problems or at least partially solve them.
In one aspect, the present invention proposes an obstacle detection method for driving, the method comprising:
obtaining a current video frame;
performing key-point detection on the obstacle based on the current video frame;
partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each resulting subregion;
tracking the observation points and selecting the best-response observation points of each subregion;
obtaining the centroid of each subregion from its best-response observation points;
calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame.
Optionally, the key points include FAST, ORB and/or Harris feature points.
Optionally, partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each subregion, specifically includes:
dividing the key points into 9 subregions according to the regions to which they belong;
judging whether the number of key points in each subregion is greater than 9; if greater than 9, selecting 9 of those key points as observation points, otherwise taking all the key points in the subregion as observation points.
Optionally, tracking the observation points and selecting the best-response observation points of each subregion specifically includes:
judging whether the frame is the first video frame;
if so, returning to obtain a video frame;
otherwise, removing the observation points in each subregion whose tracking response value is below a predetermined threshold, and selecting at most three points with the best response, to be used for obtaining the centroid of each subregion.
Optionally, selecting, in the neighborhood of each removed observation point whose tracking response value was below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, to be used for obtaining the centroid of each subregion.
Optionally, when the frame is not the first video frame, performing de-duplication on the new observation points in each subregion of the next frame according to the observation points removed from each subregion for having a tracking response value below the predetermined threshold.
Optionally, calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame specifically includes:
calculating the mutual distances d(t+1) between the subregion centroids;
calculating the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the collision time of the obstacle as TTC = Δt/(s − 1).
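The formula can be sanity-checked under the standard pinhole-projection assumption (an assumption added here for exposition, not stated in the claims): image-plane distances scale inversely with depth, so for a constant closing speed the scale ratio s yields the collision time directly.

```latex
% Pinhole model: an object at depth Z(t) projects with size proportional to 1/Z(t),
% so the measured scale ratio between frames \Delta t apart is
%   s = \frac{d(t+1)}{d(t)} = \frac{Z(t)}{Z(t+1)}.
% For a constant closing speed v, Z(t+1) = Z(t) - v\,\Delta t, hence
%   s - 1 = \frac{Z(t)}{Z(t) - v\,\Delta t} - 1 = \frac{v\,\Delta t}{Z(t) - v\,\Delta t},
% and therefore
%   \mathrm{TTC} = \frac{Z(t+1)}{v} = \frac{\Delta t}{s - 1}.
```

Note that only image measurements and the frame interval Δt appear, which is why no vehicle speed information is needed.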
Optionally, a final TTC is calculated from the TTC of the current frame image and the TTCs calculated in a predetermined number of preceding image frames or over a predetermined time.
Optionally, it is judged whether the TTC or the final TTC is less than a predetermined time, and if so, an alarm is raised.
In another aspect, the present invention provides an obstacle detection device for driving, the device comprising:
a camera, for obtaining video frames;
a detection module, for performing key-point detection on the obstacle based on the video frame, partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each resulting subregion;
a tracking module, for tracking the observation points and selecting the best-response observation points of each subregion;
a TTC computing unit, for obtaining the centroid of each subregion from its best-response observation points, and calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame.
Optionally, the detection module specifically includes:
a key-point detection unit, for performing key-point detection on the obstacle based on the video frame;
a subregion dividing unit, for dividing the key points into 9 subregions according to the regions to which they belong;
an observation-point selection unit, for judging whether the number of key points in each subregion is greater than 9; if greater than 9, selecting 9 of those key points as observation points, otherwise taking all the key points in the subregion as observation points.
Optionally, the tracking module specifically includes a judging unit and an observation-point processing unit, wherein:
the judging unit judges whether the frame is the first video frame; if so, it directly instructs the camera to obtain a video frame; otherwise it instructs the observation-point processing unit to remove the observation points in each subregion whose tracking response value is below the predetermined threshold and to select at most three points with the best response.
Optionally, the observation-point processing unit is further configured to select, in the neighborhood of each removed observation point whose tracking response value was below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, to be used for obtaining the centroid of each subregion.
Optionally, when the frame is not the first video frame, the observation-point selection unit performs de-duplication on the new observation points in each subregion of the next frame according to the observation points removed from each subregion for having a tracking response value below the predetermined threshold.
Optionally, the TTC computing unit is specifically configured to:
calculate the mutual distances d(t+1) between the subregion centroids;
calculate the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the collision time of the obstacle as TTC = Δt/(s − 1).
Optionally, the device further includes a final-TTC computing unit, for calculating a final TTC from the TTC of the current frame image and the TTCs calculated in a predetermined number of preceding image frames or over a predetermined time.
Optionally, an obstacle collision warning unit is included, for judging whether the TTC or the final TTC is less than a predetermined time and, if so, raising an alarm.
In another aspect, the present invention provides an automatic cruise system, a driving assistance system, and a dashboard camera, each including the above obstacle detection device for driving.
The technical solution provided in the embodiments of the present application has at least the following technical effects or advantages:
This solution proposes an FCW algorithm based on pure vision, which has two major advantages over existing schemes:
1. With only a monocular camera, the present invention does not need to obtain vehicle speed information; that is, it need not connect to the vehicle network to obtain the vehicle speed, which reduces installation cost;
2. When calculating the TTC, the present invention is more accurate and more robust than existing schemes, while lowering the required detection accuracy for the vehicle body contour edge lines.
The above description is only an overview of the technical scheme of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flowchart of the method for detecting an obstacle while driving proposed according to the present invention;
Fig. 2 shows a bounding-box diagram of an obstacle in a video frame;
Fig. 3 shows a bounding-box diagram of an obstacle in the travel lane in a video frame;
Fig. 4 shows the change in bounding-box shape of an obstacle in the travel lane across two consecutive video frames;
Fig. 5 shows the key points of an obstacle in the travel lane in two consecutive video frames;
Fig. 6 shows a flowchart of the method for detecting an obstacle while driving according to a specific embodiment of the present invention;
Fig. 7 shows a structural block diagram of the device for detecting an obstacle while driving proposed according to the present invention;
Fig. 8 shows a structural block diagram of the detection module in the device for detecting an obstacle while driving;
Fig. 9 shows a structural block diagram of the tracking module in the device for detecting an obstacle while driving.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to those skilled in the art.
The present invention proposes an obstacle detection method for driving. As shown in Fig. 1, the method includes:
S1. obtaining a video frame;
S2. performing key-point detection on the obstacle based on the current video frame;
S3. partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each resulting subregion;
S4. tracking the observation points and selecting the best-response observation points of each subregion;
S5. obtaining the centroid of each subregion from its best-response observation points;
S6. calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame.
In the above step S1, the video image is obtained using a monocular camera. Compared with a binocular camera, no subsequent dedicated disparity-image processing module is needed, and a monocular camera is inexpensive, effectively saving cost.
In step S2, obstacles are detected in the acquired video image. If there is an obstacle, its bounding box in the image is output, as shown in Fig. 2, and it is further judged whether the obstacle is in the travel lane of the vehicle; key-point detection need only be performed on an obstacle in the image when it is in the vehicle's travel lane. Whether the camera is mounted on the outside of the vehicle body or inside it (preferably at the rearview mirror), the image it captures when there is no obstacle is fixed, so an image taken with no obstacle present is used as the initial image. While the vehicle is moving, each image frame captured by the camera is compared with the initial image, so the obstacle ahead can be detected and its bounding box output. If no obstacle is detected, the process returns to obtaining a video frame. The positions of the lane lines are defined in the initial image, so it can be judged from the detected lane-line positions whether the obstacle is located in the travel lane of the vehicle, as shown in Fig. 3, which judges the obstacle in the vehicle's travel lane. If it is not located in the travel lane, the obstacle will not impede the vehicle's travel and can be ignored, and the process returns to obtaining an image frame.
In fact, from the bounding-box shape of the detected obstacle (such as a car) in the image, the change in the bounding-box scale between consecutive frames can be computed, as shown in Fig. 4, and the TTC (time to contact) can be estimated from that change in scale. However, because the bounding-box shape reflects the obstacle's shape inaccurately, the estimated TTC may have errors. On this basis, we introduce feature points dispersed over the image and compute the TTC from the feature points.
When performing key-point detection, feature points are selected using the FAST algorithm, ORB (Oriented FAST and Rotated BRIEF), or the Harris algorithm. FAST feature points, ORB feature points and Harris feature points are all local invariant features. ORB is built upon an improved FAST detection operator and an improved rBRIEF descriptor, and since both the FAST and BRIEF algorithms are very fast, ORB has an absolute advantage in computation speed. The feature points extracted by the Harris algorithm are highly repeatable under grayscale changes and geometric transformations, so feature-point detection is efficient and scale-invariant. When detecting a feature point, a point S on the image is compared with the pixels in its neighborhood: if there are N contiguous pixels on a circle around S whose gray values differ from the gray value of S by more than a predetermined threshold in absolute value, then S is a desired feature point. The feature points chosen for the obstacle in the travel lane in the two consecutive frames of Fig. 4 are shown in Fig. 5.
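The segment test just described can be sketched in a few lines. This is a simplified, pure-Python illustration of a FAST-style test (radius-3 circle of 16 pixels, contiguity requirement N = 9), not the optimized production detector:

```python
# Simplified FAST-style segment test: S is a corner if N contiguous pixels
# on a radius-3 circle around it are all brighter or all darker than S
# by more than a threshold.

# Offsets of the 16 pixels on the Bresenham circle of radius 3.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, thresh=20, n=9):
    """img: 2D list of gray values; (x, y): candidate pixel."""
    s = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    signs = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        if p - s > thresh:
            signs.append(1)
        elif s - p > thresh:
            signs.append(-1)
        else:
            signs.append(0)
    # Look for n contiguous equal nonzero signs (circularly, via doubling).
    doubled = signs + signs
    for sign in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == sign else 0
            if run >= n:
                return True
    return False

# A bright 8x8 patch on a dark 16x16 background: the patch corner fires,
# a pixel deep inside the patch does not.
img = [[0] * 16 for _ in range(16)]
for yy in range(4, 12):
    for xx in range(4, 12):
        img[yy][xx] = 255
corner = is_fast_corner(img, 4, 4)   # top-left corner of the patch
flat = is_fast_corner(img, 8, 8)     # interior of the patch
```

In practice a library detector would also apply non-maximum suppression, but this sketch shows why the test responds at object contours (such as the obstacle outline in Fig. 5) and not in flat regions.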
In step S3, the key points are partitioned according to the regions to which they belong, and observation points are selected from the key points of each resulting subregion. To avoid the key points all concentrating in a local area of the image, which would introduce error into the TTC estimate, the application proposes the technical means of dispersing the observation points. As a specific embodiment, the detected key points may be divided into 9 subregions according to where they fall; to control the amount of computation while keeping the result as accurate as possible, preferably at most 9 key points are chosen as observation points in each subregion. If a subregion has fewer than 9 key points, the actual key points are all taken as observation points. Of course, in a specific implementation the division is not limited to 9 subregions: 8, 10 or 12 subregions are also possible, and the number of observation points chosen per subregion is likewise not limited to 9 but may be, for example, 8, 10 or 12. The number chosen trades off, on the one hand, the precision of the TTC computation and, on the other, the speed of the computation.
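As an illustration of this dispersal step, here is a minimal sketch under assumed data structures: key points as (x, y, score) tuples, where "score" stands in for the detector response, with the 3x3 grid and the cap of 9 points per cell corresponding to the preferred embodiment.

```python
# Bin key points inside the obstacle's bounding box into a 3x3 grid and
# keep at most max_pts observation points per subregion.

def select_observation_points(keypoints, box, grid=3, max_pts=9):
    """box = (x0, y0, w, h); returns {cell_index: [keypoints]}."""
    x0, y0, w, h = box
    cells = {}
    for kp in keypoints:
        x, y, _score = kp
        col = min(int((x - x0) * grid / w), grid - 1)
        row = min(int((y - y0) * grid / h), grid - 1)
        cells.setdefault(row * grid + col, []).append(kp)
    # Cap each subregion at max_pts, preferring higher detector scores.
    return {c: sorted(pts, key=lambda p: -p[2])[:max_pts]
            for c, pts in cells.items()}

# 12 key points crowded into the top-left cell of a 90x90 box, one elsewhere:
# the crowded cell is capped at 9, the lone point survives in its own cell.
kps = [(i, i % 5, float(i)) for i in range(12)] + [(80, 80, 1.0)]
obs = select_observation_points(kps, (0, 0, 90, 90))
```

The capping is what keeps a texture-rich corner of the obstacle from dominating the later centroid computation.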
In step S4, the observation points are tracked and the observation points with the best tracking response in each subregion are selected. As a specific embodiment, the observation points are tracked with the KLT tracking algorithm, a widely used tracking algorithm in the prior art. When tracking the observation points, it is first judged whether the image frame is the first frame; if it is, the process returns to obtain the next video frame, because the first frame has no previous frame from which a tracking response could be determined. If it is not the first frame, the best-response points of each subregion are selected, which includes removing points whose tracking response is below a predetermined threshold. As an embodiment, so that the finally retained observation points are dispersed as much as possible and the obstacle's outer contour is obtained accurately, points in the neighborhood of each removed observation point (whose tracking response value was below the predetermined threshold) whose tracking response exceeds the predetermined threshold are chosen, to be used for obtaining the centroid of each subregion.
When the next frame is input and its observation points are chosen, the processing result of the previous frame can be fully exploited: when the frame is not the first video frame, the new observation points in each subregion of the next frame are de-duplicated according to the observation points removed from each subregion for having a tracking response value below the predetermined threshold.
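A minimal sketch of this de-duplication, under the assumption that a "duplicate" is a new detection falling within a small radius of a point already in the observation set (the coordinates and the radius here are illustrative, not from the source):

```python
# Keep only the newly detected points that do not coincide with points
# already being tracked, so the observation set is refreshed without
# duplicates when the next frame is processed.

def dedup_new_points(new_pts, existing_pts, radius=3.0):
    """Keep new points farther than `radius` from every existing point."""
    kept = []
    for nx, ny in new_pts:
        if all((nx - ex) ** 2 + (ny - ey) ** 2 > radius ** 2
               for ex, ey in existing_pts):
            kept.append((nx, ny))
    return kept

tracked = [(10.0, 10.0), (50.0, 50.0)]
candidates = [(11.0, 10.5), (30.0, 30.0), (49.0, 51.0)]
fresh = dedup_new_points(candidates, tracked)  # only the genuinely new point
```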
The extracted or chosen feature points have neither scale invariance nor rotation invariance. Some algorithms, such as ORB, use the intensity centroid method: the moments of a feature point are computed to assign it an orientation and thereby rotation invariance, which is computationally heavy and makes processing untimely. The present application instead uses the robust feature points, in other words those with large frame-to-frame tracking response, to obtain the centroid of each subregion; the computation is small, the processing fast, and the timeliness strong.
In step S5, the centroid of each subregion is obtained from its best-response observation points. As a preferred embodiment, in each of the 9 divided subregions tracking the obstacle, at most 3 points with the best response are selected, and the centroid of each subregion is then computed as the average of the selected best-response observation points weighted by their responses; these centroids can quite accurately reflect the outer contour of the obstacle.
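The response-weighted centroid of one subregion can be sketched as follows; the (x, y, response) point format is an assumption for illustration:

```python
# Centroid of a subregion: average of the (at most) top_k best-response
# observation points, weighted by their tracking responses.

def subregion_centroid(points, top_k=3):
    best = sorted(points, key=lambda p: -p[2])[:top_k]
    total = sum(p[2] for p in best)
    cx = sum(p[0] * p[2] for p in best) / total
    cy = sum(p[1] * p[2] for p in best) / total
    return cx, cy

# Four candidate points; the weakest (response 0.1) is ignored, and the
# strongest (response 2.0) pulls the centroid toward itself.
pts = [(0.0, 0.0, 1.0), (4.0, 0.0, 1.0), (2.0, 3.0, 2.0), (9.0, 9.0, 0.1)]
cx, cy = subregion_centroid(pts)
```

Weighting by response means an unreliable point contributes little even when it is retained, which is the robustness argument made above.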
In step S6, the obstacle collision time TTC is calculated from the mutual distances of the subregion centroids in the current frame and the previous frame. As a preferred embodiment, the centroids are connected to one another and the pairwise distances between them are computed, denoted d(t); for the next frame image they are denoted d(t+1). The ratio of d(t+1) to d(t) is computed as s, and the TTC can then be calculated from the ratio s as TTC = Δt/(s − 1).
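Putting the pieces of step S6 together on assumed data: sum the pairwise centroid distances in each frame, form the ratio s, and apply TTC = Δt/(s − 1). If the centroid pattern expands uniformly by 2% between frames at 30 fps, the TTC works out to (1/30)/0.02, about 1.67 seconds:

```python
# End-to-end sketch of step S6 on illustrative centroid data.
import math

def sum_pairwise_dist(centroids):
    """Sum of distances over all centroid pairs, the d(t) of the text."""
    n = len(centroids)
    return sum(math.dist(centroids[i], centroids[j])
               for i in range(n) for j in range(i + 1, n))

def ttc(prev_centroids, curr_centroids, dt=1.0 / 30.0):
    s = sum_pairwise_dist(curr_centroids) / sum_pairwise_dist(prev_centroids)
    return dt / (s - 1.0)

# As the obstacle nears, its centroid pattern expands uniformly by 2%.
prev = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
curr = [(x * 1.02, y * 1.02) for x, y in prev]
t = ttc(prev, curr)
```

Using the sum of all pairwise distances rather than a single measurement averages out per-centroid noise, consistent with the robustness claim of the scheme.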
After step S6, the camera can be instructed to input the next frame image, and steps S1–S6 are repeated; thus a TTC is computed for every image frame in which an obstacle is detected in the travel lane, iterating continuously. Of course, since a TTC is computed for nearly every input frame and the TTC is nonlinear, the TTC of some frame will inevitably have error. To make the computed TTC more accurate, the TTC of the current frame can be computed as a weighted average of the historical TTCs within a predetermined time or over a predetermined number of image frames; the farther an image is from the current frame, the smaller its influence, so earlier computed TTCs receive lower weighting coefficients, and the TTC computed from the current frame receives the largest weighting coefficient, which may be 1.
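The weighted-average smoothing described here can be sketched as follows. The geometric decay of the weighting coefficients is an illustrative choice: the text only requires that older frames get lower weights and the current frame the largest (e.g. 1).

```python
# Temporal smoothing of per-frame TTCs: weighted average with the current
# frame weighted 1.0 and older frames progressively down-weighted.

def smoothed_ttc(history, decay=0.5):
    """history: per-frame TTCs, oldest first; the newest gets weight 1.0."""
    n = len(history)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # newest -> 1.0
    return sum(t * w for t, w in zip(history, weights)) / sum(weights)

# A noisy spike in one frame only mildly perturbs the smoothed value.
raw = [2.0, 2.0, 8.0, 2.0, 2.0]
final_ttc = smoothed_ttc(raw)
```

With these numbers the raw spike of 8 s is damped to roughly 2.8 s, illustrating how the smoothing suppresses the per-frame error mentioned above.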
As another specific embodiment, in step S4, when removing points whose tracking response is below the predetermined threshold, robust points from the next frame can be used to replace the removed points; that is, observation points with high tracking response between the current frame and the next frame image replace the points whose tracking response fell below the predetermined threshold. This is repeated for every video frame, so that the observed points always retain high robustness.
Since the extracted FAST, ORB and Harris feature points have neither scale invariance nor rotation invariance, some prior-art algorithms use the intensity centroid method: the moments of a feature point are computed to set its orientation and give it rotation invariance; assuming an offset between the feature-point coordinates and the centroid, a vector is determined with the feature point as start and the centroid as end, its direction is taken as the feature-point orientation, and registration is performed according to that vector. The present invention instead exploits the robustness of the feature points and directly computes the subregion centroids by weighted averaging, thereby obtaining the obstacle's outer contour and hence its length. This makes full use of the natural phenomenon that the farther the obstacle is from the vehicle, the smaller its outer contour, and the closer the obstacle, the larger its outer contour; the obstacle collision time is computed from the ratio of the outer-contour lengths in consecutive frames, with high precision, extremely fast computation, and strong real-time performance.
As a preferred embodiment, as shown in Fig. 6, the obstacle detection method for driving executes the following process:
S11. a video frame Ft is read in through the monocular camera;
S12. based on the video frame Ft, obstacles in the image are detected and the bounding box of each obstacle is output;
S13. based on the bounding box of the obstacle, it is judged whether there really is an obstacle; if there is no obstacle, return to step S11; if there is an obstacle, execute step S14;
S14. it is judged whether an obstacle is in the travel lane of the vehicle; if not, return to step S11; if so, execute step S15;
S15. key-point detection is performed on the obstacle in the vehicle's travel lane in the image, and the detected key points are divided into 9 subregions according to where they fall, with at most 9 key points taken as observation points in each subregion;
the key points include FAST, ORB and/or Harris feature points; if a subregion has fewer than 9 key points, the actual number of points is taken, and the centroid of each region can be found by averaging its points;
S16. the observation point of extraction is tracked using KLT track algorithm;
S17. judge whether inputted video frame Ft is first frame, if it is, feedback step S11, otherwise executes step Rapid S18;
S18. remove the observation point that tracking response is less than threshold value, and duplicate removal updates the point that Ft+1 frame newly detects;
S19. most three that each regional choice tracking response is best in 9 regions of the tracking target of division Point is weighted the mass center for finding out three of them point to these three points according to its response;Then calculate 9 mass centers it is mutual away from From d (t);And the ratio between the distance between calculate two frame of front and back i.e.: d (t+1)/d (t)=S, then according to formula: Tm=delta (t)/(s-1), wherein Tm is required TTC;
S20. judge whether TTC is less than predetermined threshold, alarm if it is step S21. is executed, otherwise return step S11.
The present invention solves the TTC computation problem of FCW using a monocular camera, reducing the computational resources of the FCW system, improving the accuracy and robustness of the FCW TTC, and lowering the false-alarm rate of FCW.
In another aspect, the present invention provides an obstacle detection device for driving. As shown in Fig. 7, the device includes:
a camera 100, for obtaining video frames, the camera being a monocular camera;
a detection module 200, for performing key-point detection on the obstacle based on the video frame, partitioning the key points according to the regions to which they belong, and selecting observation points from the key points of each resulting subregion;
a tracking module 300, for tracking the observation points and selecting the best-response observation points of each subregion;
a TTC computing unit 400, for obtaining the centroid of each subregion from its best-response observation points, and calculating the obstacle collision time TTC from the mutual distances of the subregion centroids in the current frame and the previous frame.
As shown in Fig. 8, the detection module 200 specifically includes:
a key-point detection unit 201, for performing key-point detection on the obstacle based on the video frame;
a subregion dividing unit 202, for dividing the key points into 9 subregions according to the regions to which they belong;
an observation-point selection unit 203, for judging whether the number of key points in each subregion is greater than 9; if greater than 9, selecting 9 of those key points as observation points, otherwise taking all the key points in the subregion as observation points.
As shown in Fig. 9, the tracking module 300 specifically includes a judging unit 301 and an observation point processing unit 302, wherein:
the judging unit 301 is for judging whether the current frame is the first video frame; if so, it directly instructs the camera to obtain a video frame; otherwise it instructs the observation point processing unit 302 to remove the observation points whose tracking response value in each subregion is below a predetermined threshold, and to select the at most three best-responding points.
In order to keep the observation points used for computing the centroid as widely dispersed as possible, and thereby keep the obstacle's outline as accurate as possible, the observation point processing unit 302 is further used to choose, in the neighborhoods of the removed observation points whose tracking response value is below the predetermined threshold, points whose tracking response is above the predetermined threshold, to be used for obtaining the centroid of each subregion.
In order to make full use of the processing result of the previous frame, and to make the observation points as robust as possible, the observation point selection unit, when the current frame is not the first video frame, performs deduplication on the new observation points in each subregion of the next frame according to the removed observation points whose tracking response value was below the predetermined threshold.
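A minimal sketch of the per-subregion filtering performed by the observation point processing unit: drop points below the response threshold, keep the at most three best responders. The dictionary layout and the `(x, y, response)` tuples are illustrative assumptions — the patent does not fix how tracking responses are represented or which tracker produces them.

```python
def filter_observation_points(tracked, threshold, max_keep=3):
    """Per subregion, discard tracked observation points whose tracking
    response is below `threshold`, then retain the (at most) `max_keep`
    best-responding points for the centroid computation.

    tracked: {subregion_index: [(x, y, response), ...]}
    Returns (kept, removed), both keyed by subregion index.
    """
    kept, removed = {}, {}
    for region, pts in tracked.items():
        good = [p for p in pts if p[2] >= threshold]
        removed[region] = [p for p in pts if p[2] < threshold]
        # best-responding points first, then truncate to max_keep
        good.sort(key=lambda p: p[2], reverse=True)
        kept[region] = good[:max_keep]
    return kept, removed
```

The `removed` map is returned as well because, per the description, the neighborhoods of removed points are searched for better-responding replacements and are also consulted when deduplicating the next frame's observation points.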
The TTC computing unit is specifically used to:
compute the mutual distances d(t+1) between the subregion centroids;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle's time-to-collision using TTC = Δt/(s − 1), where Δt is the inter-frame interval.
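The three steps above can be illustrated as follows, assuming Δt is the inter-frame interval and taking the mean pairwise centroid distance as d(t); both choices are assumptions, since the patent does not specify how the mutual distances are aggregated into a single scale.

```python
from itertools import combinations
from math import hypot

def compute_ttc(centroids_prev, centroids_curr, frame_interval):
    """Estimate time-to-collision from the expansion of the pairwise
    distances between subregion centroids across two frames:
        s   = d(t+1) / d(t)     (ratio of mean pairwise distances)
        TTC = dt / (s - 1)      (dt = inter-frame interval)
    An approaching obstacle grows in the image, so s > 1 and TTC > 0.
    """
    def mean_pairwise_distance(pts):
        dists = [hypot(a[0] - b[0], a[1] - b[1])
                 for a, b in combinations(pts, 2)]
        return sum(dists) / len(dists)

    d_prev = mean_pairwise_distance(centroids_prev)
    d_curr = mean_pairwise_distance(centroids_curr)
    s = d_curr / d_prev
    if s <= 1.0:
        return float('inf')  # not expanding: no predicted collision
    return frame_interval / (s - 1.0)
```

For example, centroids that expand by 10% between frames captured 40 ms apart give s = 1.1 and TTC = 0.04/0.1 = 0.4 s.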
As a preferred implementation, the device further includes a final TTC computing unit, for computing a final TTC from the current frame image and the TTCs computed for a predetermined number of, or a predetermined time window of, preceding image frames.
The obstacle detection device further includes an obstacle collision warning unit, for judging whether the TTC and the final TTC are less than a predetermined time; as long as either of them is less than the predetermined time, an alarm is raised.
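The final-TTC smoothing and the warning rule described above can be sketched as follows. The median over a sliding window is an assumption for illustration — the patent only specifies combining the TTCs of a predetermined number (or time span) of preceding frames and alarming when either value falls below the warning time.

```python
from collections import deque

class CollisionWarner:
    """Smooth the per-frame TTC over a sliding window of recent frames
    and raise an alarm when either the instantaneous TTC or the smoothed
    'final' TTC drops below the warning time."""

    def __init__(self, warn_time, window=5):
        self.warn_time = warn_time
        self.history = deque(maxlen=window)  # recent per-frame TTCs

    def update(self, ttc):
        """Feed one per-frame TTC; return (final_ttc, alarm)."""
        self.history.append(ttc)
        ordered = sorted(self.history)
        final_ttc = ordered[len(ordered) // 2]  # median of recent TTCs
        alarm = ttc < self.warn_time or final_ttc < self.warn_time
        return final_ttc, alarm
```

The median makes the final TTC robust to a single noisy frame, while the "either value" rule preserves the fast reaction of the instantaneous TTC — matching the specification's intent that one low value suffices to warn.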
In another aspect, the present invention also provides an automatic cruise system including the in-vehicle obstacle detection device described above, for decelerating and braking the vehicle when the computed TTC is below the predetermined time; the present invention also provides a driving assistance system or a dashboard camera, for giving a warning prompt when the computed TTC is below the predetermined time; and the present invention further provides a smart rear-view mirror including the in-vehicle obstacle detection device described above, for giving a warning prompt when the computed TTC is below the predetermined time.
Because the present invention uses a monocular camera, the proposed obstacle detection device can be mounted outside the vehicle body without affecting the vehicle's appearance and is easy to install; it can also be mounted inside the vehicle body, preferably at the rear-view mirror position.
The technical solution provided in the embodiments of the present application has at least the following technical effects or advantages:
This scheme proposes an FCW algorithm based on pure vision, which has two major advantages over existing schemes: 1. the present invention does not need to obtain vehicle speed information, that is, it does not need to connect to the vehicle network to obtain vehicle speed, which reduces installation cost; 2. when computing the TTC, the present invention is more accurate and more robust than existing schemes, while relaxing the precision required of the detection of the vehicle body's contour edge lines.
Since the electronic equipment introduced in this embodiment is a device used to implement the method of the embodiments of the present application, based on the method described in the embodiments of the present application, those skilled in the art can understand the specific implementation of the electronic equipment of this embodiment and its various variations; therefore, how the electronic equipment implements the method of the embodiments of the present application is not discussed in detail here. Any device used by those skilled in the art to implement the method of the embodiments of the present application falls within the scope to be protected by the present application.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein; the structure required to construct such systems is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of a specific language is given to disclose the best mode of carrying out the invention.
In the specification provided here, numerous specific details are set forth. However, it should be understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components in an embodiment can be combined into one module, unit, or component, and they can furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or apparatus so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
In addition, those skilled in the art will appreciate that although some embodiments herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all components of the gateway, proxy server, or system according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
The present invention provides A1, an in-vehicle obstacle detection method, characterized in that the method comprises:
obtaining a current video frame;
performing keypoint detection on an obstacle based on the current video frame;
dividing the keypoints according to the regions to which they belong, and choosing observation points from the keypoints of each divided subregion;
tracking the observation points and choosing a predetermined number of the best-responding observation points in each subregion;
obtaining the centroid of each subregion from its best-responding observation points;
computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame.
A2. The method according to A1, further characterized in that the keypoints comprise FAST, ORB, and/or Harris feature points.
A3. The method according to A1 or A2, further characterized in that dividing according to the regions to which the keypoints belong and choosing observation points from the keypoints of each subregion specifically comprises:
dividing the keypoints into 9 subregions according to the regions to which they belong;
judging whether the number of keypoints in each subregion is greater than 9; if so, choosing 9 of them as observation points; otherwise using all keypoints in the subregion as observation points.
A4. The method according to any one of A1 to A3, further characterized in that tracking the observation points and choosing the best-responding observation points of each subregion specifically comprises:
judging whether the current frame is the first video frame;
if so, returning to obtain a video frame;
otherwise, removing the observation points whose tracking response value in each subregion is below a predetermined threshold, and selecting the at most three best-responding points, to be used for obtaining the centroid of each subregion.
A5. The method according to A4, further characterized in that, in the neighborhoods of the removed observation points whose tracking response value in each subregion is below the predetermined threshold, points whose tracking response is above the predetermined threshold are chosen, to be used for obtaining the centroid of each subregion.
A6. The method according to A4 or A5, further characterized in that, when the current frame is not the first video frame, deduplication is performed on the new observation points in each subregion of the next frame according to the removed observation points whose tracking response value was below the predetermined threshold.
A7. The method according to any one of A1 to A6, further characterized in that computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame specifically comprises:
computing the mutual distances d(t+1) between the subregion centroids;
computing the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the obstacle's time-to-collision using TTC = Δt/(s − 1).
A8. The method according to any one of A1 to A7, further characterized in that a final TTC is computed from the current frame image and the TTCs computed for a predetermined number of, or a predetermined time window of, preceding image frames.
A9. The method according to A7 or A8, further characterized by judging whether the TTC of the current frame or the final TTC is less than a predetermined time, and if so, raising an alarm.
B10. An in-vehicle obstacle detection device, characterized in that the device comprises:
a camera, for obtaining video frames;
a detection module, for performing keypoint detection on an obstacle based on the video frame, dividing the keypoints according to the regions to which they belong, and choosing observation points from the keypoints of each divided subregion;
a tracking module, for tracking the observation points and choosing a predetermined number of the best-responding observation points in each subregion;
a TTC computing unit, for obtaining the centroid of each subregion from its best-responding observation points, and computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame.
B11. The detection device according to B10, further characterized in that the keypoints comprise FAST, ORB, and/or Harris feature points.
B12. The device according to B10 or B11, further characterized in that the detection module specifically comprises:
a keypoint detection unit, for performing keypoint detection on an obstacle based on the video frame;
a subregion division unit, for dividing the keypoints into 9 subregions according to the regions to which they belong;
an observation point selection unit, for judging whether the number of keypoints in each subregion is greater than 9; if so, 9 of them are chosen as observation points; otherwise all keypoints in the subregion are used as observation points.
B13. The device according to any one of B10 to B12, further characterized in that the tracking module specifically comprises a judging unit and an observation point processing unit, wherein:
the judging unit is for judging whether the current frame is the first video frame; if so, it directly instructs the camera to obtain a video frame; otherwise it instructs the observation point processing unit to remove the observation points whose tracking response value in each subregion is below a predetermined threshold, and to select the at most three best-responding points.
B14. The device according to B13, further characterized in that the observation point processing unit is further used to choose, in the neighborhoods of the removed observation points whose tracking response value in each subregion is below the predetermined threshold, points whose tracking response is above the predetermined threshold, to be used for obtaining the centroid of each subregion.
B15. The device according to B13 or B14, further characterized in that the observation point selection unit, when the current frame is not the first video frame, performs deduplication on the new observation points in each subregion of the next frame according to the removed observation points whose tracking response value was below the predetermined threshold.
B16. The device according to any one of B10 to B15, further characterized in that the TTC computing unit is specifically used to:
compute the mutual distances d(t+1) between the subregion centroids;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle's time-to-collision using TTC = Δt/(s − 1).
B17. The device according to B16, further characterized in that the device further comprises a final TTC computing unit, for computing a final TTC from the current frame image and the TTCs computed for a predetermined number of, or a predetermined time window of, preceding image frames.
B18. The device according to B16 or B17, further characterized by an obstacle collision warning unit, for judging whether the TTC is less than a predetermined time, and if so, raising an alarm.
C19. An obstacle detection device in an automatic cruise system, comprising the in-vehicle obstacle detection device of any one of B10 to B17.
D20. An obstacle detection device in a driving assistance system, comprising the in-vehicle obstacle detection device of any one of B10 to B17.
E21. An obstacle detection device in a dashboard camera, comprising the in-vehicle obstacle detection device of any one of B10 to B17.

Claims (17)

1. An in-vehicle obstacle detection method, characterized in that the method comprises:
obtaining a current video frame;
performing keypoint detection on an obstacle based on the current video frame;
dividing the keypoints according to the regions to which they belong, and choosing observation points from the keypoints of each divided subregion;
tracking the observation points and choosing, in the neighborhoods of the observation points removed from each subregion for having a tracking response value below a predetermined threshold, observation points whose tracking response is above the predetermined threshold;
obtaining the centroid of each subregion from the observation points so chosen in the neighborhoods of the removed observation points of each subregion;
computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame.
2. The method according to claim 1, further characterized in that the keypoints comprise at least one of FAST, ORB, and Harris feature points.
3. The method according to claim 1, further characterized in that dividing according to the regions to which the keypoints belong and choosing observation points from the keypoints of each subregion specifically comprises:
dividing the region to which the keypoints belong into 9 subregions;
judging whether the number of keypoints in each subregion is greater than 9; if so, choosing 9 of them as observation points; otherwise using all keypoints in the subregion as observation points.
4. The method according to any one of claims 1 to 3, further characterized in that tracking the observation points and choosing, in the neighborhoods of the observation points removed from each subregion for having a tracking response value below the predetermined threshold, observation points whose tracking response is above the predetermined threshold, specifically comprises:
judging whether the current frame is the first video frame;
if so, returning to obtain a video frame;
otherwise, removing the observation points whose tracking response value in each subregion is below the predetermined threshold, and selecting at most three points whose response is above the predetermined threshold, to be used for obtaining the centroid of each subregion.
5. The method according to any one of claims 1 to 3, further characterized in that computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame specifically comprises:
computing the mutual distances d(t+1) between the subregion centroids of the current frame;
computing the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the obstacle's time-to-collision using TTC = Δt/(s − 1).
6. The method according to any one of claims 1 to 3, further characterized in that a final TTC is computed from the current frame image and the TTCs computed for a predetermined number of preceding image frames, or from the current frame image and the TTCs computed for preceding image frames within a predetermined time.
7. The method according to claim 6, further characterized by judging whether the TTC of the current frame or the final TTC is less than a predetermined time, and if so, raising an alarm.
8. An in-vehicle obstacle detection device, characterized in that the device comprises:
a camera, for obtaining video frames;
a detection module, for performing keypoint detection on an obstacle based on the video frame, dividing the keypoints according to the regions to which they belong, and choosing observation points from the keypoints of each divided subregion;
a tracking module, for tracking the observation points and choosing, in the neighborhoods of the observation points removed from each subregion for having a tracking response value below a predetermined threshold, observation points whose tracking response is above the predetermined threshold;
a TTC computing unit, for obtaining the centroid of each subregion from the observation points so chosen in the neighborhoods of the removed observation points of each subregion, and computing the obstacle time-to-collision TTC from the mutual distances between the subregion centroids in the current frame and the previous frame.
9. The device according to claim 8, further characterized in that the keypoints comprise at least one of FAST, ORB, and Harris feature points.
10. The device according to claim 8, further characterized in that the detection module specifically comprises:
a keypoint detection unit, for performing keypoint detection on an obstacle based on the video frame;
a subregion division unit, for dividing the region to which the keypoints belong into 9 subregions;
an observation point selection unit, for judging whether the number of keypoints in each subregion is greater than 9; if so, 9 of them are chosen as observation points; otherwise all keypoints in the subregion are used as observation points.
11. The device according to any one of claims 8 to 10, further characterized in that the tracking module specifically comprises a judging unit and an observation point processing unit, wherein:
the judging unit is for judging whether the current frame is the first video frame; if so, it directly instructs the camera to obtain a video frame; otherwise it instructs the observation point processing unit to remove the observation points whose tracking response value in each subregion is below the predetermined threshold, and to select at most three points whose response is above the predetermined threshold.
12. The device according to any one of claims 8 to 10, further characterized in that the TTC computing unit is specifically used to:
compute the mutual distances d(t+1) between the subregion centroids of the current frame;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle's time-to-collision using TTC = Δt/(s − 1).
13. The device according to any one of claims 8 to 10, further characterized in that the device further comprises a final TTC computing unit, for computing a final TTC from the current frame image and the TTCs computed for a predetermined number of preceding image frames, or from the current frame image and the TTCs computed for preceding image frames within a predetermined time.
14. The device according to claim 13, further characterized by an obstacle collision warning unit, for judging whether the TTC of the current frame or the final TTC is less than a predetermined time, and if so, raising an alarm.
15. An obstacle detection device in an automatic cruise system, comprising the in-vehicle obstacle detection device of any one of claims 8 to 14.
16. An obstacle detection device in a driving assistance system, comprising the in-vehicle obstacle detection device of any one of claims 8 to 14.
17. An obstacle detection device in a dashboard camera, comprising the in-vehicle obstacle detection device of any one of claims 8 to 14.
CN201610576441.5A 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving Active CN106203381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610576441.5A CN106203381B (en) 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610576441.5A CN106203381B (en) 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving

Publications (2)

Publication Number Publication Date
CN106203381A CN106203381A (en) 2016-12-07
CN106203381B true CN106203381B (en) 2019-05-31

Family

ID=57491114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610576441.5A Active CN106203381B (en) 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving

Country Status (1)

Country Link
CN (1) CN106203381B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6764378B2 (en) * 2017-07-26 2020-09-30 株式会社Subaru External environment recognition device
EP3576007B1 (en) * 2018-05-28 2023-09-20 Aptiv Technologies Limited Method for validation of obstacle candidate
CN109050391B (en) * 2018-07-26 2020-06-05 北京经纬恒润科技有限公司 High beam control method and device
CN110021006B (en) * 2018-09-06 2023-11-17 浙江大学台州研究院 Device and method for detecting whether automobile parts are installed or not
CN111383340B (en) * 2018-12-28 2023-10-17 成都皓图智能科技有限责任公司 Background filtering method, device and system based on 3D image
CN109583432A (en) * 2019-01-04 2019-04-05 广东翼卡车联网服务有限公司 A kind of vehicle blind zone intelligent early-warning method based on image recognition
CN110658827B (en) * 2019-10-25 2020-06-23 嘉应学院 Transport vehicle automatic guiding system and method based on Internet of things
CN111060125B (en) * 2019-12-30 2021-09-17 深圳一清创新科技有限公司 Collision detection method and device, computer equipment and storage medium
CN112528793B (en) * 2020-12-03 2024-03-12 上海汽车集团股份有限公司 Method and device for eliminating jitter of obstacle detection frame of vehicle
CN115547060B (en) * 2022-10-11 2023-07-25 上海理工大学 Intersection traffic conflict index calculation method considering vehicle contour

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842028A (en) * 2011-03-22 2012-12-26 富士重工业株式会社 Vehicle exterior monitoring device and vehicle exterior monitoring method
CN103403779A (en) * 2011-03-04 2013-11-20 日立汽车系统株式会社 Vehicle-mounted camera and vehicle-mounted camera system
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device
CN103732480A (en) * 2011-06-17 2014-04-16 罗伯特·博世有限公司 Method and device for assisting a driver in performing lateral guidance of a vehicle on a carriageway

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5690688B2 (en) * 2011-09-15 2015-03-25 クラリオン株式会社 Outside world recognition method, apparatus, and vehicle system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103403779A (en) * 2011-03-04 2013-11-20 日立汽车系统株式会社 Vehicle-mounted camera and vehicle-mounted camera system
CN102842028A (en) * 2011-03-22 2012-12-26 富士重工业株式会社 Vehicle exterior monitoring device and vehicle exterior monitoring method
CN103732480A (en) * 2011-06-17 2014-04-16 罗伯特·博世有限公司 Method and device for assisting a driver in performing lateral guidance of a vehicle on a carriageway
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device

Also Published As

Publication number Publication date
CN106203381A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106203381B (en) Obstacle detection method and device in a kind of driving
CN112292711B (en) Associating LIDAR data and image data
CN108725440B (en) Forward collision control method and apparatus, electronic device, program, and medium
CN106991389B (en) Device and method for determining road edge
Zielke et al. Intensity and edge-based symmetry detection with an application to car-following
CN102194239B (en) For the treatment of the method and system of view data
Shin et al. Visual lane analysis and higher-order tasks: a concise review
CN106054174A (en) Fusion method for cross traffic application using radars and camera
CN106054191A (en) Wheel detection and its application in object tracking and sensor registration
CN102510734A (en) Pupil detection device and pupil detection method
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
US20120050496A1 (en) Moving Obstacle Detection Using Images
CN113261010A (en) Object trajectory-based multi-modal sensor fusion method for cross-domain correspondence
CN112906777A (en) Target detection method and device, electronic equipment and storage medium
CN110345924A (en) A kind of method and apparatus that distance obtains
Lee et al. Temporally consistent road surface profile estimation using stereo vision
Teoh et al. A reliability point and kalman filter-based vehicle tracking technique
CN106408593A (en) Video-based vehicle tracking method and device
Min et al. Motion detection using binocular image flow in dynamic scenes
CN111144415A (en) Method for detecting micro pedestrian target
CN104931024B (en) Obstacle detector
CN116109669A (en) Target tracking method and system and electronic equipment
Rabe Detection of moving objects by spatio-temporal motion analysis
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
Ando et al. Direct imaging of stabilized optical flow and possible anomalies from moving vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: Room 801, 8th floor, No. 104, floors 1-19, building 2, yard 6, Jiuxianqiao Road, Chaoyang District, Beijing 100015

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.