CN106203381A - Obstacle detection method and device for vehicle driving - Google Patents


Info

Publication number
CN106203381A
CN106203381A (application CN201610576441.5A; granted as CN106203381B)
Authority
CN
China
Prior art keywords
frame
subregion
observation point
TTC
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610576441.5A
Other languages
Chinese (zh)
Other versions
CN106203381B (en)
Inventor
余道明
陈强
兴军亮
张康
董健
黄君实
杨浩
龙鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201610576441.5A
Publication of CN106203381A
Application granted
Publication of CN106203381B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses an obstacle detection method and device for vehicle driving. The method comprises: acquiring a current video frame; performing key-point detection on an obstacle based on the current video frame; dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each divided subregion; tracking the observation points and selecting, for each subregion, the observation points with the best tracking response; obtaining the centroid of each subregion from those best-response observation points; and computing the obstacle time-to-collision (TTC) from the distances between subregion centroids in the current frame and the previous frame. With only a monocular camera, and without any vehicle-speed information, the invention computes the obstacle time-to-collision accurately and in real time.

Description

Obstacle detection method and device for vehicle driving
Technical field
The present invention relates to the field of intelligent driving, and in particular to an obstacle detection method and device for vehicle driving.
Background
With the development of science and technology, intelligent navigation and driving technologies for motor vehicles have begun to be developed. Detecting obstacles ahead of the vehicle is a very important branch of intelligent navigation and driving. In the prior art there are two kinds of schemes. One scheme uses ranging sensors such as laser radar or millimeter-wave radar. Its advantage is that it directly obtains an accurate distance between the obstacle and the vehicle; the algorithm is simple to implement and the measurement precision is high. However, the equipment is expensive, installation is complex, and the vehicle's appearance is altered. The other scheme is vision-based ranging, which divides into monocular and binocular approaches. The advantage of a vision-based monocular FCWS (forward collision warning system) is that it needs only one ordinary camera, so it is cheap, simple to install, and does not alter the vehicle's appearance; its disadvantage is that obtaining an accurate relative distance or TTC (time to contact) places high demands on the algorithm. An existing algorithm of this kind is Mobileye's FCW, which requires fairly accurate spatial localization of obstacles or pedestrians and also requires vehicle-speed information. The advantage of the binocular vision scheme is that the method is intuitive: the distance between the obstacle and the vehicle is computed from the disparity between the two video streams read from the two cameras, and TTC is computed from the change in distance between consecutive frames. Its disadvantage is that the algorithm is computationally complex, cannot run in real time, and requires special computing equipment.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide an obstacle detection method and device for vehicle driving that overcome, or at least partly solve, the above problems.
In one aspect, the present invention proposes an obstacle detection method for vehicle driving, the method comprising:
acquiring a current video frame;
performing key-point detection on an obstacle based on the current video frame;
dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each divided subregion;
tracking the observation points and selecting, for each subregion, the observation points with the best tracking response;
obtaining the centroid of each subregion from the best-response observation points of that subregion;
computing the obstacle time-to-collision TTC from the distances between subregion centroids in the current frame and the previous frame.
Optionally, the key points include FAST, ORB and/or Harris feature points.
Optionally, dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each subregion, specifically includes:
dividing the key points into 9 subregions according to the regions they belong to;
judging whether the number of key points in each subregion is greater than 9; if greater than 9, choosing 9 of the key points as observation points, otherwise taking all key points in the subregion as observation points.
Optionally, tracking the observation points and selecting the best-response observation points of each subregion specifically includes:
judging whether the frame is the first video frame;
if so, returning to acquire a video frame;
otherwise, removing from each subregion the observation points whose tracking response is below a predetermined threshold, and selecting at most three points with the best response, to be used for obtaining the centroid of each subregion.
Optionally, for each removed observation point whose tracking response is below the predetermined threshold, a point in its neighborhood whose tracking response is above the predetermined threshold is chosen, to be used for obtaining the centroid of each subregion.
Optionally, when the frame is not the first video frame, the new observation points of each subregion of the next frame are deduplicated against the removed observation points whose tracking response was below the predetermined threshold.
Optionally, computing the obstacle time-to-collision TTC from the distances between subregion centroids in the current frame and the previous frame specifically includes:
computing the pairwise distances d(t+1) between the subregion centroids;
computing the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the obstacle time-to-collision as TTC = Δt/(s - 1), where Δt is the inter-frame interval.
Optionally, a final TTC is computed from the TTCs calculated for the current frame and for a predetermined number, or a predetermined time span, of preceding image frames.
Optionally, it is judged whether the TTC or the final TTC is below a predetermined time, and if so, an alarm is raised.
In another aspect, the present invention provides an obstacle detection device for vehicle driving, the device comprising:
a camera, for acquiring video frames;
a detection module, for performing key-point detection on an obstacle based on the video frame, dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each divided subregion;
a tracking module, for tracking the observation points and selecting the best-response observation points of each subregion;
a TTC computing unit, for obtaining the centroid of each subregion from the best-response observation points of that subregion, and computing the obstacle time-to-collision TTC from the distances between subregion centroids in the current frame and the previous frame.
Optionally, the detection module specifically includes:
a key-point detection unit, for performing key-point detection on the obstacle based on the video frame;
a subregion division unit, for dividing the key points into 9 subregions according to the regions they belong to;
an observation-point selection unit, for judging whether the number of key points in each subregion is greater than 9; if greater than 9, choosing 9 of the key points as observation points, otherwise taking all key points in the subregion as observation points.
Optionally, the tracking module specifically includes a judging unit and an observation-point processing unit, wherein:
the judging unit judges whether the frame is the first video frame; if so, it directly instructs the camera to acquire a video frame; otherwise it instructs the observation-point processing unit to remove from each subregion the observation points whose tracking response is below a predetermined threshold and to select at most three points with the best response.
Optionally, the observation-point processing unit is further configured to choose, from the neighborhood of each removed below-threshold observation point, a point whose tracking response is above the predetermined threshold, to be used for obtaining the centroid of each subregion.
Optionally, when the frame is not the first video frame, the observation-point selection unit deduplicates the new observation points of each subregion of the next frame against the removed observation points whose tracking response was below the predetermined threshold.
Optionally, the TTC computing unit is specifically configured to:
compute the pairwise distances d(t+1) between the subregion centroids;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle time-to-collision as TTC = Δt/(s - 1).
Optionally, the device further includes a final-TTC computing unit, for computing a final TTC from the TTCs calculated for the current frame and for a predetermined number, or a predetermined time span, of preceding image frames.
Optionally, an obstacle collision warning unit judges whether the TTC or the final TTC is below a predetermined time, and if so, raises an alarm.
In yet another aspect, the present invention provides an automatic cruise system, a driving assistance system, and a driving recorder, each including the above obstacle detection device.
The technical scheme provided in the embodiments of the present application has at least the following technical effects or advantages:
This scheme proposes an FCW algorithm based on pure vision, which has two major advantages over existing schemes:
1. With a monocular-camera configuration, the present invention does not need vehicle-speed information; there is therefore no need to connect to the vehicle network to obtain speed, which reduces installation cost.
2. When computing TTC, the present invention is more accurate and more robust than existing schemes, and at the same time it relaxes the required detection precision of the vehicle-body contour edges.
The above is only an overview of the technical solution of the present invention. In order to understand the technical means of the invention more clearly, so that it may be practiced according to the content of the description, and in order to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, identical parts are denoted by identical reference numerals. In the drawings:
Fig. 1 shows a flow chart of the obstacle detection method proposed by the present invention;
Fig. 2 shows the bounding box of an obstacle in a video frame;
Fig. 3 shows the bounding box of an obstacle located in the travel lane in a video frame;
Fig. 4 shows the change in the bounding box of an obstacle in the travel lane between two consecutive video frames;
Fig. 5 shows the key points of an obstacle in the travel lane in two consecutive video frames;
Fig. 6 shows a flow chart of the obstacle detection method according to one specific embodiment of the present invention;
Fig. 7 shows a structural block diagram of the obstacle detection device proposed by the present invention;
Fig. 8 shows a structural block diagram of the detection module of the obstacle detection device;
Fig. 9 shows a structural block diagram of the tracking module of the obstacle detection device.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its full scope can be conveyed to those skilled in the art.
The present invention proposes an obstacle detection method for vehicle driving. As shown in Fig. 1, the method includes:
S1. acquiring a video frame;
S2. performing key-point detection on an obstacle based on the current video frame;
S3. dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each divided subregion;
S4. tracking the observation points and selecting the best-response observation points of each subregion;
S5. obtaining the centroid of each subregion from the best-response observation points of that subregion;
S6. computing the obstacle time-to-collision TTC from the distances between subregion centroids in the current frame and the previous frame.
In step S1, a monocular camera is used to acquire the video images. Compared with a binocular camera, no special subsequent disparity-processing module is needed, and the monocular camera itself is cheap, which effectively reduces cost.
In step S2, the obstacle in the acquired video image is detected. If an obstacle exists, its bounding box in the image is output, as shown in Fig. 2, and it is further judged whether the obstacle is in the travel lane of the vehicle; key-point detection is performed on the obstacle only when it is in the travel lane. Whether the camera is mounted outside the car body or inside it (preferably at the rear-view mirror), the image captured when no obstacle is present is fixed, so an image taken with no obstacle present is used as the initial image. While the vehicle is moving, each image frame captured by the camera is compared with the initial image, which makes it possible to detect an obstacle ahead and output its bounding box. If no obstacle is detected, the method returns to acquiring video frames. While the vehicle is moving, the position of the lane lines in the initial image is known, so whether the detected obstacle lies in the travel lane can be judged from the position of the lane lines, as shown in Fig. 3. If the obstacle is not in the travel lane, it will not impede the vehicle and can be ignored, and the method returns to acquiring image frames.
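As a concrete illustration of the comparison against the obstacle-free initial image, the following is a minimal NumPy sketch. The differencing scheme, the intensity threshold, and the min_pixels gate are illustrative assumptions; the patent does not specify the detector's internals:

```python
import numpy as np

def detect_obstacle_bbox(initial, frame, diff_thresh=30, min_pixels=20):
    """Compare the current frame against the obstacle-free initial image and
    return the bounding box (x0, y0, x1, y1) of the changed region, or None."""
    changed = np.abs(frame.astype(np.int16) - initial.astype(np.int16)) > diff_thresh
    ys, xs = np.nonzero(changed)
    if xs.size < min_pixels:          # too few changed pixels: no obstacle
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

# Toy example: a bright 3x4 "obstacle" appears in a dark scene.
initial = np.zeros((10, 12), dtype=np.uint8)
frame = initial.copy()
frame[4:7, 5:9] = 200
bbox = detect_obstacle_bbox(initial, frame, min_pixels=5)
```

The returned box is what the later steps treat as the obstacle region; when `None` is returned, the loop simply fetches the next frame.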
In fact, TTC (time to contact) could be estimated directly from the change in scale of the detected obstacle's (e.g. a car's) bounding box between consecutive image frames, as shown in Fig. 4. However, because the bounding box does not accurately reflect the obstacle's contour, the TTC estimated this way may have errors. On this basis, we introduce feature points distributed dispersedly over the image and derive TTC from the feature points.
When performing key-point detection, the FAST algorithm, ORB (oriented FAST and rotated BRIEF), or the Harris algorithm may be used to pick feature points. FAST feature points, ORB feature points and Harris feature points are all local invariant features. ORB is built on an improved FAST detection operator and an improved rBRIEF descriptor; since both the FAST and BRIEF algorithms are extremely fast, ORB has an absolute advantage in computation speed. The feature points extracted by the Harris algorithm are highly repeatable under gray-level changes and geometric transformations, so feature-point detection is efficient and has scale invariance. When detecting a feature point, the gray value of a candidate point S is compared with the gray values of the pixels on a circle in its neighborhood; if there are N contiguous pixels on the circle whose gray values differ from that of S by more than a predetermined threshold in absolute value, then S is a required feature point. Fig. 5 illustrates the feature points chosen for the obstacle in the travel lane in the two image frames shown in Fig. 4.
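The segment test just described can be sketched as follows. This is a toy FAST-style check in the FAST-9 variant (N = 9 contiguous pixels); the circle offsets are the standard radius-3 Bresenham ring, and the threshold is an assumed value:

```python
import numpy as np

# The 16 pixel offsets (dx, dy) on a Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, thresh=20, n=9):
    """Segment test from the text: S=(x, y) is a feature point if n contiguous
    circle pixels all differ from S by more than `thresh` (all brighter or all
    darker)."""
    s = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):                      # check "all brighter" and "all darker"
        flags = [sign * (p - s) > thresh for p in ring]
        flags = flags + flags                  # duplicate so runs can wrap around
        run = best = 0
        for f in flags[:len(CIRCLE) + n]:
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Dark background with a bright square: a corner passes, an edge midpoint does not.
img = np.zeros((16, 16), dtype=np.uint8)
img[5:12, 5:12] = 150
corner = is_fast_corner(img, 5, 5)        # top-left corner of the square
edge_mid = is_fast_corner(img, 8, 5)      # middle of an edge
```

On a straight edge only about half the ring differs from S, so the contiguity requirement rejects it, which is exactly why the test fires on corners.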
In step S3, the key points are divided into subregions according to the regions they belong to, and observation points are chosen from the key points of each divided subregion. In order to avoid the key points all concentrating in a local area of the image, which would cause errors in the TTC estimate, the present application proposes the technical means of dispersedly distributed observation points. As one specific embodiment, the detected key points may be divided into 9 subregions according to where they are located; in order to bound the amount of computation while keeping the result accurate, preferably at most 9 key points are chosen as observation points in each subregion. If a subregion contains fewer than 9 key points, all of its key points are used as observation points. Of course, in a specific implementation the key points are not limited to being divided into 9 subregions: 8, 10 or 12 subregions are also possible, and the number of observation points chosen per subregion is likewise not limited to 9, with 8, 10 or 12 all acceptable. The number chosen trades off, on the one hand, the precision of the subsequently computed TTC, and on the other hand, the speed of the computation.
In step S4, the observation points are tracked and the best-response observation points of each subregion are chosen. As one specific embodiment, the KLT tracking algorithm, an algorithm widely used for tracking in the prior art, is used to track the observation points. When tracking the observation points, it is first judged whether the image frame is the first frame; if it is the first frame, the method returns to acquire the next video image, because the first frame has no previous frame from which a tracking response could be determined. If it is not the first frame, the best-response points of each subregion are chosen, which includes removing the points whose tracking response is below a predetermined threshold. As one embodiment, in order to keep the finally retained observation points as dispersed as possible, so as to capture the obstacle's outer contour accurately, for each removed below-threshold observation point a point in its neighborhood whose tracking response is above the predetermined threshold is chosen, to be used for obtaining the centroid of each subregion.
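The response-based pruning described above can be sketched as follows. In a real implementation the response of each point would be derived from a KLT tracker's match quality (for instance from the per-point outputs of OpenCV's `calcOpticalFlowPyrLK`); here the responses are supplied directly, and the threshold of 0.5 is an assumed value:

```python
def filter_by_response(tracked, thresh=0.5, keep=3):
    """Per subregion: drop observation points whose tracking response is below
    `thresh`, then keep at most the `keep` best-responding survivors.
    `tracked` maps a subregion id to a list of ((x, y), response) pairs."""
    removed, kept = {}, {}
    for region, pts in tracked.items():
        bad = [p for p in pts if p[1] < thresh]
        good = sorted((p for p in pts if p[1] >= thresh),
                      key=lambda p: p[1], reverse=True)[:keep]
        removed[region], kept[region] = bad, good
    return kept, removed

tracked = {0: [((5, 5), 0.9), ((6, 5), 0.2), ((7, 6), 0.8),
               ((8, 8), 0.7), ((9, 9), 0.6)]}
kept, removed = filter_by_response(tracked)
```

The `removed` map is kept around deliberately: the deduplication step for the next frame (described below) compares newly detected points against exactly these discarded ones.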
When the next frame image is input and its observation points are chosen, the result of the previous frame can be fully reused: when the frame is not the first video frame, the new observation points of each subregion of the next frame are deduplicated against the removed observation points whose tracking response was below the predetermined threshold.
The feature points extracted or chosen possess neither scale invariance nor rotation invariance. Some algorithms, such as ORB, use the gray-level centroid method: the moments of a feature point's patch are computed to assign the feature point an orientation and thereby give it rotation invariance, but this is computationally heavy and not timely enough. The present application instead obtains the centroid of each subregion from the feature points with strong robustness, that is, with large inter-frame tracking response, so the amount of computation is small, the processing speed is fast, and the timeliness is strong.
In step S5, the centroid of each subregion is obtained from the best-response observation points of that subregion. As a preferred embodiment, in each of the 9 divided regions tracking the obstacle, at most 3 points with the best response are selected, and the centroid of each subregion is then computed as the weighted average of the selected observation points, weighted by their responses. These centroids can substantially and accurately reflect the outer contour of the obstacle.
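The response-weighted centroid of step S5 follows directly from the description; the weighting by raw response values matches the text, while the example numbers are arbitrary:

```python
def region_centroid(points):
    """Response-weighted average of up to three best-response observation
    points; `points` is a list of ((x, y), response) pairs."""
    top = sorted(points, key=lambda p: p[1], reverse=True)[:3]
    w = sum(r for _, r in top)
    x = sum(px * r for (px, _), r in top) / w
    y = sum(py * r for (_, py), r in top) / w
    return (x, y)

# The point with response 2.0 pulls the centroid twice as hard as the others.
c = region_centroid([((0.0, 0.0), 1.0), ((2.0, 0.0), 1.0), ((1.0, 3.0), 2.0)])
```

With up to 9 subregions this yields up to 9 centroids per frame, which step S6 then compares across frames.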
In step S6, the obstacle time-to-collision TTC is computed from the distances between subregion centroids in the current frame and the previous frame. As one preferred embodiment, the centroids are connected with each other, the pairwise distances between the connected centroids are computed and denoted d(t), and for the next frame image they are denoted d(t+1). The ratio of d(t+1) to d(t) is computed as s, and from this ratio TTC = Δt/(s - 1) can be computed, where Δt is the inter-frame interval.
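A sketch of step S6 follows. The text says the centroids are connected and their mutual distances computed; taking d(t) to be the sum of all pairwise distances is one natural reading, and any consistent aggregate would give the same ratio s for a pure scale change:

```python
import math
from itertools import combinations

def contour_size(centroids):
    """Sum of pairwise distances between subregion centroids: a scalar that
    grows as the obstacle's apparent outline grows."""
    return sum(math.dist(a, b) for a, b in combinations(centroids, 2))

def ttc(centroids_prev, centroids_curr, dt):
    """TTC = dt / (s - 1), with s = d(t+1)/d(t) the inter-frame scale ratio."""
    s = contour_size(centroids_curr) / contour_size(centroids_prev)
    return dt / (s - 1.0)

# An obstacle whose apparent size grows 10% per frame at 30 fps:
prev = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
curr = [(x * 1.1, y * 1.1) for x, y in prev]
t = ttc(prev, curr, dt=1.0 / 30.0)
```

A 10% growth per 1/30 s frame gives s = 1.1 and TTC = (1/30)/0.1, i.e. about a third of a second, consistent with the formula above.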
After step S6 the camera can be instructed to input the next frame image and steps S1-S6 are repeated, so that a TTC is computed for every image frame in which an obstacle is detected in the travel lane, iterating continuously. Of course, since a TTC is computed for nearly every input frame and TTC is nonlinear, the TTC of some frame may inevitably have an error. In order to make the computed TTC as accurate as possible, the TTC of the current frame may be computed as a weighted average of the historical TTCs of the image frames within a predetermined time span or of a predetermined number of frames: the farther an image is from the current frame, the smaller its influence, so the earlier a TTC was computed the lower its weight coefficient, and the TTC computed from the current frame has the largest weight coefficient, which may be 1.
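The history weighting can be sketched as a geometric decay, which satisfies the stated constraints (the current frame has weight 1 and older frames have progressively smaller weights); the decay factor itself is an assumption, since the text does not fix the weights:

```python
def smooth_ttc(history, decay=0.8):
    """Weighted average of recent per-frame TTCs: the current (last) frame gets
    weight 1 and each older frame's weight shrinks by `decay`.
    `history` is ordered oldest -> newest."""
    weights = [decay ** (len(history) - 1 - i) for i in range(len(history))]
    return sum(t * w for t, w in zip(history, weights)) / sum(weights)

final = smooth_ttc([3.0, 2.5, 2.0])   # newest per-frame estimate is 2.0 s
```

The smoothed value lies between the raw estimates but closer to the newest one, which is the intended bias toward the current frame.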
As another specific embodiment, in step S4, when the points whose tracking response is below the predetermined threshold are removed, strongly robust points of the next frame may be used to replace them: observation points with high tracking response, tracked between the current frame and the next frame image, replace the points whose tracking response is below the predetermined threshold. Repeating this for every video frame ensures that the observed points always have high robustness.
Since the extracted FAST, ORB and Harris feature points possess neither scale invariance nor rotation invariance, some algorithms in the prior art use the gray-level centroid method: the moments of a feature point's patch are computed to set the feature point's orientation and give it rotation invariance. It is assumed that an offset exists between the feature-point position and the centroid; a vector and its direction are determined with the feature-point position as the start point and the centroid as the end point, this direction is set as the feature point's orientation, and registration is then performed according to the vector. The present invention instead obtains the centroid of each subregion by directly averaging the feature points weighted by their robustness, thereby obtaining the obstacle's outer contour and computing its length. It makes full use of the natural phenomenon that the farther an obstacle is from the vehicle, the smaller its outer contour, and the nearer the obstacle, the larger its outer contour: the obstacle time-to-collision is computed from the ratio of the contour lengths in the preceding and following frames, the precision of the computation is high, the computation is extremely fast, and real-time operation is achieved.
As a preferred embodiment, as shown in Fig. 6, the obstacle detection method performs the following process:
S11. reading in a video frame Ft through the monocular camera;
S12. detecting the obstacle in the image based on the video frame Ft, and outputting the obstacle's bounding box;
S13. judging, based on the obstacle's bounding box, whether an obstacle really exists; if no obstacle exists, returning to step S11; if an obstacle exists, proceeding to step S14;
S14. further judging whether the obstacle is in the travel lane of the vehicle; if not, returning to step S11; if so, proceeding to step S15;
S15. performing key-point detection on the obstacle in the vehicle's travel lane in the image, dividing the detected key points into 9 subregions according to where they are located, and taking at most 9 key points per subregion as observation points;
the key points include FAST, ORB and/or Harris feature points; if a subregion has fewer than 9 key points, only the actual number of points is taken, and the points of each region can be averaged to obtain its centroid;
S16. tracking the extracted observation points with the KLT tracking algorithm;
S17. judging whether the input video frame Ft is the first frame; if so, returning to step S11, otherwise proceeding to step S18;
S18. removing the observation points whose tracking response is below the threshold, and deduplicating and updating the newly detected points of frame Ft+1;
S19. in each of the 9 divided regions of the tracked target, selecting at most three points with the best tracking response and weighting these points by their responses to obtain the region's centroid; then computing the pairwise distances d(t) between the 9 centroids; computing the distance ratio between the two consecutive frames, i.e. d(t+1)/d(t) = s; and then computing Tm = Δt/(s - 1), where Tm is the required TTC;
S20. judging whether the TTC is below a predetermined threshold; if so, performing step S21, raising an alarm; otherwise returning to step S11.
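Under the assumption that detection (S12-S14), key-point selection (S15) and KLT tracking (S16-S18) are abstracted into the per-frame subregion centroids they produce, the S11-S20 loop reduces to the following skeleton; the frame rate and the 2-second alarm threshold are illustrative:

```python
import math
from itertools import combinations

def contour_size(cs):
    return sum(math.dist(a, b) for a, b in combinations(cs, 2))

def fcw_loop(frames, dt=1.0 / 30.0, alarm_ttc=2.0):
    """Skeleton of the S11-S20 loop: each element of `frames` stands in for
    the subregion centroids that detection and tracking would produce for one
    video frame. Returns (per-frame TTCs, alarm flag)."""
    prev, ttcs, alarm = None, [], False
    for centroids in frames:
        if prev is not None:                 # S17: the first frame is skipped
            s = contour_size(centroids) / contour_size(prev)
            if s > 1.0:                      # only an approaching obstacle
                t = dt / (s - 1.0)           # S19: Tm = dt / (s - 1)
                ttcs.append(t)
                if t < alarm_ttc:            # S20/S21: warn if TTC is small
                    alarm = True
        prev = centroids
    return ttcs, alarm

# Obstacle growing 2% per frame at 30 fps: TTC = (1/30)/0.02 = 5/3 s < 2 s.
base = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
frames = [[(x * 1.02 ** k, y * 1.02 ** k) for x, y in base] for k in range(3)]
ttcs, alarm = fcw_loop(frames)
```

With three frames, two TTCs are produced and both fall below the threshold, so the loop sets the alarm flag exactly as S20/S21 describe.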
The present invention uses a monocular camera to solve the TTC computation problem of FCW, reducing the resources needed to run an FCW system, improving the accuracy and robustness of the FCW's TTC, and reducing the FCW's false-alarm rate.
In another aspect, the present invention provides an obstacle detection device for vehicle driving. As shown in Fig. 7, the device includes:
a camera 100, for acquiring video frames; the camera is a monocular camera;
a detection module 200, for performing key-point detection on an obstacle based on the video frame, dividing the key points into subregions according to the regions they belong to, and choosing observation points from the key points of each divided subregion;
a tracking module 300, for tracking the observation points and selecting the best-response observation points of each subregion;
a TTC computing unit 400, for obtaining the centroid of each subregion from the best-response observation points of that subregion, and computing the obstacle time-to-collision TTC from the distances between subregion centroids in the current frame and the previous frame.
As shown in Fig. 8, the detection module 200 specifically includes:
Keypoint detection unit 201, configured to perform keypoint detection on an obstacle based on the video frames;
Subregion division unit 202, configured to divide the keypoints into 9 subregions according to the region each belongs to;
Observation point selection unit 203, configured to determine whether the number of keypoints in each subregion exceeds 9; if so, to select 9 of them as observation points, otherwise to take all keypoints in the subregion as observation points.
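Units 202 and 203 can be sketched as follows. The sketch assumes keypoints are `(x, y, response)` tuples inside a detection box of size w × h, and it picks "9 of them" by sorting on a response score; the patent does not specify the selection rule, so that choice is an assumption.

```python
def partition_into_subregions(keypoints, w, h):
    """Assign each (x, y, response) keypoint to one cell of a 3x3 grid
    covering the w x h detection box; returns dict (row, col) -> points."""
    cells = {(r, c): [] for r in range(3) for c in range(3)}
    for x, y, resp in keypoints:
        c = min(int(3 * x / w), 2)  # clamp points on the right/bottom edge
        r = min(int(3 * y / h), 2)
        cells[(r, c)].append((x, y, resp))
    return cells

def select_observation_points(cells, cap=9):
    """Keep at most `cap` points per subregion; when a cell holds more,
    keep the `cap` points with the highest response."""
    selected = {}
    for cell, pts in cells.items():
        if len(pts) > cap:
            pts = sorted(pts, key=lambda p: p[2], reverse=True)[:cap]
        selected[cell] = pts
    return selected
```

Capping each cell at 9 points bounds the tracking cost per frame while keeping the observation points spread over the whole obstacle.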
As shown in Fig. 9, the tracking module 300 specifically includes a judging unit 301 and an observation point processing unit 302, wherein
Judging unit 301 is configured to determine whether the current frame is the first video frame; if so, it directly instructs the camera to acquire a video frame; otherwise it instructs observation point processing unit 302 to remove the observation points in each subregion whose tracking response is below a predetermined threshold and to select at most three points with the best response.
To keep the observation points used for computing the centroid as dispersed as possible, and thus make the obstacle contour as accurate as possible, observation point processing unit 302 is further configured to select, in the neighborhood of each removed observation point whose tracking response fell below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, for use in obtaining the centroid of each subregion.
To make full use of the result of the previous frame and to give the observation points as much robustness as possible, the observation point selection unit, when the current frame is not the first video frame, de-duplicates the new observation points in each subregion of the next frame against the removed observation points whose tracking response was below the predetermined threshold.
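The de-duplication above can be sketched as a simple radius test. The patent does not state the duplicate criterion, so both the 3-pixel radius and the `(x, y, response)` point format are assumptions for illustration:

```python
import math

def deduplicate_new_points(new_points, existing_points, radius=3.0):
    """Drop newly detected points that fall within `radius` pixels of an
    already-tracked observation point (assumed duplicate criterion)."""
    kept = []
    for p in new_points:
        if all(math.dist(p[:2], q[:2]) > radius for q in existing_points):
            kept.append(p)
    return kept
```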
The TTC computing unit is specifically configured to:
compute the mutual distances d(t+1) between the centroids of the subregions;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle time to collision using TTC = Δt/(s - 1).
In a preferred implementation, the device further includes a final TTC computing unit, configured to compute a final TTC from the current frame image and the TTCs computed for a predetermined number of preceding image frames or for the preceding image frames within a predetermined time.
The obstacle detection device further includes an obstacle collision warning unit, configured to determine whether the TTC and the final TTC are below a predetermined time; if either of them is, a warning is issued.
In further aspects, the present invention also provides: an automatic cruising system including the obstacle detection device for driving described above, configured to decelerate or brake the vehicle when the computed TTC is below the predetermined time; a driving assistance system or driving recorder configured to issue an alarm when the computed TTC is below the predetermined time; and an intelligent rear-view mirror including the obstacle detection device for driving described above, configured to issue an alarm when the computed TTC is below the predetermined time.
The obstacle detection device proposed by the present invention may be installed outside the vehicle body; because the present invention uses a monocular camera, it does not affect the vehicle's appearance and is easy to install. It may also be installed inside the vehicle body, preferably at the rear-view mirror position.
The technical solution provided in the embodiments of the present application has at least the following technical effects or advantages:
This solution proposes a purely vision-based FCW algorithm, which has two major advantages over existing solutions: 1. The present invention does not need to obtain vehicle speed information, and therefore does not need to connect to the vehicle network to obtain speed, reducing installation cost. 2. When computing the TTC, the present invention is more accurate and more robust than existing solutions, while relaxing the precision required of the detection of the vehicle body contour edges.
Since the electronic equipment introduced in this embodiment is the device used to implement the method of the embodiments of the present application, those skilled in the art can understand, based on the method described in the embodiments of the present application, the specific implementation of the electronic equipment of this embodiment and its various variations; therefore, how this electronic equipment implements the method of the embodiments of the present application is not discussed in detail here. Any device used by those skilled in the art to implement the method of the embodiments of the present application falls within the intended scope of protection of the present application.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other equipment. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such systems is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and that the above description of specific languages is provided to disclose the best mode of the invention.
Numerous specific details are set forth in the description provided herein. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the gateway, proxy server, or system according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
The present invention provides A1, an obstacle detection method for driving, characterized in that the method includes:
acquiring a current video frame;
performing keypoint detection on an obstacle based on the current video frame;
dividing the keypoints according to the region each belongs to, and selecting observation points from the keypoints of each divided subregion;
tracking the observation points and selecting a predetermined number of observation points with the best response in each subregion;
obtaining the centroid of each subregion from the observation points with the best response in that subregion;
computing the obstacle time to collision (TTC) from the mutual distances between the subregion centroids in the current frame and the previous frame.
A2. The method according to A1, further characterized in that the keypoints include FAST, ORB, and/or Harris feature points.
A3. The method according to A1 or A2, further characterized in that dividing the keypoints according to the region each belongs to and selecting observation points from the keypoints of each subregion specifically includes:
dividing the keypoints into 9 subregions according to the region each belongs to;
determining whether the number of keypoints in each subregion exceeds 9; if so, selecting 9 of them as observation points, otherwise taking all keypoints in the subregion as observation points.
A4. The method according to any one of A1 to A3, further characterized in that tracking the observation points and selecting the observation points with the best response in each subregion specifically includes:
determining whether the current frame is the first video frame;
if so, returning to acquire a video frame;
otherwise removing the observation points in each subregion whose tracking response is below a predetermined threshold, and selecting at most three points with the best response, for use in obtaining the centroid of each subregion.
A5. The method according to A4, further characterized by selecting, in the neighborhood of each removed observation point whose tracking response fell below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, for use in obtaining the centroid of each subregion.
A6. The method according to A4 or A5, further characterized in that, when the current frame is not the first video frame, the new observation points in each subregion of the next frame are de-duplicated against the removed observation points whose tracking response was below the predetermined threshold.
A7. The method according to any one of A1 to A6, further characterized in that computing the obstacle time to collision (TTC) from the mutual centroid distances of the subregions in the current frame and the previous frame specifically includes:
computing the mutual distances d(t+1) between the subregion centroids;
computing the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the obstacle time to collision using TTC = Δt/(s - 1).
A8. The method according to any one of A1 to A7, further characterized by computing a final TTC from the current frame image and the TTCs computed for a predetermined number of preceding image frames or for the preceding image frames within a predetermined time.
A9. The method according to A7 or A8, further characterized by determining whether the TTC of the current frame or the final TTC is below a predetermined time and, if so, issuing a warning.
B10. An obstacle detection device for driving, characterized in that the device includes:
a camera, configured to acquire video frames;
a detection module, configured to perform keypoint detection on an obstacle based on the video frames, divide the keypoints according to the region each belongs to, and select observation points from the keypoints of each divided subregion;
a tracking module, configured to track the observation points and select a predetermined number of observation points with the best response in each subregion;
a TTC computing unit, configured to obtain the centroid of each subregion from the observation points with the best response in that subregion, and to compute the obstacle time to collision (TTC) from the mutual distances between the subregion centroids in the current frame and the previous frame.
B11. The detection device according to B10, further characterized in that the keypoints include FAST, ORB, and/or Harris feature points.
B12. The device according to B10 or B11, further characterized in that the detection module specifically includes:
a keypoint detection unit, configured to perform keypoint detection on an obstacle based on the video frames;
a subregion division unit, configured to divide the keypoints into 9 subregions according to the region each belongs to;
an observation point selection unit, configured to determine whether the number of keypoints in each subregion exceeds 9; if so, to select 9 of them as observation points, otherwise to take all keypoints in the subregion as observation points.
B13. The device according to any one of B10 to B12, further characterized in that the tracking module specifically includes a judging unit and an observation point processing unit, wherein
the judging unit is configured to determine whether the current frame is the first video frame; if so, it directly instructs the camera to acquire a video frame; otherwise it instructs the observation point processing unit to remove the observation points in each subregion whose tracking response is below a predetermined threshold and to select at most three points with the best response.
B14. The device according to B13, further characterized in that the observation point processing unit is further configured to select, in the neighborhood of each removed observation point whose tracking response fell below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, for use in obtaining the centroid of each subregion.
B15. The device according to B13 or B14, further characterized in that, when the current frame is not the first video frame, the observation point selection unit de-duplicates the new observation points in each subregion of the next frame against the removed observation points whose tracking response was below the predetermined threshold.
B16. The device according to any one of B10 to B15, further characterized in that the TTC computing unit is specifically configured to:
compute the mutual distances d(t+1) between the subregion centroids;
compute the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtain the obstacle time to collision using TTC = Δt/(s - 1).
B17. The device according to B16, further characterized in that the device further includes a final TTC computing unit, configured to compute a final TTC from the current frame image and the TTCs computed for a predetermined number of preceding image frames or for the preceding image frames within a predetermined time.
B18. The device according to B16 or B17, further characterized by an obstacle collision warning unit, configured to determine whether the TTC is below a predetermined time and, if so, to issue a warning.
C19. An automatic cruising system, including the obstacle detection device for driving according to any one of B10 to B17.
D20. A driving assistance system, including the obstacle detection device for driving according to any one of B10 to B17.
E21. A driving recorder, including the obstacle detection device for driving according to any one of B10 to B17.

Claims (10)

1. An obstacle detection method for driving, characterized in that the method includes:
acquiring a current video frame;
performing keypoint detection on an obstacle based on the current video frame;
dividing the keypoints according to the region each belongs to, and selecting observation points from the keypoints of each divided subregion;
tracking the observation points and selecting a predetermined number of observation points with the best response in each subregion;
obtaining the centroid of each subregion from the observation points with the best response in that subregion;
computing the obstacle time to collision (TTC) from the mutual distances between the subregion centroids in the current frame and the previous frame.
2. The method according to claim 1, further characterized in that the keypoints include FAST, ORB, and/or Harris feature points.
3. The method according to claim 1 or 2, further characterized in that dividing the keypoints according to the region each belongs to and selecting observation points from the keypoints of each subregion specifically includes:
dividing the keypoints into 9 subregions according to the region each belongs to;
determining whether the number of keypoints in each subregion exceeds 9; if so, selecting 9 of them as observation points, otherwise taking all keypoints in the subregion as observation points.
4. The method according to any one of claims 1 to 3, further characterized in that tracking the observation points and selecting the observation points with the best response in each subregion specifically includes:
determining whether the current frame is the first video frame;
if so, returning to acquire a video frame;
otherwise removing the observation points in each subregion whose tracking response is below a predetermined threshold, and selecting at most three points with the best response, for use in obtaining the centroid of each subregion.
5. The method according to claim 4, further characterized by selecting, in the neighborhood of each removed observation point whose tracking response fell below the predetermined threshold, points whose tracking response exceeds the predetermined threshold, for use in obtaining the centroid of each subregion.
6. The method according to claim 4 or 5, further characterized in that, when the current frame is not the first video frame, the new observation points in each subregion of the next frame are de-duplicated against the removed observation points whose tracking response was below the predetermined threshold.
7. The method according to any one of claims 1 to 6, further characterized in that computing the obstacle time to collision (TTC) from the mutual centroid distances of the subregions in the current frame and the previous frame specifically includes:
computing the mutual distances d(t+1) between the subregion centroids;
computing the distance ratio between the current frame and the previous frame, s = d(t+1)/d(t);
obtaining the obstacle time to collision using TTC = Δt/(s - 1).
8. The method according to any one of claims 1 to 7, further characterized by computing a final TTC from the current frame image and the TTCs computed for a predetermined number of preceding image frames or for the preceding image frames within a predetermined time.
9. The method according to claim 7 or 8, further characterized by determining whether the TTC of the current frame or the final TTC is below a predetermined time and, if so, issuing a warning.
10. An obstacle detection device for driving, characterized in that the device includes:
a camera, configured to acquire video frames;
a detection module, configured to perform keypoint detection on an obstacle based on the video frames, divide the keypoints according to the region each belongs to, and select observation points from the keypoints of each divided subregion;
a tracking module, configured to track the observation points and select a predetermined number of observation points with the best response in each subregion;
a TTC computing unit, configured to obtain the centroid of each subregion from the observation points with the best response in that subregion, and to compute the obstacle time to collision (TTC) from the mutual distances between the subregion centroids in the current frame and the previous frame.
CN201610576441.5A 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving Active CN106203381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610576441.5A CN106203381B (en) 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving


Publications (2)

Publication Number Publication Date
CN106203381A true CN106203381A (en) 2016-12-07
CN106203381B CN106203381B (en) 2019-05-31

Family

ID=57491114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610576441.5A Active CN106203381B (en) 2016-07-20 2016-07-20 Obstacle detection method and device in a kind of driving

Country Status (1)

Country Link
CN (1) CN106203381B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109050391A (en) * 2018-07-26 2018-12-21 北京经纬恒润科技有限公司 A kind of high beam control method and device
CN109308442A (en) * 2017-07-26 2019-02-05 株式会社斯巴鲁 Exterior environment recognition device
CN109583432A (en) * 2019-01-04 2019-04-05 广东翼卡车联网服务有限公司 A kind of vehicle blind zone intelligent early-warning method based on image recognition
CN110021006A (en) * 2018-09-06 2019-07-16 浙江大学台州研究院 A kind of device and method whether detection automobile parts are installed
CN110543807A (en) * 2018-05-28 2019-12-06 Aptiv技术有限公司 method for verifying obstacle candidate
CN110658827A (en) * 2019-10-25 2020-01-07 嘉应学院 Transport vehicle automatic guiding system and method based on Internet of things
CN111060125A (en) * 2019-12-30 2020-04-24 深圳一清创新科技有限公司 Collision detection method and device, computer equipment and storage medium
CN111383340A (en) * 2018-12-28 2020-07-07 成都皓图智能科技有限责任公司 Background filtering method, device and system based on 3D image
CN112528793A (en) * 2020-12-03 2021-03-19 上海汽车集团股份有限公司 Method and device for eliminating shaking of obstacle detection frame of vehicle
CN115547060A (en) * 2022-10-11 2022-12-30 上海理工大学 Intersection traffic conflict index calculation method considering vehicle outline

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842028A (en) * 2011-03-22 2012-12-26 富士重工业株式会社 Vehicle exterior monitoring device and vehicle exterior monitoring method
US20130073194A1 (en) * 2011-09-15 2013-03-21 Clarion Co., Ltd. Vehicle systems, devices, and methods for recognizing external worlds
CN103403779A (en) * 2011-03-04 2013-11-20 日立汽车系统株式会社 Vehicle-mounted camera and vehicle-mounted camera system
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device
CN103732480A (en) * 2011-06-17 2014-04-16 罗伯特·博世有限公司 Method and device for assisting a driver in performing lateral guidance of a vehicle on a carriageway


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308442B (en) * 2017-07-26 2023-09-01 株式会社斯巴鲁 Vehicle exterior environment recognition device
CN109308442A (en) * 2017-07-26 2019-02-05 株式会社斯巴鲁 Exterior environment recognition device
CN110543807A (en) * 2018-05-28 2019-12-06 Aptiv技术有限公司 method for verifying obstacle candidate
CN109050391A (en) * 2018-07-26 2018-12-21 北京经纬恒润科技有限公司 A kind of high beam control method and device
CN109050391B (en) * 2018-07-26 2020-06-05 北京经纬恒润科技有限公司 High beam control method and device
CN110021006A (en) * 2018-09-06 2019-07-16 浙江大学台州研究院 A kind of device and method whether detection automobile parts are installed
CN110021006B (en) * 2018-09-06 2023-11-17 浙江大学台州研究院 Device and method for detecting whether automobile parts are installed or not
CN111383340A (en) * 2018-12-28 2020-07-07 成都皓图智能科技有限责任公司 Background filtering method, device and system based on 3D image
CN111383340B (en) * 2018-12-28 2023-10-17 成都皓图智能科技有限责任公司 Background filtering method, device and system based on 3D image
CN109583432A (en) * 2019-01-04 2019-04-05 广东翼卡车联网服务有限公司 A kind of vehicle blind zone intelligent early-warning method based on image recognition
CN110658827A (en) * 2019-10-25 2020-01-07 嘉应学院 Transport vehicle automatic guiding system and method based on Internet of things
CN111060125A (en) * 2019-12-30 2020-04-24 深圳一清创新科技有限公司 Collision detection method and device, computer equipment and storage medium
CN111060125B (en) * 2019-12-30 2021-09-17 深圳一清创新科技有限公司 Collision detection method and device, computer equipment and storage medium
CN112528793A (en) * 2020-12-03 2021-03-19 上海汽车集团股份有限公司 Method and device for eliminating shaking of obstacle detection frame of vehicle
CN112528793B (en) * 2020-12-03 2024-03-12 上海汽车集团股份有限公司 Method and device for eliminating jitter of obstacle detection frame of vehicle
CN115547060A (en) * 2022-10-11 2022-12-30 上海理工大学 Intersection traffic conflict index calculation method considering vehicle outline

Also Published As

Publication number Publication date
CN106203381B (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN106203381A (en) Obstacle detection method and device in a kind of driving
CN112292711B (en) Associating LIDAR data and image data
CN106991389B (en) Device and method for determining road edge
KR100909741B1 (en) Monitoring device, monitoring method
JP5939357B2 (en) Moving track prediction apparatus and moving track prediction method
Kim et al. Sensor fusion algorithm design in detecting vehicles using laser scanner and stereo vision
CN102194239B (en) For the treatment of the method and system of view data
CN106054174A (en) Fusion method for cross traffic application using radars and camera
JP2015165381A (en) Image processing apparatus, equipment control system, and image processing program
US9098750B2 (en) Gradient estimation apparatus, gradient estimation method, and gradient estimation program
EP3403216A1 (en) Systems and methods for augmenting upright object detection
KR101573576B1 (en) Image processing method of around view monitoring system
US11403947B2 (en) Systems and methods for identifying available parking spaces using connected vehicles
WO2018130634A1 (en) Enhanced object detection and motion estimation for a vehicle environment detection system
CN108021899A (en) Vehicle intelligent front truck anti-collision early warning method based on binocular camera
CN110942038A (en) Traffic scene recognition method, device, medium and electronic equipment based on vision
CN110751836A (en) Vehicle driving early warning method and system
CN112906777A (en) Target detection method and device, electronic equipment and storage medium
CN105374049A (en) Multi-angle-point tracking method based on sparse optical flow method and apparatus thereof
CN110345924A (en) A kind of method and apparatus that distance obtains
CN103577790A (en) Road turning type detecting method and device
EP3467545A1 (en) Object classification
CN110864670B (en) Method and system for acquiring position of target obstacle
CN111144415A (en) Method for detecting micro pedestrian target
CN116109669A (en) Target tracking method and system and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: Room 801, 8th floor, No. 104, floors 1-19, building 2, yard 6, Jiuxianqiao Road, Chaoyang District, Beijing 100015

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.