CN1782668A - Method and device for preventing collision by video obstacle sensing - Google Patents


Info

Publication number: CN1782668A
Application number: CN 200510073059
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 曾俊元
Original assignee: Individual
Current assignee: Individual
Priority: CN 200510073059, published as CN1782668A
Related applications: US 11/260,723 (published as US20060111841A1); JP2005332937A (published as JP2006184276A)


Abstract

The present invention discloses an all-weather video obstacle-sensing collision-prevention device and method that require no complex fuzzy inference and can serve as a reference for the driver of a system carrier. The method involves one obstacle, one system carrier and one image sensor, and comprises the following steps: retrieving and analyzing images of the obstacle, positioning the image sensor, executing an obstacle identification process, obtaining the absolute speed of the system carrier, obtaining a relative distance and a relative speed between the system carrier and the obstacle, and executing a collision-prevention strategy.

Description

Obstacle collision-prevention method and device with video sensing
Technical field
The present invention relates to an obstacle collision-prevention system and its implementation method, in particular to one based on video sensing, and is particularly suitable for application to vehicles.
Background art
Many domestic academic research units have been engaged in research on automobile rear-end collision prevention. Taking the anti-collision warning subsystem in the Intelligent Transportation Systems (ITS) integration scheme of National Chiao Tung University as an example, its principle is to use ultrasonic sensors to measure the distance between vehicles. Abroad, research on automobile-related safety systems has been under way for several years; combined with other related ITS information systems, it has produced the Automotive Collision Avoidance System (ACAS). Its principle is to use infrared to measure the distance between the driver's own car and the car ahead, derive from it the relative speed between the two cars, and finally remind the driver through a man-machine interface to take safety measures. The architecture of ACAS can be described by three processes: receiving environmental information with sensors, performing vehicle identification on the retrieved images, and establishing a rear-end collision-prevention strategy.
The function of a sensor is to retrieve information about the external environment. Sensors used in current domestic and foreign experiments include ultrasound (up-to-date ultrasonic engineering), radio waves and laser/infrared (International Commission on Non-Ionizing Radiation Protection: Guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields), the GPS three-point positioning proposed by Wann (Position tracking and velocity estimation for mobile positioning systems), and the CCD camera proposed by Kearney (Camera calibration using geometric constraints). The characteristics of each sensor are shown in Table 1.
Table 1: Characteristics of the sensors

Ultrasound. Principle: Doppler-effect speed measurement. Advantages: no adverse effect on the human body; cheap; easy to realize in hardware. Disadvantages: short measuring distance, only about 0-10 m; cannot provide complete road information. Common uses: reversing collision-prevention systems, autonomous cars, vehicle collision prevention.

Radio wave. Principle: Doppler-effect speed measurement. Advantages: measures medium-to-long distances of about 100-200 m. Disadvantages: electromagnetic waves harm the human body; cannot provide road information as complete as images. Common uses: police speed meters, vehicle collision prevention.

Laser (infrared). Principle: infrared speed measurement. Advantages: longer measuring distance, up to 500-600 m; accurate measurements. Disadvantages: more harmful to the human body (especially the eyes) than radio waves; cannot provide complete road information. Common uses: police speed meters, vehicle collision prevention.

Satellite positioning. Principle: GPS positioning. Advantages: provides auto-navigation. Disadvantages: expensive; positioning error of about 10 m, so it cannot effectively provide collision prevention; other obstacles must also carry GPS receivers to be located. Common uses: satellite navigation.

CCD camera. Principle: conversion of image-plane coordinates into solid-space coordinates; intelligent image recognition. Advantages: measuring distance up to 100 m; provides the most complete traffic information, including highway-sideline detection, leading-vehicle distance and vehicle speed. Disadvantages: affected by weather and light, though appropriate handling is possible through intelligent signal processing. Common uses: industrial image detection, robot-arm vision, autonomous cars, vehicle collision prevention.
As Table 1 shows, although retrieving images with a CCD camera can provide the most complete traffic information, its shortcoming is interference by light, so it cannot ordinarily be used for obstacle identification at night.
At present, many methods for vehicle identification from images exist at home and abroad, including license-plate identification (A Method for Identifying Specific Vehicles Using Template Matching), the three easily identified front markers of known orientation proposed by Marmoiton (Location and relative speed estimation of vehicles by monocular vision), the pattern recognition proposed by Kato (Preceding Vehicle Recognition Based on Learning From Sample Images), the optical flow proposed by Kruger (Real-time estimation and tracking of optical flow vectors for obstacle detection), and the vehicle image totem or boundary-combination matching proposed by Lutzeler (EMS-vision: recognition of intersections on unmarked road networks). The various image-based vehicle identification methods are compared in Table 2.
Table 2: Image-based vehicle identification methods

License-plate identification. Theoretical basis: a high-pass filter can identify the plate number; plate size and pattern are uniform; the pixel count of the plate determines the distance to the front car. Application: parking-lot management systems. Algorithm: high-pass filter. CPU usage: medium; only a single CCD camera is used for image input and a single image is processed at a time, but processing the whole image consumes many CPU resources. Required parameters: the coefficients of the high-pass filter. Difficulty: difficult; the background cannot be too complex, and the method only applies within 10 m. Measurable range: short, within 10 m. Accuracy: not high. Computational efficiency: medium. Development cost: low.

Three front markers of known orientation. Theoretical basis: uses three easily identified markers ahead whose relative orientations are known. Application: active safe-driving assistance systems. Algorithm: three-point perspective. CPU usage: medium; the same single-camera, whole-image processing as above. Required parameters: the relative coordinates of the three front markers. Difficulty: medium. Measurable range: up to about 100 m. Accuracy: high. Computational efficiency: medium. Development cost: medium; commercialization requires cooperation with government road-marking engineering.

Pattern recognition. Theoretical basis: finds the vehicle's feature vector and trains a neural network. Application: steel-plate defect detection; face recognition. Algorithm: neural-network training. CPU usage: high; neural-network training is quite time-consuming, and the quality of the training data decides the quality of the identification. Required parameters: a template database and the neural network. Difficulty: difficult; building the totem database is not easy, and training is only effective with representative cars and road totems of sufficient quality and quantity. Measurable range: up to about 100 m. Accuracy: not high. Computational efficiency: medium. Development cost: high; development is difficult, with high time and labor costs.

Vehicle image boundary combination. Theoretical basis: uses the distribution of the vehicle's boundaries in the image. Application: active safe-driving assistance systems. Algorithm: robust boundary search using HCDFCM. CPU usage: low; only the color-level data of one line segment of the image (at most 720 pixels) is needed. Required parameters: the distribution of vehicle boundaries in the image. Difficulty: easy; although the boundary distribution is not fixed, it differs greatly from other boundary combinations on the road. Measurable range: up to about 100 m. Accuracy: high. Computational efficiency: fast. Development cost: low; it consumes few computing and hardware resources.
A collision-prevention response strategy mainly simulates the reaction a human makes before a rear-end collision occurs: by observing the distance to the front car and the relative speed, a human can intuitively make a suitable reaction from experience and avoid the collision. Considerable domestic and foreign research has been proposed on collision-prevention response strategies for active driving-safety systems. Among them, the car-following collision prevention system (CFCPS) proposed by Mar J., together with "An ANFIS controller for the car-following collision prevention system", achieved excellent collision-prevention performance when compared against several related response strategies. CFCPS takes as input the relative speed of the front and rear cars and the value obtained by subtracting the safe distance from their separation, uses a fuzzy inference engine based on 25 fuzzy rules as its computing core, and finally derives the vehicle's acceleration or deceleration. Regarding the time required for the system to bring the vehicles to a safe, stable state, that is, when the separation of the front and rear cars equals the safe distance and the two cars travel at equal speed, CFCPS needs only 7-8 seconds, whereas in experiments of similar nature the GM (General Motors) model needs 10 seconds and the Kikuchi and Chakroborty model needs 12-14 seconds.
Summary of the invention
The main object of the present invention is to propose an all-weather obstacle collision-prevention method and device that can perform obstacle identification both by day and at night, and that can derive a set of collision-prevention strategies without complex fuzzy-rule inference, serving as a reference for the driver of a system carrier while driving.

Another object of the present invention is to propose an all-weather obstacle collision-prevention method and device in which, when the position of the image sensor changes because the system carrier is jolted, the sensor can recover its positioning directly by itself without field measurement.

To achieve the above objects, the present invention discloses an obstacle collision-prevention method with video sensing, applied to one obstacle and one moving system carrier on which an image sensor is set up. The collision-prevention method comprises the following steps (a)-(f): step (a) retrieves a plurality of images and analyzes them; step (b) positions the image sensor; step (c) executes an obstacle identification process; step (d) obtains an absolute speed of the system carrier; step (e) obtains a relative distance and a relative speed between the system carrier and the obstacle; and step (f) executes a collision-prevention strategy. In another embodiment of step (a), the retrieved images can lie to the front, rear, left or right of the system carrier, or can be retrieved at first and second moments in time.

The above collision-prevention method can be implemented by a video-sensing obstacle collision-prevention system installed on the system carrier and mainly comprising an image sensor, an arithmetic unit and an alarm. The image sensor retrieves the plurality of images so that the obstacle can be sensed, and the arithmetic unit analyzes the images. If the analysis concludes that an obstacle exists, the alarm emits sound and light or vibrates to give warning.
Description of drawings
Fig. 1 is a schematic view of the video-sensing obstacle collision-prevention system of the present invention;
Fig. 2 is the flow chart of the video-sensing obstacle collision-prevention method of the present invention;
Fig. 3 is the flow chart of the step in Fig. 2 of analyzing the plurality of images;
Fig. 4 is the imaging geometry for depth-distance measurement;
Fig. 5 is a schematic view of the hardware configuration of the photosensitive panel;
Fig. 6 is the imaging geometry for lateral-distance measurement;
Fig. 7 is an image schematic of height measurement (the pixel length l_dw of the detection window) with a vehicle as the embodiment;
Fig. 8(a)-(d) are schematics of the different l_dw a vehicle presents in the image at four different depth distances;
Fig. 9 is the image geometry for positioning the image sensor;
Fig. 10 is the flow chart of the step in Fig. 2 of executing an obstacle identification process;
Fig. 11(a)-(f) illustrate six scan-line templates;
Fig. 12 is the flow chart of the step in Fig. 2 of executing a collision-prevention strategy;
Fig. 13(a), 13(b) and 13(c) illustrate experiments on obstacle identification using the Boolean parameters;
Fig. 14 shows the effect of nighttime rainy-day pavement reflection on obstacle identification; and
Fig. 15 illustrates retrieval of the outline of a vehicle in an image.
Embodiment
Fig. 1 shows an obstacle collision-prevention system 20 with video sensing disclosed by the present invention, installed on a system carrier 24. The collision-prevention system 20 mainly comprises an image sensor 22, an arithmetic unit 26 and an alarm 25. The image sensor 22 can scan and retrieve a plurality of images covering an obstacle 21, and the arithmetic unit 26 analyzes those images. If the analysis concludes that the obstacle 21 exists, the alarm 25 emits sound and light or vibrates to give warning. In another embodiment, the image sensor 22 can retrieve images located to the front, rear, left or right of the system carrier 24, or can retrieve them at first and second moments in time.

Fig. 2 shows the flow of the obstacle collision-prevention method 10 with video sensing of the present invention. It comprises the following steps 11-16: step 11 retrieves a plurality of images and analyzes them; step 12 positions the image sensor; step 13 executes an obstacle identification process; step 14 obtains the absolute speed of the system carrier; step 15 obtains a relative distance and a relative speed between the system carrier and the obstacle; and step 16 executes a collision-prevention strategy.
The details of the above steps are described below:
Step 11 retrieves and analyzes the plurality of images; it comprises the following steps (see Fig. 3):

(a) Measure the depth distance 111 (the relative distance between the system carrier 24 and the obstacle 21):

The imaging geometry of depth-distance measurement is shown in Fig. 4, which involves two coordinate systems: the two-dimensional image-plane coordinates (X_i, Y_i) and the three-dimensional real-space coordinates (X_w, Y_w, Z_w). The origin of the former is the center O_i of the image plane 50; the origin O_w of the latter is the physical geometric center of the lens of the image sensor 22. H_C (the height of the image sensor) denotes the vertical height from O_w to the ground, and f is the focal length of the image sensor 22. The intersection of the optic axis 52 of the image sensor 22 with the ground is C, and point A lies on a ray parallel to the ground passing through O_w. Suppose a target point D lies at distance L straight ahead of point F, and the point on the image plane corresponding to D is E. Let l = O_iE, L_1 = FC, θ_1 = ∠AO_wC, θ_2 = ∠CO_wD = ∠EO_wO_i and θ_3 = ∠KO_wD = ∠GO_wE. The following relations can be obtained:

θ_1 = tan⁻¹(H_C / L_1)    (1)
θ_1 = tan⁻¹(Δp_l × (c − y_1) / f)    (2)
θ_2 = tan⁻¹(l / f)    (3)
L = H_C / tan(θ_1 + θ_2)    (4)
l = p_l × Δp_l    (5)
p_l = (f / Δp_l) × tan(tan⁻¹(H_C / L) − θ_1)    (6)

Here the focal length f of the image sensor 22 is known; c is taken as half the vertical size of the image in pixels (for a 240×320 image, c = 120); H_C and L_1 can be obtained by actual measurement; and y_1 denotes the position in the image of the end of the straight road, which the human eye can quickly judge from the image. θ_1, also called the depression angle (DA) of the image sensor 22, is an important parameter affecting the coordinate mapping; formulas (1) and (2) are two simple image calibration methods by which θ_1 can be derived without an angle-measuring instrument. The l of formula (3) can be obtained through image processing and formulas (5) and (6), where p_l is the pixel length, that is, the number of pixels spanned by O_iE in Fig. 4, and Δp_l is the spacing between pixels on the image plane. The L obtained from formula (4) is the actual distance between the image sensor 22 and the obstacle 21 ahead.

Measuring Δp_l involves understanding the hardware configuration of the image sensor 22. Taking the photosensitive panel of a CCD camera as an example, the hardware configuration is shown in Fig. 5. The photosensitive panel, with pixel resolution 640×480 (p_x × p_y), receives the external light signals, and the diagonal length S of the image sensor 22 is 1/3 inch, so the pixel spacing Δp_l (in centimeters) can be converted by formula (7):

Δp_l = S × (p_y / √(p_x² + p_y²)) × (1 / p_y) = 9.77 × 10⁻³    (7)

In addition, Δp_l can also be obtained from the image. From formulas (1)-(4), formula (8) follows:

L = H_C / tan(θ_1 + θ_2) = H_C / tan(tan⁻¹(H_C / L_1) + tan⁻¹(p_l × Δp_l / f))    (8)

When the focal length f of the image sensor 22 is known, p_l can be observed from Fig. 4, and H_C, L_1 and L can be obtained by actual measurement, from which Δp_l follows. To obtain a more representative Δp_l, noting that different p_l can correspond to different Δp_l, multiple points can be taken to yield several Δp_l values whose average is used, or the simultaneous equations in several Δp_l values and f can be solved for Δp_l. Experimental results give Δp_l = 8.31 × 10⁻³ cm, with an accuracy of 85%.
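As an illustration, the mapping of formulas (1)-(6) from an image row to a depth distance can be sketched in Python. This is only a sketch under assumed sample parameter values; the function and variable names are hypothetical, not part of the patent:

```python
import math

def depth_distance(h_c, f, dp, c, y1, p_l):
    """Estimate the depth L (cm) of a ground point from its image position.

    h_c : image-sensor height above ground, H_C (cm)
    f   : lens focal length (cm)
    dp  : pixel spacing on the image plane, delta p_l (cm/pixel)
    c   : half the vertical image size (pixels)
    y1  : image row of the end of the straight road (pixels)
    p_l : pixel length of the segment O_i-E for the target point
    """
    theta1 = math.atan(dp * (c - y1) / f)   # depression angle, formula (2)
    l = p_l * dp                            # formula (5)
    theta2 = math.atan(l / f)               # formula (3)
    return h_c / math.tan(theta1 + theta2)  # formula (4)
```

A larger p_l (the point sits lower in the image) yields a smaller L, consistent with a nearer obstacle.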
(b) Measure the lateral distance 112:

If KG and DE in Fig. 4 are detached from the figure with their internal geometric relations unchanged, Fig. 6 gives a clearer illustration. Fig. 6 shows the geometry of the DK lateral-distance measurement: if point D in the figure is moved a distance W in the negative X_w direction, point K is obtained, with real-space coordinates (W, H_C, L). Point K is imaged as point G on the image plane, at plane coordinates (w, l). Let n⃗ denote the vector O_wD and a⃗ the vector O_wK. Relations (9) and (10) can then be obtained:

θ_3 = cos⁻¹((n⃗ · a⃗) / (|n⃗| |a⃗|))    (9)
W = H_C csc(θ_1 + θ_2) tan θ_3 = w × √(H_C² + L²) / √(f² + l²)    (10)
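A matching sketch of formula (10), under the same hypothetical naming; here w and l are image-plane offsets already converted to centimeters, and the sample values in the usage are assumptions:

```python
import math

def lateral_distance(w, h_c, L, f, l):
    """Lateral offset W of a ground point, formula (10).

    w : lateral image-plane offset of the imaged point (cm)
    h_c : image-sensor height H_C (cm); L : depth of the point (cm)
    f : focal length (cm); l : image-plane offset O_i-E (cm)
    """
    return w * math.sqrt(h_c ** 2 + L ** 2) / math.sqrt(f ** 2 + l ** 2)
```

Because the mapping is a ratio of the real ray length O_wD to the image ray length O_wE, W scales linearly with the image offset w.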
(c) Measure the height 113 of the obstacle:

Fig. 7 illustrates the height-measurement method when the obstacle 21 is, as in this embodiment, a vehicle. Within the image range that a vehicle occupies, a square detection window is formed as shown, whose pixel length l_dw (length of detection window) is obtained from formula (11). In formula (11), c is half the vertical size of the image; for a 240×320 image, c = 240/2 = 120. i is the ordinate of the vehicle tail on the image plane, with coordinate values increasing from bottom to top. The p_l′ of formula (11) is obtained from formula (12), in which H_V is the height of the vehicle, H_C is the height of the image sensor, and L_p is the depth of the real-space point onto which row i maps. As shown in Figs. 8(a)-(d), for different L_p the same car presents different l_dw in the image, the image sensor 22 remaining fixed. L_p is obtained from formula (13), where θ_1 is the depression angle of the image sensor 22 from formula (1) and θ_2 = ∠CO_wD = ∠EO_wO_i (see Fig. 4).

l_dw = c + p_l′ − i    (11)
p_l′ = (f / Δp_l) × tan(θ_1 + tan⁻¹((H_V − H_C) / L_p))    (12)
L_p = H_C / tan(θ_1 + θ_2)    (13)

Table 3 lists four embodiments, with H_V = 134 cm, L_1 = 1836 cm and H_C = 129 cm, to prove that formulas (11)-(13) are feasible. The observed average error rate is about 7.21%, that is, an accuracy above 90%, so formulas (11)-(13) are evidently of practical use.

Table 3: statistics verifying the feasibility of formulas (11)-(13)
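Formulas (11)-(13) chain together as follows. This hypothetical sketch computes the detection-window length from the tail row; θ_1 is supplied directly rather than derived from formula (1), and all parameter values in the usage are assumptions:

```python
import math

def detection_window_length(i, c, f, dp, theta1, h_v, h_c):
    """Pixel length l_dw of the detection window for a vehicle whose
    tail sits at image row i (rows counted upward from the bottom).

    c: half the vertical image size (pixels); f: focal length (cm);
    dp: pixel spacing (cm); theta1: depression angle (rad);
    h_v: vehicle height H_V (cm); h_c: image-sensor height H_C (cm).
    """
    theta2 = math.atan(dp * (c - i) / f)      # viewing angle of row i, cf. (2)-(3)
    L_p = h_c / math.tan(theta1 + theta2)     # formula (13): depth row i maps to
    p_l_prime = (f / dp) * math.tan(theta1 + math.atan((h_v - h_c) / L_p))  # (12)
    return c + p_l_prime - i                  # formula (11)
```

As in Figs. 8(a)-(d), the farther the vehicle (larger i, larger L_p), the shorter the resulting window.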
Step 12 positions the image sensor; it comprises the following steps (see Fig. 9):

(a) First, scan transversely from bottom to top with scan line line1. Suppose that when scanning line1′, about every 3-5 m, points p (on the road center-strip segment 32) and p′ (on the highway sideline 31) are found with the pavement-edge-line feature;

(b) From point p, follow the road center-strip segment 32 on the left of the figure upward and downward to find the two endpoints of the segment (generally a white segment), p1 and p2, and form line3 and line2 accordingly; p1′ and p2′ are the respective intersections of line3 and line2 with the highway sideline 31 on the right of the figure;

(c) Find the intersection y_1 of the two rays p1p2 (line4) and p1′p2′ (line5);

(d) Substitute y_1 into formula (2) to obtain the depression angle θ_1 of the image sensor 22;

(e) From Fig. 9 and formula (4), formula (14) can be derived, where La and La′ are the depth distances from the image sensor 22 to line3 and line2 respectively; referring also to Fig. 4, θ_2 and θ_2′ are the angles ∠CO_wD defined by La and La′ respectively.

La = H_C / tan(θ_1 + θ_2),  La′ = H_C / tan(θ_1 + θ_2′)    (14)

From formula (14) we obtain formula (15), where C_1 is the length of one pavement line segment:

H_C = C_1 / (1/tan(θ_1 + θ_2) − 1/tan(θ_1 + θ_2′))    (15)

Once θ_1 (the depression angle of the image sensor 22) and H_C (the height of the image sensor 22 above the ground) are obtained, the image sensor 22 is positioned.

Because the obstacle collision-prevention method and device of the present invention obtain the depression angle and height of the image sensor directly by image analysis, the sensor can reposition itself automatically without field measurement, even when its position changes because the system carrier is jolted.

The automatic derivations of θ_1 and H_C above both require the two parameters f (the focal length of the lens) and Δp_l (the pixel spacing on the image plane) to be known in advance. A method of obtaining f and Δp_l automatically from the image is therefore also proposed. Formula (16) can be derived from formula (15), and formula (17) by the same reasoning:

H_C × (tan(θ_1 + θ_2′) − tan(θ_1 + θ_2)) / (tan(θ_1 + θ_2) × tan(θ_1 + θ_2′)) = C_1    (16)
H_C × (tan(θ_1 + θ_2″) − tan(θ_1 + θ_2)) / (tan(θ_1 + θ_2) × tan(θ_1 + θ_2″)) = C_10    (17)

In formulas (16) and (17), C_1 is the length of one pavement line segment and C_10 is the spacing between pavement line segments, both given values. H_C, θ_1, θ_2, θ_2′ and θ_2″ are all functions of f and Δp_l. That is, there are two unknowns, f and Δp_l, and two independent relations, (16) and (17), established in those two unknown parameters, so f and Δp_l can both be obtained.
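The self-positioning computation of formulas (2), (14) and (15) can be sketched as below. The segment-endpoint angles are assumed to be already measured from the image, and the function names and sample values are hypothetical:

```python
import math

def depression_angle(y1, c, f, dp):
    """Formula (2): depression angle theta_1 from the road-end row y1
    (c: half the vertical image size; f: focal length; dp: pixel spacing)."""
    return math.atan(dp * (c - y1) / f)

def camera_height(theta1, theta2_far, theta2_near, c1):
    """Formula (15): image-sensor height H_C from a pavement segment of
    known length c1, where theta2_far and theta2_near are the angles
    (cf. Fig. 4) subtended by the far and near ends of the segment."""
    return c1 / (1 / math.tan(theta1 + theta2_far)
                 - 1 / math.tan(theta1 + theta2_near))
```

With both functions, a single image of a lane segment of known length recovers θ_1 and H_C, which is what allows the sensor to reposition itself after a jolt.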
Step 13 executes an obstacle identification process; it comprises the following steps (see Fig. 10):

(a) Set a scan-line template 131, chosen from any of the following templates; Figs. 11(a)-(f) show the image obtained in each frame.

Template one: single-line scan line, as in Fig. 11(a).

Template two: meander scan line, as in Fig. 11(b). Its scan mode is as follows: the range enclosed by the two edge lines 33 is several meters wide in total, left and right ahead of the position of the image sensor 22, and the scan length is decided as required. The scan line 40 scans pixels one by one upward from the bottom of the image in a zigzag, turning at every advance of several meters of depth distance; the advance distance is likewise decided as required.

Template three: three-line scan line, as in Fig. 11(c), a template of three linear scan lines 40 covering, left and right, straight ahead of the system carrier 24 carrying the image sensor 22, a total width of 1.5 times the width of the system carrier 24.

Template four: five-line scan line, as in Fig. 11(d); its coverage adds two further scan lines 40 to the three scan lines 40 of Fig. 11(c).

Template five: turning scan line, as in Fig. 11(e). Its greatest difference from the scan lines 40 of Fig. 11(c) is that this template's adjustment enlarges the coverage of the left and right edge scan lines 40, so it can serve as the scan-line template when the vehicle turns.

Template six: lateral scan line, as in Fig. 11(f).

If template four is used, obstacles that cut in suddenly from the opposite direction or a crossroad, cut in after overtaking, or stop abruptly can be detected. Moreover, because oncoming obstacles can be detected, the template can also serve at night as the basis for automatically switching between high and low beam when meeting an oncoming vehicle: when the measured distance between an oncoming obstacle and the system carrier is less than a set value C_13 meters, the lamps are switched to low-beam illumination; otherwise they are switched to high-beam illumination.
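The meander scan of template two can be sketched as a coordinate generator; the image size and step lengths are assumed parameters, and the function name is hypothetical:

```python
def zigzag_scan(width, height, step_x, step_y):
    """Generate (x, y) pixel coordinates for a meander-type scan line
    (template two): sweep alternately left-to-right and right-to-left,
    climbing step_y rows after each pass from the bottom of the image."""
    coords = []
    left_to_right = True
    for y in range(0, height, step_y):
        xs = list(range(0, width, step_x))
        if not left_to_right:
            xs.reverse()
        coords.extend((x, y) for x in xs)
        left_to_right = not left_to_right
    return coords
```

The generated coordinates would then be fed, in order, to the edge-point test of step (b).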
(b) Provide an edge-point identification 132, as follows: compute the Euclidean distance in color level between neighboring pixels on the scan line. If the image is a color image, let E(k) denote the Euclidean distance between the k-th and (k+1)-th pixels:

E(k) = √(((R_{k+1} − R_k)² + (G_{k+1} − G_k)² + (B_{k+1} − B_k)²) / 3)

If E(k) is greater than C_2, the k-th pixel is regarded as an edge point, where R_k, G_k and B_k are the red, green and blue color-level values of the k-th pixel and C_2 is a critical constant that can be set empirically. If the image is a black-and-white grayscale image, then E(k) is defined as Gray_{k+1} − Gray_k, and if E(k) is greater than C_3 the k-th pixel is regarded as an edge point, where Gray_k is the grayscale value of the k-th pixel and C_3 is a critical constant.
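The edge-point rule of step (b) for a color image can be sketched as follows (hypothetical naming; the threshold value in the usage is an assumption):

```python
import math

def edge_points(scanline, c2):
    """Mark edge pixels along one scan line of a color image: pixel k is
    an edge point when the Euclidean color distance E(k) to pixel k+1
    exceeds the critical constant c2.

    scanline : list of (R, G, B) tuples along the scan path
    """
    edges = []
    for k in range(len(scanline) - 1):
        (r1, g1, b1), (r2, g2, b2) = scanline[k], scanline[k + 1]
        e = math.sqrt(((r2 - r1) ** 2 + (g2 - g1) ** 2 + (b2 - b1) ** 2) / 3)
        if e > c2:
            edges.append(k)
    return edges
```

For example, a grey road abruptly followed by a dark bumper produces exactly one edge point at the transition.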
(c) Set a scan mode 133, chosen from either of the following:

(c.1) Detection-window scan mode: scan from bottom to top; when an edge point is found, assume that point is the position of a vehicle tail in the image, establish a detection window there accordingly, and then analyze the pixel data of the scan line within the detection window. When the obstacle 21 is at different depth distances from the image sensor 22, the detection-window length l_dw differs. Figs. 8(a)-(d) take a car as the example: at different depths the car yields different detection-window lengths l_dw. Taking car identification as the example of this scan mode's end point, the l_dw of the tail at i = 0 in the Fig. 8(a) image is denoted l_dw_m, so that l_dw = l_dw_m − i, where l_dw_m represents the detection-window length formed by the image position of the front car's tail when i = 0 (the bottom of the image).

(c.2) Stepwise scan mode: scan and analyze the image pixels step by step from bottom to top, without establishing a detection window. The scan stop point is generally the image position of the end of the road.
(d) provide the true-false value 134 of two cloth woods parameters, method is as follows:
(d.1) utilize barrier 21 bottoms to have the characteristic of shade.Because three-dimensional thing can produce shade, non-three-dimensional things such as the graticule on road surface can't produce shade, thereby described shade can be used as the foundation of differentiating barrier 21.A cloth woods parameter a is provided, and then the true and false of a decided by formula (18) and (19),
If N shadow _ pixel l dw ≥ C 4 Set up, then a is true (18)
If N shadow _ pixel l dw < C 4 Set up, then a is false (19)
where l_dw is the detection-interval length and N_shadow_pixel is the number of pixels that satisfy the shadow characteristic, usually taken from a segment about C_5 × l_dw long at the bottom of the detection interval; C_4 and C_5 are constants.
In addition, a shadow pixel (shadow_pixel) at the vehicle bottom should satisfy the relation in formula (20):
shadow_pixel:  R ≤ C_6 × R_r (color image)  or  Gray ≤ C_7 × Gray_r (grayscale image)    (20)
The symbols in formula (20) are as follows: when a color image is analyzed, R is the red level value of the pixel data and R_r is the RGB level value of the gray road; when a black-and-white grayscale image is analyzed, Gray is the level value of the pixel data and Gray_r is the level value of the road. The color level value of the gray road is usually obtained by taking the pixel group on the image that matches gray characteristics and computing its color mean; C_6 and C_7 are constants. In addition, the color mean of that pixel group can be used to judge the ambient brightness at the position of system carrier 24, as a basis for automatically adjusting headlight brightness: the brighter the sky, the dimmer the headlights are set, and the darker the sky, the brighter the headlights are set.
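A hedged sketch of formulas (18)-(20) for the color-image case: the constants C_4 = 0.3 and C_6 = 0.6 and the road reference level R_r are placeholder values chosen only for this example, not values given in the patent.

```python
def is_shadow_pixel(R, R_r, C6=0.6):
    # formula (20), color case: a shadow pixel is markedly darker than the road
    return R <= C6 * R_r

def boolean_a(red_values, l_dw, R_r, C4=0.3):
    # formulas (18)/(19): a is true when the shadow-pixel count, normalized by
    # the detection-interval length l_dw, reaches the constant C4
    n_shadow = sum(1 for R in red_values if is_shadow_pixel(R, R_r))
    return n_shadow / l_dw >= C4

shadow_region = [40, 45, 50, 120]      # dark pixels under a vehicle + one road pixel
marking_region = [200, 210, 205, 198]  # bright pavement marking, no shadow
print(boolean_a(shadow_region, l_dw=8, R_r=120))   # True  -> parameter a true
print(boolean_a(marking_region, l_dw=8, R_r=120))  # False -> parameter a false
```

In practice the pixel samples would come from the bottom C_5 × l_dw segment of the detection interval established by the scan.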
(d.2) Use the characteristic that light projected or reflected by obstacle 21 exhibits a luminance gradient. When the sky is dark, as in most night-time image recognition, the obstacle's position in the image can be judged from luminance. Because luminance is distributed over many levels, identifying obstacles purely by computing the luminance distribution wastes computing performance, and the position found may still not be the exact obstacle position. A Boolean parameter b is therefore provided as the basis for judging whether something is obstacle 21; the truth value of b is determined by formula (21):
If R ≥ C_8 or Gray ≥ C_9 holds, then b is true; otherwise b is false (21)
where R is the red level value of the pixel-group data when a color image is analyzed, and Gray is the grayscale level value of the pixel data when a black-and-white grayscale image is analyzed. By analyzing color or grayscale images, positions where the R or Gray level values of a pixel group rise (or fall) to the critical constants C_8, C_9 are generally obstacle positions in the image.
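Formula (21) can be sketched as a small predicate; C_8 and C_9 here are placeholder critical constants, not values from the patent.

```python
def boolean_b(R=None, Gray=None, C8=180, C9=170):
    # formula (21): b is true when the red level (color image) or gray level
    # (grayscale image) of the pixel group reaches the critical constant
    if R is not None and R >= C8:
        return True
    if Gray is not None and Gray >= C9:
        return True
    return False

print(boolean_b(R=220))    # True: bright tail lights / reflectors at night
print(boolean_b(Gray=60))  # False: dark road surface
```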
(e) Judge the obstacle type 135, where the two Boolean parameters concerning the obstacle-shadow characteristic and the luminance-gradient characteristic of light projected or reflected by the obstacle are denoted a and b respectively. The recognition rules differ between day and night; the switching time between the daytime and night-time rules (after which the rules are exchanged) is determined by the system clock in the arithmetic unit. The recognition rules comprise the following discrimination steps:
(i) In daytime recognition, if a is true, the obstacle is identified as a road vehicle such as an automobile, a motorcycle, or a bicycle, i.e., an obstacle with shadow pixels at its bottom;
(ii) In daytime recognition, if a is false, the obstacle is identified as an obstacle without shadow pixels at its bottom, such as a pavement marking, a tree shadow, a guardrail, a mountain wall, a house, a traffic island, or a person;
(iii) In night-time recognition, if b is true, the obstacle is identified as a three-dimensional obstacle such as a motor vehicle, a guardrail, a mountain wall, a house, a traffic island, or a person; and
(iv) In night-time recognition, if b is false, the obstacle is identified as a road marking, or the road is clear.
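The four discrimination steps (i)-(iv) can be condensed into one decision function. This is a sketch, not the patent's implementation; it assumes the day/night flag is supplied externally (in the patent it comes from the system clock in the arithmetic unit).

```python
def classify_obstacle(daytime, a=None, b=None):
    if daytime:
        # (i)/(ii): daytime uses Boolean a (bottom-shadow characteristic)
        return ("road vehicle (shadow at bottom)" if a
                else "shadowless object (marking, tree shadow, guardrail, ...)")
    # (iii)/(iv): night uses Boolean b (projected/reflected luminance gradient)
    return ("three-dimensional obstacle" if b
            else "road marking or clear road")

print(classify_obstacle(daytime=True, a=True))    # road vehicle (shadow at bottom)
print(classify_obstacle(daytime=False, b=False))  # road marking or clear road
```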
Figures 13(a), 13(b) and 13(c) comprise 17 subfigures, (a)-(q), which illustrate recognition performed with the recognition rules for judging obstacle type 135. A single-line scan line is used here for scanning recognition, with obstacles on the road as the main recognition targets, to verify the feasibility of the obstacle recognition rules; the experimental data obtained are collected in Table 4.
Subfigures (a)-(k) of Figures 13(a), 13(b) and 13(c) illustrate daytime obstacle recognition, based mainly on the recognition rule with Boolean parameter a; subfigures (l)-(q) illustrate night-time obstacle recognition, based mainly on the recognition rule with Boolean parameter b.
In subfigures (a)-(q) of Figures 13(a), 13(b) and 13(c), L1 marks the range scanned by the single-line scan line. L2 marks the points on each scan line L1 where the Euclidean distance in color level between neighboring pixels exceeds an empirically chosen boundary threshold, and which are therefore regarded as real edges (the horizontal coordinate offset between L1 and L2 is set to 25 here). Daytime recognition relies mainly on Boolean parameter a: L3 marks the position of an obstacle whose bottom has shadow pixels, such as a motor vehicle, classed here as an o1-type obstacle; L4 marks a position without shadow pixels at the bottom, lying on the real edge nearest system carrier 24, such as a road marking, tree shadow, guardrail, mountain wall, house, traffic island, or person, classed here as an o2-type obstacle. Night-time recognition relies mainly on Boolean parameter b: L5 marks the position of a three-dimensional obstacle, such as a motor vehicle, guardrail, mountain wall, house, traffic island, or person, whose function or characteristic is to emit or reflect light, classed here as an o3-type obstacle.
Table 4: Recognition results in daytime and at night and the rules applied
| Subfigure (obstacles) | N_shadow_pixel / l_dw | Parameter a | Marking |
|---|---|---|---|
| (a) automobile | 0.416 | true | L3 marks o1 |
| (b) automobile; tree shadow | 0.588 (automobile); 0 (tree shadow) | true; false | L3 marks o1; L4 marks o2 |
| (c) automobile; pavement marking | 0.612 (automobile); 0 (marking) | true; false | L3 marks o1; L4 marks o2 |
| (d) motorcycle; pavement marking | 0.313 (motorcycle); 0 (marking) | true; false | L3 marks o1; L4 marks o2 |
| (e) bicycle; pavement marking | 0.24 (bicycle); 0 (marking) | true; false | L3 marks o1; L4 marks o2 |
| (f) guardrail | 0 | false | L4 marks o2 |
| (g) mountain wall | 0 | false | L4 marks o2 |
| (h) house | 0 | false | L4 marks o2 |
| (i) traffic island | 0 | false | L4 marks o2 |
[Rows (j)-(q) of Table 4 appear only as an image in the original publication.]
As Table 4 and subfigures (a)-(q) of Figures 13(a), 13(b) and 13(c) show, Boolean parameters a and b can accurately and stably identify, around the clock, many kinds of obstacle that may affect traffic safety.
On rainy nights, however, recognition errors may still occur. Blocks A, B and C in Figure 14 are the reflection positions where a street lamp (A), brake lamps (B) and headlights (C) shine on standing water on the road surface (not shown); the R, G, B level values of the pixel groups within blocks A, B and C are roughly distributed as follows:
Block A:R:200-250; G:170-220; B:70-140
Block B:R:160-220; G:0-20; B:0-40
Block C:R:195-242; G:120-230; B:120-21
Therefore, under the logic of formula (21), blocks A, B and C would very likely be judged as obstacles, contrary to fact.
To solve the problem of blocks A, B and C in Figure 14 being mistaken for obstacles, a headlight with enhanced blue-level brightness can be installed on the system carrier; the subsequent recognition process can then overcome the misjudgments caused by road-surface reflection. The rainy-night recognition rule proceeds as follows:
(a) When the scan line, scanning from bottom to top, reaches block A, B or C, formula (21) is first modified into formula (22) as the obstacle recognition rule:

If B ≥ C_11 or Gray ≥ C_12 holds, then b is true; otherwise b is false (22)

where B is the blue level value of the pixel data when a color image is analyzed, and Gray is the grayscale level value of the pixel data when a black-and-white grayscale image is analyzed. By analyzing color or grayscale images, positions where the B or Gray level values of a pixel group rise (or fall) to the critical constants C_11, C_12 are generally obstacle positions in the image.
Taking Figure 14 as an example, blocks A and B will then not be regarded as obstacles.
(b) Taking Figure 14 as an example, block B is identified as an obstacle.
(c) The rainy-night recognition rule illuminates the obstacle with the light of the blue-enhanced headlight installed on system carrier 24. If the light reflected from the obstacle shows the characteristic of reaching a certain blue level value, it can be judged to be the reflection of a three-dimensional obstacle; otherwise it can be regarded as a street lamp reflecting in standing water on the road surface. The presence or absence of standing water can in turn serve as a basis for judging whether the weather is rainy. Taking Figure 14 as an example, block A is identified as a non-obstacle, and from this it can be judged whether the weather at the system carrier's location is rainy.
(d) Taking Figure 14 as an example, block C is identified as an obstacle but is not in the same lane as the system carrier; the actual distance (obstacle distance) between the obstacle entity C and the camera on the system carrier is inferred from geometric principles as in formula (23):

obstacle distance = (distance of block C in Figure 14) × (headlight height + camera mounting height) / camera mounting height    (23)

If block C and the system carrier are in the same lane, the obstacle distance equals the distance of block C in Figure 14.
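Formulas (22) and (23) of the rainy-night rule can be sketched together. C_11/C_12, the heights, and the sample level values are illustrative placeholders loosely based on the Figure 14 ranges quoted above, not values given in the patent.

```python
def boolean_b_rain(B=None, Gray=None, C11=150, C12=170):
    # formula (22): with a blue-enhanced headlight, a high blue level marks
    # the reflection of a real three-dimensional obstacle
    if B is not None and B >= C11:
        return True
    if Gray is not None and Gray >= C12:
        return True
    return False

def obstacle_distance(reflection_distance, lamp_height, camera_height):
    # formula (23): the reflection point in the standing water lies between
    # carrier and obstacle, so the true obstacle distance is scaled up by
    # (lamp height + camera height) / camera height
    return reflection_distance * (lamp_height + camera_height) / camera_height

print(boolean_b_rain(B=105))  # block A, street-lamp reflection: False
print(obstacle_distance(20.0, lamp_height=0.6, camera_height=1.2))  # 30.0
```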
Referring to Figure 9, step 14 obtains the absolute speed of the system carrier, as follows:
(a) After point p1 in Figure 9 has been found (p1 is the endpoint of the road center-line segment 32), the position of p1 in the next image is located. It is assumed here that the road center-line segment 32 is a white line segment.
(b) The p1 point of the next image can therefore be sought by transversal scans with the line1 scan line of Figure 9, working downward every 3-5 meters in turn, or, since the distance is usually short, by following the slope of p1p2 in Figure 9 downward to find the endpoint of the white line segment.
(c) Comparing the two successive images, the change in position of the white-segment endpoint p1 yields its actual displacement; this distance is exactly the distance moved by system carrier 24 carrying image sensor 22. Dividing it by the retrieval-time difference between the two images gives the absolute speed of system carrier 24.
In addition, the absolute speed of system carrier 24 can also be obtained directly from the speedometer of system carrier 24 via an analog-to-digital converter.
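The frame-differencing speed estimate of steps (a)-(c) reduces to one division. This sketch assumes the endpoint's depth distances have already been mapped from image coordinates to meters via formulas (1)-(6); all numbers are illustrative.

```python
def absolute_speed(p1_depth_prev, p1_depth_next, dt):
    # the white-line endpoint appears to move toward the carrier by exactly
    # the distance the carrier traveled between the two image retrievals
    return abs(p1_depth_next - p1_depth_prev) / dt

v = absolute_speed(p1_depth_prev=12.0, p1_depth_next=11.0, dt=0.05)
print(v)        # 20.0 m/s
print(v * 3.6)  # 72.0 km/h
```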
Step 15 obtains a relative distance and a relative speed between the system carrier and the obstacle, as follows:
After the position of obstacle 21 in the image has been identified, the relative distance L between system carrier 24 and obstacle 21 can be obtained from formulas (1)-(6), as shown in formula (24):

L = H_c / tan( θ_1 + tan^{-1}( p_l × Δp_l / f ) )    (24)

Here the height H_c of image sensor 22, the depression angle θ_1, the focal length f, and the pixel spacing Δp_l are known, and p_l is obtained from the vehicle's position in the image. The relative velocity (RV) of system carrier 24 and obstacle 21 is obtained from formula (25):

RV = ΔL(t) / Δt    (25)

where Δt and ΔL(t) are, respectively, the time difference between retrieving two successive images and the difference in the identified vehicle distance.
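Formulas (24) and (25) can be written out directly. The calibration values below (H_c, θ_1, f, Δp_l) are illustrative placeholders for a roof-mounted camera, not values from the patent.

```python
import math

def relative_distance(H_c, theta1, p_l, dp, f):
    # L = H_c / tan(theta1 + atan(p_l * dp / f))      ... formula (24)
    return H_c / math.tan(theta1 + math.atan(p_l * dp / f))

def relative_velocity(L_prev, L_next, dt):
    # RV = dL(t) / dt                                  ... formula (25)
    return (L_next - L_prev) / dt

# the vehicle image moves 4 pixel rows lower between two frames 0.1 s apart
L1 = relative_distance(H_c=1.2, theta1=math.radians(5), p_l=40, dp=1e-5, f=0.008)
L2 = relative_distance(H_c=1.2, theta1=math.radians(5), p_l=44, dp=1e-5, f=0.008)
print(round(L1, 2), round(L2, 2), round(relative_velocity(L1, L2, 0.1), 2))
```

A negative RV means the gap is closing, which feeds the anti-collision strategy of step 16.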
Step 16 executes an anti-collision strategy, which comprises the following steps (see Figure 12):
(a) Provide an equivalent speed 161, defined as the larger of the absolute speed of system carrier 24 and the relative speed at which system carrier 24 and obstacle 21 approach each other;
(b) Provide a safe distance 162. The safe distance is approximately the equivalent speed times a factor of about one one-thousandth to one two-thousandth, plus about 10 meters. In one preferred embodiment, the safe distance in meters is defined as half the numerical value of the equivalent speed in kilometers per hour, plus five;
(c) Provide a safety coefficient 163, defined as the ratio of the relative distance to the safe distance, bounded between 0 and 1;
(d) Provide a warning degree 164, defined as 1 minus the safety coefficient;
(e) Emit sound and light or produce vibration 165. According to the magnitude of the warning degree, alarm 25 emits sound and light or vibrates to warn the driver of system carrier 24, and can also warn people around system carrier 24 with sound and light;
(f) Provide retrieval and display 166 of a bounding box around obstacle 21 in the image. Referring to Figure 15, the width w_a of the bounding box is the measured width w_b of the vehicle-bottom shadow in daytime recognition, or the width w_c of the light reflected from the vehicle rear in night-time recognition; the height h_a of the bounding box is l_dw as given by formula (11);
(g) Provide a secondary absolute speed 167, defined as the product of the current absolute speed of system carrier 24 and the safety coefficient; and
(h) Provide a recording function 168. In a preferred embodiment, the recording function is activated only when the safety coefficient falls below an empirical constant (for example, 0.8), to record the scene just before a hazard occurs, so that the recording function need not remain on continuously.
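Steps (a)-(h) can be condensed into one function. It uses the preferred-embodiment safe distance (half the equivalent speed in km/h plus five meters) and the example recording threshold 0.8 quoted above; the input values are illustrative.

```python
def anti_collision(abs_speed_kmh, rel_speed_kmh, rel_distance_m):
    v_eq = max(abs_speed_kmh, rel_speed_kmh)                  # (a) equivalent speed
    safe_dist = v_eq / 2 + 5                                  # (b) safe distance [m]
    safety = min(max(rel_distance_m / safe_dist, 0.0), 1.0)   # (c) bounded to [0, 1]
    warning = 1.0 - safety                                    # (d) warning degree
    suggested = abs_speed_kmh * safety                        # (g) secondary speed
    record = safety < 0.8                                     # (h) recording trigger
    return safe_dist, safety, warning, suggested, record

# 100 km/h, closing at 60 km/h, obstacle 33 m ahead
print(anti_collision(abs_speed_kmh=100, rel_speed_kmh=60, rel_distance_m=33))
# (55.0, 0.6, 0.4, 60.0, True)
```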
Although the embodiment described above uses an automobile, any obstacle with edge features can be recognized with the disclosed method; the obstacles referred to in the present invention thus include automobiles, motorcycles, trucks, vans, trains, people, dogs, guardrails, traffic islands, houses, and the like.
The system carrier 24 above is described with an automobile as the example, but practical application is not limited to automobiles; system carrier 24 can be any vehicle, such as a motorcycle, truck, or van.
In the embodiments described above, any device capable of retrieving images can serve as image sensor 22; image sensor 22 can therefore be any of a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, a digital camera, a single-strip camera, or a camera on a portable device, among others.
The technical contents and features of the present invention have been disclosed above; those of ordinary skill in the art may nevertheless, based on the teachings and disclosure of the present invention, make substitutions and modifications that do not depart from the spirit of the present invention. The protection scope of the present invention should therefore not be limited to the content disclosed by the embodiments, but should include the various substitutions and modifications that do not depart from the present invention, as covered by the appended claims.

Claims (28)

1. An obstacle collision-avoidance method with video sensing, characterized in that it is applicable to a system carrier on which an image sensor is mounted, the collision-avoidance method comprising the following steps:
retrieving and analyzing a plurality of images;
positioning the image sensor;
executing an obstacle recognition process;
obtaining an absolute speed of the system carrier;
obtaining a relative distance and a relative speed between the system carrier and an obstacle; and
executing an anti-collision strategy.
2. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that the step of positioning the image sensor obtains the depression angle of the image sensor, the distance between the image sensor and the ground, the focal length of the image sensor lens, and the spacing between pixels on the image plane.
3. The obstacle collision-avoidance method with video sensing of claim 2, characterized in that obtaining the depression angle of the image sensor and the distance between the image sensor and the ground comprises the following steps:
performing a transversal scan with horizontal scan lines at intervals from bottom to top;
identifying a feature point having the feature of a pavement edge line;
identifying two first endpoints of a feature line segment at the feature point;
drawing two horizontal lines through the two first endpoints by horizontal scanning, the two horizontal lines intersecting another feature line segment at two second endpoints respectively;
identifying the intersection point of the line through the two first endpoints and the line through the two second endpoints;
obtaining the depression angle of the image sensor; and
obtaining the distance from the image sensor to the ground.
4. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that obtaining the depression angle of the image sensor and the distance between the image sensor and the ground further comprises the following steps:
obtaining the focal length of the image sensor lens; and
obtaining the spacing between pixels on the image plane.
5. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that the depression angle of the image sensor is obtained from the spacing between pixels on the image, half the longitudinal length of the image, the focal length of the image sensor, and the intersection point.
6. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that the distance from the image sensor to the ground is obtained from the depression angle of the image sensor and the depth distances from the two horizontal lines to the image sensor.
7. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that the depression angle of the image sensor is obtained from the following formula:

θ_1 = tan^{-1}( Δp_l × (c - y_1) / f )

where θ_1 is the depression angle of the image sensor, Δp_l is the spacing between pixels on the image, c is half the longitudinal length of the image, y_1 is the position of the intersection point, and f is the focal length of the image sensor.
8. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that the distance from the image sensor to the ground is obtained from the following formula:

H_c = C_1 / ( 1/tan(θ_1 + θ_2) - 1/tan(θ_1 + θ_2') )

where H_c is the distance from the image sensor to the ground, C_1 is the length of a road-surface line segment, θ_1 is the depression angle of the image sensor, and θ_2, θ_2' respectively satisfy La = H_c / tan(θ_1 + θ_2) and La' = H_c / tan(θ_1 + θ_2'), La and La' being the depth distances from two horizontal lines to the image sensor.
9. The obstacle collision-avoidance method with video sensing of claim 3, characterized in that the focal length of the lens and the spacing between pixels on the image plane are obtained from the following two formulas:

H_c × ( tan(θ_1 + θ_2') - tan(θ_1 + θ_2) ) / ( tan(θ_1 + θ_2) × tan(θ_1 + θ_2') ) = C_1

H_c × ( tan(θ_1 + θ_2'') - tan(θ_1 + θ_2) ) / ( tan(θ_1 + θ_2) × tan(θ_1 + θ_2'') ) = C_10

where C_1 is the length of a road-surface line segment, C_10 is the spacing between road-surface line segments, H_c is the distance from the image sensor to the ground, θ_1 is the depression angle of the image sensor, H_c, θ_1, θ_2, θ_2' and θ_2'' are all functions of f and Δp_l, f is the focal length of the lens, Δp_l is the spacing between pixels on the image plane, and θ_2, θ_2' respectively satisfy La = H_c / tan(θ_1 + θ_2) and La' = H_c / tan(θ_1 + θ_2'), La and La' being the depth distances from two horizontal lines to the image sensor.
10. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that the obstacle recognition process comprises the following steps:
setting a scan-line type, the scan-line type being selected from: a single-line scan line, a serpentine scan line, a three-line scan line, a five-line scan line, a turning scan line, and a lateral scan line;
providing an edge-point evaluation;
setting a scan mode, the scan mode being a detection-interval mode or a stepwise mode;
providing at least one of two Boolean parameters, the two Boolean parameters respectively concerning the obstacle-shadow characteristic and the luminance-gradient characteristic of light projected or reflected by the obstacle;
judging the truth values of the Boolean parameters; and
judging the obstacle type.
11. The obstacle collision-avoidance method with video sensing of claim 10, characterized in that the edge-point evaluation comprises the following steps:
computing the Euclidean distance in color level between a pixel on the horizontal scan line and its neighboring pixel; and
regarding the pixel as an edge point if the Euclidean distance is greater than a critical constant.
12. The obstacle collision-avoidance method with video sensing of claim 10, characterized in that the truth value of the Boolean parameter concerning the obstacle-shadow characteristic is judged by the following formulas:
if N_shadow_pixel / l_dw ≥ C_4 holds, the Boolean parameter is true;
if N_shadow_pixel / l_dw < C_4 holds, the Boolean parameter is false;
where C_4 is a constant, l_dw is the length of the detection interval, and N_shadow_pixel is the number of pixels satisfying the shadow characteristic.
13. The obstacle collision-avoidance method with video sensing of claim 10, characterized in that the truth value of the Boolean parameter concerning the luminance-gradient characteristic of light projected or reflected by the obstacle is judged by the following formula:
if R ≥ C_8 or Gray ≥ C_9 holds, the Boolean parameter is true; otherwise it is false;
where C_8, C_9 are critical constants; R is the red level value of the pixel-group data when a color image is analyzed; and Gray is the grayscale level value of the pixel-group data when a black-and-white image is analyzed.
14. The obstacle collision-avoidance method with video sensing of claim 10, characterized by further comprising a rainy-night recognition rule, which illuminates the obstacle with light emitted by a blue-enhanced headlight installed on the system carrier and judges, from the blue level value of the light reflected by the obstacle, the type of the obstacle and whether the weather is rainy.
15. The obstacle collision-avoidance method with video sensing of claim 14, characterized in that the truth value of the Boolean parameter concerning the luminance-gradient characteristic of light projected or reflected by the obstacle is judged by the following formula:
if B ≥ C_11 or Gray ≥ C_12 holds, the Boolean parameter is true; otherwise it is false;
where C_11, C_12 are critical constants; B is the blue level value of the pixel-group data when a color image is analyzed; and Gray is the grayscale level value of the pixel-group data when a black-and-white image is analyzed.
16. The obstacle collision-avoidance method with video sensing of claim 10, characterized by further comprising a daytime/night-time rule-switching step, wherein the daytime recognition rule uses the Boolean parameter for the obstacle-shadow characteristic, the night-time recognition rule uses the Boolean parameter for the luminance-gradient characteristic of light projected or reflected by the obstacle, and the switching time of the switching step is set in an arithmetic unit provided on the system carrier.
17. The obstacle collision-avoidance method with video sensing of claim 10, characterized in that if the truth value of the Boolean parameter concerning the obstacle-shadow characteristic is true, the obstacle is identified as an object with shadow pixels at its bottom; otherwise the obstacle is identified as an object without shadow pixels at its bottom.
18. The obstacle collision-avoidance method with video sensing of claim 10, characterized in that if the truth value of the Boolean parameter concerning the luminance-gradient characteristic of light projected or reflected by the obstacle is true, the obstacle is identified as a three-dimensional obstacle; otherwise the road is identified as clear.
19. The obstacle collision-avoidance method with video sensing of claim 10, characterized by further comprising an automatic high/low-beam switching step, which switches according to whether the computed distance between the system carrier and an oncoming obstacle is less than a specific distance.
20. The obstacle collision-avoidance method with video sensing of claim 10, characterized by further comprising an automatic headlight-brightness adjustment step, which captures road pixels and computes their mean color level to judge the ambient brightness at the position of the system carrier, as the basis for automatically adjusting headlight brightness.
21. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that obtaining the absolute speed of the system carrier comprises the following steps:
identifying the position of an endpoint of a feature line segment in a first image;
identifying the position of the endpoint in a second image; and
dividing the distance between the two endpoint positions by the time difference between retrieving the first and second images;
wherein the first and second images are included in the plurality of images, and the second image is retrieved later than the first image.
22. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that the anti-collision strategy comprises the following steps:
providing an equivalent speed, selected as the larger of the absolute speed and the relative speed;
providing a safe distance determined by the equivalent speed;
providing a safety coefficient, defined as the ratio of the relative distance to the safe distance and bounded between 0 and 1;
providing a warning degree, defined as 1 minus the safety coefficient;
warning the driver of the system carrier by sound, light, or vibration, or warning people around the system carrier by sound and light, according to the magnitude of the warning degree;
providing retrieval and display of a bounding box around the obstacle in the image;
providing a secondary absolute speed, defined as the product of the current absolute speed of the system carrier and the safety coefficient; and
providing a recording function.
23. The obstacle collision-avoidance method with video sensing of claim 22, characterized in that the recording function is activated when the safety coefficient is less than an empirical constant.
24. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that the absolute speed of the system carrier can be obtained directly from the speedometer of the system carrier.
25. The obstacle collision-avoidance method with video sensing of claim 1, characterized in that the image sensor is one selected from: a charge-coupled device camera, a complementary metal-oxide-semiconductor camera, a single-strip camera, and a camera on a handheld communication device.
26. An obstacle collision-avoidance system with video sensing, characterized in that it is applied to a system carrier and comprises:
an image sensor for retrieving a plurality of images so that obstacles can be sensed; and
an arithmetic unit providing the following functions:
(a) analyzing the plurality of images;
(b) executing an obstacle recognition process according to the analysis result of the plurality of images, to judge whether an obstacle exists; and
(c) executing an anti-collision strategy.
27. The obstacle collision-avoidance system with video sensing of claim 26, characterized by further comprising an alarm which, when analysis of the plurality of images judges that an obstacle is present, emits sound and light or produces vibration.
28. The obstacle collision-avoidance system with video sensing of claim 26, characterized in that the image sensor is one selected from: a charge-coupled device camera, a complementary metal-oxide-semiconductor camera, a single-strip camera, and a camera on a handheld communication device.
CN 200510073059 2004-11-19 2005-05-27 Method and device for preventing collison by video obstacle sensing Pending CN1782668A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN 200510073059 CN1782668A (en) 2004-12-03 2005-05-27 Method and device for preventing collison by video obstacle sensing
US11/260,723 US20060111841A1 (en) 2004-11-19 2005-10-27 Method and apparatus for obstacle avoidance with camera vision
JP2005332937A JP2006184276A (en) 2004-11-19 2005-11-17 All-weather obstacle collision preventing device by visual detection, and method therefor

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN200410096623 2004-12-03
CN200410096623.X 2004-12-03
CN 200510073059 CN1782668A (en) 2004-12-03 2005-05-27 Method and device for preventing collison by video obstacle sensing

Publications (1)

Publication Number Publication Date
CN1782668A true CN1782668A (en) 2006-06-07

Family

ID=36773114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200510073059 Pending CN1782668A (en) 2004-11-19 2005-05-27 Method and device for preventing collison by video obstacle sensing

Country Status (1)

Country Link
CN (1) CN1782668A (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101161524B (en) * 2006-10-12 2010-10-27 财团法人车辆研究测试中心 Method and apparatus for detecting vehicle distance
CN101030316B (en) * 2007-04-17 2010-04-21 北京中星微电子有限公司 Safety driving monitoring system and method for vehicle
CN101458083B (en) * 2007-12-14 2011-06-29 财团法人工业技术研究院 Structure light vision navigation system and method
CN101266132B (en) * 2008-04-30 2011-08-10 西安工业大学 Running disorder detection method based on MPFG movement vector
US8913128B2 (en) 2010-12-28 2014-12-16 Automotive Research & Test Center Image-based barrier detection and warning system and method thereof
CN102889892B (en) * 2012-09-13 2015-11-25 东莞宇龙通信科技有限公司 The method of real scene navigation and navigation terminal
CN102889892A (en) * 2012-09-13 2013-01-23 东莞宇龙通信科技有限公司 Live-action navigation method and navigation terminal
CN102862574B (en) * 2012-09-21 2015-08-19 上海永畅信息科技有限公司 The method of vehicle active safety is realized based on smart mobile phone
CN102862574A (en) * 2012-09-21 2013-01-09 上海永畅信息科技有限公司 Method for realizing active safety of vehicle on the basis of smart phone
CN107255470A (en) * 2014-03-19 2017-10-17 能晶科技股份有限公司 Obstacle detector
CN107255470B (en) * 2014-03-19 2020-01-10 能晶科技股份有限公司 Obstacle detection device
CN105466438A (en) * 2014-09-25 2016-04-06 通用汽车环球科技运作有限责任公司 Sensor odometry and application in crash avoidance vehicle
CN105466438B (en) * 2014-09-25 2018-09-21 通用汽车环球科技运作有限责任公司 Sensor instrument distance in anticollision vehicle and application
CN104580882A (en) * 2014-11-03 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN104580882B (en) * 2014-11-03 2018-03-16 宇龙计算机通信科技(深圳)有限公司 The method and its device taken pictures
CN105444759A (en) * 2015-11-17 2016-03-30 广东欧珀移动通信有限公司 Indoor navigation method and device thereof
CN106909141A (en) * 2015-12-23 2017-06-30 北京机电工程研究所 Obstacle detection positioner and obstacle avoidance system
CN107490365A (en) * 2016-06-10 2017-12-19 手持产品公司 Scene change detection in dimensioning device
CN107490365B (en) * 2016-06-10 2021-06-15 手持产品公司 Scene change detection in a dimensional metrology device
CN105957402A (en) * 2016-06-17 2016-09-21 平玉兰 Real-time displaying and recording device for road condition ahead of large vehicle
CN106403965A (en) * 2016-08-29 2017-02-15 北京奇虎科技有限公司 Vehicle positioning method and apparatus in traveling process, and intelligent terminal device
CN107038895A (en) * 2017-06-12 2017-08-11 苏州寅初信息科技有限公司 A kind of Vehicular intelligent safety protecting method and its system for bridge section
CN107038895B (en) * 2017-06-12 2020-07-24 何祥燕 Intelligent vehicle safety protection method and system for bridge road section
CN108010072A (en) * 2017-10-31 2018-05-08 努比亚技术有限公司 A kind of air navigation aid, terminal and computer-readable recording medium
CN109947109A (en) * 2019-04-02 2019-06-28 北京石头世纪科技股份有限公司 Robot working area map construction method and device, robot and medium
CN109947109B (en) * 2019-04-02 2022-06-21 北京石头创新科技有限公司 Robot working area map construction method and device, robot and medium
WO2020244414A1 (en) * 2019-06-03 2020-12-10 杭州海康机器人技术有限公司 Obstacle detection method, device, storage medium, and mobile robot
CN110658809A (en) * 2019-08-15 2020-01-07 北京致行慕远科技有限公司 Method and device for processing travelling of movable equipment and storage medium
CN110610593A (en) * 2019-10-16 2019-12-24 徐州筑之邦工程机械有限公司 Intelligent safety early warning system for mining truck

Similar Documents

Publication Publication Date Title
CN1782668A (en) Method and device for preventing collison by video obstacle sensing
US11222219B2 (en) Proximate vehicle localization and identification
CN114282597B (en) Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN110264783B (en) Vehicle anti-collision early warning system and method based on vehicle-road cooperation
CN113276769B (en) Vehicle blind area anti-collision early warning system and method
CN109949594B (en) Real-time traffic light identification method
US9569675B2 (en) Three-dimensional object detection device, and three-dimensional object detection method
CN102508246B (en) Method for detecting and tracking obstacles in front of vehicle
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
US20060111841A1 (en) Method and apparatus for obstacle avoidance with camera vision
CN1945596A (en) Vehicle lane Robust identifying method for lane deviation warning
JP5896027B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN1912950A (en) Device for monitoring vehicle breaking regulation based on all-position visual sensor
CN102685516A (en) Active safety type assistant driving method based on stereoscopic vision
CN105047019B (en) A kind of passenger stock prevent rear car overtake other vehicles after lane change determination methods and device suddenly
CN101075376A (en) Intelligent video traffic monitoring system based on multi-viewpoints and its method
CN101950350A (en) Clear path detection using a hierachical approach
CN107590470A (en) A kind of method for detecting lane lines and device
CN102303563B (en) System and method for prewarning front vehicle collision
US20210201057A1 (en) Traffic light recognition system and method thereof
RU2635280C2 (en) Device for detecting three-dimensional objects
Jiang et al. Target detection algorithm based on MMW radar and camera fusion
Cheng et al. A vehicle detection approach based on multi-features fusion in the fisheye images
CN113147733A (en) Intelligent speed limiting system and method for automobile in rain, fog and sand-dust weather

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication