CN103020948A - Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system - Google Patents

Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system

Info

Publication number
CN103020948A
CN103020948A (application numbers CN2011102959071A / CN201110295907A)
Authority
CN
China
Prior art keywords
image
warning system
point
intelligent vehicle
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011102959071A
Other languages
Chinese (zh)
Inventor
周刊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
No 207 Institute Of Second Academy China Aerospace Science & Industry Corp
Original Assignee
No 207 Institute Of Second Academy China Aerospace Science & Industry Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by No 207 Institute Of Second Academy China Aerospace Science & Industry Corp filed Critical No 207 Institute Of Second Academy China Aerospace Science & Industry Corp
Priority to CN2011102959071A priority Critical patent/CN103020948A/en
Publication of CN103020948A publication Critical patent/CN103020948A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the field of photoelectric technology, and specifically discloses a night image characteristic extraction method for an intelligent vehicle-mounted anti-collision pre-warning system. The method comprises the following steps: selecting the automobile tail lights as the characteristics to be extracted, and then applying a series of image processing methods to denoise the images, sharpen the image edges, select a threshold value and carry out threshold segmentation; selecting a region of interest (AOI, Area Of Interest) when the tail-light characteristics are extracted and paired; and, after extracting and pairing the tail-light characteristics, representing the target automobile by a rectangular frame and deriving a distance-measuring formula through modeling and a series of transformations. According to the method, the tail-light characteristics are highlighted, extracted and paired by a series of image processing methods so as to locate the specific position of the front automobile in the lane and provide reliable data for the intelligent vehicle-mounted anti-collision pre-warning system to calculate the distance between the front automobile and the host automobile.

Description

Nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system
Technical field
The invention belongs to the field of photoelectric technology, and specifically relates to a nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system.
Background technology
An Intelligent Transportation System (ITS) uses modern high technology to make the existing traffic system intelligent and thereby significantly improve the traffic capacity of the road network. ITS is currently a research hotspot in the global traffic field; experts hope to transform the conventional traffic system by means of intelligent control technology, computer technology and information and communication technology (ICT) in order to improve system safety. The vehicle anti-collision early warning system is one part of an intelligent transportation system. While the automobile is moving, it detects obstacles such as surrounding vehicles and raises an alarm when the distance to an obstacle falls below the safe distance, so that the driver can take measures such as decelerating or braking to avoid the obstacle. This improves the safety of automobile driving.
At present, vehicle anti-collision early warning systems include radar-based, machine-vision, interactive intelligent, laser, ultrasonic, infrared and integrated anti-collision early warning systems. A radar anti-collision early warning system works by emitting electromagnetic waves, receiving the echo reflected by an obstacle, continuously calculating the distance to the obstacle ahead and raising an alarm for targets that pose a threat. For integrated anti-collision early warning systems, since the 1980s more than 300 companies in the United States as well as many famous universities and research institutions around the world have carried out research. For example, in research on millimetre-wave radar, radiating with radar above 30 GHz can reduce the beam width of the emitted electromagnetic wave and suppress various kinds of interference and false operation.
Analysis shows that the existing anti-collision early warning systems still have defects and should be improved in the following respects: different weather conditions and different vehicles require different safe times and safe distances, so the accuracy and real-time performance of the warning can be improved by building an intelligent expert system; driver fatigue and the condition of the automobile itself contribute to collision accidents, so detection of the driver and of the automobile should be added; the field of view should be enlarged to widen the monitored range; for laser, ultrasonic and radar systems the alarm depends only on the distance between the automobile and the danger, whereas relative acceleration, relative velocity, weather and the vehicle itself should also be considered; successful application of such products cannot do without an industry standard, so establishing one is a pressing task; missed alarms and false alarms should be reduced to enhance stability; and the anti-interference capability of the early warning system should be improved.
For the nighttime images acquired by a vehicle anti-collision early warning system, the ranging methods currently used at home and abroad extract either the tail lights or the vehicle body contour. The extraction methods for tail lights mainly include the following six:
(1) Algorithms based on the deformation gradient of intensity changes in the image. The accuracy of this method is insufficient.
(2) Image segmentation based on the histogram. This method is not very targeted.
(3) Designing a lamp template in which a bright circular region is surrounded by dark points to detect the lamps, and then determining lamp pairs by further computing the symmetry of the candidate lamp regions. This method is relatively complex both in constructing the lamp template and in computing the symmetry.
(4) First detecting the vehicle lamps in the image using colour information, and then matching the lamps using motion information and prior knowledge. This method is not suitable for grayscale images.
(5) First extracting the colour features of the tail lights with fuzzy rules based on a hue-saturation-intensity model, and then using the brightness of reflections or shadow regions on the automobile to further confirm the recognition result. This method is not suitable for grayscale images either.
(6) First applying a WTH transform to the original image, then binarising it, and finally removing all kinds of interference by colour and shape features so that only the tail-light image remains. This method is relatively complex.
With the continuous advance of urbanisation in China and the growing popularity of the automobile, traffic accidents occur constantly, and night-time highway accidents are especially frequent. Research on intelligent vehicle-mounted anti-collision early warning systems and their nighttime image processing technology is therefore increasingly important.
Summary of the invention
The object of the present invention is to provide a nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system. The method highlights, extracts and pairs the tail lights by a series of image processing methods, so as to locate the specific position of the vehicle ahead in the host lane and to provide reliable data with which the intelligent vehicle-mounted anti-collision early warning system can calculate the distance between the front vehicle and the host vehicle.
The technical scheme that realises the object of the invention is a nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system, comprising the following steps:
(1) the intelligent vehicle-mounted anti-collision early warning system selects the tail lights as the feature to be extracted from among the colour, gray-scale and shape features of the vehicle;
(2) the acquired image is denoised with median filtering;
(3) the edges in the acquired image are sharpened with Laplacian sharpening;
(4) the K-means algorithm and the FCM algorithm are used to determine that a single threshold is most suitable for extracting the tail lights of the vehicle ahead in the host lane;
(5) the Otsu threshold segmentation method is used to divide the image into background and target according to its gray-scale characteristics;
(6) the acquired image is processed with the morphological opening operator to remove small speckles in the background;
(7) a region of interest is selected for the feature extraction and pairing of the tail lights;
(8) from among the geometric, brightness and colour features of the vehicle ahead, the geometric features are selected, and the tail lights are extracted and paired using the rule on their area and the fact that the two lamps lie roughly on the same horizontal line;
(9) the non-AOI region is removed, the connected regions are found and their centre points are extracted, the tail lights are extracted and paired, and finally the target vehicle is represented by a rectangular frame;
(10) the distance between the obstacle ahead and the host vehicle is obtained through modelling and a series of transformations.
The concrete steps of step (2) are as follows: a moving window containing an odd number of points is adopted, and the gray value of a specified point, normally the centre point of the window, is replaced by the median of the gray values of the points in the window; the centre pixel is set according to the result of sorting the gray values of the pixels in the neighbourhood; if the number of elements is even, the mean of the two middle gray values after sorting is taken as the median; if the number of elements is odd, the middle value after sorting by size is the median.
The concrete steps of step (3) are as follows: let the Laplacian operator be ∇², given by
∇²f = ∂²f/∂x² + ∂²f/∂y²
If the blurring of the image is caused by diffusion, then the sharpened image g is
g = f − k·∇²f.
The concrete steps of the K-means algorithm in step (4) are as follows:
(4.1) K points are picked at random from the data as the centres of the initial clusters, each representing one cluster;
(4.2) the distance from every point in the data to each of the K centres is computed, and each point is assigned to the cluster whose centre is nearest;
(4.3) each cluster centre is adjusted by moving it to the geometric centre of its cluster;
(4.4) step (4.2) is repeated until the cluster centres no longer change, at which point the algorithm has converged.
The concrete steps of step (5) are as follows:
For an image I(x, y), let T be the segmentation threshold between foreground and background, let ω0 be the proportion of foreground pixels with mean gray value μ0, and let ω1 be the proportion of background pixels with mean gray value μ1; the overall mean gray value of the image is μ and the between-class variance is g. Suppose the background of the image is darker and the image size is M × N; let N1 be the number of pixels whose gray value is greater than T and N0 the number of pixels whose gray value is less than T. Then:
ω0 = N0/(M×N)
ω1 = N1/(M×N)
N0 + N1 = M×N
ω0 + ω1 = 1
μ = ω0·μ0 + ω1·μ1
g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²
from which the between-class variance g = ω0·ω1·(μ0 − μ1)² is obtained.
T is traversed from the minimum gray value to the maximum gray value; the value of T that maximises g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)² is the optimal segmentation threshold. The foreground and background parts separated by the threshold T make up the whole image.
The area of interest in step (7) is a triangular region: the X coordinate of its apex is 1/2 of the picture width and the Y coordinate is 2/5 of the picture height, and the other two vertices are the left and right end points of the bottom edge of the picture.
The distance d_c between the obstacle ahead and the host vehicle in step (10) is d_c = h·tan(90° − γ0 − α0) + ((Y1 − Y2)² + (X1 − X2)²)^0.5.
The beneficial technical effects of the present invention are as follows. After median filtering, the brightness and shape features of the tail lights are more prominent than in the original image. After Laplacian sharpening, the contour of the vehicle is clearer, the edges of the tail-light regions are more prominent, and the regions outside the vehicle are also sharpened to some degree. The Otsu threshold segmentation makes the tail lights more prominent, removes much of the interfering information and greatly compresses the data volume. Mathematical morphology eliminates the small noise points in the background, the small holes inside objects and the jagged edges, while a fairly complete contour is recovered. After the area of interest is chosen, the non-AOI region is removed first and only the original image information inside the AOI is retained; this avoids interference from other scenery, narrows the search range, reduces the amount of data to process, speeds up the program and delimits the range for the extraction of the vehicle lamps. After pre-processing, finding the connected regions displays the tail-light regions independently and completely; after regions of unsuitable area are screened out, the centre point of each remaining connected region is computed, so that a connected region can be represented by a single point. Once the connected regions are found, the tail-light regions are clearly displayed, along with the remaining regions.
Description of drawings
Fig. 1 is the flow chart of the nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system provided by the present invention.
Fig. 2 and Fig. 3 are schematic diagrams of the monocular-vision geometric projection model.
Detailed description of the embodiments
The present invention is described in further detail below in conjunction with the drawings and embodiments.
The features of the vehicle ahead in the host lane include geometric features, brightness and colour features. The two tail lights are the most conspicuous features of the vehicle body at night; the present invention therefore selects the tail lights as the features to be extracted.
A nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system comprises the following steps:
(1) The intelligent vehicle-mounted anti-collision early warning system selects the tail lights as the feature to be extracted from among the colour, gray-scale and shape features of the vehicle.
Because the system acquires grayscale images, the colour features of the vehicle cannot be used, and the selection can only be made among the gray-scale and shape features. At night, commonly used features such as the edge contour and the shadow of the vehicle ahead are relatively blurred and hard to detect; by comparison, the two tail-light features of the vehicle ahead are the most prominent. The present invention therefore selects the tail lights as the features to be extracted.
(2) The acquired image is denoised with median filtering.
Median filtering is a nonlinear image processing method. It normally uses a moving window containing an odd number of points and replaces the gray value of a specified point, normally the centre point of the window, with the median of the gray values of the points in the window. The centre pixel is set according to the result of sorting the gray values of the pixels in the neighbourhood: if the number of elements is even, the mean of the two middle gray values after sorting is taken as the median; if the number of elements is odd, the middle value after sorting is the median. Because the median filter does not simply take an average, it produces less blurring.
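As an illustration only, the following minimal Python sketch performs the odd-window median filtering described above; the use of OpenCV and the 3×3 window size are assumptions, since the patent does not prescribe a particular implementation.

```python
import cv2
import numpy as np

def denoise_night_frame(gray: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Median-filter a grayscale frame with an odd-sized moving window.

    Each pixel is replaced by the median gray value of its ksize x ksize
    neighbourhood, which suppresses salt-and-pepper noise while blurring
    edges less than simple averaging.
    """
    assert ksize % 2 == 1, "the moving window must contain an odd number of points"
    return cv2.medianBlur(gray, ksize)

# usage: denoised = denoise_night_frame(gray_frame)
```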
(3) The edges in the acquired image are sharpened with Laplacian sharpening.
The Laplacian is an isotropic (rotation-invariant) linear operation formed as a linear combination of partial-derivative operations. Let the Laplacian operator be ∇², given by formula (1):
∇²f = ∂²f/∂x² + ∂²f/∂y² (1)
If the blurring of the image is caused by diffusion, the sharpened image g is given by formula (2):
g = f − k·∇²f (2)
where f and g are the images before and after sharpening respectively, and k is a coefficient related to the diffusion effect. Formula (2) states that the blurred image f, after Laplacian sharpening, yields the unblurred image g. The choice of k must be reasonable: if k is too large, the contour edges in the image overshoot; if k is too small, the sharpening is not obvious.
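A minimal sketch of the sharpening rule g = f − k·∇²f, assuming OpenCV for the Laplacian; the coefficient k = 0.5 is illustrative, since the patent only requires k to be chosen reasonably.

```python
import cv2
import numpy as np

def laplacian_sharpen(gray: np.ndarray, k: float = 0.5) -> np.ndarray:
    """Sharpen a grayscale image with g = f - k * Laplacian(f).

    k trades edge enhancement against overshoot: too large a k produces
    ringing at contours, too small a k gives little visible sharpening.
    """
    f = gray.astype(np.float64)
    lap = cv2.Laplacian(f, cv2.CV_64F)          # del^2 f
    g = f - k * lap                             # eq. (2)
    return np.clip(g, 0, 255).astype(np.uint8)
```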
(4) The K-means algorithm and the FCM algorithm are used to determine that a single threshold is most suitable for extracting the tail lights of the vehicle ahead in the host lane.
The concrete steps of the K-means algorithm are as follows:
(4.1) K points are picked at random from the data as the centres of the initial clusters, each representing one cluster;
(4.2) the distance from every point in the data to each of the K centres is computed, and each point is assigned to the cluster whose centre is nearest;
(4.3) as the word "means" in K-means implies, each cluster centre is adjusted by moving it to the geometric centre (i.e. the mean) of its cluster;
(4.4) step (4.2) is repeated until the cluster centres no longer change, at which point the algorithm has converged.
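For illustration, a plain NumPy sketch of steps (4.1)–(4.4) applied to the gray values of an image; K = 2, the random seed and the convergence tolerance are assumptions, not values taken from the patent.

```python
import numpy as np

def kmeans_gray(gray: np.ndarray, k: int = 2, max_iter: int = 100, tol: float = 1e-3):
    """Plain K-means on the 1-D gray values of an image (steps 4.1-4.4).

    Empty clusters are not handled; with k = 2 on a night road image this
    rarely matters for a sketch.
    """
    data = gray.reshape(-1, 1).astype(np.float64)
    rng = np.random.default_rng(0)
    centers = data[rng.choice(len(data), size=k, replace=False)]    # (4.1) random initial centres
    for _ in range(max_iter):
        dist = np.abs(data - centers.T)                             # (4.2) distance to each centre
        labels = dist.argmin(axis=1)                                #       assign to nearest cluster
        new_centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])
        if np.abs(new_centers - centers).max() < tol:               # (4.4) stop when centres settle
            centers = new_centers
            break
        centers = new_centers                                       # (4.3) move centre to cluster mean
    return centers.ravel(), labels.reshape(gray.shape)
```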
The FCM algorithm introduces a membership function to fuzzify the hard classification of the K-means algorithm, and defines and minimises a clustering loss function. When the algorithm converges, the membership value of each sample with respect to every cluster centre is obtained, which completes the fuzzy clustering. In essence it uses an iterative hill-climbing technique to search for the optimal solution and is therefore a local search algorithm.
Applied to night road images, the K-means algorithm and the FCM algorithm give consistent clustering results: two clusters are best suited to the extraction and pairing of the tail lights in the host lane, so the number of thresholds required is one. The present invention therefore segments the road image with a single threshold.
(5) The Otsu threshold segmentation method is used to divide the image into background and target according to its gray-scale characteristics.
The Otsu method, also called the maximum between-class variance method, is an adaptive thresholding method. It divides the image into background and target according to the gray-scale characteristics of the image. Since variance is a measure of the non-uniformity of the gray distribution, the larger the between-class variance between the two parts, the greater the difference between the two parts making up the image. Two situations reduce this difference: part of the target is wrongly classified as background, or part of the background is wrongly classified as target. Therefore the segmentation that maximises the between-class variance minimises the probability of misclassification.
For an image I(x, y), let T be the segmentation threshold between foreground and background, ω0 the proportion of foreground pixels with mean gray value μ0, and ω1 the proportion of background pixels with mean gray value μ1; the overall mean gray value of the image is μ and the between-class variance is g.
Suppose the background of the image is darker and the image size is M × N; let N1 be the number of pixels whose gray value is greater than T and N0 the number of pixels whose gray value is less than T. Then:
ω0 = N0/(M×N) (3)
ω1 = N1/(M×N) (4)
N0 + N1 = M×N (5)
ω0 + ω1 = 1 (6)
μ = ω0·μ0 + ω1·μ1 (7)
g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)² (8)
Computing directly with (8) is relatively expensive, so (7) is substituted into (8) to obtain the commonly used form:
g = ω0·ω1·(μ0 − μ1)² (9)
T is traversed from the minimum gray value to the maximum gray value; the value of T that maximises g is the optimal segmentation threshold. The foreground and background parts separated by the threshold T make up the whole image.
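A hedged NumPy sketch of the exhaustive search over T that maximises the between-class variance of eq. (9); an equivalent result can normally be obtained with OpenCV's built-in THRESH_OTSU flag.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold T maximising g = w0 * w1 * (mu0 - mu1)^2 (eq. 9),
    searched over all 256 gray levels of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()          # class proportions (eqs. 3-4)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class mean gray values
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        g = w0 * w1 * (mu0 - mu1) ** 2                   # between-class variance (eq. 9)
        if g > best_g:
            best_g, best_t = g, t
    return best_t

# usage: T = otsu_threshold(gray); binary = ((gray > T) * 255).astype(np.uint8)
```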
(6) The acquired image is processed with the morphological opening operator to remove small speckles in the background.
Erosion can be defined in set form as
X ⊖ S = {x | S[x] ⊆ X}
The effect of erosion is to eliminate the boundary points of an object and to remove objects smaller than the structuring element; the larger the chosen structuring element, the larger the objects that are removed. When the structuring element is large enough, erosion can separate two objects connected by a thin link. Erosion can be regarded as shrinking every translated copy S[x] of the structuring element S that is contained in the image X down to the single point x. Conversely, every point x of X can be expanded to S[x]; this process is dilation, written ⊕ and defined as
X ⊕ S = {x | S[x] ∩ X ≠ ∅} (10)
Dilation merges background points adjacent to the object into the object; it may connect two objects that are close to each other and can fill holes left in the image after segmentation.
All operations formed by compounding erosion and dilation with set operations constitute the family of morphological operations. For an image X and a structuring element S, the opening of X by S, written X∘S, is defined as
X∘S = (X ⊖ S) ⊕ S (11)
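A minimal sketch of the opening operation X∘S = (X ⊖ S) ⊕ S used to remove small speckles, assuming OpenCV; the elliptical 3×3 structuring element is an illustrative choice.

```python
import cv2
import numpy as np

def remove_small_speckles(binary: np.ndarray, size: int = 3) -> np.ndarray:
    """Morphological opening (erosion followed by dilation) with an elliptical
    structuring element: isolated noise points smaller than the element are
    removed while larger blobs such as tail-light spots are roughly preserved."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```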
(7) When the tail lights are extracted and paired, an area of interest is chosen using the summarised image characteristics and prior knowledge.
What the present invention is concerned with is the vehicle travelling ahead in the lane of the host car; the information of the whole picture does not need to be considered. An area of interest (Area of Interest, AOI) can therefore be established to avoid the interference of the various other scenery in the road background with the detection of the vehicle ahead and to reduce the amount of computation.
The road on which the vehicle travels in the present invention is a structured road whose pavement is designed to a standard and whose lane lines are parallel to each other. Because of the perspective projection, the lane lines intersect in the two-dimensional image plane: the left and right boundary lines meet at a point (the vanishing point). When the local road plane is parallel to the camera optical axis, the vanishing point lies at the centre of the image plane and the road surface region lies entirely in the lower half of the image. Since the camera is mounted essentially horizontally, the target vehicles all lie below the vanishing point and cannot appear in roughly the upper 2/5 of the image.
The present invention therefore chooses a triangle as the area of interest: the X coordinate of its apex is 1/2 of the picture width and the Y coordinate is 2/5 of the picture height, and the other two vertices are the left and right end points of the bottom edge of the picture. Whether the lane lines lean to the left or to the right while the host car drives straight in a flat lane, a target vehicle directly ahead that requires collision avoidance will normally lie within this triangle.
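A small sketch, under the stated geometry, that builds a binary mask for this triangular AOI and keeps only the image content inside it (OpenCV is an assumed choice).

```python
import cv2
import numpy as np

def triangular_aoi_mask(height: int, width: int) -> np.ndarray:
    """Binary mask of the triangular area of interest: apex at
    (width/2, 2/5 * height), the other two vertices at the bottom corners."""
    apex = (width // 2, int(0.4 * height))
    bottom_left, bottom_right = (0, height - 1), (width - 1, height - 1)
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillPoly(mask, [np.array([apex, bottom_left, bottom_right], dtype=np.int32)], 255)
    return mask

# keep only the content inside the AOI, clearing everything outside it:
# aoi_image = cv2.bitwise_and(binary, binary, mask=triangular_aoi_mask(*binary.shape))
```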
(8) From among the geometric, brightness and colour features of the vehicle ahead, the geometric features are selected, and the tail lights are extracted and paired using the rule on their area and the fact that the two lamps lie roughly on the same horizontal line.
The features of the vehicle ahead in the host lane include geometric features, brightness and colour features.
The geometric features include: the duty cycle Q of a connected region (the area ratio of the light spot to its bounding rectangle), the area A, the length and width, the aspect ratio (LF = height of the connected region / length of the connected region), the perimeter P (the number of pixels adjoining the lamp), the circularity (AP = 4πA/P², AP ∈ (0, 1)), the position (horizontal distance difference and vertical distance difference), the centre coordinates, the horizontal width, the vertical width, the symmetry, the shape change rate, the shape-area change rate (K = DS/AP), the (shape, area) similarity, and the shape factor SF (its formula is given as a figure in the original; A is the area of the connected region after binarisation and D is its diameter; the closer SF is to 0, the closer the shape of the connected region is to a circle), and so on.
As for brightness, its uses can be summarised as: designing a lamp template in which a bright circular region is surrounded by dark points to detect the lamps; extracting the colour features of the tail lights with fuzzy rules based on a hue-saturation-intensity model and then using the brightness of reflections or shadow regions on the automobile as an auxiliary parameter to further confirm the recognition result; and using a low brightness threshold to preliminarily determine suspected tail-light regions, a low red-ratio threshold together with a high brightness-ratio threshold to detect the brighter tail-light spots, and a high red-ratio threshold to detect the darker tail-light spots.
As for the colour features, since the images acquired in the present invention are grayscale images, the colour features of the lamps are not used.
The prior knowledge about automobile lamps is summarised as follows:
(8.1) A lamp region is neither too small nor too large; it is a highlighted region of a certain area, and this feature can be used to remove regions whose area is too large.
(8.2) The shape of a lamp region is approximately circular or elliptical, and this feature can be used to exclude non-circular noise regions introduced by ambient light.
(8.3) The circularity and size of a lamp spot change little during motion.
(8.4) The centre coordinates lie in a trapezoidal area in the middle of the image (the height of the trapezoid is related to the camera pitch angle).
(8.5) The shape and size of the tail lights follow certain rules.
(8.6) For a normally travelling vehicle, the line connecting its two lamps is approximately perpendicular to the lane direction, and the turning angle of a lane-changing vehicle is also very small.
(8.7) The brightness around the tail lights is higher than that of the general background and exceeds a certain empirical value.
(8.8) When the tail-light brightness is low, the whole tail-light image appears red.
(8.9) When the tail-light brightness is high, the edge of the originally red tail-light image is still reddish while the central part may appear whitish.
(8.10) The lamps always appear in pairs whose positions are roughly on the same horizontal line, whose shapes and areas are approximately equal, and whose road-surface distance lies within a certain range.
(8.11) When a lamp can be paired with two or more other lamps, only the pairing with the greatest similarity is kept and the other pairings are discarded; that is, a lamp can belong to at most one pair.
Observation of the night road images acquired in the present invention and of the pre-processed results shows that the area of a tail-light region is not too large and the ordinates of the two lights of a pair are nearly the same. Therefore the geometric features are selected, and of the prior knowledge about tail lights listed above, the rule on the area and the fact that the two lamps of a pair lie roughly on the same horizontal line are used.
(9) The non-AOI region is removed, the connected regions are found and their centre points extracted, the tail lights are extracted and paired, and on this basis the target vehicle is represented by a rectangular frame.
AOI (Area Of Interest): the region of interest.
The acquired original image is partitioned according to the area of interest: only the original image information inside the triangular region is kept, and all image information outside this range is cleared. This greatly simplifies the information in the image and eliminates unnecessary regions, so that the amount of image information in the subsequent pre-processing is reduced substantially, the program runs faster, and the range for extracting the vehicle lamps is delimited.
In order to extract the tail lights accurately, the tail-light regions must first be displayed independently and completely, which is achieved by finding the connected regions.
Finding the connected regions makes the tail-light regions appear completely and clearly, but other regions besides the tail-light regions remain. The present invention first computes the area of every region, then screens out the regions whose area is too large, and marks the centre point of each remaining connected region with a red "+".
In this way each connected region is reduced to a single point, which simplifies the subsequent processing.
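For illustration, a sketch of this step using OpenCV's connected-component labelling; the area limit used to screen out over-large regions is a hypothetical value, not one given in the patent.

```python
import cv2
import numpy as np

def lamp_candidate_centres(binary: np.ndarray, max_area: int = 400):
    """Label the connected regions of the segmented AOI image, drop regions
    whose area is too large to be a tail light, and return the centroid of
    each remaining region so that every candidate lamp becomes one point."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    centres = []
    for i in range(1, num):                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] <= max_area:   # screen out over-large regions
            centres.append((float(centroids[i][0]), float(centroids[i][1])))  # (x, y)
    return centres
```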
The present invention needs to pair the two tail lights of the same automobile to represent the vehicle target. The pairing method is: every point is traversed and its y coordinate examined; if within n rows above or below a point's row (for example 5 rows) another point exists, that point is paired with the first one found, and both points are removed from the list so that they no longer take part in the next search for a match.
The matched points finally obtained are marked with yellow dots, and to display them more clearly the present invention links each matched pair with a blue line segment. The paired tail lights are the basis for locating the vehicle position.
The centre coordinates of the two extracted and paired tail lights are connected and the midpoint of the line is found; a rectangle with an aspect ratio of 1, centred on this midpoint and with side length 1.2 times the horizontal distance between the two tail-light centres, then represents the target vehicle. The ratio is determined jointly by empirical values and the structural characteristics of automobiles.
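A hedged sketch of the pairing rule (points whose rows differ by at most n are paired, each point used at most once) and of the square box that represents the target vehicle; the 5-row tolerance and the 1.2 scale follow the example values in the text.

```python
def pair_tail_lights(centres, row_tolerance: int = 5):
    """Greedy pairing of lamp centre points: two points whose y coordinates
    differ by at most row_tolerance rows form one tail-light pair, and each
    point joins at most one pair."""
    remaining = list(centres)
    pairs = []
    while remaining:
        p = remaining.pop(0)
        match = next((q for q in remaining if abs(q[1] - p[1]) <= row_tolerance), None)
        if match is not None:
            remaining.remove(match)          # both points leave the candidate list
            pairs.append((p, match))
    return pairs

def vehicle_box(pair, scale: float = 1.2):
    """Square box representing the target vehicle: centred on the midpoint of
    the two lamp centres, side = scale * horizontal distance between them."""
    (x1, y1), (x2, y2) = pair
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    side = scale * abs(x1 - x2)
    return cx - side / 2, cy - side / 2, side, side      # x, y, width, height
```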
(10) The distance d_c between the obstacle ahead and the host vehicle is obtained through modelling and a series of transformations.
(10.1) As shown in Fig. 2 and Fig. 3, the single-camera vision system can be converted into a geometric model by the pinhole imaging principle. In Fig. 2, point O is the optical centre of the camera, point G is the intersection of the diagonals of the trapezoidal field of view and also the intersection of the camera optical axis with the road plane, OG is the optical axis of the camera, and point I is the vertical projection of O onto the road plane. The plane ABU represents the road plane, and ABCD is the trapezoidal area of the road plane captured by the camera. In the road-surface coordinate system the direction in which the vehicle advances is the Y axis and the origin is the point G. Fig. 3 shows the points corresponding to A, B, C, D and G in the image plane: a, b, c and d are the four corner points of the image rectangle, and the width and height of the image plane are W and H. The origin of the image coordinate system is the midpoint g of the image rectangle, and the direction of vehicle advance is the y axis. Take a point P on the road plane with coordinates (X_P, Y_P); its corresponding point in the image plane is p, with coordinates (x_p, y_p) in the image coordinate system. The correspondence between road-surface coordinates and image coordinates can be derived geometrically:
Y_P = h·k1·y_p·(1 + k2²) / (1 − k2·k1·y_p)
X_P = ((UG + Y_P)/UG)·k3·k4·x_p
y_p = Y_P / (k1·(h + h·k2² + Y_P·k2))
x_p = (UG / ((UG + Y_P)·k3·k4))·X_P (12)
where
k1 = 2·tan(α0)/H, k2 = tan(γ0), k3 = h/cos(γ0), k4 = 2·tan(β0)/W
UG = h·(tan(γ0) − tan(γ0 − α0))·cos(γ0 − α0) / (cos(γ0 − α0) − cos(γ0)) (13)
Here h is the mounting height of the camera, H the height of the image, W the width of the image, γ0 the pitch angle of the camera, 2β0 the horizontal field-of-view angle of the lens and 2α0 the vertical field-of-view angle of the lens. The first two formulas of (12) are the mapping from the image plane to the road plane, and the last two are the inverse mapping from the road plane to the image plane.
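The following Python sketch evaluates the image-plane to road-plane mapping as reconstructed above; because the grouping of the fractions in (12) and (13) is inferred from the garbled original, it should be read as a best-effort interpretation rather than the definitive formula.

```python
import math

def road_plane_coords(x_p, y_p, h, H, W, gamma0, alpha0, beta0):
    """Map an image-plane point (x_p, y_p) to road-plane coordinates (X_P, Y_P)
    using the first two relations of eq. (12) with the coefficients of eq. (13).
    Angles are in radians; h is the camera mounting height."""
    k1 = 2 * math.tan(alpha0) / H
    k2 = math.tan(gamma0)
    k3 = h / math.cos(gamma0)
    k4 = 2 * math.tan(beta0) / W
    UG = (h * (math.tan(gamma0) - math.tan(gamma0 - alpha0)) * math.cos(gamma0 - alpha0)
          / (math.cos(gamma0 - alpha0) - math.cos(gamma0)))
    Y_P = h * k1 * y_p * (1 + k2 ** 2) / (1 - k2 * k1 * y_p)
    X_P = (UG + Y_P) / UG * k3 * k4 * x_p
    return X_P, Y_P
```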
(10.2) The distance d_c between the obstacle ahead and the host vehicle is calculated geometrically.
Here d1 is the distance between the front end of the host car and the nearest point of the camera's field of view, and d2 is the distance from the nearest point of the field of view to the obstacle, measured from the image.
The distance between the front end of the host car and the nearest point of the field of view is:
d1 = h·tan(90° − γ0 − α0) (14)
The calculation procedure for d2 is:
(10.2.1) The coordinates of the midpoint of the line connecting the two extracted and paired tail-light centres are calculated. A rectangle with an aspect ratio of 1, centred on this midpoint and with side length 1.2 times the horizontal distance between the two tail-light centres, represents the target vehicle. The image-plane coordinates (x1, y1) of the midpoint of its bottom edge are calculated.
(10.2.2) The image-plane coordinates (x2, y2) of the midpoint of the bottom edge of the image plane are calculated.
(10.2.3) Formula (12) is used to transform the image-plane coordinates (x1, y1) and (x2, y2) into the road-plane coordinates (X1, Y1) and (X2, Y2).
(10.2.4) The distance d2 is calculated as:
d2 = ((Y1 − Y2)² + (X1 − X2)²)^0.5 (15)
Substituting formulas (14) and (15) into d_c = d1 + d2 gives the distance d_c between the obstacle ahead and the host vehicle, as shown in (16):
d_c = h·tan(90° − γ0 − α0) + ((Y1 − Y2)² + (X1 − X2)²)^0.5 (16)
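A short sketch that combines (14)–(16): d1 is the blind distance in front of the host car and d2 the road-plane distance between the two mapped points; road_plane_coords refers to the mapping sketch given after eq. (13) above, so the same caveat about the reconstructed formulas applies.

```python
import math

def collision_distance(p1, p2, h, gamma0, alpha0):
    """Distance d_c = d1 + d2 between the host car and the obstacle ahead (eq. 16).

    d1 = h * tan(90 deg - gamma0 - alpha0) is the distance from the car front to
    the nearest visible point of the field of view (eq. 14); d2 is the road-plane
    distance (eq. 15) between p1 = (X1, Y1), the mapped bottom-midpoint of the
    target box, and p2 = (X2, Y2), the mapped midpoint of the image bottom edge.
    """
    X1, Y1 = p1
    X2, Y2 = p2
    d1 = h * math.tan(math.pi / 2 - gamma0 - alpha0)
    d2 = math.hypot(Y1 - Y2, X1 - X2)
    return d1 + d2

# usage (hypothetical values): p1 = road_plane_coords(x1, y1, h, H, W, g0, a0, b0), etc.
```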
The present invention has been described in detail above in conjunction with the drawings and embodiments, but the present invention is not limited to the above embodiments; within the knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the spirit of the present invention. Matters not described in detail in the specification of the present invention can all adopt the prior art.

Claims (7)

1. A nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system, the method comprising the following steps:
(1) the intelligent vehicle-mounted anti-collision early warning system selects the tail lights as the feature to be extracted from among the colour, gray-scale and shape features of the vehicle;
(2) the acquired image is denoised with median filtering;
(3) the edges in the acquired image are sharpened with Laplacian sharpening;
(4) the K-means algorithm and the FCM algorithm are used to determine that a single threshold is most suitable for extracting the tail lights of the vehicle ahead in the host lane;
(5) the Otsu threshold segmentation method is used to divide the image into background and target according to its gray-scale characteristics;
(6) the acquired image is processed with the morphological opening operator to remove small speckles in the background;
(7) a region of interest is selected for the feature extraction and pairing of the tail lights;
(8) from among the geometric, brightness and colour features of the vehicle ahead, the geometric features are selected, and the tail lights are extracted and paired using the rule on their area and the fact that the two lamps lie roughly on the same horizontal line;
(9) the non-AOI region is removed, the connected regions are found and their centre points are extracted, the tail lights are extracted and paired, and finally the target vehicle is represented by a rectangular frame;
(10) the distance between the obstacle ahead and the host vehicle is obtained through modelling and a series of transformations.
2. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 1, characterised in that the concrete steps of step (2) are as follows: a moving window containing an odd number of points is adopted, and the gray value of a specified point, normally the centre point of the window, is replaced by the median of the gray values of the points in the window; the centre pixel is set according to the result of sorting the gray values of the pixels in the neighbourhood; if the number of elements is even, the mean of the two middle gray values after sorting is taken as the median; if the number of elements is odd, the middle value after sorting by size is the median.
3. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 2, characterised in that the concrete steps of step (3) are as follows: let the Laplacian operator be ∇², given by
∇²f = ∂²f/∂x² + ∂²f/∂y²
If the blurring of the image is caused by diffusion, then the sharpened image g is
g = f − k·∇²f.
4. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 3, characterised in that the concrete steps of the K-means algorithm in step (4) are as follows:
(4.1) K points are picked at random from the data as the centres of the initial clusters, each representing one cluster;
(4.2) the distance from every point in the data to each of the K centres is computed, and each point is assigned to the cluster whose centre is nearest;
(4.3) each cluster centre is adjusted by moving it to the geometric centre of its cluster;
(4.4) step (4.2) is repeated until the cluster centres no longer change, at which point the algorithm has converged.
5. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 4, characterised in that the concrete steps of step (5) are as follows:
for an image I(x, y), let T be the segmentation threshold between foreground and background, ω0 the proportion of foreground pixels with mean gray value μ0, and ω1 the proportion of background pixels with mean gray value μ1; the overall mean gray value of the image is μ and the between-class variance is g; suppose the background of the image is darker and the image size is M × N; let N1 be the number of pixels whose gray value is greater than T and N0 the number of pixels whose gray value is less than T; then:
ω0 = N0/(M×N)
ω1 = N1/(M×N)
N0 + N1 = M×N
ω0 + ω1 = 1
μ = ω0·μ0 + ω1·μ1
g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²
from which the between-class variance g = ω0·ω1·(μ0 − μ1)² is obtained;
T is traversed from the minimum gray value to the maximum gray value, and the value of T that maximises g = ω0·(μ0 − μ)² + ω1·(μ1 − μ)² is the optimal segmentation threshold; the foreground and background parts separated by the threshold T make up the whole image.
6. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 5, characterised in that the area of interest in step (7) is a triangular region: the X coordinate of its apex is 1/2 of the picture width and the Y coordinate is 2/5 of the picture height, and the other two vertices are the left and right end points of the bottom edge of the picture.
7. The nighttime image feature extraction method for an intelligent vehicle-mounted anti-collision early warning system according to claim 6, characterised in that the distance d_c between the obstacle ahead and the host vehicle in step (10) is d_c = h·tan(90° − γ0 − α0) + ((Y1 − Y2)² + (X1 − X2)²)^0.5.
CN2011102959071A 2011-09-28 2011-09-28 Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system Pending CN103020948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102959071A CN103020948A (en) 2011-09-28 2011-09-28 Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system


Publications (1)

Publication Number Publication Date
CN103020948A true CN103020948A (en) 2013-04-03

Family

ID=47969515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011102959071A Pending CN103020948A (en) 2011-09-28 2011-09-28 Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system

Country Status (1)

Country Link
CN (1) CN103020948A (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周刊: "《智能车载红外视觉预警系统中的图像特征提取技术研究》", 《万方数据 学位论文》 *
周刊: "《智能车载预警系统中的夜间图像预处理技术研究》", 《现代显示》 *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870809A (en) * 2014-02-27 2014-06-18 奇瑞汽车股份有限公司 Vehicle detection method and device
CN104501806A (en) * 2014-11-24 2015-04-08 李青花 Intelligent positioning navigation system
CN104778454A (en) * 2015-04-13 2015-07-15 杭州电子科技大学 Night vehicle tail lamp extraction method based on descending luminance verification
CN104778454B (en) * 2015-04-13 2018-02-02 杭州电子科技大学 A kind of vehicle at night taillight extracting method based on descending luminance checking
CN105389991A (en) * 2015-12-03 2016-03-09 杭州中威电子股份有限公司 Self-adaptive snapshot method for behavior of running red light
CN105389991B (en) * 2015-12-03 2017-12-15 杭州中威电子股份有限公司 A kind of adaptive Jaywalking snapshot method
CN105528795B (en) * 2016-02-18 2018-06-01 北京航空航天大学 A kind of infrared face dividing method using annular shortest path
CN105528795A (en) * 2016-02-18 2016-04-27 北京航空航天大学 Infrared human face segmentation method utilizing shortest annular path
CN105844595B (en) * 2016-03-14 2018-09-04 天津工业大学 The method for building model recovery night traffic video car light based on atmospheric reflectance-scattering principle
CN105844595A (en) * 2016-03-14 2016-08-10 天津工业大学 Method of constructing model for restoring headlight in nighttime traffic video based on atmosphere reflection-scattering principle
CN106446758A (en) * 2016-05-24 2017-02-22 南京理工大学 Obstacle early-warning device based on image identification technology
CN106407951A (en) * 2016-09-30 2017-02-15 西安理工大学 Monocular vision-based nighttime front vehicle detection method
CN106407951B (en) * 2016-09-30 2019-08-16 西安理工大学 A kind of night front vehicles detection method based on monocular vision
CN106274904B (en) * 2016-11-04 2018-08-17 黄河交通学院 A kind of vehicle frame lightweight cylinder retarder control method and system
CN106274904A (en) * 2016-11-04 2017-01-04 黄河交通学院 A kind of vehicle frame lightweight cylinder retarder control method and system
CN108229249A (en) * 2016-12-14 2018-06-29 贵港市瑞成科技有限公司 A kind of night front vehicles detection method
CN106774006A (en) * 2017-01-22 2017-05-31 成都图灵创想科技有限责任公司 Computerized flat knitting machine striker detection means and method
CN106774006B (en) * 2017-01-22 2023-12-29 成都图灵创想科技有限责任公司 Device and method for detecting firing pin of computerized flat knitting machine
CN107545568A (en) * 2017-08-07 2018-01-05 上海斐讯数据通信技术有限公司 A kind of processing method and system of 3D binary images
CN107545568B (en) * 2017-08-07 2021-08-20 东方财富信息股份有限公司 Processing method and system for 3D binary image
CN108062757B (en) * 2018-01-05 2021-04-30 北京航空航天大学 Method for extracting infrared target by using improved intuitionistic fuzzy clustering algorithm
CN108062757A (en) * 2018-01-05 2018-05-22 北京航空航天大学 It is a kind of to utilize the method for improving Intuitionistic Fuzzy Clustering algorithm extraction infrared target
CN110020575A (en) * 2018-01-10 2019-07-16 富士通株式会社 Vehicle detection apparatus and method, electronic equipment
CN110020575B (en) * 2018-01-10 2022-10-21 富士通株式会社 Vehicle detection device and method and electronic equipment
CN108974018A (en) * 2018-08-31 2018-12-11 辽宁工业大学 To anticollision prior-warning device and its monitoring method before a kind of automobile based on machine vision
CN108974018B (en) * 2018-08-31 2023-06-16 辽宁工业大学 Machine vision-based forward anti-collision early warning and monitoring method for automobile
CN109846459A (en) * 2019-01-18 2019-06-07 长安大学 A kind of fatigue driving state monitoring method
CN111783498B (en) * 2019-04-03 2021-02-19 邱群 Multi-parameter field acquisition method
CN111783498A (en) * 2019-04-03 2020-10-16 泰州阿法光电科技有限公司 Multi-parameter field acquisition method
CN112651269A (en) * 2019-10-12 2021-04-13 常州通宝光电股份有限公司 Method for rapidly detecting vehicles in front in same direction at night
CN112651269B (en) * 2019-10-12 2024-05-24 常州通宝光电股份有限公司 Method for rapidly detecting forward same-direction vehicles at night
CN114841874A (en) * 2022-04-20 2022-08-02 福思(杭州)智能科技有限公司 Image processing method, device, equipment and storage medium
CN114841874B (en) * 2022-04-20 2024-05-31 福思(杭州)智能科技有限公司 Image processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN103020948A (en) Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system
Wu et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement
CN109190523B (en) Vehicle detection tracking early warning method based on vision
CN101950350B (en) Clear path detection using a hierachical approach
CN105206109B (en) A kind of vehicle greasy weather identification early warning system and method based on infrared CCD
US9384401B2 (en) Method for fog detection
CN106845453B (en) Taillight detection and recognition methods based on image
CN104182756B (en) Method for detecting barriers in front of vehicles on basis of monocular vision
CN104778444B (en) The appearance features analysis method of vehicle image under road scene
CN103984950B (en) A kind of moving vehicle brake light status recognition methods for adapting to detection on daytime
CN107066986A (en) A kind of lane line based on monocular vision and preceding object object detecting method
CN112801022A (en) Method for rapidly detecting and updating road boundary of unmanned mine card operation area
Ming et al. Vehicle detection using tail light segmentation
Prakash et al. Robust obstacle detection for advanced driver assistance systems using distortions of inverse perspective mapping of a monocular camera
CN103050008B (en) Method for detecting vehicles in night complex traffic videos
CN107622494B (en) Night vehicle detection and tracking method facing traffic video
CN110659552B (en) Tramcar obstacle detection and alarm method
CN107886034A (en) Driving based reminding method, device and vehicle
CN105678287A (en) Ridge-measure-based lane line detection method
CN106407951A (en) Monocular vision-based nighttime front vehicle detection method
CN112666573B (en) Detection method for retaining wall and barrier behind mine unloading area vehicle
Álvarez et al. Perception advances in outdoor vehicle detection for automatic cruise control
Chen et al. Salient video cube guided nighttime vehicle braking event detection
CN104050479A (en) Method for eliminating automobile shadow and window interference in remote control image and recognizing automobile
Nguyen et al. Fused raised pavement marker detection using 2d-lidar and mono camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130403