WO2018058356A1 - Method and system for vehicle anti-collision pre-warning based on binocular stereo vision - Google Patents


Info

Publication number
WO2018058356A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
map
image
line
disparity map
Prior art date
Application number
PCT/CN2016/100521
Other languages
French (fr)
Chinese (zh)
Inventor
李斌
赵勇
Original Assignee
驭势科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 驭势科技(北京)有限公司 filed Critical 驭势科技(北京)有限公司
Priority to PCT/CN2016/100521 priority Critical patent/WO2018058356A1/en
Priority to CN201680001426.6A priority patent/CN108243623B/en
Publication of WO2018058356A1 publication Critical patent/WO2018058356A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the present invention relates generally to automotive autonomous driving techniques, and more particularly to automotive anti-collision techniques based on binocular stereo vision.
  • Accurate, real-time anti-collision warning is of great practical significance, especially for the safety control of assisted driving and the automatic control of autonomous driving.
  • In assisted driving, anti-collision warning can reduce accidents as far as possible, avoiding personal injury and property damage; in autonomous driving, the more accurate the anti-collision warning, the higher the safety.
  • the calibration is first performed.
  • the area below a certain threshold is judged as the ground.
  • the laser radar required for this method is very expensive and difficult to popularize.
  • Millimeter-wave radar accuracy is far lower than that of laser radar;
  • The second approach uses a monocular color camera to detect obstacles ahead via machine learning and computer vision.
  • This method relies heavily on training samples and hand-designed features, while the driving areas encountered vary widely; objects absent from the training samples will not be detected, so scalability and versatility are poor.
  • Moreover, a monocular camera cannot accurately obtain depth information, so the results obtained often fail to match the real scene, and the method's real-time performance is also difficult to guarantee.
  • Patent document 1 (CN101135558B) discloses a machine-vision-based automobile anti-collision warning technology, in which machine vision is used to capture the license plate features of the vehicle ahead together with lane line information; the distance to the vehicle ahead is calculated from the pixel size of the license plate's projected image, and the driving state of the preceding vehicle is derived from vehicle speed and steering state information. Whether the vehicle is traveling within a safe lane range is judged from the relative distance between the vehicle and the lane line boundary.
  • Patent Document 1 uses machine learning methods for detection, which rely heavily on trained samples and hand-designed features.
  • The scene regions encountered while driving vary widely and cannot be detected in the absence of training samples, so scalability and versatility are poor; on the other hand, monocular cameras cannot accurately obtain depth or speed information, and the results obtained often fail to match the real scene; finally, the method's real-time performance is difficult to guarantee.
  • Patent Document 2 (CN102685516A) discloses an active safety assisted driving method based on stereo vision technology.
  • The active safety assisted driving system comprehensively applies opto-mechatronic information technology and consists of a stereo vision subsystem, a fast image processing subsystem and a safety assisted driving subsystem, including two high-resolution CCD cameras, an ambient light level sensor, a dual-channel video capture card, a synchronization controller, data transmission circuits, power supply circuits, a fast image processing algorithm library, a voice reminder module, a screen display module and an active safe driving control module.
  • Patent Document 2 likewise detects using a machine-learning-based recognition method, which relies heavily on trained samples and hand-designed features.
  • The scene regions encountered while driving vary widely and cannot be detected in the absence of training samples, so scalability and versatility are poor, although the binocular setup does achieve more accurate results in distance calculation.
  • What is needed is a machine-vision-based automobile anti-collision warning technology that is more versatile, performs in real time, and depends less on training data and hand-designed features.
  • the present invention has been made in view of the above circumstances.
  • An automobile anti-collision warning method based on binocular stereo vision may include: capturing, by a binocular camera mounted on the automobile body, left and right grayscale images of the scene ahead of the car in its direction of travel, and calculating a disparity map; converting the disparity map into a V-disparity map; binarizing the V-disparity map; and using the RANSAC method to fit piecewise straight lines to the points of the binarized V-disparity map;
  • Obtaining the occupied map from the plan view may include: first extracting the points higher than a first threshold height above the ground according to the world coordinates of each point, and converting those points into the ground coordinate system; points above the first threshold height and below a second threshold height are marked in the occupied map; the value of each pixel in the occupied map is the sum of the heights of its corresponding points, thereby obtaining the occupied map.
  • Obtaining the position of each obstacle from the occupied map by the connected-domain label detection algorithm may include: converting the occupied map into a color image according to the magnitude of each pixel value, such that larger pixel values are shown redder and smaller pixel values bluer, and segmenting the different objects by the connected-domain label detection algorithm, thereby obtaining the position of each obstacle.
  • Binarizing the V-disparity map may include: determining the maximum value of each row of pixel values, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
  • Using the RANSAC method to fit one segment of straight line may include repeatedly performing the following sequence of operations until a predetermined end criterion is reached: selecting a random subset of the maximum points in the V-disparity map and performing straight-line fitting to obtain a line model; testing all the other data with the obtained line model, counting a point as an inlier if it fits the estimated model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model from all the inliers, and estimating the error rate of the inliers against the model; if the model's error rate is lower than that of the current best model, replacing the current best model with it; the best model obtained at the end is used as that segment's straight line.
  • Using the RANSAC method to fit multiple segments of straight line may include: first extracting the first line according to the method above; after the extraction is complete, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points according to the method of claim 5; and so on, until the number of remaining points falls below a predetermined threshold.
  • For smoothing over multiple frames of images, the line model parameters of the earliest frame image are subtracted from the accumulated parameter sum, the line model parameters of the current frame image are added, and the result is then averaged to give the line model parameters for this frame.
  • Obtaining the travelable area in the original grayscale image from the extracted line includes: for each row of the V-disparity map, taking the disparity value d of the point on the extracted line; in the corresponding row of the disparity map, comparing each pixel's disparity value against d; when the difference is below a certain threshold, the corresponding position in the original image is judged a safe travelable area.
  • An automobile anti-collision warning system based on binocular stereo vision may include: a binocular camera that continuously captures left and right grayscale images of the scene ahead of the car in its direction of travel; and a computing device including a memory, a processor, a communication interface and a bus, the memory, communication interface and processor all being connected to the bus, wherein the memory stores computer-executable instructions and the computing device can obtain, via the communication interface, the left and right grayscale images captured by the binocular camera; when the processor executes the computer-executable instructions, the following method is performed: calculating a disparity map from the left and right grayscale images; converting the disparity map into a V-disparity map; binarizing the V-disparity map; using the RANSAC method to fit piecewise straight lines to the points of the binarized V-disparity map; smoothing the lines over multiple frames of images; obtaining the travelable region in the original grayscale image from the extracted lines; and calculating, from the original image and the disparity map, the three-dimensional coordinates of the ground points in the real-world coordinate system.
  • Assuming the ground is a plane model, RANSAC is used to fit the plane and obtain a ground model; the entire scene in the original grayscale image is converted from camera coordinates to world coordinates, a plan view is generated at the same time, and the occupied map is obtained from the plan view;
  • The connected-domain label detection algorithm segments out the position of each obstacle, which is converted into the original image and marked there, and the distance from the obstacle to the vehicle is calculated through the disparity map; when the distance to the vehicle ahead is less than a certain threshold, an alarm is raised or the decision module is brought in to participate in decision-making.
  • Obtaining the occupied map from the plan view may include: first extracting the points higher than a first threshold height above the ground according to the world coordinates of each point, and converting those points into the ground coordinate system; points above the first threshold height and below a second threshold height are marked in the occupied map; the value of each pixel in the occupied map is the sum of the heights of its corresponding points, thereby obtaining the occupied map.
  • Obtaining the position of each obstacle from the occupied map by the connected-domain label detection algorithm may include: converting the occupied map into a color image according to the magnitude of each pixel value, such that larger pixel values are shown redder and smaller pixel values bluer, and segmenting the different objects by the connected-domain label detection algorithm, thereby obtaining the position of each obstacle.
  • Binarizing the V-disparity map comprises: obtaining the maximum value of each row of pixel values, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the remaining pixels' gray values to zero.
  • Using the RANSAC method to fit one segment of straight line may include repeatedly performing the following sequence of operations until a predetermined exit criterion is reached: selecting a random subset of the maximum points in the V-disparity map and performing straight-line fitting to obtain a line model; testing all the other data with the obtained line model, counting a point as an inlier if it fits the estimated model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model from all the inliers, and estimating the error rate of the inliers against the model; if the model's error rate is lower than that of the current best model, replacing the current best model with it; the best model obtained at the end is used as the segment's straight line.
  • Using the RANSAC method to fit multiple segments of straight line may include: extracting the first line; after the extraction is complete, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points; and so on, until the number of remaining points falls below a predetermined threshold.
  • For smoothing over multiple frames of images, the line model parameters of the earliest frame image are subtracted from the accumulated parameter sum, the line model parameters of the current frame image are added, and the result is then averaged to give the line model parameters for this frame.
  • Obtaining the travelable area in the original grayscale image from the extracted line may include: for each row of the V-disparity map, taking the disparity value d of the point on the extracted line; in the corresponding row of the disparity map, comparing each pixel's disparity value against d; when the difference is below a certain threshold, the corresponding position in the original image is judged a safe drivable region.
  • An automobile anti-collision warning system may include: a binocular camera configured to capture left and right grayscale images of the scene ahead of the car in its direction of travel; a disparity map calculation component that calculates a disparity map from the two grayscale images; a V-disparity map conversion module that converts the disparity map into a V-disparity map; a binarization module that binarizes the V-disparity map; a RANSAC line fitting module that uses the RANSAC method to fit piecewise straight lines to the points of the binarized V-disparity map; a multi-frame image filtering module that smooths and filters the lines over multiple frames of images; an original-image travelable-area determining module that obtains the travelable area in the original grayscale image from the extracted lines; and a ground model fitting module that calculates, from the original image and the disparity map, the three-dimensional coordinates of the ground points in the real-world coordinate system and, assuming the ground is a plane model, uses RANSAC to fit the plane.
  • A stereo-vision-based automobile anti-collision warning method and system obtains a ground model from the binocular visual images, derives an occupied map of the scene through the ground model, and obtains anti-collision information from the occupied map. It can adapt to a wide variety of roads and road conditions; the algorithm places low accuracy requirements on the disparity map, reduces front-end computation, has strong anti-interference ability, and does not depend on training data or hand-designed features.
  • FIG. 1 shows a schematic diagram of an in-vehicle anti-collision warning system 100 in accordance with an embodiment of the present invention
  • FIG. 2 shows a general flow chart of a vehicle collision avoidance warning method based on binocular stereo vision according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram showing a situation in which a least square method incorrectly extracts a straight line in the presence of a large noise
  • FIG. 4 shows a flow diagram of a method 240 of fitting a straight line from points of a V disparity map, in accordance with an embodiment of the present invention
  • FIG. 5 is a block diagram showing the structure of an automobile collision avoidance warning system 300 according to another embodiment of the present invention.
  • The disparity map is referenced to one image of an image pair; its size is that of the reference image, and its element values are disparity values.
  • the disparity map contains the distance information of the scene.
  • The disparity map can be calculated from the left and right images captured by the binocular camera.
  • The coordinates of a point in an ordinary two-dimensional disparity map are denoted (u, v), where u is the abscissa and v the ordinate; the pixel value at (u, v), denoted d(u, v), represents the disparity at that point. Since the disparity map contains the distance information of the scene, computing disparity maps from stereo image pairs by image matching has long been one of the most active areas of binocular vision research.
  • V-disparity map: the V-disparity map is converted from the disparity map; the gray value of any point (d, v) in the V-disparity map is the number of points whose disparity value equals d in the disparity-map row with ordinate v.
  • The V-disparity map can be viewed as a side view of the disparity map: by accumulating the counts of identical disparity values within each row, a plane in the original image is projected into a straight line.
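As a concrete illustration of this conversion, a per-row disparity histogram suffices. The sketch below is ours, not the patent's (function name and array layout are assumptions); entry (v, d) of the result counts the pixels of row v whose disparity equals d:

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Build a V-disparity map: entry (v, d) counts how many pixels
    in row v of the integer disparity map have disparity value d."""
    h = disp.shape[0]
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        # Histogram of disparity values occurring in row v.
        vmap[v] = np.bincount(disp[v], minlength=max_disp)[:max_disp]
    return vmap
```

A row of constant disparity (a fronto-parallel surface) collapses to a single bright cell; the ground, whose disparity decreases steadily with image row, traces out the sloped line the method fits.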
  • RANSAC: abbreviation of RANdom SAmple Consensus, an algorithm that estimates the parameters of a valid mathematical model from a set of sample data containing abnormal data.
  • Occupied map: in early vision systems, environmental knowledge is represented by a grid in which the 2D projection of each object in the environment onto the ground is preserved; such a representation is called an occupied map.
  • Binocular stereo vision is an important form of machine vision: based on the parallax principle, it acquires two images of the measured object from different positions and obtains the object's three-dimensional geometric information from the positional deviation between corresponding points. By combining the images obtained by the two cameras and observing the differences between them, a clear sense of depth can be obtained, correspondences between features established, and the same physical point in space mapped to its corresponding points in the different images; this difference is called the disparity.
  • The binocular stereo vision measurement method offers high efficiency, suitable accuracy, a simple system structure and low cost. For measuring moving objects (including animals and human bodies), the stereoscopic method is particularly effective because image acquisition is instantaneous. Binocular stereo vision is one of the key technologies of computer vision, and obtaining the distance information of a spatial 3D scene is among the most basic tasks of computer vision research.
  • FIG. 1 shows a schematic diagram of a system 100 for detecting a travelable area of a vehicle, including a binocular camera 110 and a computing device 120, in accordance with an embodiment of the present invention.
  • the binocular camera 110 continuously captures two left and right grayscale images in front of the car in the direction of travel of the car.
  • the binocular camera 110 is mounted, for example, in front of the top of the vehicle such that its imaging range is concentrated on the road surface at the front of the vehicle.
  • Computing device 120 includes a memory 121, a processor 122, a communication interface 123, and a bus 124.
  • the memory 121, the communication interface 123 and the processor 122 are both connected to the bus 124.
  • The memory stores computer-executable instructions, the computing device can obtain, via the communication interface, the left and right grayscale images captured by the binocular camera, and a method of automobile collision-avoidance warning is performed when the processor executes the computer-executable instructions.
  • An alarm 125 may also be included in the computing device 120 for providing an alert signal or sending a notification when a danger or emergency is detected.
  • The structure shown in Fig. 1 is merely an example; components can be added, removed, or replaced as needed.
  • In the technology for detecting the travelable area of the automobile in real time, left and right images are acquired by the binocular camera sensor, a disparity map is computed from them, a V-disparity map is constructed from the disparity map, straight lines are then fitted to the points of the V-disparity map using RANSAC, the lines are smoothed and filtered over multiple frames of images, and finally the disparities along those lines yield the safe drivable area in the original image.
  • FIG. 2 shows a general flow diagram of a method 200 of detecting a travelable area of a vehicle in real time in accordance with an embodiment of the present invention.
  • In step S210, left and right grayscale images of the scene ahead of the vehicle in its direction of travel are captured by a binocular camera mounted on the vehicle body, and a disparity map is calculated.
  • Specifically, the correspondences between the pair of images are first found, and the disparity map of the current scene is then obtained according to the triangulation principle.
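The correspondence step can be illustrated with a deliberately naive sum-of-absolute-differences (SAD) block matcher. This is our own toy sketch, not the patent's method; production stereo pipelines use far faster and more robust correlation (e.g. semi-global matching):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=5):
    """Naive SAD block matching: for each left-image pixel, find the
    horizontal shift d at which a small block best matches the right
    image (convention: a feature at left column x appears at right
    column x - d).  Illustrative only; O(h*w*max_disp) Python loops."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # lowest-cost shift wins
    return disp
```

Shifting a textured image horizontally by a known amount and recovering that shift is a quick sanity check for any such matcher.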
  • In step S220, a V-disparity map is obtained by converting the disparity map.
  • In the disparity map, the relative distance of an object from the lens is represented by shades of gray; according to the depth information contained in the disparity map, the disparity of the ground changes continuously and approximates piecewise straight lines.
  • Let M_d denote the pixel value of a point on the disparity map and M_vd the pixel value of the corresponding point on the V-disparity map.
  • M_vd at (d, v) is set to P_num, the number of pixels in row v of the disparity map whose disparity equals d; P_num becomes the gray value of the corresponding pixel, so that a grayscale V-disparity map is obtained.
  • In step S230, the V-disparity map is binarized.
  • The principle of binarization is: first determine the maximum value of each row, set the gray value of only the pixel holding that maximum in each row to 255, and set the gray values of the remaining pixels to 0.
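This row-wise rule (keep only each row's maximum) is a one-liner with NumPy; the sketch below is ours (on ties, `np.argmax` keeps the first maximum):

```python
import numpy as np

def binarize_v_disparity(vmap):
    """Keep only the per-row maximum of the V-disparity map: set that
    pixel to 255 and every other pixel in the row to 0."""
    out = np.zeros_like(vmap, dtype=np.uint8)
    idx = np.argmax(vmap, axis=1)                   # column of each row's max
    out[np.arange(vmap.shape[0]), idx] = 255        # one 255 per row
    return out
```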
  • In step S240, the RANSAC method is used to fit piecewise straight lines to the points of the binarized V-disparity map.
  • A disadvantage of the Hough transform is that detection is too slow for real-time control and its accuracy is not high enough: expected information may go undetected while false detections occur, and a large amount of redundant data is generated.
  • The least squares method finds the parameters a, b by setting the partial derivatives of the mean squared error with respect to a and b to zero.
  • In many contexts, the least squares method is synonymous with linear regression.
  • However, the least squares method is only suitable when errors are small. Imagine needing to extract a model from a heavily contaminated data set (for example, one in which only 20% of the data fits the model): least squares is no longer adequate. In Figure 3, for instance, a straight line (the pattern) is easily seen by the naked eye, but the least squares fit is wrong.
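This failure mode is easy to reproduce. In the toy example below (our own illustration, not data from the patent), points lie on y = 2x except for three gross outliers, and the ordinary least-squares slope is dragged far from the true value:

```python
import numpy as np

# Points on y = 2x, then corrupt 30% of them with gross outliers.
x = np.arange(10, dtype=float)
y_clean = 2.0 * x
y_noisy = y_clean.copy()
y_noisy[:3] = 100.0                         # three gross outliers

slope_clean = np.polyfit(x, y_clean, 1)[0]  # recovers the true slope 2
slope_noisy = np.polyfit(x, y_noisy, 1)[0]  # dragged far away (even negative here)
```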
  • The present invention detects the road surface by extracting straight lines from the V-disparity map, and the disparity map contains substantial noise; if the least squares method were used to extract the lines, a wrong fit would very likely result.
  • The RANSAC algorithm can iteratively estimate the parameters of a mathematical model from a set of observed data containing outliers, making it well suited to estimating model parameters from noisy observations.
  • Data obtained in practical applications often contains noise, which interferes with the construction of the model.
  • Such interfering noise points are called outliers, while the points that play an active role in model construction are called inliers.
  • What RANSAC does is randomly select some points, use them to obtain a model (for line fitting, the so-called model is just the line's slope and intercept), and then test the remaining points against this model: if a tested point lies within the error tolerance, it is judged an inlier, otherwise an outlier. If the number of inliers reaches a certain threshold, the selected data point set is deemed acceptable; otherwise all the steps from the random selection onward are repeated, until an acceptable point set is found, at which time the resulting model can be regarded as the optimal model for the data.
  • FIG. 4 shows a flow diagram of a method 240 of fitting a straight line from a point of a V disparity map, in accordance with an embodiment of the present invention. This method can be used for step S240 in Fig. 2.
  • In step S241, a random subset of the points in the binarized V-disparity map is selected for straight-line fitting, giving a line model.
  • In step S242, the obtained line model is used to test all the other data: a point that fits the estimated model is counted as an inlier, and the number of inliers is tallied.
  • In step S243, it is judged whether the number of inliers exceeds the threshold; if so, the process proceeds to step S245, otherwise to step S244.
  • In step S244, the estimated model is judged unreasonable and discarded, and the process proceeds to step S249.
  • In step S245, the estimated model is judged reasonable; the model is re-estimated from all the inliers, the error rate of the inliers against the model is estimated, and the process proceeds to step S246.
  • In step S246, it is judged whether the error rate of the currently estimated model is smaller than that of the best model so far; if so, the process proceeds to step S247, otherwise to step S248.
  • In step S247, the current model replaces the best model: since, per the judgment of step S246, its error rate is lower than that of the best model so far, it performs better and becomes the new best model. The process then proceeds to step S249.
  • In step S248, the estimated model is discarded, and the process proceeds to step S249.
  • In step S249, it is determined whether the termination condition has been reached; if so, the process ends, otherwise it returns to step S241 and repeats.
  • The termination condition here may be, for example, that the number of iterations reaches a threshold, or that the error rate falls below a predetermined threshold.
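Steps S241 through S249 can be sketched as follows; the function name, parameter names and default values are our illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def ransac_line(points, iters=200, inlier_tol=1.0, min_inliers=10, seed=0):
    """RANSAC line fit over (x, y) points, following steps S241-S249:
    sample a random pair, fit a line, count inliers, re-fit on all
    inliers, and keep the candidate with the lowest error."""
    rng = np.random.default_rng(seed)
    best_model, best_err = None, np.inf
    xs, ys = points[:, 0], points[:, 1]
    for _ in range(iters):                         # S249: fixed iteration budget
        i0, i1 = rng.choice(len(points), size=2, replace=False)  # S241
        x0, y0 = points[i0]
        x1, y1 = points[i1]
        if x0 == x1:
            continue                               # skip degenerate vertical sample
        a = (y1 - y0) / (x1 - x0)                  # slope
        b = y0 - a * x0                            # intercept
        resid = np.abs(ys - (a * xs + b))          # S242: test all points
        inliers = resid < inlier_tol
        if inliers.sum() < min_inliers:            # S243/S244: reject model
            continue
        a, b = np.polyfit(xs[inliers], ys[inliers], 1)   # S245: re-fit on inliers
        err = np.abs(ys[inliers] - (a * xs[inliers] + b)).mean()
        if err < best_err:                         # S246/S247: keep best model
            best_model, best_err = (a, b), err
    return best_model                              # (slope, intercept) or None
```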
  • The method of extracting a straight line from the V-disparity map using RANSAC has been described above with reference to FIG. 4. Since the ground is not a single plane, it appears in the V-disparity map as several consecutive straight-line segments. These piecewise lines may be extracted, for example, as follows: first extract the first line by the method described in connection with FIG. 4; after that extraction is complete, remove the points belonging to the first line from the V-disparity map; then extract the second line from the remaining points by the same method; and repeat until the number of remaining points falls below a predetermined threshold.
  • After step S240, the process proceeds to step S250.
  • In step S250, the lines are smoothed according to multiple frames of images.
  • Kalman filtering is performed on the fitted lines.
  • To meet real-time requirements, the road surface detection technology uses a method of smoothing the lines over multiple frames of images. Since the slope of the road on which the automobile travels does not change greatly, and changes only slowly and uniformly, the lines fitted according to embodiments of the present invention also vary slowly. On the other hand, the disparity map obtained from the binocular camera contains a lot of noise, so the fitted lines exhibit unnecessary jitter. To reduce this jitter, and given that the fitted lines change slowly and uniformly, embodiments of the present invention apply smoothing filtering over multiple frames of images to obtain a smooth, stable line model.
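The sliding-window averaging described here (subtract the oldest frame's line parameters from a running sum, add the current frame's, then average) might be sketched as below; the class name and default window size are our own choices:

```python
from collections import deque

class LineSmoother:
    """Sliding-window average of (slope, intercept) line parameters:
    drop the oldest frame's parameters from the running sum, add the
    current frame's, and return the window mean as this frame's line."""

    def __init__(self, window=5):
        self.window = window
        self.buf = deque()
        self.sum_a = 0.0
        self.sum_b = 0.0

    def update(self, a, b):
        self.buf.append((a, b))
        self.sum_a += a
        self.sum_b += b
        if len(self.buf) > self.window:
            old_a, old_b = self.buf.popleft()  # earliest frame leaves the window
            self.sum_a -= old_a
            self.sum_b -= old_b
        n = len(self.buf)
        return self.sum_a / n, self.sum_b / n
```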
  • In step S260, the travelable area in the original grayscale image is obtained from the extracted lines.
  • The travelable area in the original grayscale image may be obtained from the lines extracted in the V-disparity map as follows: for each row of the V-disparity map, take the disparity value d of the point on the extracted line; in the corresponding row of the disparity map, compare each pixel's disparity value against d; when the difference is below a certain threshold, the corresponding position in the original image is judged a safe travelable area.
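A sketch of this per-row comparison, assuming the fitted V-disparity line is parameterized as d = a·v + b (the parameterization, names and tolerance are our assumptions):

```python
import numpy as np

def drivable_mask(disp, a, b, tol=2.0):
    """For each image row v, the fitted V-disparity line predicts the
    ground disparity d = a*v + b; pixels whose disparity is within
    tol of that prediction are marked as safe drivable area."""
    v = np.arange(disp.shape[0])[:, None]   # row indices as a column vector
    ground_d = a * v + b                    # predicted ground disparity per row
    return np.abs(disp - ground_d) < tol    # boolean drivable mask
```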
  • Information on safe travelable areas in grayscale maps provides critical decision-making information for assisted driving, autonomous driving and driverless driving to prevent collisions and ensure safety.
  • In step S270, the three-dimensional coordinates of the points belonging to the ground in the real-world coordinate system are calculated from the original image and the disparity map; the ground is assumed to be a plane model, and RANSAC is used to fit the plane and obtain the ground model.
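RANSAC plane fitting for the ground model might look like the following sketch; the three-point sample, tolerance and iteration count are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ransac_plane(pts, iters=100, tol=0.05, seed=0):
    """Fit a plane (unit normal n, offset d with n.x = d) to 3D points
    with RANSAC: sample 3 points, take the normal via a cross product,
    and keep the plane with the most points within tol of it."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((pts - p0) @ n)       # point-to-plane distances
        count = (dist < tol).sum()
        if count > best_count:              # consensus: most supporting points
            best, best_count = (n, p0 @ n), count
    return best                             # (unit normal, offset d)
```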
  • In step S280, the entire scene in the original grayscale image is converted from camera coordinates to world coordinates, a plan view is generated at the same time, and the occupied map is obtained from the plan view.
  • The occupancy map is computed as follows: first, according to the world coordinates of each point, the points above a first threshold height over the ground are extracted and converted into the ground coordinate system; the points above the first threshold height and below a second threshold height are marked in the occupancy map; the value of each pixel in the occupancy map is the accumulated sum of the heights of its corresponding points, thereby yielding the occupancy map.
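A minimal sketch of this height-accumulation step, assuming illustrative height thresholds, cell size and grid extent (the embodiment itself does not fix these values):

```python
def build_occupancy_map(points, h_min=0.2, h_max=2.5,
                        cell=0.5, width=4, depth=4):
    """Accumulate, per ground-plane cell, the heights of points lying
    between h_min and h_max above the ground, as in step S280."""
    grid = [[0.0] * width for _ in range(depth)]
    for x, y, z in points:        # (lateral, forward, height) in metres
        if not (h_min < z < h_max):
            continue              # ground noise or overhead structure
        col = int(x / cell)
        row = int(y / cell)
        if 0 <= row < depth and 0 <= col < width:
            grid[row][col] += z   # pixel value = accumulated height
    return grid

pts = [(0.1, 0.1, 1.0), (0.2, 0.1, 1.5),   # obstacle in cell (0, 0)
       (1.2, 1.2, 0.05),                   # too low: ground noise
       (1.2, 1.2, 3.0)]                    # too high: e.g. a bridge
occ = build_occupancy_map(pts)
print(occ[0][0])  # 2.5 -- two stacked obstacle points
```

Accumulating heights (rather than simple point counts) makes tall obstacles stand out with large cell values, which the later color-coding and segmentation steps exploit.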
  • In step S290, the position of each obstacle is segmented from the occupancy map by a connected-component labeling algorithm and converted into the original image for marking, and the distance from each obstacle to the ego vehicle is calculated from the disparity map.
  • The occupancy map is converted to a color-labeled image according to the magnitude of each pixel value, such that larger values trend toward red and smaller values toward blue.
  • The different objects are then segmented by the connected-component labeling algorithm.
  • For the connected-component labeling algorithm, see Di Stefano, Luigi, and Andrea Bulgarelli, "A simple and efficient connected components labeling algorithm," Image Analysis and Processing, 1999. Proceedings. International Conference on. IEEE, 1999.
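For illustration only, a simple BFS flood-fill labeling of 4-connected regions is sketched below; this is not the algorithm of the cited paper (which is a faster two-scan equivalent), but it produces the same component labels on a binarized occupancy map.

```python
from collections import deque

def label_connected_components(binary):
    """Assign a label to each 4-connected region of nonzero cells
    (simple BFS flood fill; returns the label grid and the count)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols and
                                binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

occ_grid = [[1, 1, 0, 0],
            [0, 1, 0, 1],
            [0, 0, 0, 1]]
labels, n = label_connected_components(occ_grid)
print(n)  # 2 separate obstacles
```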
  • The specific position of each obstacle is thus obtained and converted into the original image, and the disparity map is used to calculate the distance from each obstacle to the ego vehicle; the relevant distance follows from the relationship between disparity and depth.
  • In step S2901, when the distance to the vehicle ahead is less than a certain threshold, an alarm is issued or the information is passed to a decision module to participate in decision-making.
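The disparity-to-depth relationship referred to here is the standard stereo relation Z = f · B / d (focal length times baseline, divided by disparity); the focal length and baseline below are assumed example values standing in for the actual binocular camera calibration.

```python
def disparity_to_depth(d, focal_px=700.0, baseline_m=0.12):
    """Standard stereo relation Z = f * B / d: depth is inversely
    proportional to disparity. focal_px and baseline_m are
    illustrative; real values come from camera calibration."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d

# An obstacle whose pixels have median disparity 8.4 px:
print(round(disparity_to_depth(8.4), 2))  # 10.0 metres
```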
  • FIG. 5 is a block diagram showing the structure of an automobile anti-collision pre-warning system 300, which can detect the travelable area of the vehicle in real time, according to another embodiment of the present invention.
  • The system 300 is mounted in the vehicle to detect the travelable area of the car in real time, providing critical support for assisted driving, automatic driving and driverless driving.
  • The system 300 may include: a binocular camera 310, a disparity map calculation unit 320, a V-disparity map conversion unit 330, a binarization unit 340, a RANSAC line fitting unit 350, a multi-frame image filter unit 360, an original-image travelable area determining unit 370, a ground model fitting module 380, an occupancy map computation module 390, an obstacle segmentation and distance calculation module 391, and an alarm module 392.
  • The binocular camera 310 is configured to capture left and right grayscale images of the scene ahead of the car along its direction of travel.
  • The disparity map calculation unit 320 computes a disparity map from the left and right grayscale images.
  • The V-disparity map conversion unit 330 converts the disparity map into a V-disparity map.
  • The binarization unit 340 binarizes the V-disparity map.
  • The RANSAC line fitting unit 350 uses the RANSAC method to fit segmented straight lines to the points of the binarized V-disparity map.
  • The multi-frame image filter unit 360 smooths the fitted line over multiple frames of images.
  • The original-image travelable area determining unit 370 obtains the travelable area in the original grayscale image from the extracted line.
  • The ground model fitting module 380 computes, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assumes the ground is planar, and fits the plane with RANSAC to obtain the ground model.
  • The occupancy map computation module 390 converts the entire scene in the original grayscale image from camera coordinates to world coordinates, generates a plan view, and computes the occupancy map from the plan view.
  • The obstacle segmentation and distance calculation module 391 segments the position of each obstacle from the occupancy map with a connected-component labeling algorithm, converts it into the original image for marking, and calculates the distance from each obstacle to the ego vehicle from the disparity map.
  • The alarm module 392 issues an alarm when the distance to a vehicle ahead is less than a certain threshold, or passes the information to a decision module to participate in decision-making.
  • For the operation of the above units and modules, including the obstacle segmentation and distance calculation module 391 and the alarm module 392, reference may be made to the description of the corresponding steps in FIG. 2; details are not repeated here.
  • The term "binocular camera" herein should be understood broadly: any camera that can obtain left and right images, or any device with such a camera function, may be regarded as a binocular camera in this document.
  • Likewise, the disparity map calculation unit 320, the V-disparity map conversion unit 330, the binarization unit 340, the RANSAC line fitting unit 350, the multi-frame image filter unit 360, the original-image travelable area determining unit 370, the ground model fitting module 380, the occupancy map computation module 390, the obstacle segmentation and distance calculation module 391, and the alarm module 392 should be understood broadly.
  • These components can be implemented in software, firmware, hardware, or a combination thereof; combinations of the components with one another, or further splits, and the like are intended to fall within the scope of the present disclosure.
  • The real-time method and system for detecting the travelable area of a vehicle can adapt to a wide variety of road surfaces and road conditions, have low precision requirements for the disparity map, reduce the front-end computation load, have strong anti-interference ability, and improve real-time performance, which is critical for the safe automatic driving of cars.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and system for vehicle anti-collision pre-warning based on binocular stereo vision, comprising: acquiring left and right images by means of a binocular camera mounted on a vehicle body, and obtaining a disparity image based on the left and right images (S210); converting the disparity image to obtain a V-disparity image (S220); binarizing the V-disparity image (S230); fitting a segmented straight line to the points of the binarized V-disparity image using a RANSAC method (S240); smoothing and filtering the straight line according to multiple frames of images (S250); obtaining an accessible region in an original grayscale image by means of the extracted straight line (S260); according to the original image and the disparity image, calculating the three-dimensional coordinates of the points on the ground in a real-world coordinate system, and, assuming that the ground is a planar model, fitting the plane using RANSAC so as to obtain a ground model (S270); converting the entire scene in the original grayscale image from camera coordinates into world coordinates while generating a plane graph, and obtaining an occupied map from the plane graph (S280); dividing the occupied map by means of a connected-domain labeling detection algorithm to obtain the position of each obstacle, converting same into the original image for marking, and calculating the distance from each obstacle to this vehicle through the disparity image (S290); and, when a current vehicle distance is less than a certain threshold value, giving an alarm or introducing a decision-making module for participation in decision-making (S2901). The method is adapted to various pavements and road conditions, has a low requirement for disparity image precision, does not rely on training data, and is not affected by hand-designed features.

Description

Automobile anti-collision pre-warning method and system based on binocular stereo vision

Technical Field

The present invention relates generally to automotive autonomous driving technology, and more particularly to automobile anti-collision technology based on binocular stereo vision.

Background Art
Accurate, real-time anti-collision pre-warning has important applications, playing a decisive role especially in the safety warnings of assisted driving and the automatic control of autonomous driving. For example, in autonomous driving, anti-collision pre-warning can reduce accidents as much as possible, avoiding personal injury and property damage; the more accurate the pre-warning, the higher the safety.

At present, anti-collision pre-warning methods fall mainly into two categories. The first is based on lidar or millimeter-wave radar: after calibration, regions below a certain threshold are judged to be ground. The lidar required by this method is very expensive and hard to deploy widely, and millimeter-wave accuracy is far below that of lidar. The second uses a monocular color camera with machine learning and computer vision to detect obstacles ahead. This approach relies heavily on training samples and hand-designed features; travelable areas vary widely, and cases absent from the training samples cannot be detected, so its scalability and generality are poor. Moreover, a monocular camera cannot accurately obtain depth information, so the results often do not match the real scene; finally, real-time performance is also hard to guarantee.

In recent years, a number of safe-driving technologies based on machine vision (including monocular vision and stereo vision) have been proposed.
Patent Document 1 (CN101135558B) discloses a machine-vision-based automobile anti-collision pre-warning technology, in which machine vision is used to collect the license-plate features of the vehicle ahead and lane-line information; the distance to the vehicle ahead is calculated from the size, in pixels, of the projected image of its license plate; the driving state of the preceding vehicle is computed in combination with the speed, steering, and other state information of the ego vehicle; and, according to the relative distance between the ego vehicle and the lane-line boundary, it is judged whether the vehicle is traveling within a safe lane.

Patent Document 1 uses machine learning for detection and relies heavily on training samples and hand-designed features; the scenes encountered while driving vary widely, and cases absent from the training samples cannot be detected, so its scalability and generality are poor. Moreover, a monocular camera cannot accurately obtain depth and speed information, so the results often do not match the real scene; finally, real-time performance is hard to guarantee.
Patent Document 2 (CN102685516A) discloses an active-safety assisted driving method based on stereo vision. The active-safety assisted driving system comprehensively utilizes opto-mechatronic information technology and consists of a stereo vision subsystem, a fast image-processing subsystem, and a safety assisted-driving subsystem, including two high-resolution CCD cameras, an ambient-illuminance sensor, a dual-channel video capture card, a synchronization controller, data transmission circuits, a power-supply circuit, a fast image-processing algorithm library, a voice reminder module, a screen display module, and an active safe-driving control module. Under various weather conditions, it identifies in real time the relative distance, relative speed, and relative acceleration of dangerous targets such as lane lines, vehicles ahead, bicycles, and pedestrians; prompts the driver by voice to take countermeasures; and performs automatic deceleration and emergency braking in critical situations, ensuring all-weather driving safety.

Patent Document 2 also uses machine-learning-based recognition for detection and relies heavily on training samples and hand-designed features; the scenes encountered while driving vary widely, and cases absent from the training samples cannot be detected, so its scalability and generality are poor, although more accurate distance results can be achieved with its binocular setup.
A machine-vision-based automobile anti-collision pre-warning technology is needed that is more general, more real-time, and less dependent on training data and hand-designed features.
Summary of the Invention

The present invention has been made in view of the above circumstances.
According to one aspect of the present invention, there is provided an automobile anti-collision pre-warning method based on binocular stereo vision, which may include: capturing, with a binocular camera mounted on the vehicle body, left and right grayscale images of the scene ahead of the car along its direction of travel, and computing a disparity map; converting the disparity map into a V-disparity map; binarizing the V-disparity map; fitting segmented straight lines to the points of the binarized V-disparity map using the RANSAC method; smoothing the fitted line over multiple frames of images; obtaining the travelable area in the original grayscale image from the extracted line; computing, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assuming the ground is planar, and fitting the plane with RANSAC to obtain the ground model; converting the entire scene in the original grayscale image from camera coordinates to world coordinates while generating a plan view, and computing the occupancy map from the plan view; segmenting the position of each obstacle from the occupancy map with a connected-component labeling algorithm, converting it into the original image for marking, and calculating the distance from each obstacle to the ego vehicle from the disparity map; and, when the distance to a vehicle ahead is less than a certain threshold, issuing an alarm or passing the information to a decision module to participate in decision-making.
According to the above automobile anti-collision pre-warning method, computing the occupancy map from the plan view may include: first, according to the world coordinates of each point, extracting the points above a first threshold height over the ground and converting them into the ground coordinate system; marking in the occupancy map the points above the first threshold height and below a second threshold height; the value of each pixel in the occupancy map being the accumulated sum of the heights of its corresponding points, thereby yielding the occupancy map.
According to the above automobile anti-collision pre-warning method, segmenting the position of each obstacle from the occupancy map with the connected-component labeling algorithm may include: converting the occupancy map into a color-labeled image according to the magnitude of each pixel value, such that larger values trend toward red and smaller values toward blue, and segmenting the different objects with the connected-component labeling algorithm, thereby obtaining the position of each obstacle.
According to the above automobile anti-collision pre-warning method, binarizing the V-disparity map may include: finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
According to the above automobile anti-collision pre-warning method, using the RANSAC method to fit one segmented straight line may include repeatedly performing the following sequence of operations until a predetermined stopping criterion is reached: selecting a random subset of the maximum points in the V-disparity map and fitting a line to it to obtain a line model; testing all the other data with the obtained line model, a point being considered an inlier if it fits the estimated line model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model with all the inliers, and estimating the error rate of the inliers with respect to the model; and, if the model's error rate is lower than that of the current best model, replacing the current best model with it. The best model finally obtained is taken as the segmented straight line.
According to the above automobile anti-collision pre-warning method, using the RANSAC method to fit multiple segmented straight lines may include: first extracting the first line as described above; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points according to the method of claim 5; and repeating this process until the number of remaining points is less than a predetermined threshold.
According to the above automobile anti-collision pre-warning method, smoothing the line over multiple frames of images may include: setting a time window; assuming the line model is expressed as ax+by+c=0, obtaining the line model parameters for each frame of image; accumulating each parameter over the frames; and, for each new frame, subtracting the line model parameters of the earliest frame from the accumulated results, adding the line model parameters of the current frame, and taking the average as the line model parameters for this frame.
According to the above automobile anti-collision pre-warning method, obtaining the travelable area in the original grayscale image from the extracted line includes: for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is less than a certain threshold, judging the corresponding position in the original image to be a safe travelable area.
According to another aspect of the present invention, there is provided an automobile anti-collision pre-warning system based on binocular stereo vision, which may include: a binocular camera that continuously captures left and right grayscale images of the scene ahead of the car along its direction of travel; and a computing device including a memory, a processor, a communication interface, and a bus, the memory, the communication interface, and the processor all being connected to the bus, the memory storing computer-executable instructions, and the computing device being able to obtain, via the communication interface, the left and right grayscale images captured by the binocular camera. When the processor executes the computer-executable instructions, the following method is performed: computing a disparity map from the left and right grayscale images; converting the disparity map into a V-disparity map; binarizing the V-disparity map; fitting segmented straight lines to the points of the binarized V-disparity map using the RANSAC method; smoothing the fitted line over multiple frames of images; obtaining the travelable area in the original grayscale image from the extracted line; computing, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assuming the ground is planar, and fitting the plane with RANSAC to obtain the ground model; converting the entire scene in the original grayscale image from camera coordinates to world coordinates while generating a plan view, and computing the occupancy map from the plan view; segmenting the position of each obstacle from the occupancy map with a connected-component labeling algorithm, converting it into the original image for marking, and calculating the distance from each obstacle to the ego vehicle from the disparity map; and, when the distance to a vehicle ahead is less than a certain threshold, issuing an alarm or passing the information to a decision module to participate in decision-making.
According to the above automobile anti-collision pre-warning system, computing the occupancy map from the plan view may include: first, according to the world coordinates of each point, extracting the points above a first threshold height over the ground and converting them into the ground coordinate system; marking in the occupancy map the points above the first threshold height and below a second threshold height; the value of each pixel in the occupancy map being the accumulated sum of the heights of its corresponding points, thereby yielding the occupancy map.
According to the above automobile anti-collision pre-warning system, segmenting the position of each obstacle from the occupancy map with the connected-component labeling algorithm may include: converting the occupancy map into a color-labeled image according to the magnitude of each pixel value, such that larger values trend toward red and smaller values toward blue, and segmenting the different objects with the connected-component labeling algorithm, thereby obtaining the position of each obstacle.
According to the above automobile anti-collision pre-warning system, binarizing the V-disparity map includes: finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
According to the above automobile anti-collision pre-warning system, using the RANSAC method to fit one segmented straight line may include repeatedly performing the following sequence of operations until a predetermined exit criterion is reached: selecting a random subset of the maximum points in the V-disparity map and fitting a line to it to obtain a line model; testing all the other data with the obtained line model, a point being considered an inlier if it fits the estimated line model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model with all the inliers, and estimating the error rate of the inliers with respect to the model; and, if the model's error rate is lower than that of the current best model, replacing the current best model with it. The best model finally obtained is taken as the segmented straight line.
According to the above automobile anti-collision pre-warning system, using the RANSAC method to fit multiple segmented straight lines may include: extracting the first line; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points; and repeating this process until the number of remaining points is less than a predetermined threshold.
According to the above automobile anti-collision pre-warning system, smoothing the line over multiple frames of images may include: setting a time window; assuming the line model is expressed as ax+by+c=0, obtaining the line model parameters for each frame of image; accumulating each parameter over the frames; and, for each new frame, subtracting the line model parameters of the earliest frame from the accumulated results, adding the line model parameters of the current frame, and taking the average as the line model parameters for this frame.
According to the above automobile anti-collision pre-warning system, obtaining the travelable area in the original grayscale image from the extracted line may include: for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is less than a certain threshold, judging the corresponding position in the original image to be a safe travelable area.
According to still another aspect of the present invention, an automobile anti-collision pre-warning system may include: a binocular camera configured to capture left and right grayscale images of the scene ahead of the car along its direction of travel; a disparity map calculation unit that computes a disparity map from the left and right grayscale images; a V-disparity map conversion module that converts the disparity map into a V-disparity map; a binarization module that binarizes the V-disparity map; a RANSAC line fitting module that uses the RANSAC method to fit segmented straight lines to the points of the binarized V-disparity map; a multi-frame image filter module that smooths the fitted line over multiple frames of images; an original-image travelable area determining module that obtains the travelable area in the original grayscale image from the extracted line; a ground model fitting module that computes, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assumes the ground is planar, and fits the plane with RANSAC to obtain the ground model; an occupancy map computation module that converts the entire scene in the original grayscale image from camera coordinates to world coordinates, generates a plan view, and computes the occupancy map from the plan view; an obstacle segmentation and distance calculation module that segments the position of each obstacle from the occupancy map with a connected-component labeling algorithm, converts it into the original image for marking, and calculates the distance from each obstacle to the ego vehicle from the disparity map; and an alarm module that issues an alarm when the distance to a vehicle ahead is less than a certain threshold, or passes the information to a decision module to participate in decision-making.
The stereo-vision-based automobile anti-collision pre-warning method and system according to embodiments of the present invention obtain a ground model from the binocular images, derive the occupancy map of the scene from the ground model, and obtain anti-collision information from the occupancy map. They can adapt to a wide variety of road surfaces and road conditions; the algorithm has low requirements on disparity-map precision, reduces front-end computation, has strong anti-interference ability, and does not depend on training data or hand-designed features.
Brief Description of the Drawings

These and/or other aspects and advantages of the present invention will become clearer and more readily understood from the following detailed description of embodiments of the invention taken in conjunction with the accompanying drawings, in which:
图1示出了根据本发明实施例的、车载的汽车防碰撞预警系统100的示意图;1 shows a schematic diagram of an in-vehicle anti-collision warning system 100 in accordance with an embodiment of the present invention;
图2示出了根据本发明实施例的基于双目立体视觉的汽车防碰撞预警方法的总体流程图;2 shows a general flow chart of a vehicle collision avoidance warning method based on binocular stereo vision according to an embodiment of the present invention;
图3示出了最小二乘法在存在较大噪声情况下错误提取直线情形的示意图;FIG. 3 is a schematic diagram showing a situation in which a least square method incorrectly extracts a straight line in the presence of a large noise;
图4示出了根据本发明实施例的从V视差图的点中拟合一段直线的方法240的流程图; 4 shows a flow diagram of a method 240 of fitting a straight line from points of a V disparity map, in accordance with an embodiment of the present invention;
FIG. 5 shows a structural block diagram of an automobile anti-collision pre-warning system 300 according to another embodiment of the present invention.
DETAILED DESCRIPTION
To enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
First, the terms used herein are explained.
Disparity map: a disparity map takes either image of an image pair as reference; its size is the size of that reference image, and each element value is a disparity value. The disparity map contains the distance information of the scene and can be computed from the left and right images captured by a binocular camera. A point in an ordinary two-dimensional disparity map is denoted by its coordinates (u, v), where u is the abscissa and v is the ordinate; the pixel value at point (u, v) is denoted d(u, v) and represents the disparity at that point. Because the disparity map contains the distance information of the scene, image matching for extracting a disparity map from a stereo image pair has long been one of the most active areas of binocular vision research.
V-disparity map: a V-disparity map is converted from a disparity map; the gray value of any point (d, v) in the V-disparity map is the number of points whose disparity value equals d in the row of the disparity map with ordinate v. Intuitively, the V-disparity map can be viewed as a side view of the disparity map: by accumulating, for each row, the number of points with the same disparity value, a plane in the original image is projected into a straight line.
RANSAC: short for RANdom SAmple Consensus, an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, thereby obtaining the valid sample data.
Occupancy map: in early vision systems, knowledge of the environment was represented by a grid in which the 2D projection of every object in the environment onto the ground is stored; such a representation is called an occupancy map.
Binocular stereo vision: binocular stereo vision is an important form of machine vision. Based on the parallax principle, it uses imaging devices to acquire two images of a measured object from different positions and obtains the three-dimensional geometric information of the object by computing the positional deviation between corresponding points of the images. Fusing the images obtained by two eyes and observing the difference between them gives a clear sense of depth; correspondences between features are established, and the offset between the image points of the same physical point in space in the two images is what we call disparity. The binocular stereo vision measurement method has the advantages of high efficiency, suitable accuracy, simple system structure, and low cost. For measuring moving objects (including animals and the human body), stereo vision is a particularly effective measurement method because image acquisition is completed in an instant. Binocular stereo vision is one of the key technologies of computer vision, and obtaining the distance information of a three-dimensional scene is also among the most fundamental problems of computer vision research.
FIG. 1 shows a schematic diagram of an in-vehicle system 100 for detecting the drivable area of an automobile according to an embodiment of the present invention, comprising a binocular camera 110 and a computing device 120.
The binocular camera 110 continuously captures left and right grayscale images of the scene in front of the car along its direction of travel.
The binocular camera 110 is mounted, for example, at the front of the top of the vehicle so that its imaging range is concentrated on the road surface in front of the vehicle.
The computing device 120 comprises a memory 121, a processor 122, a communication interface 123, and a bus 124. The memory 121, the communication interface 123, and the processor 122 are all connected to the bus 124. The memory stores computer-executable instructions; via the communication interface, the computing device can obtain the left and right grayscale images captured by the binocular camera; and when the processor executes the computer-executable instructions, the automobile anti-collision pre-warning method is performed.
The computing device 120 may further include an alarm 125 for giving an alarm signal or sending out a notification when a danger or emergency is detected.
The structure shown in FIG. 1 is merely an example; components may be added, removed, or replaced as needed.
In addition, it should be noted that some functions, or parts of functions, may be implemented by different components as needed. For example, computing the disparity map from the left and right images is described in the embodiments as being performed by the computing device; however, software, hardware, or firmware for computing the disparity map may instead be added to the binocular camera, or a dedicated component for computing the disparity map from the left and right images may be deployed in the vehicle. All of these fall within the scope of the inventive concept.
A method of detecting the drivable area of a car in real time according to an embodiment of the present invention is described in detail below with reference to FIG. 2.
In the technique of the embodiment of the present invention for detecting the drivable area of a car in real time, left and right images are obtained by a binocular camera sensor; a disparity map is computed from the two images; a V-disparity map is constructed from the disparity map; piecewise straight lines are then extracted on the V-disparity map using RANSAC and smoothed over multiple frames of images; and finally the disparities corresponding to these lines yield the safe drivable area in the original image.
FIG. 2 shows a general flowchart of a method 200 of detecting the drivable area of a vehicle in real time according to an embodiment of the present invention.
In step S210, left and right grayscale images of the scene in front of the car along its direction of travel are captured by the binocular camera mounted on the car body, and a disparity map is computed.
Specifically, for example, a binocular stereo matching algorithm first finds the correspondences between each pair of images, and the disparity map of the current scene is then obtained according to the triangulation principle.
Here, some denoising processing or the like may also be applied to the disparity map.
In step S220, a V-disparity map is obtained by conversion from the disparity map.
Specifically, for example, in the disparity map the relative distance of an object from the lens is represented by variations in gray level; according to the depth information contained in the disparity map, the disparity of the ground varies continuously and is approximately piecewise linear. Let M_d denote the pixel value at a point of the disparity map and M_vd the pixel value at the corresponding point of the V-disparity map. The conversion between the disparity map and the V-disparity map is expressed by a function f(M_d) = M_vd, which accumulates, for each row of the disparity map, the number P_num of pixels having the same disparity. With disparity as the horizontal axis, the vertical axis coinciding with that of the disparity map, and P_num as the gray value of the corresponding pixel, a grayscale V-disparity map is obtained.
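For illustration only (this sketch is not part of the original disclosure; the function names and the synthetic data are hypothetical), the conversion just described amounts to a per-row histogram of disparity values:

```python
import numpy as np

def v_disparity(disparity, max_d=64):
    """Build a V-disparity map: entry (v, d) counts how many pixels in
    row v of the disparity map have disparity value d (P_num)."""
    h, _ = disparity.shape
    vmap = np.zeros((h, max_d), dtype=np.int32)
    for v in range(h):
        row = disparity[v]
        valid = row[(row >= 0) & (row < max_d)]
        counts = np.bincount(valid.astype(np.int64), minlength=max_d)
        vmap[v] = counts[:max_d]
    return vmap

# Synthetic disparity map: a flat "ground" whose disparity grows linearly
# toward the bottom rows projects to a straight line in V-disparity.
disp = np.tile(np.arange(8).reshape(8, 1), (1, 10))  # row v has disparity v
vd = v_disparity(disp, max_d=8)
```

On this synthetic input, each row v contributes all 10 of its pixels to column d = v, so the V-disparity map is a diagonal line, consistent with the side-view interpretation above.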
In step S230, the V-disparity map is binarized.
In one example, binarization is performed as follows: the maximum of each row is found first; in each row, only the pixel holding the maximum has its gray value set to 255, and all other pixels are set to 0.
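As an illustrative sketch of this per-row-maximum rule (hypothetical names and data, not part of the disclosure):

```python
import numpy as np

def binarize_rows(vmap):
    """Keep only the per-row maximum: set it to 255, everything else to 0."""
    out = np.zeros_like(vmap)
    idx = vmap.argmax(axis=1)                    # column of the max in each row
    out[np.arange(vmap.shape[0]), idx] = 255
    return out

vm = np.array([[1, 5, 2],
               [7, 0, 3]])
bm = binarize_rows(vm)
```

Note that `argmax` keeps the first maximum when a row has ties, which is one reasonable way to resolve the ambiguity the text leaves open.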
In step S240, the RANSAC method is used to fit piecewise straight lines from the points of the binarized V-disparity map.
The following explains why, among the many line-fitting algorithms, the embodiment of the present invention chooses the RANSAC method to fit straight lines to the points of the binarized V-disparity map.
Real-world data usually carries some deviation, or noise, which makes mathematical fitting difficult. For example, suppose we know that two variables X and Y are linearly related, Y = aX + b, and we want to determine the specific values of the parameters a and b. Through experiments, a set of test values of X and Y can be obtained. Although in theory an equation with two unknowns needs only two pairs of values to be determined, because of systematic error, the values of a and b computed from any two arbitrarily chosen points differ. What we want is for the finally computed theoretical model to have the smallest error with respect to the test values.
The prior art usually fits a straight line with the least-squares method or the Hough transform.
The shortcomings of the Hough transform are that detection is too slow for real-time control, and that its accuracy is insufficient: the expected information may go undetected while erroneous judgments are made, producing a large amount of redundant data. This is mainly because:
1. It occupies a large amount of memory, is time-consuming, and has poor real-time performance;
2. Real images are generally disturbed by external noise and have a low signal-to-noise ratio, in which case the performance of the conventional Hough transform drops sharply; when searching for maxima in the parameter space, a suitable threshold is hard to determine, so "false peaks" and "missed detections" often occur.
The least-squares method computes the parameter values at which the partial derivatives of the mean squared error with respect to a and b are zero. In fact, in many situations least squares is synonymous with linear regression. Unfortunately, least squares is only suitable when the errors are small. Consider the case where a model must be extracted from a very noisy data set, say one in which only 20% of the data fit the model: least squares is then inadequate. In FIG. 3, for example, the naked eye can easily pick out a straight line (the pattern), but least squares finds the wrong one.
The present invention detects the road surface by extracting straight lines from the V-disparity map, and the disparity map carries substantial noise; in this situation, extracting the line with least squares would very likely produce a wrong fit.
The RANSAC algorithm can estimate the parameters of a mathematical model iteratively from a set of observed data containing outliers, and it is well suited to estimating model parameters from observations containing substantial noise. Data obtained in practical applications often contain noisy samples that interfere with model construction; we call such noisy data points outliers, and the points that contribute positively to model construction inliers. What RANSAC does is first randomly select some points and use them to obtain a model (for line fitting, this "model" is essentially the slope), then test the remaining points against this model: if a tested data point lies within the allowed error, it is judged an inlier, otherwise an outlier. If the number of inliers reaches a set threshold, the currently selected data point set has reached an acceptable level; otherwise, all the steps following the random selection are repeated, over and over, until a selected point set reaches an acceptable level, at which time the resulting model can be regarded as the optimal model constructed for the data points.
FIG. 4 shows a flowchart of a method 240 of fitting a straight-line segment from the points of a V-disparity map according to an embodiment of the present invention. The method can be used for step S240 in FIG. 2.
In step S241, a random subset of the points of the binarized V-disparity map is selected for line fitting, giving a line model.
In step S242, the obtained line model is used to test all the other data; if a point fits the estimated line model, it is also considered an inlier, and the number of inliers is counted.
In step S243, it is judged whether the number of inliers is greater than a threshold; if so, the process proceeds to step S245, otherwise to step S244.
In step S244, the estimated model is judged unreasonable and discarded, and the process proceeds to step S249.
In step S245, the estimated model is judged reasonable; the model is then re-estimated using all the inliers, the error rate of the inliers with respect to the model is estimated, and the process proceeds to step S246.
In step S246, it is judged whether the error rate of the currently estimated model is smaller than that of the best model; if so, the process proceeds to step S247, otherwise to step S248.
In step S247, the best model is replaced by the currently estimated model: since, per the judgment of step S246, the currently estimated model has a lower error rate and thus better performance than the best model, it supersedes it as the new best model. The process then proceeds to step S249.
In step S248, the estimated model is discarded, and the process proceeds to step S249.
In step S249, it is determined whether a termination condition is met; if so, the process ends, otherwise it returns to step S241 and repeats. The termination condition here may be, for example, that the number of iterations reaches a threshold, that the error rate falls below a predetermined threshold, and so on.
The method of extracting one straight-line segment from the V-disparity map using RANSAC has been described above with reference to FIG. 4. The ground is not a single plane, so it appears in the V-disparity map as several consecutive straight-line segments. These segments may be extracted, for example, as follows: first extract the first line by the method described with reference to FIG. 4; after extraction, remove the points belonging to the first line from the V-disparity map; then extract the second line from the remaining points in the same way; and repeat until the number of remaining points is smaller than a predetermined threshold.
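The core RANSAC loop of steps S241 to S249 can be sketched as follows. This is a simplified illustration, not part of the disclosure: it keeps the model with the most inliers and refits it on all inliers, scoring candidates by inlier count rather than by the explicit error-rate bookkeeping of steps S245 to S247; all names, thresholds, and the synthetic data are hypothetical. Piecewise extraction would then remove the returned inliers and run the same procedure on the remaining points.

```python
import random
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, min_inliers=5, seed=0):
    """Fit one line y = a*x + b by RANSAC; return (a, b, inlier_mask)."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best, best_count = None, 0
    for _ in range(n_iter):
        # S241: fit a candidate line through two random points
        (x1, y1), (x2, y2) = rng.sample(list(map(tuple, pts)), 2)
        if x1 == x2:
            continue                       # vertical sample: skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # S242/S243: count points within the error tolerance (inliers)
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        mask = resid < tol
        count = int(mask.sum())
        if count >= min_inliers and count > best_count:
            # S245: re-estimate the model from all inliers
            aa, bb = np.polyfit(pts[mask, 0], pts[mask, 1], 1)
            best, best_count = (aa, bb, mask), count
    return best

# Points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(0, 50), (3, -40)]
a, b, mask = ransac_line(pts)
```

With ten exact inliers out of twelve points, the recovered parameters are a = 2 and b = 1, and the two outliers are excluded, which is exactly the behavior least squares fails to deliver on such data.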
Returning to FIG. 2, after step S240 is completed, the process proceeds to step S250.
In step S250, the line is smoothed over multiple frames of images.
As mentioned above, in patent document CN 103489175B, Kalman filtering is applied to the fitted line.
Through experimental analysis, the inventors found that the Kalman filtering method assumes the variation of the processed quantity follows a Gaussian distribution, whereas the variation of the road surface is in fact not Gaussian; moreover, the Kalman filtering method runs slowly and cannot meet the real-time requirement of detecting the drivable area in the autonomous-driving domain.
The road detection technique of the embodiment of the present invention therefore uses a multi-frame smoothing method designed for real-time operation. Since the slope of the road on which the car travels does not change abruptly but varies uniformly and slowly, the line fitted according to the embodiment also varies uniformly. On the other hand, because the disparity map produced by the binocular camera is noisy, the obtained line exhibits unnecessary jitter. To reduce this jitter, and given that the fitted line varies uniformly and slowly, the embodiment of the present invention proposes smoothing over multiple frames of images to obtain a smooth, uniform line model.
Specifically, the line can be smoothed over multiple frames as follows. A time window is set, and the line model is expressed as ax + by + c = 0. Line-model parameters are obtained for each frame, and each parameter is accumulated over the frames. Whenever a new frame arrives, the line-model parameters of the oldest frame are subtracted from the accumulated sums, the parameters of the current frame are added, and the average is taken as the line-model parameters for this frame. For example, while the car drives on the road, a new image is captured at the current time tc; for a fixed window, the first frame is removed from the window, the new frame is added, and the line-model parameters of the frames in the window are averaged as the parameters for the new frame, i.e. the estimated mathematical model parameters of the road surface in the V-disparity map. This operation continues as time advances, which amounts to sliding the window forward in time.
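The sliding-window averaging just described can be sketched as follows (an illustrative simplification with hypothetical names, not part of the disclosure):

```python
from collections import deque

class LineSmoother:
    """Moving average over the line parameters (a, b, c) of ax+by+c=0
    across the last `window` frames, maintained as running sums."""
    def __init__(self, window=5):
        self.window = window
        self.frames = deque()
        self.sums = [0.0, 0.0, 0.0]

    def update(self, params):
        self.frames.append(params)
        self.sums = [s + p for s, p in zip(self.sums, params)]
        if len(self.frames) > self.window:
            old = self.frames.popleft()        # drop the oldest frame
            self.sums = [s - p for s, p in zip(self.sums, old)]
        n = len(self.frames)
        return tuple(s / n for s in self.sums)

sm = LineSmoother(window=3)
out = None
for a in [1.0, 2.0, 3.0, 10.0]:                # 'a' jumps in the last frame
    out = sm.update((a, -1.0, 0.0))
```

After the last frame the window holds a = 2, 3, 10, so the smoothed slope is 5.0: the jump to 10 is attenuated, which is the anti-jitter effect the text aims for.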
In step S260, the drivable area in the original grayscale image is obtained from the extracted lines.
In one example, the drivable area in the original grayscale image can be obtained from the lines extracted in the V-disparity map as follows: for each row of the V-disparity map, take the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, compare the disparity value of every pixel with d; when the difference is smaller than a certain threshold, the corresponding position of the original image is judged to be a safe drivable area.
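As an illustrative sketch of this per-row comparison (hypothetical names and synthetic data, not part of the disclosure):

```python
import numpy as np

def drivable_mask(disparity, road_d_per_row, tol=1):
    """Mark pixel (u, v) drivable when |d(u, v) - d_line(v)| < tol, where
    d_line(v) is the road disparity given by the extracted V-disparity line."""
    h, w = disparity.shape
    mask = np.zeros((h, w), dtype=bool)
    for v in range(h):
        d = road_d_per_row[v]
        mask[v] = np.abs(disparity[v].astype(int) - d) < tol
    return mask

disp = np.array([[3, 3, 9],
                 [5, 8, 5]])
road_d = [3, 5]                 # road disparity per row from the fitted line
m = drivable_mask(disp, road_d, tol=1)
```

Pixels whose disparity matches the road line (within the tolerance) are marked drivable; the pixel with disparity 9 in row 0 and the one with disparity 8 in row 1 stand above the road and are excluded.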
The information about the safe drivable area obtained in the grayscale image can provide key decision information for assisted driving, autonomous driving, and driverless operation, preventing collisions and ensuring safety.
Returning to FIG. 2, in step S270, the three-dimensional coordinates, in the real-world coordinate system, of the points belonging to the ground are computed from the original image and the disparity map; the ground is assumed to be a planar model, and RANSAC is used to fit this plane, giving the ground model.
Specifically, in one example, the ground model is obtained by the following fitting operations. From the original image and the disparity map, the three-dimensional coordinates of the points belonging to the ground in the real-world coordinate system are computed. In the real-world coordinate system, the ground is assumed to be a planar model, expressed as ax + by + cz + d = 0, and RANSAC is then used to fit this plane. RANSAC performs the fit by repeatedly selecting random subsets of the maxima among the ground candidate points. Since the disparity map contains much noise, after one pass of RANSAC, the resulting set of points belonging to the ground is subjected to a second pass of RANSAC to fit the plane, finally giving the ground model.
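A single RANSAC pass of the plane fit can be sketched as follows (the disclosure applies a second pass to the resulting inlier set; this simplified illustration with hypothetical names and synthetic data shows one pass only and is not part of the disclosure):

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit ax+by+cz+d=0 by RANSAC; return the unit normal (a,b,c) and d."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best, best_count = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)      # normal of the sampled plane
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        n = n / norm
        d = -n.dot(p0)
        dist = np.abs(pts @ n + d)          # point-to-plane distances
        count = int((dist < tol).sum())
        if count > best_count:
            best, best_count = (n, d), count
    return best

# Mostly points on the plane z = 0, plus two elevated outliers
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-5, 5, (30, 2)), np.zeros(30)]
pts = np.vstack([ground, [[0.0, 0.0, 3.0], [1.0, 1.0, 4.0]]])
normal, d = ransac_plane(pts)
```

On this synthetic data the recovered plane is z = 0 (normal parallel to the z-axis, d = 0), with the two elevated points rejected as outliers.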
In step S280, the entire scene of the original grayscale image is converted from camera coordinates to world coordinates, a plan view is generated at the same time, and the occupancy map is obtained from the plan view.
In one example, the occupancy map is obtained as follows: first, the points lying above the ground by a certain height are extracted and converted into the ground coordinate system; the points above one height and below another are marked in the occupancy map; and the value of each pixel of the occupancy map is the accumulated sum of the heights of its corresponding points, thereby giving the occupancy map.
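An illustrative sketch of this height-band accumulation (the cell size, height band, grid size, and data are all hypothetical; not part of the disclosure):

```python
import numpy as np

def occupancy_map(points, cell=0.5, h_min=0.2, h_max=2.5, size=(4, 4)):
    """Accumulate, per ground cell, the summed height of the points whose
    height above the ground lies within [h_min, h_max]."""
    grid = np.zeros(size)
    for x, z, h in points:          # (lateral, forward, height above ground)
        if not (h_min <= h <= h_max):
            continue                # too low or too high: not an obstacle
        i, j = int(z // cell), int(x // cell)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] += h         # cell value = sum of point heights
    return grid

pts = [(0.1, 0.1, 1.0), (0.2, 0.3, 0.5),   # two points land in cell (0, 0)
       (1.2, 1.2, 2.0),                    # one point lands in cell (2, 2)
       (0.6, 0.6, 0.1)]                    # below h_min: filtered out
occ = occupancy_map(pts)
```

Cells with a large accumulated height correspond to tall or dense obstacles, which is what the color coding of the next step visualizes.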
In step S290, the position of each obstacle is obtained from the occupancy map by segmentation with a connected-component labeling algorithm, mapped back and marked in the original image, and the distance from the obstacle to the host vehicle is computed from the disparity map.
In one example, the occupancy map is converted into a color-coded labeled image according to the magnitude of each pixel value: the larger the value, the closer to red; the smaller the value, the closer to blue. The different objects are then segmented by a connected-component labeling algorithm; for such algorithms, see Di Stefano, Luigi, and Andrea Bulgarelli, "A simple and efficient connected components labeling algorithm," Image Analysis and Processing, 1999, Proceedings, International Conference on, IEEE, 1999. The specific position of each obstacle is thereby obtained, mapped back and marked in the original image, and the distance from the obstacle to the host vehicle is computed from the disparity map. The distance can be found from the relationship between disparity and depth.
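For illustration (not part of the disclosure), a simple BFS-based 4-connected labeling, standing in for the cited Di Stefano and Bulgarelli algorithm, together with the standard disparity-to-depth relation Z = f·B/d; the focal length and baseline values are hypothetical:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling by breadth-first search."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    n_labels = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                n_labels += 1
                labels[sy, sx] = n_labels
                q = deque([(sy, sx)])
                while q:                       # flood-fill this component
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = n_labels
                            q.append((ny, nx))
    return labels, n_labels

def depth_from_disparity(d, focal_px, baseline_m):
    """Triangulation: Z = f * B / d."""
    return focal_px * baseline_m / d

occ = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 1]])
labels, n = label_components(occ.astype(bool))
z = depth_from_disparity(32.0, focal_px=800.0, baseline_m=0.12)  # metres
```

The binary map splits into two obstacles, and a disparity of 32 pixels with an 800-pixel focal length and a 0.12 m baseline corresponds to a depth of 3 m.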
In step S2901, when the distance to the vehicle ahead is smaller than a certain threshold, an alarm is raised, or the result is passed to the decision module to participate in decision-making.
FIG. 5 shows a structural block diagram of a real-time drivable-area detection system 300 according to another embodiment of the present invention. The system 300 is installed in the car to detect its drivable area in real time, providing key support for the car's assisted driving, autonomous driving, and driverless operation.
As shown in FIG. 5, the real-time drivable-area detection system 300 may include: a binocular camera 310, a disparity map computing component 320, a V-disparity map conversion component 330, a binarization component 340, a RANSAC line-fitting component 350, a multi-frame image filtering component 360, an original-image drivable-area determining component 370, a ground model fitting module 380, an occupancy map obtaining module 390, an obstacle segmentation and distance calculation module 391, and an alarm module 392.
The binocular camera 310 is configured to capture left and right grayscale images of the scene in front of the car along its direction of travel. The disparity map computing component 320 computes a disparity map from the two grayscale images. The V-disparity map conversion component 330 converts the disparity map into a V-disparity map. The binarization component 340 binarizes the V-disparity map. The RANSAC line-fitting component 350 uses the RANSAC method to fit piecewise straight lines from the points of the binarized V-disparity map. The multi-frame image filtering component 360 smooths the lines over multiple frames of images. The original-image drivable-area determining component 370 obtains the drivable area in the original grayscale image from the extracted lines. The ground model fitting module 380 computes, from the original image and the disparity map, the three-dimensional coordinates of the points belonging to the ground in the real-world coordinate system, assumes the ground to be a planar model, and fits this plane with RANSAC to obtain the ground model. The occupancy map obtaining module 390 converts the entire scene of the original grayscale image from camera coordinates to world coordinates, generates a plan view, and obtains the occupancy map from the plan view. The obstacle segmentation and distance calculation module 391 segments the position of each obstacle from the occupancy map with a connected-component labeling algorithm, maps it back and marks it in the original image, and computes the distance from the obstacle to the host vehicle from the disparity map. The alarm module 392 raises an alarm when the distance to the vehicle ahead is smaller than a certain threshold, or passes the result to the decision module to participate in decision-making.
For the functions and specific implementations of the disparity map computing component 320, the V-disparity map conversion component 330, the binarization component 340, the RANSAC line-fitting component 350, the multi-frame image filtering component 360, the original-image drivable-area determining component 370, the ground model fitting module 380, the occupancy map obtaining module 390, the obstacle segmentation and distance calculation module 391, and the alarm module 392, reference may be made to the description of the corresponding steps of FIG. 2, which is not repeated here.
It should be noted that the binocular camera herein should be understood broadly: any camera, or any device with an imaging function, capable of obtaining a left image and a right image can be regarded as a binocular camera herein.
The disparity map computing component 320, the V-disparity map conversion component 330, the binarization component 340, the RANSAC line-fitting component 350, the multi-frame image filtering component 360, the original-image drivable-area determining component 370, the ground model fitting module 380, the occupancy map obtaining module 390, the obstacle segmentation and distance calculation module 391, and the alarm module 392 should likewise be understood broadly: these components may be implemented in software, firmware, hardware, or a combination thereof, and the individual components may be combined with one another, sub-combined, or further split, all of which fall within the scope of the present disclosure.
根据本发明实施例的汽车可行驶区域实时检测方法和系统,可以适应各种各样的路面和路况,对视差图精度要求低,减少前端运算量,抗干扰能力强,提高实时性,这些对于汽车的自动安全驾驶非常关键。The real-time detecting method and system for a travelable area of a vehicle according to an embodiment of the present invention can adapt to various road surfaces and road conditions, have low precision requirements for parallax maps, reduce front end calculation amount, have strong anti-interference ability, and improve real-time performance. Auto-safe driving of cars is critical.
以上已经描述了本发明的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。因此,本发明的保护范围应该以权利要求的保护范围为准。 The embodiments of the present invention have been described above, and the foregoing description is illustrative, not limiting, and not limited to the disclosed embodiments. Numerous modifications and changes will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the scope of protection of the present invention should be determined by the scope of the claims.

Claims (17)

  1. A vehicle anti-collision pre-warning method based on binocular stereo vision, comprising:
    capturing, with a binocular camera mounted on the vehicle body, left and right grayscale images of the scene ahead of the vehicle along its direction of travel, and calculating a disparity map;
    converting the disparity map into a V-disparity map;
    binarizing the V-disparity map;
    fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    smoothing and filtering the lines over multiple image frames;
    obtaining the drivable area in the original grayscale image from the extracted lines;
    calculating, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assuming the ground to be a planar model, and fitting that plane using RANSAC to obtain a ground model;
    converting the entire scene in the original grayscale image from camera coordinates to world coordinates while generating a plan view, and deriving an occupancy map from the plan view;
    segmenting the position of each obstacle from the occupancy map using a connected-component labeling algorithm, mapping it back into the original image to mark it, and calculating the distance from each obstacle to the host vehicle using the disparity map; and
    raising an alarm when the distance to the vehicle ahead is less than a certain threshold, or passing the result to a decision module to participate in decision-making.
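For illustration only (not part of the claims): the conversion from a disparity map to a V-disparity map in the second step above is, in essence, a per-row disparity histogram. A minimal NumPy sketch, where the toy image size and the `max_disp` range are illustrative assumptions:

```python
import numpy as np

def v_disparity(disp, max_disp=64):
    """Build a V-disparity map: one row per image row, one column per integer
    disparity value; each cell counts how many pixels in that image row have
    that disparity. Disparities <= 0 are treated as invalid."""
    h, _ = disp.shape
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    d = np.clip(disp.astype(np.int32), 0, max_disp - 1)
    for v in range(h):
        row = d[v][disp[v] > 0]     # keep valid disparities in this row
        np.add.at(vmap[v], row, 1)  # histogram the row's disparities
    return vmap

# A flat ground plane yields a slanted line in the V-disparity map:
# disparity shrinks toward higher (farther) image rows.
disp = np.zeros((8, 10), dtype=np.float32)
for v in range(8):
    disp[v, :] = v + 1              # toy "ground": disparity grows with row index
vmap = v_disparity(disp, max_disp=16)
```

On real data the bright maxima of `vmap` trace the road profile, which the subsequent binarization and RANSAC steps extract as a line.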
  2. The vehicle anti-collision pre-warning method according to claim 1, wherein deriving the occupancy map from the plan view comprises:
    first extracting, according to the world coordinates of each point, the points that are higher than a first threshold height above the ground, and converting these points into the ground coordinate system; marking in the occupancy map the points that are higher than the first threshold height and lower than a second threshold height; and taking the value of each pixel in the occupancy map to be the accumulated sum of the heights of its corresponding points, thereby obtaining the occupancy map.
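For illustration only (not part of the claim): the height-gated accumulation described above can be sketched as follows. The grid resolution `cell`, the grid size, and the two height thresholds are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def occupancy_map(points, h_min=0.2, h_max=2.5, cell=0.1, size=(50, 50)):
    """Accumulate an occupancy map from ground-frame points (x, z, height).
    Points whose height lies between the two thresholds are binned into a
    top-down grid; each cell stores the sum of the heights falling into it."""
    occ = np.zeros(size, dtype=np.float64)
    for x, z, h in points:
        if h_min < h < h_max:                    # keep obstacle-height points only
            i, j = int(z / cell), int(x / cell)  # grid cell for this point
            if 0 <= i < size[0] and 0 <= j < size[1]:
                occ[i, j] += h                   # accumulate height into the cell
    return occ

# Two stacked obstacle points at the same (x, z), plus one point too low to count.
pts = [(1.0, 2.0, 1.5), (1.0, 2.0, 0.5), (3.0, 4.0, 0.05)]
occ = occupancy_map(pts)
```

Cells with a large accumulated height then stand out as obstacle candidates for the segmentation step.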
  3. The vehicle anti-collision pre-warning method according to claim 1, wherein segmenting the position of each obstacle from the occupancy map by the connected-component labeling algorithm comprises:
    converting the occupancy map into a color-labeled image according to the magnitude of each pixel value, such that larger pixel values are shifted toward red and smaller pixel values toward blue; and segmenting out the different objects by the connected-component labeling algorithm, thereby obtaining the position of each obstacle.
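For illustration only (not part of the claim): the "connected domain mark detection" of the claim corresponds to connected-component labeling. A minimal 4-connected breadth-first flood-fill sketch over a binarized occupancy grid (the toy grid is an assumption for demonstration):

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling on a binary grid. Returns a label grid
    and, per component, its bounding box (min_row, min_col, max_row, max_col)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    boxes, lab = {}, 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] and not labels[r][c]:
                lab += 1                          # start a new component
                q = deque([(r, c)])
                labels[r][c] = lab
                box = [r, c, r, c]
                while q:
                    y, x = q.popleft()
                    box = [min(box[0], y), min(box[1], x),
                           max(box[2], y), max(box[3], x)]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = lab  # grow the component
                            q.append((ny, nx))
                boxes[lab] = tuple(box)
    return labels, boxes

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, boxes = label_components(grid)
```

Each bounding box then marks one obstacle, which can be projected back into the original image for display and for the disparity-based distance measurement.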
  4. The vehicle anti-collision pre-warning method according to claim 1, wherein binarizing the V-disparity map comprises:
    finding the maximum pixel value in each row, setting the gray value of only the pixel holding the maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
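For illustration only (not part of the claim): the row-wise binarization can be written in a few lines of NumPy. On ties, `argmax` keeps the first maximum in the row; the claim does not specify tie handling:

```python
import numpy as np

def binarize_vmap(vmap):
    """Keep only the strongest bin per row of the V-disparity map: set the
    pixel holding each row's maximum to 255 and all other pixels to 0."""
    out = np.zeros_like(vmap, dtype=np.uint8)
    rows = np.arange(vmap.shape[0])
    out[rows, vmap.argmax(axis=1)] = 255   # one surviving pixel per row
    return out

vmap = np.array([[1, 5, 2],
                 [7, 0, 3]])
b = binarize_vmap(vmap)
```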
  5. The vehicle anti-collision pre-warning method according to claim 1, wherein using the RANSAC method to fit one piecewise line segment comprises:
    repeatedly performing the following sequence of operations until a predetermined termination criterion is reached:
    selecting a random subset of the maximum-value points in the V-disparity map and performing a line fit to obtain a line model;
    testing all the other data with the obtained line model, a point being regarded as an inlier if it fits the estimated line model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model using all the inliers, and estimating the error rate of the inliers with respect to the model;
    if the error rate of the model is lower than that of the current best model, replacing the current best model with this model; and
    taking the best model finally obtained as the piecewise line segment.
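For illustration only (not part of the claim): a minimal RANSAC line-fit sketch following the loop described above — sample a minimal subset, count inliers, keep the best model, then refit on its inliers by least squares. The iteration count, distance tolerance, and inlier threshold are illustrative assumptions:

```python
import random

def ransac_line(points, iters=200, tol=1.0, min_inliers=10, seed=0):
    """RANSAC fit of a line to 2D points. Keeps the candidate with the most
    inliers (points within `tol` of the line ax + by + c = 0), then refits
    y = m*x + k on that inlier set and returns (m, k)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b, c = y1 - y2, x2 - x1, x1 * y2 - x2 * y1  # line through the pair
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                                    # degenerate sample
        inliers = [(x, y) for x, y in points
                   if abs(a * x + b * y + c) / norm < tol]
        if len(inliers) >= min_inliers and len(inliers) > len(best_inliers):
            best_inliers = inliers                      # new best model
    if not best_inliers:
        return None
    # Least-squares refit on all inliers of the best model.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    k = (sy - m * sx) / n
    return m, k

pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 40), (7, -3)]  # line + outliers
model = ransac_line(pts, min_inliers=10)
```

Despite the two outliers, the refit recovers the underlying line y = 2x + 1.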
  6. The vehicle anti-collision pre-warning method according to claim 5, wherein using the RANSAC method to fit multiple piecewise line segments comprises:
    first extracting a first line according to the method of claim 5; after the extraction is complete, removing the points belonging to the first line from the V-disparity map, and then extracting a second line from the remaining points according to the method of claim 5; and repeating this process until the number of remaining points is less than a predetermined threshold.
  7. The vehicle anti-collision pre-warning method according to claim 1, wherein smoothing and filtering the lines over multiple image frames comprises:
    setting a time window; assuming the line model is expressed as ax+by+c=0, obtaining the line model parameters for each image frame and accumulating each parameter over the frames; and, whenever a new frame arrives, subtracting the line model parameters of the earliest frame from the accumulated results, adding the line model parameters of the current frame, and averaging to obtain the line model parameters for this frame.
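For illustration only (not part of the claim): the sliding-window parameter averaging can be sketched with a running sum over the last few frames, where the window length of 3 is an illustrative assumption:

```python
from collections import deque

class LineSmoother:
    """Sliding-window smoother for per-frame line parameters (a, b, c) of
    ax + by + c = 0: keep a running sum over the last `window` frames, drop
    the oldest frame's parameters when a new frame arrives, and output the
    windowed average."""
    def __init__(self, window=5):
        self.window = window
        self.frames = deque()
        self.acc = [0.0, 0.0, 0.0]

    def update(self, a, b, c):
        self.frames.append((a, b, c))
        self.acc = [s + p for s, p in zip(self.acc, (a, b, c))]
        if len(self.frames) > self.window:   # evict the oldest frame
            old = self.frames.popleft()
            self.acc = [s - p for s, p in zip(self.acc, old)]
        n = len(self.frames)
        return tuple(s / n for s in self.acc)

sm = LineSmoother(window=3)
out = None
for a in (1.0, 2.0, 3.0, 4.0):               # parameter a drifting, b and c fixed
    out = sm.update(a, -1.0, 0.5)
```

After the fourth frame only frames 2, 3, and 4 remain in the window, so the smoothed `a` is their mean.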
  8. The vehicle anti-collision pre-warning method according to any one of claims 1 to 7, wherein obtaining the drivable area in the original grayscale image from the extracted lines comprises:
    for each row of the V-disparity map, selecting the point on the extracted line whose disparity value is d; and, in the corresponding row of the disparity map, comparing the disparity value of each pixel with d, and, when the difference is less than a certain threshold, determining the corresponding position in the original image to be a safe drivable area.
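For illustration only (not part of the claim): the per-row comparison against the ground-line disparity d can be sketched as a boolean mask over the disparity map. The toy array sizes and the tolerance are illustrative assumptions:

```python
import numpy as np

def drivable_mask(disp, line_disp, tol=1.0):
    """Mark as drivable every pixel whose disparity is close to the ground
    disparity predicted for its row: `line_disp[v]` is the disparity d of the
    fitted ground line at image row v, and `tol` is the comparison threshold."""
    h, w = disp.shape
    mask = np.zeros((h, w), dtype=bool)
    for v in range(h):
        mask[v] = np.abs(disp[v] - line_disp[v]) < tol  # per-row comparison to d
    return mask

disp = np.array([[2.0, 2.4, 9.0],
                 [3.1, 8.0, 3.0]])
mask = drivable_mask(disp, line_disp=[2.0, 3.0], tol=0.5)
```

Pixels whose disparity deviates strongly from the row's ground disparity (here the 9.0 and 8.0 entries) are excluded from the drivable area.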
  9. A vehicle anti-collision pre-warning system based on binocular stereo vision, comprising:
    a binocular camera that continuously captures left and right grayscale images of the scene ahead of the vehicle along its direction of travel; and
    a computing device comprising a memory, a processor, a communication interface, and a bus, the memory, the communication interface, and the processor all being connected to the bus, the memory storing computer-executable instructions, and the computing device being able to obtain, via the communication interface, the left and right grayscale images captured by the binocular camera, wherein, when the processor executes the computer-executable instructions, the following method is performed:
    calculating a disparity map based on the left and right grayscale images;
    converting the disparity map into a V-disparity map;
    binarizing the V-disparity map;
    fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    smoothing and filtering the lines over multiple image frames;
    obtaining the drivable area in the original grayscale image from the extracted lines;
    calculating, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assuming the ground to be a planar model, and fitting that plane using RANSAC to obtain a ground model;
    converting the entire scene in the original grayscale image from camera coordinates to world coordinates while generating a plan view, and deriving an occupancy map from the plan view;
    segmenting the position of each obstacle from the occupancy map using a connected-component labeling algorithm, mapping it back into the original image to mark it, and calculating the distance from each obstacle to the host vehicle using the disparity map; and
    raising an alarm when the distance to the vehicle ahead is less than a certain threshold, or passing the result to a decision module to participate in decision-making.
  10. The vehicle anti-collision pre-warning system according to claim 9, wherein deriving the occupancy map from the plan view comprises:
    first extracting, according to the world coordinates of each point, the points that are higher than a first threshold height above the ground, and converting these points into the ground coordinate system; marking in the occupancy map the points that are higher than the first threshold height and lower than a second threshold height; and taking the value of each pixel in the occupancy map to be the accumulated sum of the heights of its corresponding points, thereby obtaining the occupancy map.
  11. The vehicle anti-collision pre-warning system according to claim 9, wherein segmenting the position of each obstacle from the occupancy map by the connected-component labeling algorithm comprises:
    converting the occupancy map into a color-labeled image according to the magnitude of each pixel value, such that larger pixel values are shifted toward red and smaller pixel values toward blue; and segmenting out the different objects by the connected-component labeling algorithm, thereby obtaining the position of each obstacle.
  12. The vehicle anti-collision pre-warning system according to claim 9, wherein binarizing the V-disparity map comprises:
    finding the maximum pixel value in each row, setting the gray value of only the pixel holding the maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
  13. The vehicle anti-collision pre-warning system according to claim 9, wherein using the RANSAC method to fit one piecewise line segment comprises:
    repeatedly performing the following sequence of operations until a predetermined exit criterion is reached:
    selecting a random subset of the maximum-value points in the V-disparity map and performing a line fit to obtain a line model;
    testing all the other data with the obtained line model, a point being regarded as an inlier if it fits the estimated line model; if more than a predetermined number of points are classified as inliers, considering the estimated model reasonable, re-estimating the model using all the inliers, and estimating the error rate of the inliers with respect to the model;
    if the error rate of the model is lower than that of the current best model, replacing the current best model with this model; and
    taking the best model finally obtained as the piecewise line segment.
  14. The vehicle anti-collision pre-warning system according to claim 13, wherein using the RANSAC method to fit multiple piecewise line segments comprises:
    extracting a first line; after the extraction is complete, removing the points belonging to the first line from the V-disparity map, and then extracting a second line from the remaining points; and repeating this process until the number of remaining points is less than a predetermined threshold.
  15. The vehicle anti-collision pre-warning system according to claim 9, wherein smoothing and filtering the lines over multiple image frames comprises:
    setting a time window; assuming the line model is expressed as ax+by+c=0, obtaining the line model parameters for each image frame and accumulating each parameter over the frames; and, whenever a new frame arrives, subtracting the line model parameters of the earliest frame from the accumulated results, adding the line model parameters of the current frame, and averaging to obtain the line model parameters for this frame.
  16. The vehicle anti-collision pre-warning system according to claim 9, wherein obtaining the drivable area in the original grayscale image from the extracted lines comprises:
    for each row of the V-disparity map, selecting the point on the extracted line whose disparity value is d; and, in the corresponding row of the disparity map, comparing the disparity value of each pixel with d, and, when the difference is less than a certain threshold, determining the corresponding position in the original image to be a safe drivable area.
  17. A vehicle anti-collision pre-warning system, comprising:
    a binocular camera configured to capture left and right grayscale images of the scene ahead of the vehicle along its direction of travel;
    a disparity map calculation component that calculates a disparity map from the left and right grayscale images;
    a V-disparity map conversion module that converts the disparity map into a V-disparity map;
    a binarization module that binarizes the V-disparity map;
    a RANSAC line fitting module that fits piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    a multi-frame image filtering module that smooths and filters the lines over multiple image frames;
    an original-image drivable area determination module that obtains the drivable area in the original grayscale image from the extracted lines;
    a ground model fitting module that calculates, from the original image and the disparity map, the three-dimensional coordinates in the real-world coordinate system of the points belonging to the ground, assumes the ground to be a planar model, and fits that plane using RANSAC to obtain a ground model;
    an occupancy map calculation module that converts the entire scene in the original grayscale image from camera coordinates to world coordinates while generating a plan view, and derives an occupancy map from the plan view;
    an obstacle segmentation and distance calculation module that segments the position of each obstacle from the occupancy map using a connected-component labeling algorithm, maps it back into the original image to mark it, and calculates the distance from each obstacle to the host vehicle using the disparity map; and
    an alarm module that raises an alarm when the distance to the vehicle ahead is less than a certain threshold, or passes the result to a decision module to participate in decision-making.
PCT/CN2016/100521 2016-09-28 2016-09-28 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision WO2018058356A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/100521 WO2018058356A1 (en) 2016-09-28 2016-09-28 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
CN201680001426.6A CN108243623B (en) 2016-09-28 2016-09-28 Automobile anti-collision early warning method and system based on binocular stereo vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/100521 WO2018058356A1 (en) 2016-09-28 2016-09-28 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision

Publications (1)

Publication Number Publication Date
WO2018058356A1 true WO2018058356A1 (en) 2018-04-05

Family

ID=61762982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/100521 WO2018058356A1 (en) 2016-09-28 2016-09-28 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision

Country Status (2)

Country Link
CN (1) CN108243623B (en)
WO (1) WO2018058356A1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444916A (en) * 2018-10-17 2019-03-08 上海蔚来汽车有限公司 The unmanned travelable area determining device of one kind and method
CN109461308A (en) * 2018-11-22 2019-03-12 东软睿驰汽车技术(沈阳)有限公司 A kind of information filtering method and image processing server
CN109993060A (en) * 2019-03-01 2019-07-09 长安大学 The vehicle omnidirectional obstacle detection method of depth camera
CN110110682A (en) * 2019-05-14 2019-08-09 西安电子科技大学 The semantic stereo reconstruction method of remote sensing images
CN110472508A (en) * 2019-07-15 2019-11-19 天津大学 Lane line distance measuring method based on deep learning and binocular vision
CN110569704A (en) * 2019-05-11 2019-12-13 北京工业大学 Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110969064A (en) * 2018-09-30 2020-04-07 北京四维图新科技股份有限公司 Image detection method and device based on monocular vision and storage equipment
CN111241979A (en) * 2020-01-07 2020-06-05 浙江科技学院 Real-time obstacle detection method based on image feature calibration
CN111275698A (en) * 2020-02-11 2020-06-12 长安大学 Visibility detection method for fog road based on unimodal deviation maximum entropy threshold segmentation
CN111382591A (en) * 2018-12-27 2020-07-07 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN111580131A (en) * 2020-04-08 2020-08-25 西安邮电大学 Method for identifying vehicles on expressway by three-dimensional laser radar intelligent vehicle
CN111612760A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for detecting obstacles
CN111626095A (en) * 2020-04-06 2020-09-04 连云港市港圣开关制造有限公司 Power distribution inspection system based on Ethernet
CN111890358A (en) * 2020-07-01 2020-11-06 浙江大华技术股份有限公司 Binocular obstacle avoidance method and device, storage medium and electronic device
CN111985436A (en) * 2020-08-29 2020-11-24 浙江工业大学 Workshop ground mark line identification fitting method based on LSD
CN112097732A (en) * 2020-08-04 2020-12-18 北京中科慧眼科技有限公司 Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112200771A (en) * 2020-09-14 2021-01-08 浙江大华技术股份有限公司 Height measuring method, device, equipment and medium
CN112288791A (en) * 2020-11-06 2021-01-29 浙江中控技术股份有限公司 Disparity map obtaining method, and fish-eye camera-based three-dimensional model obtaining method and device
CN112669362A (en) * 2021-01-12 2021-04-16 四川深瑞视科技有限公司 Depth information acquisition method, device and system based on speckles
CN113112553A (en) * 2021-05-26 2021-07-13 北京三快在线科技有限公司 Parameter calibration method and device for binocular camera, electronic equipment and storage medium
US20210264175A1 (en) * 2020-02-26 2021-08-26 Nvidia Corporation Object Detection Using Image Alignment for Autonomous Machine Applications
CN113341391A (en) * 2021-06-01 2021-09-03 电子科技大学 Radar target multi-frame joint detection method in unknown environment based on deep learning
CN113370977A (en) * 2021-05-06 2021-09-10 上海大学 Intelligent vehicle forward collision early warning method and system based on vision
CN113900443A (en) * 2021-09-28 2022-01-07 合肥工业大学 Unmanned aerial vehicle obstacle avoidance early warning method and device based on binocular vision
CN113946154A (en) * 2021-12-20 2022-01-18 广东科凯达智能机器人有限公司 Visual identification method and system for inspection robot
CN114119700A (en) * 2021-11-26 2022-03-01 山东科技大学 Obstacle ranging method based on U-V disparity map
CN114494427A (en) * 2021-12-17 2022-05-13 山东鲁软数字科技有限公司 Method, system and terminal for detecting illegal behavior of person standing under suspension arm
CN115116038A (en) * 2022-08-30 2022-09-27 北京中科慧眼科技有限公司 Obstacle identification method and system based on binocular vision
CN115205809A (en) * 2022-09-15 2022-10-18 北京中科慧眼科技有限公司 Method and system for detecting roughness of road surface
CN115995163A (en) * 2023-03-23 2023-04-21 江西通慧科技集团股份有限公司 Vehicle collision early warning method and system
CN117079219A (en) * 2023-10-08 2023-11-17 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service
CN117274939A (en) * 2023-10-08 2023-12-22 北京路凯智行科技有限公司 Safety area detection method and safety area detection device
CN117930224A (en) * 2024-03-19 2024-04-26 山东科技大学 Vehicle ranging method based on monocular vision depth estimation

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909569B (en) * 2018-09-17 2022-09-23 深圳市优必选科技有限公司 Road condition information identification method and terminal equipment
CN109601109A (en) * 2018-12-07 2019-04-12 江西洪都航空工业集团有限责任公司 A kind of unmanned grass-cutting vehicle collision-proof method based on binocular vision detection
CN111469759A (en) * 2019-01-24 2020-07-31 海信集团有限公司 Scratch and rub early warning method for vehicle, vehicle and storage medium
CN110298330B (en) * 2019-07-05 2023-07-18 东北大学 Monocular detection and positioning method for power transmission line inspection robot
CN110285793B (en) * 2019-07-08 2020-05-15 中原工学院 Intelligent vehicle track measuring method based on binocular stereo vision system
CN112767818B (en) * 2019-11-01 2022-09-27 北京初速度科技有限公司 Map construction method and device
CN112567383A (en) * 2020-03-06 2021-03-26 深圳市大疆创新科技有限公司 Object detection method, movable platform, device and storage medium
CN112233136B (en) * 2020-11-03 2021-10-22 上海西井信息科技有限公司 Method, system, equipment and storage medium for alignment of container trucks based on binocular recognition
CN112418103B (en) * 2020-11-24 2022-10-11 中国人民解放军火箭军工程大学 Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN113658240B (en) * 2021-07-15 2024-04-19 北京中科慧眼科技有限公司 Main obstacle detection method and device and automatic driving system
CN113706622B (en) * 2021-10-29 2022-04-19 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN115601435B (en) * 2022-12-14 2023-03-14 天津所托瑞安汽车科技有限公司 Vehicle attitude detection method, device, vehicle and storage medium
CN117672007B (en) * 2024-02-03 2024-04-26 福建省高速公路科技创新研究院有限公司 Road construction area safety precaution system based on thunder fuses

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106651A (en) * 2012-07-16 2013-05-15 清华大学深圳研究生院 Method for obtaining parallax error plane based on three-dimensional hough
CN103400392A (en) * 2013-08-19 2013-11-20 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on inspection robot in transformer substation
CN103413313A (en) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on power robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102520721B (en) * 2011-12-08 2015-05-27 北京控制工程研究所 Autonomous obstacle-avoiding planning method of tour detector based on binocular stereo vision
CN103679127B (en) * 2012-09-24 2017-08-04 株式会社理光 The method and apparatus for detecting the wheeled region of pavement of road
CN105043350A (en) * 2015-06-25 2015-11-11 闽江学院 Binocular vision measuring method
CN105225482B (en) * 2015-09-02 2017-08-11 上海大学 Vehicle detecting system and method based on binocular stereo vision
CN105550665B (en) * 2016-01-15 2019-01-25 北京理工大学 A kind of pilotless automobile based on binocular vision can lead to method for detecting area


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG YI: "Stereo based research on obstacles detection in unstructured Environment", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE- CHINA MASTER'S THESES, no. 7, Chapter 3, 15 July 2015 (2015-07-15), ISSN: 1674-0246 *

CN115116038A (en) * 2022-08-30 2022-09-27 北京中科慧眼科技有限公司 Obstacle identification method and system based on binocular vision
CN115205809A (en) * 2022-09-15 2022-10-18 北京中科慧眼科技有限公司 Method and system for detecting roughness of road surface
CN115995163B (en) * 2023-03-23 2023-06-27 江西通慧科技集团股份有限公司 Vehicle collision early warning method and system
CN115995163A (en) * 2023-03-23 2023-04-21 江西通慧科技集团股份有限公司 Vehicle collision early warning method and system
CN117079219B (en) * 2023-10-08 2024-01-09 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service
CN117274939A (en) * 2023-10-08 2023-12-22 北京路凯智行科技有限公司 Safety area detection method and safety area detection device
CN117079219A (en) * 2023-10-08 2023-11-17 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service
CN117274939B (en) * 2023-10-08 2024-05-28 北京路凯智行科技有限公司 Safety area detection method and safety area detection device
CN117930224A (en) * 2024-03-19 2024-04-26 山东科技大学 Vehicle ranging method based on monocular vision depth estimation

Also Published As

Publication number Publication date
CN108243623A (en) 2018-07-03
CN108243623B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2018058356A1 (en) Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
US11093763B2 (en) Onboard environment recognition device
US10937181B2 (en) Information processing apparatus, object recognition apparatus, device control system, movable body, image processing method, and computer-readable recording medium
EP1796043B1 (en) Object detection
US10580155B2 (en) Image processing apparatus, imaging device, device control system, frequency distribution image generation method, and recording medium
WO2018058355A1 (en) Method and system for detecting vehicle accessible region in real time
JP2013109760A (en) Target detection method and target detection system
JP6614247B2 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method and program
EP2924655B1 (en) Disparity value deriving device, equipment control system, movable apparatus, robot, disparity value deriving method, and computer-readable storage medium
US10672141B2 (en) Device, method, system and computer-readable medium for determining collision target object rejection
CN105512641B (en) A method of dynamic pedestrian and vehicle under calibration sleet state in video
JPWO2017145634A1 (en) Image processing apparatus, imaging apparatus, mobile device control system, image processing method, and program
JPWO2017094300A1 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method and program
Yiruo et al. Complex ground plane detection based on v-disparity map in off-road environment
Lion et al. Smart speed bump detection and estimation with kinect
JP2017151535A (en) Image processing device, object recognition device, equipment control system, image processing method, and program
JP6572696B2 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method and program
JP2007249257A (en) Apparatus and method for detecting movable element
JPWO2017099199A1 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method and program
Ma et al. A real time object detection approach applied to reliable pedestrian detection
Huang et al. Rear obstacle warning for reverse driving using stereo vision techniques
EP3540643A1 (en) Image processing apparatus and image processing method
JP2008077624A (en) Object detecting device and object detecting method
EP2919191B1 (en) Disparity value deriving device, equipment control system, movable apparatus, robot, and disparity value producing method
JP7493433B2 (en) Movement Calculation Device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16917111

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 16917111

Country of ref document: EP

Kind code of ref document: A1