CN111323767B - System and method for detecting obstacle of unmanned vehicle at night


Info

Publication number
CN111323767B
Authority
CN
China
Prior art keywords
obstacle
camera
light
image
target
Prior art date
Legal status
Active
Application number
CN202010169003.3A
Other languages
Chinese (zh)
Other versions
CN111323767A (en)
Inventor
邹斌
王亚萌
李文博
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202010169003.3A
Publication of CN111323767A
Application granted
Publication of CN111323767B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00: Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12: Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of automatic driving, and in particular to a night-time obstacle detection system and method for unmanned vehicles, comprising a first line laser projector, a second line laser projector and a binocular camera. In a night environment, one camera of the binocular pair acquires infrared images and performs target detection, while the other camera forms a line-structured-light vision measurement system with the two line laser projectors and acquires light-bar images; the positions and distances of high and low target obstacles are then measured by combining the light-bar images with the target detection data, and finally the obstacle information acquired by the individual sensors is fused. The invention realizes night obstacle detection while meeting real-time requirements, reduces the overall cost of the target detection system, and improves the practicality of special-purpose unmanned vehicles.

Description

System and method for detecting obstacle of unmanned vehicle at night
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a multi-sensor-fusion environment perception system and method proposed for environment perception in automatic driving.
Background
To achieve functions such as autonomous obstacle avoidance and navigation, unmanned vehicles must acquire information about their surroundings. A single sensor adapts poorly to the environment, struggles with interference from complex scenes and noise, and cannot by itself provide the rich information required. Current unmanned vehicles therefore mostly perceive their surroundings through multi-sensor data fusion, so as to obtain comprehensive information about target objects and ensure the driving reliability of the vehicle.
The current mainstream multi-sensor fusion scheme combines a lidar, a millimeter-wave radar and a camera. However, such schemes are expensive, process large amounts of data and suffer from high noise. For low-cost, special-purpose unmanned vehicles in particular, fusing these sensors is relatively costly and computationally complex, and at night many of these sensors cannot obtain effective information.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a night-time unmanned vehicle obstacle detection system and method whose cost is effectively controlled, whose detection procedure is simple and easy to operate, which adapts well to the environment, and which is easy to popularize.
In order to solve the technical problems, the invention adopts the following technical scheme:
a night time unmanned vehicle obstacle detection system, comprising: a first line laser projector and a second line laser projector, a CCD binocular camera;
the CCD binocular camera comprises a main camera and an auxiliary camera; the main camera is fitted with a first optical filter that blocks ambient light at all wavelengths except 650 nm, so that the main camera acquires only images containing the light bars produced by 650 nm light; the auxiliary camera is fitted with a second optical filter that blocks ambient light at all wavelengths except 850 nm, so that the auxiliary camera acquires only infrared images under the infrared fill light, which are used for obstacle recognition;
the first line laser projector and the main camera form a first line-structured-light vision measurement system, and the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of higher raised obstacles in a night environment; the second line laser projector and the main camera form a second line-structured-light vision measurement system, and the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of lower obstacles in a night environment.
Further, an infrared fill light with a wavelength of 850 nm is correspondingly arranged above the auxiliary camera.
Further, the first line laser projector is located above the second line laser projector, which is located above the CCD binocular camera.
Further, the higher raised obstacle is taller than the lower obstacle; the higher raised obstacle comprises pedestrians and/or vehicles, and the lower obstacle at least comprises a deceleration strip.
Further, the perspective transformation between the image coordinate systems of the main camera and the auxiliary camera of the CCD binocular camera is as follows:
[u_1, v_1, 1]^T = R·[u_2, v_2, 1]^T + T
where u_1 and v_1 are the abscissa and ordinate of the main-camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary-camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
A night-time unmanned vehicle obstacle detection method, characterized in that the above night-time unmanned vehicle obstacle detection system is used: in a night environment, one camera of the binocular camera acquires infrared images and performs target detection, while the other camera forms a line-structured-light vision measurement system with the two line laser projectors and acquires light-bar images; the positions and distances of high and low target obstacles are measured by combining the light-bar images with the target detection results, and finally the class, position and distance information of the obstacles is fused.
Further, the method comprises the following steps:
step S1: in the night environment, the auxiliary camera acquires an infrared image of the obstacle under the auxiliary illumination of the infrared fill light, and a trained target detection algorithm performs target detection on the infrared image to obtain the position of the target object in the image; the target detection algorithm acquires the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box surrounding the target object or target obstacle, where (b_x, b_y) are the center coordinates of the rectangular bounding box and b_w and b_h are its width and height in the image;
step S2: the center coordinates (b_x, b_y) of the rectangular bounding box surrounding the target object obtained in step S1 are perspective-transformed to obtain the center coordinates (b′_x, b′_y) of the rectangular bounding box in the main-camera pixel coordinate system; the region occupied by the perspective-transformed rectangular bounding box in the synchronized image under the main-camera pixel coordinate system is then calculated;
step S3: the region of the rectangular bounding box in the synchronized image under the main-camera pixel coordinate system, calculated in step S2, is extracted from the synchronized image acquired by the main camera as the region of interest, wherein the region of interest contains the light-bar image formed by the light plane emitted by a laser projector on the surface of the target obstacle; after the region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the gray gravity-center method is then used to extract the coordinates of the light-bar center point within the region of interest;
step S4: according to the relative position of the light plane with respect to the main-camera coordinate system and the camera's perspective projection principle, the distance and position of the target obstacle at the light-bar center point relative to the origin of the camera coordinate system are calculated.
Further, in step S4, after the target obstacle is identified and the region of interest containing the light bar is extracted, the class of the target object is judged, i.e. whether it is a higher raised obstacle or a lower obstacle; if it is judged to be a lower obstacle, the calculation formula corresponding to the lower light plane is selected to calculate the distance and position at the light-bar center, and if it is judged not to be a lower obstacle, the calculation formula corresponding to the upper light plane is selected; that is, the data of the upper and lower laser projectors are fused.
In summary, the system and method apply a perspective transformation to the obstacle image coordinates obtained by the target detection algorithm to obtain, in the synchronized light-bar image, a region of interest containing only the light bar on the obstacle surface; light-bar center extraction is then performed within this region of interest rather than over the whole image, and the optical filters suppress most ambient-light interference, so the computational load is greatly reduced and the combined measurement model is simpler, giving better real-time performance. The system accomplishes special-purpose environment perception with inexpensive cameras and line laser projectors plus an infrared fill light; compared with other multi-sensor schemes serving the same purpose, its cost is greatly reduced.
Therefore, compared with the prior art, the invention realizes night obstacle detection while meeting real-time requirements, reduces the overall cost of the target detection system, is simple and easy to operate, adapts well to the environment, and is easy to popularize, improving the practicality of special-purpose unmanned vehicles.
Drawings
Fig. 1 is a schematic structural diagram of a night unmanned vehicle obstacle detection system according to the present invention.
Fig. 2 is a schematic diagram of a night unmanned vehicle obstacle detection method according to the present invention.
Fig. 3 is a schematic diagram of a rectangular bounding box of the object detection algorithm.
Fig. 4 is a flow chart of the light plane selection logic of the present invention.
Detailed Description
For a further understanding of the invention, its principles, features and advantages, reference should be made to the following detailed description of the invention taken in conjunction with the accompanying drawings and specific examples.
Example 1:
referring to fig. 1, a night-time unmanned vehicle obstacle detection system according to the present invention comprises: a first line laser projector 3 located at the top, a second line laser projector 5 below it, and a CCD binocular camera 1 below the two laser projectors. An infrared fill light is additionally provided; apart from serving as the light source that provides illumination, it is placed independently of the other sensors.
The CCD binocular camera 1 comprises a main camera 4 and an auxiliary camera 2. The main camera 4 is fitted with a first optical filter (a 650 nm filter) that blocks ambient light at all wavelengths except 650 nm, so that the main camera 4 acquires only images containing the light bars produced by 650 nm light. The auxiliary camera 2 is fitted with a second optical filter (an 850 nm filter) that blocks ambient light at all wavelengths except 850 nm, so that the auxiliary camera 2 acquires only infrared images under the infrared fill light; the acquired infrared images are used for obstacle recognition. The infrared fill light is placed above the auxiliary camera, its exact position chosen to ensure good illumination for the auxiliary camera.
The first line laser projector 3 and the main camera 4 of the CCD binocular camera 1 form a first line-structured-light vision measurement system 8; the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of higher raised obstacles (such as pedestrians and vehicles) in a night environment. The second line laser projector 5 and the main camera 4 of the CCD binocular camera 1 form a second line-structured-light vision measurement system 9; the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of lower obstacles (such as deceleration strips) in a night environment.
the image coordinate systems of the two cameras (the main camera 4 and the auxiliary camera 2) of the CCD binocular camera 1 are related by a perspective transformation, simplified here as the perspective transformation module 11. Taking the main-camera pixel coordinate system as the fixed coordinate system, transforming the auxiliary-camera pixel coordinate system into it requires a rotation and a translation:
[u_1, v_1, 1]^T = R·[u_2, v_2, 1]^T + T
where u_1 and v_1 are the abscissa and ordinate of the main-camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary-camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
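As an illustration of the transform just described, here is a minimal sketch in Python; the rotation R and translation T below are placeholder values standing in for a real stereo calibration, which the patent does not specify:

```python
import numpy as np

def aux_to_main_pixel(u2, v2, R, T):
    """Map a point from the auxiliary-camera pixel coordinate system into the
    main-camera pixel coordinate system using the rotation R and translation T
    of the perspective transformation module."""
    p = R @ np.array([u2, v2, 1.0]) + T
    return p[0] / p[2], p[1] / p[2]  # normalize the homogeneous coordinate

# Placeholder calibration: identity rotation plus a pure pixel offset
R = np.eye(3)
T = np.array([12.0, -5.0, 0.0])
u1, v1 = aux_to_main_pixel(640.0, 360.0, R, T)  # a point seen by the auxiliary camera
```

With a calibrated R and T, the same call maps every detection center from the auxiliary image into the main image.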
Embodiment 2: referring to fig. 2, a night-time unmanned vehicle obstacle detection method based on the night-time unmanned vehicle obstacle detection system of embodiment 1 comprises the following steps:
step S1: in the night environment, the auxiliary camera 2 equipped with the second optical filter acquires an infrared image of an obstacle under the auxiliary illumination of the infrared fill light, and a trained target detection algorithm (such as YOLO-V3) performs target detection on the infrared image to acquire the position information 7 of the target object in the image; the target detection algorithm can acquire the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box surrounding the target object or target obstacle, where (b_x, b_y) are the center coordinates of the rectangular bounding box, b_w and b_h are its width and height in the image, and u and v are the abscissa and ordinate of the auxiliary-camera pixel coordinate system, as shown in fig. 3.
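The patent does not fix the detector's output format. Assuming a YOLO-style detector that returns a box as normalized center and size values, converting it into the pixel-coordinate box (b_x, b_y, b_w, b_h) used in the following steps might look like this; the image size and box values are made up for illustration:

```python
def to_pixel_box(cx_n, cy_n, w_n, h_n, img_w, img_h):
    """Convert a normalized YOLO-style box (center and size in [0, 1]) into the
    pixel-coordinate box (b_x, b_y, b_w, b_h) of the auxiliary-camera image."""
    return cx_n * img_w, cy_n * img_h, w_n * img_w, h_n * img_h

# A hypothetical detection in a 1280x720 infrared frame
bx, by, bw, bh = to_pixel_box(0.5, 0.5, 0.1, 0.2, 1280, 720)
```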
Step S2: the center coordinates (b) of the rectangular bounding box surrounding the target object obtained in step S1 x ,b y ) The center coordinates (b 'of the rectangular bounding box in the main camera pixel coordinate system are obtained after the perspective transformation module 10' x ,b′ y ) The size of the region in the synchronized image of the perspective transformed rectangular bounding box in the primary camera pixel coordinate system is then calculated. The calculation process is as follows:
t_x = b′_x − b_w/2
t_y = b′_y − b_h/2
d_x = b′_x + b_w/2
d_y = b′_y + b_h/2
where (t_x, t_y) are the upper-left corner coordinates of the perspective-transformed rectangular bounding box in the synchronized image under the main-camera pixel coordinate system, and (d_x, d_y) are the lower-right corner coordinates.
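The corner formulas can be sketched directly, using the standard image convention that the upper-left corner is the box center minus half the box extents; the numbers below are illustrative, not calibration data:

```python
def roi_corners(bx_t, by_t, bw, bh):
    """Upper-left (t_x, t_y) and lower-right (d_x, d_y) corners of the
    perspective-transformed bounding box with center (bx_t, by_t) and size
    bw x bh, in the main-camera pixel coordinate system."""
    tx, ty = bx_t - bw / 2, by_t - bh / 2
    dx, dy = bx_t + bw / 2, by_t + bh / 2
    return (tx, ty), (dx, dy)

# Illustrative transformed center and box size
(tx, ty), (dx, dy) = roi_corners(652.0, 355.0, 128.0, 144.0)
# The region of interest is then image[int(ty):int(dy), int(tx):int(dx)]
```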
Step S3: and (2) extracting a region of the rectangular bounding box in the synchronous image under the pixel coordinate system of the main camera, which is calculated in the step (S2), from the synchronous image acquired by the main camera as a region of interest, wherein the region of interest comprises a light bar image formed by a light plane emitted by a laser projector and the surface of the target obstacle. The light bars formed by the light planes projected by the two laser projectors are on the same image, and if the target obstacle is detected by the target detection algorithm, the region of interest can be acquired. After binarization processing is carried out on the image of the region of interest, a proper gray threshold is set to remove illumination noise, and then a gray gravity center method is used for extracting the center point coordinates of the light bar in the region of interest, and the process is as follows:
x_0 = Σ x_i·f_ij / Σ f_ij,  y_0 = Σ y_j·f_ij / Σ f_ij  (sums taken over pixels with f_ij > Q)
where (x_0, y_0) are the extracted coordinates of the light-bar center point, x_i and y_j are the coordinates of the i-th row and j-th column in the image region, f_ij is the pixel value at the position of the i-th row and j-th column, and Q is the set gray threshold.
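A minimal sketch of the gray gravity-center extraction with NumPy, under the assumption that the region of interest is available as a grayscale array; the 3×3 patch and the threshold are toy values:

```python
import numpy as np

def gray_centroid(roi, Q):
    """Gray gravity-center of the light bar in a grayscale region of interest:
    pixels at or below the gray threshold Q count as illumination noise and
    are excluded from the intensity-weighted average."""
    rows, cols = np.indices(roi.shape)
    w = np.where(roi > Q, roi.astype(float), 0.0)
    total = w.sum()
    if total == 0:
        return None  # no light-bar pixels above the threshold
    x0 = (cols * w).sum() / total  # horizontal (column) coordinate
    y0 = (rows * w).sum() / total  # vertical (row) coordinate
    return x0, y0

# Toy region: a bright vertical light bar in the middle column
roi = np.array([[0, 200, 0],
                [0, 220, 0],
                [0, 240, 0]], dtype=np.uint8)
x0, y0 = gray_centroid(roi, 50)
```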
Step S4: according to the relative position relation between the light plane and the main camera coordinate system and the perspective projection principle of the camera, the distance and position relation information 11 of the target barrier at the center point of the light bar relative to the origin of the camera coordinate system is calculated, and the calculation process is as follows:
z·u′ = c_x·x + u_0·z
z·v′ = c_y·y + v_0·z
a·x+b·y+c·z+d=0
h=h′-y
where x, y and z are the coordinates of the light-bar center point in the main-camera coordinate system, z also being the distance from the object at the light-bar center point to the camera's optical center; α is the included angle of the object at the light-bar center point relative to the origin of the camera coordinate system; h is the height of the object at the light-bar center point relative to the vehicle-bottom plane in which the camera lies, and h′ is the distance from the origin of the camera coordinate system to the vehicle-bottom plane; (u′, v′) are the pixel coordinates of the light-bar center point, (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system, c_x and c_y are the camera's focal lengths expressed in pixels, and a, b, c and d are respectively the coefficients of x, y and z and the constant term of the line-structured-light plane equation.
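Substituting x = (u′ − u_0)·z/c_x and y = (v′ − v_0)·z/c_y from the projection equations into the light-plane equation gives z·(a·(u′ − u_0)/c_x + b·(v′ − v_0)/c_y + c) = −d, so z follows in closed form and x and y follow from z. A sketch with placeholder intrinsics and plane coefficients, not values from the patent:

```python
def light_bar_point(u_p, v_p, c_x, c_y, u_0, v_0, plane):
    """Recover the 3-D point (x, y, z) of the light-bar center in the
    main-camera coordinate system from its pixel coordinates (u', v'),
    the camera intrinsics and the plane equation a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    xn = (u_p - u_0) / c_x  # x/z, from z*u' = c_x*x + u_0*z
    yn = (v_p - v_0) / c_y  # y/z, from z*v' = c_y*y + v_0*z
    z = -d / (a * xn + b * yn + c)  # substitute into the plane equation
    return xn * z, yn * z, z

# Placeholder intrinsics and a placeholder light plane tilted toward the road
x, y, z = light_bar_point(700.0, 400.0, 1000.0, 1000.0, 640.0, 360.0,
                          (0.0, 0.5, 1.0, -2.0))
```

The recovered point satisfies the plane equation by construction, and z is directly the distance of the object at the light-bar center from the optical center.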
Since there are two line laser projectors, two light planes are projected, and there are accordingly two light-plane equations a·x + b·y + c·z + d = 0. After the target obstacle is identified and the region of interest containing the light bar is extracted, the question of which light-plane equation to use arises when calculating the distance and position of the light-bar center point. Fig. 1 shows the physical arrangement of the laser projectors: the light plane projected by the upper first laser projector 3 strikes obstacle surfaces at a greater height and cannot reach a low-lying deceleration strip, while the light plane projected by the lower second laser projector 5 is inclined downward onto the surfaces of objects ahead; the projected light of the two projectors thus ranges from high to low.
When the light plane projected by the lower second laser projector 5 also strikes the surface of a taller obstacle, the calculation formula must be selected according to the logical relationship shown schematically in fig. 4: the object class 6 is judged, i.e. whether the object is a higher raised obstacle or a lower obstacle (such as a deceleration strip). If the object class is judged to be a lower obstacle, the calculation formula corresponding to the lower light plane is selected to calculate the distance and position at the light-bar center; if it is judged not to be a lower obstacle, the calculation formula corresponding to the upper light plane is selected. That is, the data of the upper and lower laser projectors are fused.
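The selection logic of fig. 4 reduces to a single branch on the detected class. A sketch, in which the class label "speed_bump" for deceleration strips and the plane coefficients are assumptions, not values from the patent:

```python
# Assumed class label for deceleration strips; the patent does not name classes
LOW_OBSTACLE_CLASSES = {"speed_bump"}

def select_light_plane(obj_class, upper_plane, lower_plane):
    """Fuse the two projectors' data by choosing which light-plane equation to
    use for a detected obstacle: lower obstacles (e.g. deceleration strips)
    use the lower projector's plane, all other classes the upper one."""
    return lower_plane if obj_class in LOW_OBSTACLE_CLASSES else upper_plane

# Placeholder plane coefficients (a, b, c, d)
upper = (0.0, 0.5, 1.0, -2.0)
lower = (0.0, -0.4, 1.0, -1.5)
plane_for_bump = select_light_plane("speed_bump", upper, lower)    # lower plane
plane_for_person = select_light_plane("pedestrian", upper, lower)  # upper plane
```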

Claims (8)

1. A night time unmanned vehicle obstacle detection system, comprising: a first line laser projector and a second line laser projector, a CCD binocular camera;
the CCD binocular camera comprises a main camera and an auxiliary camera; the main camera is fitted with a first optical filter that blocks ambient light at all wavelengths except 650 nm, so that the main camera acquires only images containing the light bars produced by 650 nm light; the auxiliary camera is fitted with a second optical filter that blocks ambient light at all wavelengths except 850 nm, so that the auxiliary camera acquires only infrared images under the infrared fill light, which are used for obstacle recognition;
the first line laser projector and the main camera form a first line-structured-light vision measurement system, and the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of higher raised obstacles in a night environment; the second line laser projector and the main camera form a second line-structured-light vision measurement system, and the light bar formed where its projected light plane intersects an obstacle surface is used to acquire the position and distance information of lower obstacles in a night environment;
in the night environment, one camera of the binocular CCD camera acquires infrared images and performs target detection, while the other camera forms a line-structured-light vision measurement system with the two line laser projectors and acquires light-bar images; the positions and distances of high and low target obstacles are then measured by combining the light-bar images with the target detection results, and finally the class, position and distance information of the obstacles is fused; the method comprises the following steps:
step S1: in the night environment, the auxiliary camera acquires an infrared image of the obstacle under the auxiliary illumination of the infrared fill light, and a trained target detection algorithm performs target detection on the infrared image to obtain the position of the target object in the image; the target detection algorithm acquires the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box surrounding the target object or target obstacle, where (b_x, b_y) are the center coordinates of the rectangular bounding box and b_w and b_h are its width and height in the image;
step S2: the center coordinates (b_x, b_y) of the rectangular bounding box surrounding the target object obtained in step S1 are perspective-transformed to obtain the center coordinates (b′_x, b′_y) of the rectangular bounding box in the main-camera pixel coordinate system; the region occupied by the perspective-transformed rectangular bounding box in the synchronized image under the main-camera pixel coordinate system is then calculated;
step S3: the region of the rectangular bounding box in the synchronized image under the main-camera pixel coordinate system, calculated in step S2, is extracted from the synchronized image acquired by the main camera as the region of interest, wherein the region of interest contains the light-bar image formed by the light plane emitted by a laser projector on the surface of the target obstacle; after the region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the gray gravity-center method is then used to extract the coordinates of the light-bar center point within the region of interest;
step S4: according to the relative position of the light plane with respect to the main-camera coordinate system and the camera's perspective projection principle, the distance and position of the target obstacle at the light-bar center point relative to the origin of the camera coordinate system are calculated.
2. The night-time unmanned vehicle obstacle detection system according to claim 1, wherein an infrared fill light with a wavelength of 850 nm is correspondingly arranged above the auxiliary camera.
3. A night time unmanned vehicle obstacle detection system according to claim 1, wherein the first line laser projector is located above the second line laser projector, and the second line laser projector is located above the CCD binocular camera.
4. The night-time unmanned vehicle obstacle detection system according to claim 1, wherein the higher raised obstacle is taller than the lower obstacle, the higher raised obstacle comprising a pedestrian and/or a vehicle, and the lower obstacle comprising at least a deceleration strip.
5. The night-time unmanned vehicle obstacle detection system according to claim 1, wherein the perspective transformation between the image coordinate systems of the main camera and the auxiliary camera of the CCD binocular camera is as follows:
[u_1, v_1, 1]^T = R·[u_2, v_2, 1]^T + T
where u_1 and v_1 are the abscissa and ordinate of the main-camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary-camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
6. A night-time unmanned vehicle obstacle detection method, characterized in that the night-time unmanned vehicle obstacle detection system according to any one of claims 1-5 is used: in a night environment, one camera of the binocular CCD camera acquires infrared images and performs target detection, while the other camera forms a line-structured-light vision measurement system with the two line laser projectors and acquires light-bar images; the positions and distances of high and low target obstacles are measured by combining the light-bar images with the target detection results, and finally the class, position and distance information of the obstacles is fused.
7. The night time unmanned vehicle obstacle detecting method according to claim 6, wherein the method comprises the steps of:
step S1: in the night environment, the auxiliary camera acquires an infrared image of the obstacle under the auxiliary illumination of the infrared fill light, and a trained target detection algorithm performs target detection on the infrared image to obtain the position of the target object in the image; the target detection algorithm acquires the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box surrounding the target object or target obstacle, where (b_x, b_y) are the center coordinates of the rectangular bounding box and b_w and b_h are its width and height in the image;
step S2: the center coordinates (b_x, b_y) of the rectangular bounding box surrounding the target object obtained in step S1 are perspective-transformed to obtain the center coordinates (b′_x, b′_y) of the rectangular bounding box in the main-camera pixel coordinate system; the region occupied by the perspective-transformed rectangular bounding box in the synchronized image under the main-camera pixel coordinate system is then calculated;
step S3: the region of the rectangular bounding box in the synchronized image under the main-camera pixel coordinate system, calculated in step S2, is extracted from the synchronized image acquired by the main camera as the region of interest, wherein the region of interest contains the light-bar image formed by the light plane emitted by a laser projector on the surface of the target obstacle; after the region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the gray gravity-center method is then used to extract the coordinates of the light-bar center point within the region of interest;
step S4: according to the relative position of the light plane with respect to the main-camera coordinate system and the camera's perspective projection principle, the distance and position of the target obstacle at the light-bar center point relative to the origin of the camera coordinate system are calculated.
8. The night obstacle detection method for unmanned vehicles according to claim 7, wherein in step S4, after the target obstacle is identified and the region of interest containing the light bar is extracted, it is determined whether the target object is a higher protruding obstacle or a lower obstacle; if it is determined to be a lower obstacle, the calculation formula corresponding to the lower light plane is selected to calculate the distance and position at the light bar center; otherwise, the calculation formula corresponding to the upper light plane is selected, thereby fusing the data of the upper and lower laser projectors.
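Claim 8's fusion rule is a simple dispatch on the obstacle class: each class is ranged against the corresponding projector's plane. A hypothetical sketch (the plane coefficients shown are placeholders, not calibration results):

```python
def select_light_plane(is_low_obstacle, lower_plane, upper_plane):
    """Lower obstacles are ranged with the lower light plane,
    all other targets with the upper one, fusing the data of the
    two laser projectors by obstacle class."""
    return lower_plane if is_low_obstacle else upper_plane

lower = (0.0, -0.5, 1.0, -1.2)   # illustrative plane coefficients
upper = (0.0, 0.4, 1.0, -1.8)
print(select_light_plane(True, lower, upper) is lower)   # True
print(select_light_plane(False, lower, upper) is upper)  # True
```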
CN202010169003.3A 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night Active CN111323767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010169003.3A CN111323767B (en) 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night


Publications (2)

Publication Number Publication Date
CN111323767A CN111323767A (en) 2020-06-23
CN111323767B true CN111323767B (en) 2023-08-08

Family

ID=71169320


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202331A (en) * 2021-04-09 2022-10-18 灵动科技(北京)有限公司 Autonomous mobile device, control method for autonomous mobile device, and freight system
CN113269838B (en) * 2021-05-20 2023-04-07 西安交通大学 Obstacle visual detection method based on FIRA platform
CN114758249B (en) * 2022-06-14 2022-09-02 深圳市优威视讯科技股份有限公司 Target object monitoring method, device, equipment and medium based on field night environment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070036405A (en) * 2005-09-29 2007-04-03 에프엠전자(주) Sensing system in a traveling railway vehicle for sensing a human body or an obstacle on a railway track
WO2007113428A1 (en) * 2006-03-24 2007-10-11 Inrets - Institut National De Recherche Sur Les Transports Et Leur Securite Obstacle detection
WO2014136976A1 (en) * 2013-03-04 2014-09-12 公益財団法人鉄道総合技術研究所 Overhead line position measuring device and method
CN104573646A (en) * 2014-12-29 2015-04-29 长安大学 Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle
CN108784534A (en) * 2018-06-11 2018-11-13 杭州果意科技有限公司 Artificial intelligence robot that keeps a public place clean
CN108830159A (en) * 2018-05-17 2018-11-16 武汉理工大学 A kind of front vehicles monocular vision range-measurement system and method
CN109146929A (en) * 2018-07-05 2019-01-04 中山大学 A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN109358335A (en) * 2018-09-11 2019-02-19 北京理工大学 A kind of range unit of combination solid-state face battle array laser radar and double CCD cameras
CN109410264A (en) * 2018-09-29 2019-03-01 大连理工大学 A kind of front vehicles distance measurement method based on laser point cloud and image co-registration
EP3517997A1 (en) * 2018-01-30 2019-07-31 Wipro Limited Method and system for detecting obstacles by autonomous vehicles in real-time
CN110595392A (en) * 2019-09-26 2019-12-20 桂林电子科技大学 Cross line structured light binocular vision scanning system and method
CN209991983U (en) * 2019-02-28 2020-01-24 深圳市道通智能航空技术有限公司 Obstacle detection equipment and unmanned aerial vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7839490B2 (en) * 2007-06-29 2010-11-23 Laughlin Richard H Single-aperture passive rangefinder and method of determining a range


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Obstacle measurement method for reversing environments based on binocular stereo vision; Liu Yugang; Wang Zhuojun; Wang Fujing; Zhang Zutao; Xu Hong; Journal of Transportation Systems Engineering and Information Technology (No. 04); pp. 79-87 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant