CN111323767A - Night unmanned vehicle obstacle detection system and method - Google Patents

Night unmanned vehicle obstacle detection system and method Download PDF

Info

Publication number
CN111323767A
CN111323767A CN202010169003.3A
Authority
CN
China
Prior art keywords
obstacle
camera
light
night
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010169003.3A
Other languages
Chinese (zh)
Other versions
CN111323767B (en
Inventor
邹斌
王亚萌
李文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202010169003.3A priority Critical patent/CN111323767B/en
Publication of CN111323767A publication Critical patent/CN111323767A/en
Application granted granted Critical
Publication of CN111323767B publication Critical patent/CN111323767B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of automatic driving, and in particular to a night unmanned vehicle obstacle detection system and method comprising a first line laser projector, a second line laser projector and a binocular camera. In a night environment, one camera of the binocular pair acquires an infrared image and performs target detection, while the other camera together with the two line-structured-light projectors forms a line-structured-light vision measuring system and acquires light-bar images; the positions and distances of high and low target obstacles are measured by combining these with the target detection data, and finally the obstacle information acquired by the sensors is fused. The invention realizes night obstacle detection while meeting real-time requirements, reduces the overall cost of the target detection system, and improves the practicability of unmanned vehicles for certain purposes.

Description

Night unmanned vehicle obstacle detection system and method
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a multi-sensor fusion environment perception system and method; it provides a multi-sensor fusion strategy and method for environment perception in automatic driving.
Background
An unmanned vehicle cannot achieve functions such as autonomous obstacle avoidance and navigation without acquiring information about its surroundings. A single sensor suffers from poor environmental adaptability, difficulty coping with interference from complex environments and noise, and inability to acquire the required rich information. Existing unmanned vehicles therefore mostly sense the surrounding environment through multi-sensor data fusion, so as to acquire comprehensive information about target objects and ensure driving reliability.
At present, the mainstream multi-sensor fusion scheme fuses information from lidar, millimeter-wave radar and cameras. However, these sensors are expensive and produce large volumes of data and considerable noise; for low-cost, special-purpose unmanned vehicles in particular, such a fusion scheme is relatively costly and its data processing relatively complex, and in a night environment many of these sensors cannot efficiently acquire effective information.
Disclosure of Invention
The invention aims to solve the above technical problems of the prior art by providing a night unmanned vehicle obstacle detection system and method that effectively control cost, are simple and easy to operate, have good environmental adaptability and are convenient to popularize.
In order to solve the technical problems, the invention adopts the following technical scheme:
the utility model provides an unmanned car obstacle detection system night which characterized in that includes: a first line laser projector, a second line laser projector and a CCD binocular camera;
the CCD binocular camera comprises a main camera and an auxiliary camera. A first optical filter that blocks ambient light outside the 650 nm wavelength is mounted on the main camera, so that the main camera acquires only images containing light bars under 650 nm light; a second optical filter that blocks ambient light outside the 850 nm wavelength is mounted on the auxiliary camera, so that the auxiliary camera acquires only infrared images under the light of the infrared fill light, which are used for obstacle recognition;
the first line laser projector and the main camera form a first line-structured-light vision measuring system; the light bar formed where its emitted light plane intersects an obstacle surface is used to acquire the position and distance information of higher convex obstacles in a night environment. The second line laser projector and the main camera form a second line-structured-light vision measuring system; the light bar formed where its emitted light plane intersects the obstacle surface is used to acquire the position and distance information of lower obstacles in a night environment.
Furthermore, an 850nm infrared light supplement lamp is correspondingly arranged above the auxiliary camera.
Further, the first line laser projector is located above the second line laser projector, and the second line laser projector is located above the CCD binocular camera.
Further, the higher convex obstacle is taller than the lower obstacle; the higher convex obstacle comprises pedestrians and/or vehicles, and the lower obstacle comprises at least a speed bump.
Further, the perspective transformation relationship between the image coordinate systems of the main camera and the auxiliary camera in the CCD binocular camera is as follows:
[u_1, v_1, 1]^T = [R T; 0 1] · [u_2, v_2, 1]^T
where u_1 and v_1 are the abscissa and ordinate of the main camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
A night unmanned vehicle obstacle detection method, characterized in that it adopts the above night unmanned vehicle obstacle detection system: in a night environment, one camera of the binocular pair acquires infrared images and performs target detection; the other camera together with the two line-structured-light projectors forms line-structured-light vision measuring systems and acquires light-bar images; the positions and distances of high and low target obstacles are measured by combining the light-bar images with the target detection results; and finally the type, position and distance information of the obstacles is fused.
Further, the method comprises the following steps:
step S1: in the night environment, auxiliary illumination is provided by the infrared fill light and the auxiliary camera acquires an infrared image of the obstacles; the infrared image is fed to a trained target detection algorithm, which obtains the position information of the target object in the image, namely the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box enclosing the target object or target obstacle, where (b_x, b_y) are the center coordinates of the bounding box and b_w, b_h are its width and height in the image;
step S2: the center coordinates (b_x, b_y) of the rectangular bounding box obtained in step S1 are perspective-transformed to obtain the center coordinates (b′_x, b′_y) of the bounding box in the main camera pixel coordinate system, and the region occupied by the transformed bounding box in the synchronized image under the main camera pixel coordinate system is then calculated;
step S3: the region of the bounding box calculated in step S2 is extracted from the synchronized image acquired by the main camera as a region of interest; this region contains the light-bar image formed by the light plane emitted by a laser projector intersecting the surface of the target obstacle; the region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the coordinates of the light-bar center points are then extracted within the region of interest by the gray gravity-center method;
step S4: the distance and position of the target obstacle at the light-bar center point, relative to the origin of the camera coordinate system, are calculated from the relative pose of the light plane with respect to the main camera coordinate system and the camera perspective projection principle.
Further, in step S4, after the target obstacle is identified and the region of interest containing the light bar is extracted, the type of the target object is judged to be either a higher convex obstacle or a lower obstacle: if it is judged to be a lower obstacle, the calculation formulas corresponding to the lower light plane are used to compute the distance and position at the light-bar center; otherwise, the formulas corresponding to the upper light plane are used. In this way the data of the upper and lower laser projectors are fused.
Thus, in the system and method, the image position coordinates of the obstacle obtained by the target detection algorithm are perspective-transformed to locate, on the synchronized light-bar image, a region of interest containing only the light bar on the obstacle surface; the light-bar center is then extracted within this region rather than over the whole image, and the optical filters suppress most ambient-light interference. The computational load is therefore greatly reduced, and since the measuring model is simple, the method has good real-time performance. The system accomplishes environment perception for certain purposes using only cameras, line laser projectors and inexpensive devices such as an infrared fill light, so its cost is far lower than that of other multi-sensor schemes for the same purposes.
Compared with the prior art, the invention therefore realizes night obstacle detection while meeting real-time requirements, reduces the overall cost of the target detection system, is simple and easy to operate, has good environmental adaptability, is convenient to popularize, and improves the practicability of unmanned vehicles for certain applications.
Drawings
Fig. 1 is a schematic structural diagram of a night unmanned vehicle obstacle detection system according to the present invention.
Fig. 2 is a schematic view of a night unmanned vehicle obstacle detection method according to the present invention.
FIG. 3 is a schematic diagram of a rectangular bounding box principle of the target detection algorithm.
Fig. 4 is a flow chart of the optical plane selection logic of the present invention.
Detailed Description
For a further understanding of the invention, its nature and function, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and specific examples.
Example 1:
Referring to fig. 1, the night unmanned vehicle obstacle detection system of the invention comprises: a first line laser projector 3 located above, a second line laser projector 5 located below it, and a CCD binocular camera 1 below the two laser projectors; an infrared fill light is additionally provided. The infrared fill light is a light source that is independent of the other sensors and only provides illumination.
The CCD binocular camera 1 comprises a main camera 4 and an auxiliary camera 2. A first optical filter (650 nm filter) mounted on the main camera 4 blocks ambient light outside the 650 nm wavelength, so that the main camera 4 acquires only images containing light bars under 650 nm light; a second optical filter (850 nm filter) mounted on the auxiliary camera 2 blocks ambient light outside the 850 nm wavelength, so that the auxiliary camera 2 acquires only infrared images under the light of the infrared fill light, and the acquired infrared images are used for obstacle recognition. The infrared fill light is placed above the auxiliary camera; its exact position is chosen so as to guarantee good illumination for the auxiliary camera.
The first line laser projector 3 and the main camera 4 of the CCD binocular camera 1 form a first line-structured-light vision measuring system 8; the light bars formed where its emitted light plane intersects an obstacle surface are used to acquire the position and distance information of higher convex obstacles (such as pedestrians and vehicles) in a night environment. The second line laser projector 5 and the main camera 4 of the CCD binocular camera 1 form a second line-structured-light vision measuring system 9; the light bars formed where its emitted light plane intersects the obstacle surface are used to acquire the position and distance information of lower obstacles (such as a speed bump) in a night environment.
The image coordinate systems of the two cameras (main camera 4 and auxiliary camera 2) of the CCD binocular camera 1 are related by a perspective transformation, simplified here as perspective transformation module 10. Taking the main camera pixel coordinate system as the fixed coordinate system, transforming the auxiliary camera pixel coordinate system into it requires a rotation and a translation, with the following transformation relationship:
[u_1, v_1, 1]^T = [R T; 0 1] · [u_2, v_2, 1]^T
where u_1 and v_1 are the abscissa and ordinate of the main camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
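The rotation-plus-translation mapping between the two pixel coordinate systems can be sketched in a few lines of NumPy. The values of R, T and the bounding-box center below are hypothetical placeholders; in practice R and T come from calibrating the two cameras, not from this patent.

```python
import numpy as np

# Hypothetical calibration results: a 2x2 in-plane rotation and a 2-vector
# translation mapping auxiliary-camera pixels to main-camera pixels.
theta = np.deg2rad(1.5)                      # small relative roll between sensors
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = np.array([12.0, -4.0])                   # pixel offset between the two images

def aux_to_main(u2, v2):
    """Map a pixel (u2, v2) from the auxiliary camera to the main camera."""
    u1, v1 = R @ np.array([u2, v2]) + T
    return u1, v1

bx, by = 320.0, 240.0                        # bounding-box center from the detector
print(aux_to_main(bx, by))
```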
Embodiment 2: referring to fig. 2, a night unmanned vehicle obstacle detection method based on the night unmanned vehicle obstacle detection system of embodiment 1 comprises the following steps:
step S1: in the night environment, with auxiliary illumination from the infrared fill light, the auxiliary camera 2 equipped with the second optical filter acquires an infrared image of the obstacles; the infrared image is fed to a trained target detection algorithm (such as YOLOv3) for target detection, yielding the position information 7 of the target object in the image. The target detection algorithm obtains the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box enclosing the target object or target obstacle, where (b_x, b_y) are the center coordinates of the bounding box, b_w and b_h are its width and height in the image, and u and v are the abscissa and ordinate of the auxiliary camera pixel coordinate system, as shown in fig. 3.
Step S2: the center coordinates (b_x, b_y) of the rectangular bounding box obtained in step S1 are passed through the perspective transformation module 10 to obtain the center coordinates (b′_x, b′_y) of the bounding box in the main camera pixel coordinate system, and the region occupied by the transformed bounding box in the synchronized image under the main camera pixel coordinate system is then calculated as follows:
t_x = b′_x - b_w/2
t_y = b′_y - b_h/2
d_x = b′_x + b_w/2
d_y = b′_y + b_h/2
where (t_x, t_y) are the coordinates of the upper-left corner of the perspective-transformed bounding box on the synchronized image in the main camera pixel coordinate system, and (d_x, d_y) are those of the lower-right corner.
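As a sketch of the corner computation, under the usual image convention that the origin is at the top-left and v increases downward (so the upper-left corner has the smaller coordinates); the box values below are hypothetical:

```python
def bbox_corners(bx, by, bw, bh):
    """Return the top-left (tx, ty) and bottom-right (dx, dy) corners of a
    rectangular bounding box given its center (bx, by), width bw and height bh."""
    tx, ty = bx - bw / 2, by - bh / 2   # top-left: smaller u and v
    dx, dy = bx + bw / 2, by + bh / 2   # bottom-right: larger u and v
    return (tx, ty), (dx, dy)

# Hypothetical transformed center (b'_x, b'_y) = (300, 220), box 100x80 px
print(bbox_corners(300.0, 220.0, 100.0, 80.0))  # → ((250.0, 180.0), (350.0, 260.0))
```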
Step S3: the region of the bounding box calculated in step S2 in the synchronized image under the main camera pixel coordinate system is extracted from the synchronized image acquired by the main camera as a region of interest, which contains the light-bar image formed by the light plane emitted by a laser projector intersecting the surface of the target obstacle. Provided the light bars formed by the light planes projected by the two laser projectors lie on the same image and the target obstacle is detected by the target detection algorithm, the region of interest can be obtained. The region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the coordinates of the light-bar center point are then extracted within the region of interest by the gray gravity-center method, as follows:
x_0 = (Σ_i Σ_j x_i · f_ij) / (Σ_i Σ_j f_ij)
y_0 = (Σ_i Σ_j y_j · f_ij) / (Σ_i Σ_j f_ij)
f_ij = f_ij if f_ij ≥ Q, otherwise f_ij = 0
where (x_0, y_0) are the extracted coordinates of the light-bar center point, x_i and y_j are the coordinates of the i-th row and j-th column in the image region, f_ij is the pixel value at row i, column j, and Q is the set gray threshold.
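A minimal sketch of the gray gravity-center extraction over a region of interest, assuming an 8-bit grayscale ROI; the toy ROI used below is illustrative only:

```python
import numpy as np

def light_bar_center(roi, Q):
    """Gray gravity-center of a light bar inside a region of interest.

    roi : 2-D array of gray values; Q : gray threshold.
    Pixels below Q are zeroed (illumination noise), then the intensity-weighted
    centroid (x0, y0) is returned in (column, row) order.
    """
    f = np.where(roi >= Q, roi.astype(float), 0.0)
    total = f.sum()
    if total == 0:
        return None                      # no light bar found in this ROI
    rows, cols = np.indices(f.shape)
    x0 = (cols * f).sum() / total        # weighted column coordinate
    y0 = (rows * f).sum() / total        # weighted row coordinate
    return x0, y0

# Toy ROI: a bright horizontal stripe on row 2
roi = np.zeros((5, 5))
roi[2, :] = 200
print(light_bar_center(roi, Q=50))       # → (2.0, 2.0)
```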
Step S4: from the relative pose of the light plane with respect to the main camera coordinate system and the camera perspective projection principle, the distance and position information 11 of the target obstacle at the light-bar center point relative to the camera coordinate system origin is calculated as follows:
z·u = c_x·x + u_0·z
z·v = c_y·y + v_0·z
a·x + b·y + c·z + d = 0
z = -d / (a·(u′ - u_0)/c_x + b·(v′ - v_0)/c_y + c)
x = z·(u′ - u_0)/c_x
y = z·(v′ - v_0)/c_y
α = arctan(x/z)
h=h′-y
where x, y and z are the coordinates of the light-bar center point in the main camera coordinate system; z is the distance from the object at the light-bar center point to the camera optical center; α is the angle of the object at the light-bar center point relative to the camera coordinate system origin; h is the height of the object at the light-bar center point relative to the vehicle-bottom plane on which the camera stands; h′ is the distance from the camera coordinate system origin to the vehicle-bottom plane; (u′, v′) are the pixel coordinates of the light-bar center point; (u_0, v_0) are the coordinates of the image coordinate system origin in the pixel coordinate system; and a, b, c and d are the coefficients of x, y, z and the constant term of the line-structured-light plane equation, respectively.
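Solving the three equations above for x, y and z and then forming α and h can be sketched as follows. The intrinsics, light-plane coefficients and camera height used here are hypothetical calibration values for illustration, not values from the patent:

```python
import math

# Hypothetical calibration: main-camera intrinsics and a light-plane equation
# a*x + b*y + c*z + d = 0 in the camera frame (from structured-light calibration).
cx_f, cy_f = 800.0, 800.0            # focal lengths in pixels (c_x, c_y)
u0, v0 = 320.0, 240.0                # principal point (u_0, v_0)
a, b, c, d = 0.0, 1.0, -0.1, -0.3    # example plane: y = 0.1*z + 0.3
h_prime = 0.6                        # camera origin height above vehicle-bottom plane (m)

def bar_point_to_obstacle(u, v):
    """Intersect the viewing ray of light-bar pixel (u, v) with the light plane,
    returning distance z, lateral angle alpha, and height h of the hit point."""
    ru = (u - u0) / cx_f             # normalized ray components: x = z*ru, y = z*rv
    rv = (v - v0) / cy_f
    z = -d / (a * ru + b * rv + c)   # substitute the ray into the plane equation
    x, y = z * ru, z * rv
    alpha = math.atan2(x, z)         # angle off the optical axis
    h = h_prime - y                  # height above the vehicle-bottom plane
    return z, alpha, h

print(bar_point_to_obstacle(320.0, 480.0))
```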
Since there are two line laser projectors, two light planes are projected, i.e., two planes each with an equation of the form a·x + b·y + c·z + d = 0. After the target obstacle is identified and the region of interest containing the light bar is extracted, the question arises of which light plane's solution to use when calculating the distance and position of the light-bar center point. As shown by the physical arrangement of the laser projectors in fig. 1, the light plane projected by the upper first laser projector 3 strikes higher on an obstacle surface and does not reach a low-lying speed bump, while the light plane projected by the lower second laser projector 5 strikes obliquely downward on the surfaces of objects ahead; the projected rays of the two projectors thus range from high to low.
When the light plane projected by the lower second laser projector 5 strikes the surface of a higher obstacle, the calculation formula must be selected according to the logical relationship shown in fig. 4: the type 6 of the target object is judged (specifically, whether it is a higher convex obstacle or a lower obstacle such as a speed bump). If the object is judged to be a lower obstacle, the calculation formulas corresponding to the lower light plane are used to compute the distance and position of the light-bar center; otherwise, the formulas corresponding to the upper light plane are used. In this way the data of the upper and lower laser projectors are fused.
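The selection logic of fig. 4 amounts to a simple branch on the detected class; a sketch, with hypothetical class labels (the patent does not name the detector's label set):

```python
# Hypothetical detector labels for lower obstacles (e.g. a deceleration strip).
LOW_OBSTACLE_CLASSES = {"speed_bump"}

def pick_light_plane(detected_class):
    """Choose which projector's light-plane equation to use for ranging:
    the lower plane for low obstacles, the upper plane otherwise."""
    if detected_class in LOW_OBSTACLE_CLASSES:
        return "lower"   # second (lower) projector's light plane
    return "upper"       # first (upper) projector's light plane

print(pick_light_plane("pedestrian"))  # → upper
```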

Claims (8)

1. A night unmanned vehicle obstacle detection system, characterized by comprising: a first line laser projector, a second line laser projector and a CCD binocular camera;
the CCD binocular camera comprises a main camera and an auxiliary camera. A first optical filter that blocks ambient light outside the 650 nm wavelength is mounted on the main camera, so that the main camera acquires only images containing light bars under 650 nm light; a second optical filter that blocks ambient light outside the 850 nm wavelength is mounted on the auxiliary camera, so that the auxiliary camera acquires only infrared images under the light of the infrared fill light, which are used for obstacle recognition;
the first line laser projector and the main camera form a first line-structured-light vision measuring system; the light bar formed where its emitted light plane intersects an obstacle surface is used to acquire the position and distance information of higher convex obstacles in a night environment. The second line laser projector and the main camera form a second line-structured-light vision measuring system; the light bar formed where its emitted light plane intersects the obstacle surface is used to acquire the position and distance information of lower obstacles in a night environment.
2. The night unmanned vehicle obstacle detection system of claim 1, wherein an 850 nm infrared fill light is correspondingly arranged above the auxiliary camera.
3. The night unmanned vehicle obstacle detection system of claim 1, wherein the first line laser projector is located above the second line laser projector, the second line laser projector being located above the CCD binocular camera.
4. The night unmanned vehicle obstacle detection system of claim 1, wherein the higher convex obstacle is taller than the lower obstacle, the higher convex obstacle comprises pedestrians and/or vehicles, and the lower obstacle comprises at least a speed bump.
5. The night unmanned vehicle obstacle detection system of claim 1, wherein the perspective transformation relationship between the image coordinate systems of the primary camera and the secondary camera of the CCD binocular camera is as follows:
[u_1, v_1, 1]^T = [R T; 0 1] · [u_2, v_2, 1]^T
where u_1 and v_1 are the abscissa and ordinate of the main camera pixel coordinate system, u_2 and v_2 are the abscissa and ordinate of the auxiliary camera pixel coordinate system, R is a rotation matrix, and T is a translation vector.
6. A night unmanned vehicle obstacle detection method, characterized in that it adopts the night unmanned vehicle obstacle detection system of any one of claims 1-5: in a night environment, one camera of the CCD binocular camera acquires infrared images and performs target detection; the other camera together with the two line-structured-light projectors forms line-structured-light vision measuring systems and acquires light-bar images; the positions and distances of high and low target obstacles are measured by combining the light-bar images with the target detection results; and finally the type, position and distance information of the obstacles is fused.
7. The night time unmanned vehicle obstacle detection method of claim 6, wherein the method comprises the steps of:
step S1: in the night environment, auxiliary illumination is provided by the infrared fill light and the auxiliary camera acquires an infrared image of the obstacles; the infrared image is fed to a trained target detection algorithm, which obtains the position information of the target object in the image, namely the position coordinates (b_x, b_y, b_w, b_h) of a rectangular bounding box enclosing the target object or target obstacle, where (b_x, b_y) are the center coordinates of the bounding box and b_w, b_h are its width and height in the image;
step S2: the center coordinates (b_x, b_y) of the rectangular bounding box obtained in step S1 are perspective-transformed to obtain the center coordinates (b′_x, b′_y) of the bounding box in the main camera pixel coordinate system, and the region occupied by the transformed bounding box in the synchronized image under the main camera pixel coordinate system is then calculated;
step S3: the region of the bounding box calculated in step S2 is extracted from the synchronized image acquired by the main camera as a region of interest; this region contains the light-bar image formed by the light plane emitted by a laser projector intersecting the surface of the target obstacle; the region-of-interest image is binarized, a suitable gray threshold is set to remove illumination noise, and the coordinates of the light-bar center points are then extracted within the region of interest by the gray gravity-center method;
step S4: the distance and position of the target obstacle at the light-bar center point, relative to the origin of the camera coordinate system, are calculated from the relative pose of the light plane with respect to the main camera coordinate system and the camera perspective projection principle.
8. The method of claim 7, wherein in step S4, after the target obstacle is identified and the region of interest containing the light bar is extracted, the type of the target object is judged to be either a higher convex obstacle or a lower obstacle: if it is judged to be a lower obstacle, the calculation formulas corresponding to the lower light plane are used to compute the distance and position at the light-bar center; otherwise, the formulas corresponding to the upper light plane are used, thereby fusing the data of the upper and lower laser projectors.
CN202010169003.3A 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night Active CN111323767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010169003.3A CN111323767B (en) 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010169003.3A CN111323767B (en) 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night

Publications (2)

Publication Number Publication Date
CN111323767A true CN111323767A (en) 2020-06-23
CN111323767B CN111323767B (en) 2023-08-08

Family

ID=71169320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010169003.3A Active CN111323767B (en) 2020-03-12 2020-03-12 System and method for detecting obstacle of unmanned vehicle at night

Country Status (1)

Country Link
CN (1) CN111323767B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070036405A (en) * 2005-09-29 2007-04-03 에프엠전자(주) Sensing system in a traveling railway vehicle for sensing a human body or an obstacle on a railway track
WO2007113428A1 (en) * 2006-03-24 2007-10-11 Inrets - Institut National De Recherche Sur Les Transports Et Leur Securite Obstacle detection
US20090003654A1 (en) * 2007-06-29 2009-01-01 Richard H. Laughlin Single-aperature passive rangefinder and method of determining a range
WO2014136976A1 (en) * 2013-03-04 2014-09-12 公益財団法人鉄道総合技術研究所 Overhead line position measuring device and method
CN104573646A (en) * 2014-12-29 2015-04-29 长安大学 Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle
CN108784534A (en) * 2018-06-11 2018-11-13 杭州果意科技有限公司 Artificial intelligence robot that keeps a public place clean
CN108830159A (en) * 2018-05-17 2018-11-16 武汉理工大学 A kind of front vehicles monocular vision range-measurement system and method
CN109146929A (en) * 2018-07-05 2019-01-04 中山大学 A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN109358335A (en) * 2018-09-11 2019-02-19 北京理工大学 A kind of range unit of combination solid-state face battle array laser radar and double CCD cameras
CN109410264A (en) * 2018-09-29 2019-03-01 大连理工大学 A kind of front vehicles distance measurement method based on laser point cloud and image co-registration
EP3517997A1 (en) * 2018-01-30 2019-07-31 Wipro Limited Method and system for detecting obstacles by autonomous vehicles in real-time
CN110595392A (en) * 2019-09-26 2019-12-20 桂林电子科技大学 Cross line structured light binocular vision scanning system and method
CN209991983U (en) * 2019-02-28 2020-01-24 深圳市道通智能航空技术有限公司 Obstacle detection equipment and unmanned aerial vehicle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAI WANG et al.: "A comparative study of state-of-the-art deep learning algorithms for vehicle detection", vol. 11, no. 11, pages 82 - 95, XP011720755, DOI: 10.1109/MITS.2019.2903518 *
LIU BO et al.: "Real-time stereo vision detection method based on multiple features", vol. 36, no. 36, pages 3339 - 3343 *
LIU YUGANG; WANG ZHUOJUN; WANG FUJING; ZHANG ZUTAO; XU HONG: "Obstacle measurement method for reversing environments based on binocular stereo vision", Journal of Transportation Systems Engineering and Information Technology, no. 04, pages 79 - 87 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022213827A1 (en) * 2021-04-09 2022-10-13 灵动科技(北京)有限公司 Autonomous mobile device, control method for autonomous mobile device, and freight system
CN113269838A (en) * 2021-05-20 2021-08-17 西安交通大学 Obstacle visual detection method based on FIRA platform
CN113269838B (en) * 2021-05-20 2023-04-07 西安交通大学 Obstacle visual detection method based on FIRA platform
CN114758249A (en) * 2022-06-14 2022-07-15 深圳市优威视讯科技股份有限公司 Target object monitoring method, device, equipment and medium based on field night environment

Similar Documents

Publication Publication Date Title
Possatti et al. Traffic light recognition using deep learning and prior maps for autonomous cars
US8908924B2 (en) Exterior environment recognition device and exterior environment recognition method
US9064418B2 (en) Vehicle-mounted environment recognition apparatus and vehicle-mounted environment recognition system
Bertozzi et al. Obstacle detection and classification fusing radar and vision
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
CN111323767B (en) System and method for detecting obstacle of unmanned vehicle at night
TW202019745A (en) Systems and methods for positioning vehicles under poor lighting conditions
JP5145585B2 (en) Target detection device
US20160252905A1 (en) Real-time active emergency vehicle detection
US8625850B2 (en) Environment recognition device and environment recognition method
CN117441113A (en) Vehicle-road cooperation-oriented perception information fusion representation and target detection method
JP2007183432A (en) Map creation device for automatic traveling and automatic traveling device
CN101763640A (en) Online calibration processing method for vehicle-mounted multi-view camera viewing system
US20230266473A1 (en) Method and system for object detection for a mobile robot with time-of-flight camera
CN115273028B (en) Intelligent parking lot semantic map construction method and system based on global perception
CN113643345A (en) Multi-view road intelligent identification method based on double-light fusion
Jiang et al. Target detection algorithm based on MMW radar and camera fusion
Ponsa et al. On-board image-based vehicle detection and tracking
Fregin et al. Three ways of using stereo vision for traffic light recognition
CN114463303A (en) Road target detection method based on fusion of binocular camera and laser radar
CN109895697B (en) Driving auxiliary prompting system and method
US20230177724A1 (en) Vehicle to infrastructure extrinsic calibration system and method
KR101868293B1 (en) Apparatus for Providing Vehicle LIDAR
Simond et al. Obstacle detection from ipm and super-homography
Mita et al. Robust 3d perception for any environment and any weather condition using thermal stereo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant