CN110058263B - Object positioning method in vehicle driving process - Google Patents
- Publication number
- CN110058263B (application CN201910307774.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- vehicle
- features
- laser radar
- distance information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
An embodiment of the invention discloses an object positioning method for use while a vehicle is driving. The system comprises a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module. The camera module acquires a two-dimensional image of the road environment while the vehicle is driving; the image data processing module identifies objects in that image; it then extracts the trunk portion of each object from the object's image features and calibrates it, yielding the two-dimensional image coordinates of the trunk portion; the data coordinate conversion module converts those image coordinates into the position parameters required for laser radar scanning; and the laser radar scanning control module directs the laser radar to scan the object according to those parameters, obtaining distance information. The method can quickly and accurately position objects while the vehicle is driving.
Description
Technical Field
The invention relates to the field of positioning, in particular to an object positioning method in the driving process of a vehicle.
Background
A vehicle needs to acquire position information about surrounding objects while it is driving. Image data acquired by a camera contains only two-dimensional information and no distance information about the photographed objects, which limits the camera's usefulness in the driving field. Two-dimensional image processing technology is now mature, with short processing times and high accuracy, and can be used to identify objects. A laser radar (lidar) is a system that detects characteristic quantities of a target, such as position and velocity, by emitting a laser beam; analyzing the emitted beam and the beam reflected back from the target yields the relevant information about the target. Laser radar is therefore often applied where an accurate position or velocity of an object is required, such as automotive driving systems and building surveying.
Existing technology for positioning objects while a vehicle is driving mainly acquires three-dimensional point-cloud information for the entire detected area through a laser radar and then post-processes the point-cloud data. The data volume is large and the processing time long, which is very unfavorable for a driving system with extremely demanding real-time requirements and seriously affects the reliability of the driving system.
Disclosure of Invention
To solve these problems, the invention provides an object positioning method for use while a vehicle is driving. It combines existing technology for rapidly processing two-dimensional image information with laser radar technology for acquiring object position information, compensating for the inability of a two-dimensional image to reflect distance while avoiding the large point-cloud data volume and long processing time of a laser radar alone, thereby greatly increasing the speed at which objects can be positioned while the vehicle is driving.
Based on this, the invention provides a method for locating an object while a vehicle is traveling. The method is implemented by a system comprising a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module, and comprises the following steps:
the camera module acquires a two-dimensional image of the road environment in the running process of the vehicle;
the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image;
the image data processing module extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image;
the data coordinate conversion module converts the two-dimensional image coordinates into the position parameters required for laser radar scanning in the laser radar scanning control module;
and the laser radar scanning control module controls the laser radar to scan the object according to the position parameters, obtaining distance information.
Wherein the image features of the object include: size, shape, brightness and color of the image.
Wherein the extracting of the trunk portion of the object comprises:
and extracting geometric features of the object, wherein the geometric features comprise triangles, quadrangles, pentagons or combined polygons thereof, and vertexes of the geometric features form positioning points.
Wherein said extracting of the trunk portion of the object further comprises: using the brightness information of the image, extracting as positioning points those points at which the object's reflection of the radar laser source exceeds a preset level.
Wherein the laser radar scanning control module controlling the laser radar to scan the object comprises:
the laser radar control module calculating a scanning path from the position parameters of the positioning points in each frame of two-dimensional image data using a path planning algorithm.
Wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
Wherein the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, both comprising:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the distance information of the object.
The range of the area scanned by the laser radar is the same as the range captured by the camera.
Wherein the coordinate parameter conversion includes coordinate parameter conversion performed in a stationary or running state of the vehicle.
The camera module, the image data processing module, the data coordinate conversion module and the laser radar scanning control module may operate simultaneously.
The invention exploits existing technology for rapidly processing two-dimensional image information together with the laser radar's ability to acquire object position information. Combining the camera with the laser radar compensates for the two-dimensional image's inability to reflect distance and avoids the large point-cloud data volume and long processing time of the laser radar. With this method, the distance to nearby objects can be obtained quickly during automatic driving, greatly improving the response speed of the automatic driving system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an object locating method during a vehicle driving process according to an embodiment of the present invention;
FIG. 2 is a flowchart of an object locating method during a driving process of a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the positioning of an object during travel of a vehicle according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating why the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an object locating method in a vehicle driving process provided by an embodiment of the invention, where the object locating method in the vehicle driving process includes:
the system comprises a camera module 101, an image data processing module 102, a data coordinate conversion module 103 and a laser radar scanning control module 104;
the camera module 101 acquires a two-dimensional image of a road environment in the running process of a vehicle;
the image data processing module 102 identifies an object in the driving process of the vehicle according to the two-dimensional image;
the image data processing module 102 extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image;
the data coordinate conversion module 103 performs coordinate conversion on the two-dimensional coordinates of the image, and converts the two-dimensional coordinates into position parameters required by laser radar scanning in the laser radar scanning control module 104;
and the laser radar scanning control module 104 controls the laser radar to scan the object according to the position parameter, so as to obtain distance information.
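Viewed as software, the cooperation of the four modules can be sketched as a short pipeline. Everything below — the function names, the dictionary result shape, the stub callables — is an illustrative assumption layered on the description above, not code from the patent:

```python
# Hedged sketch of the module pipeline of fig. 1 (all names are assumptions).
def locate_objects(capture_frame, detect_objects, extract_anchors,
                   to_scan_params, lidar_scan):
    frame = capture_frame()                            # camera module 101
    distances = {}
    for obj in detect_objects(frame):                  # image data processing module 102
        anchors = extract_anchors(obj)                 # trunk / positioning-point extraction
        params = [to_scan_params(p) for p in anchors]  # data coordinate conversion module 103
        distances[obj["id"]] = lidar_scan(params)      # lidar scanning control module 104
    return distances

# Wiring with stand-in callables, for illustration only:
result = locate_objects(
    capture_frame=lambda: "frame",
    detect_objects=lambda f: [{"id": "tree-1", "anchors": [(310, 80)]}],
    extract_anchors=lambda o: o["anchors"],
    to_scan_params=lambda p: p,
    lidar_scan=lambda params: 12.5,  # pretend measured distance in metres
)
```

Each callable stands in for one numbered module, so the sketch mirrors the data flow described above without committing to any particular detector or lidar driver.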
The camera module 101 includes a camera and captures a road environment image from the vehicle's viewing angle; this image is two-dimensional, and its shortcoming is that it cannot reflect the distance to objects.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is also a global feature describing the surface properties of the scene corresponding to the image or image region. However, texture characterizes only an object's surface and does not completely reflect the object's essential attributes, so high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not based on individual pixels; they require statistical computation over regions containing multiple pixels.
There are two types of representation methods for shape features, one is outline features and the other is region features. The outline features of the image are mainly directed to the outer boundary of the object, while the area features of the image are related to the entire shape area.
The spatial relationship refers to the spatial position or relative direction relationships among the objects segmented from the image; these relationships can be classified as connection or adjacency, overlap or occlusion, inclusion or containment, and the like. In general, spatial position information falls into two categories: relative spatial position information, which emphasizes the relative arrangement of objects, such as above, below, left and right; and absolute spatial position information, which emphasizes the distance and orientation between objects.
Wherein the image characteristics of the object mainly include: size, shape, brightness, color of the image.
Wherein the extracting of the trunk portion of the object comprises:
extracting geometric features of the object, wherein the geometric features comprise triangles, quadrilaterals, pentagons or combined polygons thereof, and the vertices of the geometric features form positioning points.
The extracting of the trunk portion of the object further includes: using the brightness information of the image, extracting as positioning points those points at which the object's reflection of the radar laser source exceeds a preset level.
Wherein the lidar scanning control module 104 controlling the lidar to scan the object includes:
the lidar control module calculating a scanning path from the position parameters of the positioning points in each frame of two-dimensional image data using a path planning algorithm.
The obtaining distance information includes: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
Wherein the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, both comprising:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the distance information of the object.
The range of the area scanned by the laser radar is the same as the range captured by the camera.
The coordinate parameter conversion includes coordinate parameter conversion performed while the vehicle is stationary or running. The camera module 101, the image data processing module 102, the data coordinate conversion module 103 and the laser radar scanning control module 104 may operate simultaneously.
Fig. 2 is a flowchart of an object locating method during driving of a vehicle according to an embodiment of the present invention, where the method includes:
s201, the camera module acquires a two-dimensional image of a road environment in the running process of a vehicle;
the camera module comprises a camera and is used for shooting a road environment image at a vehicle visual angle, wherein the road environment image is a two-dimensional image which has the defect that distance information between objects cannot be reflected.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is also a global feature describing the surface properties of the scene corresponding to the image or image region. However, texture characterizes only an object's surface and does not completely reflect the object's essential attributes, so high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not based on individual pixels; they require statistical computation over regions containing multiple pixels.
There are two types of representation methods for shape features, one is outline features and the other is region features. The outline features of the image are mainly directed to the outer boundary of the object, while the area features of the image are related to the entire shape area.
The spatial relationship refers to the spatial position or relative direction relationships among the objects segmented from the image; these relationships can be classified as connection or adjacency, overlap or occlusion, inclusion or containment, and the like. In general, spatial position information falls into two categories: relative spatial position information, which emphasizes the relative arrangement of objects, such as above, below, left and right; and absolute spatial position information, which emphasizes the distance and orientation between objects.
Wherein the image characteristics of the object mainly include: size, shape, brightness, color of the image.
S202, the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image;
objects such as trees, pedestrians, other vehicles and the like existing in the two-dimensional image can be identified by the image data processing module.
S203, the image data processing module extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image;
and the image data processing module extracts the trunk part of the identified object according to the image characteristics of the object.
The extracting of the trunk portion of the object includes:
extracting geometric features of the object, wherein the geometric features comprise triangles, quadrilaterals, pentagons or combined polygons thereof, and the vertices of the geometric features form positioning points.
Wherein the extracting of the trunk portion of the object further includes: using the brightness information of the image, extracting as positioning points those points at which the object's reflection of the radar laser source exceeds a preset level.
For example, if the object is a tree, the tree may be regarded as a triangle and a quadrilateral joined top to bottom: the three vertices of the triangle and the four vertices of the quadrilateral serve as positioning points, and the trunk quadrilateral may also be treated as a straight line represented by a number of points, as shown in fig. 3.
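The tree example can be sketched in code. The pixel coordinates and the number of trunk sample points below are made-up illustrative values, not taken from the patent:

```python
# Sketch of positioning-point extraction for the tree example above:
# crown = triangle (3 vertices), trunk = vertical line sampled at n points.
def tree_anchor_points(crown, trunk_top, trunk_bottom, n_trunk=4):
    """crown: three (u, v) pixel vertices; trunk_*: centreline endpoints."""
    points = list(crown)
    for i in range(n_trunk):                 # sample the trunk line evenly
        t = i / (n_trunk - 1)
        u = trunk_top[0] + t * (trunk_bottom[0] - trunk_top[0])
        v = trunk_top[1] + t * (trunk_bottom[1] - trunk_top[1])
        points.append((u, v))
    return points

# Illustrative pixel values: 3 crown vertices + 4 trunk samples = 7 points.
pts = tree_anchor_points([(310, 80), (260, 180), (360, 180)],
                         (310, 180), (310, 320))
```

Only these few positioning points, rather than the whole silhouette, would then be passed on for coordinate conversion and scanning.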
And calibrating the positioning point, namely marking the positioning point, and acquiring the image two-dimensional coordinates of the positioning point in the two-dimensional image.
S204, the data coordinate conversion module performs coordinate conversion on the two-dimensional coordinates of the image and converts the two-dimensional coordinates into position parameters required by laser radar scanning in the laser radar scanning control module;
and the data coordinate conversion module is used for carrying out coordinate conversion on the two-dimensional coordinates of the image, namely converting the two-dimensional coordinates of a plane into three-dimensional coordinates of a space, and the radar in the laser radar scanning control module is used for scanning the object according to the three-dimensional coordinates of the space.
S205, the laser radar scanning control module controls the laser radar to scan the object according to the position parameter, and distance information is obtained.
The laser radar control module calculates a scanning path from the position parameters of the positioning points in each frame of two-dimensional image data using a path planning algorithm.
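The patent names only "a path planning algorithm" without specifying one. A greedy nearest-neighbour ordering is one minimal choice, sketched here as an assumption:

```python
# Greedy nearest-neighbour ordering of positioning points into a scan path
# (an assumed stand-in for the unspecified path planning algorithm).
def plan_scan_path(points, start=(0.0, 0.0)):
    remaining = list(points)
    path, current = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda p: (p[0] - current[0])**2 + (p[1] - current[1])**2)
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

path = plan_scan_path([(5.0, 5.0), (1.0, 1.0), (3.0, 3.0)])
# closest point is visited first, then the next closest from there, and so on
```

Ordering the few positioning points before steering the beam keeps total mirror travel short, which matters when a fresh path must be computed for every camera frame.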
Wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
Wherein the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, both comprising:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the distance information of the object.
Fig. 4 is a schematic diagram illustrating why the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, according to an embodiment of the present invention. Referring to fig. 4, the reason the two calculations agree is as follows:
fig. 1 and 2 are schematic diagrams of a vehicle scanning lidar at a standstill, the vehicle emitting laser light, the vehicle receiving the reflected light via reflection from the object, the distance between the object and the vehicle being mathematically derived as:
and acquiring the time difference between the emitted laser and the received reflected laser, and taking one half of the product of the light speed and the time difference as the distance information of the object and the vehicle.
Diagrams 3 and 4 of fig. 4 show a moving vehicle performing a laser radar scan: the vehicle emits laser light at one position and receives the reflected light at another position, and the distance between the object and the vehicle again follows easily:
acquire the time difference between emitting the laser and receiving the reflected laser, form the difference between the speed of light and the vehicle speed, and take one half of the product of that speed difference and the time difference as the distance between the object and the vehicle. Since the vehicle speed is far smaller than the speed of light, it can be neglected; that is, the distance between the object and the vehicle is one half of the product of the speed of light and the time difference.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and substitutions can be made without departing from the technical principle of the present invention, and these modifications and substitutions should also be regarded as the protection scope of the present invention.
Claims (6)
1. A method for locating an object while a vehicle is traveling, comprising:
the system comprises a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module;
the camera module acquires a two-dimensional image of the road environment in the running process of the vehicle; the two-dimensional image of the road environment is obtained by shooting from a vehicle view angle;
the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image; the image features of the object comprise color features, texture features, shape features and spatial relationships; the color features describe surface properties of a scene corresponding to the image or the image area; the texture features are obtained through statistical calculation in an image region containing a plurality of pixel points; the shape features comprise contour features and region features; the spatial relationship comprises relative spatial position information and absolute spatial position information;
the image data processing module extracts a trunk portion of the object according to the image features of the object, and calibrates the trunk portion to obtain two-dimensional image coordinates of the trunk portion of the object on the two-dimensional image; the image features of the object include: size, shape, brightness and color of the image; the extracting of the trunk portion of the object includes: extracting geometric features of the object, wherein the geometric features comprise triangles, quadrilaterals, pentagons or combined polygons thereof, and the vertices of the geometric features form positioning points; the extracting of the trunk portion of the object further includes: using the brightness information of the image, extracting as positioning points those points at which the object's reflection of the radar laser source exceeds a preset level;
the data coordinate conversion module performs coordinate conversion on the two-dimensional image coordinate and converts the two-dimensional image coordinate into a position parameter required by laser radar scanning in the laser radar scanning control module;
the laser radar scanning control module controls a laser radar to scan the object according to the position parameters to obtain distance information; and the laser radar control module calculates a scanning path from the position parameters of the positioning points in each frame of two-dimensional image data using a path planning algorithm.
2. The method of claim 1, wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
3. The method of claim 2, wherein the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same manner, comprising:
and acquiring the time difference between the emitted laser and the received reflected laser, and taking one half of the product of the light speed and the time difference as the distance information of the object.
4. The method according to claim 1, wherein the laser radar scans the same area as the camera.
5. The method for positioning an object during running of a vehicle according to claim 1, wherein the coordinate parameter conversion includes coordinate parameter conversion performed in a state where the vehicle is stationary or running.
6. The method for locating an object during traveling of a vehicle according to claim 1, wherein the camera module, the image data processing module, the data coordinate conversion module and the lidar scanning control module operate simultaneously.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910307774.1A CN110058263B (en) | 2019-04-16 | 2019-04-16 | Object positioning method in vehicle driving process |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910307774.1A CN110058263B (en) | 2019-04-16 | 2019-04-16 | Object positioning method in vehicle driving process |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110058263A CN110058263A (en) | 2019-07-26 |
CN110058263B true CN110058263B (en) | 2021-08-13 |
Family
ID=67319166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910307774.1A Active CN110058263B (en) | 2019-04-16 | 2019-04-16 | Object positioning method in vehicle driving process |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110058263B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113340313B (en) * | 2020-02-18 | 2024-04-16 | 北京四维图新科技股份有限公司 | Navigation map parameter determining method and device |
CN114841848A (en) * | 2022-04-19 | 2022-08-02 | 珠海欧比特宇航科技股份有限公司 | High bandwidth signal processing system, apparatus, method and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1628237A (en) * | 2002-09-30 | 2005-06-15 | 石川岛播磨重工业株式会社 | Method of measuring object and system for measuring object |
CN103196418A (en) * | 2013-03-06 | 2013-07-10 | 山东理工大学 | Measuring method of vehicle distance at curves |
CN105629261A (en) * | 2016-01-29 | 2016-06-01 | 大连楼兰科技股份有限公司 | No-scanning automobile crashproof laser radar system based on structured light, and working method thereof |
CN106597469A (en) * | 2016-12-20 | 2017-04-26 | 王鹏 | Actively imaging laser camera and imaging method thereof |
CN106871799A (en) * | 2017-04-10 | 2017-06-20 | 淮阴工学院 | A kind of full-automatic crops plant height measuring method and device |
CN107622499A (en) * | 2017-08-24 | 2018-01-23 | 中国东方电气集团有限公司 | A kind of identification and space-location method based on target two-dimensional silhouette model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7028899B2 (en) * | 1999-06-07 | 2006-04-18 | Metrologic Instruments, Inc. | Method of speckle-noise pattern reduction and apparatus therefore based on reducing the temporal-coherence of the planar laser illumination beam before it illuminates the target object by applying temporal phase modulation techniques during the transmission of the plib towards the target |
CN101388077A (en) * | 2007-09-11 | 2009-03-18 | 松下电器产业株式会社 | Target shape detecting method and device |
CN104715264A (en) * | 2015-04-10 | 2015-06-17 | 武汉理工大学 | Method and system for recognizing video images of motion states of vehicles in expressway tunnel |
CN108132025B (en) * | 2017-12-24 | 2020-04-14 | Shanghai Jiechong Technology Co., Ltd. | Vehicle three-dimensional contour scanning construction method |
CN108876719B (en) * | 2018-03-29 | 2022-07-26 | Guangzhou University | Vehicle panoramic image stitching extrinsic-parameter estimation method based on a virtual camera model |
2019
- 2019-04-16 Chinese patent application CN201910307774.1A filed; granted as CN110058263B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN110058263A (en) | 2019-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021223368A1 (en) | Target detection method based on vision, laser radar, and millimeter-wave radar |
US11719788B2 (en) | Signal processing apparatus, signal processing method, and program | |
TWI703064B (en) | Systems and methods for positioning vehicles under poor lighting conditions | |
US10024965B2 (en) | Generating 3-dimensional maps of a scene using passive and active measurements | |
US11768293B2 (en) | Method and device for adjusting parameters of LiDAR, and LiDAR | |
KR102195164B1 (en) | System and method for multiple object detection using multi-LiDAR | |
WO2021207954A1 (en) | Target identification method and device | |
Benedek et al. | Positioning and perception in LIDAR point clouds | |
US10444398B2 (en) | Method of processing 3D sensor data to provide terrain segmentation | |
CN110058263B (en) | Object positioning method in vehicle driving process | |
CN114494075A (en) | Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium | |
Choe et al. | Fast point cloud segmentation for an intelligent vehicle using sweeping 2D laser scanners | |
WO2021168854A1 (en) | Method and apparatus for free space detection | |
Steinbaeck et al. | Occupancy grid fusion of low-level radar and time-of-flight sensor data | |
CN113052916A (en) | Laser radar and camera combined calibration method using specially-made calibration object | |
TWI792108B (en) | Inland river lidar navigation system for vessels and operation method thereof | |
CN114089376A (en) | Single laser radar-based negative obstacle detection method | |
JP2019002839A (en) | Information processor, movable body, information processing method, and program | |
Lin et al. | Multi-threshold based ground detection for point cloud scene | |
Rodrigues et al. | Analytical Change Detection on the KITTI dataset | |
KR102484298B1 (en) | An inspection robot of pipe and operating method of the same | |
CN113671944B (en) | Control method, control device, intelligent robot and readable storage medium | |
WO2023044688A1 (en) | Signal processing method and apparatus, and signal transmission method and apparatus | |
WO2023194762A1 (en) | Information processing method, and information processing device | |
CN117409393A (en) | Method and system for detecting laser point cloud and visual fusion obstacle of coke oven locomotive |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||