CN112154454A - Target object detection method, system, device and storage medium

Info

Publication number
CN112154454A
Authority
CN
China
Prior art keywords
target object
point cloud
dimensional
point
determining
Prior art date
Legal status
Pending
Application number
CN201980033130.6A
Other languages
Chinese (zh)
Inventor
周游
蔡剑钊
武志远
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112154454A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

A method, system, device and storage medium for detecting a target object are provided. In the method, a three-dimensional point cloud obtained by a detection device carried on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object; during clustering, the height of the cluster center of the point cloud cluster is required to meet a preset height condition. A target detection model is then determined according to the distance of the target object from the movable platform and a correspondence between distances and detection models, and the point cloud cluster corresponding to the target object is detected by the target detection model so that the target detection model determines the object type of the target object. In other words, target objects at different distances from the movable platform are detected with different detection models, which improves the detection accuracy for the target object.

Description

Target object detection method, system, device and storage medium
Technical Field
The embodiment of the application relates to the field of movable platforms, in particular to a method, a system, equipment and a storage medium for detecting a target object.
Background
In an automatic driving system or a driver assistance system, vehicles on the road need to be detected so that they can be avoided.
In the prior art, a shooting device is usually provided in an automatic driving system or a driver assistance system, and surrounding vehicles are detected from the two-dimensional images captured by the shooting device. However, detecting surrounding vehicles from two-dimensional images alone does not provide sufficient detection accuracy.
Disclosure of Invention
The embodiment of the application provides a method, a system, equipment and a storage medium for detecting a target object, so as to improve the detection precision of the target object.
A first aspect of an embodiment of the present application provides a method for detecting a target object, which is applied to a movable platform, where the movable platform is provided with a detection device, and the detection device is configured to detect an environment around the movable platform to obtain a three-dimensional point cloud, where the method includes:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition;
determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model;
and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
A second aspect of embodiments of the present application provides a target object detection system, including: a detection device, a memory, and a processor;
the detection equipment is used for detecting the surrounding environment of the movable platform to obtain three-dimensional point cloud;
the memory is used for storing program codes;
the processor, invoking the program code, when executed, is configured to:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition;
determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model;
and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
A third aspect of embodiments of the present application provides a movable platform, including:
a body;
the power system is arranged on the machine body and used for providing moving power;
and a detection system for a target object as described in the second aspect.
A fourth aspect of embodiments of the present application is to provide a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
In the method, system, device and storage medium for detecting a target object provided by this embodiment, a three-dimensional point cloud obtained by a detection device carried on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object; during clustering, the height of the cluster center of the point cloud cluster needs to meet a preset height condition. Further, a target detection model is determined according to the distance of the target object from the movable platform and the correspondence between distances and detection models, and the point cloud cluster corresponding to the target object is detected by the target detection model, so that the target detection model determines the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, thereby improving the detection accuracy for the target object.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart of a target object detection method provided in an embodiment of the present application;
fig. 3 is a schematic diagram of another application scenario provided in the embodiment of the present application;
fig. 4 is a schematic diagram of another application scenario provided in the embodiment of the present application;
FIG. 5 is a schematic diagram of a detection model provided in an embodiment of the present application;
fig. 6 is a flowchart of a target object detection method according to another embodiment of the present application;
fig. 7 is a schematic diagram of projecting a three-dimensional point cloud onto a two-dimensional image according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a two-dimensional feature point provided in an embodiment of the present application;
fig. 9 is a flowchart of a target object detection method according to another embodiment of the present application;
fig. 10 is a schematic diagram of a three-dimensional point cloud provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of another three-dimensional point cloud provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of another three-dimensional point cloud provided by an embodiment of the present application;
fig. 13 is a structural diagram of a target object detection system according to an embodiment of the present application.
Reference numerals:
11: a vehicle; 12: a server; 13: a vehicle;
14: a vehicle; 15: three-dimensional point cloud; 30: ground point cloud;
31: point cloud clusters; 32: point cloud clusters;
41: a first target object; 42: a first target object; 80: a first image;
81: a projection area; 82: two-dimensional feature points; 1001: a right region;
1002: an upper left corner image; 1003: a lower left corner image; 100: white arc;
101: a first target object; 102: a first target object;
103: a first target object; 104: three-dimensional point cloud;
105: a circle; 106: identifying a frame; 130: a detection system for a target object;
131: a detection device; 132: a memory; 133: a processor.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the application provides a target object detection method. The method is applied to a movable platform, wherein the movable platform is provided with a detection device, and the detection device is used for detecting the surrounding environment of the movable platform to obtain three-dimensional point cloud. In this embodiment, the movable platform may be a drone, a mobile robot, or a vehicle.
In the embodiment of the present application, the movable platform is a vehicle, and the vehicle may be an unmanned vehicle, or a vehicle equipped with an Advanced Driver Assistance Systems (ADAS) system, or the like. As shown in fig. 1, the vehicle 11 is a carrier carrying a detection device, which may be a binocular stereo camera, a Time of flight (TOF) camera, and/or a lidar. In the driving process of the vehicle 11, the detection device detects the surrounding environment of the vehicle 11 in real time to obtain the three-dimensional point cloud. The environment around the vehicle 11 includes objects around the vehicle 11. The objects around the vehicle 11 include, among others, the ground around the vehicle 11, pedestrians, vehicles, and the like.
Taking the laser radar as an example, when a laser beam emitted by the laser radar irradiates the surface of an object, the surface of the object reflects the laser beam, and the laser radar can determine information such as the direction and the distance of the object relative to the laser radar according to the laser beam reflected by the surface of the object. If the laser beam emitted by the laser radar scans according to a certain track, for example, 360-degree rotation scanning, a large number of laser points are obtained, and thus laser point cloud data, i.e., three-dimensional point cloud, of the object can be formed.
In addition, this embodiment does not limit the execution subject of the target object detection method. The target object detection method may be executed by an on-board device in the vehicle, or by another device having a data processing function, for example the server 12 shown in fig. 1. The vehicle 11 and the server 12 may communicate wirelessly or over a wired connection; the vehicle 11 may transmit the three-dimensional point cloud obtained by the detection device to the server 12, and the server 12 may then execute the target object detection method. The following describes the target object detection method provided by the embodiment of the present application, taking the on-board device as the execution subject as an example. The on-board device may be a device with a data processing function integrated in the center console of the vehicle, or a tablet computer, a mobile phone, a notebook computer, or the like placed in the vehicle.
Fig. 2 is a flowchart of a target object detection method according to an embodiment of the present application. As shown in fig. 2, the method in this embodiment may include:
s201, acquiring the three-dimensional point cloud.
As shown in fig. 1, during the driving of the vehicle 11, a detection device mounted on the vehicle 11 detects the surrounding environment of the vehicle 11 in real time to obtain a three-dimensional point cloud. The detection device may be communicatively connected with an on-board device on the vehicle 11, so that the on-board device can obtain, in real time, the three-dimensional point cloud detected by the detection device, for example the three-dimensional point cloud of the ground around the vehicle 11, of pedestrians, and of other vehicles such as the vehicle 13 and the vehicle 14.
S202, clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, wherein the height of the clustering center of the clustered point cloud cluster meets a preset height condition.
As shown in fig. 3, the three-dimensional point cloud 15 is the three-dimensional point cloud detected by the detection device mounted on the vehicle 11. The three-dimensional point cloud 15 includes a plurality of three-dimensional points; that is, the three-dimensional point cloud is a set of three-dimensional points, which may also be referred to as point cloud points. Each point cloud point in the three-dimensional point cloud detected by the detection device at each sampling time carries position information, which may specifically be the three-dimensional coordinate of the point cloud point in a three-dimensional coordinate system. The three-dimensional coordinate system is not limited in this embodiment; for example, it may be a vehicle body coordinate system, a terrestrial coordinate system, or a world coordinate system. Therefore, the height of each point cloud point relative to the ground can be determined from its position information.
In the process of clustering the three-dimensional point cloud 15, a K-means clustering algorithm may be adopted in which the point cloud points of the three-dimensional point cloud 15 whose height above the ground is close to a preset height are given greater weight, so that the height value of the clustering center is close to the preset height. The preset height is derived from the vehicle height H; the original expression appears only as an image in the source, and from the context it is a fixed fraction of H, for example H/2. Generally, the height of a car is about 1.6 meters and the height of a large vehicle such as a bus is about 3 meters, and the vehicle height H may be taken as 1.1 meters. Alternatively, H may take two values, H1 = 0.8 meters and H2 = 1.5 meters, and clustering is performed with H1 and H2 respectively, so that one group of clusters has cluster centers whose height value is close to the preset height derived from H1, and another group has cluster centers whose height value is close to the preset height derived from H2. Taking H = 1.1 meters as an example, assume that P1 and P2 are any two three-dimensional points in the three-dimensional point cloud 15, each with its own three-dimensional coordinate; the coordinate of P1 on the z-axis, i.e. in the height direction, is denoted P1(z), and the coordinate of P2 on the z-axis is denoted P2(z). If the value Loss calculated by formula (1) is less than or equal to a certain threshold, it is determined that P1 and P2 can be aggregated into one cluster. Formula (1) appears only as an image in the source; it involves P1(z), P2(z), the preset height, and a constant k.
It can be understood that, when clustering the three-dimensional point cloud 15, the aggregation between any other three-dimensional points in the three-dimensional point cloud 15 proceeds in the same way as the aggregation of P1 and P2 described by formula (1), and the details are not repeated here.
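Since formula (1) is reproduced only as an image, the following Python sketch uses a hypothetical stand-in loss that mixes the spatial distance between a point and a cluster center with the deviation of their heights from the preset height; it illustrates the idea of height-biased aggregation rather than the exact formula of the application.

```python
import numpy as np

def height_biased_clustering(points, preset_height, k=0.5, loss_threshold=2.0):
    """Greedily aggregate 3D points into clusters.

    The loss below is a hypothetical stand-in for formula (1): it combines the
    spatial distance between a point and a cluster center with the deviation of
    their heights (z) from the preset height, so cluster centers end up near
    the preset height.
    """
    clusters = []   # list of lists of point indices
    centers = []    # running cluster centers (mean of member points)
    for i, p in enumerate(points):
        assigned = False
        for c, center in enumerate(centers):
            loss = (np.linalg.norm(p - center)
                    + k * abs(p[2] - preset_height)
                    + k * abs(center[2] - preset_height))
            if loss <= loss_threshold:
                clusters[c].append(i)
                centers[c] = points[clusters[c]].mean(axis=0)
                assigned = True
                break
        if not assigned:
            clusters.append([i])
            centers.append(p.astype(float))
    return clusters, centers

# Example: cluster a synthetic point cloud once per preset height (H1, H2)
cloud = np.random.rand(200, 3) * np.array([50.0, 50.0, 3.0])
for h in (0.8, 1.5):
    clusters, centers = height_biased_clustering(cloud, preset_height=h)
```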
As shown in fig. 3, after the three-dimensional point cloud 15 is clustered, a point cloud cluster 31 and a point cloud cluster 32 are obtained, wherein the heights of the clustering centers of the point cloud cluster 31 and the point cloud cluster 32 are both close to the preset height. Further, a first target object 41 shown in fig. 4 can be obtained from the point cloud cluster 31, and a first target object 42 shown in fig. 4 can be obtained from the point cloud cluster 32.
It should be understood that the first target objects are only schematically illustrated here, and the number of the first target objects is not limited.
S203, determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model.
The point cloud cluster 31 and the point cloud cluster 32 shown in fig. 3 each include a plurality of point cloud points. Since every point cloud point detected by the detection device at each sampling time carries position information, the distance between each point cloud point and the detection device can be calculated from that position information. From the distances of the point cloud points in a point cloud cluster to the detection device, the distance between the point cloud cluster and the vehicle carrying the detection device can be calculated, and thus the distance between the first target object corresponding to the point cloud cluster and the vehicle is obtained, for example the distance between the first target object 41 and the vehicle 11 and the distance between the first target object 42 and the vehicle 11.
As shown in fig. 4, the distance of the first target object 41 from the vehicle 11 is smaller than the distance of the first target object 42 from the vehicle 11, and for example, the distance of the first target object 41 from the vehicle 11 is denoted as L1, and the distance of the first target object 42 from the vehicle 11 is denoted as L2. In the present embodiment, the vehicle-mounted device may determine the target detection model corresponding to L1 from the distance L1 of the first target object 41 with respect to the vehicle 11 and the correspondence relationship of the distance and the detection model. From the distance L2 of the first target object 42 with respect to the vehicle 11 and the correspondence of the distance to the detection model, a target detection model corresponding to L2 is determined.
In an alternative embodiment, the test models corresponding to different distances may be trained in advance.
As shown in fig. 5, in particular, the sample object may be divided into a sample object in the range of 0-90 meters, a sample object in the range of 75-165 meters, and a sample object in the range of 125-200 meters with respect to the collecting vehicle, depending on the distance between the sample object and the movable platform, e.g. the collecting vehicle, which detects the sample object. The collection vehicle may be the vehicle 11 as described above, or may be a vehicle other than the vehicle 11. Specifically, the detection model trained on the sample object in the range of 0 to 90 meters with respect to the collection vehicle is detection model 1, the detection model trained on the sample object in the range of 75 to 165 meters with respect to the collection vehicle is detection model 2, and the detection model trained on the sample object in the range of 125 to 200 meters with respect to the collection vehicle is detection model 3, so that the correspondence relationship between the distance and the detection model is obtained.
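A minimal Python sketch of this correspondence between distance ranges and detection models follows; the range boundaries mirror fig. 5, while the model names are placeholders and the first-match rule for overlapping ranges is an assumption.

```python
# Hypothetical correspondence table between distance ranges (meters) and
# pre-trained detection models, mirroring the ranges of fig. 5. The overlap
# between neighbouring ranges is intentional; here the first match wins.
MODEL_TABLE = [
    ((0.0, 90.0), "detection_model_1"),
    ((75.0, 165.0), "detection_model_2"),
    ((125.0, 200.0), "detection_model_3"),
]

def select_target_detection_model(distance_m):
    """Return the detection model whose training range covers the distance."""
    for (lo, hi), model in MODEL_TABLE:
        if lo <= distance_m <= hi:
            return model
    return None  # target lies beyond all trained ranges

print(select_target_detection_model(42.0))   # detection_model_1
print(select_target_detection_model(150.0))  # detection_model_2
```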
In another alternative embodiment, the detection model may be adaptively adjusted according to the actually obtained distance. For example, a parameter that can be adjusted according to the distance may be set in the detection model. In a specific implementation, the distance of the first target object is obtained, and the parameter in the detection model is set according to this distance to obtain the target detection model.
S204, detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
For example, if the vehicle-mounted device determines that the distance L1 of the first target object 41 relative to the vehicle 11 is in the range of 0-90 meters, the detection model 1 is used to detect the point cloud cluster corresponding to the first target object 41 to determine the object type of the first target object 41. If the distance L2 of the first target object 42 relative to the vehicle 11 is within the range of 75 meters to 165 meters, the detection model 2 is used to detect the point cloud cluster corresponding to the first target object 42 to determine the object type of the first target object 42.
It is worth noting that the point cloud distribution characteristics of vehicles in different distance ranges are different. For example, the point cloud distribution for a remote target is sparse, while the point cloud distribution for a near target is dense. The point cloud corresponding to a near vehicle often represents a vehicle side point cloud, while the point cloud corresponding to a medium range vehicle represents more a vehicle tail point cloud. Therefore, a plurality of detection models are trained differently according to different distances, and the target object can be identified more accurately.
In addition, the object types described above may include: road signs, vehicles, pedestrians, and the like. Furthermore, the specific type of a vehicle can be identified from the characteristics of the point cloud cluster; for example, engineering vehicles, cars, buses and the like can be distinguished.
It is to be understood that the first target object in the present embodiment is only for distinguishing from the second target object in the subsequent embodiments, and both the first target object and the second target object may refer to target objects detectable by the detection device.
In the embodiment, a three-dimensional point cloud obtained by detecting a detection device carried on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object, in the clustering process, the height of a clustering center of the point cloud cluster needs to meet a preset height condition, further, a target detection model is determined according to the distance between the target object and the movable platform and the corresponding relation between the distance and the detection model, and the point cloud cluster corresponding to the target object is detected through the target detection model, so that the target detection model determines the object type of the target object, that is, different detection models are adopted for detecting the target objects at different distances relative to the movable platform, and therefore the detection precision of the target object is improved.
On the basis of the above embodiment, before clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the method further includes: removing specific point clouds in the three-dimensional point clouds, wherein the specific point clouds comprise ground point clouds.
As shown in fig. 3, the three-dimensional point cloud 15 obtained by the detection device includes not only the point cloud corresponding to the target object, but also a specific point cloud, for example, a ground point cloud 30. Therefore, before clustering the three-dimensional point cloud 15, the ground point cloud 30 in the three-dimensional point cloud 15 may be identified by a plane fitting method, the ground point cloud 30 in the three-dimensional point cloud 15 may be removed, and further, the three-dimensional point cloud after removing the ground point cloud 30 may be clustered.
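The plane-fitting step is not detailed in the text; a minimal RANSAC-style sketch in Python, with an assumed iteration count and inlier distance threshold, could look like this:

```python
import numpy as np

def remove_ground_by_ransac(points, n_iter=200, dist_thresh=0.15, rng=None):
    """RANSAC plane-fitting sketch: fit the dominant plane (assumed to be the
    ground) and return the point cloud split into non-ground and ground parts."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers], points[best_inliers]  # (non-ground, ground)
```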
In the embodiment, the specific point cloud in the three-dimensional point cloud obtained by detection of the detection device carried on the movable platform is removed, and the three-dimensional point cloud after the specific point cloud is removed is clustered to obtain the point cloud cluster corresponding to the target object, so that the influence of the specific point cloud on the detection of the target object can be avoided, and the detection precision of the target object is further improved.
The embodiment of the application provides a target object detection method. Fig. 6 is a flowchart of a target object detection method according to another embodiment of the present application. As shown in fig. 6, on the basis of the foregoing embodiment, before the detecting, by the target detection model, the point cloud cluster corresponding to the first target object and determining the object type of the first target object, the method further includes: determining a direction of motion of the first target object; and adjusting the motion direction of the first target object to be a preset direction.
As a possible implementation manner, the determining the motion direction of the first target object includes: and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
Specifically, the first time is a previous time, and the second time is a current time. Taking the first target object 41 as an example, since the first target object 41 may be in a motion state, the position information of the first target object 41 may be changed in real time. In addition, the detection device on the vehicle 11 detects the surrounding environment in real time, so that the vehicle-mounted device can acquire and process the three-dimensional point cloud detected by the detection device in real time. Since the three-dimensional point cloud corresponding to the first target object 41 at the previous time and the three-dimensional point cloud corresponding to the first target object 41 at the current time may be changed, the motion direction of the first target object 41 may be determined according to the three-dimensional point cloud corresponding to the first target object 41 at the previous time and the three-dimensional point cloud corresponding to the first target object 41 at the current time.
Optionally, the determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first time and the three-dimensional point cloud corresponding to the first target object at the second time includes: respectively projecting the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment into a world coordinate system; and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
For example, the three-dimensional point cloud corresponding to the first target object 41 at the previous time and the three-dimensional point cloud corresponding to the first target object 41 at the current time are respectively projected into the world coordinate system. Then, the relative position relationship between the two point clouds is calculated through an Iterative Closest Point (ICP) algorithm; the relative position relationship includes a rotation relationship and a translation relationship, and the translation relationship gives the moving direction of the first target object 41.
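A sketch of this step using the Open3D library (an assumed dependency; the application only names the ICP algorithm, not a particular implementation):

```python
import numpy as np
import open3d as o3d

def motion_direction_from_icp(prev_points, curr_points, max_corr=1.0):
    """Estimate the motion direction of a target between two sampling times.

    prev_points / curr_points: Nx3 arrays of the target's point cloud at the
    previous and current time, already expressed in the world coordinate
    system. The translation part of the ICP transform gives the direction.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(prev_points)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(curr_points)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    translation = result.transformation[:3, 3]
    norm = np.linalg.norm(translation)
    return translation / norm if norm > 1e-9 else translation
```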
As another possible implementation manner, the determining the motion direction of the first target object includes the following steps:
s601, projecting the three-dimensional point cloud corresponding to the first target object at a first moment in the two-dimensional image at the first moment to obtain a first projection point.
S602, projecting the three-dimensional point cloud corresponding to the first target object at a second moment in the two-dimensional image at the second moment to obtain a second projection point.
In this embodiment, a photographing device may be mounted on the vehicle 11, and the photographing device may be used to photograph an image of the environment around the vehicle 11, specifically, a two-dimensional image. The period of the three-dimensional point cloud obtained by the detection device and the period of the image shot by the shooting device may be the same or different. For example, the photographing apparatus photographs one frame of two-dimensional image while the three-dimensional point cloud of the first target object 41 is obtained by the detection apparatus at the previous time. The photographing device photographs another frame of two-dimensional image while the detection device at the present time detects that the three-dimensional point cloud of the first target object 41 is obtained. Here, a two-dimensional image captured by the capturing device at a previous time may be referred to as a first image, and a two-dimensional image captured by the capturing device at a current time may be referred to as a second image. Specifically, a three-dimensional point cloud of the first target object 41 at the previous time may be projected onto the first image to obtain a first projection point. And projecting the three-dimensional point cloud of the first target object 41 at the current moment onto the second image to obtain a second projection point. As shown in fig. 7, the left area represents a three-dimensional point cloud obtained by detection by the detection device at a certain time, and the right area represents a projection area of the three-dimensional point cloud on the two-dimensional image, where the projection area includes a projection point.
In an alternative embodiment, projecting the three-dimensional point cloud in the two-dimensional image comprises: and projecting partial or all point cloud points in the three-dimensional point cloud on a two-dimensional plane along the Z axis. Wherein the Z-axis may be a Z-axis in a vehicle body coordinate system. Alternatively, if the coordinates of the three-dimensional point cloud have been corrected to the terrestrial coordinate system, the Z-axis may be the Z-axis of the terrestrial coordinate system.
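Two projection variants consistent with the description above, sketched in Python: projecting along the Z axis simply discards the height coordinate, while projecting onto the two-dimensional image (to obtain the projection points of S601 and S602) would additionally require camera intrinsics, which are assumed, hypothetical parameters here.

```python
import numpy as np

def project_along_z(points):
    """Drop the height (Z) coordinate, projecting each 3D point onto the
    horizontal two-dimensional plane as described above."""
    return points[:, :2]

def project_to_image(points_cam, fx, fy, cx, cy):
    """Hypothetical pinhole projection of camera-frame 3D points to pixel
    coordinates (u, v); the intrinsics fx, fy, cx, cy are assumed to come
    from calibration and are not specified in the text."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```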
S603, determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point of which the position relation with the first projection point accords with a preset position relation.
For convenience of distinguishing, a projection point of the three-dimensional point cloud of the first target object 41 on the first image at the previous time is regarded as a first projection point, the feature point on the first image is regarded as a first feature point, and a position relationship between the first feature point and the first projection point conforms to a preset position relationship.
Optionally, the determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first time includes: determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first feature point in the two-dimensional image at the first moment; and determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
As shown in fig. 8, 80 denotes the first image captured by the shooting device at the previous time, and 81 denotes the projection area formed by projecting the three-dimensional point cloud of the first target object 41 at the previous time onto the first image 80. Two-dimensional feature points, i.e. first feature points, can be extracted in this projection region 81. A two-dimensional feature point is not necessarily a projection point, that is, it does not necessarily carry three-dimensional information of its own. The three-dimensional information of a two-dimensional feature point can therefore be estimated through a Gaussian-weighted combination of nearby projection points. As shown in fig. 8, 82 denotes any two-dimensional feature point in the projection region 81. The projection points within a preset range around the two-dimensional feature point 82, for example a 10 × 10 pixel region, are determined; for example, A, B, C and D are the projection points within the preset range. Let (μ0, v0) be the pixel coordinates of the two-dimensional feature point 82 on the first image 80, and let (μ1, v1), (μ2, v2), (μ3, v3) and (μ4, v4) be the pixel coordinates of the projection points A, B, C and D on the first image 80, respectively. The distances of the projection points A, B, C and D from the two-dimensional feature point 82 are denoted d1, d2, d3 and d4. The corresponding formulas appear only as images in the source; from the surrounding definitions they are the pixel-plane Euclidean distances
di = sqrt((μi - μ0)^2 + (vi - v0)^2), i = 1, 2, 3, 4.
In addition, the three-dimensional information of the three-dimensional points corresponding to the projection points A, B, C and D is denoted P1, P2, P3 and P4, respectively, where P1, P2, P3 and P4 are vectors each containing x, y and z coordinates.
The three-dimensional information of the two-dimensional feature point 82 is denoted P0. P0 can be calculated by formulas (2) and (3), which also appear only as images in the source; from the surrounding description they take the form of a Gaussian-weighted average, for example
ωi = exp(-di^2 / (2σ^2))    (2)
P0 = (Σ ωi · Pi) / (Σ ωi), with i running from 1 to n    (3)
where n represents the number of projection points within the preset range around the two-dimensional feature point 82, ωi represents the weight coefficient (different projection points may correspond to different weight coefficients or to the same weight coefficient), and σ is a tunable parameter, for example an empirically tuned one.
It is understood that the process of calculating the three-dimensional information of other two-dimensional feature points in the projection region 81 is similar to the process of calculating the three-dimensional information of the two-dimensional feature points 82 as described above, and will not be described herein again.
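A compact Python sketch of this estimation, following the hedged reconstruction of formulas (2) and (3) above; sigma and the window size are assumed, tunable values.

```python
import numpy as np

def feature_point_3d(feature_uv, proj_uv, proj_xyz, sigma=3.0, window=10):
    """Estimate the 3D information of a 2D feature point from nearby
    projection points via a Gaussian-weighted average.

    feature_uv: (2,)   pixel coordinates of the 2D feature point.
    proj_uv:    (N, 2) pixel coordinates of the projection points.
    proj_xyz:   (N, 3) 3D coordinates of the corresponding point cloud points.
    """
    d = np.linalg.norm(proj_uv - feature_uv, axis=1)
    # keep only projection points inside the local pixel window around the feature
    mask = (np.abs(proj_uv - feature_uv) <= window / 2).all(axis=1)
    if not mask.any():
        return None
    w = np.exp(-d[mask] ** 2 / (2 * sigma ** 2))
    return (w[:, None] * proj_xyz[mask]).sum(axis=0) / w.sum()
```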
S604, determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point of which the position relation with the second projection point accords with a preset position relation, and the second feature point corresponds to the first feature point.
For convenience of distinguishing, a projection point of the three-dimensional point cloud of the first target object 41 on the second image at the current time is recorded as a second projection point, the feature point on the second image is recorded as a second feature point, and a position relationship between the second feature point and the second projection point conforms to a preset position relationship.
From the first feature points on the first image 80, a corner tracking algorithm (Kanade-Lucas-Tomasi, KLT) is used to calculate the second feature points on the second image that correspond to the first feature points.
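As a sketch, this tracking step can be performed with the pyramidal Lucas-Kanade tracker available in OpenCV; the application only names the KLT algorithm, so the use of OpenCV here is an assumption.

```python
import cv2
import numpy as np

def track_first_to_second(first_img_gray, second_img_gray, first_pts):
    """Track first feature points into the second image with KLT optical flow.

    first_pts: (N, 1, 2) float32 pixel coordinates of the first feature points.
    Returns the matched second feature points and a validity mask.
    """
    second_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        first_img_gray, second_img_gray, first_pts, None)
    valid = status.reshape(-1) == 1
    return second_pts[valid], valid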
Optionally, the determining, according to the second projection point and a second feature point in the two-dimensional image at the second time, three-dimensional information of the second feature point includes: determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second feature point in the two-dimensional image at the second moment; and determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
Specifically, the process of calculating the three-dimensional information of the second feature point on the second image is similar to the process of calculating the three-dimensional information of the first feature point on the first image, and is not repeated here.
S605, determining the motion direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
Specifically, the three-dimensional information of the first feature point is the three-dimensional information P0 of the two-dimensional feature point 82 described above, and the three-dimensional information of the second feature point is the three-dimensional information of the two-dimensional feature point in the second image corresponding to the two-dimensional feature point 82, denoted P'0. The direction of movement of the first target object 41 can be determined from P0 and P'0; specifically, the position change from P0 to P'0 is the movement direction of the first target object 41.
Optionally, before determining the motion direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the method further includes: and respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
For example, P0 and P'0 are respectively converted into the world coordinate system, and the position change between P0 and P'0 is calculated in the world coordinate system; this position change is the movement direction of the first target object 41.
It is understood that the movement direction of other first target objects than the first target object 41 may also be determined by several possible implementations as described above, and will not be described in detail herein.
After the movement direction of the first target object is determined, further, the movement direction of the first target object can be adjusted to a preset direction. Optionally, the preset direction is a movement direction of a sample object used for training the detection model.
For example, the direction of motion of the sample objects used to train the detection model is north, or toward the front or rear of the acquisition vehicle that detected the sample objects. Taking the north direction as an example, in order to enable the detection model to accurately detect the first target object 41 or the first target object 42, the moving direction of the first target object 41 or the first target object 42 needs to be adjusted to the north direction. For example, if the included angle between the moving direction of the first target object 41 or the first target object 42 and the north direction is θ, the three-dimensional point cloud corresponding to the first target object 41 or the first target object 42 is rotated by the rotation matrix Rz(θ) of formula (4), so that the direction of motion of the first target object 41 or the first target object 42 becomes north. Formula (4) appears only as an image in the source; Rz(θ) is the standard rotation about the z-axis:
Rz(θ) = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]    (4)
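A short Python sketch of this adjustment; the preset direction is taken as the +y axis of the world frame (an assumption standing in for "north").

```python
import numpy as np

def rotate_to_north(points, motion_dir_xy):
    """Rotate a target's point cloud about the z-axis so that its motion
    direction becomes the preset direction, assumed here to be +y ("north").
    Uses the Rz(theta) matrix of formula (4)."""
    theta = np.arctan2(motion_dir_xy[0], motion_dir_xy[1])  # angle from +y to the motion direction
    rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    return points @ rz.T
```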
in this embodiment, the movement direction of the target object is determined, and the movement direction of the target object is adjusted to be the preset direction, and the preset direction is the movement direction of the sample object for training the detection model, so that the detection accuracy of the target object can be further improved by adjusting the movement direction of the target object to be the preset direction and then detecting through the detection model.
The embodiment of the application provides a target object detection method. On the basis of the above embodiment, after the point cloud cluster corresponding to the first target object is detected by the target detection model and the object type of the first target object is determined, the method further includes: and if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to a preset condition.
For example, when the first target object 41 is determined to be a vehicle by the target detection model, the detection result is verified by a preset condition.
Optionally, the preset condition includes at least one of the following conditions: the size of the first target object meets a preset size; the spatial overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
For example, when the first target object 41 is detected to be a vehicle by the target detection model, it is further detected whether the width of the first target object 41 exceeds a preset width range, where the preset width range may be a width range of a normal vehicle, for example, 2.8 m to 3 m. If the width of the first target object 41 exceeds the preset width range, it is determined that there is a deviation in the detection result of the detection model for the first target object 41, that is, the first target object 41 may not be a vehicle. If the width of the first target object 41 is within the preset width range, the spatial overlap ratio between the first target object 41 and other surrounding target objects is further detected, where the spatial overlap ratio between the first target object 41 and other surrounding target objects may specifically be the spatial overlap ratio between the identification frame for characterizing the first target object 41 and the identification frame for characterizing other surrounding target objects. If the spatial coincidence degree is greater than the preset threshold value, it is determined that the detection result of the detection model for the first target object 41 is biased, that is, the first target object 41 may not be a vehicle. If the spatial coincidence degree is smaller than the preset threshold value, it is determined that the detection result of the detection model on the first target object 41 is correct.
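A Python sketch of this verification follows; the axis-aligned identification frames, the IoU overlap measure and the 0.3 threshold are assumptions, while the 3-meter width bound follows the example above.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def verify_vehicle_detection(width_m, box, other_boxes,
                             max_width=3.0, overlap_thresh=0.3):
    """Accept a 'vehicle' detection only if the target's width does not exceed
    the preset width and its identification frame does not overlap the frames
    of surrounding targets beyond the assumed threshold."""
    if width_m > max_width:
        return False            # width deviates from a normal vehicle
    for other in other_boxes:
        if iou(box, other) > overlap_thresh:
            return False        # spatial coincidence too high
    return True
```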
In this embodiment, after the target object is detected by the target detection model corresponding to the distance according to the distance between the target object and the movable platform, if the object type of the target object is determined to be a vehicle, the detection result of the target detection model is further verified according to the preset condition, when the preset condition is met, the detection result of the target detection model is determined to be correct, and when the preset condition is not met, the detection result of the target detection model is determined to have a deviation, so that the detection precision of the target object is further improved.
The embodiment of the application provides a target object detection method. Fig. 9 is a flowchart of a target object detection method according to another embodiment of the present application. As shown in fig. 9, on the basis of the above embodiment, the distance of the first target object with respect to the movable platform is smaller than or equal to a first preset distance. As shown in fig. 10, a right area 1001 is a three-dimensional point cloud detected by a detection device, an upper left corner image 1002 represents an image of the three-dimensional point cloud with height information removed, and a lower left corner image 1003 represents a two-dimensional image. Where one turn of white coil in the right region 1001 represents the ground point cloud and the white arc 100 represents a first predetermined distance, e.g., 80 meters, from the detection device. 101. 102, 103 respectively represent first target objects having a distance of less than or equal to 80 meters with respect to the detection device. As can be seen from fig. 10, there is no white coil of one turn outside 80 meters, i.e., no ground point cloud is detected outside 80 meters. The embodiment proposes a method for detecting a ground point cloud outside a first preset distance and detecting a second target object outside the first preset distance.
After the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes the following steps:
s901, if the first target object is determined to be a vehicle through the target detection model, determining ground point clouds beyond the first preset distance according to the position of the first target object.
In this embodiment, assuming that the vehicle-mounted device determines that the first target object 101 is a vehicle by using a target detection model corresponding to the distance between the first target object 101 and the detection device, determines that the first target object 102 is a vehicle by using a target detection model corresponding to the distance between the first target object 102 and the detection device, and determines that the first target object 103 is a vehicle by using a target detection model corresponding to the distance between the first target object 103 and the detection device, the vehicle-mounted device may further determine a ground point cloud which is more than 80 meters away from the detection device according to the positions of the first target object 101, the first target object 102, and the first target object 103.
Optionally, the determining, according to the position of the first target object, a ground point cloud outside the first preset distance includes: determining the gradient of the ground where the first target object is located according to the position of the first target object; and determining the ground point cloud beyond the first preset distance according to the gradient of the ground.
Specifically, the gradients of the ground where the first target object 101, the first target object 102, and the first target object 103 are located are determined according to the positions of the first target object 101, the first target object 102, and the first target object 103, and the ground point cloud which is more than 80 meters away from the detection device is determined according to the gradient of the ground. It is understood that the number of the first target objects is not limited in this embodiment.
Optionally, the determining, according to the position of the first target object, a gradient of a ground surface on which the first target object is located includes: and determining the gradient of a plane formed by at least three first target objects according to the positions of at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
For example, when the first target object 101, the first target object 102, and the first target object 103 are all vehicles, three vehicles may determine one plane. For example, the coordinates of the first target object 101 are denoted as a (x1, y1, z1), the coordinates of the first target object 102 are denoted as B (x2, y2, z2), the coordinates of the first target object 103 are denoted as C (x3, y3, z3), the vector AB is (x2-x1, y2-y1, z2-z1), and the vector AC is (x3-x1, y3-y1, z3-z 1). The normal vector of the plane in which AB and AC lie is AB × AC ═ a, b, c, where:
a=(y2-y1)(z3-z1)-(z2-z1)(y3-y1)
b=(z2-z1)(x3-x1)-(z3-z1)(x2-x1)
c=(x2-x1)(y3-y1)-(x3-x1)(y2-y1)
specifically, according to the normal vector of the plane where AB and AC are located, the slope of the plane formed by the first target object 101, the first target object 102, and the first target object 103 may be determined, and the slope of the plane may specifically be the slope of the ground where the first target object 101, the first target object 102, and the first target object 103 are located.
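A small Python sketch of this computation; the slope is expressed here as the angle between the fitted plane and the horizontal plane, which is one possible representation since the text does not fix a specific one.

```python
import numpy as np

def ground_plane_from_targets(p_a, p_b, p_c):
    """Compute the normal vector and slope angle of the plane through three
    first-target positions A, B, C (each an (x, y, z) array), using the cross
    product whose components a, b, c are given above."""
    ab, ac = p_b - p_a, p_c - p_a
    normal = np.cross(ab, ac)                  # components (a, b, c) of the text
    normal = normal / np.linalg.norm(normal)
    # Slope: angle between the plane and the horizontal plane, i.e. the angle
    # between the plane normal and the vertical axis.
    slope_rad = np.arccos(abs(normal[2]))
    return normal, np.degrees(slope_rad)

# Example with three vehicle positions on a gentle incline
A = np.array([0.0, 0.0, 0.0])
B = np.array([10.0, 0.0, 0.5])
C = np.array([0.0, 10.0, 0.5])
normal, slope_deg = ground_plane_from_targets(A, B, C)  # roughly 4 degrees
```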
It can be understood that when the number of the first target objects is greater than 3, one plane can be determined for every 3 first target objects, so that a plurality of planes can be obtained, the slopes of the plurality of planes can be calculated by the plane slope calculation method, and at this time, the ground slope can be fitted according to the slopes of the plurality of planes.
It is understood that, according to the slope of the ground, it may be determined whether the ground is a level ground, a viaduct or an inclined slope, and in some embodiments, the ground on which the first target object is located may not be a level ground, for example, may be a viaduct or an inclined slope, and therefore, it may also be determined whether the first target object is located on the viaduct or the inclined slope according to the slope of the ground.
After the slope of the ground on which the first target object is located has been determined, the ground on which the first target object is located can be extended according to this slope to obtain the ground point cloud beyond 80 meters. For example, the road surface on which the first target object is located is extended linearly, at its current width, into the region beyond 80 meters. Here the ground beyond 80 meters may be assumed to be flat; the case where there is a ramp or a viaduct beyond 80 meters is temporarily not considered.
S902, determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
For example, from a ground point cloud that is 80 meters away, an object type of a second target object that is 80 meters away is determined.
Optionally, the determining, according to the ground point cloud outside the first preset distance, the object type of the second target object outside the first preset distance includes: determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane; and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
For example, as shown in fig. 11, the point cloud clusters corresponding to the second target object beyond 80 meters are determined according to the ground point cloud beyond 80 meters, because the second target object beyond 80 meters is blocked by the near object, the number of the three-dimensional point clouds 104 at the far position is small, that is, the three-dimensional point clouds 104 at the far position may be only a part of the three-dimensional point clouds at the upper part of the second target object, at this time, the remaining part of the three-dimensional point clouds of the second target object need to be complemented according to the ground point cloud beyond 80 meters, for example, the three-dimensional point clouds at the lower part of the second target object need to be complemented, so that the bottom of the second target object is in the same plane as the bottoms of the first target object 101, the first target object 102 and the first target object 103. And the point cloud cluster corresponding to the second target object can be formed by the partial three-dimensional point cloud at the upper part of the second target object and the supplemented three-dimensional point cloud at the lower part.
Further, according to the distance of the second target object relative to the detection device, the point cloud cluster corresponding to the second target object is detected by the detection model corresponding to that distance; that is, the detection model determines whether the second target object is a pedestrian, a vehicle or another object. The number of second target objects is not limited here; there may be one or more. Since the distance between the second target object and the detection device is greater than the first preset distance, the second target object may be detected with a detection model corresponding to a second preset distance that is greater than the first preset distance.
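A minimal sketch of the distance-to-detection-model correspondence is shown below; the distance thresholds and model identifiers are illustrative assumptions, not values defined by this disclosure. In practice each entry would map to a detector trained on samples at the corresponding range, which is why objects beyond the first preset distance are handled by the model associated with the second preset distance.

```python
# Placeholder model identifiers; in practice each entry would be a detector
# trained on samples at the corresponding range.  The thresholds are
# illustrative assumptions.
DETECTION_MODELS = [
    (30.0, "near_range_model"),
    (80.0, "mid_range_model"),          # first preset distance
    (float("inf"), "far_range_model"),  # second preset distance and beyond
]

def select_detection_model(distance_to_platform):
    """Return the detection model whose distance bracket contains the
    distance of the target object relative to the movable platform."""
    for max_distance, model in DETECTION_MODELS:
        if distance_to_platform <= max_distance:
            return model
```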
Optionally, the determining, according to the ground point cloud outside the first preset distance, the point cloud cluster corresponding to the second target object outside the first preset distance includes: clustering the three-dimensional point cloud except the ground point cloud in the three-dimensional point cloud beyond the first preset distance to obtain a part of point cloud corresponding to the second target object; and determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
For example, the three-dimensional point cloud detected by the detection device beyond 80 meters from the detection device is obtained. Since this point cloud may include ground points, the ground point cloud in the three-dimensional point cloud beyond 80 meters needs to be removed first, and the remaining three-dimensional point cloud is then clustered to obtain a partial point cloud corresponding to the second target object, for example the three-dimensional point cloud 104 shown in fig. 11.
Alternatively, after the slope of the ground on which the first target object is located has been determined, the ground is extended along that slope to obtain the ground point cloud beyond 80 meters. When a second target object beyond 80 meters is detected, the extended ground point cloud beyond 80 meters is removed first, and the three-dimensional point cloud remaining after removal is clustered to obtain a partial point cloud corresponding to the second target object. Further, the point cloud cluster corresponding to the second target object is determined from this partial point cloud and the ground point cloud beyond 80 meters; specifically, the lower half of the second target object is completed so that its bottom lies in the same plane as the bottoms of the first target object 101, the first target object 102 and the first target object 103.
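For illustration, removing the extended ground points, clustering the remainder and completing each cluster down to the ground might be sketched as follows. The sketch assumes SciPy and scikit-learn are available, and it pads the lower half by simply copying the cluster footprint down to an assumed ground height, which is a simplification of the completion step described above:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def far_object_clusters(far_points, extended_ground, ground_height,
                        ground_tol=0.2, cluster_eps=1.0, min_points=5):
    """Cluster the 3-D points beyond the first preset distance after
    removing the extended ground point cloud, then pad each cluster
    down to the ground so that its bottom lies in the ground plane."""
    # 1. Drop points lying on (or very close to) the extended ground.
    distances, _ = cKDTree(extended_ground).query(far_points)
    object_points = far_points[distances > ground_tol]
    if len(object_points) == 0:
        return []

    # 2. Euclidean clustering of the remaining points (a stand-in for the
    #    clustering step described above).
    labels = DBSCAN(eps=cluster_eps, min_samples=min_points).fit_predict(object_points)

    clusters = []
    for label in set(labels.tolist()) - {-1}:       # -1 marks noise points
        cluster = object_points[labels == label]
        # 3. Complete the lower half: copy the cluster footprint down to the
        #    ground height so its bottom matches that of the first target objects.
        bottom = cluster.copy()
        bottom[:, 2] = ground_height
        clusters.append(np.vstack([cluster, bottom]))
    return clusters
```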
Specifically, the clustering process is similar to the clustering process described above and is not repeated here. The difference is that the vehicle height H used in this clustering process is greater than the vehicle height H used in the clustering process described above; for example, the vehicle height H used here may be 1.6 meters or 2.5 meters. Optionally, the method further includes: if the second target object is a vehicle and the width of the second target object is smaller than or equal to the first width, removing the three-dimensional point cloud with the height larger than or equal to the first height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to the second width, removing the three-dimensional point cloud with the height greater than or equal to the second height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object; generating an identification frame for representing a vehicle according to the residual three-dimensional point cloud corresponding to the second target object, wherein the identification frame is used for navigation decision of the movable platform; wherein the second width is greater than the first width and the second height is greater than the first height.
It can be understood that small objects such as guideboards or branches may be present above the second target object and may be very close to it, so that when the point cloud cluster corresponding to the second target object is obtained by clustering, it may include three-dimensional points belonging to such guideboards or branches. Therefore, after the vehicle-mounted device determines, with the detection model corresponding to the distance between the second target object and the detection device, that the second target object is a vehicle, the point cloud cluster corresponding to the second target object needs to be processed further.
Specifically, whether the second target object is a small vehicle or a large vehicle is determined according to its width. For example, if the width of the second target object is less than or equal to the first width, the second target object is determined to be a small vehicle; if the width of the second target object is greater than the first width and less than or equal to the second width, the second target object is determined to be a large vehicle, the second width being greater than the first width. Further, if the second target object is a small vehicle, the three-dimensional points whose height is greater than or equal to the first height, for example above 1.8 meters, are removed from the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object. If the second target object is a large vehicle, the three-dimensional points whose height is greater than or equal to the second height, for example above 3.2 meters, are removed from the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object. As shown in fig. 12, the three-dimensional point cloud in the circle 105 corresponds to a branch. An identification frame representing the vehicle is then generated from the residual three-dimensional point cloud corresponding to the second target object. For example, the three-dimensional point cloud corresponding to the branch in the circle 105 shown in fig. 12 is removed from the three-dimensional point cloud 104 shown in fig. 11 to obtain the residual three-dimensional point cloud corresponding to the second target object; the lower half of the second target object is then completed according to the ground point cloud beyond 80 meters, so that the bottom of the second target object lies in the same plane as the bottoms of the first target object 101, the first target object 102 and the first target object 103, yielding the identification frame 106 representing the vehicle shown in fig. 12. A vehicle equipped with the detection device, such as the vehicle 11, can then make navigation decisions based on the identification frame 106, for example planning a route, planning the driving route of the vehicle 11 in advance, controlling the vehicle 11 to change lanes in advance, or controlling the speed of the vehicle 11 in advance.
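A sketch of the width-based pruning and identification-frame generation follows; the 1.8 m and 3.2 m height limits are taken from the description above, while the width thresholds, the axis convention and the axis-aligned box are illustrative assumptions:

```python
import numpy as np

# The 1.8 m and 3.2 m limits come from the description above; the width
# thresholds are assumed values used only for illustration.
FIRST_WIDTH, SECOND_WIDTH = 2.2, 3.0      # metres (assumptions)
FIRST_HEIGHT, SECOND_HEIGHT = 1.8, 3.2    # metres (from the description)

def prune_and_box(cluster):
    """Remove points that are likely guideboards or branches above a
    detected vehicle, then build an axis-aligned identification frame
    (min/max corners) from the remaining points."""
    width = cluster[:, 1].max() - cluster[:, 1].min()   # lateral extent
    if width <= FIRST_WIDTH:                            # small vehicle
        height_limit = FIRST_HEIGHT
    elif width <= SECOND_WIDTH:                         # large vehicle
        height_limit = SECOND_HEIGHT
    else:                                               # wider objects not handled here
        return None

    ground_z = cluster[:, 2].min()
    remaining = cluster[cluster[:, 2] < ground_z + height_limit]

    # Axis-aligned identification frame used for navigation decisions.
    return remaining.min(axis=0), remaining.max(axis=0)
```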
In this embodiment, the far ground point cloud is determined according to the position of the nearby first target object, and the far second target object is detected according to that ground point cloud, so that the movable platform equipped with the detection device can make navigation decisions based on the far second target object, which improves the safety of the movable platform. In addition, by determining whether the second target object is a small vehicle or a large vehicle and removing, according to the height corresponding to each, the three-dimensional points that may belong to small objects such as guideboards or branches, the detection precision of the second target object is improved. Furthermore, the slope of the plane formed by at least three first target objects is determined from their positions, the slope of the ground on which the first target objects are located is determined from the slope of that plane, and whether the ground is level, a viaduct or an inclined slope can also be determined from the ground slope, which improves the accuracy of ground identification.
The embodiment of the application provides a target object detection system. Fig. 13 is a structural diagram of a target object detection system according to an embodiment of the present application, and as shown in fig. 13, the target object detection system 130 includes: a detection device 131, a memory 132, and a processor 133. The detection device 131 is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud. The processor 133 may specifically be a component of the vehicle-mounted device in the above embodiments, or another component, device or assembly with a data processing function mounted in the vehicle. In particular, the memory 132 is used to store program code; the processor 133 invokes the program code and, when the program code is executed, performs the following operations: acquiring the three-dimensional point cloud; clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition; determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model; and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
Optionally, the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model, and before determining the object type of the first target object, is further configured to: determining a direction of motion of the first target object; and adjusting the motion direction of the first target object to be a preset direction.
Optionally, the preset direction is a movement direction of a sample object used for training the detection model.
Optionally, when the processor 133 determines the motion direction of the first target object, it is specifically configured to: and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
Optionally, when determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first time and the three-dimensional point cloud corresponding to the first target object at the second time, the processor 133 is specifically configured to: respectively projecting the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment into a world coordinate system; and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
Optionally, when the processor 133 determines the motion direction of the first target object, it is specifically configured to: projecting the three-dimensional point cloud corresponding to the first target object at a first moment in the two-dimensional image at the first moment to obtain a first projection point; projecting the three-dimensional point cloud corresponding to the first target object at a second moment in the two-dimensional image at the second moment to obtain a second projection point; determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point of which the position relation with the first projection point accords with a preset position relation; determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point of which the position relation with the second projection point accords with a preset position relation, and the second feature point corresponds to the first feature point; and determining the motion direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
Optionally, when the processor 133 determines the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first time, the processor is specifically configured to: determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first feature point in the two-dimensional image at the first moment; and determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
Optionally, when the processor 133 determines the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second time, the processor is specifically configured to: determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second feature point in the two-dimensional image at the second moment; and determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
Optionally, before the processor 133 determines the motion direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the processor is further configured to: and respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
Optionally, the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model, and after determining the object type of the first target object, is further configured to: and if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to a preset condition.
Optionally, the preset condition includes at least one of the following conditions: the size of the first target object meets a preset size; the coincidence degree between the first target object and other target objects around the first target object is smaller than a preset threshold value.
Optionally, before clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the processor 133 is further configured to: removing specific point clouds in the three-dimensional point clouds, wherein the specific point clouds comprise ground point clouds.
Optionally, a distance of the first target object relative to the movable platform is less than or equal to a first preset distance; the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model, and after determining the object type of the first target object, is further configured to: if the first target object is determined to be a vehicle through the target detection model, determining ground point clouds beyond the first preset distance according to the position of the first target object; and determining the object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
Optionally, when the processor 133 determines the ground point cloud outside the first preset distance according to the position of the first target object, the processor is specifically configured to: determining the gradient of the ground where the first target object is located according to the position of the first target object; and determining the ground point cloud beyond the first preset distance according to the gradient of the ground.
Optionally, when the processor 133 determines the gradient of the ground where the first target object is located according to the position of the first target object, the processor is specifically configured to: and determining the gradient of a plane formed by at least three first target objects according to the positions of at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
Optionally, when the processor 133 determines the object type of the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor is specifically configured to: determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane; and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
Optionally, when the processor 133 determines, according to the ground point cloud outside the first preset distance, the point cloud cluster corresponding to the second target object outside the first preset distance, the processor is specifically configured to: clustering the three-dimensional point cloud except the ground point cloud in the three-dimensional point cloud beyond the first preset distance to obtain a part of point cloud corresponding to the second target object; and determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
Optionally, the processor 133 is further configured to: if the second target object is a vehicle and the width of the second target object is smaller than or equal to the first width, removing the three-dimensional point cloud with the height larger than or equal to the first height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to the second width, removing the three-dimensional point cloud with the height greater than or equal to the second height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object; generating an identification frame for representing a vehicle according to the residual three-dimensional point cloud corresponding to the second target object, wherein the identification frame is used for navigation decision of the movable platform; wherein the second width is greater than the first width and the second height is greater than the first height.
The specific principle and implementation of the target object detection system provided in the embodiment of the present application are similar to those of the above embodiments, and are not described herein again.
The embodiment of the application provides a movable platform. The movable platform comprises: a body, a power system and the target object detection system described in the above embodiments. The power system is arranged on the body and is used for providing moving power. The target object detection system can implement the target object detection method described above; its specific principle and implementation are similar to those of the above embodiments and are not repeated here. The present embodiment does not limit the specific form of the movable platform; for example, the movable platform may be an unmanned aerial vehicle, a movable robot, a vehicle, or the like.
In addition, the present embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the detection method of the target object described in the above embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (39)

1. A method for detecting a target object, which is applied to a movable platform provided with a detection device for detecting an environment around the movable platform to obtain a three-dimensional point cloud, the method comprising:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition;
determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model;
and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
2. The method of claim 1, wherein before the detecting the point cloud cluster corresponding to the first target object by the target detection model and determining the object type of the first target object, the method further comprises:
determining a direction of motion of the first target object;
and adjusting the motion direction of the first target object to be a preset direction.
3. The method of claim 2, wherein the predetermined direction is a moving direction of a sample object used for training the detection model.
4. The method of claim 3, wherein the determining the direction of motion of the first target object comprises:
and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
5. The method of claim 4, wherein determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first time and the three-dimensional point cloud corresponding to the first target object at the second time comprises:
respectively projecting the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment into a world coordinate system;
and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
6. The method of claim 3, wherein the determining the direction of motion of the first target object comprises:
projecting the three-dimensional point cloud corresponding to the first target object at a first moment in the two-dimensional image at the first moment to obtain a first projection point;
projecting the three-dimensional point cloud corresponding to the first target object at a second moment in the two-dimensional image at the second moment to obtain a second projection point;
determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point of which the position relation with the first projection point accords with a preset position relation;
determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point of which the position relation with the second projection point accords with a preset position relation, and the second feature point corresponds to the first feature point;
and determining the motion direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
7. The method of claim 6, wherein determining three-dimensional information of the first feature point from the first projection point and the first feature point in the two-dimensional image at the first time comprises:
determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first feature point in the two-dimensional image at the first moment;
and determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
8. The method according to claim 6, wherein determining three-dimensional information of a second feature point in the two-dimensional image at the second time from the second projection point and the second feature point comprises:
determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second feature point in the two-dimensional image at the second moment;
and determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
9. The method according to any one of claims 6 to 8, wherein before determining the motion direction of the first target object based on the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the method further comprises:
and respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
10. The method according to any one of claims 1-9, wherein after detecting the point cloud cluster corresponding to the first target object by the target detection model and determining the object type of the first target object, the method further comprises:
and if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to a preset condition.
11. The method of claim 10, wherein the preset condition comprises at least one of:
the size of the first target object meets a preset size;
the spatial overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
12. The method according to any one of claims 1-11, wherein before clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the method further comprises:
removing specific point clouds in the three-dimensional point clouds, wherein the specific point clouds comprise ground point clouds.
13. The method of any of claims 1-9, wherein a distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
after the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes:
if the first target object is determined to be a vehicle through the target detection model, determining ground point clouds beyond the first preset distance according to the position of the first target object;
and determining the object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
14. The method of claim 13, wherein determining the ground point cloud outside the first preset distance from the location of the first target object comprises:
determining the gradient of the ground where the first target object is located according to the position of the first target object;
and determining the ground point cloud beyond the first preset distance according to the gradient of the ground.
15. The method of claim 14, wherein determining the grade of the ground on which the first target object is located based on the position of the first target object comprises:
and determining the gradient of a plane formed by at least three first target objects according to the positions of at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
16. The method according to any one of claims 13-15, wherein the determining the object type of the second target object outside the first preset distance from the ground point cloud outside the first preset distance comprises:
determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane;
and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
17. The method of claim 16, wherein the determining point cloud clusters corresponding to second target objects outside the first preset distance according to the ground point cloud outside the first preset distance comprises:
clustering the three-dimensional point cloud except the ground point cloud in the three-dimensional point cloud beyond the first preset distance to obtain a part of point cloud corresponding to the second target object;
and determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
18. The method of claim 17, further comprising:
if the second target object is a vehicle and the width of the second target object is smaller than or equal to the first width, removing the three-dimensional point cloud with the height larger than or equal to the first height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object;
if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to the second width, removing the three-dimensional point cloud with the height greater than or equal to the second height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object;
generating an identification frame for representing a vehicle according to the residual three-dimensional point cloud corresponding to the second target object, wherein the identification frame is used for navigation decision of the movable platform;
wherein the second width is greater than the first width and the second height is greater than the first height.
19. A target object detection system, comprising: a detection device, a memory, and a processor;
the detection equipment is used for detecting the surrounding environment of the movable platform to obtain three-dimensional point cloud;
the memory is used for storing program codes;
the processor, invoking the program code, when executed, is configured to:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition;
determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model;
and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
20. The system of claim 19, wherein the processor, before detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, is further configured to:
determining a direction of motion of the first target object;
and adjusting the motion direction of the first target object to be a preset direction.
21. The system of claim 20, wherein the predetermined direction is a moving direction of a sample object used for training the detection model.
22. The system of claim 21, wherein the processor, when determining the direction of motion of the first target object, is specifically configured to:
and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
23. The system of claim 22, wherein the processor is configured to determine the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at a first time and the three-dimensional point cloud corresponding to the first target object at a second time, and is specifically configured to:
respectively projecting the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment into a world coordinate system;
and determining the motion direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
24. The system of claim 21, wherein the processor, when determining the direction of motion of the first target object, is specifically configured to:
projecting the three-dimensional point cloud corresponding to the first target object at a first moment in the two-dimensional image at the first moment to obtain a first projection point;
projecting the three-dimensional point cloud corresponding to the first target object at a second moment in the two-dimensional image at the second moment to obtain a second projection point;
determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point of which the position relation with the first projection point accords with a preset position relation;
determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point of which the position relation with the second projection point accords with a preset position relation, and the second feature point corresponds to the first feature point;
and determining the motion direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
25. The system of claim 24, wherein the processor, when determining the three-dimensional information of the first feature point from the first projection point and the first feature point in the two-dimensional image at the first time, is specifically configured to:
determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first feature point in the two-dimensional image at the first moment;
and determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
26. The system of claim 24, wherein the processor, when determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second time, is specifically configured to:
determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second feature point in the two-dimensional image at the second moment;
and determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
27. The system according to any one of claims 24-26, wherein the processor is further configured to, prior to determining the direction of motion of the first target object based on the three-dimensional information of the first feature point and the three-dimensional information of the second feature point:
and respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
28. The system of any one of claims 19-27, wherein the processor, after detecting the point cloud cluster corresponding to the first target object by the target detection model and determining the object type of the first target object, is further configured to:
and if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to a preset condition.
29. The system of claim 28, wherein the preset condition comprises at least one of:
the size of the first target object meets a preset size;
the coincidence degree between the first target object and other target objects around the first target object is smaller than a preset threshold value.
30. The system of any one of claims 19-29, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the processor is further configured to:
removing specific point clouds in the three-dimensional point clouds, wherein the specific point clouds comprise ground point clouds.
31. The system of any one of claims 19-27, wherein a distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
the processor detects the point cloud cluster corresponding to the first target object through the target detection model, and after determining the object type of the first target object, the processor is further configured to:
if the first target object is determined to be a vehicle through the target detection model, determining ground point clouds beyond the first preset distance according to the position of the first target object;
and determining the object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
32. The system of claim 31, wherein the processor is configured to determine the ground point cloud outside the first predetermined distance based on the location of the first target object, and is further configured to:
determining the gradient of the ground where the first target object is located according to the position of the first target object;
and determining the ground point cloud beyond the first preset distance according to the gradient of the ground.
33. The system of claim 32, wherein the processor, when determining the grade of the ground on which the first target object is located based on the position of the first target object, is configured to:
and determining the gradient of a plane formed by at least three first target objects according to the positions of at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
34. The system according to any one of claims 31 to 33, wherein the processor is configured to determine the object type of the second target object outside the first predetermined distance from the ground point cloud outside the first predetermined distance, and in particular to:
determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane;
and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
35. The system of claim 34, wherein the processor is configured to, when determining the point cloud cluster corresponding to the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, specifically:
clustering the three-dimensional point cloud except the ground point cloud in the three-dimensional point cloud beyond the first preset distance to obtain a part of point cloud corresponding to the second target object;
and determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
36. The system of claim 35, wherein the processor is further configured to:
if the second target object is a vehicle and the width of the second target object is smaller than or equal to the first width, removing the three-dimensional point cloud with the height larger than or equal to the first height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object;
if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to the second width, removing the three-dimensional point cloud with the height greater than or equal to the second height in the point cloud cluster corresponding to the second target object to obtain the residual three-dimensional point cloud corresponding to the second target object;
generating an identification frame for representing a vehicle according to the residual three-dimensional point cloud corresponding to the second target object, wherein the identification frame is used for navigation decision of the movable platform; wherein the second width is greater than the first width and the second height is greater than the first height.
37. A movable platform, comprising:
a body;
the power system is arranged on the body and used for providing moving power;
and a target object detection system according to any one of claims 19-36.
38. The movable platform of claim 37, wherein the movable platform comprises: unmanned aerial vehicle, mobile robot or vehicle.
39. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to perform the method of any one of claims 1-18.
CN201980033130.6A 2019-09-10 2019-09-10 Target object detection method, system, device and storage medium Pending CN112154454A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105158 WO2021046716A1 (en) 2019-09-10 2019-09-10 Method, system and device for detecting target object and storage medium

Publications (1)

Publication Number Publication Date
CN112154454A true CN112154454A (en) 2020-12-29

Family

ID=73891475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980033130.6A Pending CN112154454A (en) 2019-09-10 2019-09-10 Target object detection method, system, device and storage medium

Country Status (2)

Country Link
CN (1) CN112154454A (en)
WO (1) WO2021046716A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112835061A (en) * 2021-02-04 2021-05-25 郑州衡量科技股份有限公司 Dynamic vehicle separation and width and height detection method and system based on ToF sensor
CN112906519A (en) * 2021-02-04 2021-06-04 北京邮电大学 Vehicle type identification method and device
CN112907745A (en) * 2021-03-23 2021-06-04 北京三快在线科技有限公司 Method and device for generating digital orthophoto map
CN113838196A (en) * 2021-11-24 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 Point cloud data processing method, device, equipment and storage medium
CN113894050A (en) * 2021-09-14 2022-01-07 深圳玩智商科技有限公司 Logistics piece sorting method, sorting equipment and storage medium
US20220207822A1 (en) * 2020-12-29 2022-06-30 Volvo Car Corporation Ensemble learning for cross-range 3d object detection in driver assist and autonomous driving systems

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076922A (en) * 2021-04-21 2021-07-06 北京经纬恒润科技股份有限公司 Object detection method and device
CN113610967B (en) * 2021-08-13 2024-03-26 北京市商汤科技开发有限公司 Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN113781639B (en) * 2021-09-22 2023-11-28 交通运输部公路科学研究所 Quick construction method for digital model of large-scene road infrastructure
CN114162126A (en) * 2021-12-28 2022-03-11 上海洛轲智能科技有限公司 Vehicle control method, device, equipment, medium and product
CN115018910A (en) * 2022-04-19 2022-09-06 京东科技信息技术有限公司 Method and device for detecting target in point cloud data and computer readable storage medium
CN115457496B (en) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 Automatic driving retaining wall detection method and device and vehicle
CN115600395B (en) * 2022-10-09 2023-07-18 南京领鹊科技有限公司 Indoor engineering quality acceptance evaluation method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983201B2 (en) * 2012-07-30 2015-03-17 Microsoft Technology Licensing, Llc Three-dimensional visual phrases for object recognition
CN108171796A (en) * 2017-12-25 2018-06-15 燕山大学 A kind of inspection machine human visual system and control method based on three-dimensional point cloud
CN108317953A (en) * 2018-01-19 2018-07-24 东北电力大学 A kind of binocular vision target surface 3D detection methods and system based on unmanned plane
CN108319920B (en) * 2018-02-05 2021-02-09 武汉光谷卓越科技股份有限公司 Road marking detection and parameter calculation method based on line scanning three-dimensional point cloud
CN108680100B (en) * 2018-03-07 2020-04-17 福建农林大学 Method for matching three-dimensional laser point cloud data with unmanned aerial vehicle point cloud data

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220207822A1 (en) * 2020-12-29 2022-06-30 Volvo Car Corporation Ensemble learning for cross-range 3d object detection in driver assist and autonomous driving systems
CN112835061A (en) * 2021-02-04 2021-05-25 郑州衡量科技股份有限公司 Dynamic vehicle separation and width and height detection method and system based on ToF sensor
CN112906519A (en) * 2021-02-04 2021-06-04 北京邮电大学 Vehicle type identification method and device
CN112906519B (en) * 2021-02-04 2023-09-26 北京邮电大学 Vehicle type identification method and device
CN112835061B (en) * 2021-02-04 2024-02-13 郑州衡量科技股份有限公司 ToF sensor-based dynamic vehicle separation and width-height detection method and system
CN112907745A (en) * 2021-03-23 2021-06-04 北京三快在线科技有限公司 Method and device for generating digital orthophoto map
CN112907745B (en) * 2021-03-23 2022-04-01 北京三快在线科技有限公司 Method and device for generating digital orthophoto map
CN113894050A (en) * 2021-09-14 2022-01-07 深圳玩智商科技有限公司 Logistics piece sorting method, sorting equipment and storage medium
CN113838196A (en) * 2021-11-24 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 Point cloud data processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2021046716A1 (en) 2021-03-18

Similar Documents

Publication Publication Date Title
CN112154454A (en) Target object detection method, system, device and storage medium
US11320833B2 (en) Data processing method, apparatus and terminal
CN108152831B (en) Laser radar obstacle identification method and system
CN108419446B (en) System and method for laser depth map sampling
US9070289B2 (en) System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
US9094673B2 (en) Arrangement and method for providing a three dimensional map representation of an area
US9121717B1 (en) Collision avoidance for vehicle control
CN110163904A (en) Object marking method, control method for movement, device, equipment and storage medium
EP2874097A2 (en) Automatic scene parsing
CN112912920A (en) Point cloud data conversion method and system for 2D convolutional neural network
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
JP2014138420A (en) Depth sensing method and system for autonomous vehicle
CN106164931B (en) Method and device for displaying objects on a vehicle display device
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN109074653B (en) Method for detecting an object next to a road of a motor vehicle, computing device, driver assistance system and motor vehicle
CN111213153A (en) Target object motion state detection method, device and storage medium
US10832428B2 (en) Method and apparatus for estimating a range of a moving object
CN110969064A (en) Image detection method and device based on monocular vision and storage equipment
CN111699410A (en) Point cloud processing method, device and computer readable storage medium
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
CN114119729A (en) Obstacle identification method and device
CN113516711A (en) Camera pose estimation techniques
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN113435224A (en) Method and device for acquiring 3D information of vehicle
CN114049542A (en) Fusion positioning method based on multiple sensors in dynamic scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination