WO2021046716A1 - Target object detection method, system, device, and storage medium

Target object detection method, system, device, and storage medium

Info

Publication number
WO2021046716A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
point cloud
dimensional
point
target
Application number
PCT/CN2019/105158
Other languages
English (en)
French (fr)
Inventor
周游
蔡剑钊
武志远
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/105158 (WO2021046716A1)
Priority to CN201980033130.6A (CN112154454A)
Publication of WO2021046716A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Definitions

  • The embodiments of the present application relate to the field of movable platforms, and in particular to a detection method, system, device, and storage medium for a target object.
  • An automatic driving system or a driver assistance system is usually provided with a photographing device, and surrounding vehicles are detected through the two-dimensional images collected by the photographing device.
  • However, detecting surrounding vehicles only through two-dimensional images does not provide sufficient accuracy.
  • The embodiments of the present application provide a method, system, device, and storage medium for detecting a target object, so as to improve the accuracy of detecting the target object.
  • A first aspect of the embodiments of the present application is to provide a method for detecting a target object, applied to a movable platform, where the movable platform is provided with a detection device, and the detection device is used to detect the surrounding environment of the movable platform to obtain a three-dimensional point cloud. The method includes: acquiring the three-dimensional point cloud; clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, where the height of the cluster center of the clustered point cloud cluster meets a preset height condition; determining a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between distance and detection model; and detecting the point cloud cluster corresponding to the first target object through the target detection model to determine the object type of the first target object.
  • A second aspect of the embodiments of the present application is to provide a target object detection system, including: a detection device, a memory, and a processor.
  • The detection device is used to detect the surrounding environment of the movable platform to obtain a three-dimensional point cloud.
  • The memory is used to store program code.
  • The processor calls the program code, and when the program code is executed, the processor is used to perform the following operations: acquire the three-dimensional point cloud; cluster the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, where the height of the cluster center of the clustered point cloud cluster meets a preset height condition; determine a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between distance and detection model; and detect the point cloud cluster corresponding to the first target object through the target detection model to determine the object type of the first target object.
  • A third aspect of the embodiments of the present application is to provide a movable platform, including: a fuselage; a power system installed on the fuselage to provide power for movement; and the target object detection system according to the second aspect.
  • A fourth aspect of the embodiments of the present application is to provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in the first aspect.
  • In the detection method, system, device, and storage medium for a target object provided by the embodiments of the present application, the three-dimensional point cloud detected by the detection device mounted on the movable platform is clustered to obtain the point cloud cluster corresponding to the target object, where the height of the cluster center of the point cloud cluster needs to meet the preset height condition.
  • Further, the target detection model is determined according to the distance of the target object relative to the movable platform and the correspondence between distance and detection model, and the point cloud cluster corresponding to the target object is detected through the target detection model so that the target detection model determines the object type of the target object. That is, target objects at different distances from the movable platform are detected by different detection models, thereby improving the detection accuracy of the target object.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for detecting a target object provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of yet another application scenario provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a detection model provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of a method for detecting a target object provided by another embodiment of the present application.
  • FIG. 7 is a schematic diagram of projecting a three-dimensional point cloud onto a two-dimensional image according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a two-dimensional feature point provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a method for detecting a target object according to another embodiment of the present application.
  • FIG. 10 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another three-dimensional point cloud provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of yet another three-dimensional point cloud provided by an embodiment of the present application.
  • FIG. 13 is a structural diagram of a target object detection system provided by an embodiment of the present application.
  • 31: point cloud cluster
  • 32: point cloud cluster
  • 1002: upper-left corner image
  • 1003: lower-left corner image
  • 100: white arc
  • 101: first target object
  • 102: first target object
  • 103: first target object
  • 104: three-dimensional point cloud
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may be present at the same time.
  • The embodiments of the present application provide a method for detecting a target object.
  • The method is applied to a movable platform, the movable platform is provided with a detection device, and the detection device is used to detect the surrounding environment of the movable platform to obtain a three-dimensional point cloud.
  • The movable platform may be a drone, a movable robot, or a vehicle.
  • The following description takes the case where the movable platform is a vehicle as an example.
  • The vehicle may be an unmanned vehicle or a vehicle equipped with an Advanced Driver Assistance System (ADAS).
  • The vehicle 11 is a carrier equipped with a detection device, and the detection device may specifically be a binocular stereo camera, a time-of-flight (TOF) camera, and/or a lidar.
  • The detection device detects the surrounding environment of the vehicle 11 in real time to obtain a three-dimensional point cloud.
  • The environment around the vehicle 11 includes the objects around the vehicle 11, such as the ground around the vehicle 11, pedestrians, vehicles, and the like.
  • Take a lidar as an example.
  • When a beam of laser light emitted by the lidar illuminates the surface of an object, the surface of the object reflects the laser light, and the lidar can determine information such as the position and distance of the object relative to the lidar based on the reflected laser light.
  • If the laser beam emitted by the lidar scans according to a certain trajectory, for example a 360-degree rotating scan, a large number of laser points will be obtained, and the laser point cloud data of the object can thus be formed, that is, a three-dimensional point cloud.
  • This embodiment does not limit the execution subject of the detection method of the target object.
  • The detection method of the target object can be executed by the vehicle-mounted device in the vehicle, or by another device with data processing functions besides the vehicle-mounted device, for example, the server 12 shown in FIG. 1; the vehicle 11 and the server 12 can communicate wirelessly or by wire.
  • The vehicle 11 can send the three-dimensional point cloud detected by the detection device to the server 12, and the server 12 executes the detection method of the target object.
  • The following uses a vehicle-mounted device as an example to introduce the target object detection method provided in the embodiments of the present application.
  • The vehicle-mounted device may be a device with a data processing function integrated in the vehicle center console, or a tablet computer, mobile phone, notebook computer, or the like placed in the vehicle.
  • FIG. 2 is a flowchart of a method for detecting a target object provided by an embodiment of the present application. As shown in FIG. 2, the method in this embodiment may include the following steps.
  • S201. Acquire the three-dimensional point cloud.
  • The detection device mounted on the vehicle 11 detects the surrounding environment of the vehicle 11 in real time to obtain a three-dimensional point cloud.
  • The detection device can communicate with the vehicle-mounted device on the vehicle 11, so that the vehicle-mounted device on the vehicle 11 can obtain the three-dimensional point cloud detected by the detection device in real time.
  • The three-dimensional point cloud includes the three-dimensional point cloud of the ground around the vehicle 11, the three-dimensional point cloud of pedestrians, and the three-dimensional point clouds of other vehicles such as the vehicle 13 and the vehicle 14.
  • S202. Perform clustering on the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, where the height of the cluster center of the clustered point cloud cluster meets a preset height condition.
  • The three-dimensional point cloud 15 is a three-dimensional point cloud detected by the detection device mounted on the vehicle 11.
  • The three-dimensional point cloud 15 includes a plurality of three-dimensional points; that is, the three-dimensional point cloud is a collection of many three-dimensional points.
  • Three-dimensional points can also be referred to as point cloud points.
  • Each point cloud point carries position information, which may specifically be the three-dimensional coordinates of the point cloud point in a three-dimensional coordinate system.
  • This embodiment does not limit the three-dimensional coordinate system.
  • The three-dimensional coordinate system may specifically be a vehicle body coordinate system, an earth coordinate system, or a world coordinate system. Therefore, according to the position information of each point cloud point, the height of each point cloud point relative to the ground can be determined.
  • k can be a constant. It is understandable that, when the three-dimensional point cloud 15 is clustered, the aggregation process between different three-dimensional points in the three-dimensional point cloud 15 can be similar to the aggregation process described in formula (1) above, and will not be detailed one by one here.
  • After clustering, the point cloud cluster 31 and the point cloud cluster 32 are obtained, where the heights of the cluster centers of the point cloud cluster 31 and the point cloud cluster 32 are close to the preset height. Further, according to the point cloud cluster 31, the first target object 41 shown in FIG. 4 can be obtained, and according to the point cloud cluster 32, the first target object 42 shown in FIG. 4 can be obtained.
  • The first target object is only schematically described here, and the number of first target objects is not limited.
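  • As a concrete illustration of this clustering step, the following Python sketch groups point cloud points by Euclidean distance and keeps only the clusters whose center height falls within a preset band. The radius and the height band are illustrative assumptions, not values from the present application.

```python
import numpy as np

def euclidean_cluster(points, radius=0.8):
    """Greedy region growing: points connected through neighbors closer
    than `radius` share a cluster label. `points` is an (N, 3) array."""
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            idx = stack.pop()
            dists = np.linalg.norm(points - points[idx], axis=1)
            neighbors = np.flatnonzero((dists < radius) & (labels == -1))
            labels[neighbors] = current
            stack.extend(neighbors.tolist())
        current += 1
    return labels

def clusters_meeting_height_condition(points, z_min=0.3, z_max=2.0):
    """Keep clusters whose center height lies in a preset band, mirroring
    the 'cluster center meets a preset height condition' step."""
    labels = euclidean_cluster(points)
    kept = []
    for c in range(labels.max() + 1):
        cluster = points[labels == c]
        if z_min <= cluster.mean(axis=0)[2] <= z_max:
            kept.append(cluster)
    return kept
```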
  • S203. Determine a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between distance and detection model.
  • The point cloud cluster 31 and the point cloud cluster 32 shown in FIG. 3 each include a plurality of point cloud points. Since the point cloud points in the three-dimensional point cloud detected by the detection device at each sampling moment carry position information, the position information of each point cloud point can be used to calculate the distance between that point cloud point and the detection device. Further, according to the distances between the multiple point cloud points in a point cloud cluster and the detection device, the distance between the point cloud cluster and the vehicle body equipped with the detection device can be calculated, and thus the distance between the first target object corresponding to that point cloud cluster and the vehicle body can be obtained, for example, the distance of the first target object 41 relative to the vehicle 11 and the distance of the first target object 42 relative to the vehicle 11.
  • The distance of the first target object 41 relative to the vehicle 11 is smaller than the distance of the first target object 42 relative to the vehicle 11; for example, the distance of the first target object 41 relative to the vehicle 11 is denoted as L1, and the distance of the first target object 42 relative to the vehicle 11 is denoted as L2.
  • The vehicle-mounted device may determine the target detection model corresponding to L1 according to the distance L1 of the first target object 41 relative to the vehicle 11 and the correspondence between distance and detection model, and determine the target detection model corresponding to L2 according to the distance L2 of the first target object 42 relative to the vehicle 11 and the correspondence between distance and detection model.
  • Detection models corresponding to different distances can be trained in advance.
  • For example, sample objects can be divided according to their distance relative to the collection vehicle into the ranges of 0-90 meters, 75-165 meters, and 125-200 meters.
  • The collection vehicle may be the vehicle 11 described above, or may be a vehicle other than the vehicle 11.
  • The detection model obtained by training with sample objects in the range of 0-90 meters relative to the collection vehicle is detection model 1; the detection model obtained by training with sample objects in the range of 75-165 meters relative to the collection vehicle is detection model 2; and the detection model obtained by training with sample objects in the range of 125-200 meters relative to the collection vehicle is detection model 3. In this way, the correspondence between distance and detection model is obtained.
  • Alternatively, the detection model can be adjusted according to the actually acquired distance.
  • For example, a parameter that can be adjusted according to distance can be set in the detection model.
  • After the distance of the first target object is obtained, the parameter in the detection model is set according to the distance to obtain the target detection model.
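  • The correspondence between distance and detection model described above can be represented as a simple lookup table. The sketch below is a minimal illustration; the range boundaries follow the example above, the model entries are placeholders, and because the ranges overlap, the first matching entry is picked.

```python
# Illustrative distance -> detection model lookup (ranges from the example:
# 0-90 m, 75-165 m, 125-200 m).
DISTANCE_MODEL_TABLE = [
    ((0.0, 90.0), "detection_model_1"),
    ((75.0, 165.0), "detection_model_2"),
    ((125.0, 200.0), "detection_model_3"),
]

def select_target_detection_model(distance_m):
    for (lo, hi), model in DISTANCE_MODEL_TABLE:
        if lo <= distance_m <= hi:
            return model
    raise ValueError(f"no detection model covers distance {distance_m} m")
```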
  • S204. Detect the point cloud cluster corresponding to the first target object through the target detection model, and determine the object type of the first target object.
  • For example, if the vehicle-mounted device determines that the distance L1 of the first target object 41 relative to the vehicle 11 is within the range of 0-90 meters, it uses detection model 1 to detect the point cloud cluster corresponding to the first target object 41 to determine the object type of the first target object 41. If the distance L2 of the first target object 42 relative to the vehicle 11 is in the range of 75-165 meters, detection model 2 is used to detect the point cloud cluster corresponding to the first target object 42 to determine the object type of the first target object 42.
  • The point cloud distribution characteristics of vehicles within different distance ranges are different.
  • The point cloud corresponding to a long-range target is sparsely distributed, while the point cloud corresponding to a short-range target is densely distributed.
  • In addition, the point cloud corresponding to a short-range vehicle often captures the side of the vehicle, while the point cloud corresponding to a mid-range vehicle often captures the rear of the vehicle. Therefore, by training multiple detection models for different distances, the target can be identified more accurately.
  • The above-mentioned object types may include: road marking lines, vehicles, pedestrians, road signs, and other types.
  • For vehicles, specific types can also be identified based on the characteristics of the point cloud clusters; for example, construction vehicles, cars, buses, and so on can be distinguished.
  • The term "first target object" in this embodiment is only for distinguishing from the "second target object" in subsequent embodiments; both the first target object and the second target object may refer to target objects that can be detected by the detection device.
  • In this embodiment, the point cloud cluster corresponding to the target object is obtained by clustering the three-dimensional point cloud detected by the detection device mounted on the movable platform, where the height of the cluster center of the point cloud cluster needs to meet the preset height condition.
  • Further, the target detection model is determined according to the distance of the target object relative to the movable platform and the correspondence between distance and detection model, and the point cloud cluster corresponding to the target object is detected through the target detection model so that the target detection model determines the object type of the target object. That is to say, different detection models are used to detect target objects at different distances from the movable platform, thereby improving the detection accuracy of the target object.
  • In some embodiments, before clustering the three-dimensional point cloud to obtain the point cloud cluster corresponding to the first target object, the method further includes: removing a specific point cloud from the three-dimensional point cloud, where the specific point cloud includes a ground point cloud.
  • The three-dimensional point cloud 15 detected by the detection device not only includes the point cloud corresponding to the target object, but may also include a specific point cloud, for example, the ground point cloud 30. Therefore, before clustering the three-dimensional point cloud 15, the ground point cloud 30 in the three-dimensional point cloud 15 can be identified by a plane fitting method and removed, and the three-dimensional point cloud remaining after the ground point cloud 30 is removed is clustered.
  • In this way, the specific point cloud in the three-dimensional point cloud detected by the detection device mounted on the movable platform is removed, and the three-dimensional point cloud after removal of the specific point cloud is clustered to obtain the point cloud cluster corresponding to the target object. This avoids the influence of the specific point cloud on detecting the target object, thereby further improving the detection accuracy of the target object.
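  • The plane fitting mentioned above can be realized in several ways; the following sketch uses a basic RANSAC plane fit to mark likely ground points. The iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def ransac_ground_mask(points, n_iters=200, inlier_thresh=0.15, rng=None):
    """Fit a dominant plane with RANSAC and return a boolean mask of points
    within `inlier_thresh` meters of it; in a road scene this dominant
    plane is usually the ground. `points` is an (N, 3) array."""
    rng = rng or np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        mask = dist < inlier_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Usage: cluster only the non-ground points.
# non_ground = cloud[~ransac_ground_mask(cloud)]
```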
  • FIG. 6 is a flowchart of a method for detecting a target object provided by another embodiment of the present application. As shown in FIG. 6, on the basis of the foregoing embodiment, before the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes: determining the movement direction of the first target object; and adjusting the movement direction of the first target object to a preset direction.
  • In some embodiments, the determining of the movement direction of the first target object includes: determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment.
  • For example, the first moment is the previous moment and the second moment is the current moment.
  • The position of the first target object 41 may change in real time, and the detection device on the vehicle 11 detects the surrounding environment in real time; therefore, the vehicle-mounted device can acquire and process the three-dimensional point cloud detected by the detection device in real time.
  • The three-dimensional point cloud corresponding to the first target object 41 at the previous moment and the three-dimensional point cloud corresponding to the first target object 41 at the current moment may differ. Therefore, the movement direction of the first target object 41 can be determined according to the three-dimensional point cloud corresponding to the first target object 41 at the previous moment and the three-dimensional point cloud corresponding to the first target object 41 at the current moment.
  • In some embodiments, the determining of the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment includes: projecting the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into the world coordinate system respectively; and determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
  • For example, the three-dimensional point cloud corresponding to the first target object 41 at the previous moment and the three-dimensional point cloud corresponding to the first target object 41 at the current moment are respectively projected into the world coordinate system. Further, the relative positional relationship between the two point clouds is calculated by the Iterative Closest Point (ICP) algorithm; this relative positional relationship includes a rotation relationship and a translation relationship, and the movement direction of the first target object 41 can be determined according to the translation relationship. In a possible implementation manner, the translation relationship is taken as the movement direction of the first target object 41.
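  • To make the ICP step concrete, the following is a minimal point-to-point ICP sketch; the brute-force correspondence search and fixed iteration count are simplifications, and the returned translation vector plays the role of the translation relationship described above.

```python
import numpy as np

def icp_translation(source, target, n_iters=20):
    """Minimal point-to-point ICP: aligns `source` to `target` (both (N, 3)
    arrays in the world coordinate system) and returns the accumulated
    rotation matrix and translation vector."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        # Nearest neighbor in target for each source point (brute force).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # Best rigid transform via SVD (Kabsch algorithm).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# The movement direction is then the (normalized) translation:
# _, t = icp_translation(cloud_prev, cloud_curr)
# direction = t / np.linalg.norm(t)
```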
  • In another possible implementation, the determining of the movement direction of the first target object includes the following steps:
  • S601. Project the three-dimensional point cloud corresponding to the first target object at the first moment into the two-dimensional image at the first moment to obtain a first projection point.
  • The vehicle 11 may also be equipped with a photographing device, which may be used to photograph images of the surrounding environment of the vehicle 11; such an image is specifically a two-dimensional image.
  • The period at which the detection device obtains the three-dimensional point cloud and the shooting period of the photographing device may be the same or different. For example, when the detection device detects the three-dimensional point cloud of the first target object 41 at the previous moment, the photographing device captures one frame of two-dimensional image; when the detection device detects the three-dimensional point cloud of the first target object 41 at the current moment, the photographing device captures another frame of two-dimensional image.
  • The two-dimensional image captured by the photographing device at the previous moment may be recorded as the first image, and the two-dimensional image captured at the current moment may be recorded as the second image.
  • The three-dimensional point cloud of the first target object 41 at the previous moment may be projected onto the first image to obtain the first projection point.
  • S602. Project the three-dimensional point cloud corresponding to the first target object at the second moment into the two-dimensional image at the second moment to obtain a second projection point.
  • Correspondingly, the three-dimensional point cloud of the first target object 41 at the current moment is projected onto the second image to obtain the second projection point.
  • As shown in FIG. 7, the left area represents the three-dimensional point cloud detected by the detection device at a certain moment, and the right area represents the result of projecting the three-dimensional point cloud onto the two-dimensional image, that is, the projection area of the three-dimensional point cloud on the two-dimensional image; the projection area includes projection points.
  • In some embodiments, projecting the three-dimensional point cloud onto the two-dimensional image includes: projecting part or all of the point cloud points in the three-dimensional point cloud onto the two-dimensional plane along the Z axis.
  • The Z axis may be the Z axis of the vehicle body coordinate system, or the Z axis of the earth coordinate system.
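  • Read literally, projecting along the Z axis amounts to dropping the height coordinate and mapping the remaining two coordinates into pixel units, as in the sketch below; the scale and offset values are placeholders, and a full camera model would instead use the photographing device's intrinsic and extrinsic parameters.

```python
import numpy as np

def project_along_z(points, scale=10.0, offset=(400.0, 300.0)):
    """Project (N, 3) point cloud points onto a 2D image plane by dropping
    the Z (height) coordinate, then scaling/offsetting into pixel units.
    `scale` and `offset` are illustrative placeholders."""
    xy = points[:, :2]                       # discard height along Z
    pixels = xy * scale + np.asarray(offset)
    return pixels                            # (N, 2) projection points
```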
  • S603. Determine the three-dimensional information of a first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, where the first feature point is a feature point whose positional relationship with the first projection point conforms to a preset positional relationship.
  • Specifically, the projection points of the three-dimensional point cloud of the first target object 41 on the first image at the previous moment are recorded as first projection points, and the feature points on the first image are recorded as first feature points.
  • The positional relationship between a first feature point and the first projection points conforms to the preset positional relationship.
  • In some embodiments, determining the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment includes: determining the weight coefficient corresponding to the first projection point according to the distance between the first projection point and the first feature point in the two-dimensional image at the first moment; and determining the three-dimensional information of the first feature point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
  • As shown in FIG. 8, 80 represents the first image captured by the photographing device at the previous moment, and 81 represents the projection area formed by projecting the three-dimensional point cloud of the first target object 41 at the previous moment onto the first image 80.
  • A two-dimensional feature point, that is, a first feature point, is not necessarily a projection point; that is, the two-dimensional feature point does not necessarily have three-dimensional information.
  • In this case, the three-dimensional information of the two-dimensional feature point can be estimated through a Gaussian distribution.
  • For example, 82 represents any two-dimensional feature point in the projection area 81.
  • The projection points within a preset range around the two-dimensional feature point 82 are determined, for example, the projection points in a 10*10 pixel area; here, A, B, C, and D are the projection points within the preset range.
  • The distance of the projection point A relative to the two-dimensional feature point 82 is denoted as d1, the distance of the projection point B relative to the two-dimensional feature point 82 is denoted as d2, the distance of the projection point C relative to the two-dimensional feature point 82 is denoted as d3, and the distance of the projection point D relative to the two-dimensional feature point 82 is denoted as d4.
  • ( ⁇ 1 , ⁇ 1 ) represents the pixel coordinates of the projection point A on the first image 80
  • ( ⁇ 0 , ⁇ 0 ) represents the pixel coordinates of the two-dimensional feature point 82 on the first image 80
  • ( ⁇ 2 , ⁇ 2 ) represents the pixel coordinates of the projection point B on the first image 80
  • ( ⁇ 3 , ⁇ 3 ) represents the pixel coordinates of the projection point C on the first image 80
  • ( ⁇ 4 , ⁇ 4 ) represents the pixel coordinates of the projection point D on the first image 80.
  • The three-dimensional information of the three-dimensional point corresponding to projection point A is denoted as P1, that corresponding to projection point B as P2, that corresponding to projection point C as P3, and that corresponding to projection point D as P4.
  • P1, P2, P3, and P4 are each vectors containing the x, y, and z coordinates.
  • The three-dimensional information of the two-dimensional feature point 82 is denoted as P0, and P0 can be calculated by the following formulas (2) and (3):
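  • A plausible form of formulas (2) and (3), reconstructed as an assumption from the Gaussian-distribution estimate mentioned above and the symbols defined below, is:

$$\lambda_i = \exp\!\left(-\frac{d_i^2}{2\sigma^2}\right) \qquad (2)$$

$$P_0 = \frac{\sum_{i=1}^{n} \lambda_i P_i}{\sum_{i=1}^{n} \lambda_i} \qquad (3)$$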
  • Here, n represents the number of projection points within the preset range around the two-dimensional feature point 82, and λi represents a weight coefficient; different projection points may correspond to different weight coefficients or to the same weight coefficient.
  • σ is an adjustable parameter; for example, it can be a parameter adjusted based on experience.
  • S604. Determine the three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, where the second feature point is a feature point whose positional relationship with the second projection point conforms to the preset positional relationship, and the second feature point corresponds to the first feature point.
  • Specifically, the projection points of the three-dimensional point cloud of the first target object 41 on the second image at the current moment are recorded as second projection points, and the feature points on the second image are recorded as second feature points.
  • The positional relationship between a second feature point and the second projection points conforms to the preset positional relationship.
  • The second feature point corresponding to the first feature point can be determined by a corner tracking algorithm (Kanade-Lucas-Tomasi Tracking, KLT).
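  • As one way to obtain the corresponding feature point, OpenCV's pyramidal KLT tracker can propagate feature points from the first image to the second image; the snippet below is an illustrative sketch on a synthetic image pair, not the procedure of the present application.

```python
import cv2
import numpy as np

# Synthetic pair of grayscale frames: a bright square shifts 5 px right.
prev_img = np.zeros((480, 640), dtype=np.uint8)
curr_img = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(prev_img, (300, 200), (340, 240), 255, -1)
cv2.rectangle(curr_img, (305, 200), (345, 240), 255, -1)

# First feature points on the first image, shaped (N, 1, 2), float32.
prev_pts = np.array([[[300.0, 200.0]], [[340.0, 240.0]]], dtype=np.float32)

# Pyramidal KLT: track the points into the second image.
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_img, curr_img,
                                                 prev_pts, None)
second_feature_points = curr_pts[status.flatten() == 1]
```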
  • In some embodiments, determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment includes: determining the weight coefficient corresponding to the second projection point according to the distance between the second projection point and the second feature point in the two-dimensional image at the second moment; and determining the three-dimensional information of the second feature point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
  • The process of calculating the three-dimensional information of the second feature point on the second image is similar to the process of calculating the three-dimensional information of the first feature point on the first image, and will not be repeated here.
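  • A compact sketch of the weighting scheme above: each projection point within the preset range contributes its three-dimensional information in proportion to a Gaussian weight of its pixel distance to the feature point. The window size and σ value are illustrative, and the formula follows the reconstruction of formulas (2) and (3) given earlier.

```python
import numpy as np

def estimate_feature_point_3d(feature_uv, proj_uv, proj_xyz,
                              window=10.0, sigma=3.0):
    """Estimate the 3D info P0 of a 2D feature point from the projection
    points within a `window`-pixel neighborhood, Gaussian-weighted by
    pixel distance. `proj_uv` is (N, 2) pixels, `proj_xyz` is (N, 3)."""
    d = np.linalg.norm(proj_uv - np.asarray(feature_uv), axis=1)
    near = d <= window
    if not near.any():
        raise ValueError("no projection points near the feature point")
    lam = np.exp(-d[near] ** 2 / (2.0 * sigma ** 2))  # weight coefficients
    return (lam[:, None] * proj_xyz[near]).sum(0) / lam.sum()
```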
  • S605. Determine the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.
  • The three-dimensional information of the first feature point is the three-dimensional information P0 of the two-dimensional feature point 82 described above, and the three-dimensional information of the second feature point, that is, the three-dimensional information of the two-dimensional feature point in the second image corresponding to the two-dimensional feature point 82, is denoted as P′0.
  • According to P0 and P′0, the movement direction of the first target object 41 can be determined; for example, the position change between P0 and P′0 is the movement direction of the first target object 41.
  • In some embodiments, the method further includes: converting the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into the world coordinate system respectively.
  • That is, P0 and P′0 are respectively converted into the world coordinate system, and the position change between P0 and P′0 is calculated in the world coordinate system; this position change is the movement direction of the first target object 41.
  • Further, the movement direction of the first target object may be adjusted to a preset direction.
  • In some embodiments, the preset direction is the movement direction of the sample objects used for training the detection model.
  • For example, the movement direction of the sample objects used to train the detection model is northward, or toward the front or rear of the collection vehicle that detects the sample objects.
  • Taking north as an example, the movement direction of the first target object 41 or the first target object 42 needs to be adjusted to north. For example, if the angle between the movement direction of the first target object 41 or the first target object 42 and the north direction is θ, then the three-dimensional point cloud corresponding to the first target object 41 or the three-dimensional point cloud corresponding to the first target object 42 is rotated according to the rotation formula Rz(θ) described in the following formula (4), so that the movement direction of the first target object 41 or the first target object 42 is north:
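  • Assuming Rz(θ) is the standard rotation about the Z axis, a plausible reconstruction of formula (4) is:

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (4)$$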
  • In this embodiment, the movement direction of the target object is determined and adjusted to the preset direction. Since the preset direction is the movement direction of the sample objects used to train the detection model, after the movement direction of the target object is adjusted to the preset direction, the detection model can be used to detect the target object, which can further improve the detection accuracy of the target object.
  • The embodiments of the present application further provide a method for detecting a target object. On the basis of the foregoing embodiments, the method further includes: if the target detection model determines that the first target object is a vehicle, verifying the detection result of the target detection model according to a preset condition.
  • That is, the detection result is further verified through the preset condition.
  • In some embodiments, the preset condition includes at least one of the following: the size of the first target object meets a preset size; and the degree of spatial coincidence between the first target object and other target objects around the first target object is less than a preset threshold.
  • For example, it is first detected whether the width of the first target object 41 exceeds a preset width range; the preset width range may be the width range of a normal vehicle, for example, 2.8 meters to 3 meters. If the width of the first target object 41 exceeds the preset width range, it is determined that the detection result of the detection model for the first target object 41 is biased, that is, the first target object 41 may not be a vehicle. If the width of the first target object 41 is within the preset width range, the degree of spatial coincidence between the first target object 41 and other surrounding target objects is further detected.
  • The degree of spatial coincidence may specifically be the degree of spatial coincidence between the recognition frame used to characterize the first target object 41 and the recognition frames used to characterize the other surrounding target objects. If the degree of spatial coincidence is greater than the preset threshold, it is determined that the detection result of the detection model for the first target object 41 is biased, that is, the first target object 41 may not be a vehicle. If the degree of spatial coincidence is less than the preset threshold, it is determined that the detection result of the detection model for the first target object 41 is correct.
  • In this embodiment, after the target detection model determines that the first target object is a vehicle, the detection result is further verified according to the preset condition; if the detection result meets the preset condition, the detection result of the target detection model is determined to be correct, and otherwise the detection result of the target detection model is determined to be biased, thereby further improving the detection accuracy of the target object.
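  • A minimal sketch of this verification, assuming axis-aligned recognition frames in the ground plane; the example width band follows the text above, while the overlap measure and its threshold are illustrative assumptions standing in for the unspecified degree of spatial coincidence.

```python
def box_overlap_ratio(a, b):
    """Overlap area divided by the smaller box's area; boxes are
    (x_min, y_min, x_max, y_max) in the ground plane."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0
    smaller = min((a[2] - a[0]) * (a[3] - a[1]),
                  (b[2] - b[0]) * (b[3] - b[1]))
    return (w * h) / smaller

def verify_vehicle_detection(width_m, box, other_boxes,
                             width_range=(2.8, 3.0), overlap_thresh=0.3):
    """Return True only if both preset conditions hold: the width lies
    within the preset range, and the spatial coincidence with every
    surrounding recognition frame is below the threshold."""
    if not (width_range[0] <= width_m <= width_range[1]):
        return False
    return all(box_overlap_ratio(box, other) < overlap_thresh
               for other in other_boxes)
```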
  • FIG. 9 is a flowchart of a method for detecting a target object according to another embodiment of the present application.
  • In the foregoing embodiments, the distance of the first target object relative to the movable platform is less than or equal to a first preset distance.
  • As shown in FIG. 10, the right area 1001 is the three-dimensional point cloud detected by the detection device, the upper-left image 1002 represents the image after the height information is removed from the three-dimensional point cloud, and the lower-left image 1003 represents the two-dimensional image.
  • The white circles in the right area 1001 represent the ground point cloud, and the white arc 100 represents a first preset distance relative to the detection device, for example, 80 meters.
  • 101, 102, and 103 respectively represent first target objects whose distance relative to the detection device is less than or equal to 80 meters.
  • There are no white circles beyond 80 meters; that is, no ground point cloud is detected beyond 80 meters.
  • This embodiment proposes a method for determining the ground point cloud outside the first preset distance and detecting a second target object outside the first preset distance.
  • On the basis of the foregoing embodiments, the method further includes the following steps.
  • S901. Determine the ground point cloud outside the first preset distance according to the position of the first target object.
  • For example, the vehicle-mounted device uses the target detection model corresponding to the distance of the first target object 101 relative to the detection device to determine that the first target object 101 is a vehicle, uses the target detection model corresponding to the distance of the first target object 102 relative to the detection device to determine that the first target object 102 is a vehicle, and uses the target detection model corresponding to the distance of the first target object 103 relative to the detection device to determine that the first target object 103 is a vehicle.
  • Further, the ground point cloud beyond 80 meters from the detection device is determined according to the positions of the first target object 101, the first target object 102, and the first target object 103.
  • In some embodiments, determining the ground point cloud outside the first preset distance according to the position of the first target object includes: determining the slope of the ground where the first target object is located according to the position of the first target object; and determining the ground point cloud outside the first preset distance according to the slope of the ground.
  • For example, the slope of the ground where the first target object 101, the first target object 102, and the first target object 103 are located is determined according to their positions, and the ground point cloud beyond 80 meters from the detection device is determined according to the slope of the ground. It can be understood that this embodiment does not limit the number of first target objects.
  • In some embodiments, determining the slope of the ground where the first target object is located according to the position of the first target object includes: determining, according to the positions of at least three first target objects, the slope of the plane formed by the at least three first target objects, where the slope of the plane is the slope of the ground where the first target object is located.
  • Since the first target object 101, the first target object 102, and the first target object 103 are all vehicles, the three vehicles can determine a plane.
  • For example, the coordinates of the first target object 101 are denoted as A(x1, y1, z1), the coordinates of the first target object 102 as B(x2, y2, z2), and the coordinates of the first target object 103 as C(x3, y3, z3), giving the vector AB = (x2-x1, y2-y1, z2-z1) and the vector AC = (x3-x1, y3-y1, z3-z1).
  • According to the vectors AB and AC, the slope of the plane formed by the first target object 101, the first target object 102, and the first target object 103 can be determined; the slope of this plane is the slope of the ground where the first target object 101, the first target object 102, and the first target object 103 are located.
  • Whether the ground is level ground, a viaduct, or a slope can be determined according to the slope of the ground.
  • The ground on which the first target object is located may not be level ground; for example, it may be a viaduct or a slope. According to the slope of the ground, it can also be determined whether the first target object is on a viaduct or on a slope.
  • Further, the ground where the first target object is located can be extended according to the ground slope to obtain the ground point cloud beyond 80 meters, for example, by extending straight to a distance beyond 80 meters according to the width of the road where the first target object is located.
  • Here, it can be assumed that the ground beyond 80 meters is level, and the situation where there is a slope or a viaduct beyond 80 meters can be temporarily ignored.
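  • To illustrate the three-point plane construction, the sketch below computes the plane normal from the vectors AB and AC and derives the slope as the angle between the plane and the horizontal; defining the slope as this angle is an assumption.

```python
import numpy as np

def ground_slope_from_vehicles(A, B, C):
    """Given the 3D positions of three detected vehicles, return the
    plane's unit normal and its slope angle (radians) relative to the
    horizontal plane."""
    A, B, C = map(np.asarray, (A, B, C))
    normal = np.cross(B - A, C - A)
    normal = normal / np.linalg.norm(normal)
    if normal[2] < 0:                     # orient the normal upward
        normal = -normal
    # The angle between the plane and the horizontal equals the angle
    # between the plane normal and the vertical axis.
    slope = np.arccos(np.clip(normal[2], -1.0, 1.0))
    return normal, slope

# Example: three vehicles on a gentle incline.
# normal, slope = ground_slope_from_vehicles((0, 0, 0),
#                                            (3, 40, 0.8),
#                                            (-3, 42, 0.84))
```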
  • S902. Determine the object type of the second target object outside the first preset distance according to the ground point cloud outside the first preset distance.
  • For example, the object type of the second target object beyond 80 meters is determined according to the ground point cloud beyond 80 meters.
  • In some embodiments, determining the object type of the second target object outside the first preset distance according to the ground point cloud outside the first preset distance includes: determining, according to the ground point cloud outside the first preset distance, the point cloud cluster corresponding to the second target object outside the first preset distance, where the bottom of the second target object is in the same plane as the bottom of the first target object; and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance of the second target object relative to the movable platform, and determining the object type of the second target object.
  • For example, the point cloud cluster corresponding to the second target object beyond 80 meters is determined according to the ground point cloud beyond 80 meters.
  • As shown in FIG. 11, the second target object beyond 80 meters may be affected (for example, occluded) by nearby objects; therefore, the number of points in the distant three-dimensional point cloud 104 is small, that is to say, the distant three-dimensional point cloud 104 may be only part of the three-dimensional point cloud of the upper part of the second target object.
  • In this case, it is necessary to fill in the three-dimensional point cloud of the lower half of the second target object so that the bottom of the second target object and the bottoms of the first target object 101, the first target object 102, and the first target object 103 are in the same plane.
  • The partial three-dimensional point cloud of the upper part of the second target object and the filled-in three-dimensional point cloud of the lower half can then form the point cloud cluster corresponding to the second target object.
  • Further, the detection model corresponding to the distance of the second target object relative to the movable platform is used to detect the point cloud cluster corresponding to the second target object, that is, the detection model detects whether the second target object is a pedestrian, a vehicle, or another object.
  • The number of second target objects is not limited here; there may be one or more. Since the distance of the second target object relative to the detection device is greater than the first preset distance, the detection model corresponding to a second preset distance greater than the first preset distance may be used to detect the second target object.
  • In some embodiments, determining the point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance includes: clustering the three-dimensional point cloud obtained after removing the ground point cloud from the three-dimensional point cloud outside the first preset distance, to obtain the partial point cloud corresponding to the second target object; and determining the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
  • For example, since the three-dimensional point cloud beyond 80 meters from the detection device may include the ground point cloud, it is necessary to remove the ground point cloud from the three-dimensional point cloud beyond 80 meters, and the three-dimensional point cloud after removing the ground point cloud is clustered to obtain the partial point cloud corresponding to the second target object, such as the three-dimensional point cloud 104 shown in FIG. 11.
  • As described above, the ground where the first target object is located is extended according to the ground slope to obtain the ground point cloud beyond 80 meters.
  • Further, the point cloud cluster corresponding to the second target object is determined according to the partial point cloud corresponding to the second target object and the ground point cloud beyond 80 meters.
  • For example, the lower half of the second target object is filled in and aligned so that the bottom of the second target object and the bottoms of the first target object 101, the first target object 102, and the first target object 103 are in the same plane.
  • The clustering process here is similar to the clustering process described above and will not be repeated; the difference is that the vehicle height H used in the clustering process here is larger than the vehicle height H used in the clustering process described above. For example, the vehicle height H used in the clustering process here can be 1.6 meters or 2.5 meters.
  • In some embodiments, the method further includes: if the second target object is a vehicle and the width of the second target object is less than or equal to a first width, removing, from the point cloud cluster corresponding to the second target object, the three-dimensional points whose height is greater than or equal to a first height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to a second width, removing, from the point cloud cluster corresponding to the second target object, the three-dimensional points whose height is greater than or equal to a second height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; and generating, according to the remaining three-dimensional point cloud corresponding to the second target object, a recognition frame for characterizing the vehicle, where the recognition frame is used by the movable platform to make navigation decisions. Here, the second width is greater than the first width, and the second height is greater than the first height.
  • Since the point cloud cluster corresponding to the second target object is obtained by clustering, it may include the three-dimensional point cloud of tiny objects such as street signs or branches. Therefore, when the vehicle-mounted device uses the detection model corresponding to the distance of the second target object relative to the detection device to determine that the second target object is a vehicle, the point cloud cluster corresponding to the second target object needs further processing.
  • According to the width of the second target object, it is determined whether the second target object is a small car or a large car. For example, if the width of the second target object is less than or equal to the first width, it is determined that the second target object is a small car; if the width of the second target object is greater than the first width and less than or equal to the second width, it is determined that the second target object is a large car. Specifically, the second width is greater than the first width.
  • If the second target object is a small car, the three-dimensional points whose height is greater than or equal to the first height, for example, 1.8 meters or more, are removed from the point cloud cluster corresponding to the second target object to obtain the remaining three-dimensional point cloud corresponding to the second target object.
  • If the second target object is a large car, the three-dimensional points whose height is greater than or equal to the second height, for example, 3.2 meters or more, are removed from the point cloud cluster corresponding to the second target object to obtain the remaining three-dimensional point cloud corresponding to the second target object.
  • As shown in FIG. 12, the three-dimensional point cloud in the circle 105 is the three-dimensional point cloud corresponding to branches.
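  • A sketch of the width-based classification and height pruning described above; the pruning heights (1.8 meters and 3.2 meters) follow the text, while the width cutoffs are illustrative assumptions, since the first width and second width are not given numerically.

```python
import numpy as np

# Heights (1.8 m, 3.2 m) come from the text; the width cutoffs below
# are illustrative assumptions.
FIRST_WIDTH, SECOND_WIDTH = 2.0, 2.6     # small car vs. large car (assumed)
FIRST_HEIGHT, SECOND_HEIGHT = 1.8, 3.2   # pruning heights from the text

def prune_vehicle_cluster(cluster):
    """Classify the vehicle by cluster width, then drop points at or above
    the corresponding height limit (e.g., overhanging branches or signs).
    `cluster` is an (N, 3) array; width is taken along the x extent."""
    width = np.ptp(cluster[:, 0])
    if width <= FIRST_WIDTH:
        limit = FIRST_HEIGHT                 # small car
    elif width <= SECOND_WIDTH:
        limit = SECOND_HEIGHT                # large car
    else:
        return cluster                       # outside both bands: untouched
    return cluster[cluster[:, 2] < limit]
```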
  • Further, according to the remaining three-dimensional point cloud corresponding to the second target object, a recognition frame for characterizing the vehicle is generated. For example, on the basis of the three-dimensional point cloud 104 shown in FIG. 11, the three-dimensional point cloud corresponding to the branches in the circle 105 shown in FIG. 12 is removed to obtain the remaining three-dimensional point cloud corresponding to the second target object. Then, according to the ground point cloud beyond 80 meters, the three-dimensional point cloud of the lower half of the second target object is filled in so that the bottom of the second target object and the bottoms of the first target object 101, the first target object 102, and the first target object 103 are in the same plane, and the second target object shown in FIG. 12, that is, the recognition frame 106 characterizing the vehicle, is obtained.
  • A vehicle equipped with a detection device, for example, the vehicle 11, can make navigation decisions based on the recognition frame 106, for example, plan a route according to the recognition frame 106, plan the driving route of the vehicle 11 in advance, control the vehicle 11 to change lanes in advance, or control the speed of the vehicle 11 in advance.
  • In this embodiment, the distant ground point cloud is determined according to the position of a nearby first target object, and the distant second target object is detected according to the distant ground point cloud, so that a movable platform equipped with a detection device can make navigation decisions according to the distant second target object, which improves the safety of the movable platform.
  • In addition, since the point cloud cluster corresponding to the second target object may include the three-dimensional point cloud of tiny objects such as street signs or branches, removing these improves the detection accuracy of the second target object.
  • Further, the slope of the plane formed by at least three first target objects is determined according to their positions, and the slope of the ground where the first target objects are located is determined according to the slope of the plane. According to the slope of the ground, it can also be determined whether the ground is level ground, a viaduct, or a slope, thereby improving the accuracy of ground recognition. This reduces the influence of the point cloud of level ground, viaducts, or slopes on the detection of the first target object or the second target object, thereby further improving the detection accuracy of the first target object and the second target object.
  • FIG. 13 is a structural diagram of a target object detection system provided by an embodiment of the present application.
  • As shown in FIG. 13, the target object detection system 130 includes a detection device 131, a memory 132, and a processor 133.
  • The detection device 131 is used to detect the surrounding environment of the movable platform to obtain a three-dimensional point cloud.
  • The processor 133 may specifically be a component in the vehicle-mounted device in the foregoing embodiments, or another component, device, or module with data processing functions carried in the vehicle.
  • The memory 132 is used to store program code. The processor 133 calls the program code, and when the program code is executed, the processor 133 is used to perform the following operations: acquire the three-dimensional point cloud; cluster the three-dimensional point cloud to obtain the point cloud cluster corresponding to the first target object, where the height of the cluster center of the clustered point cloud cluster meets the preset height condition; determine the target detection model according to the distance of the first target object relative to the movable platform and the correspondence between distance and detection model; and detect the point cloud cluster corresponding to the first target object through the target detection model, and determine the object type of the first target object.
  • the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model, and before determining the object type of the first target object, is further configured to: determine the value of the first target object Movement direction; adjusting the movement direction of the first target object to a preset direction.
  • the preset direction is the movement direction of the sample object used for training the detection model.
  • When the processor 133 determines the movement direction of the first target object, it is specifically configured to: determine the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment.
  • When the processor 133 determines the movement direction of the first target object according to the three-dimensional point clouds corresponding to the first target object at the first moment and at the second moment, it is specifically configured to: project the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into the world coordinate system respectively; and determine the movement direction of the first target object according to the two three-dimensional point clouds in the world coordinate system.
  • When the processor 133 determines the movement direction of the first target object, it is specifically configured to: project the three-dimensional point cloud corresponding to the first target object at the first moment into the two-dimensional image of the first moment to obtain first projection points; project the three-dimensional point cloud corresponding to the first target object at the second moment into the two-dimensional image of the second moment to obtain second projection points; determine the three-dimensional information of a first feature point according to the first projection points and the first feature point in the two-dimensional image of the first moment, wherein the first feature point is a feature point whose positional relationship with the first projection points meets a preset positional relationship; determine the three-dimensional information of a second feature point according to the second projection points and the second feature point in the two-dimensional image of the second moment, wherein the second feature point is a feature point whose positional relationship with the second projection points meets the preset positional relationship and corresponds to the first feature point; and determine the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.
  • When the processor 133 determines the three-dimensional information of the first feature point according to the first projection points and the first feature point in the two-dimensional image of the first moment, it is specifically configured to: determine the weight coefficient corresponding to each first projection point according to the distance between that projection point and the first feature point; and determine the three-dimensional information of the first feature point according to the weight coefficients corresponding to the first projection points and the three-dimensional information of the first projection points.
  • When the processor 133 determines the three-dimensional information of the second feature point according to the second projection points and the second feature point in the two-dimensional image of the second moment, it is specifically configured to: determine the weight coefficient corresponding to each second projection point according to the distance between that projection point and the second feature point; and determine the three-dimensional information of the second feature point according to the weight coefficients corresponding to the second projection points and the three-dimensional information of the second projection points.
  • Optionally, before determining the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the processor 133 is further configured to: convert the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into the world coordinate system respectively.
  • Optionally, after the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model and determines the object type of the first target object, it is further configured to: if the first target object is determined to be a vehicle through the target detection model, verify the detection result of the target detection model according to a preset condition.
  • the preset condition includes at least one of the following: the size of the first target object meets a preset size; the degree of coincidence between the first target object and other target objects around the first target object Less than the preset threshold.
  • Optionally, before the three-dimensional point cloud is clustered to obtain the point cloud cluster corresponding to the first target object, the processor 133 is further configured to: remove a specific point cloud from the three-dimensional point cloud, the specific point cloud including a ground point cloud.
  • Optionally, the distance of the first target object relative to the movable platform is less than or equal to a first preset distance; after the processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model and determines the object type of the first target object, it is further configured to: if the first target object is determined to be a vehicle through the target detection model, determine the ground point cloud beyond the first preset distance according to the position of the first target object; and determine the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
  • When the processor 133 determines the ground point cloud beyond the first preset distance according to the position of the first target object, it is specifically configured to: determine the slope of the ground where the first target object is located according to the position of the first target object; and determine the ground point cloud beyond the first preset distance according to the slope of the ground.
  • When the processor 133 determines the slope of the ground where the first target object is located according to the position of the first target object, it is specifically configured to: determine, according to the positions of at least three first target objects, the slope of the plane formed by the at least three first target objects, the slope of the plane being the slope of the ground where the first target objects are located.
  • When the processor 133 determines the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, it is specifically configured to: determine, according to the ground point cloud beyond the first preset distance, the point cloud cluster corresponding to the second target object beyond the first preset distance, the bottom of the second target object being in the same plane as the bottom of the first target object; and detect the point cloud cluster corresponding to the second target object through the detection model corresponding to the distance of the second target object relative to the movable platform, determining the object type of the second target object.
  • When the processor 133 determines the point cloud cluster corresponding to the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, it is specifically configured to: cluster the three-dimensional point cloud remaining after the ground point cloud is removed from the three-dimensional point cloud beyond the first preset distance, to obtain the partial point cloud corresponding to the second target object; and determine the point cloud cluster corresponding to the second target object according to that partial point cloud and the ground point cloud beyond the first preset distance.
  • Optionally, the processor 133 is further configured to: if the second target object is a vehicle and the width of the second target object is less than or equal to a first width, remove from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a first height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to a second width, remove from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a second height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; and generate, according to the remaining three-dimensional point cloud corresponding to the second target object, a recognition box for characterizing the vehicle, the recognition box being used by the movable platform for navigation decisions; wherein the second width is greater than the first width, and the second height is greater than the first height.
  • the embodiment of the application provides a movable platform.
  • the movable platform includes: a fuselage, a power system, and the target object detection system as described in the above embodiment.
  • the power system is installed on the fuselage to provide moving power.
  • the target object detection system can implement the above-mentioned target object detection method, and the specific principle and implementation manner of the target object detection method are similar to the foregoing embodiment, and will not be repeated here.
  • This embodiment does not limit the specific form of the movable platform.
  • the movable platform may be a drone, a movable robot, or a vehicle.
  • this embodiment also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the target object detection method described in the foregoing embodiment.
  • the disclosed device and method can be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • The above-mentioned software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

A detection method, system, device, and storage medium for a target object. A three-dimensional point cloud obtained by a detection device mounted on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object; during clustering, the height of the cluster center of the point cloud cluster must meet a preset height condition. Further, a target detection model is determined according to the distance of the target object relative to the movable platform and the correspondence between the distance and detection models, and the point cloud cluster corresponding to the target object is detected through that target detection model so that it determines the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, which improves the detection accuracy for target objects.

Description

Detection method, system, device, and storage medium for a target object

Technical Field

The embodiments of the present application relate to the field of movable platforms, and in particular to a detection method, system, device, and storage medium for a target object.

Background

In an automatic driving system or a driver-assistance system, vehicles on the road need to be detected so that they can be avoided.

In the prior art, an automatic driving system or driver-assistance system is usually provided with a photographing device, and surrounding vehicles are detected from the two-dimensional images captured by that device. However, detecting surrounding vehicles from two-dimensional images alone is not sufficiently accurate.

Summary

The embodiments of the present application provide a detection method, system, device, and storage medium for a target object, so as to improve the accuracy of detecting the target object.

A first aspect of the embodiments of the present application provides a detection method for a target object, applied to a movable platform, where the movable platform is provided with a detection device and the detection device is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud. The method includes:

acquiring the three-dimensional point cloud;

clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, where the height of the cluster center of a clustered point cloud cluster meets a preset height condition;

determining a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between the distance and detection models;

detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.

A second aspect of the embodiments of the present application provides a detection system for a target object, including a detection device, a memory, and a processor;

the detection device is configured to detect the environment around a movable platform to obtain a three-dimensional point cloud;

the memory is configured to store program code;

the processor invokes the program code and, when the program code is executed, performs the following operations:

acquiring the three-dimensional point cloud;

clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, where the height of the cluster center of a clustered point cloud cluster meets a preset height condition;

determining a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between the distance and detection models;

detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.

A third aspect of the embodiments of the present application provides a movable platform, including:

a body;

a power system, mounted on the body and configured to provide moving power;

and the detection system for a target object according to the second aspect.

A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method according to the first aspect.

According to the detection method, system, device, and storage medium for a target object provided by this embodiment, a three-dimensional point cloud obtained by a detection device mounted on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object; during clustering, the height of the cluster center of the point cloud cluster must meet a preset height condition. Further, a target detection model is determined according to the distance of the target object relative to the movable platform and the correspondence between the distance and detection models, and the point cloud cluster corresponding to the target object is detected through that target detection model so that it determines the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, which improves the detection accuracy for target objects.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;

FIG. 2 is a flowchart of a detection method for a target object provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of another application scenario provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of another application scenario provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of detection models provided by an embodiment of the present application;

FIG. 6 is a flowchart of a detection method for a target object provided by another embodiment of the present application;

FIG. 7 is a schematic diagram of projecting a three-dimensional point cloud onto a two-dimensional image, provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of a two-dimensional feature point provided by an embodiment of the present application;

FIG. 9 is a flowchart of a detection method for a target object provided by another embodiment of the present application;

FIG. 10 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present application;

FIG. 11 is a schematic diagram of another three-dimensional point cloud provided by an embodiment of the present application;

FIG. 12 is a schematic diagram of yet another three-dimensional point cloud provided by an embodiment of the present application;

FIG. 13 is a structural diagram of a detection system for a target object provided by an embodiment of the present application.

Reference numerals:

11: vehicle; 12: server; 13: vehicle;

14: vehicle; 15: three-dimensional point cloud; 30: ground point cloud;

31: point cloud cluster; 32: point cloud cluster;

41: first target object; 42: first target object; 80: first image;

81: projection area; 82: two-dimensional feature point; 1001: right-hand area;

1002: upper-left image; 1003: lower-left image; 100: white arc;

101: first target object; 102: first target object;

103: first target object; 104: three-dimensional point cloud;

105: circle; 106: recognition box; 130: detection system for a target object;

131: detection device; 132: memory; 133: processor.
Detailed Description

The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

It should be noted that when a component is referred to as being "fixed to" another component, it may be directly on the other component or an intervening component may also be present. When a component is considered to be "connected to" another component, it may be directly connected to the other component or an intervening component may be present at the same time.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.

Some embodiments of the present application are described in detail below with reference to the drawings. The embodiments described below and the features in the embodiments can be combined with each other provided there is no conflict.

An embodiment of the present application provides a detection method for a target object. The method is applied to a movable platform, the movable platform is provided with a detection device, and the detection device is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud. In this embodiment, the movable platform may be a drone, a movable robot, or a vehicle.

The embodiments of the present application take the case where the movable platform is a vehicle as an example. The vehicle may be an unmanned vehicle or a vehicle equipped with an Advanced Driver Assistance Systems (ADAS) system. As shown in FIG. 1, vehicle 11 is a carrier equipped with a detection device, which may specifically be a binocular stereo camera, a time-of-flight (TOF) camera, and/or a LiDAR. While vehicle 11 is travelling, the detection device detects the environment around vehicle 11 in real time to obtain a three-dimensional point cloud. The environment around vehicle 11 includes the objects around vehicle 11, such as the ground, pedestrians, and other vehicles.

Taking LiDAR as an example, when a laser beam emitted by the LiDAR strikes the surface of an object, the surface reflects the beam, and from the reflected laser the LiDAR can determine information such as the bearing and distance of the object relative to the LiDAR. If the emitted beam is scanned along some trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming the laser point cloud data of the object, that is, a three-dimensional point cloud.

In addition, this embodiment does not limit the execution subject of the detection method for a target object. The method may be executed by in-vehicle equipment in the vehicle, or by another device with data processing functions outside the in-vehicle equipment. For example, as shown in FIG. 1, vehicle 11 and server 12 can communicate wirelessly or by wire, and vehicle 11 can send the three-dimensional point cloud detected by the detection device to server 12, which then executes the detection method. The detection method provided by the embodiments of the present application is introduced below taking in-vehicle equipment as an example. The in-vehicle equipment may be a device with data processing functions integrated into the vehicle's center console, or a tablet computer, mobile phone, laptop computer, or the like placed in the vehicle.
FIG. 2 is a flowchart of the detection method for a target object provided by an embodiment of the present application. As shown in FIG. 2, the method in this embodiment may include:

S201. Acquire the three-dimensional point cloud.

As shown in FIG. 1, while vehicle 11 is travelling, the detection device mounted on vehicle 11 detects the environment around vehicle 11 in real time to obtain a three-dimensional point cloud. The detection device can be communicatively connected with the in-vehicle equipment on vehicle 11, so that the in-vehicle equipment can acquire the detected three-dimensional point cloud in real time, for example the point cloud of the ground around vehicle 11, the point cloud of pedestrians, and the point clouds of other vehicles such as vehicle 13 and vehicle 14.

S202. Cluster the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, where the height of the cluster center of a clustered point cloud cluster meets a preset height condition.

As shown in FIG. 3, three-dimensional point cloud 15 is the point cloud detected by the detection device mounted on vehicle 11. It includes many three-dimensional points; that is, a three-dimensional point cloud is a set composed of many three-dimensional points, which may also be called point cloud points. Each point cloud point acquired by the detection device at each sampling moment carries position information, which may specifically be the point's three-dimensional coordinates in a three-dimensional coordinate system; this embodiment does not limit the coordinate system, which may for example be a vehicle body coordinate system, an earth coordinate system, or a world coordinate system. Therefore, according to the position information of each point cloud point, the height of each point relative to the ground can be determined.
In clustering three-dimensional point cloud 15, a k-means clustering algorithm may specifically be used, weighting (by K) the point cloud points whose height above the ground is close to a preset height, so that the height of the cluster center approaches the preset height. The preset height appears in the original filing as an inline formula reproduced only as an image; from the surrounding values it is a function of the vehicle height H, on the order of half of H. A passenger car is typically about 1.6 m tall, and a large vehicle such as a bus about 3 m tall; here H may take the value 1.1 m. Alternatively, H may take two values, H1 = 0.8 m and H2 = 1.5 m, and clustering is carried out with H1 and H2 separately, yielding clusters whose centers lie near the preset height derived from H1 and clusters whose centers lie near the preset height derived from H2.

Taking H = 1.1 m as an example, let P1 and P2 be any two three-dimensional points in point cloud 15, each with a three-dimensional coordinate; the coordinate of P1 on the z (height) axis is written P1(z), and that of P2 is written P2(z). If the loss value computed by formula (1) is less than or equal to a certain threshold, P1 and P2 can be aggregated into one cluster.

[Formula (1) appears in the original filing only as an image: a loss over P1 and P2 combining their spatial relationship with the height terms P1(z) and P2(z), weighted by a constant k.]

Here k can be a constant. It can be understood that, when clustering point cloud 15, the aggregation between any two of its three-dimensional points proceeds analogously to the process of formula (1), which is not repeated one by one here.
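As a concrete illustration of the height-conditioned clustering just described, the sketch below implements one plausible pairwise loss of the kind formula (1) suggests: Euclidean distance plus a height penalty keyed to a preset height h and the constant k. Since formula (1) itself is reproduced only as an image, the form of `pairwise_loss` and its default values are assumptions for illustration, not the filing's exact expression.

```python
import numpy as np

def pairwise_loss(p1, p2, h=0.55, k=1.0):
    # Assumed form: spatial distance plus a penalty for z-coordinates
    # that stray from the preset height h.
    spatial = np.linalg.norm(p1 - p2)
    height_penalty = k * (abs(p1[2] - h) + abs(p2[2] - h))
    return spatial + height_penalty

def greedy_cluster(points, threshold=2.0, h=0.55, k=1.0):
    """Aggregate points whose loss w.r.t. a cluster's mean stays under
    the threshold, mirroring the P1/P2 aggregation rule above."""
    clusters = []
    for p in points:
        for c in clusters:
            if pairwise_loss(p, np.mean(c, axis=0), h, k) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.asarray(c) for c in clusters]
```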
As shown in FIG. 3, clustering three-dimensional point cloud 15 yields point cloud cluster 31 and point cloud cluster 32, the heights of whose cluster centers are both close to the preset height. Further, from cluster 31 the first target object 41 shown in FIG. 4 is obtained, and from cluster 32 the first target object 42 shown in FIG. 4.

It can be understood that the first target objects here are only illustrative, and the number of first target objects is not limited.

S203. Determine a target detection model according to the distance of the first target object relative to the movable platform and the correspondence between the distance and detection models.

Point cloud clusters 31 and 32 shown in FIG. 3 each include multiple point cloud points. Since each point cloud point acquired by the detection device at each sampling moment carries position information, the distance between each point and the detection device can be computed from that information. Further, from the distances between the multiple points of a cluster and the detection device, the distance between the cluster and the vehicle body carrying the detection device can be computed, giving the distance of the first target object corresponding to that cluster relative to the vehicle body, for example the distance of first target object 41 relative to vehicle 11 and the distance of first target object 42 relative to vehicle 11.

As shown in FIG. 4, the distance of first target object 41 relative to vehicle 11 is smaller than that of first target object 42; denote the former L1 and the latter L2. In this embodiment, the in-vehicle equipment can determine the target detection model corresponding to L1 according to L1 and the correspondence between distances and detection models, and the target detection model corresponding to L2 according to L2 and that correspondence.

In an optional implementation, detection models corresponding to different distances can be trained in advance.

For example, as shown in FIG. 5, sample objects can be divided, according to their distance relative to the movable platform that detects them (for example, a collection vehicle), into sample objects within 0-90 m of the collection vehicle, sample objects within 75-165 m, and sample objects within 125-200 m. The collection vehicle may be vehicle 11 as described above or a vehicle other than vehicle 11. Specifically, the detection model trained with sample objects within 0-90 m of the collection vehicle is detection model 1, the model trained with sample objects within 75-165 m is detection model 2, and the model trained with sample objects within 125-200 m is detection model 3, which yields the correspondence between distances and detection models.

In another optional implementation, the detection model can be adapted according to the actually acquired distance. For example, a parameter adjustable according to distance may be provided in the detection model. In a specific implementation, the distance of the first target object is acquired, and the parameter in the detection model is then set according to that distance to obtain the target detection model.
S204. Detect, through the target detection model, the point cloud cluster corresponding to the first target object, and determine the object type of the first target object.

For example, if the in-vehicle equipment determines that the distance L1 of first target object 41 relative to vehicle 11 is within 0-90 m, detection model 1 is used to detect the point cloud cluster corresponding to first target object 41 to determine its object type. If the distance L2 of first target object 42 relative to vehicle 11 is within 75-165 m, detection model 2 is used to detect the point cloud cluster corresponding to first target object 42 to determine its object type.

It is worth noting that the point cloud distribution characteristics of vehicles in different distance ranges differ. For example, the point cloud of a long-range target is sparse while that of a short-range target is dense, and the point cloud of a short-range vehicle tends to show the vehicle's side while that of a mid-range vehicle shows more of the vehicle's rear. Therefore, training multiple detection models separately for different distances enables more accurate target recognition.

In addition, the object types described above may include road marking lines, vehicles, pedestrians, road signs, and similar types. Further, the specific vehicle type can also be recognized from the features of the point cloud cluster, for example engineering vehicles, passenger cars, buses, and so on.

It can be understood that the "first target object" in this embodiment is only used to distinguish it from the "second target object" in subsequent embodiments; both may refer to any target object detectable by the detection device.

In this embodiment, the three-dimensional point cloud obtained by the detection device mounted on the movable platform is clustered to obtain the point cloud cluster corresponding to the target object; during clustering, the height of the cluster center of the point cloud cluster must meet a preset height condition. Further, the target detection model is determined according to the distance of the target object relative to the movable platform and the correspondence between the distance and detection models, and the point cloud cluster corresponding to the target object is detected through that target detection model so that it determines the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, which improves the detection accuracy for target objects.
On the basis of the above embodiment, before the three-dimensional point cloud is clustered to obtain the point cloud cluster corresponding to the first target object, the method further includes: removing a specific point cloud from the three-dimensional point cloud, the specific point cloud including a ground point cloud.

As shown in FIG. 3, the three-dimensional point cloud 15 obtained by the detection device includes not only the point cloud corresponding to the target objects but possibly also a specific point cloud, for example ground point cloud 30. Therefore, before clustering point cloud 15, ground point cloud 30 can first be identified by plane fitting and removed from point cloud 15, and the point cloud remaining after removal of ground point cloud 30 is then clustered.
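The plane fitting mentioned above can be realized in many ways; the following is a minimal RANSAC-style sketch, where the iteration count and inlier threshold are assumed values for illustration rather than parameters from the filing.

```python
import numpy as np

def remove_ground(points, n_iters=200, dist_thresh=0.15, seed=0):
    """Fit the dominant plane (taken as the ground) by RANSAC and
    return the points with the plane's inliers removed."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]   # non-ground points, ready for clustering
```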
In this embodiment, removing the specific point cloud from the three-dimensional point cloud obtained by the detection device mounted on the movable platform, and clustering the point cloud remaining after removal to obtain the point cloud cluster corresponding to the target object, avoids the influence of the specific point cloud on detecting the target object, thereby further improving the detection accuracy for the target object.
An embodiment of the present application provides a detection method for a target object. FIG. 6 is a flowchart of the detection method provided by another embodiment. As shown in FIG. 6, on the basis of the above embodiments, before the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes: determining the movement direction of the first target object; and adjusting the movement direction of the first target object to a preset direction.

As one possible implementation, determining the movement direction of the first target object includes: determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at a first moment and the three-dimensional point cloud corresponding to the first target object at a second moment.

Specifically, the first moment is the previous moment and the second moment is the current moment. Taking first target object 41 as an example: since it may be in motion, its position information may change in real time. Moreover, the detection device on vehicle 11 detects the surroundings in real time, so the in-vehicle equipment can acquire and process the detected point cloud in real time. Since the point cloud corresponding to first target object 41 at the previous moment and the point cloud at the current moment may differ, the movement direction of first target object 41 can be determined from those two point clouds.

Optionally, determining the movement direction of the first target object according to the point clouds at the first and second moments includes: projecting the point cloud corresponding to the first target object at the first moment and the point cloud at the second moment into the world coordinate system respectively; and determining the movement direction of the first target object according to the two point clouds in the world coordinate system.

For example, the point clouds of first target object 41 at the previous moment and at the current moment are projected into the world coordinate system, and the Iterated Closest Points (ICP) algorithm is then used to compute the relative positional relationship between them. That relative positional relationship includes a rotation and a translation, and the movement direction of first target object 41 can be determined from the translation; in one possible implementation, the translation is the movement direction of first target object 41.
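A minimal sketch of this ICP step, assuming the Open3D library is available (any point-to-point ICP implementation would serve equally):

```python
import numpy as np
import open3d as o3d  # assumed dependency; not named in the filing

def motion_direction(prev_pts, curr_pts, max_corr_dist=1.0):
    """Register last frame's cluster onto the current one with ICP and
    read the motion direction from the translation component."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    t = result.transformation[:3, 3]           # translation part
    n = np.linalg.norm(t)
    return t / n if n > 1e-6 else np.zeros(3)  # unit direction vector
```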
As another possible implementation, determining the movement direction of the first target object includes the following steps:

S601. Project the three-dimensional point cloud corresponding to the first target object at the first moment into the two-dimensional image of the first moment to obtain first projection points.

S602. Project the three-dimensional point cloud corresponding to the first target object at the second moment into the two-dimensional image of the second moment to obtain second projection points.

In this embodiment, vehicle 11 may also carry a photographing device for capturing images of the environment around vehicle 11; these images are two-dimensional images. The period at which the detection device obtains a point cloud and the period at which the photographing device captures an image may or may not be the same. For example, at the previous moment, while the detection device obtains the point cloud of first target object 41, the photographing device captures one frame of two-dimensional image; at the current moment, while the detection device obtains the point cloud of first target object 41, the photographing device captures another frame. Here, the image captured at the previous moment is denoted the first image, and the image captured at the current moment the second image. Specifically, the point cloud of first target object 41 at the previous moment can be projected onto the first image to obtain the first projection points, and the point cloud of first target object 41 at the current moment onto the second image to obtain the second projection points. As shown in FIG. 7, the left-hand area shows the point cloud detected by the detection device at a given moment, and the right-hand area shows the projection of that point cloud onto the two-dimensional image, yielding a projection area on the image that contains the projection points.

In an optional implementation, projecting the three-dimensional point cloud into the two-dimensional image includes projecting some or all of the point cloud points along the Z axis onto a two-dimensional plane, where the Z axis may be the Z axis of the vehicle body coordinate system or, if the coordinates of the point cloud have been corrected to the earth coordinate system, the Z axis of the earth coordinate system.
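For reference, a standard pinhole projection of camera-frame points into pixel coordinates is sketched below. The intrinsic matrix K is an assumed example and the pinhole model itself is a common choice rather than the filing's method; the simpler along-Z projection described above would drop the perspective division.

```python
import numpy as np

K = np.array([[700.0,   0.0, 640.0],   # assumed camera intrinsics
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_to_image(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project (N, 3) camera-frame points to (M, 2) pixel coordinates,
    dropping points behind the camera."""
    pts = points_cam[points_cam[:, 2] > 1e-6]
    uvw = (K @ pts.T).T            # rows of [u*z, v*z, z]
    return uvw[:, :2] / uvw[:, 2:3]
```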
S603. Determine the three-dimensional information of a first feature point according to the first projection points and the first feature point in the two-dimensional image of the first moment, where the first feature point is a feature point whose positional relationship with the first projection points meets a preset positional relationship.

For ease of distinction, the projection of the point cloud of first target object 41 at the previous moment onto the first image is denoted the first projection points, and a feature point on the first image the first feature point; the positional relationship between the first feature point and the first projection points meets the preset positional relationship.

Optionally, determining the three-dimensional information of the first feature point according to the first projection points and the first feature point in the image of the first moment includes: determining the weight coefficient corresponding to each first projection point according to the distance between that projection point and the first feature point in the image of the first moment; and determining the three-dimensional information of the first feature point according to the weight coefficients corresponding to the first projection points and the three-dimensional information of the first projection points.
As shown in FIG. 8, 80 denotes the first image captured by the photographing device at the previous moment, and 81 the projection area formed by projecting the point cloud of first target object 41 at the previous moment onto first image 80. Two-dimensional feature points, i.e. first feature points, can be extracted in projection area 81. A two-dimensional feature point is not necessarily a projection point; that is, it does not necessarily carry three-dimensional information. Its three-dimensional information can be estimated here via a Gaussian distribution. As shown in FIG. 8, 82 denotes any two-dimensional feature point in projection area 81. The projection points within a preset range around feature point 82, for example a 10×10 pixel area, are determined; A, B, C, and D are the projection points within that range. Denote the distance of projection point A from feature point 82 as $d_1$, of B as $d_2$, of C as $d_3$, and of D as $d_4$. Reconstructing the image-only formulas from the listed pixel coordinates, each distance is the pixel-space Euclidean distance

$$d_i = \sqrt{(\mu_i - \mu_0)^2 + (\nu_i - \nu_0)^2},$$

where $(\mu_0, \nu_0)$ are the pixel coordinates of feature point 82 on first image 80, and $(\mu_1, \nu_1)$, $(\mu_2, \nu_2)$, $(\mu_3, \nu_3)$, $(\mu_4, \nu_4)$ are the pixel coordinates of projection points A, B, C, and D respectively. In addition, denote the three-dimensional information of the three-dimensional points corresponding to projection points A, B, C, and D as $P_1$, $P_2$, $P_3$, and $P_4$ respectively; each is a vector containing x, y, and z coordinates.

Denote the three-dimensional information of feature point 82 as $P_0$. $P_0$ can be computed by formulas (2) and (3); the originals appear only as images, and the following Gaussian-weighted reconstruction is consistent with the surrounding description:

$$P_0 = \sum_{i=1}^{n} \omega_i P_i \qquad (2)$$

$$\omega_i = \frac{e^{-d_i^2 / (2\sigma^2)}}{\sum_{j=1}^{n} e^{-d_j^2 / (2\sigma^2)}} \qquad (3)$$

where $n$ is the number of projection points within the preset range around feature point 82, and $\omega_i$ is a weight coefficient; different projection points may have different weight coefficients or the same weight coefficient. $\sigma$ is a tunable parameter, for example one adjusted empirically.

It can be understood that the three-dimensional information of the other two-dimensional feature points in projection area 81 is computed in the same way as for feature point 82, which is not repeated here.
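In code, the Gaussian-weighted estimate of a feature point's three-dimensional information (formulas (2) and (3) as reconstructed above) reduces to a few lines:

```python
import numpy as np

def feature_point_3d(feat_uv, proj_uv, proj_xyz, sigma=3.0):
    """Estimate a 2D feature point's 3D position from the nearby
    projection points, Gaussian-weighted by pixel distance.
    feat_uv: (2,) feature pixel; proj_uv: (n, 2) projection pixels;
    proj_xyz: (n, 3) 3D points behind the projections."""
    d = np.linalg.norm(proj_uv - feat_uv, axis=1)   # distances d_i
    w = np.exp(-d**2 / (2 * sigma**2))              # unnormalized weights
    w /= w.sum()                                    # formula (3)
    return (w[:, None] * proj_xyz).sum(axis=0)      # formula (2): P0
```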
S604. Determine the three-dimensional information of a second feature point according to the second projection points and the second feature point in the two-dimensional image of the second moment, where the second feature point is a feature point whose positional relationship with the second projection points meets the preset positional relationship, and the second feature point corresponds to the first feature point.

For ease of distinction, the projection of the point cloud of first target object 41 at the current moment onto the second image is denoted the second projection points, and a feature point on the second image the second feature point; the positional relationship between the second feature point and the second projection points meets the preset positional relationship.

From the first feature point on first image 80, the Kanade-Lucas-Tomasi (KLT) corner tracking algorithm can compute the second feature point corresponding to that first feature point on the second image.

Optionally, determining the three-dimensional information of the second feature point according to the second projection points and the second feature point in the image of the second moment includes: determining the weight coefficient corresponding to each second projection point according to the distance between that projection point and the second feature point in the image of the second moment; and determining the three-dimensional information of the second feature point according to the weight coefficients corresponding to the second projection points and the three-dimensional information of the second projection points.

Specifically, computing the three-dimensional information of the second feature point on the second image is analogous to computing that of the first feature point on the first image, which is not repeated here.

S605. Determine the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.

Specifically, the three-dimensional information of the first feature point is the information $P_0$ of two-dimensional feature point 82 described above, and the three-dimensional information of the second feature point is that of the corresponding two-dimensional feature point in the second image, denoted $P'_0$. The movement direction of first target object 41 can be determined from $P_0$ and $P'_0$; specifically, the position change between $P_0$ and $P'_0$ is the movement direction of first target object 41.

Optionally, before the movement direction of the first target object is determined according to the three-dimensional information of the first and second feature points, the method further includes: converting the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into the world coordinate system respectively.

For example, $P_0$ and $P'_0$ are each converted into the world coordinate system, and the position change between $P_0$ and $P'_0$ is computed there; that position change is the movement direction of first target object 41.

It can be understood that the movement directions of first target objects other than first target object 41 can also be determined by the several possible implementations described above, which are not repeated one by one here.
After the movement direction of the first target object is determined, the movement direction of the first target object can further be adjusted to a preset direction. Optionally, the preset direction is the movement direction of the sample objects used for training the detection model.

For example, the sample objects used for training the detection model move northward, or toward the front or rear of the collection vehicle that detects them. Taking northward as an example: for the detection model to detect first target object 41 or first target object 42 accurately, the movement direction of the object needs to be adjusted to northward. For example, if the angle between the object's movement direction and the northward direction is θ, the point cloud corresponding to first target object 41 or first target object 42 is rotated by the rotation formula $R_z(\theta)$ of formula (4) so that its movement direction becomes northward. Formula (4) appears in the original filing as an image; it is the standard rotation about the z axis:

$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
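Applying formula (4) to a cluster is a single matrix product; a minimal sketch:

```python
import numpy as np

def rotate_to_preset(points, theta):
    """Rotate an (N, 3) cluster about the z axis by theta (the angle
    between the object's motion direction and the preset direction)."""
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return points @ Rz.T
```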
In this embodiment, the movement direction of the target object is determined and adjusted to a preset direction. Since the preset direction is the movement direction of the sample objects used for training the detection model, detecting with the detection model after adjusting the target object's movement direction to the preset direction can further improve the detection accuracy for the target object.
An embodiment of the present application provides a detection method for a target object. On the basis of the above embodiments, after the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes: if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to preset conditions.

For example, when first target object 41 is determined to be a vehicle through the target detection model, the detection result is further verified against the preset conditions.

Optionally, the preset conditions include at least one of the following: the size of the first target object meets a preset size; the spatial coincidence between the first target object and other target objects around the first target object is less than a preset threshold.

For example, when the target detection model detects that first target object 41 is a vehicle, it is further checked whether the width of first target object 41 exceeds a preset width range, which may be the usual width range of vehicles, for example 2.8-3 m. If the width exceeds that range, the detection result of the detection model for first target object 41 is judged to be biased; that is, first target object 41 may not be a vehicle. If the width is within the preset range, the spatial coincidence between first target object 41 and surrounding target objects is further checked; specifically, this may be the spatial coincidence between the recognition box characterizing first target object 41 and the recognition boxes characterizing the surrounding target objects. If the spatial coincidence is greater than the preset threshold, the detection result for first target object 41 is judged to be biased, i.e. first target object 41 may not be a vehicle; if the spatial coincidence is less than the preset threshold, the detection result is judged correct.

In this embodiment, after the target object is detected through the target detection model corresponding to its distance relative to the movable platform, if the object type of the target object is determined to be a vehicle, the detection result is further verified against the preset conditions: when the preset conditions are met, the detection result of the target detection model is confirmed correct, and when they are not met, the detection result is judged biased, which further improves the detection accuracy for the target object.
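A minimal sketch of this two-step verification, using the 2.8-3 m width range from the description; the overlap metric (e.g. IoU of recognition boxes) and its 0.5 threshold are assumed choices for illustration:

```python
def verify_vehicle(width_m, overlaps, width_range=(2.8, 3.0), overlap_thresh=0.5):
    """Return True if a 'vehicle' verdict passes both preset conditions:
    plausible width, and low spatial coincidence with neighbouring boxes.
    overlaps: iterable of overlap scores against surrounding objects."""
    lo, hi = width_range
    if not (lo <= width_m <= hi):
        return False          # size outside the preset range: biased result
    return all(o < overlap_thresh for o in overlaps)
```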
An embodiment of the present application provides a detection method for a target object. FIG. 9 is a flowchart of the detection method provided by another embodiment. As shown in FIG. 9, on the basis of the above embodiments, the distance of the first target object relative to the movable platform is less than or equal to a first preset distance. As shown in FIG. 10, right-hand area 1001 is the point cloud detected by the detection device, upper-left image 1002 shows the point cloud with height information removed, and lower-left image 1003 shows a two-dimensional image. The rings of white circles in area 1001 are the ground point cloud, and white arc 100 marks the first preset distance from the detection device, for example a place 80 m away. 101, 102, and 103 denote first target objects whose distance from the detection device is less than or equal to 80 m. FIG. 10 shows that there are no white rings beyond 80 m; that is, no ground point cloud is detected beyond 80 m. This embodiment proposes a method for detecting the ground point cloud beyond the first preset distance and for detecting second target objects beyond the first preset distance.

After the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further includes the following steps:

S901. If the first target object is determined to be a vehicle through the target detection model, determine the ground point cloud beyond the first preset distance according to the position of the first target object.

In this embodiment, suppose the in-vehicle equipment determines, using the target detection models corresponding to the respective distances of first target objects 101, 102, and 103 relative to the detection device, that each of them is a vehicle. The in-vehicle equipment can then determine, according to the positions of first target objects 101, 102, and 103, the ground point cloud beyond 80 m from the detection device.

Optionally, determining the ground point cloud beyond the first preset distance according to the position of the first target object includes: determining the slope of the ground where the first target object is located according to the position of the first target object; and determining the ground point cloud beyond the first preset distance according to the slope of the ground.

Specifically, the slope of the ground where first target objects 101, 102, and 103 are located is determined from their positions, and the ground point cloud beyond 80 m from the detection device is determined from that slope. It can be understood that this embodiment does not limit the number of first target objects.
Optionally, determining the slope of the ground where the first target object is located according to the position of the first target object includes: determining, according to the positions of at least three first target objects, the slope of the plane formed by the at least three first target objects, the slope of the plane being the slope of the ground where the first target objects are located.

For example, when first target objects 101, 102, and 103 are all vehicles, the three vehicles determine a plane. Denote the coordinates of first target object 101 as A(x1, y1, z1), of first target object 102 as B(x2, y2, z2), and of first target object 103 as C(x3, y3, z3); then vector AB = (x2-x1, y2-y1, z2-z1) and vector AC = (x3-x1, y3-y1, z3-z1). The normal vector of the plane containing AB and AC is AB × AC = (a, b, c), where:

a = (y2-y1)(z3-z1) - (z2-z1)(y3-y1)

b = (z2-z1)(x3-x1) - (z3-z1)(x2-x1)

c = (x2-x1)(y3-y1) - (x3-x1)(y2-y1)

Specifically, the slope of the plane formed by first target objects 101, 102, and 103 can be determined from the normal vector of the plane containing AB and AC; that slope may specifically be the slope of the ground where first target objects 101, 102, and 103 are located.

It can be understood that when there are more than three first target objects, every three of them determine a plane, so multiple planes can be obtained; the slope of each can be computed by the plane-slope method above, and the ground slope can then be fitted from the slopes of the multiple planes.
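The normal-vector computation above translates directly into code; `slope_deg` is one common way to turn the normal into a slope angle (an assumed convention, since the filing does not fix one):

```python
import numpy as np

def ground_normal(p1, p2, p3):
    """Unit normal of the plane through three detected vehicles (AB x AC)."""
    n = np.cross(np.asarray(p2) - np.asarray(p1),
                 np.asarray(p3) - np.asarray(p1))
    return n / np.linalg.norm(n)

def slope_deg(normal):
    """Angle between the plane and the horizontal, in degrees."""
    nz = abs(normal[2])
    return float(np.degrees(np.arccos(np.clip(nz, 0.0, 1.0))))
```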
It can be understood that whether the ground is level ground, a viaduct, or a slope can be determined from the ground slope. In some embodiments, the ground where the first target object is located may not be level; it may, for example, be a viaduct or a slope. Therefore, the ground slope can also be used to determine whether the first target object is on a viaduct or a slope.

After the ground slope where the first target objects are located is determined, the ground where they are located can be extended according to that slope to obtain the ground point cloud beyond 80 m, for example by extending in a straight line, at the width of the road surface the first target objects occupy, to the distance beyond 80 m. Here the case where the distant ground beyond 80 m is level can be considered, while the case of a slope or viaduct beyond 80 m can be left aside for the moment.
S902. Determine the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.

For example, the object type of the second target object beyond 80 m is determined from the ground point cloud beyond 80 m.

Optionally, determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance includes: determining, according to the ground point cloud beyond the first preset distance, the point cloud cluster corresponding to the second target object beyond the first preset distance, the bottom of the second target object being in the same plane as the bottom of the first target object; and detecting the point cloud cluster corresponding to the second target object through the detection model corresponding to the distance of the second target object relative to the movable platform, determining the object type of the second target object.

For example, the point cloud cluster corresponding to the second target object beyond 80 m is determined from the ground point cloud beyond 80 m. As shown in FIG. 11, since a second target object beyond 80 m will be occluded by nearby objects, the distant three-dimensional point cloud 104 contains relatively few points; that is, it may only be the partial point cloud of the upper part of the second target object. In this case, the remaining partial point cloud of the second target object needs to be completed according to the ground point cloud beyond 80 m; for example, the point cloud of the lower half of the second target object needs to be filled in so that the bottom of the second target object lies in the same plane as the bottoms of first target objects 101, 102, and 103. The partial point cloud of the upper part of the second target object together with the completed lower half constitutes the point cloud cluster corresponding to the second target object.

Further, according to the distance of the second target object relative to the detection device, the detection model corresponding to that distance is used to detect the point cloud cluster corresponding to the second target object, i.e. to detect whether the second target object is a pedestrian, a vehicle, or another object. In addition, the number of second target objects is not limited here; there may be one or several. Since the distance of the second target object relative to the detection device is greater than the first preset distance, the detection model corresponding to a second preset distance greater than the first preset distance can be used to detect the second target object.

Optionally, determining, according to the ground point cloud beyond the first preset distance, the point cloud cluster corresponding to the second target object beyond the first preset distance includes: clustering the three-dimensional point cloud remaining after the ground point cloud is removed from the three-dimensional point cloud beyond the first preset distance, to obtain the partial point cloud corresponding to the second target object; and determining the point cloud cluster corresponding to the second target object according to that partial point cloud and the ground point cloud beyond the first preset distance.

For example, the three-dimensional point cloud detected beyond 80 m from the detection device is acquired. Since the point cloud beyond 80 m may include a ground point cloud, the ground point cloud in the point cloud beyond 80 m needs to be removed, and the point cloud remaining after removal is clustered to obtain the partial point cloud corresponding to the second target object, for example point cloud 104 shown in FIG. 11.

Alternatively, after the ground slope where the first target objects are located is determined, the ground is extended according to that slope to obtain the ground point cloud beyond 80 m. When detecting the second target object beyond 80 m, the extended ground point cloud beyond 80 m needs to be removed, and the point cloud remaining after removing the ground point cloud from the point cloud beyond 80 m is clustered, yielding the partial point cloud corresponding to the second target object. Further, the point cloud cluster corresponding to the second target object is determined from that partial point cloud and the ground point cloud beyond 80 m; specifically, the lower half of the second target object is completed so that its bottom lies in the same plane as the bottoms of first target objects 101, 102, and 103.
Specifically, the clustering process is similar to that described above and is not repeated here. The difference is that the vehicle height H used in the clustering here is larger than the one used in the clustering described above; for example, H here may take the value 1.6 m or 2.5 m. Optionally, the method further includes: if the second target object is a vehicle and its width is less than or equal to a first width, removing from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a first height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle, its width is greater than the first width, and its width is less than or equal to a second width, removing from the point cloud cluster the three-dimensional points whose height is greater than or equal to a second height, to obtain the remaining three-dimensional point cloud corresponding to the second target object; and generating, from the remaining three-dimensional point cloud, a recognition box for characterizing the vehicle, the recognition box being used by the movable platform for navigation decisions; where the second width is greater than the first width and the second height is greater than the first height.

It can be understood that there may be tiny objects such as road signs or branches above the second target object, and since such tiny objects may be very close to the second target object, the point cloud cluster obtained for the second target object by clustering may include their three-dimensional points. Therefore, when the in-vehicle equipment determines, with the detection model corresponding to the second target object's distance relative to the detection device, that the second target object is a vehicle, the point cloud cluster corresponding to the second target object needs further processing.

Specifically, whether the second target object is a small or a large vehicle is determined from its width: for example, if its width is less than or equal to the first width, the second target object is determined to be a small vehicle; if its width is greater than the first width and less than or equal to the second width, it is determined to be a large vehicle, where the second width is greater than the first width. Further, if the second target object is a small vehicle, the points in its point cloud cluster whose height is greater than or equal to the first height, for example above 1.8 m, are removed to obtain the remaining point cloud; if it is a large vehicle, the points whose height is greater than or equal to the second height, for example above 3.2 m, are removed to obtain the remaining point cloud. As shown in FIG. 12, the points in circle 105 are the three-dimensional points of a branch. Further, the recognition box characterizing the vehicle is generated from the remaining point cloud corresponding to the second target object. For example, on the basis of point cloud 104 of FIG. 11, the branch points in circle 105 of FIG. 12 are removed to obtain the remaining point cloud of the second target object; further, according to the ground point cloud beyond 80 m, the lower half of the second target object is completed, i.e. the point cloud of the lower half is filled in so that the bottom of the second target object lies in the same plane as the bottoms of first target objects 101, 102, and 103, yielding the second target object shown in FIG. 12, i.e. recognition box 106 characterizing the vehicle. Further, a vehicle carrying the detection device, for example vehicle 11, can make navigation decisions from recognition box 106, for example planning a route from recognition box 106, planning the travel route of vehicle 11 in advance, controlling vehicle 11 to switch to another lane in advance, or controlling the speed of vehicle 11 in advance.
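A minimal sketch of this height trimming; the 1.8 m and 3.2 m caps follow the description, while the width cut-offs separating small from large vehicles are assumed values for illustration:

```python
import numpy as np

def trim_overhanging_points(cluster, width_m, w1=2.0, h1=1.8, w2=3.0, h2=3.2):
    """Drop points above the height cap for the vehicle's size class so
    overhanging signs or branches do not inflate the recognition box.
    cluster: (N, 3) points; width_m: measured vehicle width."""
    if width_m <= w1:
        cap = h1                 # small vehicle
    elif width_m <= w2:
        cap = h2                 # large vehicle
    else:
        return cluster           # wider than any vehicle class: leave as-is
    return cluster[cluster[:, 2] < cap]
```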
In this embodiment, the distant ground point cloud is determined from the positions of the nearby first target objects, and the distant second target objects are detected from that distant ground point cloud, so that a movable platform carrying a detection device can make navigation decisions from the distant second target objects, which improves the safety of the movable platform. In addition, by detecting whether the second target object is a large or a small vehicle and removing, according to the corresponding height, the three-dimensional points in its point cloud that may belong to tiny objects such as road signs or branches, the detection accuracy for the second target object is improved. Furthermore, the slope of the plane formed by at least three first target objects is determined from their positions, and the slope of the ground where they are located is determined from the plane's slope; from the ground slope it can also be determined whether the ground is level ground, a viaduct, a slope, and so on, which improves the accuracy of ground recognition. When removing the ground point cloud, not only the point cloud of level ground but also that of road surfaces such as viaducts or slopes can be removed, reducing the influence of these road-surface point clouds on the detection of the first or second target object and thus further improving the detection accuracy for the first and second target objects.
An embodiment of the present application provides a detection system for a target object. FIG. 13 is a structural diagram of the detection system provided by an embodiment of the present application. As shown in FIG. 13, detection system 130 includes detection device 131, memory 132, and processor 133. Detection device 131 is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud. Processor 133 may specifically be a component of the in-vehicle equipment in the above embodiments, or another part, device, or component with data processing functions carried in the vehicle. Specifically, memory 132 is configured to store program code; processor 133 invokes the program code and, when the code is executed, performs the following operations: acquiring the three-dimensional point cloud; clustering the three-dimensional point cloud to obtain the point cloud cluster corresponding to the first target object, where the height of the cluster center of a clustered point cloud cluster meets the preset height condition; determining the target detection model according to the distance of the first target object relative to the movable platform and the correspondence between the distance and detection models; and detecting the point cloud cluster corresponding to the first target object through the target detection model, determining the object type of the first target object.

Optionally, before processor 133 detects the point cloud cluster corresponding to the first target object through the target detection model and determines its object type, it is further configured to: determine the movement direction of the first target object; and adjust the movement direction of the first target object to a preset direction.

Optionally, the preset direction is the movement direction of the sample objects used for training the detection model.

Optionally, when determining the movement direction of the first target object, processor 133 is specifically configured to: determine the movement direction according to the three-dimensional point clouds corresponding to the first target object at a first moment and at a second moment.

Optionally, when determining the movement direction according to the point clouds at the first and second moments, processor 133 is specifically configured to: project the two point clouds into the world coordinate system respectively; and determine the movement direction of the first target object according to the two point clouds in the world coordinate system.

Optionally, when determining the movement direction of the first target object, processor 133 is specifically configured to: project the point cloud corresponding to the first target object at the first moment into the two-dimensional image of the first moment to obtain first projection points; project the point cloud at the second moment into the image of the second moment to obtain second projection points; determine the three-dimensional information of a first feature point according to the first projection points and the first feature point in the image of the first moment, the first feature point being a feature point whose positional relationship with the first projection points meets a preset positional relationship; determine the three-dimensional information of a second feature point according to the second projection points and the second feature point in the image of the second moment, the second feature point being a feature point whose positional relationship with the second projection points meets the preset positional relationship and corresponding to the first feature point; and determine the movement direction of the first target object according to the three-dimensional information of the first and second feature points.

Optionally, when determining the three-dimensional information of the first feature point, processor 133 is specifically configured to: determine the weight coefficients corresponding to the first projection points according to their distances to the first feature point in the image of the first moment; and determine the three-dimensional information of the first feature point according to those weight coefficients and the three-dimensional information of the first projection points.

Optionally, when determining the three-dimensional information of the second feature point, processor 133 is specifically configured to: determine the weight coefficients corresponding to the second projection points according to their distances to the second feature point in the image of the second moment; and determine the three-dimensional information of the second feature point according to those weight coefficients and the three-dimensional information of the second projection points.

Optionally, before determining the movement direction of the first target object according to the three-dimensional information of the first and second feature points, processor 133 is further configured to: convert that three-dimensional information into the world coordinate system respectively.

Optionally, after detecting the point cloud cluster corresponding to the first target object through the target detection model and determining its object type, processor 133 is further configured to: if the first target object is determined to be a vehicle through the target detection model, verify the detection result of the target detection model according to preset conditions.

Optionally, the preset conditions include at least one of the following: the size of the first target object meets a preset size; the coincidence between the first target object and other target objects around it is less than a preset threshold.

Optionally, before the three-dimensional point cloud is clustered to obtain the cluster corresponding to the first target object, processor 133 is further configured to: remove a specific point cloud, including a ground point cloud, from the three-dimensional point cloud.

Optionally, the distance of the first target object relative to the movable platform is less than or equal to a first preset distance; after detecting the cluster corresponding to the first target object through the target detection model and determining its object type, processor 133 is further configured to: if the first target object is determined to be a vehicle through the target detection model, determine the ground point cloud beyond the first preset distance according to the position of the first target object; and determine the object type of the second target object beyond the first preset distance according to that ground point cloud.

Optionally, when determining the ground point cloud beyond the first preset distance according to the position of the first target object, processor 133 is specifically configured to: determine the slope of the ground where the first target object is located according to its position; and determine the ground point cloud beyond the first preset distance according to the slope of the ground.

Optionally, when determining the slope of the ground where the first target object is located according to its position, processor 133 is specifically configured to: determine, according to the positions of at least three first target objects, the slope of the plane they form, that slope being the slope of the ground where the first target objects are located.

Optionally, when determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, processor 133 is specifically configured to: determine, according to that ground point cloud, the point cloud cluster corresponding to the second target object beyond the first preset distance, the bottom of the second target object being in the same plane as the bottom of the first target object; and detect that cluster through the detection model corresponding to the second target object's distance relative to the movable platform, determining the object type of the second target object.

Optionally, when determining the point cloud cluster corresponding to the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, processor 133 is specifically configured to: cluster the point cloud remaining after the ground point cloud is removed from the three-dimensional point cloud beyond the first preset distance, obtaining the partial point cloud corresponding to the second target object; and determine the cluster from that partial point cloud and the ground point cloud beyond the first preset distance.

Optionally, processor 133 is further configured to: if the second target object is a vehicle and its width is less than or equal to a first width, remove from its point cloud cluster the three-dimensional points whose height is greater than or equal to a first height, obtaining the remaining point cloud corresponding to the second target object; if the second target object is a vehicle, its width is greater than the first width, and its width is less than or equal to a second width, remove the points whose height is greater than or equal to a second height, obtaining the remaining point cloud; and generate, from the remaining point cloud, a recognition box characterizing the vehicle, used by the movable platform for navigation decisions; where the second width is greater than the first width and the second height is greater than the first height.

The specific principles and implementations of the detection system for a target object provided by the embodiments of the present application are similar to those of the above embodiments and are not repeated here.
An embodiment of the present application provides a movable platform, including: a body; a power system, mounted on the body and configured to provide moving power; and the detection system for a target object described in the above embodiment. The detection system can implement the detection method for a target object described above; the specific principles and implementations of that method are similar to those of the above embodiments and are not repeated here. This embodiment does not limit the specific form of the movable platform; for example, the movable platform may be a drone, a movable robot, or a vehicle.

In addition, this embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the detection method for a target object described in the above embodiments.

In the several embodiments provided by the present application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is taken as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the devices described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (39)

  1. A detection method for a target object, characterized in that it is applied to a movable platform, the movable platform is provided with a detection device, and the detection device is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud, the method comprising:
    acquiring the three-dimensional point cloud;
    clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of the cluster center of a clustered point cloud cluster meets a preset height condition;
    determining a target detection model according to a distance of the first target object relative to the movable platform and a correspondence between the distance and detection models;
    detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining an object type of the first target object.
  2. The method according to claim 1, characterized in that before the detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the method further comprises:
    determining a movement direction of the first target object;
    adjusting the movement direction of the first target object to a preset direction.
  3. The method according to claim 2, characterized in that the preset direction is a movement direction of a sample object used for training the detection model.
  4. The method according to claim 3, characterized in that the determining the movement direction of the first target object comprises:
    determining the movement direction of the first target object according to a three-dimensional point cloud corresponding to the first target object at a first moment and a three-dimensional point cloud corresponding to the first target object at a second moment.
  5. The method according to claim 4, characterized in that the determining the movement direction of the first target object according to the three-dimensional point clouds corresponding to the first target object at the first moment and at the second moment comprises:
    projecting the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system respectively;
    determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
  6. The method according to claim 3, characterized in that the determining the movement direction of the first target object comprises:
    projecting the three-dimensional point cloud corresponding to the first target object at a first moment into a two-dimensional image of the first moment to obtain a first projection point;
    projecting the three-dimensional point cloud corresponding to the first target object at a second moment into a two-dimensional image of the second moment to obtain a second projection point;
    determining three-dimensional information of a first feature point according to the first projection point and the first feature point in the two-dimensional image of the first moment, wherein the first feature point is a feature point whose positional relationship with the first projection point meets a preset positional relationship;
    determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image of the second moment, wherein the second feature point is a feature point whose positional relationship with the second projection point meets a preset positional relationship, and the second feature point corresponds to the first feature point;
    determining the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.
  7. The method according to claim 6, characterized in that the determining the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image of the first moment comprises:
    determining a weight coefficient corresponding to the first projection point according to a distance between the first projection point and the first feature point in the two-dimensional image of the first moment;
    determining the three-dimensional information of the first feature point according to the weight coefficient corresponding to the first projection point and three-dimensional information of the first projection point.
  8. The method according to claim 6, characterized in that the determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image of the second moment comprises:
    determining a weight coefficient corresponding to the second projection point according to a distance between the second projection point and the second feature point in the two-dimensional image of the second moment;
    determining the three-dimensional information of the second feature point according to the weight coefficient corresponding to the second projection point and three-dimensional information of the second projection point.
  9. The method according to any one of claims 6-8, characterized in that before the determining the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the method further comprises:
    converting the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into a world coordinate system respectively.
  10. The method according to any one of claims 1-9, characterized in that after the detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the method further comprises:
    if the first target object is determined to be a vehicle through the target detection model, verifying a detection result of the target detection model according to a preset condition.
  11. The method according to claim 10, characterized in that the preset condition comprises at least one of the following:
    a size of the first target object meets a preset size;
    a spatial coincidence between the first target object and other target objects around the first target object is less than a preset threshold.
  12. The method according to any one of claims 1-11, characterized in that before the clustering the three-dimensional point cloud to obtain the point cloud cluster corresponding to the first target object, the method further comprises:
    removing a specific point cloud from the three-dimensional point cloud, the specific point cloud comprising a ground point cloud.
  13. The method according to any one of claims 1-9, characterized in that the distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
    after the detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the method further comprises:
    if the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud beyond the first preset distance according to a position of the first target object;
    determining an object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
  14. The method according to claim 13, characterized in that the determining the ground point cloud beyond the first preset distance according to the position of the first target object comprises:
    determining a slope of the ground where the first target object is located according to the position of the first target object;
    determining the ground point cloud beyond the first preset distance according to the slope of the ground.
  15. The method according to claim 14, characterized in that the determining the slope of the ground where the first target object is located according to the position of the first target object comprises:
    determining, according to positions of at least three first target objects, a slope of a plane formed by the at least three first target objects, the slope of the plane being the slope of the ground where the first target objects are located.
  16. The method according to any one of claims 13-15, characterized in that the determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance comprises:
    determining, according to the ground point cloud beyond the first preset distance, a point cloud cluster corresponding to the second target object beyond the first preset distance, a bottom of the second target object being in the same plane as a bottom of the first target object;
    detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to a distance of the second target object relative to the movable platform, and determining the object type of the second target object.
  17. The method according to claim 16, characterized in that the determining, according to the ground point cloud beyond the first preset distance, the point cloud cluster corresponding to the second target object beyond the first preset distance comprises:
    clustering the three-dimensional point cloud obtained by removing the ground point cloud from the three-dimensional point cloud beyond the first preset distance, to obtain a partial point cloud corresponding to the second target object;
    determining the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
  18. The method according to claim 17, characterized in that the method further comprises:
    if the second target object is a vehicle and a width of the second target object is less than or equal to a first width, removing from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a first height, to obtain a remaining three-dimensional point cloud corresponding to the second target object;
    if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to a second width, removing from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a second height, to obtain the remaining three-dimensional point cloud corresponding to the second target object;
    generating, according to the remaining three-dimensional point cloud corresponding to the second target object, a recognition box for characterizing the vehicle, the recognition box being used by the movable platform for navigation decisions;
    wherein the second width is greater than the first width, and the second height is greater than the first height.
  19. A detection system for a target object, characterized by comprising: a detection device, a memory, and a processor;
    the detection device is configured to detect the environment around a movable platform to obtain a three-dimensional point cloud;
    the memory is configured to store program code;
    the processor invokes the program code and, when the program code is executed, performs the following operations:
    acquiring the three-dimensional point cloud;
    clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of the cluster center of a clustered point cloud cluster meets a preset height condition;
    determining a target detection model according to a distance of the first target object relative to the movable platform and a correspondence between the distance and detection models;
    detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining an object type of the first target object.
  20. The system according to claim 19, characterized in that before the processor detects the point cloud cluster corresponding to the first target object through the target detection model and determines the object type of the first target object, the processor is further configured to:
    determine a movement direction of the first target object;
    adjust the movement direction of the first target object to a preset direction.
  21. The system according to claim 20, characterized in that the preset direction is a movement direction of a sample object used for training the detection model.
  22. The system according to claim 21, characterized in that when determining the movement direction of the first target object, the processor is specifically configured to:
    determine the movement direction of the first target object according to a three-dimensional point cloud corresponding to the first target object at a first moment and a three-dimensional point cloud corresponding to the first target object at a second moment.
  23. The system according to claim 22, characterized in that when determining the movement direction of the first target object according to the three-dimensional point clouds corresponding to the first target object at the first moment and at the second moment, the processor is specifically configured to:
    project the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system respectively;
    determine the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
  24. The system according to claim 21, characterized in that when determining the movement direction of the first target object, the processor is specifically configured to:
    project the three-dimensional point cloud corresponding to the first target object at a first moment into a two-dimensional image of the first moment to obtain a first projection point;
    project the three-dimensional point cloud corresponding to the first target object at a second moment into a two-dimensional image of the second moment to obtain a second projection point;
    determine three-dimensional information of a first feature point according to the first projection point and the first feature point in the two-dimensional image of the first moment, wherein the first feature point is a feature point whose positional relationship with the first projection point meets a preset positional relationship;
    determine three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image of the second moment, wherein the second feature point is a feature point whose positional relationship with the second projection point meets a preset positional relationship, and the second feature point corresponds to the first feature point;
    determine the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.
  25. The system according to claim 24, characterized in that when determining the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image of the first moment, the processor is specifically configured to:
    determine a weight coefficient corresponding to the first projection point according to a distance between the first projection point and the first feature point in the two-dimensional image of the first moment;
    determine the three-dimensional information of the first feature point according to the weight coefficient corresponding to the first projection point and three-dimensional information of the first projection point.
  26. The system according to claim 24, characterized in that when determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image of the second moment, the processor is specifically configured to:
    determine a weight coefficient corresponding to the second projection point according to a distance between the second projection point and the second feature point in the two-dimensional image of the second moment;
    determine the three-dimensional information of the second feature point according to the weight coefficient corresponding to the second projection point and three-dimensional information of the second projection point.
  27. The system according to any one of claims 24-26, characterized in that before determining the movement direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the processor is further configured to:
    convert the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into a world coordinate system respectively.
  28. The system according to any one of claims 19-27, characterized in that after the processor detects the point cloud cluster corresponding to the first target object through the target detection model and determines the object type of the first target object, the processor is further configured to:
    if the first target object is determined to be a vehicle through the target detection model, verify a detection result of the target detection model according to a preset condition.
  29. The system according to claim 28, characterized in that the preset condition comprises at least one of the following:
    a size of the first target object meets a preset size;
    a coincidence between the first target object and other target objects around the first target object is less than a preset threshold.
  30. The system according to any one of claims 19-29, characterized in that before the three-dimensional point cloud is clustered to obtain the point cloud cluster corresponding to the first target object, the processor is further configured to:
    remove a specific point cloud from the three-dimensional point cloud, the specific point cloud comprising a ground point cloud.
  31. The system according to any one of claims 19-27, characterized in that the distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
    after the processor detects the point cloud cluster corresponding to the first target object through the target detection model and determines the object type of the first target object, the processor is further configured to:
    if the first target object is determined to be a vehicle through the target detection model, determine a ground point cloud beyond the first preset distance according to a position of the first target object;
    determine an object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
  32. The system according to claim 31, characterized in that when determining the ground point cloud beyond the first preset distance according to the position of the first target object, the processor is specifically configured to:
    determine a slope of the ground where the first target object is located according to the position of the first target object;
    determine the ground point cloud beyond the first preset distance according to the slope of the ground.
  33. The system according to claim 32, characterized in that when determining the slope of the ground where the first target object is located according to the position of the first target object, the processor is specifically configured to:
    determine, according to positions of at least three first target objects, a slope of a plane formed by the at least three first target objects, the slope of the plane being the slope of the ground where the first target objects are located.
  34. The system according to any one of claims 31-33, characterized in that when determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, the processor is specifically configured to:
    determine, according to the ground point cloud beyond the first preset distance, a point cloud cluster corresponding to the second target object beyond the first preset distance, a bottom of the second target object being in the same plane as a bottom of the first target object;
    detect the point cloud cluster corresponding to the second target object through a detection model corresponding to a distance of the second target object relative to the movable platform, and determine the object type of the second target object.
  35. The system according to claim 34, characterized in that when determining, according to the ground point cloud beyond the first preset distance, the point cloud cluster corresponding to the second target object beyond the first preset distance, the processor is specifically configured to:
    cluster the three-dimensional point cloud obtained by removing the ground point cloud from the three-dimensional point cloud beyond the first preset distance, to obtain a partial point cloud corresponding to the second target object;
    determine the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
  36. The system according to claim 35, characterized in that the processor is further configured to:
    if the second target object is a vehicle and a width of the second target object is less than or equal to a first width, remove from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a first height, to obtain a remaining three-dimensional point cloud corresponding to the second target object;
    if the second target object is a vehicle, the width of the second target object is greater than the first width, and the width of the second target object is less than or equal to a second width, remove from the point cloud cluster corresponding to the second target object the three-dimensional points whose height is greater than or equal to a second height, to obtain the remaining three-dimensional point cloud corresponding to the second target object;
    generate, according to the remaining three-dimensional point cloud corresponding to the second target object, a recognition box for characterizing the vehicle, the recognition box being used by the movable platform for navigation decisions; wherein the second width is greater than the first width, and the second height is greater than the first height.
  37. A movable platform, characterized by comprising:
    a body;
    a power system, mounted on the body and configured to provide moving power;
    and the detection system for a target object according to any one of claims 19-36.
  38. The movable platform according to claim 37, characterized in that the movable platform comprises a drone, a movable robot, or a vehicle.
  39. A computer-readable storage medium, characterized in that a computer program is stored thereon, the computer program being executed by a processor to implement the method according to any one of claims 1-18.
PCT/CN2019/105158 2019-09-10 2019-09-10 目标对象的检测方法、系统、设备及存储介质 WO2021046716A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/105158 WO2021046716A1 (zh) 2019-09-10 2019-09-10 目标对象的检测方法、系统、设备及存储介质
CN201980033130.6A CN112154454A (zh) 2019-09-10 2019-09-10 目标对象的检测方法、系统、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105158 WO2021046716A1 (zh) 2019-09-10 2019-09-10 目标对象的检测方法、系统、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021046716A1 true WO2021046716A1 (zh) 2021-03-18

Family

ID=73891475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105158 WO2021046716A1 (zh) 2019-09-10 2019-09-10 目标对象的检测方法、系统、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112154454A (zh)
WO (1) WO2021046716A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220207822A1 (en) * 2020-12-29 2022-06-30 Volvo Car Corporation Ensemble learning for cross-range 3d object detection in driver assist and autonomous driving systems
CN112835061B (zh) * 2021-02-04 2024-02-13 郑州衡量科技股份有限公司 基于ToF传感器的动态车辆分离与宽高检测方法与系统
CN112906519B (zh) * 2021-02-04 2023-09-26 北京邮电大学 一种车辆类型识别方法及装置
CN112907745B (zh) * 2021-03-23 2022-04-01 北京三快在线科技有限公司 一种数字正射影像图生成方法及装置
CN113894050B (zh) * 2021-09-14 2023-05-23 深圳玩智商科技有限公司 物流件分拣方法、分拣设备及存储介质
CN113838196A (zh) * 2021-11-24 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 点云数据的处理方法、装置、设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975907B (zh) * 2016-04-27 2019-05-21 江苏华通晟云科技有限公司 基于分布式平台的svm模型行人检测方法
CN106204586B (zh) * 2016-07-08 2019-07-19 华南农业大学 一种基于跟踪的复杂场景下的运动目标检测方法
CN107895386A (zh) * 2017-11-14 2018-04-10 中国航空工业集团公司西安飞机设计研究所 一种多平台联合目标自主识别方法
CN108197566B (zh) * 2017-12-29 2022-03-25 成都三零凯天通信实业有限公司 一种基于多路神经网络的监控视频行为检测方法
CN109813277B (zh) * 2019-02-26 2021-07-16 北京中科慧眼科技有限公司 测距模型的构建方法、测距方法、装置以及自动驾驶系统
CN109902629A (zh) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 一种复杂交通场景下的实时车辆目标检测模型

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140029856A1 (en) * 2012-07-30 2014-01-30 Microsoft Corporation Three-dimensional visual phrases for object recognition
CN108171796A (zh) * 2017-12-25 2018-06-15 燕山大学 一种基于三维点云的巡检机器人视觉系统及控制方法
CN108317953A (zh) * 2018-01-19 2018-07-24 东北电力大学 一种基于无人机的双目视觉目标表面3d检测方法及系统
CN108319920A (zh) * 2018-02-05 2018-07-24 武汉武大卓越科技有限责任公司 一种基于线扫描三维点云的路面标线检测及参数计算方法
CN108680100A (zh) * 2018-03-07 2018-10-19 福建农林大学 三维激光点云数据与无人机点云数据匹配方法

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076922A (zh) * 2021-04-21 2021-07-06 北京经纬恒润科技股份有限公司 一种物体检测方法及装置
CN113076922B (zh) * 2021-04-21 2024-05-10 北京经纬恒润科技股份有限公司 一种物体检测方法及装置
CN113610967A (zh) * 2021-08-13 2021-11-05 北京市商汤科技开发有限公司 三维点检测的方法、装置、电子设备及存储介质
CN113610967B (zh) * 2021-08-13 2024-03-26 北京市商汤科技开发有限公司 三维点检测的方法、装置、电子设备及存储介质
CN113781639A (zh) * 2021-09-22 2021-12-10 交通运输部公路科学研究所 一种大场景道路基础设施数字化模型快速构建方法
CN113781639B (zh) * 2021-09-22 2023-11-28 交通运输部公路科学研究所 一种大场景道路基础设施数字化模型快速构建方法
CN114162126A (zh) * 2021-12-28 2022-03-11 上海洛轲智能科技有限公司 车辆控制方法、装置、设备、介质及产品
WO2023202401A1 (zh) * 2022-04-19 2023-10-26 京东科技信息技术有限公司 点云数据中目标的检测方法、装置和计算机可读存储介质
CN115457496A (zh) * 2022-09-09 2022-12-09 北京百度网讯科技有限公司 自动驾驶的挡墙检测方法、装置及车辆
CN115457496B (zh) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 自动驾驶的挡墙检测方法、装置及车辆
CN115600395A (zh) * 2022-10-09 2023-01-13 南京领鹊科技有限公司(Cn) 一种室内工程质量验收评价方法及装置

Also Published As

Publication number Publication date
CN112154454A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021046716A1 (zh) 目标对象的检测方法、系统、设备及存储介质
US11320833B2 (en) Data processing method, apparatus and terminal
JP7073315B2 (ja) 乗物、乗物測位システム、及び乗物測位方法
EP3598874B1 (en) Systems and methods for updating a high-resolution map based on binocular images
KR102221695B1 (ko) 자율주행을 위한 고정밀 지도의 업데이트 장치 및 방법
US10152059B2 (en) Systems and methods for landing a drone on a moving base
CN111448478B (zh) 用于基于障碍物检测校正高清地图的系统和方法
US9070289B2 (en) System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
JP5926228B2 (ja) 自律車両用の奥行き検知方法及びシステム
Bounini et al. Autonomous vehicle and real time road lanes detection and tracking
CN111263960B (zh) 用于更新高清晰度地图的设备和方法
US10872246B2 (en) Vehicle lane detection system
JP2022517940A (ja) ポットホール検出システム
KR102117313B1 (ko) 그래디언트 추정 장치, 그래디언트 추정 방법, 컴퓨터 프로그램 및 제어 시스템
JP2016157197A (ja) 自己位置推定装置、自己位置推定方法およびプログラム
CN110969064A (zh) 一种基于单目视觉的图像检测方法、装置及存储设备
CN112700486B (zh) 对图像中路面车道线的深度进行估计的方法及装置
CN111213153A (zh) 目标物体运动状态检测方法、设备及存储介质
CN113033280A (zh) 拖车姿态估计的系统和方法
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
Jiménez et al. Improving the lane reference detection for autonomous road vehicle control
TWI680898B (zh) 近距離障礙物之光達偵測裝置及其方法
JP7337617B2 (ja) 推定装置、推定方法及びプログラム
WO2024036984A1 (zh) 目标定位方法及相关系统、存储介质

Legal Events

Code 121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 19945323; Country of ref document: EP; Kind code of ref document: A1.

Code NENP: Non-entry into the national phase. Ref country code: DE.

Code 122 (EP): PCT application non-entry in European phase. Ref document number: 19945323; Country of ref document: EP; Kind code of ref document: A1.