CN111507973A - Target detection method and device, electronic equipment and storage medium - Google Patents

Target detection method and device, electronic equipment and storage medium

Info

Publication number
CN111507973A
Authority
CN
China
Prior art keywords
information
grid
obstacle
point cloud
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010314166.6A
Other languages
Chinese (zh)
Other versions
CN111507973B (en)
Inventor
周辉
洪方舟
王哲
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202010314166.6A
Publication of CN111507973A
Priority to JP2021577017A
Priority to PCT/CN2021/087424
Priority to KR1020217043313A
Application granted
Publication of CN111507973B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a target detection method and apparatus, an electronic device, and a storage medium. The method includes: acquiring point cloud information, where the point cloud information at least includes point cloud information corresponding to a target object and to an object to be detected; obtaining grid information according to the point cloud information, where the grid information at least includes the object to be detected; and identifying the obstacle in the object to be detected according to the grid information.

Description

Target detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a target detection method and apparatus, an electronic device, and a storage medium.
Background
Target detection of obstacles is an important link in ensuring safe driving in automatic driving. For target detection, a neural-network-based deep learning technique can be used to predict the possible size and position of an obstacle; however, the accuracy of target detection achieved with deep learning depends on the specific types of training data and on the quality of the training algorithm, which leads to low target detection accuracy for obstacles. No effective solution to this problem exists in the related art.
Disclosure of Invention
The present disclosure provides a technical solution for target detection.
According to an aspect of the present disclosure, there is provided an object detection method, the method including:
acquiring point cloud information, wherein the point cloud information at least includes point cloud information corresponding to a target object and to an object to be detected;
obtaining grid information according to the point cloud information, wherein the grid information at least includes the object to be detected;
and identifying the obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the acquiring point cloud information includes:
acquiring a plurality of pieces of to-be-processed point cloud information obtained respectively by scanning with at least two sensors;
and stitching the plurality of pieces of to-be-processed point cloud information to obtain the point cloud information.
In a possible implementation, the point cloud information further includes a sensor identifier;
obtaining grid information according to the point cloud information, including:
carrying out gridding processing on the point cloud information to obtain a grid map, wherein the grid map includes a plurality of grid areas;
determining whether an obstacle exists in a target grid area of the plurality of grid areas according to the category of the sensor identifier included in the target grid area;
and obtaining the grid information under the condition that the target grid area has an obstacle.
In a possible implementation manner, the determining whether an obstacle exists in a target grid area of the multiple grid areas according to a category of a sensor identifier included in the target grid area includes:
and determining that the obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
In a possible implementation, the point cloud information further includes height information;
obtaining the grid information in the case that the target grid area has an obstacle further comprises:
determining the type of obstacles existing in the target grid area according to the height information;
and updating the grid information according to the type of the obstacle.
In a possible implementation manner, the determining the category of the obstacle existing in the target grid area according to the height information includes:
acquiring sensor identifications and height information corresponding to at least two pixel points in the target grid area respectively;
dividing the at least two pixel points according to the sensor identification, and taking the pixel points corresponding to the same sensor identification as a group of data to obtain a plurality of groups of pixel point data;
respectively determining the number corresponding to the minimum height value in each group of pixel point data in the multiple groups of pixel point data according to the height information;
and determining the type of the obstacles according to the number corresponding to the minimum height value.
In a possible implementation manner, the identifying an obstacle in the object to be detected according to the grid information includes:
performing connected region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, after the obstacle in the object to be detected is identified according to the connected region, the method further includes:
acquiring a plurality of to-be-processed points on a first line segment of the connected region;
selecting at least two reference points from the plurality of to-be-processed points;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
In a possible implementation manner, after the obstacle in the object to be detected is identified according to the connected region, the method further includes:
extracting point cloud information corresponding to a target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object;
acquiring at least two obstacles located in the grid information;
taking the center point of the target position as a reference, and obtaining a sector area from guide lines emitted at a preset angle;
and deleting the second obstacle from the grid information in a case that the sector area covers a first obstacle and a second obstacle and the second obstacle is occluded by the first obstacle.
In a possible implementation, the method includes:
and sending a message that an obstacle exists on the navigation path to the target object, so that the target object, in response to the message, performs obstacle avoidance processing and/or replans the navigation path according to the obstacle.
According to an aspect of the present disclosure, there is also provided an object detection apparatus, the apparatus including:
the acquisition unit is configured to acquire point cloud information, where the point cloud information at least includes point cloud information corresponding to a target object and to an object to be detected;
the information processing unit is configured to obtain grid information according to the point cloud information, where the grid information at least includes the object to be detected;
and the detection unit is configured to identify the obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the acquisition unit is configured to:
acquire a plurality of pieces of to-be-processed point cloud information obtained respectively by scanning with at least two sensors;
and stitch the plurality of pieces of to-be-processed point cloud information to obtain the point cloud information.
In a possible implementation, the point cloud information further includes a sensor identifier;
the information processing unit is configured to:
carrying out gridding processing on the point cloud information to obtain a grid map, wherein the grid map includes a plurality of grid areas;
determining whether an obstacle exists in a target grid area of the plurality of grid areas according to the category of the sensor identifier included in the target grid area;
and obtaining the grid information under the condition that the target grid area has an obstacle.
In a possible implementation manner, the information processing unit is configured to:
and determining that the obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
In a possible implementation, the point cloud information further includes height information;
the apparatus further comprises a category determination unit configured to:
determining the type of obstacles existing in the target grid area according to the height information;
and updating the grid information according to the type of the obstacle.
In a possible implementation manner, the category determining unit is configured to:
acquiring sensor identifications and height information corresponding to at least two pixel points in the target grid area respectively;
dividing the at least two pixel points according to the sensor identification, and taking the pixel points corresponding to the same sensor identification as a group of data to obtain a plurality of groups of pixel point data;
respectively determining the number corresponding to the minimum height value in each group of pixel point data in the multiple groups of pixel point data according to the height information;
and determining the type of the obstacles according to the number corresponding to the minimum height value.
In a possible implementation manner, the detection unit is configured to:
performing connected region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, the apparatus further includes a connected component adjusting unit, configured to:
acquiring a plurality of to-be-processed points on a first line segment of the connected region;
selecting at least two reference points from the plurality of to-be-processed points;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
In a possible implementation manner, the apparatus further includes: an occlusion processing unit configured to:
extracting point cloud information corresponding to a target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object;
acquiring at least two obstacles located in the grid information;
taking the center point of the target position as a reference, and obtaining a sector area from guide lines emitted at a preset angle;
and deleting the second obstacle from the grid information in a case that the sector area covers a first obstacle and a second obstacle and the second obstacle is occluded by the first obstacle.
In a possible implementation manner, the apparatus further includes a sending unit, configured to:
and sending a message that an obstacle exists on the navigation path to the target object, so that the target object, in response to the message, performs obstacle avoidance processing and/or replans the navigation path according to the obstacle.
According to an aspect of the present disclosure, there is also provided an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above target detection method.
According to an aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above object detection method.
According to the present disclosure, grid information is obtained from point cloud information that at least includes point cloud information corresponding to a target object and to an object to be detected; the grid information at least includes the object to be detected, and the obstacle in the object to be detected is identified according to the grid information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of a target detection method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of grid information according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating different ring IDs of pixel points in a grid area according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram illustrating pixel points in a grid area originating from the same ring ID according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of obstacle point information in each grid area according to an embodiment of the present disclosure.
Fig. 6a and 6b show schematic diagrams of the connectivity of connected regions according to an embodiment of the disclosure.
FIG. 7 shows a schematic diagram of obstacles in a grid map according to an embodiment of the present disclosure.
FIG. 8 shows a schematic diagram of deleting occluded obstacles in a grid map according to an embodiment of the present disclosure.
Fig. 9 shows a block diagram of an object detection apparatus according to an embodiment of the present disclosure.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the disclosure.
FIG. 11 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Detection of a target object, such as a vehicle or a pedestrian in an automatic driving or unmanned driving scene, can be realized with a neural-network-based deep learning technique. Target detection based on deep learning is analyzed as follows:
On the one hand, the accuracy of target detection achieved with deep learning depends on specific types of training data, so the applicable application scenarios are limited; that is, the method is feasible for the specific scenarios covered by the selected training data and cannot be generalized to other, non-specific scenarios. For example, for a common specific scene such as the detection of vehicles or pedestrians, a large amount of relevant data has been accumulated; this data serves as training data of specific types, objects matching the learned features are then searched for in the input data, and the target detection accuracy in that specific scene is ensured. However, for an unusual object, such as a tree trunk of random shape or an obstacle like a stone, an object of this kind has never been seen during training, so it is difficult for the deep learning technique to detect such an obstacle, and difficult to apply the technique to other, non-specific scenes; a neural network trained on one specific scene therefore performs poorly on a different type of scene and generalizes badly. Moreover, deep learning essentially fits a complex function to given data (an expected target) so that data from the same distribution yields correct results after being input into the fitted function; obtaining such a matching hypothesis often makes the training process overly complicated and is prone to overfitting. If the input data does not follow the distribution of the training data, the results given are not necessarily accurate. Since it is difficult for training data to cover all possible road situations, only detection targeted at specific training data and at some specific scenario of interest gives a reasonably reliable result.
On the other hand, the accuracy of target detection achieved with deep learning also depends on the quality of the training algorithm. The behavior of deep learning is not completely controllable, and the prediction result for given input data is unpredictable, so the ideal of a 100% recall rate is difficult to achieve. The recall rate is the number of objects identified through target detection divided by the number of actual objects; in an automatic driving or unmanned driving scene, the higher the recall rate, the higher the driving safety.
In summary, target detection based on deep learning in an automatic driving or unmanned driving scene is better suited to detecting target objects such as vehicles or pedestrians; for obstacles in the road it cannot reach the precision that obstacle detection requires. Obstacle detection precision, however, is an important link in ensuring safe driving in automatic driving; if the required precision is not reached, the safety of automatic driving or unmanned driving cannot be guaranteed.
Fig. 1 shows a flowchart of a target detection method according to an embodiment of the present disclosure. The method is applied to a target detection apparatus; for example, the apparatus may be deployed on a terminal device, a server, or another processing device, and may perform processing such as target detection or target classification in automatic driving. The terminal device may be User Equipment (UE), a mobile device, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the method may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in fig. 1, the process includes:
s101, point cloud information is obtained, wherein the point cloud information at least comprises a target object and point cloud information corresponding to an object to be detected.
In an example, a plurality of pieces of to-be-processed point cloud information obtained respectively by scanning with at least two sensors may be acquired, and the plurality of pieces of to-be-processed point cloud information may be stitched to obtain the point cloud information, so that subsequent gridding processing may be performed on the point cloud information to obtain grid information.
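As a minimal sketch of this stitching step (not the patent's implementation), the per-sensor scans can simply be concatenated once they share a common coordinate frame; the array layout and function name below are assumptions:

```python
import numpy as np

# Hypothetical layout: each scan is an (N, 5) array of [x, y, z, intensity, ring_id]
# rows, already transformed into a common vehicle-centered frame.
def stitch_point_clouds(scans):
    """Stitch the to-be-processed point clouds from each sensor into one cloud."""
    return np.concatenate(scans, axis=0)

# usage: two sensors' scans merged into a single point cloud
cloud = stitch_point_clouds([np.random.rand(100, 5), np.random.rand(80, 5)])
```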
In one example, the at least two sensors may be a plurality of sensors having laser transmitting and receiving functions in the laser radar.
In one example, the target object may refer to a target device scanned by the at least two sensors during target detection, such as a vehicle in an automatic driving or unmanned driving scene. The target object in the present disclosure is not limited to such a target device and may also include a blind-guide pedestrian or the like.
In an example, the object to be detected may refer to an object related to the target object in the target detection process. If the target object is a vehicle in an automatic driving or unmanned driving scene, the objects to be detected related to the target object may be stones, leaves, roadblocks, and the like on the driving route that matter for safe driving. The object to be detected may also refer to an object that appears in the same observation picture as the target object; for example, still taking a vehicle as the target object, such objects may be roadside billboards that the vehicle passes, trees and their crowns, and the like.
S102, obtaining grid information according to the point cloud information, where the grid information at least includes the object to be detected.
In one example, the point cloud information may include the point cloud information of the target object, such as the point cloud information corresponding to a vehicle in an automatic driving or unmanned driving scene, and may further include the point cloud information corresponding to objects to be detected, such as small stones, leaves, roadblocks, roadside billboards, trees and their crowns. It should be noted that, in an automatic driving or unmanned driving scene, small stones, leaves, and roadblocks among the objects to be detected are the obstacles to be identified subsequently, whereas roadside billboards, trees, and their crowns lie outside the driving path of the vehicle and therefore need not be considered obstacles; excluding them not only reduces the amount of computation but also improves the accuracy of obstacle detection.
In an example, the point cloud information may be subjected to gridding processing to obtain a grid map composed of a plurality of grid areas, as shown in fig. 2. Fig. 2 shows a schematic diagram of grid information according to an embodiment of the present disclosure; the grid information of the present disclosure may be implemented as a grid map or in other diagram forms, without limitation. In fig. 2, the grid map includes a plurality of grid areas 11, and each grid area contains one or more pixel points (fig. 2 takes a plurality of pixel points per grid area as an example). Whether a grid area containing pixel points has obstacle points needs to be identified and marked with obstacle point information, and the sensor identifier (ring ID) in the point cloud information may be used in this identification process. For example, obstacle point information may be marked in the grid areas of the grid map according to the ring ID, as shown in fig. 5. Fig. 5 shows a schematic diagram of obstacle point information in each grid area according to an embodiment of the disclosure, taking the numbers "0" and "1" as the obstacle information: if a grid area is marked "0", no obstacle point exists in it; if it is marked "1", obstacle points exist in it. A grid map containing obstacle point information is thus obtained, and the obstacle in the object to be detected is identified according to this grid map.
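A minimal gridding sketch, under the assumed names below (points as [x, y, z, intensity, ring_id] rows, cell_size as the real-world side length of one grid area, and the vehicle at the map center), might look like:

```python
def rasterize(points, n_rows, n_cols, cell_size):
    """Distribute points into grid areas; each cell keeps (height, ring_id) pairs."""
    grid = [[[] for _ in range(n_cols)] for _ in range(n_rows)]
    for x, y, z, intensity, ring_id in points:
        r = int(x / cell_size) + n_rows // 2  # (N/2, M/2) is the vehicle center
        c = int(y / cell_size) + n_cols // 2
        if 0 <= r < n_rows and 0 <= c < n_cols:
            grid[r][c].append((z, int(ring_id)))
    return grid
```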
S103, identifying the obstacle in the object to be detected according to the grid information.
In an example, the grid information may be a grid map containing obstacle point information, according to which the obstacle in the object to be detected can be identified: for example, a grid area marked "1" indicates that an obstacle point exists there, and a connected region corresponding to an obstacle may be obtained by connecting a plurality of obstacle points.
In the present disclosure, a non-deep-learning technique is combined with point cloud information. Compared with implementations that depend on specific types of training data and on the quality of a training algorithm in deep learning, the method can scan the surroundings of the target object with at least two sensors to obtain point cloud information including the target object and the object to be detected, and can obtain grid information at least including the object to be detected from that point cloud information. Because the grid information contains obstacle information, the obstacle in the object to be detected can be identified from it, improving the target detection precision for obstacles.
In one example, during scanning by the at least two sensors, the point cloud information may be obtained from the scanning detection signals sent by a sensor and the received return signals. For example, the sensor transmits scanning detection signals toward the vehicle and an obstacle, receives the return signals reflected from them, and compares the return signals with the transmitted signals; parameters such as position information, height information, distance information, speed information, attitude information, and shape information can thus be obtained, and the vehicle and the obstacle can be tracked and identified according to these parameters.
It should be noted that the point cloud information of the present disclosure is a set of massive points expressing the spatial distribution and surface characteristics of objects in a target area under the same spatial reference system. For each pixel point it records a three-dimensional coordinate (the X/Y dimensions calibrate the position information among the above parameters, and the third dimension Z calibrates the height information), color information (RGB), laser reflection intensity (Intensity) information, and the like.
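One plausible way to model such a point record (the field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class CloudPoint:
    x: float          # X/Y calibrate the position information
    y: float
    z: float          # the third dimension calibrates the height information
    r: int            # color information (RGB)
    g: int
    b: int
    intensity: float  # laser reflection intensity
    ring_id: int      # identifies the sensor that produced this point
```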
In an example, the ring ID may be obtained from the point cloud information (besides the three-dimensional coordinate, RGB, and Intensity information, each pixel point also carries its corresponding ring ID). Whether an obstacle exists in a target grid area among the plurality of grid areas is determined according to the categories of the ring IDs contained in the target grid area, and the grid information is obtained when an obstacle exists in the target grid area, as follows:
Fig. 3 shows a schematic diagram of pixel points with different ring IDs in a grid area according to an embodiment of the present disclosure. As shown in fig. 3, the figure includes a sensor 21, a sensor 22, a sensor 23, an obstacle 24, and a plurality of pixel points (identified by ①-⑥ respectively). It should be noted that the triangle is only a schematic representation of an obstacle and is not a limitation on its actual shape. The laser beams emitted by the sensor 21 and the sensor 22 would not originally fall into the target grid area where the obstacle 24 is located; because the obstacle 24 exists in the target grid area, those laser beams are reflected. When the laser beam 211 emitted by the sensor 21 is reflected by the obstacle 24, the corresponding pixel point ① falls into the target grid area in the point cloud information obtained by the scanning of the sensor 21; likewise, when the laser beams emitted by the sensor 22 are reflected by the obstacle 24, the corresponding pixel points fall into the target grid area, and the pixel points obtained by the scanning of the sensor 23 fall into the target grid area as well. The pixel points falling into the target grid area thus originate from different sensors and correspond to different ring IDs, and it can be determined that an obstacle exists in the target grid area.
It should be noted that, in practical applications, the sensors in fig. 3 (the sensor 21, the sensor 22, and the sensor 23) are not necessarily arranged dispersedly as drawn; they may also be arranged side by side, or even mounted together while presenting different projection angles.
Fig. 4 shows a schematic diagram of pixel points in a grid area originating from the same ring ID according to an embodiment of the present disclosure. As shown in fig. 4, no obstacle exists in the target grid area; the figure includes a sensor 31 and a plurality of pixel points (identified by ⑦-⑩ respectively). The laser beams 311, 312, 313, and 314 emitted by the sensor 31 encounter no obstacle, and the pixel points ⑦, ⑧, ⑨, and ⑩ fall into the target grid area. It can be seen that these pixel points are all obtained by the same sensor and therefore correspond to the same ring ID; in this case, no obstacle exists in the target grid area.
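A hedged sketch of this ring-ID test: a target grid area is flagged when at least two of its pixel points carry different ring IDs. The cell representation follows the earlier gridding sketch:

```python
def cell_has_obstacle(cell_points):
    """cell_points: iterable of (height, ring_id) pairs within one grid area."""
    ring_ids = {ring_id for _, ring_id in cell_points}
    # laser from more than one sensor was reflected into this cell
    return len(ring_ids) >= 2
```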
The objects to be detected contained in the point cloud information may include obstacles such as small stones and leaves, as well as other objects considered non-obstacles in an automatic driving or unmanned driving scene, such as tree crowns and signboards. Therefore, in addition to the obstacle determination by ring ID, height information may be added to check the obstacles determined by ring ID, avoiding misjudgments such as recognizing tree crowns or signboards as obstacles. Objects in the air such as crowns and signboards also receive laser from a plurality of sensors and are therefore easily misjudged as obstacles; in an automatic driving or unmanned driving scene they should not count as obstacles, and they are much higher than typical obstacles such as stones and leaves. The height information of the pixel points in the point cloud information can therefore be used to exclude objects such as crowns and signboards, wrongly identified as obstacles, from the grid area.
In an example, in a case that the point cloud information further includes height information and the target grid area has an obstacle, obtaining the grid information further includes: determining the category of the obstacle existing in the target grid area according to the height information, and updating the grid information according to the category of the obstacle. For example, when the grid information is a grid map, a more accurate grid map containing obstacle point information is obtained after the update, for use in the subsequent target detection processing.
In one example, determining the category of the obstacle existing in the target grid area according to the height information includes: acquiring the ring IDs and height information corresponding to at least two pixel points in the target grid area; dividing the at least two pixel points according to ring ID, taking the pixel points corresponding to the same ring ID as one group of data, to obtain multiple groups of pixel point data; determining, according to the height information, the number corresponding to the minimum height value in each group of the multiple groups of pixel point data; and determining the category of the obstacle according to the number corresponding to the minimum height value. In one example, the number corresponding to the minimum height value may be compared with a threshold range to determine the category of the obstacle, which amounts to classification statistics. The threshold range may come from a classification result obtained by dividing based on ring ID: if the number corresponding to the minimum height value is greater than or equal to a number threshold (ring_count_th) and the minimum height value is less than a height threshold (height_th), an obstacle is considered to exist in that grid area of the grid map. For example, ring_count_th may be set to 3, and height_th may be the height of the vehicle, for example 2 m.
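A sketch of this height check under the stated thresholds; the grouping and comparison follow the description above, and the function and variable names are assumptions:

```python
def classify_cell(cell_points, ring_count_th=3, height_th=2.0):
    """Decide whether a flagged cell holds a real (low, continuous) obstacle."""
    # minimum height observed for each distinct ring ID in this cell
    min_by_ring = {}
    for z, ring_id in cell_points:
        min_by_ring[ring_id] = min(z, min_by_ring.get(ring_id, float("inf")))
    # rings whose lowest return is below the height threshold
    low = [z for z in min_by_ring.values() if z < height_th]
    # enough rings saw a low structure -> obstacle; high-only returns
    # (tree crown, signboard) are treated as non-obstacles
    return "obstacle" if len(low) >= ring_count_th else "non-obstacle"
```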
After a plurality of division results are obtained by dividing based on ring ID, connected region analysis can be performed on the grid map to obtain connected regions, and the obstacle in the object to be detected is identified according to the connected regions. The obstacle may be represented as a polygon such as a concave polygon, a convex polygon, a rectangle, or a triangle, as long as it can be distinguished from other objects. An example of the present disclosure adopts convex polygons: on the one hand, a convex polygon has more sides than a rectangle or triangle and represents the shape of an obstacle more accurately; on the other hand, compared with a concave polygon, a convex polygon introduces no extra computation, so the computational cost is moderate.
In an example, the connected region analysis may search for connected grid areas containing obstacle point information according to the obstacle point information marked as "0" or "1" in the grid areas as shown in fig. 5, thereby forming a "connected region".
Fig. 6a and 6b show schematic diagrams of connected region connectivity according to an embodiment of the disclosure. The connected region operation may be implemented with a Breadth-First Search (BFS) algorithm, using one of two connectivity schemes: 4-adjacency or 8-adjacency. In one example, the smallest unit in an image is a pixel, and each pixel has 8 neighboring pixels, giving two adjacency relationships: 4-adjacency (as shown in fig. 6a) and 8-adjacency (as shown in fig. 6b). 4-adjacency involves 4 points in total, the pixels above, below, left, and right; 8-adjacency involves 8 pixels, including the diagonal ones. If a pixel A is adjacent to a pixel B, A is connected with B; connected points form one region, and unconnected points form other, different regions. The set of all mutually connected pixels is thus called a "connected region". Obstacles can be obtained by the connected region operation. Fig. 7 shows a schematic diagram of obstacles in the grid map according to an embodiment of the disclosure; as shown in fig. 7, the grid map contains a plurality of obstacles represented by convex polygons.
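A minimal BFS connected-component sketch over the binary grid map ("1" = obstacle point) with 4-adjacency; switching to 8-adjacency only extends the neighbor offsets with the four diagonals:

```python
from collections import deque

def connected_regions(grid):
    """Return the connected regions of a 2D grid of 0/1 obstacle flags."""
    rows, cols = len(grid), len(grid[0])
    seen, regions = set(), []
    for sr in range(rows):
        for sc in range(cols):
            if grid[sr][sc] != 1 or (sr, sc) in seen:
                continue
            region, queue = [], deque([(sr, sc)])
            seen.add((sr, sc))
            while queue:
                r, c = queue.popleft()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-adjacency
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] == 1 and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            regions.append(region)  # each region corresponds to one object
    return regions
```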
In one example, after the obstacle in the object to be detected is identified according to the connected region, the method further includes: acquiring a plurality of to-be-processed points on a first line segment of the connected region, selecting at least two reference points from them, connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area. The first area may be smaller than the connected region. If the obstacle is a convex polygon, this adjustment process may be called convex hull processing: for example, if a line segment forming the connected region (the first line segment) has 10 to-be-processed points, 6 reference points are selected from them, and connecting the 6 reference points yields the second line segment; adjusting the connected region according to the second line segment yields the first area, which is smaller than the connected region. In other words, after convex hull processing, the number of edges used to represent the obstacle decreases (fewer points mean fewer edges), the convex polygon is smaller than the initial shape, and the amount of computation is reduced.
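As one plausible realization of this reference-point selection, the standard monotone-chain convex hull below picks the hull vertices from a region's boundary points; it is a general algorithm, not necessarily the patent's exact procedure:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of a set of (x, y) points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints; far fewer vertices than the raw boundary
    return lower[:-1] + upper[:-1]
```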
In an example, after the obstacle in the object to be detected is identified according to the connected region, the method further includes: extracting the point cloud information corresponding to the target object from the point cloud information, and obtaining the target position of the target object in the grid information according to the coordinates of the pixel points in that point cloud information; acquiring at least two obstacles located in the grid information; taking the center point of the target position as a reference, obtaining a sector area from guide lines emitted at a preset angle; and deleting the second obstacle from the grid information in a case that the sector area covers a first obstacle and a second obstacle and the second obstacle is occluded by the first obstacle. Fig. 8 shows a schematic diagram of deleting occluded obstacles in a grid map according to an embodiment of the present disclosure. As shown in fig. 8, the grid map includes the target object and at least two obstacles: the target object may be a vehicle 41, the first obstacle may be an obstacle 42, and the second obstacle may be an obstacle 43 (there may be one or more second obstacles; fig. 8 takes one as an example, which is not a limitation). When the sector area obtained from the guide lines covers the obstacle 42 and the obstacle 43 and the obstacle 43 is occluded by the obstacle 42, the obstacle 43 cannot be observed from the vehicle 41, so this occluded, smaller obstacle is deleted from the grid map.
In one example, the method includes: sending a message that an obstacle exists on the navigation path to the target object (such as a vehicle), so that the target object, in response to the message, performs obstacle avoidance processing and/or replans the navigation path according to the obstacle.
Application example:
an application example according to the above embodiment includes the following:
First, a plurality of sensors in the laser radar scan to obtain point cloud information containing the target object and the object to be detected; the point cloud information is input to obtain grid information containing the object to be detected, where the grid information may be a grid map marked with obstacle point information.
One or more laser radars may contain a plurality of sensors; together these sensors construct the point cloud information of the whole scene, and the whole scanning area (the area covered by the group of point clouds scanned by each laser radar at the same time or within the same period) is mapped to the grid map. Because each laser transmitter in a laser radar points at a different angle to the horizontal plane, every full rotation of the laser radar lets each sensor scan the point cloud information of one ring at its own angle.
If no raised obstacle stands on a grid area of the grid map, the grid area is a plane nearly level with the ground. Laser emitted by the sensors adjacent to the one aimed at that grid area is not blocked by it, so the laser arriving at the grid area comes from a single sensor; the pixel points falling into the grid area therefore come from the same sensor and carry the same ring ID, that is, they were scanned by the same sensor. If a raised obstacle stands on a grid area, laser emitted by the adjacent sensors is blocked by the raised obstacle, so the laser arriving at the grid area comes from different sensors; the pixel points falling into the grid area then correspond to a plurality of sensors and carry different ring IDs, that is, they were scanned by different sensors.
Further, judging whether an obstacle exists in a grid area by the number of ring IDs of the pixel points falling into it can be optimized. For objects in the air such as tree crowns and signboards, laser from a plurality of sensors also lands in the same grid area, so the pixel points falling into the grid area likewise correspond to a plurality of sensors with different ring IDs; yet when the target object is a vehicle, such objects in the air do not belong to the obstacles the vehicle cares about, and crowns and signboards must be excluded from the obstacles the vehicle has to avoid. Therefore, the height information of the pixel points is taken into account to check the obstacles obtained from the ring IDs, filtering out "obstacles" above a certain height, which further improves obstacle detection precision.
It should be noted that if the input point cloud information is a fusion of the scanning results of a plurality of laser radars, an N × M grid map can be constructed for the point cloud information scanned by each laser radar; the side length of each grid can be preset to represent 0.1 m in reality, and the coordinate (N/2, M/2) is set as the center of the vehicle.
When judging whether an obstacle exists in a grid area according to ring ID and height information, the pixel points in the point cloud information scanned by a single laser radar are distributed into the grids according to their position information. For each grid area, the ring IDs of the points distributed to it are counted (the same ring ID is not counted repeatedly); the pixel points corresponding to the same ring ID are taken as one group of data, to obtain multiple groups of pixel point data; the number corresponding to the minimum height value in each group is determined according to the height information; and the category of the obstacle is determined according to that number. In one example, the number corresponding to the minimum height value may be compared with a threshold range to determine the category of the obstacle: if, in some classification result, the number corresponding to the minimum height value is greater than or equal to ring_count_th and the minimum height value is less than height_th, the grid area is considered to contain an obstacle. The advantage of this classification statistic is that it finds a stretch of the obstacle that is continuous in height, rather than a single point. Finally, one grid map is obtained for each laser radar, and an OR operation over the elements of these grid maps gives the output, i.e. the grid map with obstacle point information. An example of the OR operation: in a grid map, "1" means an obstacle exists and "0" means none. Given two 1 × 3 grid maps, [1, 0, 0] and [0, 1, 0], the OR operation is applied to the corresponding grid areas; a position is marked "1" if one or both of the areas are marked "1", so the result is [1, 1, 0].
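The OR-merge from the example above, written out:

```python
a = [1, 0, 0]  # grid map of lidar 1: "1" = obstacle point present
b = [0, 1, 0]  # grid map of lidar 2
merged = [x | y for x, y in zip(a, b)]  # -> [1, 1, 0]
```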
In one example, when computing the ring IDs of the pixel points in each grid area, the point cloud information becomes sparse on distant objects, so a compensation method is adopted when counting around a grid area: n = round(1 + a × distance), where round denotes rounding and a is a small preset constant. The classification computation first sorts all height values in the minimum-height-value array; if the difference between two consecutive items in the sorted array is greater than a certain threshold (gap_th), the array is split into two classes at that point.
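A sketch of this gap-based split, assuming min_heights is the per-ring minimum-height-value array of one grid area:

```python
def split_by_gap(min_heights, gap_th):
    """Sort heights and start a new class wherever consecutive values differ > gap_th."""
    heights = sorted(min_heights)
    classes, current = [], [heights[0]]
    for prev, cur in zip(heights, heights[1:]):
        if cur - prev > gap_th:
            classes.append(current)  # height gap too large: a new class begins
            current = []
        current.append(cur)
    classes.append(current)
    return classes
```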
The value of gap_th can be corrected according to the distance between the grid area and the center of the vehicle; for example, different compensation schemes are adopted for different sensor installation positions, angles, point cloud sparsity, and other conditions.
The value of ring_count_th may be compensated according to the sparsity of the point cloud information; in one example a fixed value is taken, for example 3. For height_th, since the sensor (which may be a laser radar mounted on the vehicle) has a certain elevation angle, a fixed value cannot be used, and an angle correction may be made according to the distance from the grid area to the center of the vehicle; for example, in one example, assuming the tangent of the correction angle is a, height_th = 1 + a × distance, in meters.
Second, connected region analysis is performed on the obstacle points in the grid map to obtain connected regions, and obstacles represented by convex polygons are obtained from the connected regions.
After the grid map is obtained, the value of each grid area indicates whether it contains an obstacle. Due to the sparsity of the point cloud information, some large objects are split into several parts; the grid map can be processed with an image dilation algorithm so that the parts of the same object become connected. Next, connected region analysis is performed (each connected region represents an object, such as an obstacle). For each connected region the convex hull is computed, and then a convex hull simplification such as the Ramer-Douglas-Peucker algorithm is applied to each hull, which reduces the number of edges of the hull and the amount of computation. Finally, FOV analysis is performed to remove occlusion, deleting small obstacles that cannot be observed from the vehicle center point.
One example of the convex hull simplification includes: 1. For the polyline to be simplified, connect a straight line AB between its first point A and last point B. 2. Traverse the polyline to find the point C farthest from the line AB, and compute the distance between C and AB. 3. Compare that distance with a preset threshold; if it is smaller than the threshold, take the line AB as the approximation of this polyline, and the processing of this polyline is finished. 4. If the distance is greater than the threshold, use the point C to split the polyline into the two parts AC and CB, and apply steps 1-4 to each part. 5. When all parts have been processed, connect in order the polyline formed by all the dividing points; it can be taken as the approximation of the initial polyline, giving the updated convex hull.
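A recursive sketch of these steps (names assumed; the distance is the standard perpendicular point-to-line distance):

```python
import math

def rdp(points, threshold):
    """Ramer-Douglas-Peucker simplification of a polyline of (x, y) points."""
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    def dist(p):
        # perpendicular distance from p to the line through a and b
        num = abs((b[1] - a[1]) * p[0] - (b[0] - a[0]) * p[1]
                  + b[0] * a[1] - b[1] * a[0])
        den = math.hypot(b[1] - a[1], b[0] - a[0]) or 1e-12
        return num / den
    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) < threshold:
        return [a, b]                       # AB approximates this polyline
    left = rdp(points[:idx + 1], threshold)  # split at the farthest point C
    right = rdp(points[idx:], threshold)
    return left[:-1] + right
```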
One example of the FOV analysis includes: for every two convex hulls C1 and C2, check whether C1 can be observed from the host vehicle position despite the occlusion by C2. For each point P on C1, connect the vehicle center point A with P and detect whether the line AP passes through C2; this detection can judge, through cross-product operations, whether all points of C2 lie on the same side of the line AP, and if they all lie on one side, the line AP is considered not to pass through C2. Traversing the points on C1 gives the number n of points on C1 that cannot be observed from the vehicle position; if n is greater than or equal to a threshold fov_th, C1 is considered invisible and is deleted.
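A sketch of this visibility test; side() gives the cross-product sign that tells which side of the line AP a vertex of C2 lies on, and a point P counts as unobservable when C2's vertices are not all on one side:

```python
def side(a, p, q):
    # sign of (p - a) x (q - a): which side of line AP the point q lies on
    return (p[0] - a[0]) * (q[1] - a[1]) - (p[1] - a[1]) * (q[0] - a[0])

def line_passes_through(a, p, hull2):
    signs = [side(a, p, q) for q in hull2]
    same_side = all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
    return not same_side  # all on one side -> AP does not pass through C2

def occluded_point_count(a, hull1, hull2):
    # the number n of points on C1 not observable from the vehicle center A;
    # compare n with fov_th to decide whether C1 is deleted
    return sum(line_passes_through(a, p, hull2) for p in hull1)
```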
One example of the correction required for the value of fov_th is fov_th = min(1, ceil(constant_point_num × (1 − distance/a))), where constant_point_num is the number of points on the corresponding convex hull, distance is the distance from the convex hull to the vehicle, a is a large constant that can be taken as the maximum perceivable distance value, and ceil is the upward rounding function.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
The above-mentioned method embodiments can be combined with each other to form a combined embodiment without departing from the principle logic, which is limited by the space and will not be repeated in this disclosure.
In addition, the present disclosure also provides a target detection apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the target detection methods provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated.
Fig. 9 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the apparatus includes: an acquisition unit 51, configured to acquire point cloud information, where the point cloud information at least includes point cloud information corresponding to a target object and to an object to be detected; an information processing unit 52, configured to obtain grid information according to the point cloud information, where the grid information at least includes the object to be detected; and a detection unit 53, configured to identify the obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the obtaining unit is configured to: acquiring a plurality of point cloud information to be processed which is respectively obtained by scanning through at least two sensors; and splicing the plurality of point cloud information to be processed to obtain the point cloud information.
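For illustration, a minimal sketch of the splicing step, under the assumption that each sensor provides an (N, 3) array of x, y, z points together with a known 4×4 sensor-to-vehicle extrinsic matrix; the names and shapes are assumptions, not the disclosed implementation:

```python
import numpy as np

def splice_point_clouds(clouds, extrinsics):
    """Merge per-sensor point clouds into one cloud in the vehicle frame.

    clouds: list of (N_i, 3) arrays of points, one per sensor.
    extrinsics: list of (4, 4) sensor-to-vehicle transform matrices.
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        merged.append((homo @ T.T)[:, :3])               # into vehicle frame
    return np.vstack(merged)
```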
In a possible implementation manner, the point cloud information further includes a ring ID, and the information processing unit is configured to: carrying out gridding processing on the point cloud information to obtain a grid map, wherein the grid map comprises a plurality of grid areas; determining whether an obstacle exists in a target grid area according to the category of ring IDs included in the target grid area in the plurality of grid areas; and obtaining the grid information under the condition that the target grid area has an obstacle.
In a possible implementation manner, the information processing unit is configured to: and under the condition that ring IDs corresponding to at least two pixel points in the target grid area are different, determining that an obstacle exists in the target grid area.
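A minimal sketch of the gridding and ring-ID test, assuming points carried as dictionaries with x, y, z and ring fields and a dictionary-based grid map; the 0.2 m cell size is likewise an assumption:

```python
from collections import defaultdict

def build_grid_map(points, cell_size=0.2):
    """Bucket point cloud entries into 2-D grid cells.

    points: iterable of dicts like {"x": ..., "y": ..., "z": ..., "ring": ...}.
    Returns {(gx, gy): [points falling in that cell]}.
    """
    grid = defaultdict(list)
    for p in points:
        key = (int(p["x"] // cell_size), int(p["y"] // cell_size))
        grid[key].append(p)
    return grid

def cell_has_obstacle(cell_points):
    """Flag a cell as containing an obstacle when returns from at least
    two different ring IDs fall into it."""
    return len({p["ring"] for p in cell_points}) >= 2
```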
In a possible implementation manner, the point cloud information further includes height information, and the apparatus further includes a category determining unit configured to: determining the type of obstacles existing in the target grid area according to the height information; and updating the grid information according to the type of the obstacle.
In a possible implementation manner, the category determining unit is configured to: acquiring ring IDs and height information corresponding to at least two pixel points in the target grid area respectively; dividing the at least two pixel points according to the ring IDs, and taking the pixel points corresponding to the same ring ID as a group of data to obtain a plurality of groups of pixel point data; respectively determining the number corresponding to the minimum height value in each group of pixel point data in the multiple groups of pixel point data according to the height information; and determining the type of the obstacles according to the number corresponding to the minimum height value.
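One plausible reading of this rule, sketched below: group the cell's points by ring ID, count within each group the points lying at that group's minimum height, and classify from those counts. The tolerance and the category labels are illustrative assumptions:

```python
from collections import defaultdict

def obstacle_category(cell_points, tol=0.05):
    """Classify a grid cell's obstacle from per-ring minimum heights."""
    heights = defaultdict(list)
    for p in cell_points:
        heights[p["ring"]].append(p["z"])
    # Per ring: how many points sit at (near) that ring's minimum height.
    counts = [sum(1 for z in zs if z - min(zs) <= tol)
              for zs in heights.values()]
    # Illustrative decision rule: many returns piled at the minimum
    # height suggests a solid, ground-connected obstacle.
    return "ground_connected" if max(counts) >= 3 else "suspended"
```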
In a possible implementation manner, the detection unit is configured to: performing connected-region analysis according to the grid information to obtain a connected region; and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, the apparatus further includes a connected region adjusting unit, configured to: acquiring a plurality of points to be processed on a first line segment of the connected region; selecting at least two reference points from the plurality of points to be processed; and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area. In one example, the first area may be smaller than the connected region.
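A sketch of the connected-region analysis and the segment adjustment, using scipy's component labeling on a boolean occupancy grid; the every-third-point reference selection is an assumption rather than a rule disclosed here:

```python
import numpy as np
from scipy import ndimage

def connected_regions(occupancy):
    """Label 8-connected obstacle regions in a boolean occupancy grid;
    returns (label image, number of regions)."""
    return ndimage.label(occupancy, structure=np.ones((3, 3)))

def adjust_segment(segment_points, step=3):
    """Keep every `step`-th point of a boundary segment as a reference
    point and reconnect them into a shorter second segment."""
    refs = list(segment_points[::step])
    if refs[-1] != segment_points[-1]:
        refs.append(segment_points[-1])  # always keep the endpoint
    return refs
```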
In a possible implementation manner, the apparatus further includes an occlusion processing unit, configured to: extracting the point cloud information corresponding to the target object from the point cloud information, and obtaining a target position of the target object in the grid information according to the coordinates of pixel points in that point cloud information; acquiring at least two obstacles located in the grid information; taking the central point of the target position as a reference, obtaining a sector area according to guide lines emitted at a preset angle; and deleting the second obstacle from the grid information under the condition that the sector area covers a first obstacle and a second obstacle and the second obstacle is shielded by the first obstacle.
In a possible implementation manner, the apparatus further includes a sending unit, configured to: sending a message that an obstacle exists on the navigation path to the target object, so that the target object, in response to the message, performs obstacle avoidance processing and/or replans the navigation path according to the obstacle.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 10 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 10, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 11 is a block diagram illustrating an electronic device 900 in accordance with an example embodiment. For example, the electronic device 900 may be provided as a server. Referring to fig. 11, electronic device 900 includes a processing component 922, which further includes one or more processors, and memory resources, represented by memory 932, for storing instructions, such as applications, that are executable by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 922 is configured to execute instructions to perform the above-described methods.
The electronic device 900 may further include a power supply component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in the memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 932, is also provided that includes computer program instructions executable by the processing component 922 of the electronic device 900 to perform the above-described method.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Different embodiments of the present application may be combined with each other without departing from the logic. The descriptions of the different embodiments each have their own emphasis; for a part not described in detail in one embodiment, reference may be made to the descriptions of the other embodiments.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A method of object detection, the method comprising:
acquiring point cloud information, wherein the point cloud information at least comprises point cloud information corresponding to a target object and to an object to be detected;
obtaining grid information according to the point cloud information, wherein the grid information at least comprises an object to be detected;
and identifying the obstacle in the object to be detected according to the grid information.
2. The method of claim 1, wherein the obtaining point cloud information comprises:
acquiring a plurality of point cloud information to be processed which is respectively obtained by scanning through at least two sensors;
and splicing the plurality of point cloud information to be processed to obtain the point cloud information.
3. The method of claim 2, wherein the point cloud information further comprises a sensor identification;
obtaining grid information according to the point cloud information, including:
carrying out gridding processing on the point cloud information to obtain a grid map, wherein the grid map comprises a plurality of grid areas;
determining whether an obstacle exists in a target grid area of the plurality of grid areas according to the category of the sensor identifier included in the target grid area;
and obtaining the grid information under the condition that the target grid area has an obstacle.
4. The method of claim 3, wherein determining whether an obstacle is present in a target grid area of the plurality of grid areas based on a category of sensor identifications included in the target grid area comprises:
and determining that the obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
5. The method of claim 3 or 4, wherein the point cloud information further comprises height information;
the obtaining the grid information under the condition that the target grid area has the obstacle further comprises:
determining the type of obstacles existing in the target grid area according to the height information;
and updating the grid information according to the type of the obstacle.
6. The method of claim 5, wherein determining the category of obstacles present in the target mesh region based on the altitude information comprises:
acquiring sensor identifications and height information corresponding to at least two pixel points in the target grid area respectively;
dividing the at least two pixel points according to the sensor identification, and taking the pixel points corresponding to the same sensor identification as a group of data to obtain a plurality of groups of pixel point data;
respectively determining the number corresponding to the minimum height value in each group of pixel point data in the multiple groups of pixel point data according to the height information;
and determining the type of the obstacles according to the number corresponding to the minimum height value.
7. The method according to any one of claims 1 to 6, wherein the identifying obstacles in the object to be detected according to the grid information comprises:
performing connected-region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
8. The method according to claim 7, wherein after identifying an obstacle in the object to be detected based on the connected region, the method further comprises:
acquiring a plurality of points to be processed on a first line segment of the connected region;
selecting at least two reference points from the plurality of points to be processed;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
9. The method according to claim 7, wherein after identifying an obstacle in the object to be detected based on the connected region, the method further comprises:
extracting point cloud information corresponding to a target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object;
acquiring at least two obstacles located in the grid information;
taking the central point of the target position as a reference, and obtaining a sector area according to guide lines emitted at a preset angle;
and deleting the second obstacle from the grid information under the condition that the sector area covers a first obstacle and a second obstacle and the second obstacle is shielded by the first obstacle.
10. The method according to any one of claims 1-9, characterized in that the method comprises:
and sending a message that an obstacle exists on the navigation path to the target object, so that the target object responds to the message that the obstacle exists, and performs obstacle avoidance processing and/or replans the navigation path according to the obstacle.
11. An object detection apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring point cloud information, wherein the point cloud information at least comprises point cloud information corresponding to a target object and to an object to be detected;
the information processing unit is used for obtaining grid information according to the point cloud information, and the grid information at least comprises an object to be detected;
and the detection unit is used for identifying the obstacles in the object to be detected according to the grid information.
12. The apparatus of claim 11, wherein the obtaining unit is configured to:
acquiring a plurality of point cloud information to be processed which is respectively obtained by scanning through at least two sensors;
and splicing the plurality of point cloud information to be processed to obtain the point cloud information.
13. The apparatus of claim 12, wherein the point cloud information further comprises a sensor identification;
the information processing unit is configured to:
carrying out gridding processing on the point cloud information to obtain a grid map, wherein the grid map comprises a plurality of grid areas;
determining whether an obstacle exists in a target grid area of the plurality of grid areas according to the category of the sensor identifier included in the target grid area;
and obtaining the grid information under the condition that the target grid area has an obstacle.
14. The apparatus of claim 13, wherein the information processing unit is configured to:
and determining that the obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
15. The apparatus of claim 13 or 14, wherein the point cloud information further comprises height information;
the apparatus further comprises a category determination unit configured to:
determining the type of obstacles existing in the target grid area according to the height information;
and updating the grid information according to the type of the obstacle.
16. The apparatus of claim 15, wherein the category determining unit is configured to:
acquiring sensor identifications and height information corresponding to at least two pixel points in the target grid area respectively;
dividing the at least two pixel points according to the sensor identification, and taking the pixel points corresponding to the same sensor identification as a group of data to obtain a plurality of groups of pixel point data;
respectively determining the number corresponding to the minimum height value in each group of pixel point data in the multiple groups of pixel point data according to the height information;
and determining the type of the obstacles according to the number corresponding to the minimum height value.
17. The apparatus according to any one of claims 11 to 16, wherein the detection unit is configured to:
performing connected-region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
18. The apparatus of claim 17, further comprising a connected region adjusting unit for:
acquiring a plurality of points to be processed on a first line segment of the connected region;
selecting at least two reference points from the plurality of points to be processed;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
19. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method of any one of claims 1 to 10.
20. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202010314166.6A 2020-04-20 2020-04-20 Target detection method and device, electronic equipment and storage medium Active CN111507973B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010314166.6A CN111507973B (en) 2020-04-20 2020-04-20 Target detection method and device, electronic equipment and storage medium
JP2021577017A JP2022539093A (en) 2020-04-20 2021-04-15 Target detection method and device, electronic device, storage medium, and program
PCT/CN2021/087424 WO2021213241A1 (en) 2020-04-20 2021-04-15 Target detection method and apparatus, and electronic device, storage medium and program
KR1020217043313A KR20220016221A (en) 2020-04-20 2021-04-15 Target detection method and apparatus, electronic device, storage medium and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010314166.6A CN111507973B (en) 2020-04-20 2020-04-20 Target detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111507973A (en) 2020-08-07
CN111507973B CN111507973B (en) 2024-04-12

Family

ID=71878738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010314166.6A Active CN111507973B (en) 2020-04-20 2020-04-20 Target detection method and device, electronic equipment and storage medium

Country Status (4)

Country Link
JP (1) JP2022539093A (en)
KR (1) KR20220016221A (en)
CN (1) CN111507973B (en)
WO (1) WO2021213241A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112697188A (en) * 2020-12-08 2021-04-23 北京百度网讯科技有限公司 Detection system test method and apparatus, computer device, medium, and program product
WO2021213241A1 (en) * 2020-04-20 2021-10-28 上海商汤临港智能科技有限公司 Target detection method and apparatus, and electronic device, storage medium and program
CN113901970A (en) * 2021-12-08 2022-01-07 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device
CN115330969A (en) * 2022-10-12 2022-11-11 之江实验室 Local static environment vectorization description method for ground unmanned vehicle

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445802A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Point cloud processing method and device and vehicle
CN117091516B (en) * 2022-05-12 2024-05-28 广州镭晨智能装备科技有限公司 Method, system and storage medium for detecting thickness of circuit board protective layer

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957145A (en) * 2016-04-29 2016-09-21 百度在线网络技术(北京)有限公司 Road barrier identification method and device
WO2018180285A1 (en) * 2017-03-31 2018-10-04 パイオニア株式会社 Three-dimensional data generation device, three-dimensional data generation method, three-dimensional data generation program, and computer-readable recording medium having three-dimensional data generation program recorded thereon
JP6969738B2 (en) * 2017-07-10 2021-11-24 株式会社Zmp Object detection device and method
US10354444B2 (en) * 2017-07-28 2019-07-16 The Boeing Company Resolution adaptive mesh that is generated using an intermediate implicit representation of a point cloud
JP7056842B2 (en) * 2018-03-23 2022-04-19 株式会社豊田中央研究所 State estimator and program
JP7128577B2 (en) * 2018-03-30 2022-08-31 セコム株式会社 monitoring device
CN111507973B (en) * 2020-04-20 2024-04-12 上海商汤临港智能科技有限公司 Target detection method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779280A (en) * 2012-06-19 2012-11-14 武汉大学 Traffic information extraction method based on laser sensor
CN106951847A (en) * 2017-03-13 2017-07-14 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN109145677A (en) * 2017-06-15 2019-01-04 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN109840448A (en) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 Information output method and device for automatic driving vehicle
JP2019207655A (en) * 2018-05-30 2019-12-05 株式会社Ihi Detection device and detection system
JP2020038631A (en) * 2018-08-30 2020-03-12 キヤノン株式会社 Information processing apparatus, information processing method, program, and system
CN110147706A (en) * 2018-10-24 2019-08-20 腾讯科技(深圳)有限公司 The recognition methods of barrier and device, storage medium, electronic device
CN109635685A (en) * 2018-11-29 2019-04-16 北京市商汤科技开发有限公司 Target object 3D detection method, device, medium and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LANXIANG ZHENG ET AL: "The Obstacle Detection Method of UAV Based on 2D Lidar", vol. 7, pages 163437 - 163448, XP011757889, DOI: 10.1109/ACCESS.2019.2952173 *
LOU XINYU ET AL: "Research on a Real-Time Road Obstacle Detection and Classification Algorithm Using a 64-Beam Lidar", vol. 41, no. 8, pages 779 - 784 *

Also Published As

Publication number Publication date
JP2022539093A (en) 2022-09-07
CN111507973B (en) 2024-04-12
KR20220016221A (en) 2022-02-08
WO2021213241A1 (en) 2021-10-28

Similar Documents

Publication Publication Date Title
CN111507973A (en) Target detection method and device, electronic equipment and storage medium
US11468581B2 (en) Distance measurement method, intelligent control method, electronic device, and storage medium
EP3252658B1 (en) Information processing apparatus and information processing method
US20210312214A1 (en) Image recognition method, apparatus and non-transitory computer readable storage medium
CN111340766B (en) Target object detection method, device, equipment and storage medium
US11308809B2 (en) Collision control method and apparatus, and storage medium
US20200082561A1 (en) Mapping objects detected in images to geographic positions
CN110543850B (en) Target detection method and device and neural network training method and device
CN111624622B (en) Obstacle detection method and device
EP3553752A1 (en) Information processing apparatus, information processing method, and computer-readable medium for generating an obstacle map
US11204610B2 (en) Information processing apparatus, vehicle, and information processing method using correlation between attributes
CN111881827B (en) Target detection method and device, electronic equipment and storage medium
CN111213153A (en) Target object motion state detection method, device and storage medium
CN109696173A (en) A kind of car body air navigation aid and device
KR20200095338A (en) Method and device for providing advanced pedestrian assistance system to protect pedestrian preoccupied with smartphone
KR20180086794A (en) Method and apparatus for generating an image representing an object around a vehicle
CN114419572A (en) Multi-radar target detection method and device, electronic equipment and storage medium
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN110390252B (en) Obstacle detection method and device based on prior map information and storage medium
CN109829393B (en) Moving object detection method and device and storage medium
CN113450459A (en) Method and device for constructing three-dimensional model of target object
CN115965935A (en) Object detection method, device, electronic apparatus, storage medium, and program product
CN117408935A (en) Obstacle detection method, electronic device, and storage medium
CN113065392A (en) Robot tracking method and device
CN113433965B (en) Unmanned aerial vehicle obstacle avoidance method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant