WO2021213241A1 - Target detection method and apparatus, and electronic device, storage medium and program - Google Patents
- Publication number
- WO2021213241A1 (PCT/CN2021/087424)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- obstacle
- information
- grid
- point
- area
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/10—Segmentation; Edge detection; G06T7/11—Region-based segmentation
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30248—Vehicle exterior or interior; G06T2207/30252—Vehicle exterior; Vicinity of vehicle; G06T2207/30261—Obstacle
Definitions
- the present disclosure relates to the field of automatic driving technology, and in particular to a target detection method and device, electronic equipment, storage medium, and program.
- Target detection of obstacles is an important part of ensuring safe driving in automatic driving.
- Target detection can use deep learning technology based on neural networks to predict the possible size and location of obstacles.
- the accuracy of target detection based on deep learning technology depends on the specific types of training data used and on the quality of the training algorithm, resulting in low target detection accuracy for obstacles.
- the present disclosure proposes a technical solution for target detection.
- a target detection method includes: acquiring point cloud information, where the point cloud information includes at least point cloud information corresponding to a target object and to an object to be detected, the object to be detected being a person or thing around the target object; obtaining grid information according to the point cloud information, the grid information including at least obstacle point information indicating the object to be detected; and identifying, according to the grid information, obstacles in the object to be detected that affect the movement of the target object.
- a target detection device including: an acquisition unit configured to acquire point cloud information, the point cloud information including at least the target object and the point cloud information corresponding to the object to be detected;
- the object to be detected is a person or thing around the target object;
- an information processing unit is configured to obtain grid information according to the point cloud information, and the grid information includes at least obstacle point information indicating the object to be detected
- a detection unit configured to identify, according to the grid information, obstacles in the object to be detected that affect the movement of the target object.
- an electronic device including: a processor; and a memory for storing instructions executable by the processor.
- the processor is configured to execute the above-mentioned target detection method.
- a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned target detection method when executed by a processor.
- a computer program is also provided, the computer program is stored in a storage medium, and when a processor executes the computer program, the processor is used to execute the above-mentioned target detection method.
- grid information is obtained according to point cloud information corresponding to at least the target object and the object to be detected, and the grid information includes at least obstacle point information indicating the object to be detected, so that obstacles in the object to be detected that affect the movement of the target object can be identified from the grid information. Since the content of the point cloud information is relatively rich and is not limited to a specific type of object, such as a vehicle or a pedestrian, the technical solution of the present disclosure is suitable for more target detection scenarios. In addition, by identifying obstacles in the object to be detected according to the grid information including the obstacle point information, the target detection accuracy for obstacles is effectively improved.
- Fig. 1 shows a flowchart of a target detection method according to an embodiment of the present disclosure.
- Fig. 2 shows a schematic diagram of grid information according to an embodiment of the present disclosure.
- Fig. 3 shows a schematic diagram of different ring IDs of pixel sources in a grid area according to an embodiment of the present disclosure.
- Fig. 4 shows a schematic diagram of the source of pixels in the grid area with the same ring ID according to an embodiment of the present disclosure.
- Fig. 5 shows a schematic diagram of obstacle point information in each grid area according to an embodiment of the present disclosure.
- Figures 6a-6b show schematic diagrams of adjacency modes of a connected area according to an embodiment of the present disclosure.
- Fig. 7 shows a schematic diagram of an obstacle in a grid map according to an embodiment of the present disclosure.
- FIG. 8 shows a schematic diagram of deleting occluded obstacles in a grid map according to an embodiment of the present disclosure.
- Fig. 9 shows a block diagram of a target detection device according to an embodiment of the present disclosure.
- FIG. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 11 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
- at least one herein means any one of a plurality of types or any combination of at least two of the plurality of types.
- including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
- Detecting target objects, such as vehicles or pedestrians in autonomous driving or unmanned driving scenes, can be achieved by using deep learning technology based on neural networks.
- the accuracy of target detection based on deep learning technology depends on specific types of training data, which limits the applicable scenarios. That is to say, a neural network trained with deep learning technology is feasible for a certain specific scene related to the selected training data, but cannot be generalized to other, non-specific scenes. For example, for a specific scene, such as target detection of vehicles or pedestrians, since the scene is relatively common, a large amount of related data has been accumulated. Taking these data as a specific type of training data, a neural network trained based on deep learning technology will look for objects that match these types of features in the input data, thereby ensuring the accuracy of target detection in that specific scene.
- the training process may become overly complicated, which is prone to overfitting.
- the results given may not be accurate, because it is difficult for training data to cover all possible road conditions; highly reliable results can only be given for the specific training data and its related specific scenarios.
- the accuracy of target detection based on deep learning technology also depends on the quality of the training algorithm.
- the characteristics of deep learning are not completely controllable; that is, the prediction result for a given input is not fully predictable, so it is difficult to achieve the ideal value of a 100% recall rate.
- the recall rate refers to the number of objects identified through target detection divided by the number of actual objects.
- the higher the recall rate, the higher the safety of driving.
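As a quick numeric illustration of the recall definition above (the function name and numbers are hypothetical, not from the disclosure):

```python
def recall(num_detected_true, num_actual):
    """Recall = objects correctly identified by target detection / actual objects."""
    return num_detected_true / num_actual

# e.g. if 45 of 50 actual obstacles are detected, recall is 0.9 (90%);
# the ideal value of 100% recall corresponds to recall(50, 50) == 1.0
```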
- the use of deep learning technology to achieve target detection in autonomous driving or unmanned driving scenarios is more suitable for the detection of target objects such as vehicles or pedestrians.
- target detection of obstacles in the road, performed to avoid collisions, cannot reach the accuracy required for obstacle detection.
- the accuracy of obstacle detection is an important part of ensuring safe driving in automatic driving. For example, if target detection of obstacles fails to achieve the required accuracy, the safety of autonomous driving or unmanned driving cannot be guaranteed.
- Fig. 1 shows a flowchart of a target detection method according to an embodiment of the present disclosure.
- the method is applied to a target detection device.
- the device can be deployed in a terminal device or a server or other processing equipment, and can perform processing such as target detection or target classification in automatic driving.
- the terminal device may be a user equipment (UE, User Equipment), mobile device, cellular phone, cordless phone, personal digital assistant (PDA, Personal Digital Assistant), handheld device, computing device, in-vehicle device, wearable device, etc.
- the method may be implemented by a processor invoking computer-readable instructions stored in the memory. As shown in Figure 1, the process includes:
- Step S101 Obtain point cloud information, where the point cloud information includes at least the point cloud information corresponding to the target object and the object to be detected.
- multiple pieces of to-be-processed point cloud information obtained through scanning by at least two sensors may be obtained, and the multiple pieces of to-be-processed point cloud information may be spliced to obtain the point cloud information.
- grid processing can be performed according to the point cloud information to obtain grid information.
- the at least two sensors may be sensors with laser emitting and receiving functions in the lidar.
- the target object may refer to a target device scanned by at least two sensors during the target detection process, such as a vehicle in an autonomous driving or unmanned driving scene.
- the target object in the present disclosure is not limited to the target device, and may also include pedestrians, for example in blind-guidance scenarios.
- the object to be detected may refer to an object related to the target object in the target detection process.
- when the target object is a vehicle in an autonomous driving or unmanned driving scene, for safe driving the object to be detected may be stones, leaves, roadblocks, etc. on the driving route of the vehicle.
- the object to be detected may also refer to an object in the same observation frame as the target object during the target detection process.
- taking the target object still being a vehicle as an example, the object to be detected may be a roadside billboard, or a tree and its canopy, in the same observation frame as the vehicle.
- Step S102 Obtain grid information according to the point cloud information.
- the grid information includes at least obstacle point information indicating the object to be detected.
- the point cloud information may include the point cloud information corresponding to the target object, such as the vehicle in the autonomous driving or unmanned driving scene, and the point cloud information corresponding to the object to be detected, such as pebbles, leaves, roadblocks, roadside billboards, trees and their canopies, etc.
- the point cloud information can be gridded to obtain a grid map composed of multiple grid regions.
- Fig. 2 shows a schematic diagram of grid information according to an embodiment of the present disclosure.
- An implementation manner of the grid information of the present disclosure may be a grid graph or other chart forms, which is not limited.
- the grid map contains multiple grid areas 11, and each grid area includes one or more pixels (in Figure 2, each grid area is shown as including multiple pixels as an example).
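The gridding step described above can be sketched in Python as follows. This is an illustrative sketch, not the disclosed implementation: the function name `rasterize`, the point tuple layout `(x, y, z, ring_id)`, and the assumption that the grid is centered on the target object are all hypothetical.

```python
import math

def rasterize(points, cell_size, grid_w, grid_h):
    """Map each 3D point to a grid area indexed by its X/Y coordinates.

    points: iterable of (x, y, z, ring_id) tuples; cell_size in meters.
    The grid is assumed centered on the target object (e.g. the ego vehicle)
    at coordinates (0, 0); points outside the grid extent are dropped.
    Returns a dict mapping (col, row) -> list of points in that grid area.
    """
    grid = {}
    for x, y, z, ring in points:
        col = int(math.floor(x / cell_size)) + grid_w // 2
        row = int(math.floor(y / cell_size)) + grid_h // 2
        if 0 <= col < grid_w and 0 <= row < grid_h:
            grid.setdefault((col, row), []).append((x, y, z, ring))
    return grid
```

Each grid area then holds the pixels falling into it, ready for the obstacle point marking described next.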
- for each grid area in the grid map, it is necessary to identify whether the grid area has a pixel corresponding to the object to be detected, or even to an obstacle (hereinafter referred to as an obstacle point), and to mark it with obstacle point information.
- the grid graph obtained by the gridding process can be regarded as the initial grid graph, that is, the obstacle point information of each grid area is the first value representing "none", such as "0".
- the process of identifying obstacles can be regarded as updating the obstacle point information of each grid area in the grid map (specifically, updating the obstacle point information of a certain grid area from a first value to a second value, for example "1"), and at least the sensor identification (ring ID) in the point cloud information can be used as the basis for the update.
- the obstacle point information can be marked in the grid area according to the ring ID.
- FIG. 5 shows a schematic diagram of obstacle point information in each grid area according to an embodiment of the present disclosure, taking the numbers "0" and "1" as the obstacle point information as an example.
- marking a grid area with the first value "0" indicates that there are no obstacle points in that grid area, and marking it with the second value "1" indicates that there are obstacle points in that grid area.
- a grid map containing obstacle point information is obtained, so that the obstacle in the object to be detected can be identified according to the grid map containing obstacle point information.
- Step S103 Identify obstacles in the object to be detected that affect the movement of the target object according to the grid information.
- the grid information may be a grid graph containing obstacle point information.
- obstacles in the object to be detected can be identified. For example, marking a grid area as "1" indicates that there are obstacle points in that grid area. Connecting multiple obstacle points yields the connected area corresponding to those obstacle points, from which the shape of the object to be detected, and even the shape of the obstacle, can be determined.
- point cloud information corresponding to the target object and the object to be detected is obtained, and according to the point cloud information, grid information at least including obstacle point information indicating whether the object to be detected exists is obtained
- the obstacles in the object to be detected can be identified according to the obstacle point information contained in the grid information, which improves the accuracy of target detection for the obstacles.
- the point cloud information can be obtained according to the scanning detection signal sent by the sensor and the return signal received.
- the sensor transmits a scanning detection signal to the vehicle and its surroundings, then receives the return signal reflected from the vehicle and its surrounding objects, and compares the return signal with the transmitted scanning detection signal to obtain parameters of the vehicle and its surrounding objects, such as position information, height information, distance information, speed information, posture information, and shape information, so that the vehicle and its surrounding objects can be tracked and identified based on these parameters.
- the point cloud information of the present disclosure is a collection of massive points that express the spatial distribution and surface characteristics of objects in the target area under the same spatial reference system. Each pixel point records a combination of items such as three-dimensional coordinates (where the X/Y two-dimensional coordinates calibrate the position information among the above parameters, and the third dimension Z calibrates the height information), color information (RGB), and laser reflection intensity (Intensity).
- the ring ID information of each pixel can be obtained from the point cloud information, and the ring IDs included in a target grid area among the multiple grid areas can be used to determine whether there are obstacle points in the target grid area. Further, in the case that there are obstacle points in the target grid area, updating the grid information may include the following content:
- Fig. 3 shows a schematic diagram of different ring IDs of pixel points in a grid area according to an embodiment of the present disclosure, including a sensor 21, a sensor 22, a sensor 23, an object to be detected 24, and a plurality of pixels (identified by 1-6, respectively).
- the triangular shape of the object 24 to be detected is merely illustrative, and is not intended to limit the actual shape of the object to be detected.
- the laser beams emitted by the sensor 21 and the sensor 22 would not originally fall into the target grid area where the object 24 to be detected is located; they fall into it only because they are blocked and reflected by the object 24 to be detected.
- the sensor 21 scans to obtain point cloud information composed of multiple pixels
- the laser beam 211 emitted by the sensor 21 meets the object 24 to be detected and is reflected, and the pixel 1 falls into the target grid area
- the sensor 22 scans to obtain point cloud information composed of multiple pixels
- the laser beam 221 and the laser beam 222 emitted by the sensor 22 meet the object 24 to be detected and are reflected, and the pixel point 2 and the pixel point 3 fall into the target grid area.
- the laser beam 231, laser beam 232, and laser beam 233 emitted by the sensor 23 do not encounter the object to be detected 24, and the pixel point 4, the pixel point 5, and the pixel point 6 fall into the target grid area.
- the ring IDs corresponding to multiple pixels are different identifiers, which means that the multiple pixels are obtained by different sensors.
- the obstacle point information corresponding to the target grid area is updated from the initial first value to the second value to mark the existence of obstacle points.
- the multiple sensors (sensor 21, sensor 22, and sensor 23) in Figure 3 are not necessarily arranged separately in actual applications; they can also be arranged next to each other, or even arranged together with different projection angles. In Figure 3, the multiple sensors are drawn dispersed merely for intuitiveness; sensor placements that can be conceived by those skilled in the art without creative work are within the protection scope of the present disclosure.
- FIG. 4 shows a schematic diagram of the source of pixels in the grid area with the same ring ID according to an embodiment of the present disclosure, including a sensor 31 and a plurality of pixels (identified by 7-10, respectively).
- the laser beam 311, laser beam 312, laser beam 313, and laser beam 314 emitted by the sensor 31 have not encountered obstacles, and the pixel point 7, the pixel point 8, the pixel point 9, and the pixel point 10 fall into the target grid area.
- the ring IDs corresponding to the multiple pixels are the same identifier, which means that the multiple pixels are obtained by the same sensor. In this case, it can be determined that there are no obstacle points in the target grid area, and the obstacle point information corresponding to the target grid area is maintained at the initial first value.
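The ring-ID rule illustrated by Figures 3 and 4 can be sketched as a short Python check. This is an illustrative sketch, not the disclosed implementation; the names `has_obstacle_point` and `mark_grid` and the `(x, y, z, ring_id)` point layout are assumptions.

```python
def has_obstacle_point(cell_points):
    """Mark a grid area as containing obstacle points (second value 1) when its
    pixels originate from more than one ring (i.e. more than one sensor);
    a single ring ID suggests flat ground, so the area keeps the first value 0."""
    ring_ids = {ring for (_x, _y, _z, ring) in cell_points}
    return 1 if len(ring_ids) > 1 else 0

def mark_grid(grid):
    """Update the obstacle point information of every grid area in one pass.

    grid: dict mapping (col, row) -> list of (x, y, z, ring_id) points.
    Returns a dict mapping (col, row) -> 0 or 1.
    """
    return {cell: has_obstacle_point(points) for cell, points in grid.items()}
```

Applied to Figure 3, a cell receiving pixels from sensors 21, 22, and 23 has three distinct ring IDs and is marked "1"; applied to Figure 4, a cell receiving pixels only from sensor 31 stays "0".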
- the object to be detected included in the point cloud information may be an obstacle such as pebbles or leaves, but also a non-obstacle such as a tree crown or a signboard in autonomous driving or unmanned driving scenarios. Therefore, on the basis of the obstacle point judgment based on the above ring IDs, height information can be further added to verify the obstacle points determined by the ring IDs, so as to avoid possible misjudgments in which non-obstacles such as tree crowns and signboards are also recognized as obstacles. In the case of autonomous driving or unmanned driving, the target object is a vehicle, and objects in the air such as tree crowns and signboards should not be obstacles; they are usually much higher than obstacles such as stones and leaves. Therefore, the height information of the pixels in the point cloud information can be used to exclude non-obstacles such as tree crowns and signboards from the grid area.
- updating the grid information further includes: determining, according to the height information, the category of the obstacle points existing in the target grid area; and updating the obstacle point information corresponding to the target grid area in the grid information according to the category of the obstacle points.
- the grid information is a grid graph marked with obstacle point information
- in the case where the obstacle points are determined not to correspond to an obstacle, the obstacle point information corresponding to the target grid area is updated from the second value back to the first value, which can effectively reduce the probability of the above-mentioned misjudgment.
- in the case where the obstacle points correspond to an obstacle, the obstacle point information corresponding to the target grid area is maintained at the second value. In this way, after the grid information is updated, a more accurate grid map containing only the obstacle point information corresponding to obstacles can be obtained for subsequent target detection processing.
- determining the category of the obstacle points existing in the target grid area according to the height information includes: obtaining ring IDs and height information corresponding to at least two pixels in the target grid area; and dividing the at least two pixels according to the ring ID, with the pixels corresponding to the same ring ID taken as one group of data, to obtain multiple groups of pixel data.
- the minimum height value in each group of pixel data is determined; by classifying and counting the minimum height values of the multiple groups of pixel data, one or more minimum height categories are obtained, and for each minimum height category, the number of height values it includes and its minimum value are determined.
- the category of the obstacle points in the target grid area, that is, whether the obstacle points are pixels corresponding to an obstacle, can be determined according to the number of height values included in each minimum height category corresponding to the target grid area and the minimum value thereof.
- the number of height values included in each minimum height category corresponding to the target grid area can be compared with a number threshold (ring_count_th), and the minimum height value included in each minimum height category can be compared with a height threshold (height_th), to determine the category of the obstacle points in the target grid area.
- if the number of height values included in a target minimum height category is greater than or equal to the number threshold (ring_count_th) and the minimum value of the included height values is less than the height threshold (height_th), the obstacle points existing in the target grid area are considered to correspond to an obstacle.
- for example, ring_count_th may be 3, and height_th may be the height of the vehicle, for example, 2 m.
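The threshold test above can be sketched as follows. This is a simplified illustration: the text classifies per-ring minimum heights into one or more "minimum height categories", whereas this sketch treats all per-ring minima as a single category; the function name and default values are assumptions.

```python
RING_COUNT_TH = 3   # example number threshold from the text
HEIGHT_TH = 2.0     # example height threshold: vehicle height in meters

def is_real_obstacle(cell_points, ring_count_th=RING_COUNT_TH, height_th=HEIGHT_TH):
    """Verify an obstacle candidate using per-ring minimum heights.

    Pixels are grouped by ring ID and the minimum z per ring is taken. If at
    least ring_count_th rings contribute a minimum height, and the smallest of
    those minima lies below height_th, the grid area is kept as an obstacle;
    otherwise (e.g. a tree crown or signboard high in the air) it is demoted.
    """
    per_ring_min = {}
    for (_x, _y, z, ring) in cell_points:
        per_ring_min[ring] = min(z, per_ring_min.get(ring, z))
    minima = list(per_ring_min.values())
    return len(minima) >= ring_count_th and min(minima) < height_th
```

A low stone seen by three rings passes; a tree crown whose per-ring minima all sit above 2 m fails and its grid area is reset to "0".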
- a connected area analysis can be performed on the grid graph to obtain a connected area, and an obstacle in the object to be detected can be identified according to the connected area.
- the obstacle can be represented in the form of a polygon such as a concave polygon, a convex polygon, a rectangle, or a triangle, as long as it can be recognized that the obstacle is different from other objects.
- convex polygons are used.
- based on the obstacle point information marked as "0" or "1" in the grid areas as shown in FIG. 5, the grid areas whose obstacle point information is "1" and that are connected together can be searched, thus forming a "connected area".
- Figures 6a-6b show schematic diagrams of the adjacency modes of a connected area according to an embodiment of the present disclosure.
- the connected area calculation can be implemented by the Breadth First Search (BFS) algorithm.
- the smallest unit in an image is a pixel; around each pixel there are 8 adjacent pixels, and there are 2 types of adjacency: 4-adjacency (as shown in Figure 6a) and 8-adjacency (as shown in Figure 6b).
- 4-adjacency involves a total of 4 points, that is, the four pixels above, below, to the left of, and to the right of a given pixel.
- 8-adjacency additionally includes the 4 points on the diagonal positions, that is, a total of 8 pixels.
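The BFS search over "1"-marked grid areas can be sketched as below; an illustrative sketch, assuming the grid is a mapping from cell coordinates to 0/1 obstacle point information, with the adjacency mode (4 or 8) selectable as in Figures 6a-6b.

```python
from collections import deque

def connected_areas(marks, adjacency=4):
    """Group adjacent obstacle cells (value 1) into connected areas via BFS.

    marks: dict mapping (col, row) -> 0 or 1. adjacency: 4 or 8.
    Returns a list of sets of cell coordinates, one set per connected area.
    """
    if adjacency == 4:
        offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    else:  # 8-adjacency also visits the four diagonal neighbors
        offsets = [(dc, dr) for dc in (-1, 0, 1) for dr in (-1, 0, 1)
                   if (dc, dr) != (0, 0)]
    seen, areas = set(), []
    for cell, value in marks.items():
        if value != 1 or cell in seen:
            continue
        area, queue = set(), deque([cell])
        seen.add(cell)
        while queue:
            c, r = queue.popleft()
            area.add((c, r))
            for dc, dr in offsets:
                neighbor = (c + dc, r + dr)
                if marks.get(neighbor) == 1 and neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        areas.append(area)
    return areas
```

Note that two diagonally touching obstacle cells form one connected area under 8-adjacency but two separate areas under 4-adjacency.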
- FIG. 7 shows a schematic diagram of obstacles in a grid diagram according to an embodiment of the present disclosure. As shown in FIG. 7, the grid diagram contains multiple obstacles represented by convex polygons.
- the method further includes: acquiring a plurality of points to be processed on a first line segment of the connected area; selecting at least two reference points from the plurality of points to be processed; connecting the at least two reference points to obtain a second line segment; and adjusting the connected area according to the second line segment to obtain a first area.
- the first area may be smaller than the connected area. If the obstacle is a convex polygon, the adjustment process of the connected area can be called convex hull processing.
- for example, a certain line segment (called the first line segment) that constitutes the connected area has 10 points to be processed; 6 reference points are selected from the 10 points to be processed, and the 6 reference points are connected to obtain a line segment (called the second line segment).
- the first area can be obtained after adjusting the connected area according to the second line segment, and the first area is smaller than the connected area. That is to say, after convex hull processing, the number of convex edges used to represent the obstacle is reduced (because there are fewer points, the convex edges are reduced accordingly), and the convex polygon is smaller than its original shape. Convex hull processing can reduce the amount of calculation.
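The convex hull processing above can be sketched with Andrew's monotone chain algorithm, one standard way to compute a convex hull; the disclosure does not prescribe a particular algorithm, so this choice is an assumption.

```python
def convex_hull(points):
    """Reduce a connected area's boundary points to the vertices of their
    convex hull (Andrew's monotone chain), so fewer edges represent the
    obstacle polygon. points: list of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): > 0 means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```

Interior points such as a cell in the middle of an obstacle are discarded, leaving only the polygon vertices, which reduces the subsequent amount of calculation.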
- the method further includes: extracting the point cloud information corresponding to the target object from the point cloud information, and obtaining the target position corresponding to the target object according to the coordinates of the pixels in that point cloud information; obtaining at least two obstacles identified based on the grid information; obtaining a fan-shaped area from guide lines emitted at a preset angle, with the center point of the target position as a reference; and, in a case where the fan-shaped area covers a first obstacle and a second obstacle and the second obstacle is blocked by the first obstacle, deleting the obstacle point information of the second obstacle from the grid information.
- the grid diagram contains a target object and at least two obstacles.
- the target object may be a vehicle 41, the first obstacle among the at least two obstacles may be a warning object 42, and the second obstacle among the at least two obstacles may be one or more stones 43.
- a fan-shaped area is obtained according to the guide lines emitted at the preset angle, and the warning object 42 and the one or more stones 43 are all covered by the fan-shaped area.
- the second obstacle is not limited to small stones that are blocked, and can also be grass on the side of the road.
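The occlusion test above can be sketched as follows. This is an illustrative simplification assuming both obstacle polygons lie on one side of the vehicle (the angular interval does not wrap across ±π); the function `occluded` and its names are hypothetical.

```python
import math

def occluded(vehicle, near_poly, far_poly):
    """Return True when far_poly lies entirely inside the angular sector that
    near_poly subtends from the vehicle center and is farther away -- i.e.
    the near obstacle (e.g. warning object 42) blocks the far one (stones 43).

    vehicle: (x, y) center point; polygons: lists of (x, y) vertices.
    """
    def span(poly):
        angles = [math.atan2(y - vehicle[1], x - vehicle[0]) for x, y in poly]
        dists = [math.hypot(x - vehicle[0], y - vehicle[1]) for x, y in poly]
        return min(angles), max(angles), min(dists), max(dists)

    n_lo, n_hi, _n_near, n_far = span(near_poly)
    f_lo, f_hi, f_near, _f_far = span(far_poly)
    # far polygon must sit inside the near polygon's angular sector, behind it
    return n_lo <= f_lo and f_hi <= n_hi and f_near > n_far
```

Obstacles flagged this way can have their obstacle point information deleted from the grid information, since the target object cannot reach them without first encountering the blocking obstacle.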
- the method includes: sending a message that there is an obstacle on the navigation path to a target object (such as a vehicle), so that the target object performs obstacle avoidance processing and/or replans the navigation path in response to the message.
- a target object such as a vehicle
- the grid information may be a grid graph marked with obstacle point information.
- the grid area should be a plane that almost matches the height of the ground.
- the laser light emitted by the sensors adjacent to the sensor corresponding to the grid area will not be obstructed by the grid area, which means all laser light hitting the grid area comes from the same sensor. Therefore, if all pixel points falling in a certain grid area originate from the same sensor, that is, the ring IDs corresponding to all pixels in the grid area are the same, it can be considered that there are no obstacle points corresponding to obstacles in the grid area.
- the laser light emitted by the sensors adjacent to the sensor corresponding to the grid area will be blocked and reflected by a protruding object on the grid area, which means the laser light hitting the grid area comes from different sensors. Therefore, if the pixels falling in a certain grid area correspond to multiple sensors, that is, the ring IDs corresponding to the pixels in the grid area differ, it can be considered that there are obstacle points corresponding to obstacles in the grid area.
- using the number of distinct ring IDs among the pixels falling in a grid area to determine whether there may be an obstacle in the grid area can be further optimized.
- a certain grid area that includes objects in the air such as tree crowns, signboards, etc.
- lasers belonging to multiple sensors will also be emitted into the grid area, that is, the ring IDs corresponding to the pixels that fall into the grid area are different.
- the target object is a vehicle
- overhead objects such as tree crowns and signboards are not obstacles the vehicle needs to pay attention to, and it is necessary to avoid identifying them as obstacles the vehicle must avoid. Therefore, the height information of the pixels can be taken into consideration to check the possible obstacles obtained from the ring IDs and filter out objects above a certain height, thereby further improving the accuracy of obstacle detection.
- an N × M grid map can be constructed for each set of point cloud information scanned by a lidar; the side length of each grid cell can be preset to represent 0.1 m in reality, and the coordinates (N/2, M/2) are set as the center of the vehicle.
- an N × M grid map is directly constructed. Regardless of whether the point cloud information is the fusion of multiple lidar scan results or a single lidar scan result, the following obstacle identification method is used to judge obstacles and obtain a grid map with obstacle point information.
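The grid construction just described (N × M cells, 0.1 m side length, vehicle at cell (N/2, M/2)) amounts to a point-to-cell mapping, sketched below; the default grid size is illustrative:

```python
import math

def point_to_cell(x, y, n=400, m=400, cell_size=0.1):
    """Map a point's (x, y) position in metres, relative to the vehicle
    centre, to a grid cell index. The vehicle sits at cell (n/2, m/2)."""
    i = int(math.floor(x / cell_size)) + n // 2
    j = int(math.floor(y / cell_size)) + m // 2
    if 0 <= i < n and 0 <= j < m:
        return (i, j)
    return None  # point falls outside the grid
```

Points near the vehicle land close to the center cell; anything beyond the grid extent is simply dropped.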
- the pixels in the point cloud information scanned by a single lidar can be allocated to the grid according to the position information. For each grid area, count the ring IDs of the pixels allocated to it (the same ring ID is not counted repeatedly). Then, the pixels corresponding to the same ring ID are used as a set of data to obtain multiple sets of pixel data. Then, according to the height information, the minimum height value in each group of pixel data is determined, and the minimum height values of the multiple groups of pixels are classified and counted to obtain at least one minimum height category.
- the class of possible obstacles is determined according to the number of height values included in the minimum height class and the minimum of these height values.
- the category of obstacle points present in the grid area can be determined by comparing the number of height values included in each minimum height class with a number threshold, and comparing the minimum of the height values included in each minimum height class with a height threshold. If the at least one minimum height class contains a target minimum height class whose number of height values is greater than or equal to the number threshold (ring_count_th) and whose minimum height value is less than the height threshold (height_th), the obstacle points in the grid area are considered to correspond to obstacles that will really affect the target object.
- the advantage of using classification statistics is that it finds a continuous segment of obstacle in height, rather than a single point.
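The per-grid decision above (group pixels by ring ID, take each ring's minimum height, classify the minima, then compare against ring_count_th and height_th) can be sketched as follows. The bin-based clustering and all constant values are illustrative stand-ins, since the text does not fix a particular classification method:

```python
from collections import defaultdict

def cell_has_obstacle(pixels, ring_count_th=3, height_th=0.5, bin_size=0.2):
    """pixels: list of (ring_id, height) falling in one grid cell.
    Group by ring ID, take each ring's minimum height, bin those minima
    (a simple stand-in for the 'classification statistics'), and flag an
    obstacle when some bin holds >= ring_count_th minima whose smallest
    value is below height_th."""
    min_per_ring = {}
    for ring_id, h in pixels:
        if ring_id not in min_per_ring or h < min_per_ring[ring_id]:
            min_per_ring[ring_id] = h
    bins = defaultdict(list)
    for h in min_per_ring.values():
        bins[int(h // bin_size)].append(h)
    return any(len(v) >= ring_count_th and min(v) < height_th
               for v in bins.values())
```

Three rings returning low minima trigger the flag; a single ring (flat ground) or several rings whose minima all sit at canopy height do not.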
- a grid map can be obtained for each lidar, and the multiple grid maps are merged element-wise with an "or" operation to obtain the output result, that is, the grid map with obstacle point information.
- An example of the "or” operation is: "1" in the grid graph indicates an obstacle point, and "0" indicates an obstacle-free point.
- if an obstacle point is detected in a grid area by either lidar, the corresponding position is marked with "1"; for example, the "or" operation on two such grid maps may yield [1, 1, 0].
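The element-wise "or" merge of per-lidar grid maps can be sketched directly; grids are flat lists of 0/1 values here for brevity:

```python
def merge_grids(grids):
    """Element-wise 'or' over per-lidar grid maps (1 = obstacle point)."""
    merged = [0] * len(grids[0])
    for g in grids:
        merged = [a | b for a, b in zip(merged, g)]
    return merged
```

Merging [1, 0, 0] with [0, 1, 0] reproduces the [1, 1, 0] result from the example above.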
- the compensation method using surrounding grid areas can be: statistics are also taken over an n × n range of grid areas centered on the grid area in question, where n is obtained from the threshold gap_th using the round function, and a is a small preset constant.
- gap_th is used as a correction function, and its value can be corrected according to the distance between the grid area and the center of the vehicle (distance). For example, according to different conditions such as the installation position, angle, and point cloud sparseness of the sensor, different compensation schemes are adopted. In an example,
- the unit of the threshold gap_th is meters, and a and b are relatively small constants.
- the calculated gap_th is a small value, for example 0.1 m.
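The text gives only the unit of gap_th and states that a and b are small constants; the exact formula is elided. A linear dependence on the distance to the vehicle center is one plausible reading, shown purely as an assumption (constant values chosen only so the example yields 0.1 m):

```python
def gap_th(distance, a=0.004, b=0.02):
    """Hypothetical linear form of the distance-dependent threshold
    gap_th (metres); a and b are small constants per the text, with
    values picked only for illustration."""
    return a * distance + b
```

At 20 m from the vehicle center this yields the 0.1 m example value mentioned above.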
- the value of the number threshold ring_count_th can be compensated according to the sparseness of the point cloud information.
- a fixed value may be used, for example, 3.
- as for the value of the height threshold height_th: since the sensor (the lidar, which can be installed on the vehicle) has a certain elevation angle, the height threshold height_th cannot be set to a fixed value. It can be corrected by a certain angle according to the distance between the grid area and the center of the vehicle (distance). For example, in an example, assuming that the tangent of the correction angle is a, then let
- the unit of the height threshold height_th is meters. It should be noted that the values of the above-mentioned parameters can be set according to actual conditions, and the specific setting methods are not limited here.
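The formula following "then let" is elided in the text. Assuming the stated tangent a of the correction angle raises the threshold in proportion to the cell's distance from the vehicle center, one hypothetical form is (base value and tan_a are illustrative, not from the patent):

```python
def height_th(distance, base=0.3, tan_a=0.01):
    """Hypothetical angle-based correction: with tangent tan_a of the
    correction angle, raise the base height threshold (metres) linearly
    with the cell's distance from the vehicle centre."""
    return base + tan_a * distance
```

Farther cells thus tolerate slightly taller returns before being classed as obstacles, compensating for the sensor's elevation angle.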
- each grid area indicates whether there is an obstacle point corresponding to an obstacle in that grid area. Due to the sparseness of the point cloud information, some larger objects are split into many parts, so an image dilation algorithm can first be applied to the grid map to connect the multiple parts of the same object. Next, connected area analysis is performed (each connected area can represent one object, such as an obstacle). For each connected region its convex hull is computed, and a convex hull operation, such as the Ramer–Douglas–Peucker algorithm, is then applied to each convex hull to simplify its number of edges and reduce the amount of computation. Finally, FOV analysis is performed to remove small obstacles that cannot be observed from the center of the vehicle.
- An example of convex hull operation includes:
- the polyline formed by connecting the dividing points in turn can be used as an approximation of the initial polyline to obtain the updated convex hull.
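The Ramer–Douglas–Peucker simplification named above can be sketched as a textbook recursive implementation (not the patent's exact code): keep the endpoints, find the point farthest from the chord, and recurse only when it exceeds the tolerance:

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    best_i, best_d = 0, -1.0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        if chord == 0:
            d = math.hypot(x - x1, y - y1)  # degenerate chord: point distance
        else:
            # perpendicular distance from (x, y) to the chord
            d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
        if d > best_d:
            best_i, best_d = i, d
    if best_d > eps:
        left = rdp(points[:best_i + 1], eps)
        return left[:-1] + rdp(points[best_i:], eps)
    return [points[0], points[-1]]
```

A nearly collinear middle point is dropped, while a genuine corner survives, which is exactly the edge-count reduction the convex hull operation aims for.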
- An example of FOV analysis includes: for every two convex hulls C1 and C2, it is necessary to detect whether the convex hull C1 can be observed from the position of the vehicle, such as the center point A of the vehicle, under the occlusion of the convex hull C2. Specifically, it can include:
- if n is greater than or equal to a certain threshold fov_th, the convex hull C1 is considered invisible, and the convex hull C1 is deleted.
- the value of the threshold fov_th needs to be corrected according to the distance from the obstacle to the vehicle.
- An example of a correction is:
- fov_th = min(1, ceil(convex_point_num × (1 − distance/a)))
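The visibility test can be sketched by counting the vertices of hull C1 whose line of sight from the vehicle center A is cut by an edge of hull C2; that count would then be compared against fov_th. This is an illustrative geometric reading of the procedure, not the patent's exact computation:

```python
def _segs_cross(p, q, a, b):
    """Proper intersection test between segments pq and ab via orientations."""
    def orient(u, v, w):
        d = (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
        return (d > 0) - (d < 0)
    return (orient(p, q, a) != orient(p, q, b)
            and orient(a, b, p) != orient(a, b, q))

def occluded_vertices(center, hull_c1, hull_c2):
    """Count vertices of hull C1 whose line of sight from the vehicle
    centre is cut by an edge of hull C2."""
    n = 0
    edges = list(zip(hull_c2, hull_c2[1:] + hull_c2[:1]))
    for v in hull_c1:
        if any(_segs_cross(center, v, a, b) for a, b in edges):
            n += 1
    return n
```

With the vehicle at the origin and a blocking square in front, vertices behind the square are counted as occluded while a vertex off to the side is not.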
- the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- the present disclosure also provides a target detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the target detection methods provided in the present disclosure.
- Fig. 9 shows a block diagram of a target detection device according to an embodiment of the present disclosure.
- the device includes: an acquiring unit 51, configured to acquire point cloud information, where the point cloud information includes at least point cloud information corresponding to a target object and an object to be detected, the target object can move, and the object to be detected is a person or thing around the target object;
- the information processing unit 52 is configured to obtain grid information according to the point cloud information, where the grid information includes at least obstacle point information indicating the object to be detected;
- the detection unit 53 is configured to identify, according to the grid information, obstacles in the object to be detected that affect the movement of the target object.
- the acquiring unit is configured to: acquire a plurality of pieces of to-be-processed point cloud information scanned by at least two sensors; and perform stitching processing on the plurality of pieces of to-be-processed point cloud information to obtain the point cloud information.
- the point cloud information further includes a sensor identification (ring ID).
- the information processing unit is configured to: perform grid processing on the point cloud information to obtain a grid map, the grid map including a plurality of grid areas, the obstacle point information corresponding to each grid area being a first value; for each grid area, determine, according to the ring IDs included in the grid area, whether there is an obstacle point corresponding to the object to be detected in the grid area; and, if the obstacle point exists in the grid area, update the obstacle point information corresponding to the grid area in the grid information to a second value.
- the information processing unit is configured to determine that the obstacle point exists in the grid area when the ring IDs corresponding to at least two pixel points in the grid area are different.
- the point cloud information further includes height information
- the device further includes a category determining unit, configured to: determine the category of the obstacle point existing in the grid area according to the height information; and update the obstacle point information corresponding to the grid area in the grid information according to the category of the obstacle point.
- the category determining unit is configured to: obtain the ring IDs and height information respectively corresponding to at least two pixels in the grid area; take the pixels corresponding to the same ring ID among the at least two pixels as a set of data to obtain multiple sets of pixel data; determine, according to the height information, the minimum height value in each set of pixel data; perform classification statistics on the minimum height values of the multiple sets of pixel data to obtain one or more minimum height categories; and determine the categories of obstacle points existing in the grid area according to the number of height values included in each minimum height category and the minimum value thereof.
- the detection unit is configured to: perform connected area analysis according to the obstacle point information in the grid information to obtain a connected area; and identify the obstacles in the object to be detected according to the connected area.
- the device further includes a connected area adjustment unit, configured to: obtain a plurality of points to be processed on the first line segment of the connected area; select at least two reference points from the plurality of points to be processed; connect the at least two reference points to obtain a second line segment; and adjust the connected area according to the second line segment to obtain the first area.
- the first area may be smaller than the connected area.
- the device further includes an occlusion processing unit, configured to: extract the point cloud information corresponding to the target object from the point cloud information, and obtain the target position corresponding to the target object according to the coordinates of the pixel points in that point cloud information; obtain at least two obstacles identified based on the grid information; take the center point of the target position as a reference and obtain a fan-shaped area according to guide lines emitted at a preset angle; and, in the case that the fan-shaped area covers a first obstacle and a second obstacle and the second obstacle is blocked by the first obstacle, delete the obstacle point information of the second obstacle from the grid information.
- the device further includes a sending unit, configured to send a message that there is an obstacle on the navigation path to the target object, so that the target object performs obstacle avoidance processing and/or re-plans the navigation path in response to the message.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above-mentioned method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 10 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method to operate on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable and Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic Disk or Optical Disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- An input/output (I/O) interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of the components.
- for example, the components are the display and keypad of the electronic device 800. The sensor component 814 can also detect a position change of the electronic device 800 or one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
- a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- Fig. 11 is a block diagram showing an electronic device 900 according to an exemplary embodiment.
- the electronic device 900 may be provided as a server.
- the electronic device 900 includes a processing component 922, which further includes one or more processors, and a memory resource represented by a memory 932, for storing instructions that can be executed by the processing component 922, such as an application program.
- the application program stored in the memory 932 may include one or more modules each corresponding to a set of instructions.
- the processing component 922 is configured to execute instructions to perform the above-described methods.
- the electronic device 900 may also include a power component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958.
- the electronic device 900 can operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
- a non-volatile computer-readable storage medium is also provided, such as a memory 932 including computer program instructions, which can be executed by the processing component 922 of the electronic device 900 to complete the foregoing method.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may include, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing, for example.
- More specific examples of computer-readable storage media (a non-exhaustive list) include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, or a mechanical encoding device such as punched cards or raised structures in grooves with instructions stored thereon.
- the computer-readable storage medium used here is not to be interpreted as a transitory signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, thereby producing a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction that contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in a different order from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Abstract
Description
Claims (15)
- A target detection method, comprising: acquiring point cloud information, where the point cloud information includes at least point cloud information corresponding to a target object and an object to be detected, wherein the object to be detected is a person or thing around the target object; obtaining grid information according to the point cloud information, where the grid information includes at least obstacle point information indicating the object to be detected; and identifying, according to the grid information, an obstacle in the object to be detected that affects the movement of the target object.
- The method according to claim 1, wherein the acquiring of point cloud information comprises: acquiring a plurality of pieces of to-be-processed point cloud information scanned respectively by at least two sensors; and stitching the plurality of pieces of to-be-processed point cloud information to obtain the point cloud information.
- The method according to claim 1 or 2, wherein the point cloud information further comprises a sensor identifier, and the obtaining grid information according to the point cloud information comprises: performing gridding processing on the point cloud information to obtain a grid map, the grid map comprising a plurality of grid areas, the obstacle point information corresponding to each grid area being a first value; and, for each grid area: determining, according to the sensor identifiers corresponding to the pixel points included in the grid area, whether an obstacle point corresponding to the object to be detected exists in the grid area; and, in a case where the obstacle point exists in the grid area, updating the obstacle point information corresponding to the grid area in the grid information to a second value.
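The gridding step in claims 3 and 4 can be sketched as follows. This is a minimal illustration, not the application's implementation; the cell size, the flag values (first value 0, second value 1), and the `(x, y, sensor_id)` point format are all assumptions made for the example.

```python
# Hypothetical sketch of claims 3-4: grid the point cloud and flag a
# cell when points from at least two different sensors fall into it.
from collections import defaultdict

def build_grid(points, cell_size=0.1):
    """points: iterable of (x, y, sensor_id) tuples.

    Returns {cell: flag} where flag is the second value (1) for cells
    whose points carry differing sensor identifiers, else the first
    value (0), per the rule in claim 4.
    """
    sensors_per_cell = defaultdict(set)
    for x, y, sensor_id in points:
        cell = (int(x // cell_size), int(y // cell_size))
        sensors_per_cell[cell].add(sensor_id)

    # A cell becomes an obstacle-point cell only when at least two
    # distinct sensor identifiers contributed points to it.
    return {cell: (1 if len(ids) >= 2 else 0)
            for cell, ids in sensors_per_cell.items()}
```

For instance, two points from different sensors landing in the same cell flag that cell, while a lone point leaves its cell at the first value.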
- The method according to claim 3, wherein the determining, according to the sensor identifiers corresponding to the pixel points included in the grid area, whether an obstacle point corresponding to the object to be detected exists in the grid area comprises: in a case where the sensor identifiers corresponding to at least two pixel points in the grid area are different, determining that the obstacle point exists in the grid area.
- The method according to claim 3 or 4, wherein the point cloud information further comprises height information, and the updating, in a case where the obstacle point exists in the target grid area, of the obstacle point information corresponding to the grid area in the grid information to the second value further comprises: determining the category of the obstacle point existing in the grid area according to the height information of the pixel points in the grid area; and updating the obstacle point information corresponding to the grid area in the grid information according to the category of the obstacle point.
- The method according to claim 5, wherein the determining the category of the obstacle point existing in the grid area according to the height information of the pixel points in the grid area comprises: acquiring sensor identifiers and height information respectively corresponding to at least two pixel points in the grid area; taking the pixel points corresponding to the same sensor identifier among the at least two pixel points as one set of data, to obtain multiple sets of pixel point data; determining, according to the height information, the minimum height value in each set of pixel point data; classifying the minimum height values of the multiple sets of pixel point data to obtain one or more minimum height classes; and determining the category of the obstacle point existing in the grid area according to the number of height values included in each minimum height class and the minimum value thereof.
- The method according to claim 6, wherein the determining the category of the obstacle point existing in the grid area according to the number of height values included in each minimum height class and the minimum value thereof comprises: in a case where a target minimum height class exists among the one or more minimum height classes, determining that the obstacle point existing in the grid area corresponds to an obstacle, wherein the number of height values included in the target minimum height class is greater than or equal to a preset number threshold, and the minimum value among the height values included in the target minimum height class is less than or equal to a preset height threshold; and, in a case where the target minimum height class does not exist among the one or more minimum height classes, determining that the obstacle point existing in the grid area corresponds to a non-obstacle.
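The height-based classification of claims 6 and 7 can be illustrated in a few lines. The thresholds, the simple one-dimensional clustering tolerance, and the `(sensor_id, height)` pixel format below are assumptions for the sketch; the application does not specify them.

```python
# Hedged sketch of claims 6-7: group a cell's pixel heights by sensor,
# take per-sensor minima, cluster the minima into "minimum height
# classes", and call the cell an obstacle if some class is both
# populous enough and low enough. Thresholds are illustrative.

def classify_cell(pixels, num_threshold=2, height_threshold=0.3, tol=0.05):
    """pixels: list of (sensor_id, height) for one grid cell.

    Returns 'obstacle' or 'non-obstacle'.
    """
    # Claim 6: one minimum height per sensor identifier.
    min_by_sensor = {}
    for sensor_id, h in pixels:
        if sensor_id not in min_by_sensor or h < min_by_sensor[sensor_id]:
            min_by_sensor[sensor_id] = h

    # Cluster the per-sensor minima into classes of nearby values.
    classes = []
    for h in sorted(min_by_sensor.values()):
        if classes and h - classes[-1][-1] <= tol:
            classes[-1].append(h)
        else:
            classes.append([h])

    # Claim 7: a target class must hold at least num_threshold values
    # and its smallest value must not exceed height_threshold.
    for c in classes:
        if len(c) >= num_threshold and min(c) <= height_threshold:
            return 'obstacle'
    return 'non-obstacle'
```

With these assumed thresholds, a cluster of low per-sensor minima marks an obstacle, whereas minima that are all high (e.g. overhead structure) do not.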
- The method according to any one of claims 5 to 7, wherein the categories of the obstacle point include the obstacle point corresponding to an obstacle and the obstacle point corresponding to a non-obstacle, and the updating the obstacle point information corresponding to the grid area in the grid information according to the category of the obstacle point comprises: in a case where the category of the obstacle point indicates that the obstacle point corresponds to an obstacle, maintaining the obstacle point information corresponding to the grid area in the grid information as the second value; and, in a case where the category of the obstacle point indicates that the obstacle point corresponds to a non-obstacle, updating the obstacle point information corresponding to the grid area in the grid information to the first value.
- The method according to any one of claims 1 to 8, wherein the identifying, according to the grid information, an obstacle in the object to be detected that affects the movement of the target object comprises: performing connected-region analysis according to the obstacle point information in the grid information to obtain a connected region; and identifying the obstacle in the object to be detected according to the connected region.
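The connected-region analysis of claim 9 is a standard connected-component pass over the obstacle grid. A minimal flood-fill sketch, assuming 4-connectivity and the `{cell: flag}` grid shape used above (both assumptions, not details from the application):

```python
# Illustrative flood fill for claim 9: group adjacent obstacle cells
# (flag 1) into connected regions. 4-connectivity is an assumption.
from collections import deque

def connected_regions(grid):
    """grid: {(ix, iy): flag} with flag 1 marking obstacle cells.

    Returns a list of sets of cells, one set per connected region.
    """
    obstacle_cells = {c for c, flag in grid.items() if flag == 1}
    regions = []
    while obstacle_cells:
        seed = obstacle_cells.pop()
        region, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in obstacle_cells:
                    obstacle_cells.remove(nb)
                    region.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions
```

Each returned region can then be treated as one candidate obstacle, to be refined (e.g. by the boundary adjustment of claim 10).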
- The method according to claim 9, further comprising: acquiring a plurality of points to be processed on a first line segment of the connected region; selecting at least two reference points from the plurality of points to be processed; and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first region.
- The method according to any one of claims 1 to 10, further comprising: extracting the point cloud information corresponding to the target object from the point cloud information, and obtaining a target position corresponding to the target object according to the coordinates of the pixel points in the point cloud information corresponding to the target object; acquiring at least two obstacles identified based on the grid information; obtaining a fan-shaped area from guide lines emitted at a preset angle, with the center point of the target position as a reference; and, in a case where the fan-shaped area covers a first obstacle and a second obstacle and the second obstacle is occluded by the first obstacle, deleting the obstacle point information of the second obstacle from the grid information.
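The occlusion test behind claim 11 can be sketched with angular intervals: seen from the target's center point, an obstacle whose angular extent lies inside that of a nearer obstacle is treated as occluded and its cells can be dropped from the grid. This simplifies each obstacle to a single angular interval and ignores intervals that wrap around ±π, so it is only an illustration of the idea, not the claimed method.

```python
# Hypothetical occlusion check for claim 11: `second` is occluded by
# `first` if it is farther from the center and its angular interval
# is contained in that of `first`. Wrap-around at +/-pi is ignored.
import math

def angular_interval(center, cells):
    angles = [math.atan2(y - center[1], x - center[0]) for x, y in cells]
    return min(angles), max(angles)

def is_occluded(center, first, second):
    """first/second: lists of (x, y) obstacle cells."""
    def min_dist(cells):
        return min(math.hypot(x - center[0], y - center[1]) for x, y in cells)

    lo1, hi1 = angular_interval(center, first)
    lo2, hi2 = angular_interval(center, second)
    # Farther away AND angularly inside the nearer obstacle's extent.
    return min_dist(second) > min_dist(first) and lo1 <= lo2 and hi2 <= hi1
```

For example, an obstacle directly behind a wider, nearer one is flagged as occluded, while one off to the side is not.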
- A target detection apparatus, comprising: an acquiring unit, configured to acquire point cloud information, the point cloud information comprising at least point cloud information corresponding to a target object and an object to be detected, wherein the object to be detected is a person or thing around the target object; an information processing unit, configured to obtain grid information according to the point cloud information, the grid information comprising at least obstacle point information indicating the object to be detected; and a detection unit, configured to identify, according to the grid information, an obstacle in the object to be detected that affects the movement of the target object.
- An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method according to any one of claims 1 to 11.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 11.
- A computer program stored in a storage medium, wherein, when a processor executes the computer program, the processor is configured to execute the target detection method according to any one of claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217043313A KR20220016221A (en) | 2020-04-20 | 2021-04-15 | Target detection method and apparatus, electronic device, storage medium and program |
JP2021577017A JP2022539093A (en) | 2020-04-20 | 2021-04-15 | Target detection method and device, electronic device, storage medium, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010314166.6A CN111507973B (en) | 2020-04-20 | 2020-04-20 | Target detection method and device, electronic equipment and storage medium |
CN202010314166.6 | 2020-04-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021213241A1 true WO2021213241A1 (en) | 2021-10-28 |
Family
ID=71878738
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/087424 WO2021213241A1 (en) | 2020-04-20 | 2021-04-15 | Target detection method and apparatus, and electronic device, storage medium and program |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022539093A (en) |
KR (1) | KR20220016221A (en) |
CN (1) | CN111507973B (en) |
WO (1) | WO2021213241A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117091516A (en) * | 2022-05-12 | 2023-11-21 | 广州镭晨智能装备科技有限公司 | Method, system and storage medium for detecting thickness of circuit board protective layer |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111507973B (en) * | 2020-04-20 | 2024-04-12 | 上海商汤临港智能科技有限公司 | Target detection method and device, electronic equipment and storage medium |
CN112697188B (en) * | 2020-12-08 | 2022-12-23 | 北京百度网讯科技有限公司 | Detection system test method and device, computer equipment, medium and program product |
CN113901970B (en) * | 2021-12-08 | 2022-05-24 | 深圳市速腾聚创科技有限公司 | Obstacle detection method and apparatus, medium, and electronic device |
CN115330969A (en) * | 2022-10-12 | 2022-11-11 | 之江实验室 | Local static environment vectorization description method for ground unmanned vehicle |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779280A (en) * | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN105957145A (en) * | 2016-04-29 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Road barrier identification method and device |
CN106951847A (en) * | 2017-03-13 | 2017-07-14 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
US20190035148A1 (en) * | 2017-07-28 | 2019-01-31 | The Boeing Company | Resolution adaptive mesh that is generated using an intermediate implicit representation of a point cloud |
CN109840448A (en) * | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle |
CN111507973A (en) * | 2020-04-20 | 2020-08-07 | 上海商汤临港智能科技有限公司 | Target detection method and device, electronic equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11144747B2 (en) * | 2017-03-31 | 2021-10-12 | Pioneer Corporation | 3D data generating device, 3D data generating method, 3D data generating program, and computer-readable recording medium storing 3D data generating program |
CN109145677A (en) * | 2017-06-15 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
JP6969738B2 (en) * | 2017-07-10 | 2021-11-24 | 株式会社Zmp | Object detection device and method |
JP7056842B2 (en) * | 2018-03-23 | 2022-04-19 | 株式会社豊田中央研究所 | State estimator and program |
JP7128577B2 (en) * | 2018-03-30 | 2022-08-31 | セコム株式会社 | monitoring device |
JP2019207655A (en) * | 2018-05-30 | 2019-12-05 | 株式会社Ihi | Detection device and detection system |
JP7479799B2 (en) * | 2018-08-30 | 2024-05-09 | キヤノン株式会社 | Information processing device, information processing method, program, and system |
CN110147706B (en) * | 2018-10-24 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Obstacle recognition method and device, storage medium, and electronic device |
CN109635685B (en) * | 2018-11-29 | 2021-02-12 | 北京市商汤科技开发有限公司 | Target object 3D detection method, device, medium and equipment |
- 2020
- 2020-04-20 CN CN202010314166.6A patent/CN111507973B/en active Active
- 2021
- 2021-04-15 JP JP2021577017A patent/JP2022539093A/en active Pending
- 2021-04-15 WO PCT/CN2021/087424 patent/WO2021213241A1/en active Application Filing
- 2021-04-15 KR KR1020217043313A patent/KR20220016221A/en not_active Application Discontinuation
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779280A (en) * | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN105957145A (en) * | 2016-04-29 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Road barrier identification method and device |
CN106951847A (en) * | 2017-03-13 | 2017-07-14 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
US20190035148A1 (en) * | 2017-07-28 | 2019-01-31 | The Boeing Company | Resolution adaptive mesh that is generated using an intermediate implicit representation of a point cloud |
CN109840448A (en) * | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle |
CN111507973A (en) * | 2020-04-20 | 2020-08-07 | 上海商汤临港智能科技有限公司 | Target detection method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
SIHENG CHEN; BAOAN LIU; CHEN FENG; CARLOS VALLESPI-GONZALEZ; CARL WELLINGTON: "3D Point Cloud Processing and Learning for Autonomous Driving", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 1 March 2020 (2020-03-01), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081612080 * |
XIN, YU ET AL.: "Dynamic Obstacle Detection and Representation Approach for Unmanned Vehicles Based on Laser Sensor", ROBOT, vol. 36, no. 6, 30 November 2014 (2014-11-30), pages 654 - 661, XP055861209, ISSN: 1002-0446 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117091516A (en) * | 2022-05-12 | 2023-11-21 | 广州镭晨智能装备科技有限公司 | Method, system and storage medium for detecting thickness of circuit board protective layer |
Also Published As
Publication number | Publication date |
---|---|
JP2022539093A (en) | 2022-09-07 |
CN111507973A (en) | 2020-08-07 |
CN111507973B (en) | 2024-04-12 |
KR20220016221A (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021213241A1 (en) | Target detection method and apparatus, and electronic device, storage medium and program | |
US11468581B2 (en) | Distance measurement method, intelligent control method, electronic device, and storage medium | |
US11308809B2 (en) | Collision control method and apparatus, and storage medium | |
US20210009080A1 (en) | Vehicle door unlocking method, electronic device and storage medium | |
EP3252658B1 (en) | Information processing apparatus and information processing method | |
US11301726B2 (en) | Anchor determination method and apparatus, electronic device, and storage medium | |
CN111340766A (en) | Target object detection method, device, equipment and storage medium | |
KR20180068578A (en) | Electronic device and method for recognizing object by using a plurality of senses | |
CN106934347B (en) | Obstacle identification method and device, computer equipment and readable medium | |
KR102129698B1 (en) | Automatic fish counting system | |
KR20200081450A (en) | Biometric detection methods, devices and systems, electronic devices and storage media | |
WO2021103423A1 (en) | Method and apparatus for detecting pedestrian events, electronic device and storage medium | |
CN113064135B (en) | Method and device for detecting obstacle in 3D radar point cloud continuous frame data | |
KR20220062107A (en) | Light intensity control method, apparatus, electronic device and storage medium | |
CN109696173A (en) | A kind of car body air navigation aid and device | |
US20220035003A1 (en) | Method and apparatus for high-confidence people classification, change detection, and nuisance alarm rejection based on shape classifier using 3d point cloud data | |
CN116420058A (en) | Replacing autonomous vehicle data | |
KR20210148134A (en) | Object counting method, apparatus, electronic device, storage medium and program | |
KR20180125858A (en) | Electronic device and method for controlling operation of vehicle | |
CN114332821A (en) | Decision information acquisition method, device, terminal and storage medium | |
CN115641518A (en) | View sensing network model for unmanned aerial vehicle and target detection method | |
CN110390252B (en) | Obstacle detection method and device based on prior map information and storage medium | |
CN111860074B (en) | Target object detection method and device, and driving control method and device | |
KR102120812B1 (en) | Target recognition and classification system based on probability fusion of camera-radar and method thereof | |
CN113450459A (en) | Method and device for constructing three-dimensional model of target object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21792821 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021577017 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217043313 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21792821 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM1205 DATED 12.04.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21792821 Country of ref document: EP Kind code of ref document: A1 |