WO2023050679A1 - Obstacle detection method and apparatus, and computer device, storage medium, computer program and computer program product - Google Patents


Info

Publication number
WO2023050679A1
WO2023050679A1 · PCT/CN2022/075423 · CN2022075423W
Authority
WO
WIPO (PCT)
Prior art keywords
identified
point
area
historical
projection
Prior art date
Application number
PCT/CN2022/075423
Other languages
French (fr)
Chinese (zh)
Inventor
俞煌颖
傅东旭
王哲
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023050679A1




  • Embodiments of the present disclosure relate to, but are not limited to, the technical field of radar detection, and in particular to an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product.
  • When lidar is used to detect a driving area, detection is usually carried out by emitting laser light toward the driving area and receiving the laser light reflected by the ground and by obstacles. In the related art, the accuracy of detecting obstacles with laser in this way is low.
  • Embodiments of the present disclosure provide at least an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product.
  • An embodiment of the present disclosure provides an obstacle detection method, including: determining, based on current-frame radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene; determining, based on historical position information corresponding to historical candidate deletion objects in historical-frame radar scan data and the first position point, whether the object to be identified is a target object to be deleted; and, in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene.
  • The determining whether the object to be identified is the target object to be deleted, based on the historical position information corresponding to the historical candidate deletion objects in the historical-frame radar scan data and the first position point, includes: determining, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and, in response to the object to be identified being a current candidate deletion object, determining, based on the historical position information corresponding to the historical candidate deletion objects and the first position point, whether the object to be identified is the target object to be deleted.
  • In response to the object to be identified not being a current candidate deletion object, the object to be identified is determined as an obstacle in the target scene. In this way, when the object to be identified is judged not to be a current candidate deletion object, it can be directly determined as an obstacle, which is more efficient; and since the detection method provided by the embodiments of the present disclosure can more accurately determine whether the object to be identified is a current candidate deletion object, judging whether the object to be identified is an obstacle in the target scene is also more accurate.
  • The determining, based on the current-frame radar scan data obtained by scanning the target scene, the first position point corresponding to the object to be identified in the target scene includes: for each object to be identified, determining the point cloud points corresponding to the object to be identified from the current-frame radar scan data; determining, based on three-dimensional position information of those point cloud points in the target scene, contour information corresponding to the object to be identified; and determining, based on the contour information, the first position point corresponding to the object to be identified.
  • Since the first position point is obtained from the radar scan data, it not only retains the information contained in the original point cloud points but also represents more information about the object to be identified; therefore, using the first position point instead of the point cloud points determined by radar scanning requires less computation, consumes less computing power, and is more efficient.
  • The determining the contour information corresponding to the object to be identified based on the three-dimensional position information of its point cloud points in the target scene includes: projecting the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determining the contour information of the object to be identified based on two-dimensional position information of the first projection points in the preset plane.
  • Using the first projection points to determine the contour information of the object to be identified, compared with determining the contour information directly from the point cloud points, makes the calculation with two-dimensional position information simpler, and the determined contour information of the object to be identified is also more accurate.
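As an illustration of this projection-and-contour step (not part of the patent text), the sketch below flattens a toy 3D point cloud onto the ground plane z = 0 to obtain first projection points and derives a contour from them. The patent does not name a contour algorithm; a convex hull is assumed here, and the function names and sample cloud are invented for the example.

```python
# Project an object's 3D point cloud onto a preset plane (here z = 0)
# and derive contour information from the 2D first projection points.

def project_to_ground(points_3d):
    """Drop the z coordinate: each (x, y, z) becomes a first projection point (x, y)."""
    return [(x, y) for x, y, _ in points_3d]

def convex_hull(points):
    """Andrew's monotone-chain convex hull over 2D points (assumed contour)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull, no repeated endpoint

# Toy point cloud: a roughly square object with one interior point.
cloud = [(0, 0, 0.2), (1, 0, 0.3), (1, 1, 0.1), (0, 1, 0.4), (0.5, 0.5, 0.2)]
contour = convex_hull(project_to_ground(cloud))
```

The interior point (0.5, 0.5) is discarded by the hull, leaving only the four corner points as the contour.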
  • The determining the first position point corresponding to the object to be identified based on the contour information includes: using the contour information of the object to be identified to determine the projection area of the object to be identified in the preset plane; and determining, based on the area of the projection area, the first position point corresponding to the object to be identified.
  • The determining the first position point corresponding to the object to be identified includes: comparing the area of the projection area with a preset area threshold; in response to the area being greater than the area threshold, determining the minimum bounding box of the projection area based on the projection area; determining a plurality of candidate position points within a first area corresponding to the minimum bounding box, based on that first area and a preset first interval step; and determining the candidate position points located within the projection area as the first position points.
  • For an object to be identified whose projection area is larger, determining its first position points from the first area defined by the minimum bounding box of its projection area better retains the position points within the projection area, so using the first position points to determine whether the object to be identified is a deletable target object becomes more accurate.
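The large-object branch above can be sketched as follows, assuming (beyond what the patent specifies) an axis-aligned minimum bounding box, a ray-casting point-in-polygon test, and made-up polygon and step values; `first_position_points` and the other names are hypothetical.

```python
# Large-object branch: bound the projection polygon, lay a grid of
# candidate position points over the bounding box at the first interval
# step, and keep only candidates inside the projection area.

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test for a simple 2D polygon."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def first_position_points(polygon, step):
    """Grid the axis-aligned bounding box, keep candidates inside the polygon."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    candidates = []
    x = min_x
    while x <= max_x:
        y = min_y
        while y <= max_y:
            candidates.append((x, y))
            y += step
        x += step
    return [c for c in candidates if point_in_polygon(c, polygon)]

# A 4 x 2 rectangular projection area sampled at a 1.0 interval step.
pts = first_position_points([(0, 0), (4, 0), (4, 2), (0, 2)], step=1.0)
```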
  • The determining the first position point corresponding to the object to be identified based on the area of the projection area further includes: comparing the area of the projection area with the preset area threshold; in response to the area being less than or equal to the area threshold, determining a center point located in the projection area; determining, based on the center point and a preset radius length, a second area with the center point as the circle center and the preset radius length as the radius; and determining, based on the second area and a preset second interval step, the first position points within the second area.
  • Since an object to be identified whose projection area is smaller than the area threshold is usually small in size, determining first position points for it directly may yield only a few position points. By determining a plurality of position points related to the object to be identified within the second area, i.e., by increasing the number of position points, accuracy is also improved when using the first position points to determine whether the object to be identified is deletable.
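The small-object branch can be sketched in the same style, assuming a circular second area around the center point sampled at the second interval step; the radius and step values below are illustrative only.

```python
import math

# Small-object branch: take the center point of the projection area,
# form a circular second area of a preset radius around it, and sample
# first position points inside the circle at the second interval step.

def circle_position_points(center, radius, step):
    cx, cy = center
    points = []
    x = cx - radius
    while x <= cx + radius:
        y = cy - radius
        while y <= cy + radius:
            if math.hypot(x - cx, y - cy) <= radius:  # inside the second area
                points.append((x, y))
            y += step
        x += step
    return points

pts = circle_position_points(center=(0.0, 0.0), radius=1.0, step=1.0)
```

Even for a small object this yields several position points (the center plus the four axis-aligned boundary points here), rather than a single one.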
  • The judging whether the object to be identified is a current candidate deletion object based on its first position point includes: acquiring a current frame image obtained by capturing the target scene; projecting the first position point into the current frame image to obtain a second projection point; and determining, based on the position information of the second projection point in the current frame image and the positions of obstacles included in the current frame image, whether the object to be identified is a current candidate deletion object. In this way, using the second projection point of the first position point in the current frame image makes it relatively easy to first determine whether the object to be identified can serve as a current candidate deletion object; if it cannot, no further judgment is performed. This further improves detection efficiency.
  • The number of first position points is at least one. The projecting the first position point into the current frame image to obtain the second projection point includes: projecting the at least one first position point into the current frame image to obtain at least one second projection point. The determining, based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image, whether the object to be identified is a current candidate deletion object includes: for each second projection point, predicting an obstacle prediction result corresponding to the second projection point based on its position information in the current frame image and the positions of the obstacles included in the current frame image, where the obstacle prediction result indicates that an obstacle is present, or absent, at the position corresponding to the second projection point; and determining, based on the obstacle prediction results corresponding to the respective second projection points, whether the object to be identified is a current candidate deletion object.
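The projection of first position points into the current frame image can be sketched with a standard pinhole camera model. The patent does not specify a camera model or calibration; here the points are assumed to already be in the camera frame, and the intrinsics (fx, fy, cx, cy) are made-up values.

```python
# Project 3D first position points (assumed camera-frame coordinates)
# into the current frame image to obtain second projection points.

def project_to_image(points_cam, fx, fy, cx, cy):
    """Pinhole projection: (X, Y, Z) -> (u, v) = (fx*X/Z + cx, fy*Y/Z + cy)."""
    pixels = []
    for X, Y, Z in points_cam:
        if Z <= 0:  # behind the image plane, not visible in this frame
            continue
        pixels.append((fx * X / Z + cx, fy * Y / Z + cy))
    return pixels

second_projection = project_to_image(
    [(1.0, 0.5, 10.0), (0.0, 0.0, 5.0), (2.0, 1.0, -1.0)],
    fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

The point with negative depth is dropped; the remaining second projection points can then be compared with the obstacle positions detected in the image.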
  • The determining, based on the obstacle prediction results corresponding to the respective second projection points, whether the object to be identified is a current candidate deletion object includes: determining, based on those obstacle prediction results, a confidence that the object to be identified is an obstacle; and determining, based on the confidence and a preset confidence threshold, whether the object to be identified is a current candidate deletion object.
  • The number of second projection points is n, where n is an integer greater than 1. The determining, based on the obstacle prediction results corresponding to the respective second projection points, the confidence that the object to be identified is an obstacle includes: traversing the 2nd to the n-th second projection points; for the traversed i-th second projection point, determining a criterion function corresponding to the i-th second projection point based on its obstacle prediction result, where i is a positive integer greater than 1; determining a fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and obtaining, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
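The patent leaves the criterion function and the fusion rule abstract. One common concrete choice is sequential log-odds fusion, sketched below under that assumption: each second projection point's prediction maps to a criterion value, the values are fused from the 1st to the n-th point, and the final fused result is converted to a confidence in [0, 1]. The two criterion constants and the 0.5 threshold are illustrative.

```python
import math

LOG_ODDS_HIT = 0.85    # assumed criterion value for "obstacle present"
LOG_ODDS_MISS = -0.4   # assumed criterion value for "no obstacle"

def fuse_confidence(predictions):
    """predictions: list of bools, True = obstacle at that second projection point."""
    fused = LOG_ODDS_HIT if predictions[0] else LOG_ODDS_MISS
    for has_obstacle in predictions[1:]:       # traverse the 2nd..n-th points
        criterion = LOG_ODDS_HIT if has_obstacle else LOG_ODDS_MISS
        fused += criterion                     # fuse with the 1st..(i-1)-th result
    return 1.0 / (1.0 + math.exp(-fused))      # logistic map to a confidence

conf = fuse_confidence([True, True, False, True])
# Below the confidence threshold -> likely not an obstacle -> candidate deletion object.
is_candidate_for_deletion = conf < 0.5
```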
  • The determining whether the object to be identified is the target object to be deleted, based on the historical position information corresponding to the historical candidate deletion objects in the historical-frame radar scan data and the first position point, includes: determining a target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical-frame radar scan data and the first position point; and determining whether the object to be identified is a target object to be deleted based on a first projection area of the target candidate object in the preset plane and a second projection area of the object to be identified in the preset plane. In this way, it is relatively simple to use the first and second projection areas to judge whether the object to be identified is the target object to be deleted, so detection efficiency is also improved.
  • The historical-frame radar scan data includes at least one historical candidate object. The determining the target candidate object from the historical candidate objects based on their historical position information and the first position point includes: determining, based on the historical position information corresponding to each historical candidate object in the historical-frame radar scan data and the first position point, the distance between each historical candidate object and the object to be identified; and determining, based on these distances, the historical candidate object closest to the object to be identified as the target candidate object. In this way, using the distances between the historical candidate objects and the object to be identified, the target candidate object can be determined more quickly and accurately, and whether the object to be identified is the target object to be deleted can then be judged from the determined target candidate object.
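Picking the target candidate object reduces to a nearest-neighbour lookup. The sketch below assumes Euclidean distance between representative 2D points; the patent only requires "the distance" between each candidate and the object, so the metric and all names here are illustrative.

```python
import math

# Choose the target candidate object: the historical candidate closest
# to the object to be identified.

def nearest_candidate(history, query):
    """history: {candidate_id: (x, y)}; query: (x, y) of the object to identify."""
    return min(history, key=lambda cid: math.dist(history[cid], query))

history = {"cand_a": (0.0, 0.0), "cand_b": (5.0, 5.0), "cand_c": (1.0, 1.5)}
target = nearest_candidate(history, query=(1.2, 1.0))
```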
  • The determining, based on the first projection area of the target candidate object in the preset plane and the second projection area of the object to be identified in the preset plane, whether the object to be identified is the target object to be deleted includes: determining whether there is an overlapping area between the first projection area and the second projection area; and, in response to there being no overlapping area between the first projection area and the second projection area, determining that the object to be identified is the target object to be deleted.
  • The determining whether the object to be identified is the target object to be deleted further includes: in response to there being an overlapping area between the first projection area and the second projection area in the current-frame radar scan data, taking the object to be identified as a new historical candidate object; and if, in N consecutive frames of radar scan data after the current frame, there is an overlapping area between the first projection area and the second projection area, determining that the candidate deletion object is not the target object to be deleted and deleting the new historical candidate object, where N is a positive integer.
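The spatial overlap check can be sketched with axis-aligned bounding rectangles as a simplification of the patent's projection areas (which it does not constrain to any shape):

```python
# Overlap test between the first projection area (target candidate) and
# the second projection area (object to be identified), approximated as
# axis-aligned rectangles.

def boxes_overlap(box_a, box_b):
    """Each box is (min_x, min_y, max_x, max_y); True if they share area."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# No overlap -> the object to be identified is the target object to be deleted.
is_target_to_delete = not boxes_overlap((0, 0, 2, 2), (3, 3, 5, 5))
```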
  • The method further includes: for each historical candidate deletion object, detecting the time difference between the storage time of the historical candidate deletion object and the current time; and, when the time difference is greater than or equal to a preset time difference threshold, deleting the historical candidate deletion object.
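This time-domain pruning step can be sketched as a dictionary filter; the threshold value and names are illustrative only.

```python
# Prune stale historical candidate deletion objects: drop every candidate
# whose age (current time minus storage time) meets or exceeds the
# preset time-difference threshold.

def prune_candidates(candidates, now, max_age):
    """candidates: {candidate_id: storage_time}; keep entries younger than max_age."""
    return {cid: t for cid, t in candidates.items() if now - t < max_age}

stored = {"cand_a": 100.0, "cand_b": 104.5}
kept = prune_candidates(stored, now=105.0, max_age=3.0)
```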
  • An embodiment of the present disclosure also provides an obstacle detection device, including:
  • a first determining part, configured to determine a first position point corresponding to an object to be identified in a target scene based on current-frame radar scan data obtained by scanning the target scene;
  • a second determining part, configured to determine whether the object to be identified is a target object to be deleted based on historical position information corresponding to historical candidate deletion objects in historical-frame radar scan data and the first position point;
  • a third determining part, configured to determine the object to be identified as an obstacle in the target scene in response to the object to be identified not being the target object to be deleted.
  • In a case where the second determining part determines, based on the historical position information corresponding to the historical candidate deletion objects in the historical-frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted, it is configured to: determine, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and, in response to the object to be identified being a current candidate deletion object, determine, based on the historical position information corresponding to the historical candidate deletion objects in the historical-frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted.
  • The second determining part is further configured to: determine the object to be identified as an obstacle in the target scene in response to the object to be identified not being a current candidate deletion object.
  • In a case where the first determining part determines the first position point corresponding to the object to be identified in the target scene based on the current-frame radar scan data obtained by scanning the target scene, it is configured to: for each object to be identified, determine the point cloud points corresponding to the object to be identified from the current-frame radar scan data; determine contour information corresponding to the object to be identified based on the three-dimensional position information of those point cloud points in the target scene; and determine the first position point corresponding to the object to be identified based on the contour information.
  • In a case where the first determining part determines the contour information corresponding to the object to be identified based on the three-dimensional position information of its point cloud points in the target scene, it is configured to: project the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determine the contour information of the object to be identified based on the two-dimensional position information of the first projection points in the preset plane.
  • In a case where the first determining part determines the first position point corresponding to the object to be identified based on the contour information, it is configured to: use the contour information of the object to be identified to determine the projection area of the object to be identified in the preset plane; and determine, based on the area of the projection area, the first position point corresponding to the object to be identified.
  • In a case where the first determining part determines the first position point corresponding to the object to be identified based on the area of the projection area, it is configured to: compare the area of the projection area with a preset area threshold; in response to the area being greater than the area threshold, determine the minimum bounding box of the projection area based on the projection area; determine a plurality of candidate position points in a first area corresponding to the minimum bounding box, based on that first area and a preset first interval step; and determine the candidate position points located within the projection area as the first position points.
  • In a case where the first determining part determines the first position point corresponding to the object to be identified based on the area of the projection area, it is further configured to: compare the area of the projection area with the preset area threshold; in response to the area being less than or equal to the area threshold, determine a center point located in the projection area; determine, based on the center point and a preset radius length, a second area with the center point as the circle center and the preset radius length as the radius; and determine the first position points in the second area based on the second area and a preset second interval step.
  • In a case where the second determining part judges, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object, it is configured to: acquire a current frame image obtained by capturing the target scene; project the first position point into the current frame image to obtain a second projection point; and determine, based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image, whether the object to be identified is a current candidate deletion object.
  • The number of first position points is at least one. In a case where the second determining part projects the first position point into the current frame image to obtain the second projection point, it is configured to: project the at least one first position point into the current frame image to obtain at least one second projection point. In a case where the second determining part determines, based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image, whether the object to be identified is a current candidate deletion object, it is configured to: for each second projection point, predict the obstacle prediction result corresponding to the second projection point based on its position information in the current frame image and the positions of the obstacles included in the current frame image, where the obstacle prediction result indicates that an obstacle is present, or absent, at the position corresponding to the second projection point; and determine, based on the obstacle prediction results corresponding to the respective second projection points, whether the object to be identified is a current candidate deletion object.
  • In a case where the second determining part determines, based on the obstacle prediction results corresponding to the respective second projection points, whether the object to be identified is a current candidate deletion object, it is configured to: determine, based on those obstacle prediction results, the confidence that the object to be identified is an obstacle; and determine, based on the confidence and a preset confidence threshold, whether the object to be identified is a current candidate deletion object.
  • In a case where the second determining part determines, based on the obstacle prediction results corresponding to the respective second projection points, the confidence that the object to be identified is an obstacle, it is configured to: traverse the 2nd to the n-th second projection points; for the traversed i-th second projection point, determine the criterion function corresponding to the i-th second projection point based on its obstacle prediction result, where i is a positive integer greater than 1; determine the fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and determine, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
  • In a case where the second determining part determines, based on the historical position information corresponding to the historical candidate deletion objects in the historical-frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted, it is configured to: determine the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical-frame radar scan data and the first position point; and determine whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object in the preset plane and the second projection area of the object to be identified in the preset plane.
  • The historical-frame radar scan data includes at least one historical candidate object. In a case where the second determining part determines the target candidate object from the historical candidate objects based on their historical position information and the first position point, it is configured to: determine, based on the historical position information corresponding to each historical candidate object in the historical-frame radar scan data and the first position point, the distance between each historical candidate object and the object to be identified; and determine, based on these distances, the historical candidate object closest to the object to be identified as the target candidate object.
  • In a case where the second determining part determines, based on the first projection area and the second projection area, whether the object to be identified is the target object to be deleted, it is configured to: determine whether there is an overlapping area between the first projection area and the second projection area; and, in response to there being no overlapping area between the first projection area and the second projection area, determine that the object to be identified is the target object to be deleted.
  • In a case where the second determining part determines whether the object to be identified is the target object to be deleted, it is further configured to: in response to there being an overlapping area between the first projection area and the second projection area in the current-frame radar scan data, take the object to be identified as a new historical candidate object; and if, in N consecutive frames of radar scan data after the current frame, there is an overlapping area between the first projection area and the second projection area, determine that the candidate deletion object is not the target object to be deleted and delete the new historical candidate object, where N is a positive integer.
  • The detection device further includes: a processing part configured to, for each historical candidate deletion object, detect the time difference between the storage time of the historical candidate deletion object and the current time, and delete the historical candidate deletion object when the time difference is greater than or equal to a preset time difference threshold.
  • An embodiment of the present disclosure provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory. When the machine-readable instructions are executed by the processor, the steps of the above-mentioned first aspect, or of any possible implementation of the first aspect, are executed.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, some or all of the steps of the above method are executed.
  • An embodiment of the present disclosure provides a computer program including computer-readable code; when the computer-readable code runs in a computer device, a processor in the computer device executes some or all of the steps of the above method.
  • An embodiment of the present disclosure provides a computer program product; the computer program product includes a non-transitory computer-readable storage medium storing a computer program; when the computer program is read and executed by a computer, some or all of the steps of the above method are implemented.
  • Fig. 1 is a schematic flowchart of an obstacle detection method provided by an embodiment of the present disclosure.
  • Fig. 2a is a schematic diagram of a projection area provided by an embodiment of the present disclosure.
  • Fig. 2b is a schematic diagram of a candidate location point provided by an embodiment of the present disclosure.
  • Fig. 2c is a schematic diagram of a first location point provided by an embodiment of the present disclosure.
  • Fig. 3a is a schematic diagram of a projection area provided by an embodiment of the present disclosure.
  • Fig. 3b is a schematic diagram of a first location point provided by an embodiment of the present disclosure.
  • Fig. 4a is a schematic diagram of a projection area provided by an embodiment of the present disclosure.
  • Fig. 4b is a schematic diagram of a projection area provided by an embodiment of the present disclosure.
  • Fig. 5 is a schematic diagram of an implementation process for detecting an object to be identified provided by an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of an obstacle detection device provided by an embodiment of the present disclosure.
  • Fig. 7 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • The driving area of a vehicle can be scanned with a laser radar to determine the obstacles that may exist in the driving area.
  • The laser radar emits laser light toward the driving area, receives the reflected laser light, and determines whether an obstacle is present according to the received light; as a result, when an object reflects the laser light abnormally after being scanned, it is difficult to detect obstacles correctly.
  • For some non-obstacle objects, the laser emitted by the lidar cannot be reflected back normally due to mirror (specular) reflection, so these non-obstacle objects are judged as obstacles; as a result, the accuracy of detecting obstacles with laser is low.
  • In view of this, an embodiment of the present disclosure provides an obstacle detection method that combines the historical position information of historical candidate deletion objects in historical-frame radar data to determine whether the object to be identified in the current-frame radar scan data should undergo deletion processing. The deletion processing combines the space domain and the time domain to comprehensively judge whether the object to be identified is the target object to be deleted, and thus whether it is an obstacle in the target scene, which gives higher detection accuracy.
  • The execution subject of the obstacle detection method provided in the embodiments of the present disclosure is generally a computer device with certain computing power. The computer device includes, for example, a terminal device, a server, or other processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
  • the obstacle detection method may be implemented by a processor calling computer-readable instructions stored in a memory.
• Fig. 1 is a schematic flow diagram of an obstacle detection method provided by an embodiment of the present disclosure. Referring to Fig. 1, the method includes steps S101 to S103, wherein:
  • S101 Based on the current frame radar scan data obtained by scanning the target scene, determine a first position point corresponding to the object to be identified in the target scene;
• S102 Determine whether the object to be identified is a target object to be deleted based on the historical location information corresponding to the historical candidate deletion object in the historical frame radar scanning data and the first location point;
• S103 In response to the object to be identified not being the target object to be deleted, determine the object to be identified as an obstacle in the target scene.
• The embodiment of the present disclosure uses the current frame radar scanning data obtained from the target scene to determine the first position point corresponding to the object to be identified, and uses the historical position information corresponding to the historical candidate deletion object in the historical frame scanning data, together with the first position point, to determine whether the object to be identified is the target object to be deleted; when it is determined that the object to be recognized is not the target object to be deleted, the object to be recognized is determined as an obstacle in the target scene.
• This method combines the spatial domain and the temporal domain to comprehensively judge whether the object to be recognized is the target object to be deleted, and hence whether it is an obstacle in the target scene, yielding higher detection accuracy.
  • the obstacle detection method provided by the embodiment of the present disclosure may be applied in different scenarios.
  • the target scene may include, for example, the space where the self-driving car drives, and the target scene may include, for example, other driving vehicles, lane lines, signboards, green belts, and the like.
  • the target scene may include, for example, the space where the intelligent robot travels, and the target scene may include, for example, other robots, staff, shelves, containers, positioning signs, and the like.
  • a laser radar can be installed on the autonomous vehicle to scan and detect the area where the autonomous vehicle is driving.
  • the laser radar can scan the target scene at an interval of 0.2 seconds, and obtain radar scanning data.
  • the radar scanning data whose scanning time is closest to the current moment is taken as the current frame radar scanning data obtained by scanning the target scene.
• Multiple objects in the target scene can be determined by using the radar scan data of the current frame.
• The radar scanning data of the current frame may be processed by means of object detection. Since radar scanning data can reflect the size and shape of objects, some objects in the target scene, such as vehicles and signage, can be identified by object detection; because these are objects that need to be avoided, they can be determined directly as obstacles. At the same time, there may also be objects that cannot be identified by object detection; after object recognition, these unrecognizable objects are taken as the objects to be identified in the current frame of radar scanning data, that is, objects whose classification label is "unknown".
• The determination of the first position point for the object to be identified may be implemented by using the radar scan data of the current frame.
• The first position point is the criterion or basis for judging whether the object to be recognized can serve as a candidate deletion object.
• Candidate deletion objects are objects that may or may not actually be obstacles.
• A candidate deletion object can be further judged to determine whether it is indeed an obstacle; if the candidate deletion object is determined to be the target object to be deleted, this indicates that the corresponding object to be recognized is not an obstacle but a false detection.
• When using the current frame radar scan data to determine the first position point corresponding to the object to be identified in the target scene, the following method may be adopted: for each object to be identified, determine the point cloud points corresponding to the object to be identified in the current frame radar scanning data; determine the contour information corresponding to the object to be identified based on the three-dimensional position information of those point cloud points in the target scene; and use the contour information to determine the first position point corresponding to the object to be recognized.
• After the object to be identified is determined in the current frame of radar scanning data by means of object recognition, the point cloud points corresponding to the object to be identified can be determined in the current frame of radar scanning data.
• The point cloud points corresponding to the object to be recognized may be numerous, and representing the position of the object with them requires a large amount of three-dimensional position data. Approaches such as re-modeling the object to be identified from its point cloud points and then judging whether it is an obstacle, or using a neural network to encode the object's point cloud points and then judging their similarity with historically determined point cloud points, therefore involve a large amount of calculation, consume more computing power, and are less efficient.
• In the embodiment of the present disclosure, the contour information corresponding to the object to be recognized is instead determined based on the three-dimensional position information of its point cloud points in the target scene, and the contour information is then used to determine the first position points, so that the first position points, which involve a small amount of data, can be used to determine whether the object to be recognized is the target object to be deleted.
• When determining the contour information, the following method can be used: project the point cloud points corresponding to the object to be recognized onto a preset plane to obtain first projection points in the preset plane; then determine the contour information of the object to be recognized based on the two-dimensional position information of the first projection points in the preset plane.
  • the preset plane may be, for example, the plane on which the ground on which the self-driving car is driving is located.
• After projecting the point cloud points corresponding to the object to be recognized onto the preset plane, the first projection points can be obtained; at this point, the three-dimensional position information corresponding to the point cloud points is transformed into the two-dimensional position information corresponding to the first projection points, and the amount of data is reduced.
• The projection points at the edge positions among the first projection points can then be determined, and using these edge projection points and their corresponding two-dimensional position information, the contour information corresponding to the object to be recognized can be determined.
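• As an illustrative sketch of how the edge projection points could yield contour information (the patent does not prescribe a particular algorithm; the convex hull and all function names here are assumptions):

```python
def project_to_ground(points_3d):
    """Project 3D point cloud points onto the ground plane by dropping
    the height coordinate, turning 3D positions into 2D ones."""
    return [(x, y) for x, y, _z in points_3d]

def convex_hull(points):
    """Andrew's monotone chain: returns the contour (hull vertices) of
    the projected points in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

• The resulting hull vertices play the role of the edge projection points that carry the contour information.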
• When using the contour information to determine the first position point corresponding to the object to be recognized, the contour information can, for example, be used to determine the projection area of the object to be recognized in the preset plane; the first position point corresponding to the object to be recognized is then determined based on the area of the projection region.
• By using the contour information corresponding to the object to be recognized, multiple corner points of the contour surrounding the object can be determined; using the two-dimensional position information of these corner points, the area occupied by the projection region on the preset plane can be computed, so as to determine the first location point corresponding to the object to be recognized.
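• The area computation from the corner points can be sketched with the standard shoelace formula (an illustrative choice; the patent does not name a specific method):

```python
def polygon_area(corners):
    """Shoelace formula: area enclosed by the contour corner points,
    given in order (clockwise or counter-clockwise)."""
    n = len(corners)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0
```

• The result can then be compared against the preset area threshold (e.g. 0.3 square meters) to decide how the first position points are generated.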
• The area of the projection region can be compared with a preset area threshold, and the first position point corresponding to the object to be identified can be determined according to the comparison result.
• The preset area threshold may include, for example, 0.3 square meters, 0.5 square meters, etc.; it may be determined based on experience or actual conditions. Taking a preset area threshold of 0.3 square meters as an example, when it is determined that the area of the projection region is greater than the area threshold, the minimum bounding box of the projection region is determined based on the projection region; based on the first area occupied by the minimum bounding box and a preset first interval step, a plurality of candidate position points are determined in the first area; and the candidate position points located in the projection region are determined as the first position points.
  • FIG. 2a to FIG. 2c are schematic diagrams of determining a first location point provided by an embodiment of the present disclosure.
  • a projection area 21 may be determined for the object to be identified, and the projection area 21 is, for example, an irregular polygon.
  • a corresponding minimum bounding box 22 can be determined.
  • the minimum bounding box 22 includes a rectangle.
  • the first area 23 occupied by the minimum bounding box 22 can be determined.
  • the preset first interval step for example, a plurality of candidate position points can be determined within the first area 23 .
• The preset first interval step may include, for example, 0.2 meters.
• As shown in Fig. 2b, a plurality of candidate position points 24 can be determined in the first area 23 by using the preset first interval step; for any given candidate position point 24, the interval to the adjacent candidate position points 24 in each of the four directions (up, down, left, and right) is the preset first interval step.
• The candidate position points 24 corresponding to the object to be recognized can then be frame-selected as first position points using the projection area 21. As shown in Fig. 2c, using the plurality of candidate position points 24 shown in Fig. 2b and the projection area 21 shown in Fig. 2a, the first position points 25 can be determined.
• The candidate location points 24 corresponding to points a and b can also be taken as first location points 25.
• If the number of first position points that can be determined is already large enough, then in order to reduce the amount of data processing and improve the processing accuracy, the candidate location points 24 corresponding to points a and b can be screened.
• The candidate position point 24 corresponding to point a has a larger portion falling into the projection area 21 than the candidate position point 24 corresponding to point b; therefore, the candidate position point 24 corresponding to point a is retained among the first position points 25, while the candidate position point 24 corresponding to point b is screened out.
  • the above two implementation processes may be determined according to actual conditions, and are not limited here.
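• The bounding-box grid sampling described above can be sketched as follows (an axis-aligned bounding box and a ray-casting point-in-polygon test are used as simplifying assumptions; all names are illustrative):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt inside the (possibly non-convex) polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def first_position_points(projection_area, step=0.2):
    """Lay a grid with the preset first interval step over the bounding
    box of the projection area, and keep only the candidate position
    points falling inside the projection area."""
    xs = [p[0] for p in projection_area]
    ys = [p[1] for p in projection_area]
    selected = []
    x = min(xs)
    while x <= max(xs) + 1e-9:
        y = min(ys)
        while y <= max(ys) + 1e-9:
            if point_in_polygon((x, y), projection_area):
                selected.append((round(x, 3), round(y, 3)))
            y += step
        x += step
    return selected
```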
• Alternatively, the center point located in the projection area can be determined; based on the center point and a preset radius length, a second area is determined as the circle centered on the center point with the preset radius length as its radius; based on the second area and a preset second interval step, the first position points are determined in the second area.
  • FIG. 3 a to FIG. 3 b are schematic diagrams of determining a first location point provided by an embodiment of the present disclosure.
  • a projection area 31 (indicated by a dotted line framed area in the figure) can be determined for the object to be recognized, and the projection area 31 is, for example, an irregular polygon.
  • a center point 32 of the projection area can be determined.
  • the second area 33 (indicated by the area framed by a solid line in the figure) can be determined.
  • the circular second region 33 can be defined with the center point 32 as the center and a preset radius length as the radius.
  • the preset radius length may be, for example, 0.2 meters, 0.3 meters and so on.
  • the determined maximum length may be used as a preset radius length.
  • the maximum length r from the center point 32 to the boundary of the projection area 31 can be determined, and then the maximum length r is used as a preset radius length to determine the second area 33 .
  • a larger number of first position points can be determined, so that a sufficient number of first position points can be used to further determine whether the object to be recognized is a target object to be deleted.
• A plurality of position points can be determined within the second area 33 using the preset second interval step. In order to determine as many position points for the object to be recognized as possible, position points falling on the edge of the second area 33 are retained when determining the position points.
• The preset second interval step can, for example, be the same as the preset first interval step, e.g. 0.2 meters, or a different value determined according to the actual situation, such as 0.15 meters. As shown in Fig. 3b, multiple position points can be determined in the second area 33 by using the preset second interval step.
• The position points located in the second area 33 can all be determined as first position points 34; compared with keeping only the position points falling into the projection area 31, the number of first position points 34 obtained in this way is larger.
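• The circle-based sampling can be sketched as follows (taking the centroid of the corner points as the center point and the maximum corner distance as the radius are simplifying assumptions):

```python
import math

def circle_position_points(projection_area, step=0.2):
    """Determine first position points inside a circular second area:
    the center is the centroid of the projection area's corner points,
    and the radius is the maximum distance from the center to the
    boundary corners; points on the circle's edge are retained."""
    cx = sum(p[0] for p in projection_area) / len(projection_area)
    cy = sum(p[1] for p in projection_area) / len(projection_area)
    r = max(math.hypot(p[0] - cx, p[1] - cy) for p in projection_area)
    points = []
    x = cx - r
    while x <= cx + r + 1e-9:
        y = cy - r
        while y <= cy + r + 1e-9:
            if math.hypot(x - cx, y - cy) <= r + 1e-9:  # keep edge points
                points.append((round(x, 3), round(y, 3)))
            y += step
        x += step
    return points
```

• Because the circle extends to the boundary of the projection area, this variant yields more first position points than keeping only the points strictly inside the projection area.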
• It can then be determined, based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data, whether the object to be identified is the target object to be deleted.
• The following manner may be adopted: based on the first position points corresponding to the object to be identified, judge whether the object to be identified is a current candidate deletion object; in response to the object to be identified being a current candidate deletion object,
• determine whether the object to be identified is the target object to be deleted based on the historical location information corresponding to the historical candidate deletion objects in the historical frame radar scanning data and the first position points.
• When judging whether the object to be identified is a current candidate deletion object based on its first position points, the following method may be adopted: acquire the current frame image obtained by capturing the target scene; project the first position points into the current frame image to obtain second projection points; and, based on the position information of the second projection points in the current frame image and the positions of the obstacles included in the current frame image, determine whether the object to be identified is a current candidate deletion object.
  • the current frame image when acquiring the current frame image, for example, it may be acquired by using an image acquisition device mounted on the self-driving vehicle.
  • the image acquisition device may include a color camera, for example.
  • the first position point can be projected into the current frame image to obtain the second projected point.
  • the position information of the obtained second projection point may also be determined.
• Since the drivable area (freespace) of the self-driving vehicle can be determined using the current frame image, it can be further determined whether the object to be recognized is an object in the drivable area, and hence whether the object to be recognized should be deleted.
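• Projecting a first position point into the current frame image and forming a per-point obstacle prediction result can be sketched with a pinhole model (assuming, for illustration, that the point is already in camera coordinates and that image obstacles are given as bounding boxes; a real system would also apply the lidar-to-camera extrinsics):

```python
def project_to_image(pt_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in camera coordinates to pixel
    coordinates (u, v); returns None for points behind the camera."""
    X, Y, Z = pt_cam
    if Z <= 0:
        return None
    return (fx * X / Z + cx, fy * Y / Z + cy)

def obstacle_prediction(pixel, obstacle_boxes):
    """Per-point obstacle prediction result: True when the second
    projection point lies inside any obstacle bounding box
    (x_min, y_min, x_max, y_max) detected in the current frame image."""
    if pixel is None:
        return False
    u, v = pixel
    return any(x0 <= u <= x1 and y0 <= v <= y1
               for x0, y0, x1, y1 in obstacle_boxes)
```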
• The obstacle prediction result includes: there is an obstacle at the position corresponding to the second projection point, or there is no obstacle. Based on the obstacle prediction results corresponding to each of the second projection points, it is determined whether the object to be identified is a current candidate deletion object.
• In one possible implementation, when the second projection point is located at the position of an obstacle included in the current frame image, the corresponding obstacle prediction result is: there is an obstacle at the position corresponding to the second projection point. In another possible implementation, when the second projection point is not located at the position of an obstacle included in the current frame image, since it can be held with a higher degree of confidence that there is no obstacle at the corresponding position, the corresponding obstacle prediction result is: there is no obstacle at the position corresponding to the second projection point.
• In this way, the obstacle prediction result corresponding to each second projection point can be determined.
• When determining whether the object to be identified is a current candidate deletion object using the obstacle prediction results corresponding to the second projection points, for example, the confidence that the object to be identified is an obstacle can be determined based on the obstacle prediction results; based on that confidence and a preset confidence threshold, it is determined whether the object to be identified is a current candidate deletion object.
  • the determined second projection points may include, for example, n; wherein, n is a positive integer.
• The confidence can be determined in the following way: traverse the 2nd to n-th second projection points; for the i-th second projection point, determine, based on its obstacle prediction result, the criterion function corresponding to the i-th second projection point, where i is a positive integer greater than 1;
• based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points, determine the fusion criterion result corresponding to the i-th second projection point; based on the fusion criterion result corresponding to the last traversed second projection point, obtain the confidence that the object to be recognized is an obstacle.
• The obstacle prediction result corresponding to a second projection point may include, for example, that there is an obstacle at the position corresponding to the second projection point, or that there is no obstacle; when the obstacle prediction results differ, the determined criterion functions also differ.
• When the obstacle prediction result indicates that there is an obstacle at the position corresponding to the second projection point, the corresponding criterion function M_1(·) may, for example, be the probability (mass) function determining the confidence of the hypothesis that there is an obstacle at the second projection point; when the obstacle prediction result indicates that there is no obstacle at the position corresponding to the second projection point, the corresponding criterion function M_2(·) may be the likelihood function determining the confidence of the hypothesis that there is no obstacle at the second projection point.
• The fusion criterion results of the 1st to (i-1)-th second projection points can then be combined to determine the fusion criterion result corresponding to the i-th second projection point.
  • a1 represents the first second projection point
  • a2 represents the second second projection point.
• The criterion function corresponding to a2 can be represented by M_2(a_2). When traversing to a2, the preceding second projection points include only a1, so the fusion criterion result M^1(·) can be expressed directly as M_1(a_1); it can also be written in the form M_2(a_1).
• K represents a normalization coefficient, which satisfies the following formula (2):
• In this way, the fusion criterion result M^2_12(a_1, a_2) of a1 and a2, determined when traversing to the second second projection point a2, can be obtained.
• The superscript i of M indicates traversal up to the i-th second projection point.
• Next, the third second projection point a3 is traversed.
• The criterion function corresponding to a3 can be expressed, for example, as M_2(a_3).
• The fusion criterion result corresponding to the third second projection point, determined when traversing to a3, satisfies the following formula (3):
• where the normalization coefficient K satisfies the following formula (4):
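• The patent's formulas (1) to (4) are rendered as images in the original and are not reproduced here; the criterion functions and the normalization coefficient K follow the pattern of Dempster's rule of combination, of which the following is a generic two-hypothesis sketch (the frame {'O' obstacle, 'F' free, 'U' uncertain = {O, F}} is an illustrative assumption, not the patent's exact formulation):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {'O', 'F', 'U'} with Dempster's
    rule; K is the normalization coefficient removing conflicting mass."""
    conflict = m1['O'] * m2['F'] + m1['F'] * m2['O']
    K = 1.0 - conflict
    fused = {
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / K,
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / K,
    }
    fused['U'] = 1.0 - fused['O'] - fused['F']
    return fused

def fuse_projection_points(masses):
    """Sequentially fuse the per-point masses, traversing from the 2nd
    to the n-th second projection point as in the described procedure."""
    result = masses[0]
    for m in masses[1:]:
        result = dempster_combine(result, m)
    return result
```

• The fused mass assigned to 'O' after the last point then plays the role of the confidence that the object to be recognized is an obstacle.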
  • the confidence level that the object to be recognized is an obstacle may include, for example, a probability value, such as 0.61, 0.70, or 0.86.
  • the preset confidence threshold may also include a probability value, such as 0.75, for example.
• The preset confidence threshold may be determined based on experience or multiple experiments during implementation. In a case where the determined confidence that the object to be identified is an obstacle is numerically greater than the preset confidence threshold, the object to be identified can be determined as a current candidate deletion object.
• For example, with a preset confidence threshold of 0.75: when the confidence that the object to be identified is an obstacle is determined to be 0.6, the object to be identified is not a current candidate deletion object; when the confidence is determined to be 0.80, the object to be identified is a current candidate deletion object. After determining the current candidate deletion object, it can further be determined whether the object to be identified is the target object to be deleted, based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scanning data and the first position points.
• When the radar scans the target scene, historical frame radar scan data determined before the current frame radar scan data are also obtained.
• The corresponding historical candidate deletion objects can be determined by using the historical frame radar scanning data, and the historical position information of each historical candidate deletion object can be determined.
  • the distances between each historical candidate object and the object to be recognized can be determined by using the first location point.
  • the historical position information of the historical candidate object used when determining the distance may be, for example, the position coordinates representing the center point of the historical candidate object;
  • the first position point represents the position of the object to be recognized.
  • the distances corresponding to each historical candidate object and the object to be recognized can be determined.
  • the target candidate object with the closest distance to the object to be recognized can be determined from the historical candidate objects according to the distances between each historical candidate object and the object to be recognized.
  • the distance with the smallest value may be determined among the determined multiple distances, and the corresponding historical candidate object may be used as the target candidate object; or, the historical candidate object corresponding to multiple distances with smaller numerical values may be used as target candidates.
  • the manner of determining target candidate objects may be determined according to actual conditions.
  • the first projection area of the target candidate object on the preset plane and the second projection area of the object to be recognized on the preset plane may also be determined.
  • the preset plane may include, for example, the plane on which the road on which the self-driving vehicle drives is located.
  • FIG. 4a and FIG. 4b are respectively schematic diagrams of a projection area provided by an embodiment of the present disclosure.
• Based on whether there is an overlapping area between the first projection area and the second projection area, it can be judged whether the object to be recognized in the current frame is the target object to be deleted.
• If the object to be recognized is a new object, it can also be taken as a new historical candidate deletion object, and its position information can be saved.
• The saved location information of the object to be identified can then be used when performing obstacle detection processing on the next frame of radar scan data, in the same way that the position information of the above target candidate object in the historical frame scan data is used together with the object to be identified in the current frame scan data.
• The object to be identified is judged as the target object to be deleted only with respect to the current frame; if the object to be identified appears in the same area on the preset plane in consecutive multi-frame radar scan data, then after those multiple frames of radar scanning data it is determined that the object to be identified is not the target object to be deleted.
• In response to the overlap between the first projection area and the second projection area in the current frame radar scanning data, the object to be identified is taken as a new historical candidate deletion object until the consecutive N frames of radar scan data after the current frame have been processed, where N is a positive integer.
  • the value of N may include, for example, 5, 6, 8, 10 and so on.
  • N can be determined according to the shooting interval when the radar scanning data is acquired, or determined according to experiments and other methods. The following takes N as 10 as an example for illustration.
• If the first projection area and the second projection area have an overlapping area, it can accordingly be judged that the object to be identified is an object actually present in the target scene.
• For example, if a warning cone is placed on the road in the target scene, then after the first frame of radar scan data containing the warning cone is obtained, since the warning cone really exists, the following consecutive multi-frame radar scan data will also contain the warning cone; only after, for example, 15 frames of radar scanning data can the warning cone no longer be captured in the radar scanning data.
• That is, if the object to be identified is an object that actually exists in the target scene, there will be multiple frames of consecutive radar scan data that contain it.
• In that case the object to be recognized can be determined not to be the target object to be deleted; correspondingly there is no need to continue detecting it, so the object to be recognized can be directly removed from the new historical candidate deletion objects.
• Conversely, if for the object to be identified there is no overlapping area between the first projection area and the second projection area in the consecutive N frames of radar scanning data, for example, after the object to be identified appears it is only contained in no more than N consecutive frames of radar scan data, then the object to be identified can be taken as the target object to be deleted.
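• The per-frame decision over the N consecutive frames can be sketched as a small state machine (a simplification; the state names and handling are illustrative):

```python
class CandidateTracker:
    """Track a candidate deletion object: if the projection areas stop
    overlapping for N consecutive frames, it becomes the target object
    to be deleted; an overlap observed again marks it as a real obstacle."""

    def __init__(self, n=10):
        self.n = n
        self.frames_without_overlap = 0
        self.status = 'pending'

    def observe(self, overlap):
        """overlap: whether the first and second projection areas
        overlap in the newly processed frame."""
        if overlap:
            self.frames_without_overlap = 0
            self.status = 'obstacle'
        else:
            self.frames_without_overlap += 1
            if self.frames_without_overlap >= self.n:
                self.status = 'delete'
        return self.status
```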
• For example, when the object to be identified is stagnant water, due to the influence of specular reflection and the reflection angle, the multiple frames of radar scan data continuously acquired while the autonomous vehicle is driving may be inconsistent: as the vehicle approaches, the radar no longer detects the stagnant water through scanning, so at positions on the preset plane close to the object to be identified, no radar scan data of the object will be collected. In this case, it can be determined that the object to be identified is the target object to be deleted.
• As the vehicle drives, the obtained radar scan data also changes constantly. Therefore, if a historical candidate deletion object has not appeared again for a period of time, it can be considered that it will not appear again as driving continues.
• The time difference between the storage time of the historical candidate deletion object and the current time can therefore be checked; if the time difference is greater than or equal to a preset time difference threshold, the historical candidate deletion object is deleted.
• The preset time difference threshold may include, for example, 3 seconds, 4 seconds, etc., and may be determined according to actual conditions or experiments. In this way, the objects to be deleted are still gradually screened out using the historical candidate deletion objects, while the data to be stored is reduced to a certain extent and the matching computation between historical candidate deletion objects and current objects to be identified is also reduced, improving the efficiency of obstacle detection.
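• The time-difference pruning of historical candidate deletion objects can be sketched as follows (the dictionary layout is an assumption):

```python
def prune_stale_candidates(candidates, now, max_age_s=3.0):
    """candidates: {object_id: stored_at_timestamp_seconds}. Drop every
    historical candidate deletion object whose storage time differs from
    the current time by at least the preset time difference threshold."""
    return {obj_id: stored_at
            for obj_id, stored_at in candidates.items()
            if now - stored_at < max_age_s}
```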
  • the object to be recognized may be determined as an obstacle in the target scene.
• Having determined an obstacle, the self-driving vehicle can be controlled to take an evasive action. Since the obstacle detection method provided by the embodiment of the present disclosure has high accuracy when judging whether the object to be recognized is the target object to be deleted, applying it while the vehicle drives automatically allows the obstacles that actually need to be avoided in the driving area to be determined more accurately, completing effective obstacle avoidance. In this way, sharp steering or sudden braking caused by misrecognizing non-obstacles can be effectively reduced for passengers of the self-driving vehicle, and the riding experience is better.
  • FIG. 5 is a schematic diagram of an implementation process for detecting an object to be recognized provided by an embodiment of the present disclosure.
  • S501 Based on the current frame of radar scanning data obtained by scanning the target scene, determine a first position point corresponding to the object to be identified.
• S502 Based on the first location point, determine whether the object to be identified is a current candidate deletion object; if yes, go to step S503; if not, go to step S507.
  • S503 Based on the historical location information corresponding to the historical candidate objects in the historical frame radar scanning data and the first location point, determine the target candidate object from the historical candidate objects.
• S504 Determine whether there is an overlapping area between the first projection area of the target candidate object on the preset plane and the second projection area of the object to be recognized on the preset plane; if yes, go to S505; if not, go to S508.
  • S506 Determine whether there is an overlapping area between the first projection area and the second projection area in the consecutive N frames of radar scan data after the current frame of radar scan data; if so, go to S507; if not, go to S508 .
  • S507 Determine that the object to be recognized is an obstacle in the target scene.
  • S508 Determine that the object to be identified is the target object to be deleted.
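Under stated assumptions, the branching from S501 to S508 above can be condensed into a small decision function. This is a hypothetical sketch, not the patented implementation: the helper predicates (`is_current_candidate`, `overlaps_target_candidate`, `overlaps_in_next_n_frames`) stand in for the checks described in the embodiment, and step S505 (not reproduced in this excerpt) is skipped over, so the sketch jumps directly from the S504 overlap check to the S506 persistence check.

```python
# Hypothetical sketch of the S501-S508 decision flow. The three
# predicates are placeholder assumptions for the checks in the
# embodiment; S505 is omitted here because its content does not
# appear in this excerpt.

def classify(obj,
             is_current_candidate,       # S502 check
             overlaps_target_candidate,  # S504 check (current frame)
             overlaps_in_next_n_frames): # S506 check (N later frames)
    """Return 'obstacle' (S507) or 'delete' (S508) for an object."""
    if not is_current_candidate(obj):           # S502 no -> S507
        return "obstacle"
    if not overlaps_target_candidate(obj):      # S504 no -> S508
        return "delete"
    if overlaps_in_next_n_frames(obj):          # S506 yes -> S507
        return "obstacle"
    return "delete"                             # S506 no -> S508

# Example: an object that is never a deletion candidate is an obstacle.
print(classify(None, lambda o: False, lambda o: True, lambda o: True))
```

The function mirrors the figure: only an object whose projection keeps overlapping a target candidate object for N consecutive frames, or which was never a deletion candidate, ends up classified as an obstacle.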
  • The order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • An embodiment of the present disclosure also provides an obstacle detection device corresponding to the obstacle detection method. Since the device in the embodiments of the present disclosure corresponds to the above-mentioned obstacle detection method, the implementation of the device may refer to the implementation of the method.
  • Fig. 6 is a schematic diagram of an obstacle detection device provided by an embodiment of the present disclosure. As shown in Fig. 6 , the device includes: a first determination part 61, a second determination part 62, and a third determination part 63; wherein,
  • the first determining part 61 is configured to determine a first position point corresponding to the object to be identified in the target scene based on the current frame radar scanning data obtained by scanning the target scene;
  • the second determination part 62 is configured to determine whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scanning data and the first position point;
  • the third determination part 63 is configured to determine the object to be identified as an obstacle in the target scene in response to the object to be identified not being the target object to be deleted.
  • In some embodiments, when determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point, the second determination part 62 is configured to: determine, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and in response to the object to be identified being the current candidate deletion object, determine, based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted.
  • In some embodiments, the second determination part 62 is further configured to: determine the object to be identified as an obstacle in the target scene in response to the object to be identified not being the current candidate deletion object.
  • In some embodiments, when determining the first position point corresponding to the object to be identified in the target scene based on the current frame radar scan data obtained by scanning the target scene, the first determination part 61 is configured to: for each object to be identified, determine the point cloud points corresponding to the object to be identified from the current frame radar scan data; determine, based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, contour information corresponding to the object to be identified; and determine, based on the contour information, the first position point corresponding to the object to be identified.
  • In some embodiments, when determining the contour information corresponding to the object to be identified based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, the first determination part 61 is configured to: project the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determine the contour information of the object to be identified based on the two-dimensional position information of the first projection points in the preset plane.
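As a rough illustration of this projection-and-contour step, the sketch below drops the height coordinate to project 3D points onto a horizontal preset plane and takes a convex hull as the contour. The hull is only one possible choice of "contour information" and is an assumption for illustration; the disclosure does not fix a specific contour algorithm.

```python
# Illustrative only: project 3D lidar points to the z = 0 plane and
# compute a convex hull as one possible form of contour information.

def project_to_plane(points_3d):
    """Drop the height coordinate: (x, y, z) -> (x, y)."""
    return [(x, y) for x, y, _ in points_3d]

def convex_hull(pts):
    """Monotone-chain convex hull of 2D points."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

cloud = [(0, 0, 0.3), (2, 0, 0.5), (2, 2, 0.4), (0, 2, 0.2), (1, 1, 0.6)]
hull = convex_hull(project_to_plane(cloud))
print(hull)  # the interior point (1, 1) is not part of the contour
```

Working with the 2D projections, as here, keeps the geometry simpler than operating on the raw 3D point cloud, which matches the advantage the embodiment attributes to this step.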
  • In some embodiments, when determining the first position point corresponding to the object to be identified based on the contour information, the first determination part 61 is configured to: determine, using the contour information of the object to be identified, a projection area of the object to be identified on a preset plane; and determine, based on the area of the projection area, the first position point corresponding to the object to be identified.
  • In some embodiments, when determining the first position point corresponding to the object to be identified based on the area of the projection area, the first determination part 61 is configured to: compare the area of the projection area with a preset area threshold; in response to the area being greater than the area threshold, determine, based on the projection area, the minimum bounding box of the projection area; determine, based on the first area corresponding to the minimum bounding box and a preset first interval step, a plurality of candidate position points in the first area; and determine the candidate position points located in the projection area as the first position points.
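A minimal sketch of this large-object case follows. For simplicity it uses an axis-aligned bounding box rather than a true minimum bounding box, and a ray-casting point-in-polygon test; both simplifications, and the `step` value, are assumptions made for illustration.

```python
# Illustrative sketch: sample a grid over the bounding box of the
# projection area (a polygon) at a preset first interval step, and
# keep only the candidate points that fall inside the polygon.

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def first_position_points(polygon, step):
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    points = []
    y = min(ys)
    while y <= max(ys):
        x = min(xs)
        while x <= max(xs):
            if point_in_polygon((x, y), polygon):  # keep in-area points
                points.append((x, y))
            x += step
        y += step
    return points

# Right triangle: only grid points inside the triangle survive.
tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
pts = first_position_points(tri, 1.0)
print(pts)
```

Filtering the grid back against the projection area is what preserves only position points that actually fall on the object, which is the accuracy benefit the embodiment describes.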
  • In some embodiments, when determining the first position point corresponding to the object to be identified based on the area of the projection area, the first determination part 61 is further configured to: compare the area of the projection area with a preset area threshold; in response to the area being less than or equal to the area threshold, determine the center point of the projection area; determine, based on the center point and a preset radius length, a second area with the center point as the circle center and the preset radius length as the radius; and determine, based on the second area and a preset second interval step, the first position points in the second area.
  • In some embodiments, when judging, based on the first position point corresponding to the object to be identified, whether the object to be identified is the current candidate deletion object, the second determination part 62 is configured to: obtain a current frame image collected of the target scene; project the first position point into the current frame image to obtain a second projection point; and determine whether the object to be identified is the current candidate deletion object based on the position information of the second projection point in the current frame image and the position of the obstacle included in the current frame image.
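Projecting a 3D position point into the current frame image can be illustrated with a pinhole camera model. The intrinsic values `fx`, `fy`, `cx`, `cy` below are made-up assumptions, and the point is assumed to already be expressed in the camera frame; the embodiment only requires some lidar-to-image projection.

```python
# Illustrative sketch: pinhole projection of a first position point
# (3D, camera frame) to a second projection point (pixel coordinates).
# The intrinsics are hypothetical example values.

def project_to_image(point_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    x, y, z = point_3d
    if z <= 0:
        return None  # behind the camera: no valid projection
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

print(project_to_image((1.0, 0.5, 10.0)))  # -> (370.0, 265.0)
```

The resulting pixel coordinates can then be compared against the obstacle positions detected in the current frame image, as the paragraph above describes.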
  • In some embodiments, the number of first position points is at least one; when projecting the first position points into the current frame image to obtain second projection points, the second determination part 62 is configured to: project at least one first position point into the current frame image to obtain at least one second projection point. When determining whether the object to be identified is the current candidate deletion object based on the position information of the second projection points in the current frame image and the position of the obstacle included in the current frame image, the second determination part 62 is configured to: for each second projection point, predict, based on the position information of the second projection point in the current frame image and the position of the obstacle included in the current frame image, an obstacle prediction result corresponding to the second projection point, the obstacle prediction result including: an obstacle exists at the position corresponding to the second projection point, or no obstacle exists; and determine, based on the obstacle prediction results respectively corresponding to the second projection points, whether the object to be identified is the current candidate deletion object.
  • In some embodiments, when determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points, the second determination part 62 is configured to: determine, based on the obstacle prediction results corresponding to the second projection points, a confidence that the object to be identified is an obstacle; and determine, based on the confidence and a preset confidence threshold, whether the object to be identified is the current candidate deletion object.
  • In some embodiments, when determining, based on the obstacle prediction results corresponding to the second projection points, the confidence that the object to be identified is an obstacle, the second determination part 62 is configured to: traverse the 2nd to n-th second projection points; for the traversed i-th second projection point, determine the criterion function corresponding to the i-th second projection point based on the obstacle prediction result corresponding to the i-th second projection point, where i is a positive integer greater than 1; determine the fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and determine, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
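The excerpt does not spell out the criterion function or the fusion rule. As one hypothetical choice, a log-odds update fused sequentially over the second projection points, then mapped back to a probability, could look like the following; the per-point probabilities 0.7 and 0.3 are assumptions.

```python
import math

# Hypothetical illustration: fuse per-projection-point obstacle
# predictions (True = obstacle detected at that pixel) into one
# confidence. Log-odds accumulation is one possible criterion
# function, chosen here for illustration only.

def fuse_confidence(predictions, p_hit=0.7, p_miss=0.3):
    log_odds = 0.0
    for hit in predictions:  # sequential fusion, point by point
        p = p_hit if hit else p_miss
        log_odds += math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-log_odds))  # back to a probability

conf = fuse_confidence([True, True, False, True])
print(round(conf, 3))  # -> 0.845
```

Because each point contributes one additive term, this fuses the evidence gradually while each point's own prediction is retained, in the spirit of the paragraph above; the final value would then be compared with the preset confidence threshold.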
  • In some embodiments, when determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point, the second determination part 62 is configured to: determine a target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point; and determine whether the object to be identified is the target object to be deleted based on a first projection area of the target candidate object on the preset plane and a second projection area of the object to be identified on the preset plane.
  • In some embodiments, the historical frame radar scan data includes at least one historical candidate object; when determining the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point, the second determination part 62 is configured to: determine, based on the historical position information corresponding to each historical candidate object in the historical frame radar scan data and the first position point, the distance between each historical candidate object and the object to be identified; and determine, based on the distances between the historical candidate objects and the object to be identified, the historical candidate object closest to the object to be identified as the target candidate object.
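The nearest-candidate selection can be sketched simply. Representing each object by a single 2D position and using Euclidean distance are assumptions for illustration; the candidate names below are hypothetical.

```python
import math

# Illustrative sketch: pick the historical candidate object closest
# to the object to be identified, given one 2D position per object.

def nearest_candidate(historical, first_position):
    """historical: dict name -> (x, y); returns the closest name."""
    return min(
        historical,
        key=lambda name: math.dist(historical[name], first_position),
    )

history = {"cand_a": (10.0, 0.0), "cand_b": (2.0, 1.0), "cand_c": (5.0, 5.0)}
print(nearest_candidate(history, (1.0, 1.0)))  # -> cand_b
```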
  • In some embodiments, when determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane, the second determination part 62 is configured to: determine whether there is an overlapping area between the first projection area and the second projection area; and in response to there being no overlapping area between the first projection area and the second projection area, determine that the object to be identified is the target object to be deleted.
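The overlap check can be illustrated by approximating both projection areas with axis-aligned rectangles `(x_min, y_min, x_max, y_max)`. The rectangle approximation is an assumption; projection areas may in general be arbitrary polygons, for which a polygon-intersection test would be used instead.

```python
# Illustrative sketch of the overlap check between the first and
# second projection areas, with each area approximated by an
# axis-aligned rectangle (x_min, y_min, x_max, y_max).

def has_overlap(area_a, area_b):
    ax1, ay1, ax2, ay2 = area_a
    bx1, by1, bx2, by2 = area_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# No overlap means the object to be identified is the target object
# to be deleted, per the embodiment above.
print(has_overlap((0, 0, 2, 2), (1, 1, 3, 3)))  # True: areas intersect
print(has_overlap((0, 0, 2, 2), (3, 3, 4, 4)))  # False: disjoint
```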
  • In some embodiments, when determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane, the second determination part 62 is further configured to: in response to there being an overlapping area between the first projection area and the second projection area in the current frame radar scan data, take the object to be identified as a new historical candidate object; and if, in the consecutive N frames of radar scan data after the current frame of radar scan data, there is an overlapping area between the first projection area and the second projection area, determine that the candidate deletion object is not the target object to be deleted and delete the new historical candidate object, where N is a positive integer.
  • In some embodiments, the detection device further includes: a processing part 64 configured to, for each historical candidate deletion object, detect the time difference between the storage time of the historical candidate deletion object and the current time; and when the time difference is greater than or equal to a preset time difference threshold, delete the historical candidate deletion object.
  • In some embodiments, a "part" may be part of a circuit, part of a processor, part of a program or software, and the like; it may also be a unit, and may be modular or non-modular.
  • FIG. 7 is a schematic structural diagram of the computer device provided by the embodiment of the present disclosure. As shown in FIG. 7 , the device includes:
  • a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20. When the machine-readable instructions are executed by the processor 10, the processor 10 performs the following steps: determining, based on the current frame radar scan data obtained by scanning the target scene, the first position point corresponding to the object to be identified in the target scene; determining, based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted; and in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene.
  • The memory 20 includes a memory 210 and an external memory 220. The memory 210, also called internal memory, is used for temporarily storing operation data in the processor 10 and data exchanged with the external memory 220 such as a hard disk; the processor 10 exchanges data with the external memory 220 through the memory 210.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the obstacle detection method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides a computer program product; the computer program product carries a program code, and the instructions included in the program code can be used to execute the steps of the obstacle detection method described in the above method embodiments; for details, please refer to the above method embodiments.
  • the computer program product may be specifically realized by hardware, software or a combination thereof.
  • The computer program product may be embodied as a computer storage medium; in some embodiments, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • An embodiment of the present disclosure also provides a computer program product; the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and when the computer program is read and executed by a computer, part or all of the steps of the above method are implemented.
  • An embodiment of the present disclosure provides a computer program, including computer-readable codes; when the computer-readable codes run in a computer device, a processor in the computer device executes part or all of the steps of the above method.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • Embodiments of the present disclosure provide an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product, wherein the obstacle detection method includes: determining, based on the current frame radar scan data obtained by scanning the target scene, the first position point corresponding to the object to be identified in the target scene; determining, based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point, whether the object to be identified is the target object to be deleted; and in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene.
  • the obstacle detection method provided by the embodiments of the present disclosure has high accuracy when detecting obstacles.

Landscapes

  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

Provided in the embodiments of the present disclosure are an obstacle detection method and apparatus, and a computer device, a storage medium, a computer program and a computer program product. The obstacle detection method comprises: on the basis of the current frame of radar scanning data obtained by means of scanning a target scenario, determining a first position point which corresponds to an object to be recognized in the target scenario; on the basis of historical position information corresponding to a historical candidate deletion object in a historical frame of radar scanning data, and the first position point, determining whether the object to be recognized is a target object to be deleted; and in response to the fact that the object to be recognized is not the target object to be deleted, determining, as an obstacle in the target scenario, the object to be recognized.

Description

Obstacle detection method and apparatus, computer device, storage medium, computer program, and computer program product
Cross-Reference to Related Applications
This disclosure is based on, and claims priority to, Chinese patent application No. 202111165461.0, filed on September 30, 2021 and entitled "Obstacle detection method, apparatus, computer device, and storage medium", the entire content of which is incorporated into this disclosure by reference.
Technical Field
Embodiments of the present disclosure relate to, but are not limited to, the technical field of radar detection, and in particular to an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product.
Background
When a lidar is used to detect a driving area, detection is usually performed by emitting laser light toward the driving area and receiving the laser light reflected by the ground, obstacles, and the like. In the related art, the accuracy of detecting obstacles with laser light is low.
Summary
Embodiments of the present disclosure provide at least an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product.
An embodiment of the present disclosure provides an obstacle detection method, including: determining, based on a current frame of radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene; determining, based on historical position information corresponding to a historical candidate deletion object in a historical frame of radar scan data and the first position point, whether the object to be identified is a target object to be deleted; and in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene.
In this way, by combining the historical position information of the historical candidate deletion object in the historical frames of radar data, it is determined whether the object to be identified in the current frame of radar scan data should be deleted; the spatial domain and the temporal domain are thus combined to comprehensively judge whether the object to be identified is the target object to be deleted, and hence whether it is an obstacle in the target scene, which yields higher detection accuracy.
In some implementations, determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame of radar scan data and the first position point includes: judging, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and in response to the object to be identified being the current candidate deletion object, determining, based on the historical position information corresponding to the historical candidate deletion object in the historical frame of radar scan data and the first position point, whether the object to be identified is the target object to be deleted. In this way, the first position point can first be used to judge whether the object to be identified is a current candidate deletion object; when the object to be identified is determined to be the current candidate deletion object, a more accurate judgment of whether it is the target object to be deleted is then made based on the historical position information and the first position point.
In some implementations, in response to the object to be identified not being the current candidate deletion object, the object to be identified is determined as an obstacle in the target scene. In this way, when it is judged that the object to be identified is not the current candidate deletion object, it can be directly determined as an obstacle, which is more efficient; since the detection method provided by the embodiments of the present disclosure can determine relatively accurately whether the object to be identified is the current candidate deletion object, the judgment of whether the object to be identified is an obstacle in the target scene is also relatively accurate.
In some implementations, determining the first position point corresponding to the object to be identified in the target scene based on the current frame of radar scan data obtained by scanning the target scene includes: for each object to be identified, determining the point cloud points corresponding to the object to be identified from the current frame of radar scan data; determining, based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, the contour information corresponding to the object to be identified; and determining, based on the contour information, the first position point corresponding to the object to be identified. In this way, since the first position point is obtained from the radar scan data, it can represent more information about the object to be identified while retaining the information contained in the original point cloud points; therefore, using the first position point instead of the point cloud points determined by radar scanning requires less computation, consumes less computing power, and is more efficient.
In some implementations, determining the contour information corresponding to the object to be identified based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene includes: projecting the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determining the contour information of the object to be identified based on the two-dimensional position information of the first projection points in the preset plane. In this way, compared with determining the contour information directly from the point cloud points, determining it from the first projection points makes the computation on two-dimensional position information simpler and the resulting contour information of the object to be identified more accurate.
In some implementations, determining the first position point corresponding to the object to be identified based on the contour information includes: determining, using the contour information of the object to be identified, a projection area of the object to be identified on a preset plane; and determining, based on the area of the projection area, the first position point corresponding to the object to be identified.
In some implementations, determining the first position point corresponding to the object to be identified based on the area of the projection area includes: comparing the area of the projection area with a preset area threshold; in response to the area being greater than the area threshold, determining, based on the projection area, the minimum bounding box of the projection area; determining, based on the first area corresponding to the minimum bounding box and a preset first interval step, a plurality of candidate position points in the first area; and determining the candidate position points located in the projection area as the first position points. In this way, for an object to be identified whose projection area is greater than the area threshold, determining its first position points from the first area defined by the minimum bounding box of its projection area better preserves the position points that fall within the projection area, making the subsequent judgment, based on the first position points, of whether the object to be identified is a deletable target object more accurate.
In some implementations, determining the first position point corresponding to the object to be identified based on the area of the projection area further includes: comparing the area of the projection area with a preset area threshold; in response to the area being less than or equal to the area threshold, determining the center point of the projection area; determining, based on the center point and a preset radius length, a second area with the center point as the circle center and the preset radius length as the radius; and determining, based on the second area and a preset second interval step, the first position points in the second area. In this way, since an object to be identified whose projection area is smaller than the area threshold is usually small in size, directly determining first position points for it may yield only a few points. With this method, a plurality of related position points can be determined for the object; increasing the number of position points also improves the accuracy of the subsequent judgment, based on the first position points, of whether the object is a deletable target object.
In some implementations, judging whether the object to be identified is a current candidate deletion object based on the first position point corresponding to the object to be identified includes: obtaining a current frame image collected of the target scene; projecting the first position point into the current frame image to obtain a second projection point; and determining whether the object to be identified is the current candidate deletion object based on the position information of the second projection point in the current frame image and the position of the obstacle included in the current frame image. In this way, using the second projection point of the first position point in the current frame image, it is relatively easy to first determine whether the object to be identified can serve as a current candidate deletion object; if it cannot, no further processing or judgment is performed on it, which further improves detection efficiency.
In some embodiments, there is at least one first position point; the projecting the first position point into the current frame image to obtain a second projection point includes: projecting the at least one first position point into the current frame image to obtain at least one second projection point. The determining whether the object to be identified is the current candidate deletion object based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image includes: for each second projection point, predicting an obstacle prediction result corresponding to the second projection point based on the position information of the second projection point in the current frame image and the positions of the obstacles in the current frame image, the obstacle prediction result including: an obstacle exists at the position corresponding to the second projection point, or no obstacle exists there; and determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points. Since the position information of the second projection points in the current frame image, and the positions of the obstacles included in the current frame image, are relatively easy to obtain, and predicting the obstacle prediction results allows the data represented by each second projection point to be fused step by step while retaining the position information of the second projection points, the resulting obstacle prediction results are also more accurate.
In some embodiments, the determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points includes: determining, based on the obstacle prediction results respectively corresponding to the second projection points, a confidence that the object to be identified is an obstacle; and determining whether the object to be identified is the current candidate deletion object based on the confidence and a preset confidence threshold.
In some embodiments, there are n second projection points, n being an integer greater than 1; the determining, based on the obstacle prediction results respectively corresponding to the second projection points, the confidence that the object to be identified is an obstacle includes: traversing the 2nd to n-th second projection points; for the traversed i-th second projection point, determining, based on the obstacle prediction result of the i-th second projection point, a criterion function corresponding to the i-th second projection point, where i is a positive integer greater than 1; determining a fused criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fused criterion result of the 1st to (i-1)-th second projection points; and obtaining, based on the fused criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
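This embodiment does not fix the form of the criterion function. One common choice compatible with the sequential fusion described above is log-odds accumulation, sketched below with assumed per-observation probabilities `p_hit` and `p_miss`:

```python
import math

def fuse_predictions(predictions, p_hit=0.7, p_miss=0.3):
    """Sequentially fuse per-projection-point obstacle predictions.

    `predictions` is a list of booleans (True = obstacle predicted at
    that second projection point).  Each observation contributes a
    log-odds criterion term; the running sum plays the role of the
    fused criterion result, converted to a confidence at the end.
    """
    log_odds = 0.0
    for has_obstacle in predictions:
        p = p_hit if has_obstacle else p_miss
        log_odds += math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-log_odds))  # confidence in [0, 1]

conf = fuse_predictions([True, True, False, True])
not_an_obstacle = conf < 0.5  # assumed confidence threshold of 0.5
```

Because the fusion is a running update over the points, each new second projection point refines the result without discarding the earlier observations.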
In some embodiments, the determining whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data and the first position point includes: determining a target candidate object from historical candidate objects based on historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point; and determining whether the object to be identified is the target object to be deleted based on a first projection region of the target candidate object on a preset plane and a second projection region of the object to be identified on the preset plane. Judging whether the object to be identified is the target object to be deleted by means of the first and second projection regions is relatively simple, so the detection efficiency can also be improved.
In some embodiments, the historical frame radar scan data includes at least one historical candidate object; the determining the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point includes: determining a distance between each historical candidate object and the object to be identified based on the historical position information corresponding to that historical candidate object and the first position point; and determining, based on the distances, the historical candidate object closest to the object to be identified as the target candidate object. Using the distances between the historical candidate objects and the object to be identified, the target candidate object can be determined more quickly and accurately, and whether the object to be identified is the target object to be deleted can then be further judged according to the determined target candidate object.
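A sketch of selecting the nearest historical candidate object, assuming two-dimensional positions on the preset plane and measuring distance from the centroid of the first position points (the centroid choice is an assumption; the embodiment does not specify which first position point is used):

```python
import math

def nearest_candidate(first_points, candidates):
    """Pick the historical candidate closest to the object to be identified.

    `first_points` are the object's first position points (x, y);
    `candidates` maps a candidate id to its historical position (x, y).
    Distance is measured from the centroid of the first position points.
    """
    cx = sum(p[0] for p in first_points) / len(first_points)
    cy = sum(p[1] for p in first_points) / len(first_points)
    return min(candidates,
               key=lambda cid: math.hypot(candidates[cid][0] - cx,
                                          candidates[cid][1] - cy))

cands = {"a": (0.0, 0.0), "b": (5.0, 5.0)}
target = nearest_candidate([(4.0, 4.0), (6.0, 6.0)], cands)  # "b"
```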
In some embodiments, the determining whether the object to be identified is the target object to be deleted based on the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane includes: determining whether the first projection region and the second projection region have an overlapping region; and in response to the first projection region and the second projection region having no overlapping region, determining that the object to be identified is the target object to be deleted.
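A sketch of the overlap test, under the simplifying assumption that both projection regions are approximated by axis-aligned bounding boxes on the preset plane:

```python
def boxes_overlap(box_a, box_b):
    """Check whether two axis-aligned projection regions overlap.

    Each box is (x_min, y_min, x_max, y_max) on the preset plane.
    Touching edges do not count as overlap.
    """
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

first_region = (0.0, 0.0, 2.0, 2.0)    # target candidate object
second_region = (3.0, 3.0, 4.0, 4.0)   # object to be identified
is_target_to_delete = not boxes_overlap(first_region, second_region)
```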
In some embodiments, the determining whether the object to be identified is the target object to be deleted based on the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane further includes: in response to the first projection region and the second projection region in the current frame radar scan data having an overlapping region, taking the object to be identified as a new historical candidate object; and if the first projection region and the second projection region have an overlapping region in each of N consecutive frames of radar scan data following the current frame radar scan data, determining that the candidate deletion object is not the target object to be deleted and deleting the new historical candidate object, N being a positive integer. In this way, whether the object to be identified is the target object to be deleted can be determined within a limited number of frames of scan data, which is faster and more efficient, and is applicable to scenarios that require a prompt and accurate response to obstacles. Obstacle detection thus becomes more timely, further ensuring driving safety.
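The N-consecutive-frame confirmation described above can be sketched as a small per-candidate counter; the class name and return convention are illustrative assumptions:

```python
class PendingCandidate:
    """Track an object whose projection regions overlapped this frame.

    If the overlap persists for N consecutive subsequent frames, the
    object is confirmed as a real obstacle (not a target to delete) and
    the candidate record can be dropped.
    """
    def __init__(self, n):
        self.n = n
        self.consecutive_overlaps = 0

    def observe(self, overlapped):
        """Feed one frame's overlap flag; True once the object is confirmed."""
        if overlapped:
            self.consecutive_overlaps += 1
        else:
            self.consecutive_overlaps = 0  # streak broken, start over
        return self.consecutive_overlaps >= self.n

cand = PendingCandidate(n=3)
results = [cand.observe(o) for o in (True, True, False, True, True, True)]
# Confirmation fires only once three consecutive overlapping frames occur.
```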
In some embodiments, the method further includes: for each historical candidate deletion object, detecting a time difference between the storage time of the historical candidate deletion object and the current time; and in a case where the time difference is greater than or equal to a preset time difference threshold, deleting the historical candidate deletion object. This ensures that the objects to be identified that should be deleted are accurately screened out by using the historical candidate deletion objects, reduces the data that needs to be stored to a certain extent, reduces the matching computation between the historical candidate deletion objects and the current objects to be identified, and improves the obstacle detection efficiency.
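A sketch of the time-based pruning of historical candidate deletion objects, assuming timestamps in seconds:

```python
def prune_stale_candidates(candidates, now, max_age):
    """Drop historical candidate deletion objects that are too old.

    `candidates` maps an object id to the time it was stored; entries
    whose age (now - stored_time) meets or exceeds `max_age` are removed.
    """
    return {oid: t for oid, t in candidates.items() if now - t < max_age}

stored = {"obj1": 100.0, "obj2": 107.5}
alive = prune_stale_candidates(stored, now=110.0, max_age=5.0)  # {"obj2": 107.5}
```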
An embodiment of the present disclosure further provides an obstacle detection apparatus, including:
a first determination part, configured to determine, based on current frame radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene;
a second determination part, configured to determine, based on historical position information corresponding to historical candidate deletion objects in historical frame radar scan data and the first position point, whether the object to be identified is a target object to be deleted; and
a third determination part, configured to determine, in response to the object to be identified not being the target object to be deleted, the object to be identified as an obstacle in the target scene.
In some embodiments, in determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data and the first position point, the second determination part is configured to: judge, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and in response to the object to be identified being the current candidate deletion object, determine whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data and the first position point.
In some embodiments, the second determination part is further configured to: in response to the object to be identified not being the current candidate deletion object, determine the object to be identified as an obstacle in the target scene.
In some embodiments, in determining the first position point corresponding to the object to be identified in the target scene based on the current frame radar scan data obtained by scanning the target scene, the first determination part is configured to: for each object to be identified, determine point cloud points corresponding to the object to be identified from the current frame radar scan data; determine contour information corresponding to the object to be identified based on three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene; and determine the first position point corresponding to the object to be identified based on the contour information.
In some embodiments, in determining the contour information corresponding to the object to be identified based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, the first determination part is configured to: project the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determine the contour information of the object to be identified based on two-dimensional position information of the first projection points in the preset plane.
In some embodiments, in determining the first position point corresponding to the object to be identified based on the contour information, the first determination part is configured to: determine a projection region of the object to be identified in the preset plane by using the contour information of the object to be identified; and determine the first position point corresponding to the object to be identified based on the area of the projection region.
In some embodiments, in determining the first position point corresponding to the object to be identified based on the area of the projection region, the first determination part is configured to: compare the area of the projection region with a preset area threshold; in response to the area being greater than the area threshold, determine a minimum bounding box of the projection region based on the projection region; determine a plurality of candidate position points within a first region corresponding to the minimum bounding box based on the first region and a preset first interval step; and determine the candidate position points located within the projection region as the first position points.
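As an illustrative, non-limiting sketch of the grid sampling within the minimum bounding box described above (the polygon representation of the projection region and the ray-casting containment test are assumptions):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test for point membership in a simple polygon."""
    x, y = pt
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def sample_grid_points(polygon, step):
    """Sample first position points inside a projection region.

    `polygon` is the projection region's outline as (x, y) vertices.
    A regular grid of pitch `step` covers the region's minimum
    bounding box; only grid nodes inside the polygon are kept.
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    points = []
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if point_in_polygon((x, y), polygon):
                points.append((x, y))
            y += step
        x += step
    return points

# A 2 m x 1 m rectangular projection region sampled at an assumed 0.5 m pitch.
grid_points = sample_grid_points([(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)], 0.5)
```

Candidate grid nodes outside the projection region are discarded, matching the filtering step of this embodiment.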
In some embodiments, in determining the first position point corresponding to the object to be identified based on the area of the projection region, the first determination part is configured to: compare the area of the projection region with the preset area threshold; in response to the area being less than or equal to the area threshold, determine a center point located in the projection region; determine, based on the center point and a preset radius length, a second region that takes the center point as its center and the preset radius length as its radius; and determine the first position point within the second region based on the second region and a preset second interval step.
In some embodiments, in judging whether the object to be identified is a current candidate deletion object based on the first position point corresponding to the object to be identified, the second determination part is configured to: acquire a current frame image obtained by capturing the target scene; project the first position point into the current frame image to obtain a second projection point; and determine whether the object to be identified is the current candidate deletion object based on position information of the second projection point in the current frame image and positions, in the current frame image, of obstacles included in the current frame image.
In some embodiments, there is at least one first position point; in projecting the first position point into the current frame image to obtain the second projection point, the second determination part is configured to: project the at least one first position point into the current frame image to obtain at least one second projection point. In determining whether the object to be identified is the current candidate deletion object based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image, the second determination part is configured to: for each second projection point, predict an obstacle prediction result corresponding to the second projection point based on the position information of the second projection point in the current frame image and the positions of the obstacles in the current frame image, the obstacle prediction result including: an obstacle exists at the position corresponding to the second projection point, or no obstacle exists there; and determine whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points.
In some embodiments, in determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points, the second determination part is configured to: determine a confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points; and determine whether the object to be identified is the current candidate deletion object based on the confidence and a preset confidence threshold.
In some embodiments, there are n second projection points, n being an integer greater than 1; in determining the confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points, the second determination part is configured to: traverse the 2nd to n-th second projection points; for the traversed i-th second projection point, determine, based on the obstacle prediction result of the i-th second projection point, a criterion function corresponding to the i-th second projection point, where i is a positive integer greater than 1; determine a fused criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fused criterion result of the 1st to (i-1)-th second projection points; and obtain, based on the fused criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
In some embodiments, in determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data and the first position point, the second determination part is configured to: determine a target candidate object from historical candidate objects based on historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point; and determine whether the object to be identified is the target object to be deleted based on a first projection region of the target candidate object on a preset plane and a second projection region of the object to be identified on the preset plane.
In some embodiments, the historical frame radar scan data includes at least one historical candidate object; in determining the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point, the second determination part is configured to: determine a distance between each historical candidate object and the object to be identified based on the historical position information corresponding to that historical candidate object and the first position point; and determine, based on the distances, the historical candidate object closest to the object to be identified as the target candidate object.
In some embodiments, in determining whether the object to be identified is the target object to be deleted based on the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane, the second determination part is configured to: determine whether the first projection region and the second projection region have an overlapping region; and in response to the first projection region and the second projection region having no overlapping region, determine that the object to be identified is the target object to be deleted.
In some embodiments, in determining whether the object to be identified is the target object to be deleted based on the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane, the second determination part is further configured to: in response to the first projection region and the second projection region in the current frame radar scan data having an overlapping region, take the object to be identified as a new historical candidate object; and if the first projection region and the second projection region have an overlapping region in each of N consecutive frames of radar scan data following the current frame radar scan data, determine that the candidate deletion object is not the target object to be deleted and delete the new historical candidate object, N being a positive integer.
In some embodiments, the detection apparatus further includes: a processing part, configured to, for each historical candidate deletion object, detect a time difference between the storage time of the historical candidate deletion object and the current time, and in a case where the time difference is greater than or equal to a preset time difference threshold, delete the historical candidate deletion object.
An embodiment of the present disclosure provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps of the above first aspect, or of any possible implementation of the first aspect, are performed.
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, some or all of the steps of the above method are performed.
An embodiment of the present disclosure provides a computer program, including computer-readable code; when the computer-readable code runs in a computer device, a processor in the computer device performs some or all of the steps of the above method.
An embodiment of the present disclosure provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program; when the computer program is read and executed by a computer, some or all of the steps of the above method are implemented.
For descriptions of the effects of the above obstacle detection apparatus, computer device, computer-readable storage medium, computer program, and computer program product, reference is made to the description of the above obstacle detection method.
To make the above features and advantages of the embodiments of the present disclosure more comprehensible, exemplary embodiments are described in detail below with reference to the accompanying drawings.
Description of Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required in the embodiments are briefly introduced below. The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an obstacle detection method provided by an embodiment of the present disclosure;
FIG. 2a is a schematic diagram of a projection region provided by an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of candidate position points provided by an embodiment of the present disclosure;
FIG. 2c is a schematic diagram of first position points provided by an embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a projection region provided by an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of first position points provided by an embodiment of the present disclosure;
FIG. 4a is a schematic diagram of a projection region provided by an embodiment of the present disclosure;
FIG. 4b is a schematic diagram of a projection region provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of detecting an object to be identified provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an obstacle detection apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the claimed scope of the present disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Research has found that a lidar can scan the driving area of a vehicle to determine obstacles that may exist in that area. The lidar emits laser light toward the driving area, receives the reflected laser light, and determines from the received light whether an obstacle is present; consequently, when the light reflected from a scanned object is abnormal, obstacles cannot be detected normally. For example, when there is standing water on the road surface or a lane line whose paint has not yet dried, the laser emitted by the lidar undergoes specular reflection and is not reflected back normally, so such non-obstacle objects are judged to be obstacles, making laser-based obstacle detection less accurate.
Based on the above research, embodiments of the present disclosure provide an obstacle detection method that determines, by combining the historical position information of historical candidate deletion objects in historical frames of radar data, whether an object to be identified in the current frame of radar scan data should be subjected to deletion processing. The spatial domain and the temporal domain are thereby combined to comprehensively judge whether the object to be identified is a target object to be deleted, and hence whether it is an obstacle in the target scene, achieving higher detection accuracy.
The defects in the above solutions are all results obtained by the inventors after practice and careful research. Therefore, the process of discovering the above problems, as well as the technical solutions proposed below in the embodiments of the present disclosure to address them, should be regarded as contributions made by the inventors to the present disclosure in the course of this disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further defined or explained in subsequent figures.
To facilitate understanding of this embodiment, an obstacle detection method disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the obstacle detection method provided in the embodiments of the present disclosure is generally a computer device with certain computing power. The computer device includes, for example, a terminal device, a server, or another processing device; the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the obstacle detection method may be implemented by a processor calling computer-readable instructions stored in a memory.
The obstacle detection method provided by the embodiments of the present disclosure is described below.
Fig. 1 is a schematic flowchart of an obstacle detection method provided by an embodiment of the present disclosure. As shown in Fig. 1, the method includes steps S101 to S103, wherein:
S101: Based on the current frame of radar scan data obtained by scanning a target scene, determine first position points corresponding to an object to be identified in the target scene;
S102: Based on historical position information corresponding to historical candidate deletion objects in historical frames of radar scan data, as well as the first position points, determine whether the object to be identified is a target object to be deleted;
S103: In response to the object to be identified not being the target object to be deleted, determine the object to be identified as an obstacle in the target scene.
In the embodiments of the present disclosure, the current frame of radar scan data obtained for the target scene is used to determine the first position points corresponding to the object to be identified, and these points, together with the historical position information corresponding to historical candidate deletion objects in historical frames of scan data, are used to determine whether the object to be identified is a target object to be deleted. When it is determined that the object to be identified is not a target object to be deleted, the object to be identified is determined to be an obstacle in the target scene. This method combines the spatial domain and the temporal domain to comprehensively judge whether the object to be identified is a target object to be deleted, and hence whether it is an obstacle in the target scene, achieving higher detection accuracy.
S101 to S103 above are described in detail below.
Regarding S101 above, the obstacle detection method provided by the embodiments of the present disclosure can be applied in different scenarios. Exemplarily, in an autonomous driving scenario, the target scene may include the space in which a self-driving car travels and may contain, for example, other moving vehicles, lane lines, signboards, green belts, and the like. In an intelligent warehousing scenario, the target scene may include the space in which intelligent robots travel and may contain, for example, other robots, staff, shelves, containers, positioning markers, and the like.
The obstacle detection method provided by the embodiments of the present disclosure is described below, taking the autonomous driving scenario as an example.
In an autonomous driving scenario, a lidar may, for example, be mounted on the self-driving car to scan and detect the area in which the car travels. While the self-driving car is moving, the lidar may scan the target scene at intervals of, for example, 0.2 seconds, producing radar scan data. Here, the radar scan data whose scan time is closest to the present moment is taken as the current frame of radar scan data obtained by scanning the target scene.
After the current frame of radar scan data is obtained, it can be used to determine multiple objects in the target scene. Exemplarily, the current frame of radar scan data may be processed by means of object detection. Since radar scan data can reflect the size and shape of objects, object detection can recognize some of the objects in the target scene, such as vehicles and signboards; because these are objects that clearly need to be avoided, they can be directly determined to be obstacles. At the same time, there may also exist objects that cannot be recognized by object detection. After such objects are detected, they are taken as the objects to be identified in the current frame of radar scan data, that is, the objects whose classification label is "unknown object".
Whether the object to be identified is a target object to be deleted can be determined, for example, using the first position points determined for the object to be identified from the current frame of radar scan data.
Here, a first position point serves as a criterion, or basis, for judging whether the object to be identified can be taken as a candidate deletion object, where a candidate deletion object is an object that may belong to the obstacles. The subsequent process of determining the target object further judges the candidate deletion object to determine whether it is indeed an obstacle; if the candidate deletion object is the target object, this indicates that the corresponding object to be identified is not an obstacle but a false detection.
In some implementations, the following approach may be adopted when using the current frame of radar scan data to determine the first position points corresponding to the objects to be identified in the target scene: for each object to be identified, determine the point cloud points corresponding to the object to be identified from the current frame of radar scan data; determine contour information corresponding to the object to be identified based on the three-dimensional position information of those point cloud points in the target scene; and determine the first position points corresponding to the object to be identified based on the contour information.
Taking any one of the determined objects to be identified as an example, once the object to be identified has been determined in the current frame of radar scan data by means of object recognition, the current frame of radar scan data can be used to determine the point cloud points corresponding to that object.
The number of point cloud points corresponding to the object to be identified may be large, and the three-dimensional position information these points use to represent the object's position involves a large amount of data. Consequently, approaches such as re-modeling the object to be identified from its point cloud points and then judging whether it is an obstacle, or using a neural network to re-encode the point cloud points corresponding to the object and then judging their similarity to the point cloud points of specific objects determined historically, are computationally expensive, consume considerable computing power, and are inefficient. Therefore, in some implementations, after the point cloud points corresponding to the object to be identified are determined, the contour information corresponding to the object can be determined based on the three-dimensional position information of those points in the target scene, and the contour information can then be used to determine the first position points corresponding to the object, so that the first position points, which involve a smaller amount of data, are used to determine whether the object to be identified is a target object to be deleted.
When using the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene to determine the contour information corresponding to the object, the following approach may be adopted, for example: project the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points, and determine the contour information of the object to be identified based on the two-dimensional position information of the first projection points in the preset plane. The preset plane may be, for example, the plane of the ground on which the self-driving car is traveling. After the point cloud points corresponding to the object to be identified are projected onto the preset plane, the first projection points are obtained; at this point, the three-dimensional position information corresponding to the point cloud points has been converted into two-dimensional position information corresponding to the first projection points, reducing the amount of data. Using the two-dimensional position information of the first projection points in the preset plane, the projection points located at edge positions among the first projection points can be determined, and these edge projection points, together with their respective two-dimensional position information, can be used to determine the contour information corresponding to the object to be identified.
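The projection and edge-point step described above can be sketched as a convex-hull computation over the projected points. The following is a minimal illustration, assuming the preset plane is z = 0 so that projection simply drops the z coordinate; the function names are hypothetical, and the disclosure does not prescribe a particular hull algorithm.

```python
def project_to_plane(points_3d):
    """Project 3D point cloud points onto the preset plane (assumed z = 0)
    by dropping the z coordinate, yielding the first projection points."""
    return [(x, y) for x, y, _z in points_3d]

def convex_hull(points):
    """Andrew's monotone chain: return the hull vertices (the projection
    points at edge positions) in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Toy point cloud for one object to be identified (x, y, z).
cloud = [(0, 0, 0.3), (1, 0, 0.2), (1, 1, 0.4), (0, 1, 0.1), (0.5, 0.5, 0.2)]
contour = convex_hull(project_to_plane(cloud))  # interior point is dropped
```

The hull vertices then play the role of the contour corner points used in the following paragraphs.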
In some implementations, when using the contour information to determine the first position points corresponding to the object to be identified, the contour information may, for example, be used to determine the projection region of the object to be identified in the preset plane, and the first position points corresponding to the object may then be determined based on the area of that projection region.
Using the contour information corresponding to the object to be identified, multiple corner points of the contour enclosing the object can be determined; using the two-dimensional position information of these corner points, the area occupied by the projection region of the object in the preset plane can be determined, and from it the first position points corresponding to the object. In implementation, when determining the first position points based on the area of the projection region, the area may, for example, be compared with a preset area threshold, and the first position points corresponding to the object to be identified determined according to the comparison result. The preset area threshold may include, for example, 0.3 square meters or 0.5 square meters, and may be determined empirically or according to the actual situation. Taking a preset area threshold of 0.3 square meters as an example, when the area of the projection region is determined to be greater than the area threshold, the minimum bounding box of the projection region is determined based on the projection region; based on a first region corresponding to the minimum bounding box and a preset first interval step, multiple candidate position points are determined within the first region; and the candidate position points located within the projection region are determined as the first position points.
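The area comparison above can be sketched with the shoelace formula over the contour corner points. This is an illustrative fragment, not the disclosure's implementation; the threshold value is one of the example values given above.

```python
def polygon_area(vertices):
    """Shoelace formula: area of the projection region described by the
    contour corner points, given in boundary order."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

AREA_THRESHOLD = 0.3  # square meters (example value from the text)

# A 1 m x 0.5 m projection region: area 0.5 > 0.3, so the bounding-box
# grid branch applies; otherwise the circular second-region branch does.
region = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (0.0, 0.5)]
use_bounding_box_grid = polygon_area(region) > AREA_THRESHOLD
```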
Figs. 2a to 2c are schematic diagrams of determining first position points provided by an embodiment of the present disclosure. As shown in Fig. 2a, a projection region 21 can be determined for the object to be identified; the projection region 21 is, for example, an irregular polygon. Using this projection region 21, the corresponding minimum bounding box 22 can be determined. In some implementations, the minimum bounding box 22 is a rectangle. After the minimum bounding box 22 is determined, the first region 23 that it occupies can be determined. Using a preset first interval step, multiple candidate position points can be determined within the first region 23. The preset first interval step may be, for example, 0.2 meters. As shown in Fig. 2b, multiple candidate position points 24 can be determined in the first region 23 using the preset first interval step; for a position point that has neighboring candidate position points 24 in each of the up, down, left, and right directions, the spacing between it and each of those neighboring candidate position points 24 is the preset first interval step.
After the multiple candidate position points 24 are determined, the projection region 21 can be used to select the candidate position points 24 corresponding to the object to be identified as the first position points. As shown in Fig. 2c, using the multiple candidate position points 24 shown in Fig. 2b and the projection region 21 shown in Fig. 2a, the candidate position points 24 located within the projection region 21 can be selected as the first position points 25.
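The grid-and-filter procedure of Figs. 2a to 2c can be sketched as follows, assuming the bounding box is axis-aligned and using a standard ray-casting point-in-polygon test; how points exactly on the boundary are treated (the points a and b discussed below) is an implementation choice, and here it simply follows the ray-casting convention.

```python
def grid_candidates(bbox, step):
    """Candidate position points 24 laid out at a fixed interval step
    inside the minimum bounding box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    pts = []
    y = ymin
    while y <= ymax + 1e-9:
        x = xmin
        while x <= xmax + 1e-9:
            pts.append((round(x, 6), round(y, 6)))
            x += step
        y += step
    return pts

def inside_polygon(pt, poly):
    """Ray-casting test: does the point fall within the projection region?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

region = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
candidates = grid_candidates((0.0, 0.0, 1.0, 1.0), 0.2)   # 6 x 6 grid
first_points = [p for p in candidates if inside_polygon(p, region)]
```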
Since the projection region 21 is usually an irregular figure, when the projection region 21 is used to determine the first position points 25, some candidate position points 24 may fall on the boundary of the projection region 21, such as points a and b shown in Fig. 2c. For this situation, in some implementations, in order to retain as many position points as possible, the candidate position points 24 corresponding to both point a and point b may be taken as first position points 25. In other implementations, because the projection region of this object to be identified is relatively large and a sufficient number of first position points can be determined, the candidate position points 24 corresponding to points a and b may instead be selectively filtered, in order to reduce the amount of data to be processed while improving processing accuracy. For example, a larger portion of the candidate position point 24 corresponding to point a falls within the projection region 21 than of the candidate position point 24 corresponding to point b; therefore, the candidate position point 24 corresponding to point a is retained among the first position points 25, while the candidate position point 24 corresponding to point b is filtered out. Which of the two approaches is used can be determined according to the actual situation and is not limited here.

When it is determined that the area of the projection region is less than or equal to the area threshold, the center point of the projection region can instead be determined; based on the center point and a preset radius length, a second region is determined as the circle whose center is the center point and whose radius is the preset radius length; and based on the second region and a preset second interval step, the first position points are determined within the second region.
Figs. 3a and 3b are schematic diagrams of determining first position points provided by an embodiment of the present disclosure. As shown in Fig. 3a, a projection region 31 (the region outlined with a dashed line in the figure) can be determined for the object to be identified; the projection region 31 is, for example, an irregular polygon. Using this projection region 31, the center point 32 of the projection region can be determined. Using the center point 32 and a preset radius length, the second region 33 (the region outlined with a solid line in the figure) can be determined, for example as the circle whose center is the center point 32 and whose radius is the preset radius length. The preset radius length may be, for example, 0.2 meters or 0.3 meters.
In some implementations, the maximum length obtained by radiating outward from the determined center point 32 to the boundary of the projection region 31 may be taken as the preset radius length. As shown in Fig. 3a, the maximum length r from the center point 32 to the boundary of the projection region 31 can be determined, and the second region 33 is then determined using this maximum length r as the preset radius length. In this way, for an object to be identified with a small projection region, a larger number of first position points can be determined, so that a sufficient number of first position points are available for further judging whether the object to be identified is a target object to be deleted.
Using a preset second interval step, multiple position points can be determined within the second region 33. In order to determine as many position points as possible for the object to be identified, position points falling on the edge of the second region 33 are retained when the position points are determined. The preset second interval step may be the same as the preset first interval step above, for example 0.2 meters, or may take a different value according to the actual situation, for example 0.15 meters; this can be decided during implementation. As shown in Fig. 3b, multiple position points can be determined in the second region 33 using the preset second interval step. Here, because the projection region 31 corresponding to this object to be identified is small, in order to ensure a sufficient number of position points for the object, all the position points located within the second region 33 may be determined as first position points 34; compared with keeping only the position points falling within the projection region 31, this yields a larger number of first position points 34.
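The small-region branch of Figs. 3a and 3b can be sketched as sampling a grid inside the circular second region, keeping points on its edge as the text requires. The center point and radius are taken as given; how they are computed from the projection region is described above.

```python
import math

def circle_first_points(center, radius, step):
    """First position points for a small projection region: all grid
    points at the given interval step that lie within or on the circle
    (the second region) around the region's center point."""
    cx, cy = center
    pts = []
    n = int(math.ceil(radius / step))
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = cx + i * step, cy + j * step
            # <= keeps points on the edge of the second region
            if math.hypot(x - cx, y - cy) <= radius + 1e-9:
                pts.append((round(x, 6), round(y, 6)))
    return pts

# Radius 0.2 m, step 0.2 m: center plus the four edge points.
points = circle_first_points((0.0, 0.0), 0.2, 0.2)
```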
Regarding S102 above, after the first position points corresponding to the object to be identified in the target scene are determined, whether the object to be identified is a target object to be deleted can further be determined based on the historical position information corresponding to historical candidate deletion objects in historical frames of radar scan data. In implementation, the following approach may be adopted, for example: based on the first position points corresponding to the object to be identified, judge whether the object to be identified is a current candidate deletion object; and in response to the object to be identified being a current candidate deletion object, determine whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frames of radar scan data and the first position points.
In some implementations, when judging, based on the first position points of the object to be identified, whether the object to be identified is a current candidate deletion object, the following approach may be adopted: acquire the current frame image obtained by scanning the target scene; project the first position points into the current frame image to obtain second projection points; and determine whether the object to be identified is a current candidate deletion object based on the position information of the second projection points in the current frame image and the positions, in the current frame image, of the obstacles included in the current frame image.
In some implementations, the current frame image may be acquired, for example, by an image acquisition device mounted on the autonomous vehicle. Specifically, the image acquisition device may include, for example, a color camera. Once the current frame image has been acquired, the first position points can be projected into the current frame image to obtain the second projection points. While the second projection points are obtained by projection, the position information of the obtained second projection points can also be determined.
Here, since the drivable area (freespace) of the autonomous vehicle can be determined from the current frame image, the second projection points obtained by projecting the first position points into the current frame image can be used to further determine whether the object to be identified is an object within the drivable area, and then to judge whether the object to be identified needs to be deleted.
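The projection of a first position point into the image can be illustrated with a standard pinhole camera model; the disclosure does not specify the camera model, so this is an assumed sketch. The intrinsics K are hypothetical values, and the input point is assumed to already be in the camera frame (i.e., the lidar-to-camera extrinsic transform has been applied).

```python
def project_to_image(point_cam, K):
    """Project a first position point, expressed in camera coordinates,
    into pixel coordinates (a second projection point) with pinhole
    intrinsics K = (fx, fy, cx, cy). Returns None for points behind
    the camera, which have no valid projection."""
    x, y, z = point_cam
    if z <= 0:
        return None
    fx, fy, cx, cy = K
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# Hypothetical intrinsics; real values come from camera calibration.
K = (1000.0, 1000.0, 640.0, 360.0)
pixel = project_to_image((1.0, 0.5, 5.0), K)
```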
In some implementations, the number of first position points is at least one, and projecting the first position points into the current frame image to obtain the second projection points may include: projecting at least one first position point into the current frame image to obtain at least one second projection point. When determining, based on the position information of the second projection points in the current frame image and the positions of the obstacles included in the current frame image, whether the object to be identified is a current candidate deletion object, the following approach may be adopted, for example:
For each second projection point, based on the position information of the second projection point in the current frame image and the positions, in the current frame image, of the obstacles included in the current frame image, predict an obstacle prediction result corresponding to the second projection point, where the obstacle prediction result is either that an obstacle exists at the position corresponding to the second projection point or that no obstacle exists there; then, based on the obstacle prediction results respectively corresponding to the second projection points, determine whether the object to be identified is a current candidate deletion object.
In some implementations, the obstacle prediction result corresponding to a second projection point may be predicted, for example, from the position information of the second projection point in the current frame image and the positions, in the current frame image, of the obstacles that the current frame image includes.
In some implementations, when a second projection point is located at a position occupied by an obstacle included in the current frame image, it can be believed with higher confidence that the second projection point indicates an obstacle at the corresponding position, so the corresponding obstacle prediction result is determined to be: an obstacle exists at the position corresponding to the second projection point. In another possible implementation, when the second projection point is not located at a position occupied by an obstacle included in the current frame image, it can be believed with higher confidence that the second projection point indicates no obstacle at the corresponding position, so the corresponding obstacle prediction result is determined to be: no obstacle exists at the position corresponding to the second projection point.
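The per-point prediction above reduces to a containment test of the projected pixel against the obstacle regions detected in the current frame image. The axis-aligned pixel box representation below is an assumption; a segmentation mask would work the same way.

```python
def predict_obstacle(pixel, obstacle_boxes):
    """Obstacle prediction result for one second projection point: True if
    the pixel lies inside any obstacle region detected in the current
    frame image, False otherwise. Regions are represented here as assumed
    axis-aligned pixel boxes (u_min, v_min, u_max, v_max)."""
    u, v = pixel
    return any(u_min <= u <= u_max and v_min <= v <= v_max
               for (u_min, v_min, u_max, v_max) in obstacle_boxes)

boxes = [(100, 200, 300, 400)]  # one detected obstacle region
results = [predict_obstacle(p, boxes) for p in [(150, 250), (500, 250)]]
```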
In this way, the obstacle prediction result corresponding to each of the multiple second projection points can be determined.
In some implementations, when determining whether the object to be identified is a current candidate deletion object using the obstacle prediction results respectively corresponding to the second projection points, the confidence that the object to be identified is an obstacle may, for example, be determined based on those obstacle prediction results, and whether the object to be identified is a current candidate deletion object may then be determined based on the confidence and a preset confidence threshold.
In some implementations, there are, for example, n second projection points, where n is a positive integer. When determining the confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points, the following approach may be adopted: traverse the 2nd to n-th second projection points; for the traversed i-th second projection point, determine the criterion function corresponding to the i-th second projection point based on its obstacle prediction result, where i is a positive integer greater than 1; determine the fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and obtain, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.

Exemplarily, for the 1st second projection point, the corresponding obstacle prediction result may be either that an obstacle exists at the position corresponding to the second projection point or that no obstacle exists there; when the obstacle prediction results differ, the determined criterion functions also differ. For example, when the obstacle prediction result is that an obstacle exists at the position corresponding to the second projection point, the corresponding criterion function M_1(·) may represent the likelihood function (mass) that determines the confidence of this hypothesis given that an obstacle exists at the second projection point; when the obstacle prediction result is that no obstacle exists at the position corresponding to the second projection point, the corresponding criterion function M_2(·) may represent the likelihood function that determines the confidence of this hypothesis given that no obstacle exists at the second projection point.
Here, for ease of description, the criterion function corresponding to any second projection point is uniformly denoted M_2(·); for second projection points with different obstacle prediction results, it may specifically be the above M_1(·) or M_2(·).
When traversing the 2nd to the n-th second projection points, the fusion criterion result of the 1st to (i-1)-th second projection points may also be determined, so as to determine the fusion criterion result corresponding to the i-th second projection point.
Taking the 2nd second projection point as an example, when determining the fusion criterion result corresponding to the 2nd second projection point, for example, the following formula (1) may be used:
$$M^{2}(A)=\frac{1}{K}\sum_{B\cap C=A}M^{1}(B)\,M_{2}(C)\qquad(1)$$
Here, a1 denotes the 1st second projection point and a2 denotes the 2nd second projection point. Corresponding to a2, its criterion function may be denoted M_2(a_2). Since, when traversing to a2, the preceding second projection points include only a1, the fusion criterion result M^1(·) may be represented directly by M^1(a_1); alternatively, it may also be written in the form M_2(a_1).
In addition, K denotes a normalization coefficient, which satisfies the following formula (2):
$$K=\sum_{B\cap C\neq\varnothing}M^{1}(B)\,M_{2}(C)\qquad(2)$$
Using formula (1), the fusion criterion result M^2(·) of a1 and a2, determined when traversing to the 2nd second projection point a2, can be obtained. Here, the superscript i of M indicates that the traversal has reached the i-th second projection point.
Then, the 3rd second projection point a3 is traversed. Similarly, the criterion function corresponding to a3 may be denoted, for example, M_2(a_3). At this time, the fusion criterion result corresponding to the 3rd second projection point, determined when traversing to a3, satisfies the following formula (3):
$$M^{3}(A)=\frac{1}{K}\sum_{B\cap C=A}M^{2}(B)\,M_{2}(C)\qquad(3)$$
where the normalization coefficient K satisfies the following formula (4):
$$K=\sum_{B\cap C\neq\varnothing}M^{2}(B)\,M_{2}(C)\qquad(4)$$
The above process continues in this way: the remaining second projection points are traversed in turn, the criterion function corresponding to each is determined, and the fusion criterion result of the 1st to the currently traversed second projection point is determined, until all second projection points have been traversed and the fusion criterion result corresponding to the n-th second projection point is obtained, thereby obtaining the confidence that the object to be identified is an obstacle. The confidence that the object to be identified is an obstacle may include, for example, a probability value, such as 0.61, 0.70 or 0.86.
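The traversal described by formulas (1) to (4) is an evidence-fusion loop in the style of Dempster's rule of combination. A minimal sketch follows; the three hypotheses ("obstacle", "free", "unknown" as the full frame of discernment), all function names and the example mass values are illustrative assumptions, not taken from the patent text.

```python
# Illustrative sketch of the iterative fusion of formulas (1)-(4).
# Hypotheses (assumed): "obstacle", "free" (no obstacle), "unknown" (full set).
OBSTACLE, FREE, UNKNOWN = "obstacle", "free", "unknown"

def combine(m1, m2):
    """Fuse two mass functions in the style of formulas (1) and (2)."""
    fused = {OBSTACLE: 0.0, FREE: 0.0, UNKNOWN: 0.0}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == UNKNOWN:
                inter = h2                # unknown ∩ X = X
            elif h2 == UNKNOWN or h1 == h2:
                inter = h1
            else:
                inter = None              # obstacle ∩ free = empty -> conflict
            if inter is None:
                conflict += v1 * v2
            else:
                fused[inter] += v1 * v2
    k = 1.0 - conflict                    # normalization coefficient K (> 0 assumed)
    return {h: v / k for h, v in fused.items()}

def obstacle_confidence(masses):
    """Fold the criterion functions of the 1st..n-th second projection points."""
    fused = masses[0]                     # M^1(.) is just the first criterion
    for m in masses[1:]:                  # traverse the 2nd..n-th points
        fused = combine(fused, m)
    return fused[OBSTACLE]
```

For example, fusing two identical criterion functions that each assign 0.6 to "obstacle", 0.3 to "free" and 0.1 to "unknown" raises the obstacle confidence to 0.75, mirroring how concordant image predictions at several projection points reinforce one another.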
After the confidence that the object to be identified is an obstacle has been determined, whether the object to be identified is the current candidate deletion object can be determined using a preset confidence threshold. The preset confidence threshold may likewise include, for example, a probability value, such as 0.75; it may be determined empirically or through multiple experiments during implementation. When the determined confidence that the object to be identified is an obstacle is numerically greater than the preset confidence threshold, the object to be identified can be determined to be the current candidate deletion object. Exemplarily, with a preset confidence threshold of 0.75, when the confidence that the object to be identified is an obstacle is 0.6, the object to be identified is determined not to be the current candidate deletion object; when the confidence is 0.80, the object to be identified is determined to be the current candidate deletion object. After the current candidate deletion object has been determined, whether the object to be identified is the target object to be deleted may further be determined based on the historical position information corresponding to the historical candidate deletion objects in the historical frames of radar scan data and on the first position point.
In some implementations, whether the object to be identified is the target object to be deleted may be determined in the following manner: determining a target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frames of radar scan data and on the first position point; and determining whether the object to be identified is the target object to be deleted based on a first projection region of the target candidate object on a preset plane and a second projection region of the object to be identified on the preset plane.
In some implementations, when the radar scans the target scene, historical frames of radar scan data determined before the current frame of radar scan data are, for example, also obtained. Similarly, the corresponding historical candidate deletion objects, together with their historical position information, can be determined from the historical frames of radar scan data. After the historical position information corresponding to the historical candidate objects has been determined, the distance between each historical candidate object and the object to be identified can be determined using the first position point. The historical position information used when determining the distance may be, for example, the position coordinates of the center point of a historical candidate object; correspondingly, a first position point that represents the center point of the object to be identified may be selected from the first position points to represent the position of the object to be identified. Then, using the determined historical position information and the determined first position point, the distance corresponding to each historical candidate object and the object to be identified can be determined.
In this way, the target candidate object closest to the object to be identified can be determined from the historical candidate objects according to the distances between the historical candidate objects and the object to be identified. In some implementations, the smallest of the determined distances may be identified and the corresponding historical candidate object taken as the target candidate object; alternatively, the historical candidate objects corresponding to several of the smaller distances may be taken as target candidate objects. During implementation, the manner of determining the target candidate object may be decided according to the actual situation.
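The nearest-candidate selection described above can be sketched as follows; the list layout of the stored history and the use of the saved center coordinates as each candidate's position are assumptions for illustration.

```python
import math

def nearest_candidate(history, center):
    """Pick the stored historical candidate closest to the object's center.

    history: list of (candidate_id, (x, y)) with saved center coordinates.
    center:  (x, y) center point of the current object to be identified.
    """
    def dist(pos):
        return math.hypot(pos[0] - center[0], pos[1] - center[1])
    return min(history, key=lambda c: dist(c[1]))
```

A variant keeping the k smallest distances instead of the single minimum would match the alternative mentioned above of taking several close candidates.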
After the target candidate object has been determined, the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane can also be determined. Here, the preset plane may include, for example, the plane of the road surface on which the autonomous vehicle is driving. Exemplarily, referring to Fig. 4a and Fig. 4b, which are each schematic diagrams of projection regions provided by an embodiment of the present disclosure: in the case shown in Fig. 4a, an overlapping region exists between the first projection region 41 of the target candidate object on the preset plane and the second projection region 42 of the object to be identified on the preset plane (for ease of distinction, the boundary of the first projection region 41 is shown in dashed lines); in the case shown in Fig. 4b, no overlapping region exists between the first projection region 41 and the second projection region 42.
Here, when no overlapping region exists between the first projection region and the second projection region, it can be considered that no target candidate object corresponding to the object to be identified exists in the most recent historical frame of scan data, indicating that the object to be identified is a newly appearing object, distinct from the target candidate objects, that has not appeared in the recent period; therefore, when detecting the object to be identified determined from the current frame of scan data, the object to be identified can be judged, in the current frame, to be the target object to be deleted.
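As a simple illustration of the overlap test between the first and second projection regions, the sketch below approximates each region on the road plane by an axis-aligned bounding box; representing the regions as boxes is an assumption, and a real implementation might intersect the actual contour polygons instead.

```python
def boxes_overlap(a, b):
    """Whether two projection regions intersect on the preset plane.

    a, b: (x_min, y_min, x_max, y_max) axis-aligned boxes (assumed layout).
    Touching edges are treated as no overlap.
    """
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])
```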
In addition, since the object to be identified is a newly appearing object, it can also be taken as a new historical candidate object, and its position information can be saved. The position information of the object to be identified can be used to perform obstacle detection processing on the next frame of radar scan data; the manner of doing so is similar to the processing described above that uses the position information corresponding to the target candidate object in the historical frames of scan data and to the object to be identified in the current frame of scan data.
Judging the object to be identified as the target object to be deleted applies only to the current frame; if the object to be identified appears in the same region on the preset plane across multiple consecutive frames of radar scan data, it can be judged, after those multiple frames have been determined, that the object to be identified is not the target object to be deleted.
In some implementations, in response to an overlapping region existing between the first projection region and the second projection region in the current frame of radar scan data, the object to be identified is taken as a new historical candidate object; if an overlapping region exists between the first projection region and the second projection region in each of the N consecutive frames of radar scan data following the current frame, it is determined that the candidate deletion object is not the target object to be deleted, and the new historical candidate object is removed; N is a positive integer. The value of N may include, for example, 5, 6, 8, 10 and so on; N may be determined according to the capture interval at which the radar scan data is acquired, or through experiments. The following description takes N = 10 as an example.
In some implementations, when it is determined that an overlapping region exists between the first projection region and the second projection region in N consecutive frames of radar scan data, it can accordingly be judged that the object to be identified is an object that actually exists in the target scene. Exemplarily, when a warning cone is placed on the road surface in the target scene, then after the first frame of radar scan data containing the warning cone has been obtained, the warning cone, since it really exists, is contained in multiple subsequent consecutive frames of radar scan data; it may, for example, take 15 frames of radar scan data before the warning cone can no longer be captured in the radar scan data. That is, if the object to be identified is an object that actually exists in the target scene, multiple consecutive frames of radar scan data will contain it. Since objects that actually exist in the target scene may interfere with the driving of the autonomous vehicle, such an object needs to be treated as not being the target object to be deleted. In addition, since the object to be identified can be determined not to be the target object to be deleted, there is correspondingly no need to continue detecting it, so the object to be identified that was taken as a new historical candidate object can be removed directly.
In some implementations, for the object to be identified, an overlapping region between the first projection region and the second projection region may fail to exist in each of N consecutive frames of radar scan data; for example, after the object to be identified appears, it is contained in no more than N consecutive frames of radar scan data. In that case, the object to be identified can be taken as the target object to be deleted. Exemplarily, when the object to be identified includes standing water, then, due to specular reflection and the influence of the reflection angle, the standing water may be contained in several consecutive frames of radar scan data acquired while the autonomous vehicle is driving; however, after several frames, for example after 3 frames of radar scan data, the radar no longer detects the standing water through scanning, because the angle, distance and so on change as the object is scanned, and so no further radar scan data of the object is collected at nearby positions of the object on the preset plane. In this case, it can be determined that the object to be identified is the target object to be deleted.
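The per-frame bookkeeping described above (keep the new historical candidate while the overlap persists, confirm it as a real object after N consecutive overlapping frames, otherwise treat it as the target to delete) might be sketched as follows; the counter layout and the verdict labels are assumptions.

```python
def update_track(consecutive_hits, overlapped, n=10):
    """One frame of tracking for a new historical candidate.

    consecutive_hits: overlapping frames seen so far for this candidate.
    overlapped:       whether the projection regions overlap in this frame.
    Returns (new_hit_count, verdict); verdict is None while still undecided.
    """
    if not overlapped:
        return 0, "delete"            # disappeared early -> target to delete
    hits = consecutive_hits + 1
    if hits >= n:
        return hits, "obstacle"       # persisted for N frames -> real object
    return hits, None
```

Driving this function once per frame reproduces both outcomes above: a warning cone seen in 10 consecutive frames is confirmed as an obstacle, while standing water that drops out after 3 frames is marked for deletion.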
In some embodiments, since the autonomous vehicle is continuously driving, the obtained radar scan data is also continuously changing. Therefore, for the historical candidate deletion objects, if a historical candidate deletion object has not appeared for a period of time, it can be considered that this historical candidate deletion object will not appear again as driving continues.
During implementation, for each historical candidate deletion object, the time difference between the save time of that historical candidate deletion object and the current time may be checked; if the time difference is greater than or equal to a preset time difference threshold, that historical candidate deletion object is removed. The preset time difference threshold may include, for example, 3 seconds, 4 seconds and so on, and may specifically be determined according to the actual situation or through experiments. In this way, it can also be ensured that the objects to be identified that are to be deleted are gradually screened out using the historical candidate deletion objects; the data that needs to be stored is reduced to a certain extent, the matching computation between the historical candidate deletion objects and the current objects to be identified is likewise reduced, and the obstacle detection efficiency is improved.
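The age-based pruning step might look like the following sketch; the tuple layout of a stored candidate and the 3-second default threshold are illustrative assumptions.

```python
def prune_history(history, now, max_age=3.0):
    """Drop historical candidates whose save time is too far in the past.

    history: list of (candidate_id, saved_at_seconds, position) tuples (assumed).
    now:     current time in seconds.
    max_age: preset time difference threshold, e.g. 3 s.
    """
    return [c for c in history if now - c[1] < max_age]
```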
Regarding the above S103: when it is determined according to the above S102 that the object to be identified is not the target object to be deleted, the object to be identified can be determined to be an obstacle in the target scene. At this time, for example, the autonomous vehicle can be controlled to perform an avoidance maneuver. Since the obstacle detection method provided by the embodiments of the present disclosure achieves high accuracy when detecting and judging whether the object to be identified is the target object to be deleted, applying it during the automatic driving of a vehicle makes it possible to judge relatively accurately which obstacles in the driving region actually need to be avoided, and to complete effective obstacle avoidance. In this way, while riding in the autonomous vehicle, occupants experience noticeably fewer sharp swerves or sudden braking caused by the recognition of non-obstacles, and the riding experience is better.
In some embodiments, a specific embodiment of detecting the object to be identified in the target scene is also provided. Fig. 5 is a schematic flowchart of an implementation of detecting an object to be identified provided by an embodiment of the present disclosure, in which:
S501: Based on the current frame of radar scan data obtained by scanning the target scene, determine the first position point corresponding to the object to be identified.
S502: Based on the first position point, judge whether the object to be identified is the current candidate deletion object; if so, go to step S503; if not, go to step S507.
S503: Based on the historical position information corresponding to the historical candidate objects in the historical frames of radar scan data and on the first position point, determine a target candidate object from the historical candidate objects.
S504: Determine whether an overlapping region exists between the first projection region of the target candidate object on the preset plane and the second projection region of the object to be identified on the preset plane; if so, go to S505; if not, go to S508.
S505: Take the object to be identified as a new historical candidate object.
S506: Determine whether an overlapping region exists between the first projection region and the second projection region in each of the N consecutive frames of radar scan data from the current frame onward; if so, go to S507; if not, go to S508.
S507: Determine that the object to be identified is an obstacle in the target scene.
S508: Determine that the object to be identified is the target object to be deleted.
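The branching order of steps S501 to S508 can be summarized in one function; all helper callables are assumed to be supplied by the surrounding system, and only the control flow follows the figure.

```python
def detect(obj, is_deletion_candidate, match_candidate, regions_overlap,
           persisted_n_frames, register_candidate):
    """Control-flow sketch of S501-S508 for one object to be identified."""
    # S502: not a current candidate deletion object -> S507 (obstacle)
    if not is_deletion_candidate(obj):
        return "obstacle"
    # S503: find the closest historical candidate (may be None)
    candidate = match_candidate(obj)
    # S504: no overlap between the projection regions -> S508 (delete)
    if candidate is None or not regions_overlap(candidate, obj):
        return "delete"
    # S505: keep the object as a new historical candidate
    register_candidate(obj)
    # S506: overlap persisted for N consecutive frames -> S507, else S508
    return "obstacle" if persisted_n_frames(obj) else "delete"
```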
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides an obstacle detection apparatus corresponding to the obstacle detection method; since the apparatus in the embodiments of the present disclosure corresponds to the above obstacle detection method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method.
Fig. 6 is a schematic diagram of an obstacle detection apparatus provided by an embodiment of the present disclosure. As shown in Fig. 6, the apparatus includes a first determining part 61, a second determining part 62 and a third determining part 63, in which:
the first determining part 61 is configured to determine, based on a current frame of radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene;
the second determining part 62 is configured to determine, based on historical position information corresponding to historical candidate deletion objects in historical frames of radar scan data and on the first position point, whether the object to be identified is a target object to be deleted;
the third determining part 63 is configured to determine, in response to the object to be identified not being the target object to be deleted, the object to be identified as an obstacle in the target scene.
In some implementations, when determining, based on the historical position information corresponding to the historical candidate deletion objects in the historical frames of radar scan data and on the first position point, whether the object to be identified is the target object to be deleted, the second determining part 62 is configured to: judge, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and, in response to the object to be identified being the current candidate deletion object, determine, based on the historical position information corresponding to the historical candidate deletion objects in the historical frames of radar scan data and on the first position point, whether the object to be identified is the target object to be deleted.
In some implementations, the second determining part 62 is further configured to: in response to the object to be identified not being the current candidate deletion object, determine the object to be identified as an obstacle in the target scene.
In some implementations, when determining, based on the current frame of radar scan data obtained by scanning the target scene, the first position point corresponding to the object to be identified in the target scene, the first determining part 61 is configured to: for each object to be identified, determine, from the current frame of radar scan data, the point cloud points corresponding to the object to be identified; determine, based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, the contour information corresponding to the object to be identified; and determine, based on the contour information, the first position point corresponding to the object to be identified.
In some implementations, when determining, based on the three-dimensional position information of the point cloud points corresponding to the object to be identified in the target scene, the contour information corresponding to the object to be identified, the first determining part 61 is configured to: project the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and determine, based on the two-dimensional position information of the first projection points in the preset plane, the contour information of the object to be identified.
In some implementations, when determining, based on the contour information, the first position point corresponding to the object to be identified, the first determining part 61 is configured to: determine, using the contour information of the object to be identified, the projection region of the object to be identified in the preset plane; and determine, based on the area of the projection region, the first position point corresponding to the object to be identified.
In some implementations, when determining, based on the area of the projection region, the first position point corresponding to the object to be identified, the first determining part 61 is configured to: compare the area of the projection region with a preset area threshold; in response to the area being greater than the area threshold, determine, based on the projection region, the minimum bounding box of the projection region; determine, based on a first region corresponding to the minimum bounding box and a preset first interval step, multiple alternative position points in the first region; and determine the alternative position points located in the projection region as the first position points.
In some implementations, when determining, based on the area of the projection region, the first position point corresponding to the object to be identified, the first determining part 61 is configured to: compare the area of the projection region with a preset area threshold; in response to the area being less than or equal to the area threshold, determine the center point of the projection region; determine, based on the center point and a preset radius length, a second region whose center is the center point and whose radius is the preset radius length; and determine, based on the second region and a preset second interval step, the first position point in the second region.
In some implementations, when judging, based on the first position point corresponding to the object to be identified, whether the object to be identified is the current candidate deletion object, the second determining part 62 is configured to: obtain a current frame image captured of the target scene; project the first position point into the current frame image to obtain a second projection point; and determine, based on the position information of the second projection point in the current frame image and on the positions, in the current frame image, of the obstacles included in the current frame image, whether the object to be identified is the current candidate deletion object.
In some implementations, there is at least one first position point; when projecting the first position point into the current frame image to obtain the second projection point, the second determining part 62 is configured to: project the at least one first position point into the current frame image to obtain at least one second projection point. When determining, based on the position information of the second projection points in the current frame image and on the positions, in the current frame image, of the obstacles included in the current frame image, whether the object to be identified is the current candidate deletion object, the second determining part 62 is configured to: for each second projection point, predict, based on the position information of the second projection point in the current frame image and on the positions of the obstacles included in the current frame image, an obstacle prediction result corresponding to the second projection point, the obstacle prediction result including that an obstacle exists, or that no obstacle exists, at the position corresponding to the second projection point; and determine, based on the obstacle prediction results respectively corresponding to the second projection points, whether the object to be identified is the current candidate deletion object.
In some implementations, when determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points, the second determining part 62 is configured to: determine, based on those obstacle prediction results, a confidence that the object to be identified is an obstacle; and determine whether the object to be identified is the current candidate deletion object based on the confidence and a preset confidence threshold.
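The confidence-versus-threshold decision above can be sketched as follows. Treating each per-point prediction as a vote and averaging the votes into a confidence is an assumed reading of the disclosure, and 0.5 is an assumed threshold value:

```python
def is_current_candidate_deletion(predictions, confidence_threshold=0.5):
    """predictions: one boolean per second projection point, True where the
    point falls on an obstacle detected in the current frame image.
    Averaging the per-point votes into a confidence is an assumption for
    this sketch; 0.5 is an assumed threshold."""
    confidence = sum(predictions) / len(predictions)
    # Below the threshold, the radar detection is not corroborated by the
    # image, so the object becomes a current candidate deletion object.
    return confidence < confidence_threshold

weak = is_current_candidate_deletion([False, False, True])   # confidence 1/3
strong = is_current_candidate_deletion([True, True, False])  # confidence 2/3
```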
In some implementations, there are n second projection points, where n is an integer greater than 1. When determining the confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points, the second determining part 62 is configured to: traverse the 2nd to the n-th second projection points; for the traversed i-th second projection point, determine a criterion function corresponding to the i-th second projection point based on its obstacle prediction result, where i is a positive integer greater than 1; determine a fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and obtain, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
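The recursive fusion described above can be sketched with a log-odds (binary Bayes) update as one plausible choice of criterion function; the disclosure does not specify the criterion function itself, and `p_hit`/`p_miss` are assumed sensor-model probabilities:

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse_confidence(predictions, p_hit=0.7, p_miss=0.4):
    """Recursively fuse per-point obstacle predictions into one confidence,
    following the traversal above: the criterion of the i-th point is
    combined with the fusion result of points 1..i-1. The log-odds update
    is an illustrative criterion function, not the one claimed."""
    fused = log_odds(p_hit if predictions[0] else p_miss)   # point 1
    for hit in predictions[1:]:                             # points 2..n
        fused = fused + log_odds(p_hit if hit else p_miss)  # fusion step i
    return 1.0 / (1.0 + math.exp(-fused))                   # to probability

conf_high = fuse_confidence([True, True, False, True])
conf_low = fuse_confidence([False, False, False, False])
```

The returned probability plays the role of the confidence that is then compared against the preset confidence threshold.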
In some implementations, when determining whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion objects in the historical frame radar scan data and the first position point, the second determining part 62 is configured to: determine a target candidate object from historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point; and determine whether the object to be identified is the target object to be deleted based on a first projection area of the target candidate object on a preset plane and a second projection area of the object to be identified on the preset plane.
In some implementations, the historical frame radar scan data includes at least one historical candidate object. When determining the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point, the second determining part 62 is configured to: determine the distance between each historical candidate object and the object to be identified based on the historical position information corresponding to that historical candidate object and the first position point; and, based on these distances, determine the historical candidate object closest to the object to be identified as the target candidate object.
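The nearest-candidate selection above can be sketched as follows. Using the minimum point-to-position Euclidean distance as the object distance is an assumed choice; the disclosure only requires some distance between each historical candidate and the object:

```python
import math

def pick_target_candidate(history, first_points):
    """history: {object_id: (x, y)} historical positions of historical
    candidate objects on the preset plane. first_points: (x, y) first
    position points of the object to be identified. Returns the id of the
    historical candidate closest to the object."""
    def dist(pos, pts):
        # Assumed metric: nearest first position point to the candidate.
        return min(math.hypot(pos[0] - p[0], pos[1] - p[1]) for p in pts)
    return min(history, key=lambda oid: dist(history[oid], first_points))

history = {"a": (0.0, 0.0), "b": (5.0, 5.0)}
target = pick_target_candidate(history, [(4.0, 4.5), (4.5, 5.0)])
```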
In some implementations, when determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane, the second determining part 62 is configured to: determine whether the first projection area and the second projection area have an overlapping area; and, in response to the first projection area and the second projection area having no overlapping area, determine that the object to be identified is the target object to be deleted.
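The overlap test above can be sketched on the preset plane as follows; modelling each projection area as an axis-aligned rectangle is an assumption for illustration, not a shape the disclosure prescribes:

```python
def areas_overlap(r1, r2):
    """r1, r2: projection areas on the preset plane, modelled as
    axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    return not (r1[2] <= r2[0] or r2[2] <= r1[0] or
                r1[3] <= r2[1] or r2[3] <= r1[1])

def is_target_to_delete(first_area, second_area):
    # No overlap between the target candidate's first projection area and
    # the object's second projection area marks the object for deletion.
    return not areas_overlap(first_area, second_area)

separated = is_target_to_delete((0, 0, 1, 1), (2, 2, 3, 3))
touching = is_target_to_delete((0, 0, 2, 2), (1, 1, 3, 3))
```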
In some implementations, when determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane, the second determining part 62 is further configured to: in response to the first projection area and the second projection area in the current frame radar scan data having an overlapping area, take the object to be identified as a new historical candidate object; and if the first projection area and the second projection area overlap in each of N consecutive frames of radar scan data following the current frame of radar scan data, determine that the candidate deletion object is not the target object to be deleted and delete the new historical candidate object, where N is a positive integer.
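The N-consecutive-frame confirmation described above can be sketched as a small tracker. The streak-reset behavior on a non-overlapping frame and the value N=2 in the usage lines are assumptions for this sketch:

```python
class CandidateTracker:
    """Keeps an object as a new historical candidate until its projection
    area has overlapped the matched area in N consecutive frames; the
    object is then confirmed (not a target to delete) and removed from
    the candidate pool."""
    def __init__(self, n_confirm=3):
        self.n_confirm = n_confirm
        self.hits = {}                     # object id -> consecutive overlaps

    def update(self, obj_id, overlapped):
        if not overlapped:
            self.hits[obj_id] = 0          # streak broken, keep tracking
            return False
        self.hits[obj_id] = self.hits.get(obj_id, 0) + 1
        if self.hits[obj_id] >= self.n_confirm:
            del self.hits[obj_id]          # confirmed: drop the candidate
            return True
        return False

tracker = CandidateTracker(n_confirm=2)
first = tracker.update("obj1", True)       # 1st consecutive overlap
second = tracker.update("obj1", True)      # 2nd overlap -> confirmed
```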
In some implementations, the detection apparatus further includes a processing part 64 configured to: for each historical candidate deletion object, detect the time difference between the storage time of that historical candidate deletion object and the current time; and delete the historical candidate deletion object if the time difference is greater than or equal to a preset time difference threshold.
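The time-based pruning performed by the processing part can be sketched as follows; the 2-second threshold is an assumed value, not one given in the disclosure:

```python
import time

def prune_candidates(candidates, max_age_s=2.0, now=None):
    """candidates: {object_id: saved_timestamp_seconds}. Deletes every
    historical candidate deletion object whose storage time differs from
    the current time by at least the preset threshold."""
    now = time.time() if now is None else now
    return {oid: ts for oid, ts in candidates.items()
            if now - ts < max_age_s}

kept = prune_candidates({"a": 100.0, "b": 103.5}, max_age_s=2.0, now=104.0)
```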
For descriptions of the processing flow of each part in the apparatus and the interaction flow between the parts, reference may be made to the relevant descriptions in the foregoing method embodiments.
In the embodiments of the present disclosure and other embodiments, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may also be a unit, and may be a module or non-modular.
An embodiment of the present disclosure further provides a computer device. FIG. 7 is a schematic structural diagram of the computer device provided by the embodiment of the present disclosure. As shown in FIG. 7, the device includes:
a processor 10 and a memory 20. The memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20. When the machine-readable instructions are executed by the processor 10, the processor 10 performs the following steps: determining, based on current frame radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene; determining whether the object to be identified is a target object to be deleted based on historical position information corresponding to a historical candidate deletion object in historical frame radar scan data and the first position point; and, in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene. The memory 20 includes an internal memory 210 and an external memory 220. The internal memory 210 temporarily stores operation data of the processor 10 and data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 through the internal memory 210. For the execution process of the above instructions, reference may be made to the steps of the obstacle detection method described in the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When the computer program is run by a processor, the steps of the obstacle detection method described in the foregoing method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code. The instructions included in the program code can be used to perform the steps of the obstacle detection method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments. The computer program product may be implemented by hardware, software, or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium; in some embodiments, the computer program product is embodied as a software product, such as a software development kit (SDK).
An embodiment of the present disclosure further provides a computer program product that includes a non-transitory computer-readable storage medium storing a computer program. When the computer program is read and executed by a computer, some or all of the steps of the above method are implemented.
An embodiment of the present disclosure provides a computer program including computer-readable code. When the computer-readable code runs on a computer device, a processor in the computer device performs some or all of the steps of the above method.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above-described embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may still, within the technical scope disclosed herein, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features. Such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure.
Industrial Applicability
Embodiments of the present disclosure provide an obstacle detection method and apparatus, a computer device, a storage medium, a computer program, and a computer program product. The obstacle detection method includes: determining, based on current frame radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene; determining whether the object to be identified is a target object to be deleted based on historical position information corresponding to a historical candidate deletion object in historical frame radar scan data and the first position point; and, in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene. The obstacle detection method provided by the embodiments of the present disclosure detects obstacles with high accuracy.

Claims (22)

  1. An obstacle detection method, comprising:
    determining, based on current frame radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene;
    determining whether the object to be identified is a target object to be deleted based on historical position information corresponding to a historical candidate deletion object in historical frame radar scan data and the first position point; and
    in response to the object to be identified not being the target object to be deleted, determining the object to be identified as an obstacle in the target scene.
  2. The detection method according to claim 1, wherein the determining whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point comprises:
    judging, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object; and
    in response to the object to be identified being the current candidate deletion object, determining whether the object to be identified is the target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point.
  3. The detection method according to claim 2, further comprising: in response to the object to be identified not being the current candidate deletion object, determining the object to be identified as an obstacle in the target scene.
  4. The detection method according to any one of claims 1 to 3, wherein the determining, based on the current frame radar scan data obtained by scanning the target scene, the first position point corresponding to the object to be identified in the target scene comprises:
    for each object to be identified, determining point cloud points corresponding to the object to be identified from the current frame radar scan data;
    determining contour information corresponding to the object to be identified based on three-dimensional position information, in the target scene, of the point cloud points corresponding to the object to be identified; and
    determining the first position point corresponding to the object to be identified based on the contour information.
  5. The detection method according to claim 4, wherein the determining the contour information corresponding to the object to be identified based on the three-dimensional position information, in the target scene, of the point cloud points corresponding to the object to be identified comprises:
    projecting the point cloud points corresponding to the object to be identified onto a preset plane to obtain first projection points; and
    determining the contour information of the object to be identified based on two-dimensional position information of the first projection points in the preset plane.
  6. The detection method according to claim 4 or 5, wherein the determining the first position point corresponding to the object to be identified based on the contour information comprises:
    determining a projection area of the object to be identified in a preset plane by using the contour information of the object to be identified; and
    determining the first position point corresponding to the object to be identified based on the area of the projection area.
  7. The detection method according to claim 6, wherein the determining the first position point corresponding to the object to be identified based on the area of the projection area comprises:
    comparing the area of the projection area with a preset area threshold;
    in response to the area being greater than the area threshold, determining a minimum bounding box of the projection area based on the projection area;
    determining a plurality of candidate position points within a first region based on the first region corresponding to the minimum bounding box and a preset first interval step; and
    determining candidate position points located within the projection area as the first position points.
  8. The detection method according to claim 6, wherein the determining the first position point corresponding to the object to be identified based on the area of the projection area comprises:
    comparing the area of the projection area with a preset area threshold;
    in response to the area being less than or equal to the area threshold, determining a center point of the projection area;
    determining, based on the center point and a preset radius length, a second region centered on the center point and having the preset radius length as its radius; and
    determining the first position point within the second region based on the second region and a preset second interval step.
  9. The detection method according to any one of claims 1 to 8, wherein the judging, based on the first position point corresponding to the object to be identified, whether the object to be identified is a current candidate deletion object comprises:
    acquiring a current frame image obtained by capturing the target scene;
    projecting the first position point into the current frame image to obtain a second projection point; and
    determining whether the object to be identified is the current candidate deletion object based on position information of the second projection point in the current frame image and positions, in the current frame image, of obstacles included in the current frame image.
  10. The detection method according to claim 9, wherein there is at least one first position point;
    the projecting the first position point into the current frame image to obtain a second projection point comprises:
    projecting the at least one first position point into the current frame image to obtain at least one second projection point; and
    the determining whether the object to be identified is the current candidate deletion object based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image comprises:
    for each second projection point, predicting an obstacle prediction result corresponding to the second projection point based on the position information of the second projection point in the current frame image and the positions of the obstacles included in the current frame image, the obstacle prediction result indicating that an obstacle is present, or that no obstacle is present, at the position corresponding to the second projection point; and
    determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points.
  11. The detection method according to claim 10, wherein the determining whether the object to be identified is the current candidate deletion object based on the obstacle prediction results respectively corresponding to the second projection points comprises:
    determining a confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points; and
    determining whether the object to be identified is the current candidate deletion object based on the confidence and a preset confidence threshold.
  12. The detection method according to claim 11, wherein there are n second projection points, n being an integer greater than 1;
    the determining the confidence that the object to be identified is an obstacle based on the obstacle prediction results respectively corresponding to the second projection points comprises:
    traversing the 2nd to the n-th second projection points;
    for the traversed i-th second projection point, determining a criterion function corresponding to the i-th second projection point based on the obstacle prediction result of the i-th second projection point, wherein i is a positive integer greater than 1;
    determining a fusion criterion result corresponding to the i-th second projection point based on the criterion function corresponding to the i-th second projection point and the fusion criterion result of the 1st to (i-1)-th second projection points; and
    obtaining, based on the fusion criterion result corresponding to the i-th second projection point, the confidence that the object to be identified is an obstacle.
  13. The detection method according to any one of claims 1 to 12, wherein the determining whether the object to be identified is a target object to be deleted based on the historical position information corresponding to the historical candidate deletion object in the historical frame radar scan data and the first position point comprises:
    determining a target candidate object from historical candidate objects based on historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point; and
    determining whether the object to be identified is the target object to be deleted based on a first projection area of the target candidate object on a preset plane and a second projection area of the object to be identified on the preset plane.
  14. The detection method according to claim 13, wherein the historical frame radar scan data includes at least one historical candidate object; and the determining the target candidate object from the historical candidate objects based on the historical position information corresponding to the historical candidate objects in the historical frame radar scan data and the first position point comprises:
    determining the distance between each historical candidate object and the object to be identified based on the historical position information corresponding to each historical candidate object in the historical frame radar scan data and the first position point; and
    determining, from the historical candidate objects and based on the respective distances, the historical candidate object closest to the object to be identified as the target candidate object.
  15. The detection method according to claim 13 or 14, wherein the determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane comprises:
    determining whether the first projection area and the second projection area have an overlapping area; and
    in response to the first projection area and the second projection area having no overlapping area, determining that the object to be identified is the target object to be deleted.
  16. The detection method according to claim 15, wherein the determining whether the object to be identified is the target object to be deleted based on the first projection area of the target candidate object on the preset plane and the second projection area of the object to be identified on the preset plane further comprises:
    in response to the first projection area and the second projection area in the current frame radar scan data having an overlapping area, taking the object to be identified as a new historical candidate object; and if the first projection area and the second projection area overlap in each of N consecutive frames of radar scan data following the current frame of radar scan data, determining that the candidate deletion object is not the target object to be deleted and deleting the new historical candidate object, N being a positive integer.
  17. The detection method according to any one of claims 1 to 16, further comprising: for each of the historical candidate deletion objects, detecting a time difference between the save time of the historical candidate deletion object and the current time; and
    in a case where the time difference is greater than or equal to a preset time difference threshold, deleting the historical candidate deletion object.
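The age-based pruning in claim 17 reduces to a single filtering pass. A minimal Python sketch, assuming candidates are stored as a hypothetical mapping from object id to save timestamp (the data layout and function name are not specified by the patent):

```python
def prune_stale_candidates(candidates, current_time, time_threshold):
    """Claim 17: for each historical candidate deletion object, compute the
    difference between the current time and its save time, and delete the
    object once that difference is greater than or equal to the preset
    time difference threshold.
    """
    return {
        obj_id: saved_at
        for obj_id, saved_at in candidates.items()
        if current_time - saved_at < time_threshold
    }

# Usage: "a" was saved 10 s ago (>= 5 s threshold) and is pruned; "b" is kept.
remaining = prune_stale_candidates({"a": 0.0, "b": 9.5}, 10.0, 5.0)
```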
  18. An obstacle detection apparatus, comprising:
    a first determination part, configured to determine, based on a current frame of radar scan data obtained by scanning a target scene, a first position point corresponding to an object to be identified in the target scene;
    a second determination part, configured to determine whether the object to be identified is a target object to be deleted based on historical position information corresponding to a historical candidate deletion object in historical frames of radar scan data and the first position point; and
    a third determination part, configured to determine the object to be identified as an obstacle in the target scene in response to the object to be identified not being the target object to be deleted.
  19. A computer device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the obstacle detection method according to any one of claims 1 to 17.
  20. A computer-readable storage medium, having a computer program stored thereon, wherein, when the computer program is run by a computer device, the computer device performs the steps of the obstacle detection method according to any one of claims 1 to 17.
  21. A computer program, comprising computer-readable code, wherein, when the computer-readable code is run on a computer device, a processor in the computer device performs the steps of the obstacle detection method according to any one of claims 1 to 17.
  22. A computer program product, comprising a non-transitory computer-readable storage medium storing a computer program, wherein, when the computer program is read and executed by a computer, the steps of the obstacle detection method according to any one of claims 1 to 17 are implemented.
PCT/CN2022/075423 2021-09-30 2022-02-07 Obstacle detection method and apparatus, and computer device, storage medium, computer program and computer program product WO2023050679A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111165461.0A CN113887433A (en) 2021-09-30 2021-09-30 Obstacle detection method and device, computer equipment and storage medium
CN202111165461.0 2021-09-30

Publications (1)

Publication Number Publication Date
WO2023050679A1 true WO2023050679A1 (en) 2023-04-06

Family

ID=79005122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075423 WO2023050679A1 (en) 2021-09-30 2022-02-07 Obstacle detection method and apparatus, and computer device, storage medium, computer program and computer program product

Country Status (2)

Country Link
CN (1) CN113887433A (en)
WO (1) WO2023050679A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887433A (en) * 2021-09-30 2022-01-04 上海商汤临港智能科技有限公司 Obstacle detection method and device, computer equipment and storage medium
WO2023133772A1 (en) * 2022-01-13 2023-07-20 深圳市大疆创新科技有限公司 Obstacle detection methods and apparatus, and device, radar apparatus and movable platform

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120081542A1 (en) * 2010-10-01 2012-04-05 Andong University Industry-Academic Cooperation Foundation Obstacle detecting system and method
CN109509210A (en) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 Barrier tracking and device
CN109521757A (en) * 2017-09-18 2019-03-26 百度在线网络技术(北京)有限公司 Static-obstacle thing recognition methods and device
CN112285714A (en) * 2020-09-08 2021-01-29 苏州挚途科技有限公司 Obstacle speed fusion method and device based on multiple sensors
CN112316436A (en) * 2020-11-30 2021-02-05 超参数科技(深圳)有限公司 Obstacle avoidance method and device for intelligent body, computer equipment and storage medium
CN112698315A (en) * 2019-10-23 2021-04-23 阿里巴巴集团控股有限公司 Mobile device positioning system, method and device
CN113887433A (en) * 2021-09-30 2022-01-04 上海商汤临港智能科技有限公司 Obstacle detection method and device, computer equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680752A (en) * 2023-05-23 2023-09-01 杭州水立科技有限公司 Hydraulic engineering safety monitoring method and system based on data processing
CN116680752B (en) * 2023-05-23 2024-03-19 杭州水立科技有限公司 Hydraulic engineering safety monitoring method and system based on data processing

Also Published As

Publication number Publication date
CN113887433A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
WO2023050679A1 (en) Obstacle detection method and apparatus, and computer device, storage medium, computer program and computer program product
US11282210B2 (en) Method and apparatus for segmenting point cloud data, storage medium, and electronic device
US11222441B2 (en) Methods and apparatuses for object detection, and devices
CN110458854B (en) Road edge detection method and device
US10769466B2 (en) Precision aware drone-based object mapping based on spatial pattern recognition
US11132559B2 (en) Abnormality detection method, apparatus, and device for unmanned checkout
JP7413543B2 (en) Data transmission method and device
CN112966696A (en) Method, device and equipment for processing three-dimensional point cloud and storage medium
US20200200545A1 (en) Method and System for Determining Landmarks in an Environment of a Vehicle
EP2743861A2 (en) Method and device for detecting continuous object in disparity direction based on disparity map
US11379995B2 (en) System and method for 3D object detection and tracking with monocular surveillance cameras
US20200320331A1 (en) System and method for object recognition using local binarization
WO2024012211A1 (en) Autonomous-driving environmental perception method, medium and vehicle
CN113191297A (en) Pavement identification method and device and electronic equipment
WO2020127151A1 (en) Method for improved object detection
Yun et al. Speed-bump detection for autonomous vehicles by lidar and camera
CN114910927A (en) Event-based vehicle attitude estimation using monochromatic imaging
Berriel et al. A particle filter-based lane marker tracking approach using a cubic spline model
US11783597B2 (en) Image semantic segmentation for parking space detection
CN112784675A (en) Target detection method and device, storage medium and terminal
US20210272316A1 (en) Method, System and Apparatus for Object Detection in Point Clouds
CN111767751B (en) Two-dimensional code image recognition method and device
US11538260B2 (en) Object identification apparatus, object identification method, and nontransitory computer readable medium storing control program
CN112733923A (en) System and robot for determining forbidden area
US20230351765A1 (en) Systems and methods for detecting a reflection artifact in a point cloud