CN115311646A - Method and device for detecting obstacles

Method and device for detecting obstacles

Info

Publication number
CN115311646A
Authority
CN
China
Prior art keywords
obstacle
unmanned vehicle
point cloud
frame data
driving track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211113079.XA
Other languages
Chinese (zh)
Inventor
严海旭
何贝
刘鹤云
张岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sinian Zhijia Technology Co ltd
Original Assignee
Beijing Sinian Zhijia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sinian Zhijia Technology Co ltd filed Critical Beijing Sinian Zhijia Technology Co ltd
Priority to CN202211113079.XA priority Critical patent/CN115311646A/en
Publication of CN115311646A publication Critical patent/CN115311646A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an obstacle detection method, an obstacle detection device, an electronic device, and a machine-readable storage medium, applied to an unmanned vehicle. The method comprises the following steps: acquiring point cloud frame data, and storing the continuous point cloud frame data into at least one scene set, where each scene set comprises at least one frame of point cloud frame data; calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set, and determining the position and/or speed information of an obstacle in the point cloud frame data; and judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.

Description

Method and device for detecting obstacles
Technical Field
The present application relates to the field of computer vision technology, and in particular to a method and an apparatus for obstacle detection, an electronic device, and a machine-readable storage medium.
Background
Vehicle autonomous driving tasks place strong demands on environmental awareness and obstacle detection: only when the environment is perceived accurately can safer and more reasonable decisions be made. In such tasks, 3D point cloud data is usually collected to analyze and label the surrounding environment. A point cloud is a data set of points in some coordinate system, and each point can carry rich information, including three-dimensional coordinates X, Y, Z, color, classification value, intensity value, time, and the like. Point cloud data is mainly acquired by three-dimensional laser scanners; it can also be obtained through three-dimensional reconstruction from two-dimensional images, with the point cloud produced during reconstruction, or computed from a three-dimensional model. In general, 3D point cloud data is acquired using Light Detection and Ranging (LiDAR), a laser detection and measurement technology, and the point cloud data is processed and applied as it is acquired. LiDAR data acquisition modes fall into three major categories: satellite-borne, airborne, and ground-based, and most point cloud data used for automatic driving is acquired by vehicle-mounted ground equipment. Unlike RGB images, LiDAR point clouds are 3D and unstructured. Given the real-time requirements of vehicle automatic driving tasks, many frames of point cloud data are collected continuously while the vehicle drives. A common method is to perform obstacle detection on each frame separately and then aggregate statistics over the detected obstacle information; this method cannot reflect the influence of obstacles on the unmanned driving of unmanned vehicles over continuous time. Therefore, the technical problem to be solved in the field is how to reliably feed back the influence of the detection result on the driving state of the current unmanned vehicle.
Disclosure of Invention
The application provides an obstacle detection method, applied to an unmanned vehicle, the method comprising the following steps:
acquiring point cloud frame data, and storing the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data;
calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set, and determining the position and/or speed information of an obstacle in the point cloud frame data;
and judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
Optionally, the storing the point cloud frame data into at least one scene set includes:
and storing, among the point cloud frame data, point cloud frames whose time interval is smaller than a preset threshold value into the same scene set.
Optionally, the obstacle includes a static obstacle, and the judging, based on the position and/or speed information of the obstacle, whether the obstacle will invade the driving track of the unmanned vehicle includes:
judging whether the static obstacle will invade the driving track of the unmanned vehicle based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the static obstacle will invade the driving track of the unmanned vehicle.
Optionally, the obstacle includes a dynamic obstacle, and the judging, based on the position and/or speed information of the obstacle, whether the obstacle will invade the driving track of the unmanned vehicle includes:
determining the shortest distance from the position of the dynamic obstacle to the driving track of the unmanned vehicle based on the position and speed information of the dynamic obstacle, and judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the dynamic obstacle will invade the driving track of the unmanned vehicle.
The application further provides an obstacle detection device, applied to an unmanned vehicle, the device comprising:
the data acquisition module is used for acquiring point cloud frame data and storing the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data;
the track calculation module is used for calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set and determining the position and/or speed information of an obstacle in the point cloud frame data;
and the track judging module is used for judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
Optionally, the storing the point cloud frame data into at least one scene set includes:
and storing, among the point cloud frame data, point cloud frames whose time interval is smaller than a preset threshold value into the same scene set.
Optionally, the obstacle includes a static obstacle, and the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle includes:
judging whether the static obstacle will invade the driving track of the unmanned vehicle based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the static obstacle will invade the driving track of the unmanned vehicle.
Optionally, the obstacle includes a dynamic obstacle, and the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle includes:
determining the shortest distance between the position of the dynamic obstacle and the driving track of the unmanned vehicle based on the position and speed information of the dynamic obstacle, and judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the dynamic obstacle will invade the driving track of the unmanned vehicle.
The present application further provides an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the above method by executing the executable instructions.
The present application also provides a machine-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the above-described method.
Through the above embodiments, the obstacle detection method can, based on continuous point cloud frame data, calculate the minimum distance from dynamic and static obstacles to the driving track of the unmanned vehicle and judge whether an obstacle will affect the driving track of the unmanned vehicle, thereby improving the accuracy of obstacle detection.
Drawings
FIG. 1 is a flow chart of a method of obstacle detection in accordance with an exemplary embodiment;
FIG. 2 is a block diagram of an obstacle detection device in accordance with an exemplary embodiment;
FIG. 3 is a hardware structure diagram of an electronic device in which an obstacle detection device is located, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present specification, the related 3D point cloud target detection technology involved in these embodiments is briefly described below.
IoU: the intersection-over-union (IoU, Intersection over Union) measures the degree of overlap of two regions: it is the ratio of the area of the overlapping part of the two regions to the area of their union (the overlapping part is counted only once). In a target detection task, if the IoU between the rectangle output by the model and the manually labeled rectangle is greater than a certain threshold (usually 0.5), the model is considered to have output a correct rectangle.
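To make the definition concrete, the following is a minimal sketch of the IoU computation for axis-aligned top-view rectangles; the function name and the (x_min, y_min, x_max, y_max) box format are illustrative assumptions, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Intersection rectangle; width and height are clamped at zero when disjoint.
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter  # overlapping part counted only once

    return inter / union if union > 0 else 0.0

# A detection is typically counted as correct when, e.g., iou(pred, gt) > 0.5.
```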
Point cloud frame data: the laser radar emits laser pulses outward; the pulses are reflected from the ground or object surfaces to form echoes, which return to the laser radar sensor, and the processed reflection data is called point cloud data.
Pose: and the transformation matrix corresponding to the relative relation between the pose of the unmanned vehicle and the global coordinate system is used for describing the position and the orientation of the unmanned vehicle.
Application scenario overview
Vehicle autonomous driving tasks place strong demands on environmental awareness and obstacle detection: only when the environment is perceived accurately can safer and more reasonable decisions be made. In such tasks, 3D point cloud data is generally collected to analyze and label the surrounding environment. A point cloud is a data set of points in some coordinate system, and each point can carry rich information including three-dimensional coordinates X, Y, Z, color, classification value, intensity value, and time. Point cloud data is mainly acquired by three-dimensional laser scanners; it can also be obtained through three-dimensional reconstruction from two-dimensional images, or computed from a three-dimensional model. In general, Light Detection and Ranging (LiDAR), a laser detection and measurement technology, is used to acquire 3D point cloud data, which is processed and applied as it is acquired. LiDAR data acquisition methods fall into three major categories: satellite-borne, airborne, and ground-based, and most point cloud data used for automatic driving is acquired by vehicle-mounted ground equipment. Unlike RGB images, LiDAR point clouds are 3D and unstructured. Given the real-time requirements of automatic vehicle driving tasks, many frames of point cloud data are acquired continuously during vehicle driving. A common method is to perform obstacle detection on each frame separately and then aggregate statistics over the detected obstacle information; this method cannot reflect the influence of obstacles on the driving condition of an unmanned vehicle over continuous time.
For example, the current mainstream approach compares the 3D bounding box predicted by the algorithm (the reference bounding box) against the manually labeled 3D bounding box (the ground-truth bounding box) using the intersection-over-union (IoU): if the IoU is large enough, the predicted box is judged a true positive (TP), and a mean Average Precision (mAP) is computed over the population to characterize the influence of obstacles on the driving condition of the unmanned vehicle.
Inventive concept
Firstly, the above performance evaluation index is designed around judging whether each individual 3D bounding box is qualified, so the overall index is obtained merely by counting and summing the correct detections of many independent bounding boxes at a given moment, without considering the overall detection performance over a continuous time period. In an automatic driving scene, the computer's perception of external obstacles, trajectory prediction, and vehicle control are all based on comprehensive judgments over continuous time and space, so the mainstream mAP-based target detection evaluation cannot truly reflect actual use conditions and cannot support quantitative evaluation of the model algorithm. Secondly, during driving of the unmanned vehicle, jitter of a detected obstacle can disturb path planning in some scenes, or can falsely produce an unrealistic predicted track that blocks the travel route, thereby causing the ego vehicle to brake. The detection effect is judged by a mAP computed from IoU, but IoU, which only compares the prediction against the ground-truth label, cannot feed back the jitter of the detection result, so the performance evaluation is distorted and cannot meet the driving requirements of the unmanned vehicle.
In view of this, the present specification aims to provide a technical solution that, based on continuous point cloud frame data, calculates the minimum distance from dynamic and static obstacles to the driving track of the unmanned vehicle and determines whether an obstacle will affect the driving track of the unmanned vehicle.
The core concept of the specification is as follows:
based on the continuous point cloud frame data, the minimum distance from the dynamic barrier and the static barrier to the driving track of the unmanned vehicle is calculated, whether the barrier influences the driving track of the unmanned vehicle is judged, and therefore the accuracy of barrier detection is improved.
The present application is described below with reference to specific embodiments and specific application scenarios.
Referring to FIG. 1, FIG. 1 is a flow chart of an obstacle detection method according to an exemplary embodiment; the method includes the following steps:
step 102: acquiring point cloud frame data, and storing the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data.
Step 104: and calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set, and determining the position and/or speed information of the obstacle in the point cloud frame data.
Step 106: judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
After the point cloud frame data is obtained, it can be segmented according to the timestamps carried by the point cloud frames, and frames with continuous timestamps are stored in the same scene set, thereby dividing the point cloud frame data into different scenes.
In an illustrated embodiment, the point cloud frame data with a time interval smaller than a preset threshold may be saved in the same scene set.
For example, at a frequency of 2 Hz the timestamp interval between two frames of data equals 500 ms, so 500 ms can be used as the criterion for scene division: point cloud frames whose time interval is less than 500 ms can be stored in the same scene set, and if the time interval between point cloud frames is detected to be greater than 500 ms, a new scene set can be established.
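A minimal sketch of this splitting rule follows; the frame representation (a dict with a 'timestamp_ms' field) and the handling of a gap exactly equal to the threshold are assumptions for illustration.

```python
def split_into_scene_sets(frames, gap_threshold_ms=500):
    """Group frames (sorted by 'timestamp_ms') into scene sets by timestamp gap."""
    scene_sets, current = [], []
    for frame in frames:
        # A gap larger than the threshold closes the current scene set.
        if current and frame["timestamp_ms"] - current[-1]["timestamp_ms"] > gap_threshold_ms:
            scene_sets.append(current)
            current = []
        current.append(frame)
    if current:
        scene_sets.append(current)
    return scene_sets
```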
After the scenes of the point cloud frame data are determined, the driving track of the unmanned vehicle in each scene set can be calculated. Specifically, the pose of the unmanned vehicle in each scene set can be projected into the global coordinate system to obtain the track route of the unmanned vehicle, and the running speed of the unmanned vehicle can be calculated from the time intervals. The detected obstacles can be classified by category and attribute, and different semantics can be handled in combination with the real driving requirements of the unmanned vehicle in a specific scene, for example dividing obstacles into static obstacles and dynamic obstacles. The specific scene may be a port scene, a parking scene, and the like; dynamic obstacles may include obstacles to autonomous driving such as pedestrians, truck tractors, and the like.
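The trajectory step can be sketched as below, assuming one 4x4 global pose matrix and one millisecond timestamp per frame (a data layout assumed for illustration, not prescribed by the patent): the translation column of each pose gives a track point, and speed follows from consecutive positions and time intervals.

```python
import numpy as np

def driving_track(poses):
    """Track polyline: the translation column of each 4x4 global pose matrix."""
    return np.array([T[:3, 3] for T in poses])

def running_speeds(track, timestamps_ms):
    """Per-interval speed in m/s from consecutive track points and timestamps."""
    step = np.linalg.norm(np.diff(track, axis=0), axis=1)       # metres per interval
    dt = np.diff(np.asarray(timestamps_ms, dtype=float)) / 1e3  # seconds per interval
    return step / dt
```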
In one illustrated embodiment, whether the static obstacle will invade the driving track of the unmanned vehicle can be judged based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle; if the shortest distance is smaller than a distance threshold value, it is determined that the static obstacle will invade the driving track of the unmanned vehicle.
For the same static obstacle across different frames of a temporally continuous scene set, the obstacle can be projected into the global coordinate system through a coordinate transformation, and in theory it should land at the same position. Therefore, after the static obstacle is projected into the global coordinate system, the area of the union of all detected bounding boxes can be compared, in top view, with the area of the actually labeled bounding box to obtain their intersection-over-union IoU, and whether the static obstacle influences the planning control of the unmanned vehicle is judged by calculating the shortest distance from the center point of the bounding box to the driving track of the unmanned vehicle.
For example, the slope may be calculated from the start point and end point of the driving track of the current scene set; using this slope, tangents along both edges of the driving track can be obtained to compute the unmanned driving safety area. For a static obstacle, all detected bounding boxes overlap in a certain area, so the overall area covered by all the superimposed bounding boxes is computed and compared with the area of the actually labeled bounding box to obtain their intersection-over-union IoU. Then the shortest distance between the actually labeled bounding box and the unmanned driving safety area is obtained and normalized (assuming a farthest distance of 100 m). Finally, the IoU from the second step is divided by the normalized distance from the third step, and the result is used as the index for judging whether the static obstacle will invade the driving track of the unmanned vehicle.
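A sketch of this static-obstacle index is given below, using shapely for the polygon operations. The library choice, function names, and corner-list box format are assumptions, and the safety area is simplified to the track polyline itself; the patent does not prescribe an implementation.

```python
from shapely.geometry import LineString, Polygon
from shapely.ops import unary_union

def static_obstacle_index(detected_boxes, gt_box, track_points, max_dist=100.0):
    """detected_boxes / gt_box: (x, y) corner lists in the global frame."""
    union_det = unary_union([Polygon(b) for b in detected_boxes])  # overlapped boxes
    gt = Polygon(gt_box)

    # Step 2: IoU of the merged detections against the labeled box.
    union_area = union_det.union(gt).area
    iou = union_det.intersection(gt).area / union_area if union_area > 0 else 0.0

    # Step 3: shortest distance to the track, normalized by an assumed 100 m maximum.
    norm_dist = min(gt.distance(LineString(track_points)) / max_dist, 1.0)

    # Step 4: index = IoU / normalized distance (infinite when the box touches the track).
    return iou / norm_dist if norm_dist > 0 else float("inf")
```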
In a temporally continuous scene set, the motion track of a dynamic obstacle objectively does not exhibit frequent jitter in direction or sudden changes in speed. Therefore, the local evaluation of the dynamic obstacle's bounding boxes can be fed back by calculating the speed mutation error and the angle jitter error between consecutive frames. Whether the predicted motion track of the dynamic obstacle's bounding box will invade the driving track of the unmanned vehicle is judged, and the degree of approach indicates whether it adversely affects the planning control of the unmanned vehicle. Finally, the two errors are multiplied by corresponding weights and summed to serve as the index for judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle.
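The two local error terms can be sketched as follows; the inputs (box centers, headings, millisecond timestamps) and the equal default weights are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def jitter_errors(centers, headings, timestamps_ms):
    """Speed-mutation and angle-jitter errors between consecutive bounding boxes."""
    centers = np.asarray(centers, dtype=float)
    dt = np.diff(np.asarray(timestamps_ms, dtype=float)) / 1e3

    speeds = np.linalg.norm(np.diff(centers, axis=0), axis=1) / dt
    speed_error = np.abs(np.diff(speeds))  # sudden changes in speed

    # Wrap heading differences into [-pi, pi] before measuring direction jitter.
    dh = np.diff(np.asarray(headings, dtype=float))
    angle_error = np.abs((dh + np.pi) % (2 * np.pi) - np.pi)

    return speed_error, angle_error

def dynamic_jitter_index(speed_error, angle_error, w_speed=0.5, w_angle=0.5):
    """Weighted sum of the mean errors, one possible combination of the two terms."""
    return w_speed * float(np.mean(speed_error)) + w_angle * float(np.mean(angle_error))
```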
For example, the slope may be calculated from the start point and end point of the driving track of the current scene set; using this slope, tangents along both edges of the driving track can be obtained to compute the unmanned driving safety area. For a dynamic obstacle, all detected bounding boxes move continuously in a certain area; the four vertices of all the bounding boxes are connected pairwise in ascending timestamp order to form the motion track area of the dynamic obstacle's bounding boxes. This motion track area is then compared with that of the actually labeled bounding boxes, obtained by the same method, and the intersection-over-union IoU of the two motion tracks is calculated. Then the shortest distance between the actually labeled bounding box and the unmanned driving safety area is obtained and normalized (assuming a farthest distance of 100 m). Finally, the IoU from the second step is divided by the normalized distance from the third step, and the result is used as the index for judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle.
Referring to FIG. 2, FIG. 2 shows an obstacle detection device according to an exemplary embodiment, which is applied to an unmanned vehicle and includes:
a data obtaining module 210, configured to obtain point cloud frame data, and store the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data;
a track calculation module 220, configured to calculate a driving track of an unmanned vehicle in the scene set based on point cloud frame data in the scene set, and determine a position and/or speed information of an obstacle in the point cloud frame data;
and a track judging module 230, configured to judge whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
Optionally, the storing the point cloud frame data into at least one scene set includes:
and storing, among the point cloud frame data, point cloud frames whose time interval is smaller than a preset threshold value into the same scene set.
Optionally, the obstacle includes a static obstacle, and the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle includes:
judging whether the static obstacle will invade the driving track of the unmanned vehicle based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the static obstacle will invade the driving track of the unmanned vehicle.
Optionally, the obstacle includes a dynamic obstacle, and the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle includes:
determining the shortest distance from the position of the dynamic obstacle to the driving track of the unmanned vehicle based on the position and speed information of the dynamic obstacle, and judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the dynamic obstacle will invade the driving track of the unmanned vehicle.
Referring to FIG. 3, FIG. 3 is a hardware structure diagram of the electronic device in which an obstacle detection apparatus is located, according to an exemplary embodiment. At the hardware level, the device includes a processor 302, an internal bus 304, a network interface 306, a memory 308, and a non-volatile memory 310, and may of course include other hardware required by the service. One or more embodiments of the present description may be implemented in software, for example by processor 302 reading a corresponding computer program from non-volatile memory 310 into memory 308 and then executing it. Of course, besides a software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are only illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the present specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include forms of volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus comprising the element.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein in one or more embodiments to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining," depending on the context.
The above description is only of preferred embodiments of the one or more embodiments of the present disclosure and is not intended to limit their scope; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the one or more embodiments of the present disclosure should be included within their scope of protection.

Claims (10)

1. An obstacle detection method, applied to an unmanned vehicle, the method comprising:
acquiring point cloud frame data, and storing the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data;
calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set, and determining the position and/or speed information of an obstacle in the point cloud frame data;
and judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
2. The method of claim 1, wherein saving the point cloud frame data into at least one scene set comprises:
and storing, among the point cloud frame data, point cloud frames whose time interval is smaller than a preset threshold value into the same scene set.
3. The method of claim 1, wherein the obstacle comprises a static obstacle, and wherein the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle comprises:
judging whether the static obstacle will invade the driving track of the unmanned vehicle based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the static obstacle will invade the driving track of the unmanned vehicle.
4. The method of claim 1, wherein the obstacle comprises a dynamic obstacle, and wherein the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle comprises:
determining the shortest distance from the position of the dynamic obstacle to the driving track of the unmanned vehicle based on the position and speed information of the dynamic obstacle, and judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the dynamic obstacle will invade the driving track of the unmanned vehicle.
5. An obstacle detection device, for use in an unmanned vehicle, the device comprising:
the data acquisition module is used for acquiring point cloud frame data and storing the continuous point cloud frame data into at least one scene set; the scene set comprises at least one frame of point cloud frame data;
the track calculation module is used for calculating the driving track of the unmanned vehicle in the scene set based on the point cloud frame data in the scene set and determining the position and/or speed information of an obstacle in the point cloud frame data;
and the track judging module is used for judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle.
6. The apparatus of claim 5, wherein saving the point cloud frame data into at least one scene set comprises:
and storing, among the point cloud frame data, point cloud frames whose time interval is smaller than a preset threshold value into the same scene set.
7. The apparatus of claim 5, wherein the obstacle comprises a static obstacle, and wherein the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle comprises:
judging whether the static obstacle will invade the driving track of the unmanned vehicle based on the shortest distance between the position of the static obstacle and the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the static obstacle will invade the driving track of the unmanned vehicle.
8. The apparatus of claim 5, wherein the obstacle comprises a dynamic obstacle, and wherein the judging whether the obstacle will invade the driving track of the unmanned vehicle based on the position and/or speed information of the obstacle comprises:
determining the shortest distance from the position of the dynamic obstacle to the driving track of the unmanned vehicle based on the position and speed information of the dynamic obstacle, and judging whether the dynamic obstacle will invade the driving track of the unmanned vehicle;
and if the shortest distance is smaller than a distance threshold value, determining that the dynamic obstacle will invade the driving track of the unmanned vehicle.
9. A machine-readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 4.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the method of any one of claims 1-4 by executing the executable instructions.
CN202211113079.XA 2022-09-14 2022-09-14 Method and device for detecting obstacle Pending CN115311646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211113079.XA CN115311646A (en) 2022-09-14 2022-09-14 Method and device for detecting obstacle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211113079.XA CN115311646A (en) 2022-09-14 2022-09-14 Method and device for detecting obstacle

Publications (1)

Publication Number Publication Date
CN115311646A true CN115311646A (en) 2022-11-08

Family

ID=83867504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211113079.XA Pending CN115311646A (en) 2022-09-14 2022-09-14 Method and device for detecting obstacle

Country Status (1)

Country Link
CN (1) CN115311646A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597417A (en) * 2023-05-16 2023-08-15 北京斯年智驾科技有限公司 Obstacle movement track determining method, device, equipment and storage medium
WO2024099113A1 (en) * 2022-11-10 2024-05-16 上海高仙自动化科技发展有限公司 Robot speed limiting method and apparatus, and electronic device


Similar Documents

Publication Publication Date Title
CN110658531B (en) Dynamic target tracking method for port automatic driving vehicle
US10671084B1 (en) Using obstacle clearance to measure precise lateral gap
KR101829556B1 (en) Lidar-based classification of object movement
CN115311646A (en) Method and device for detecting obstacle
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
KR20170106963A (en) Object detection using location data and scale space representations of image data
CN113432553B (en) Trailer pinch angle measuring method and device and vehicle
Perrollaz et al. A visibility-based approach for occupancy grid computation in disparity space
CN111986472B (en) Vehicle speed determining method and vehicle
CN112446227A (en) Object detection method, device and equipment
US11657572B2 (en) Systems and methods for map generation based on ray-casting and semantic class images
CN112744217B (en) Collision detection method, travel path recommendation device, and storage medium
Rato et al. LIDAR based detection of road boundaries using the density of accumulated point clouds and their gradients
CN111160132A (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
US11144747B2 (en) 3D data generating device, 3D data generating method, 3D data generating program, and computer-readable recording medium storing 3D data generating program
CN113111787A (en) Target detection method, device, equipment and storage medium
US20230123184A1 (en) Systems and methods for producing amodal cuboids
US20220221585A1 (en) Systems and methods for monitoring lidar sensor health
CN114648639A (en) Target vehicle detection method, system and device
CN111812602A (en) Method for evaluating performance of driving assistance system and storage medium
US20240004056A1 (en) High-resolution point cloud formation in automotive-grade radar signals
US20240062405A1 (en) Identifying stability of an object based on surface normal vectors
US20240062383A1 (en) Ground segmentation through super voxel
CN117935221A (en) Obstacle information determining method and device and perception model training method
Sandu et al. An Approach to Real-Time Collision Avoidance for Autonomous Vehicles Using LiDAR Point Clouds.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination