WO2021056499A1 - Data processing method and device, and movable platform

Data processing method and device, and movable platform

Info

Publication number
WO2021056499A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
state information
target
data
movable platform
Prior art date
Application number
PCT/CN2019/108847
Other languages
English (en)
Chinese (zh)
Inventor
吴显亮
陈进
赖镇洲
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/108847 (WO2021056499A1)
Priority to CN201980033428.7A (CN112154455B)
Publication of WO2021056499A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D 1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the embodiments of the present application relate to the field of automatic driving technology, and in particular, to a data processing method, equipment, and movable platform.
  • the autonomous vehicle recognizes, tracks, and fuses dynamic or static objects in its environment to obtain fusion data.
  • the fusion data includes the status information of the identified objects, and according to the state information of these objects, navigation planning is performed and the driving of the autonomous vehicle is controlled.
  • the state information of an object may include, for example, information such as object attributes, position, speed, orientation, acceleration, and so on. For example, if the autonomous vehicle estimates that there is a stopped vehicle ahead, it can perform a deceleration operation to ensure driving safety. In the process of obtaining the above-mentioned fusion data, there is always a certain probability of failure, which leads to inaccurate state information of the objects, which in turn affects the driving of the autonomous vehicle.
  • the embodiments of the present application provide a data processing method, equipment, and a movable platform, which are used to determine the accuracy of the state information of objects in the fusion data, so as to guide and control the movement of the movable platform and ensure the safety of the movable platform.
  • an embodiment of the present application provides a data processing method, including:
  • acquiring target sensor data and fusion data, where the fusion data is obtained based on data fusion of multiple sensors, the sensors are used to collect data on the environment in which the movable platform is located, the fusion data includes the status information of the detected target in the environment, and the target sensor data includes point cloud data;
  • an embodiment of the present application provides a data processing device, including: multiple sensors and a processor;
  • the processor is configured to acquire target sensor data and fusion data, where the fusion data is obtained by fusing the data of the multiple sensors, and the sensors are used to collect data on the environment in which the movable platform is located;
  • the fusion data includes the status information of the detected target in the environment, and the target sensor data includes point cloud data; the processor performs road surface object point cloud clustering processing on the point cloud data to obtain point cloud clusters, and determines the state information of the point cloud clusters; determines whether the state information of the point cloud clusters and the state information of the target meet the consistency condition; and, if not, determines, according to the observable range of the sensors in the environment in which the movable platform is located, the probability of false detection of the status information of the target, where the probability is used to indicate whether the movable platform performs an obstacle avoidance operation.
  • an embodiment of the present application provides a movable platform, including: a movable platform body and the data processing device according to the second aspect of the embodiments of the present application, wherein the data processing device is installed on the movable platform body.
  • an embodiment of the present application provides a readable storage medium on which a computer program is stored; when the computer program is executed, it implements the data processing method described in the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a program product, the program product includes a computer program, the computer program is stored in a readable storage medium, at least one processor of a movable platform can read the computer program from the readable storage medium, and the at least one processor executes the computer program to enable the movable platform to implement the data processing method described in the first aspect of the embodiments of the present application.
  • the data processing method, device, and movable platform acquire target sensor data and fusion data, perform point cloud clustering processing on the point cloud data of the target sensor data to obtain point cloud clusters, and determine the state information of the point cloud clusters; if the state information of the point cloud clusters and the state information of the target in the fusion data do not meet the consistency condition, the probability of false detection of the status information of the target is determined according to the observable range of the sensor in the environment where the movable platform is located, and the probability is used to indicate whether the movable platform performs an obstacle avoidance operation.
  • since the state information of the point cloud clusters is obtained from the point cloud data, the consistency check of the state information of the target in the fusion data through the state information of the point cloud clusters has higher accuracy. If the consistency check fails, the probability of false detection of the target's status information is obtained through the observable range of the sensor in the environment where the movable platform is located, which is more in line with objective reality, so as to more accurately guide whether the movable platform performs an obstacle avoidance operation and ensure the safety of the movable platform during movement.
  • Fig. 1 is a schematic architecture diagram of an autonomous driving vehicle according to an embodiment of the present application
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the application
  • FIG. 3 is a flowchart of a data processing method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of controlling the decelerating movement of a movable platform provided by an embodiment of the application
  • FIG. 5 is a correspondence diagram of the recommended comfortable dynamic object maintaining distance dist_dynamic at different speeds, the maneuvering static object maintaining distance dist_static, and the buffer distance dist_margin, provided by an embodiment of the application;
  • FIG. 6 is a schematic structural diagram of a data processing device provided by an embodiment of this application.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of the application.
  • the embodiments of the present application provide a data processing method, equipment, and a movable platform, where the movable platform may be an unmanned aerial vehicle, an unmanned vehicle, an unmanned boat, a robot, or an autonomous vehicle, etc.
  • Fig. 1 is a schematic architecture diagram of an autonomous driving vehicle according to an embodiment of the present application.
  • the autonomous vehicle 100 may include a sensing system 110, a control system 120, and a mechanical system 130.
  • the perception system 110 is used to measure the state information of the autonomous vehicle 100, that is, the perception data of the autonomous vehicle 100.
  • the perception data may represent the position information and/or state information of the autonomous vehicle 100, for example, position, angle, speed, acceleration, angular velocity, etc.
  • the perception system 110 may include, for example, at least one of sensors such as a visual sensor (for example, including multiple monocular or binocular vision devices), lidar, millimeter wave radar, inertial measurement unit (IMU), global navigation satellite system, gyroscope, ultrasonic sensor, electronic compass, and barometer.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • after the sensing system 110 obtains the sensing data, it can transmit the sensing data to the control system 120.
  • the control system 120 is used to make decisions on how to control the autonomous vehicle 100 based on the perception data, for example: at what speed to travel, with what braking acceleration to brake, whether to change lanes, or whether to turn left/right, etc.
  • the control system 120 may include, for example, a computing platform, such as a vehicle-mounted supercomputing platform, or at least one device having processing functions such as a central processing unit and a distributed processing unit.
  • the control system 120 may also include a communication link for various data transmission on the vehicle.
  • the control system 120 may output one or more control commands to the mechanical system 130 according to the determined decision.
  • the mechanical system 130 is used to control the autonomous vehicle 100 in response to one or more control commands from the control system 120 to complete the above-mentioned decision.
  • the mechanical system 130 can drive the wheels of the autonomous vehicle 100 to rotate so as to provide power for the driving of the autonomous vehicle 100, wherein the rotation speed of the wheels can affect the speed of the vehicle.
  • the mechanical system 130 may include, for example, at least one of a mechanical body, an engine/motor, a wire-controlled (drive-by-wire) system, and the like.
  • FIG 2 is a schematic diagram of an application scenario provided by an embodiment of the application.
  • as shown in FIG. 2, the autonomous vehicle drives on the road and collects perception data of the current environment (for example, through the aforementioned perception system 110); the perception data can include point cloud data, image data, radar data, etc. Fusion data is then obtained based on the perception data, and how to process the fusion data after obtaining it can be handled with the data processing method provided by the embodiments of the present application.
  • the embodiments of the present application can be applied to dynamic scenes in which the movable platform is moving: the dynamic or static objects in the environment where the movable platform is located are identified, tracked, and fused to obtain state estimates of these objects, thereby guiding the relevant navigation planning and control tasks; however, the identification, tracking, and fusion of these objects all have a certain failure probability, that is, correct state estimation information cannot always be obtained.
  • the solutions of the embodiments of this application can be used to identify these failure modes, so as to actively evade them and improve the safety performance of the movable platform.
  • the estimation failure of the object state can be roughly divided into false detection, missed detection, inaccurate state estimation (for example, inaccurate position, speed, heading, or category information of a vehicle), and inaccurate association information (for example, whether objects at different times in a time series are the same object).
  • for false detections and missed detections, positive and negative types are often defined, corresponding to the presence or absence of a detection respectively; false positives (False Positive) and false negatives (False Negative) correspond to false detections and missed detections respectively.
  • the state estimation of an object is usually divided into several steps: first, the original sensor data is processed to obtain the basic data for object state estimation; these processing methods can include image processing, point cloud processing, etc. Next, object detection is performed, for example with a trained deep neural network, to obtain more accurate detection results. Then, the detected objects are associated over the time series, so that detection results of the same object at different times are linked together; this association process is usually combined with tracking algorithms to obtain detection results that are more stable over time. If there are multiple observations of each object, for example from overlapping camera fields of view or from data collected by multiple different types of sensors, these observations need to be combined to obtain a final object state estimate, which involves multi-source information fusion technology.
  • even if each module controls its own failure rate in time, there is no guarantee that the failure rate of the entire system can be greatly reduced. Therefore, for the ultimate safety goal, the system should be able to automatically identify some common failure modes and actively avoid them, instead of relying only on the failure detection inside each module.
  • in the current system construction of mobile platforms (such as drones and unmanned vehicles), these safety indicators are often assigned to each module, and each module detects and avoids failures itself.
  • for example, in the object detection module, there are related technical methods that can be used to reduce false detections and missed detections, such as improving sensor accuracy, setting more detailed sampling rules, and so on.
  • FIG. 3 is a flowchart of a data processing method provided by an embodiment of this application. As shown in FIG. 3, the method of this embodiment may include:
  • S301 Acquire target sensor data and fusion data, where the fusion data is obtained based on data fusion of multiple sensors of the movable platform, and the sensors are used to collect data on the environment in which the movable platform is located.
  • for example, if the sensor is an image sensor, the image sensor collects image data of the environment where the movable platform is located; if the sensor is a laser sensor, the laser sensor collects point cloud data of the environment where the movable platform is located.
  • the above-mentioned fusion data includes the status information of the detected target in the environment.
  • for example, the above-mentioned fusion data may include status information of other vehicles that have been detected in the environment.
  • how to obtain the fusion data according to the data fusion of multiple sensors can refer to the description in the related technology, which will not be repeated here.
  • this embodiment also acquires target sensor data.
  • the target sensor may be, for example, a sensor among the above-mentioned multiple sensors.
  • the target sensor data includes point cloud data, and the target sensor may be, for example, a laser sensor.
  • the state information of the aforementioned target may include any one or more of the following parameter information: object attributes, position, orientation, speed, acceleration.
  • the speed may include at least one of the following: linear velocity and angular velocity.
  • the object attribute can be, for example, a vehicle, or a person, and so on.
  • S302 Perform road surface object point cloud clustering processing on the point cloud data to obtain a point cloud cluster, and determine state information of the point cloud cluster.
  • the point cloud data in the above-mentioned target sensor data is subjected to road surface object point cloud clustering processing to obtain point cloud clusters, and the state information of each obtained point cloud cluster is determined.
  • the state information of the point cloud cluster may include any one or more of the following parameter information: object attributes, position, orientation, velocity, acceleration.
  • S303 Determine whether the state information of the point cloud cluster and the state information of the target in the fusion data meet the consistency condition.
  • after obtaining the state information of the point cloud clusters, it is determined whether the obtained state information of the point cloud clusters and the state information of the target in the fusion data meet the consistency condition. If they meet the consistency condition, it means that the detection of the state information of the target in the fusion data is correct. If they do not meet the consistency condition, it means that the state information of the target in the fusion data may have been detected incorrectly.
  • S304 If not, determine, according to the observable range of the sensor in the environment where the movable platform is located, the probability of false detection of the status information of the target in the fusion data; the probability is used to indicate whether the movable platform performs obstacle avoidance operations.
  • in this embodiment, the target sensor data and the fusion data are acquired, the point cloud data of the target sensor data is subjected to road surface object point cloud clustering to obtain point cloud clusters, and the state information of the point cloud clusters is determined. If the state information of the point cloud clusters and the state information of the target in the fusion data do not meet the consistency condition, the probability of false detection of the target's status information is determined according to the observable range of the sensor in the environment where the movable platform is located, and the probability is used to indicate whether the movable platform performs an obstacle avoidance operation. Since the state information of the point cloud clusters is obtained from the point cloud data, the consistency check of the state information of the target in the fusion data through the state information of the point cloud clusters has higher accuracy.
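  • as an illustration, the following is a minimal Python sketch of the S301-S304 flow described above; the 1-D State type, the position-only consistency test, and the thresholds POS_TOL and P_THRESHOLD are illustrative assumptions rather than the exact checks of this application (later sections describe richer consistency conditions):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class State:
    x: float  # position along the road (m); 1-D for brevity
    v: float  # speed (m/s)

POS_TOL = 2.0      # assumed tolerance for the consistency condition (m)
P_THRESHOLD = 0.5  # assumed preset probability

def is_consistent(target: State, cluster: State) -> bool:
    """S303: the fused target and a point cloud cluster agree on position."""
    return abs(target.x - cluster.x) < POS_TOL

def needs_avoidance(target: State, clusters: List[State], p_false: float) -> bool:
    """S303/S304: if no cluster matches the fused target, fall back to the
    false-detection probability to decide whether to avoid."""
    if any(is_consistent(target, c) for c in clusters):
        return False          # consistency condition met: detection is correct
    return p_false > P_THRESHOLD

# Example: a fused target with no nearby cluster, false-detection probability 0.6.
print(needs_avoidance(State(30.0, 10.0), [State(80.0, 0.0)], p_false=0.6))  # True
```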
  • the environment is further classified into multiple environmental categories according to the observable range of the sensor in the environment; for example, the environment may be classified into multiple environmental categories according to the observable range of at least one of the multiple sensors in the environment.
  • the environment may be divided into multiple environmental categories according to the observable range of the target sensor (such as a laser sensor) in the environment.
  • the multiple environmental categories are, for example, urban roads surrounded by buildings, mountainous highways, highways with flat terrain, highways with tunnels, etc.; the present embodiment is not limited to these.
  • S304 may include S3041-S3043:
  • S3041 Obtain the environmental probability information that the environment where the movable platform is currently located belongs to each of the multiple environment categories obtained by the above division.
  • S3042 Obtain the prior probability information of the sensor's false detection in each of the above-mentioned environmental categories. S3043 According to the environmental probability information that the environment in which the movable platform is located belongs to each of the above environmental categories and the prior probability information of the sensor (for example, the target sensor) in each of the above environmental categories, determine the probability of false detection of the state information of the target in the fusion data.
  • for example, the environment is divided into N environmental categories: the first environmental category, the second environmental category, ..., the Nth environmental category.
  • the environmental probability information that the environment where the movable platform is located belongs to the first environmental category is the probability P(A1), the environmental probability information for the second environmental category is the probability P(A2), ..., and the environmental probability information for the Nth environmental category is the probability P(AN).
  • the prior probability information of the sensor's false detection in an environment of the first environmental category is the probability P(B1), the prior probability information for the second environmental category is the probability P(B2), ..., and the prior probability information for the Nth environmental category is the probability P(BN).
  • then the probability of false detection of the status information of the target is: P(A1)*P(B1)+P(A2)*P(B2)+...+P(AN)*P(BN). Based on this probability, the possibility of false detection of the status information of the target in the fusion data can be evaluated more accurately.
  • a possible implementation manner of the foregoing S3041 is: according to the point cloud distribution density in the point cloud data, determine the environmental probability information that the environment in which the movable platform is located belongs to each environment category. For example: if the point cloud distribution density is dense, the environment where the movable platform is located is more likely to be an urban road with buildings; if the point cloud distribution density is sparse, the environment where the movable platform is located is more likely to be a highway with flat terrain.
  • in this way, the accuracy of the obtained environmental probability information that the environment in which the movable platform is located belongs to each environmental category is higher.
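  • a minimal sketch of S3041-S3043 follows; the environment categories, density breakpoints, and prior values are illustrative assumptions (in practice the priors P(Bi) would come from offline statistics):

```python
from typing import Dict

# Assumed prior probabilities of sensor false detection per environment
# category (the P(Bi) values in the formula above).
FALSE_DETECTION_PRIOR: Dict[str, float] = {
    "urban_road": 0.20,       # buildings around
    "mountain_highway": 0.10,
    "flat_highway": 0.02,
}

def environment_probabilities(points_per_m2: float) -> Dict[str, float]:
    """S3041: map point cloud density to P(Ai) per category; the density
    breakpoints here are illustrative assumptions only."""
    if points_per_m2 > 50.0:  # dense returns: likely built-up surroundings
        return {"urban_road": 0.7, "mountain_highway": 0.2, "flat_highway": 0.1}
    if points_per_m2 > 10.0:
        return {"urban_road": 0.2, "mountain_highway": 0.6, "flat_highway": 0.2}
    return {"urban_road": 0.05, "mountain_highway": 0.15, "flat_highway": 0.8}

def false_detection_probability(points_per_m2: float) -> float:
    """S3043: P = P(A1)P(B1) + ... + P(AN)P(BN), as in the text above."""
    p_env = environment_probabilities(points_per_m2)
    return sum(p_env[k] * FALSE_DETECTION_PRIOR[k] for k in p_env)

print(false_detection_probability(60.0))  # 0.7*0.2 + 0.2*0.1 + 0.1*0.02 = 0.162
```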
  • in some embodiments, after performing the above S304, it is also determined whether the probability determined in S304 is greater than a preset probability. If the probability is greater than the preset probability, it means that a false detection of the status information of the target is likely to have occurred, and it indicates that the movable platform needs to perform obstacle avoidance operations. If the probability is less than or equal to the preset probability, it means that a false detection of the status information of the target is unlikely to have occurred, and it indicates that the movable platform does not need to perform obstacle avoidance operations.
  • controlling the movable platform to perform obstacle avoidance operations may be, for example, controlling the movable platform to decelerate, controlling the movable platform to steer (for example, change its orientation), or controlling the movable platform to decelerate and steer, so that the movable platform avoids the target through these operations and the safety of the movable platform is ensured.
  • a possible implementation manner of controlling the decelerated movement of the movable platform may be: calculating the first distance that the movable platform moves when it travels from its current position to the first position where the point cloud cluster is currently located; predicting, according to the motion parameters of the point cloud cluster and the motion parameters of the movable platform, the second position where the movement trajectory of the movable platform intersects the movement trajectory of the point cloud cluster; calculating the second distance that the movable platform moves when it travels to the second position; and, if the distance difference of the second distance minus the first distance is a positive number, controlling the movable platform to perform a decelerating movement on the movement trajectory of the distance difference.
  • as shown in FIG. 4, the current position of the movable platform is O, and the current position of the point cloud cluster is called the first position (that is, C). The distance that the movable platform moves from O to the position of the point cloud cluster is calculated as the first distance d1.
  • FIG. 4 takes the point cloud cluster and the movable platform moving linearly in the same direction as an example. The predicted position where the motion trajectory of the movable platform intersects that of the point cloud cluster is called the second position (that is, D); that is, assuming the point cloud cluster and the movable platform continue to move according to their corresponding motion parameters, the estimated position where the target corresponding to the point cloud cluster would collide with the movable platform is the second position D.
  • a possible implementation manner of controlling the movable platform to perform decelerating motion on the motion trajectory of the distance difference may be: calculating the first acceleration needed for the movable platform to decelerate from the current position to the first position such that its velocity at the first position is zero; and controlling the movable platform to perform decelerating motion at a second acceleration on the motion trajectory of the distance difference, where the absolute value of the second acceleration is smaller than the absolute value of the first acceleration.
  • otherwise, the movable platform may be controlled to decelerate at a third acceleration, where the absolute value of the third acceleration is greater than the absolute value of the second acceleration; the third acceleration may, for example, be equal to the first acceleration.
  • for example, the movable platform decelerates at the second acceleration over the trajectory of Δd; if at this time the new Δd is still greater than 0, the movable platform can continue to perform an even gentler deceleration.
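  • a minimal sketch of this deceleration logic for the same-direction linear case of FIG. 4 follows; the function name and the factor used for the second acceleration are illustrative assumptions:

```python
def plan_deceleration(d1: float, v_platform: float, v_cluster: float):
    """Sketch of the deceleration logic above for same-direction linear motion.

    d1: current gap to the point cloud cluster (first distance, m, assumed > 0)
    Returns (delta_d, a_cmd): braking window and commanded acceleration.
    """
    # First acceleration: brake to zero speed exactly at the cluster position C
    # (from v^2 = 2*|a1|*d1).
    a1 = -(v_platform ** 2) / (2.0 * d1)

    if v_platform <= v_cluster:
        return float("inf"), 0.0      # never catches up: no braking needed

    # Second position D: where the two constant-speed trajectories meet.
    t_meet = d1 / (v_platform - v_cluster)
    d2 = v_platform * t_meet          # second distance travelled to reach D
    delta_d = d2 - d1                 # positive when the cluster is moving ahead

    if delta_d > 0.0:
        a2 = 0.5 * a1                 # gentler braking: |a2| < |a1| (assumed factor)
        return delta_d, a2
    return delta_d, a1                # third acceleration, e.g. equal to the first

# Example: 40 m gap, platform at 20 m/s, cluster moving ahead at 10 m/s.
print(plan_deceleration(40.0, 20.0, 10.0))  # (40.0, -2.5): 40 m window, -2.5 m/s^2
```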
  • the case of missed detection is not limited to this case. A missed object can still be detected in the point cloud data with a high probability; however, as a bare point cloud it lacks the necessary status information and cannot express dynamics.
  • if an object with a speed estimate is missed, it is likely to degenerate into a point cloud without speed. Therefore, when a movable platform (such as an autonomous vehicle) follows a car or predicts a lane change, a safe following distance needs to be maintained, in which:
  • v_r and v_f are the instantaneous speeds of the following car (that is, the movable platform) and the preceding car respectively; a_r and a_f are the instantaneous accelerations of the following car and the preceding car; a_brake is the braking acceleration that the following car can accept; and t_resp is the reaction time of the following car.
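  • the distance formula itself is not preserved in this text; one plausible form consistent with these variable definitions (an assumption modeled on common responsibility-sensitive safe-distance formulations, not necessarily the exact formula of this application) is:

    d_safe = v_r*t_resp + (1/2)*a_r*t_resp^2 + (v_r + a_r*t_resp)^2 / (2*a_brake) - v_f^2 / (2*a_f)

  • here the first two terms cover the distance travelled during the reaction time, the third term the following car's braking distance at a_brake, and the last term the preceding car's remaining braking distance.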
  • FIG. 5 shows the correspondence between the recommended comfortable dynamic object maintaining distance dist_dynamic at different speeds, the maneuvering static object maintaining distance dist_static, and the buffer distance dist_margin. If a dynamic object that has degenerated into a point cloud is driving forward, the distance from it in the next frame to the actual object will not shorten, that is, it will not exceed the buffer distance dist_margin; with a high probability, only gentle braking is needed: rather than entering emergency braking, comfortable braking suffices, which avoids the hazard of dynamic objects degenerating into point clouds while ensuring user experience.
  • dist_dynamic is often greater than dist_static.
  • the difference between these two distances is the buffer distance dist_margin; that is, the vehicle can first perform comfortable braking within the buffer distance dist_margin. By the time the buffer distance dist_margin would be exceeded, under maneuver braking or even emergency braking the dynamic object has already moved forward at the next moment, so the buffer distance dist_margin is updated and becomes longer at the next moment; as a result, with a high probability the following car will not exceed the buffer distance, thereby improving comfort while ensuring safety.
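  • the braking policy described above can be sketched as follows; the mode names and the thresholding scheme are illustrative assumptions:

```python
def braking_mode(gap: float, dist_dynamic: float, dist_static: float) -> str:
    """Pick a braking mode from the keeping distances discussed above,
    where dist_margin = dist_dynamic - dist_static."""
    dist_margin = dist_dynamic - dist_static      # buffer distance
    if gap >= dist_dynamic:
        return "no_braking"                       # comfortable distance kept
    if gap >= dist_dynamic - dist_margin:         # i.e. gap >= dist_static
        return "comfortable_braking"              # brake gently inside the buffer
    return "emergency_braking"                    # closer than dist_static

# Example: dist_dynamic = 40 m and dist_static = 25 m.
print(braking_mode(35.0, 40.0, 25.0))             # comfortable_braking
print(braking_mode(20.0, 40.0, 25.0))             # emergency_braking
```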
  • the consistency conditions described in the foregoing embodiments may include at least one of the following items 1)-3):
  • 1) There are point cloud clusters corresponding to the target object in the point cloud data. That is, it is judged whether there is a point cloud cluster corresponding to the target object among the point cloud clusters obtained by performing road surface object point cloud clustering processing on the point cloud data (that is, whether there is a false detection). If it exists, it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition (that is, there is no false detection); if it does not exist, it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition (that is, there is a false detection).
  • for false detections, it is necessary to consider the environment to distinguish whether the false detection is a "false detection generated out of thin air". For example, on a flat highway without any occlusion, if a detection suddenly appears, it is very likely to be a true false detection; if it is not a false detection, there must be a reason why the object was not visible for a long time. If there is an intersection or other occlusion nearby, the sudden appearance can be attributed to the object emerging from the intersection or another visual blind spot; if there is no visual blind spot, it is considered to be caused by a previous missed detection. If the possibility of a missed detection is very small, it can be considered with a high probability that the false detection really is a false detection.
  • define E_FP as the event of a false detection, E_TP as the event of a true detection, E_Open as the event that the position where the object now appears previously belonged to the visually observable range, and E_N as the event that the object was not detected for a period of time before. Then P(E_TP | E_N, E_Open) = 1 - P(E_FP | E_N, E_Open).
  • 2) The state information of the target corresponding to every point cloud cluster exists in the fusion data. That is, it is determined whether the state information of the target object corresponding to each point cloud cluster exists in the fusion data. If the state information of the target object corresponding to at least one point cloud cluster does not exist in the fusion data, it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition (that is, there is a missed detection); if the state information of the target object corresponding to every point cloud cluster exists in the fusion data, it is determined that the state information of the point cloud clusters and the state information of the target objects meet the consistency condition (that is, there is no missed detection).
  • 3) The parameter information of the target corresponding to the point cloud cluster is consistent with the parameter information of the point cloud cluster. For example: determine whether at least one item of parameter information among the position, orientation, velocity, and acceleration of each point cloud cluster is consistent with the corresponding parameter information of the target corresponding to the point cloud cluster in the fusion data. If all the compared parameter information is consistent, it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition; if at least one item of the compared parameter information is inconsistent, it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition.
  • in some embodiments, if the consistency condition is not met, the parameter information of the point cloud cluster corresponding to the target object is used as the parameter information of the target object.
  • in some embodiments, the target sensor data further includes image data, and the target sensor further includes an image sensor.
  • in this case, a possible implementation of determining whether there is a point cloud cluster corresponding to the target object in the point cloud data may be: determining, according to the intensity of the pixels in the image data, whether there is a point cloud cluster corresponding to the target object in the point cloud data. If it does not exist, it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition; if it exists, it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition.
  • using the intensity of the pixels in the image data to assist in judging whether there are point cloud clusters corresponding to the target in the point cloud data can improve the accuracy of the judgment result, especially when the point cloud distribution density in the point cloud data is relatively sparse.
  • in addition, the image data can also be used to assist in determining whether there are point cloud clusters corresponding to black objects in the point cloud data.
  • in some embodiments, the point cloud cluster is obtained by clustering laser point cloud points that do not conform to a plane or do not conform to a target curved surface, where the target curved surface is a curved surface with a curvature lower than a preset curvature. This ensures that the obtained point cloud clusters correspond to target objects above the road surface. Since target objects above the road surface may pose safety hazards to the movable platform, the focus is placed on these point cloud clusters.
  • in other words, the point cloud data of these point cloud clusters is the useful point cloud data, and the other point cloud data does not need to be used to determine whether the consistency condition is met, which improves processing efficiency.
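  • a toy sketch of this clustering idea follows; it approximates the road plane with a simple height threshold instead of a plane or curvature fit, and all thresholds (ground_z, radius, min_pts) are illustrative assumptions:

```python
import numpy as np

def cluster_road_objects(points: np.ndarray, ground_z: float = 0.2,
                         radius: float = 0.8, min_pts: int = 3):
    """Discard points that lie on the road surface (approximated here by a z
    threshold), then group the remaining points by Euclidean proximity."""
    above = points[points[:, 2] > ground_z]       # points above the road plane
    n = len(above)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        visited[seed] = True
        queue, members = [seed], []
        while queue:                              # region growing (BFS)
            i = queue.pop()
            members.append(i)
            near = np.linalg.norm(above - above[i], axis=1) < radius
            for j in np.flatnonzero(near & ~visited):
                visited[j] = True
                queue.append(j)
        if len(members) >= min_pts:               # ignore tiny clusters (noise)
            clusters.append(above[members])
    return clusters

# Example: two small obstacles above a flat road at z ~ 0.
pts = np.array([[0, 0, 0.0], [1, 0, 0.05], [5, 0, 1.0], [5.2, 0, 1.1],
                [5.1, 0.1, 0.9], [20, 3, 1.5], [20.3, 3, 1.4], [20.1, 3.2, 1.6]])
print([len(c) for c in cluster_road_objects(pts)])  # [3, 3]
```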
  • in some embodiments, the fusion data includes the position of the target object, and by evaluating the position of the target object, it is determined whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition.
  • accordingly, a possible implementation of judging whether the state information of the point cloud cluster and the state information of the target meet the consistency condition may be: judging whether the position of the target in the fusion data is consistent with the position of the point cloud cluster corresponding to the target, where the position of the point cloud cluster is determined from the point cloud data. If they are consistent, it means that the position of the target in the fusion data is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target meet the consistency condition.
  • if they are inconsistent, it means that the position of the target in the fusion data is not accurate, and it is determined that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
  • the current position of the point cloud cluster is determined by the point cloud data, which can truly reflect the current actual position of the target of the point cloud cluster, so the accuracy of judging whether the consistency condition is met is improved.
  • the fusion data includes the speed of the target, and the speed of the target is evaluated to determine whether the state information of the point cloud cluster and the state information of the target meet the consistency condition.
  • a possible implementation method for judging whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition may be: determining the current predicted position of the target object according to the historical speed parameter corresponding to the target object in the fusion data, and then determining whether the current position of the point cloud cluster corresponding to the target object is consistent with the predicted position.
  • the current position of the point cloud cluster can be determined based on the current point cloud data. If they are consistent, it means that the speed of the target object is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition; if they are inconsistent, it means that the speed of the target object in the fusion data may be inaccurate, and it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition.
  • the current position of the point cloud cluster is determined by the point cloud data, which can truly reflect the current actual position of the target of the point cloud cluster, so the accuracy of judging whether the consistency condition is met is improved.
  • another possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition may be: acquiring the position of the point cloud cluster corresponding to the target object in a first frame and the position of the point cloud cluster in a second frame, where the time of the second frame is later than the time of the first frame, the position of the point cloud cluster in the first frame is determined based on the point cloud data of the first frame, and the position of the point cloud cluster in the second frame is determined based on the point cloud data of the second frame; then, according to the position of the point cloud cluster in the first frame and its position in the second frame, determining the predicted speed of the point cloud cluster, where the predicted speed refers to the speed at which the point cloud cluster is predicted to move from the position of the first frame to the position of the second frame within the time period from the first frame to the second frame; the predicted speed is then compared with the speed of the target in the fusion data.
  • the predicted speed is determined according to the positions of the point cloud cluster in the different first and second frames, which can truly reflect the actual speed of the target object of the point cloud cluster, so it improves the accuracy of judging whether the consistency condition is met.
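  • a minimal sketch of the two speed checks above (predicting the current position from the fused target's historical speed, and estimating a cluster's speed from its positions in two frames) follows; the tolerance POS_TOL and all numbers are illustrative assumptions:

```python
import numpy as np

POS_TOL = 1.5  # assumed position tolerance (m) for the consistency judgment

def cluster_velocity(pos_frame1: np.ndarray, pos_frame2: np.ndarray,
                     dt: float) -> np.ndarray:
    """Predicted speed of a point cloud cluster from its positions in two frames."""
    return (pos_frame2 - pos_frame1) / dt

def speed_is_consistent(target_pos: np.ndarray, target_vel: np.ndarray,
                        cluster_pos_now: np.ndarray, dt: float) -> bool:
    """The position predicted from the fused target's historical speed should
    match the cluster position measured from the current point cloud data."""
    predicted = target_pos + target_vel * dt      # current predicted position
    return float(np.linalg.norm(predicted - cluster_pos_now)) < POS_TOL

# Example: the fused target reports 10 m/s, but over 1 s the cluster moved 0.4 m.
p1, p2 = np.array([50.0, 0.0]), np.array([50.4, 0.0])
print(cluster_velocity(p1, p2, dt=1.0))                             # [0.4 0. ]
print(speed_is_consistent(p1, np.array([10.0, 0.0]), p2, dt=1.0))   # False
```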
  • in some embodiments, the target sensor data also includes radar data, and the target sensor also includes a radar.
  • another possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target meet the consistency condition may be: determining the predicted speed of the point cloud cluster corresponding to the target based on the radar data, where the predicted speed can be obtained, for example, by performing differential processing on the radar data, and then judging whether the predicted speed is consistent with the speed of the target in the fusion data. If they are consistent, it means that the speed of the target object is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition; if they are inconsistent, it is determined that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
  • the predicted speed is determined based on radar data, which can more accurately reflect the actual speed of the target of the point cloud cluster, so the accuracy of judging whether the consistency condition is met is further improved.
  • the radar data is, for example, millimeter wave radar data. It should be noted that the sensor data used to obtain the speed is not limited to radar data, and may also be other sensor data.
  • the fusion data includes the acceleration of the target, and the acceleration of the target is evaluated to determine whether the state information of the point cloud cluster and the state information of the target meet the consistency condition.
  • accordingly, another possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition may be: determining the predicted acceleration of the point cloud cluster corresponding to the target object according to the point cloud data.
  • the predicted acceleration can be obtained, for example, by performing differential processing on the point cloud data; then it is determined whether the predicted acceleration is consistent with the acceleration of the target in the fusion data. If they are consistent, it means that the acceleration of the target is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition; if they are inconsistent, it means that the acceleration of the target object in the fusion data may be inaccurate, and it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition.
  • the predicted acceleration is determined based on the point cloud data, which can more accurately reflect the actual acceleration of the target object of the point cloud cluster, so the accuracy of judging whether the consistency condition is met is improved.
  • in some embodiments, the target sensor data also includes radar data, and the target sensor also includes a radar.
  • another possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target meet the consistency condition may be: determining the predicted acceleration of the point cloud cluster corresponding to the target object based on the radar data, where the predicted acceleration can be obtained, for example, by performing second-order differential processing on the radar data, and then determining whether the predicted acceleration is consistent with the acceleration of the target in the fusion data. If they are consistent, it means that the acceleration of the target object is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target object meet the consistency condition; if they are inconsistent, it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition.
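  • the differencing described above, first-order for the speed check and second-order for the acceleration check, can be sketched as follows; the sampling values are illustrative assumptions:

```python
def finite_differences(ranges, dt):
    """Differential processing as described above: first-order differences of
    successive range measurements give a predicted speed, and second-order
    differences give a predicted acceleration."""
    speeds = [(r2 - r1) / dt for r1, r2 in zip(ranges, ranges[1:])]
    accels = [(v2 - v1) / dt for v1, v2 in zip(speeds, speeds[1:])]
    return speeds, accels

# Example: ranges to a target sampled at 10 Hz (values are illustrative).
v, a = finite_differences([30.0, 29.0, 28.2, 27.6], dt=0.1)
print(v)  # approximately [-10.0, -8.0, -6.0] m/s (range rate)
print(a)  # approximately [20.0, 20.0] m/s^2 (the closing rate is decreasing)
```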
  • the fusion data includes the orientation of the target, and the orientation of the target is evaluated to determine whether the status information of the point cloud cluster and the status information of the target meet the consistency condition.
  • accordingly, a possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition may be: judging whether the orientation of the target object in the fusion data is consistent with the orientation of the point cloud cluster corresponding to the target object, where the orientation of the point cloud cluster is determined from the distribution of the points in the point cloud cluster. If they are consistent, it means that the orientation of the target in the fusion data is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target meet the consistency condition; if they are inconsistent, it means that the orientation of the target in the fusion data may be inaccurate, and it is determined that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition. Since the orientation of the point cloud cluster is determined from the point cloud data, it can truly reflect the actual orientation of the target corresponding to the point cloud cluster, so the accuracy of judging whether the consistency condition is met is improved.
  • another possible implementation manner for judging whether the state information of the point cloud cluster and the state information of the target object meet the consistency condition may be: obtaining the speed of the point cloud cluster corresponding to the target object, where the speed of the point cloud cluster can be determined according to the point cloud data or the radar data; determining the orientation of the point cloud cluster according to the direction of its speed; and then judging whether the orientation of the target in the fusion data is consistent with the orientation of the point cloud cluster corresponding to the target. If they are consistent, it means that the orientation of the target in the fusion data is accurate, and it is determined that the state information of the point cloud cluster and the state information of the target meet the consistency condition.
  • if they are inconsistent, it means that the orientation of the target in the fusion data may be inaccurate, and it is determined that the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition.
  • the speed direction of the point cloud cluster can also truly reflect the actual orientation of the target corresponding to the point cloud cluster, so the accuracy of judging whether the consistency condition is met is improved.
  • in some embodiments, if the object attribute in the status information of the target in the fusion data does not match the other parameter information, it can be determined that a false detection of the status information of the target has occurred, for example: the object attribute of the target is pedestrian but the moving speed of the target is 120 km/h, or the object attribute of the target is vehicle but the height of the target is 5 m. If the target in the fusion data is back-projected onto image data with scene segmentation and is inconsistent with the corresponding pixel labels, it can also be determined that a false detection of the status information of the target has occurred. If the bounding box used to identify a vehicle lies inside another static object, it can likewise be determined that a false detection of the status information of the target has occurred.
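  • a minimal sketch of such attribute plausibility checks follows; the bounds are illustrative assumptions, chosen so that the 120 km/h pedestrian and 5 m vehicle examples above fall outside them:

```python
# Assumed plausibility bounds per object attribute.
MAX_SPEED_KMH = {"pedestrian": 15.0, "vehicle": 250.0}
MAX_HEIGHT_M = {"pedestrian": 2.5, "vehicle": 4.5}

def attribute_mismatch(attr: str, speed_kmh: float, height_m: float) -> bool:
    """Return True when the target's parameters contradict its attribute,
    indicating a false detection of its status information."""
    return (speed_kmh > MAX_SPEED_KMH.get(attr, float("inf"))
            or height_m > MAX_HEIGHT_M.get(attr, float("inf")))

print(attribute_mismatch("pedestrian", speed_kmh=120.0, height_m=1.7))  # True
print(attribute_mismatch("vehicle", speed_kmh=60.0, height_m=5.0))      # True
print(attribute_mismatch("vehicle", speed_kmh=60.0, height_m=1.5))      # False
```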
  • whether the timing correlation meets the consistency condition: for a single object, this is basically equivalent to the speed consistency judgment; but for multiple objects, the correlations of the different objects must be considered jointly. For example, object A and object B at a certain moment are both associated with object A at the next moment. A single match may lie within the correlation threshold and show no anomaly, but after considering the global correlation, object B has no association at the next moment; in this case, the above association should be considered unreliable, which means the timing correlation does not meet the consistency condition.
  • when the state information of the point cloud cluster and the state information of the target object do not meet the consistency condition: if the position of the target object was detected incorrectly, what can actually be done is to replace the target object with the point cloud (for example, use the state information of the point cloud cluster as the state information of the target object), and use the speed of the target object as a prior for predicting these point clouds. If the speed of the target object was detected incorrectly, the speed of the target object can be treated as 0; in this way, the degraded point cloud processing method can be used, that is, the braking buffer distance is defined and the car is followed with a conservative strategy, ensuring safety while preserving the user experience.
  • a detection error of the above-mentioned target speed is handled in the same way, that is, treated as if the speed were zero. For all detection errors of position, speed, and orientation, the data interval in which these parameters may lie is defined, and it is then considered whether any state in this possible interval would cause potential collision hazards or planning difficulties for the movable platform; if there are no potential dangers or difficulties, the fault can be left unhandled. The weight with which these parameters participate in the motion control of the movable platform is then adjusted.
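  • a minimal sketch of this fallback handling follows; the Target type and its field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Target:
    position: tuple
    speed: float
    position_ok: bool   # did the position pass the consistency check?
    speed_ok: bool      # did the speed pass the consistency check?

def degrade(target: Target, cluster_position: tuple) -> Target:
    """Fallback described above: on a position error, substitute the point
    cloud cluster's position; on a speed error, treat the speed as zero and
    fall back to the conservative buffered-braking strategy."""
    pos = cluster_position if not target.position_ok else target.position
    spd = 0.0 if not target.speed_ok else target.speed
    return Target(pos, spd, True, True)

# Example: the fused speed failed its check, the position passed.
print(degrade(Target((40.0, 0.0), 12.0, True, False), (39.5, 0.0)))
# -> Target(position=(40.0, 0.0), speed=0.0, position_ok=True, speed_ok=True)
```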
  • obstacles such as lane lines and static guardrails can be used to evaluate whether other vehicles can affect the driving of the movable platform (such as an autonomous vehicle). For example, vehicles on the opposite side of a guardrail can be ignored, and the impact of vehicles three lanes away can be considered small.
  • it should be noted that the system-level fault detection module (for example, the module used in the above-mentioned solutions of this application to determine whether the consistency condition is met, or to determine the above-mentioned probability) does not necessarily run as an independent module. It can also reside inside a certain functional module of the movable platform while still performing system-level fault diagnosis and detection, for example inside the fusion module (for example, the module used to obtain the above-mentioned fusion data), where it simultaneously accesses the original data stream or other types of perception information to judge consistency at the system level.
  • whether fault detection is at the system level (for example, global scope) or at the module level (for example, a single module) depends mainly on its input and is not strongly related to the functional realization of the module itself; the system level considers how to judge whether the consistency condition is met.
  • in addition, although the embodiments use original laser or millimeter wave radar data as the reference information for determining whether the consistency condition is met, this does not mean that image information cannot be used as reference information.
  • when the system has functional degradation (laser or millimeter wave radar failure), the original image information can be used as the most reliable of all information.
  • for example, the object state information from the perception algorithm can be back-projected onto the image, and consistency on the image can be judged to diagnose the fault type of a detection error in the system; it can also be judged together with the data obtained by laser, millimeter wave, and other sensors, using a majority principle to determine the source of the fault. Therefore, in principle, the information obtained by TOF, ultrasound, etc. can also be used as potential raw sensing data for determining whether the consistency condition is met.
  • in summary, the system-level fault detection module uses high-reliability original sensor data (such as laser data) and sensing algorithm processing results (such as the above-mentioned fusion data) to compare with each other, so as to detect conflicts between the sensing algorithm results and the original sensor data. If problems with the original sensor can be ruled out at a higher confidence level, the inconsistency can be used to evaluate the type of failure occurring in the result of the perception algorithm, such as false detection, missed detection, parameter information detection error, correlation matching error, etc. At the same time, the potential impact of these failure types is evaluated according to specific system requirements, so as to serve as a reference for subsequent decision-making.
  • although this cannot by itself reduce the failure rate, it can effectively convert the failure rate of the original algorithm framework from an unknown state to a mostly known state, which provides a basis for subsequent post-processing algorithms to detect the type of failure and to further reduce the failure rate through post-processing.
  • FIG. 6 is a schematic structural diagram of a data processing device provided by an embodiment of this application.
  • the data processing device 600 of this embodiment may include: multiple sensors 601 and a processor 602.
• The processor 602 is configured to: acquire target sensor data and fusion data from the plurality of sensors 601, where the fusion data is obtained by fusing the data of the plurality of sensors 601, the sensors are used to collect data on the environment in which the movable platform is located, the fusion data includes the state information of a target detected in the environment, and the target sensor data includes point cloud data; perform road-surface object point cloud clustering on the point cloud data to obtain a point cloud cluster, and determine the state information of the point cloud cluster; judge whether the state information of the point cloud cluster and the state information of the target meet the consistency condition; and, if not, determine, according to the observable range of the sensors 601 in the environment in which the movable platform is located, the probability of a false detection of the target's state information, the probability being used to indicate whether the movable platform performs an obstacle avoidance operation. A skeleton of this flow is sketched below.
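Read as a whole, the processing flow might be organized as in the following skeleton. Every callable passed in is a hypothetical stand-in for the corresponding module described above, not a definitive implementation:

```python
def process_frame(lidar_points, fused_targets, cluster_road_objects,
                  estimate_state, is_consistent, false_detection_probability):
    """One frame of the flow: cluster the point cloud, compare each cluster's
    state against the fused targets, and score inconsistent targets."""
    flagged = []
    clusters = cluster_road_objects(lidar_points)        # road-surface object clusters
    cluster_states = [estimate_state(c) for c in clusters]
    for target in fused_targets:
        # A target passes if at least one cluster's state agrees with it.
        supported = any(is_consistent(s, target["state"]) for s in cluster_states)
        if not supported:
            # Consistency condition violated: estimate how likely this is a
            # false detection, given the sensors' observable range.
            flagged.append((target, false_detection_probability(target)))
    return flagged  # (target, probability) pairs for the obstacle-avoidance logic
```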
• Optionally, the target sensor includes a laser sensor, and the plurality of sensors 601 includes a laser sensor.
• The processor 602 is further configured to classify environments into multiple environment categories according to the observable range of the sensors 601 in the environment. When determining the probability of a false detection of the target's state information according to that observable range, the processor 602 is specifically configured to determine the probability according to the environment category to which the current environment belongs.
• The processor 602 is specifically configured to determine the probability information that the environment in which the movable platform is located belongs to each environment category, and to derive the false-detection probability from it; a sketch follows.
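One way to read the two items above together is as a mixture over environment categories: the platform maintains the probability that its surroundings belong to each category, and each category carries its own false-detection rate. A minimal sketch, with assumed category names and rates:

```python
# Illustrative per-category false-detection rates (assumed numbers), indexed
# by how much of the scene the sensors can actually observe.
CATEGORY_RATES = {"open": 0.01, "semi_occluded": 0.05, "cluttered": 0.20}

def false_detection_probability(category_posteriors):
    """Total false-detection probability: sum over environment categories of
    P(environment = category) * P(false detection | category)."""
    return sum(p * CATEGORY_RATES[cat] for cat, p in category_posteriors.items())

# e.g. the platform is 70% sure it is in an open area, 30% semi-occluded:
prob = false_detection_probability({"open": 0.7, "semi_occluded": 0.3})
```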
• The processor 602 is further configured to control the movable platform to perform an obstacle avoidance operation according to the probability.
• The processor 602 is specifically configured to control the movable platform to decelerate and/or steer; for example, the movable platform is controlled to perform a decelerating movement over the movement track corresponding to the distance difference.
• The processor 602 is specifically configured to control the movable platform to perform a decelerating movement with a second acceleration over the movement track corresponding to the distance difference, where the absolute value of the second acceleration is smaller than the absolute value of the first acceleration; a braking sketch follows.
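A braking profile consistent with this description could look like the sketch below. The threshold, the two acceleration magnitudes, and the assumption that a low false-detection probability (the target is probably real) selects the stronger first acceleration are all illustrative choices:

```python
def plan_deceleration(speed_mps, distance_gap_m, false_detection_prob,
                      p_threshold=0.5, a_first=-6.0, a_second=-2.5):
    """Pick a deceleration for the movement track covering the distance gap."""
    # Acceleration that would stop the platform exactly within the gap:
    # 0 = v^2 + 2*a*d  =>  a = -v^2 / (2*d)
    a_needed = -speed_mps ** 2 / (2.0 * max(distance_gap_m, 1e-6))
    # Low false-detection probability -> the obstacle is probably real, so use
    # the first (stronger) acceleration; otherwise use the gentler second
    # acceleration, whose absolute value is smaller.
    a_profile = a_first if false_detection_prob < p_threshold else a_second
    # Brake harder than the chosen profile only if stopping in time demands it.
    return min(a_profile, a_needed)
```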
• The state information includes any of the following parameter information: object attribute, position, orientation, speed, and acceleration. The consistency condition includes at least one of the following: the parameter information of the target corresponding to the point cloud cluster is consistent with the parameter information of the point cloud cluster (see the sketch below).
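Under this reading, the consistency condition reduces to a per-parameter comparison between the two state estimates. The tolerances below are assumed values for illustration:

```python
import numpy as np

# Illustrative per-parameter tolerances (assumed, not from this application).
TOLERANCES = {"position": 1.0, "orientation": 0.2, "speed": 1.0, "acceleration": 1.5}

def meets_consistency_condition(cluster_state, target_state):
    """The condition holds only if every parameter reported by both the point
    cloud cluster and the fused target agrees within tolerance. A categorical
    object attribute (e.g. class label) would be compared by equality instead."""
    for key, tol in TOLERANCES.items():
        if key in cluster_state and key in target_state:
            a = np.atleast_1d(np.asarray(cluster_state[key], dtype=float))
            b = np.atleast_1d(np.asarray(target_state[key], dtype=float))
            if np.linalg.norm(a - b) > tol:
                return False
    return True
```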
• Optionally, the target sensor data further includes image data, the target sensor further includes an image sensor, and the plurality of sensors 601 further includes an image sensor.
• The processor 602 is specifically configured to obtain the point cloud cluster by clustering laser point cloud points that conform neither to a plane nor to a target curved surface, where the target curved surface is a surface whose curvature is lower than a preset curvature; a clustering sketch follows.
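The clustering step can be sketched as follows: fit the dominant (road) plane, discard points within tolerance of it, and group the remaining points into clusters. The least-squares plane fit, the use of sklearn's DBSCAN, and all parameter values are illustrative assumptions; a full implementation would also reject points on low-curvature curved surfaces rather than only on a plane:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumed dependency; any Euclidean clustering works

def cluster_non_surface_points(points, plane_tol=0.15, eps=0.5, min_pts=5):
    """Keep laser points that do NOT lie on the fitted road plane, then group
    the remainder into point cloud clusters."""
    pts = np.asarray(points, dtype=float)
    # Fit the dominant plane z = a*x + b*y + c by least squares
    # (adequate for a near-flat road surface).
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residual = np.abs(A @ coeffs - pts[:, 2])
    obstacles = pts[residual > plane_tol]  # keep points off the fitted surface
    if len(obstacles) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(obstacles)
    return [obstacles[labels == k] for k in set(labels) if k != -1]
```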
• The processor 602 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• Optionally, the target sensor data further includes radar data, the target sensor further includes a radar, and the plurality of sensors 601 further includes a radar; the radar is, for example, a millimeter-wave radar.
• The processor 602 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• Optionally, the target sensor data further includes radar data, the target sensor further includes a radar, and the plurality of sensors 601 further includes a radar; the radar is, for example, a millimeter-wave radar.
• The processor 602 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• The processor 602 is further configured to use the parameter information of the point cloud cluster corresponding to the target as the parameter information of the target.
• The data processing device 600 of this embodiment may further include a memory (not shown in the figure) for storing program codes; when the program codes are executed, the data processing device 600 can implement the above-described technical solutions.
• The data processing device of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments; the implementation principles and technical effects are similar and are not repeated here.
  • FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
• The movable platform 700 of this embodiment may include a plurality of sensors 701 and a processor 702.
• The processor 702 is configured to: acquire target sensor data and fusion data from the plurality of sensors 701, where the fusion data is obtained by fusing the data of the plurality of sensors 701, the sensors are used to collect data on the environment in which the movable platform 700 is located, the fusion data includes the state information of a target detected in the environment, and the target sensor data includes point cloud data; perform road-surface object point cloud clustering on the point cloud data to obtain a point cloud cluster, and determine the state information of the point cloud cluster; judge whether the state information of the point cloud cluster and the state information of the target meet the consistency condition; and, if not, determine, according to the observable range of the sensors 701 in the environment in which the movable platform 700 is located, the probability of a false detection of the target's state information, the probability being used to indicate whether the movable platform 700 performs an obstacle avoidance operation.
• Optionally, the target sensor includes a laser sensor, and the plurality of sensors 701 includes a laser sensor.
• The processor 702 is further configured to classify environments into multiple environment categories according to the observable range of the sensors 701 in the environment. When determining the probability of a false detection of the target's state information according to that observable range, the processor 702 is specifically configured to determine the probability according to the environment category to which the current environment belongs.
• The processor 702 is specifically configured to determine the probability information that the environment in which the movable platform 700 is located belongs to each environment category, and to derive the false-detection probability from it.
• The processor 702 is further configured to control the movable platform 700 to perform an obstacle avoidance operation according to the probability.
• The processor 702 is specifically configured to control the movable platform 700 to decelerate and/or steer.
• The processor 702 is specifically configured to control the movable platform 700 to perform a decelerating movement over the movement track corresponding to the distance difference.
• The processor 702 is specifically configured to control the movable platform 700 to perform a decelerating movement with a second acceleration over the movement track corresponding to the distance difference, where the absolute value of the second acceleration is smaller than the absolute value of the first acceleration.
• The state information includes any of the following parameter information: object attribute, position, orientation, speed, and acceleration. The consistency condition includes at least one of the following: the parameter information of the target corresponding to the point cloud cluster is consistent with the parameter information of the point cloud cluster.
• Optionally, the target sensor data further includes image data, the target sensor further includes an image sensor, and the plurality of sensors 701 further includes an image sensor.
• The processor 702 is specifically configured to obtain the point cloud cluster by clustering laser point cloud points that conform neither to a plane nor to a target curved surface, where the target curved surface is a surface whose curvature is lower than a preset curvature.
• The processor 702 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• Optionally, the target sensor data further includes radar data, the target sensor further includes a radar, and the plurality of sensors 701 further includes a radar; the radar is, for example, a millimeter-wave radar.
• The processor 702 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• Optionally, the target sensor data further includes radar data, the target sensor further includes a radar, and the plurality of sensors 701 further includes a radar; the radar is, for example, a millimeter-wave radar.
• The processor 702 is specifically configured to determine that the state information of the point cloud cluster and the state information of the target do not meet the consistency condition.
• The processor 702 is further configured to use the parameter information of the point cloud cluster corresponding to the target as the parameter information of the target.
• The movable platform 700 of this embodiment may further include a memory (not shown in the figure) for storing program codes; when the program codes are executed, the movable platform 700 can implement the above-described technical solutions.
• The movable platform of this embodiment can be used to implement the technical solutions of FIG. 3 and the corresponding method embodiments; the implementation principles and technical effects are similar and are not repeated here.
  • FIG. 8 is a schematic structural diagram of a movable platform provided by another embodiment of this application.
• The movable platform 800 of this embodiment may include a movable platform body 801 and a data processing device 802.
• The data processing device 802 is installed on the movable platform body 801.
• The data processing device 802 may be a device independent of the movable platform body 801.
• The data processing device 802 may adopt the structure of the device embodiment shown in FIG. 6 and, correspondingly, may execute the technical solutions of FIG. 3 and the corresponding method embodiments; the implementation principles and technical effects are similar and are not repeated here.
• A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by a program instructing the relevant hardware. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium capable of storing program codes, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

Data processing method and device, and movable platform. The method comprises: acquiring target sensor data and fusion data (S301); performing road-surface object point cloud clustering on point cloud data to obtain a point cloud cluster, and determining state information of the point cloud cluster (S302); judging whether the state information of the point cloud cluster and the state information of a target object in the fusion data satisfy a consistency condition (S303); and, if not, determining the probability of a false detection of the target object's state information according to the observable range of a sensor in the environment where a movable platform is located, the probability being used to indicate whether the movable platform performs an obstacle avoidance operation (S304). The consistency-condition check is thus more accurate, and the obtained probability corresponds more closely to objective reality, so that whether the movable platform performs an obstacle avoidance operation can be controlled more precisely, ensuring the safety of the movable platform during movement.
PCT/CN2019/108847 2019-09-29 2019-09-29 Data processing method and device, and movable platform WO2021056499A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/108847 WO2021056499A1 (fr) Data processing method and device, and movable platform
CN201980033428.7A CN112154455B (zh) Data processing method, device, and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108847 WO2021056499A1 (fr) Data processing method and device, and movable platform

Publications (1)

Publication Number Publication Date
WO2021056499A1 (fr) 2021-04-01

Family

ID=73891969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108847 WO2021056499A1 (fr) Data processing method and device, and movable platform

Country Status (2)

Country Link
CN (1) CN112154455B (fr)
WO (1) WO2021056499A1 (fr)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115963851A (zh) * 2021-10-13 2023-04-14 北京三快在线科技有限公司 一种无人机的定位方法及装置
TWI805077B (zh) * 2021-11-16 2023-06-11 國立陽明交通大學 路徑規劃方法及其系統
WO2023123325A1 (fr) * 2021-12-31 2023-07-06 华为技术有限公司 Procédé et dispositif d'estimation d'état
CN115112360B (zh) * 2022-06-22 2024-04-16 南京智慧水运科技有限公司 一种基于信度更新与融合的船舵故障诊断方法
CN115600158B (zh) * 2022-12-08 2023-04-18 奥特贝睿(天津)科技有限公司 一种无人车多传感器融合方法


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574376A (zh) * 2014-12-24 2015-04-29 重庆大学 拥挤交通中基于双目视觉和激光雷达联合校验的防撞方法
US20170076616A1 (en) * 2015-09-11 2017-03-16 Qualcomm Incorporated Unmanned aerial vehicle obstacle detection and avoidance
US20190186918A1 (en) * 2017-12-20 2019-06-20 National Chung Shan Institute Of Science And Technology Uav navigation obstacle avoidance system and method thereof
CN108917752A (zh) * 2018-03-30 2018-11-30 深圳清创新科技有限公司 无人船导航方法、装置、计算机设备和存储介质
CN109444916A (zh) * 2018-10-17 2019-03-08 上海蔚来汽车有限公司 一种无人驾驶可行驶区域确定装置及方法
CN109490890A (zh) * 2018-11-29 2019-03-19 重庆邮电大学 一种面向智能车的毫米波雷达与单目相机信息融合方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANG XIN, CHEN XIAODONG, ZHANG JIACHEN, WANG YI, CAI HUAIYU: "An object detection and tracking algorithm based on LiDAR and camera information fusion", OPTO-ELECTRONIC ENGINEERING, vol. 46, no. 7, 31 July 2019 (2019-07-31), pages 91 - 101, XP055795824, ISSN: 1003-501X, DOI: 10.12086/oee.2019.180420 *
HE YONG,JIANG HAO,FANG HUI ,WANG YU ,LIU YUFEI: "Research progress of intelligent obstacle detection methods of vehicles and their application on agriculture", TRANSACTIONS OF THE CHINESE SOCIETY OF AGRICULTURAL ENGINEERING, vol. 34, no. 9, 8 May 2018 (2018-05-08), pages 21 - 32, XP055795818, ISSN: 1002-6819, DOI: 10.11975/j.issn.1002-6819.2018.09.003 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076922A (zh) * 2021-04-21 2021-07-06 北京经纬恒润科技股份有限公司 一种物体检测方法及装置
CN113076922B (zh) * 2021-04-21 2024-05-10 北京经纬恒润科技股份有限公司 一种物体检测方法及装置
CN113391270A (zh) * 2021-06-11 2021-09-14 森思泰克河北科技有限公司 多雷达点云融合的虚假目标抑制方法、装置及终端设备
CN113851003A (zh) * 2021-09-26 2021-12-28 上汽通用五菱汽车股份有限公司 车辆控制系统、车辆控制方法、车辆控制设备及存储介质
CN115267746A (zh) * 2022-06-13 2022-11-01 广州文远知行科技有限公司 激光雷达点云投影错误的定位方法及相关设备
CN114842455A (zh) * 2022-06-27 2022-08-02 小米汽车科技有限公司 障碍物检测方法、装置、设备、介质、芯片及车辆
CN114842455B (zh) * 2022-06-27 2022-09-09 小米汽车科技有限公司 障碍物检测方法、装置、设备、介质、芯片及车辆
CN116796210A (zh) * 2023-08-25 2023-09-22 山东莱恩光电科技股份有限公司 基于激光雷达的障碍物检测方法
CN116796210B (zh) * 2023-08-25 2023-11-28 山东莱恩光电科技股份有限公司 基于激光雷达的障碍物检测方法

Also Published As

Publication number Publication date
CN112154455A (zh) 2020-12-29
CN112154455B (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
WO2021056499A1 (fr) Procédé et dispositif de traitement de données, et plateforme mobile
US11836623B2 (en) Object detection and property determination for autonomous vehicles
CN110362077B (zh) 无人驾驶车辆紧急避险决策系统、方法及介质
US10310087B2 (en) Range-view LIDAR-based object detection
JP6800899B2 (ja) 視界に制限のある交差点への接近のためのリスクベースの運転者支援
RU2767955C1 (ru) Способы и системы для определения компьютером наличия динамических объектов
Cosgun et al. Towards full automated drive in urban environments: A demonstration in gomentum station, california
US11313976B2 (en) Host vehicle position estimation device
US11782158B2 (en) Multi-stage object heading estimation
US10553117B1 (en) System and method for determining lane occupancy of surrounding vehicles
EP3717324A1 (fr) Scénarios de gestion de fonctionnement de véhicule autonome
US10796571B2 (en) Method and device for detecting emergency vehicles in real time and planning driving routes to cope with situations to be expected to be occurred by the emergency vehicles
CN114442101B (zh) 基于成像毫米波雷达的车辆导航方法、装置、设备及介质
RU2750243C2 (ru) Способ и система для формирования траектории для беспилотного автомобиля (sdc)
US20220402492A1 (en) Method for Controlling Vehicle and Vehicle Control Device
US20220253065A1 (en) Information processing apparatus, information processing method, and information processing program
CN118235180A (zh) 预测可行驶车道的方法和装置
US11210941B2 (en) Systems and methods for mitigating anomalies in lane change detection
JP2021160425A (ja) 移動体制御装置、移動体制御方法、およびプログラム
US10845814B2 (en) Host vehicle position confidence degree calculation device
CN114084129A (zh) 一种基于融合的车辆自动驾驶控制方法及系统
Guo et al. Toward human-like lane following behavior in urban environment with a learning-based behavior-induction potential map
Uzer et al. A lidar-based dual-level virtual lanes construction and anticipation of specific road infrastructure events for autonomous driving
RU2788556C1 (ru) Способ управления транспортным средством и устройство управления транспортным средством
US12025752B2 (en) Systems and methods for detecting erroneous LIDAR data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19947252

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19947252

Country of ref document: EP

Kind code of ref document: A1